# Effective mass overshoot in single degree of freedom mechanical systems with
a particle damper
Martín Sánchez, Luis A. Pugnaloni (luis@iflysib.unlp.edu.ar)
Instituto de Física de Líquidos y Sistemas Biológicos (CONICET La Plata, UNLP), Calle 59 Nro 789, 1900 La Plata, Argentina.
Departamento de Ingeniería Mecánica, Facultad Regional La Plata, Universidad Tecnológica Nacional, 60 esq. 124 S/N, 1900 La Plata, Argentina.
###### Abstract
We study the response of a single degree of freedom mechanical system composed
of a primary mass, $M$, a linear spring, a viscous damper and a particle
damper. The particle damper consists of a prismatic enclosure of variable
height that contains spherical grains (total mass $m_{\mathrm{p}}$). Contrary
to what has been discussed in previous experimental and simulation studies,
we show that, for small containers, the system does not approach the fully
detuned mass limit monotonically. Rather, the system increases its
effective mass up to and above $M+m_{\mathrm{p}}$ before reaching this expected
limiting value (which is associated with the immobilization of the particles
due to a very restrictive container). Moreover, we show that a similar effect
appears in the tall container limit where the system reaches effective masses
below the expected asymptotic value $M$. We present a discussion on the origin
of these overshoot responses and the consequences for industrial applications.
###### keywords:
Particle dampers , Granular materials , Effective mass
††journal: Journal of Sound and Vibration
## 1 Introduction
Most mechanical systems, such as rotating machinery and aeronautic or
aerospace structures, achieve damping through viscoelastic materials or
viscous fluids. In general, viscoelastic materials and viscous fluids are very
effective at moderate temperatures (less than 250∘C), but the performance of
these is poor at low and high temperatures. Moreover, these materials degrade
over time and lose their effectiveness.
In recent years, particle dampers (PD) have been studied extensively for use
in harsh environments where other types of damping are not efficient. A PD is
an element that increases the structural damping by inserting dissipative
particles in a box attached to the primary system or by embedding grains
within holes in a vibrating structure [1]. The grains absorb the kinetic
energy of the primary system and convert it into heat through inelastic
collisions and friction between the particles and between the particles and
the walls of the box or hole. This results in a highly nonlinear mechanical
system. PD are effective over a wide frequency range [1]. Moreover, PD are
durable, inexpensive, easy to maintain and have great potential for vibration
and noise suppression for many applications (see e.g. [2] and [3]).
Parameters such as the size and shape of the particles, density, coefficient
of restitution, size and shape of the enclosure, and the type of excitation of
the primary system, among many other features, are important in damping
performance [4]. Thus, appropriate treatment of the PD in a given structure
requires careful analysis and design.
A PD uses a large number of small grains; therefore, its behavior is directly
related to the cooperative motion of the grains inside the cavity. The theoretical
models derived from single particle systems [5] are not applicable to predict
the performance of multi-particle systems. For more than 15 years, particle
dynamics simulations have been used as a powerful tool for investigating the
behavior of these types of granular systems [6, 7, 8, 9].
In previous works, particle dampers composed of containers of various sizes
have been considered. In all of these works, the resonant frequency of the
Single-Degree-of-Freedom (SDoF) system falls with respect to the undamped
system. This is generally attributed to the addition of the mass of the
particles, $m_{\mathrm{p}}$. At very low excitation amplitudes, the system
behaves as if the entire mass of the particles was attached to the primary
mass, $M$, of the container (i.e. $M+m_{\mathrm{p}}$). If the excitation level
is increased, the resonant frequency gradually increases (and the damping
performance increases). Eventually, the resonant frequency tends to the
resonant frequency of the undamped system. This overall behavior has been
discussed in various papers [9, 10].
Yang [11] has studied, experimentally, particle dampers under different
conditions of excitation, different frequencies and variable gap size. The gap
size is the free space left between the granular bed and the enclosure ceiling
when the system is at rest. He has found that, under some conditions, the
system may display effective masses above $M+m_{\mathrm{p}}$ or below $M$.
However, a careful analysis of this phenomenon has not been carried out yet.
In this paper, we discuss results, obtained through Discrete Element Method
(DEM) simulations, on the resonant frequency shift of SDoF mechanical
systems with granular damping. We show that, contrary to what has been
discussed previously, for small containers, the system does not approach the
fully detuned mass limit monotonically. Rather, the system increases its
effective mass up to and above $M+m_{\mathrm{p}}$ before reaching the expected
limiting value. Moreover, there is a similar effect in the tall enclosure
limit, where the system reaches effective masses below the expected asymptotic
value $M$.
## 2 Discrete Element Method
In order to simulate the motion of the particles in the enclosure of a PD we
use a DEM. This scheme, first used by Cundall and Strack [12], is widely used
for numerical simulations of granular media [13]. We implement a velocity
Verlet algorithm [14] to update the positions (orientations) and velocities of
the particles. Orientations are represented through quaternions to prevent
numerical singularities [15].
We consider spherical soft particles. If $R_{i}$ and $R_{j}$ are the radii of
two colliding particles, $\alpha=R_{i}+R_{j}-d_{ij}$ is the normal
displacement or virtual overlap between the spheres, where $d_{ij}$ is the
distance between the centers of the two spheres. Under these conditions, the
interaction force $F_{\mathrm{n}}$ in the normal direction is based on the
Hertz–Kuwabara–Kono model [16, 17].
$F_{\mathrm{n}}=-k_{\mathrm{n}}\alpha^{3/2}-\gamma_{\mathrm{n}}\upsilon_{\mathrm{n}}\sqrt{\alpha}$
(1)
where $k_{\mathrm{n}}=\frac{2}{3}E\sqrt{\frac{R}{2}}(1-\upsilon^{2})^{-1}$ is
the normal stiffness (with $E$ the Young’s modulus, $\upsilon$ the Poisson’s
ratio and $R^{-1}=R_{i}^{-1}+R_{j}^{-1}$), $\gamma_{\mathrm{n}}$ the normal
damping coefficient of the viscoelastic contact, and $\upsilon_{\mathrm{n}}$
the relative normal velocity.
On the other hand, the tangential force $F_{\mathrm{s}}$ is based on Coulomb’s
law of friction [16, 18]. We used a simplified model in which the friction
force takes the minimum value between the shear damping force and the dynamic
friction.
$F_{\mathrm{s}}=-\min\left(\left|\gamma_{\mathrm{s}}\upsilon_{\mathrm{s}}\sqrt{\alpha}\right|,\left|\mu_{\mathrm{d}}F_{\mathrm{n}}\right|\right)\rm{sgn}\left(\upsilon_{\mathrm{s}}\right)$
(2)
where $\gamma_{\mathrm{s}}$ is the shear damping coefficient,
$\upsilon_{\mathrm{s}}$ the relative tangential velocity between the two
spheres in contact and $\mu_{\mathrm{d}}$ the dynamic friction coefficient.
The sign function indicates that the friction force always opposes the
direction of the relative tangential velocity.
The particles are enclosed in a prismatic container (the box or enclosure)
built up of six flat walls with the same material properties as the particles
defined through the parameters in Eqs. (1) and (2).
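The following minimal sketch evaluates Eqs. (1) and (2) for a single contact, using the material parameters of Table 1. The helper function, its argument names and the test values are illustrative assumptions; this is not the implementation used for the simulations, and the force signs simply follow Eqs. (1) and (2) as written.

```python
import numpy as np

# Material parameters from Table 1 (particles and walls)
E     = 2.03e11     # Young's modulus [N/m^2]
nu    = 0.28        # Poisson's ratio
gam_n = 3.660e3     # normal damping coefficient [kg s^-1 m^-1/2]
gam_s = 1.098e4     # shear damping coefficient  [kg s^-1 m^-1/2]
mu_d  = 0.3         # dynamic friction coefficient

def contact_forces(Ri, Rj, dij, vn, vs):
    """Normal and tangential contact forces for two overlapping spheres.

    Ri, Rj : particle radii [m]; dij : centre-to-centre distance [m]
    vn, vs : relative normal and tangential velocities [m/s]
    Returns (Fn, Fs); both are zero if the spheres do not overlap.
    """
    alpha = Ri + Rj - dij                      # virtual overlap
    if alpha <= 0.0:
        return 0.0, 0.0
    R = 1.0 / (1.0 / Ri + 1.0 / Rj)            # effective radius
    k_n = (2.0 / 3.0) * E * np.sqrt(R / 2.0) / (1.0 - nu**2)
    # Hertz-Kuwabara-Kono normal force, Eq. (1)
    Fn = -k_n * alpha**1.5 - gam_n * vn * np.sqrt(alpha)
    # Coulomb-capped tangential force, Eq. (2)
    Fs = -min(abs(gam_s * vs * np.sqrt(alpha)), abs(mu_d * Fn)) * np.sign(vs)
    return Fn, Fs

# example contact between two 3 mm particles with a 50 micron overlap
print(contact_forces(0.003, 0.003, 0.00595, vn=-0.1, vs=0.05))
```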
## 3 The SDoF model
Figure 1 shows the model of our SDoF system with PD which is assumed to move
only along the direction of the vertical $z$-axis. The primary system consists
of a mass $M=2.37$ kg, a spring $K=21500$ N m$^{-1}$ and a viscous damper with
damping constant $C$. We have used two different values for the viscous damping,
$C=7.6$ and $26.3$ N s m$^{-1}$. The undamped ($C=0$ and no PD) natural frequency of
the primary system is $f_{0}=15.16$ Hz.
Figure 1: (a) Model of the SDoF system with a particle damper. (b) Snapshots
of the particles in the enclosure during a typical simulation.
The PD is modeled as $N=250$ spherical grains in a prismatic enclosure of
lateral side $L_{x}=L_{y}=0.03675$ m and different heights $L_{z}$. The
material properties of the particles (and walls) and the simulation parameters
are listed in Table 1. The gravitational field $g=9.8$ m s$^{-2}$ is considered in
the negative vertical direction. Although the SDoF system can only move in the
vertical direction, the particles move freely inside the enclosure.
The system is excited by the harmonic displacement of the base to which the
spring and viscous damper are attached (see Fig. 1). Let $u(t)$ and $z(t)$ be
the displacement of the base and the primary mass, respectively. Then, the
equation of motion for the system is given by
$M\ddot{z}(t)+C\dot{z}(t)+Kz(t)=C\dot{u}(t)+Ku(t)+F_{\mathrm{part}}(t),\qquad u(t)=U\cos(\omega t),$
(3)
where $F_{\mathrm{part}}(t)$ is the $z$-component of the force resulting from
all the interactions (normal and tangential) of the enclosure walls with the
particles. The amplitude, $U$, and the angular frequency, $\omega$, of the
harmonic vibrating base, are control parameters.
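A hedged sketch of how Eq. (3) can be time-stepped is given below. In the coupled simulation $F_{\mathrm{part}}(t)$ is the net vertical wall force computed by the DEM; here it is stubbed to zero so the snippet runs stand-alone (i.e. it reproduces the system without PD), the integrator is a simple semi-implicit Euler scheme chosen for brevity, and the run length is shorter than the 13.12 s of Table 1.

```python
import numpy as np

M, K, C = 2.37, 21500.0, 7.6            # kg, N/m, N s/m (Section 3)
U, f    = 0.0045, 14.5                  # base amplitude [m], drive frequency [Hz]
w, dt, T = 2.0 * np.pi * f, 2e-5, 4.0   # assumed step and run length for this sketch

def F_part(t):                          # placeholder for the granular wall force
    return 0.0

z, zdot, t, zs = 0.0, 0.0, 0.0, []
while t < T:
    u, udot = U * np.cos(w * t), -U * w * np.sin(w * t)
    zddot = (C * udot + K * u + F_part(t) - C * zdot - K * z) / M   # Eq. (3)
    zdot += zddot * dt                  # semi-implicit Euler step
    z += zdot * dt
    zs.append(z)
    t += dt

tail = np.array(zs[-len(zs) // 10:])    # last 10% of the run (cf. Section 4)
print("response amplitude ~", 0.5 * (tail.max() - tail.min()), "m")
```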
Property | Value
---|---
Young’s modulus $E$ | $2.03\times 10^{11}$ N m$^{-2}$
Density | 8030 kg m$^{-3}$
Poisson’s ratio $\upsilon$ | 0.28
Friction coefficient $\mu_{\mathrm{d}}$ | 0.3
Normal damping coefficient $\gamma_{\mathrm{n}}$ | $3.660\times 10^{3}$ kg s$^{-1}$ m$^{-1/2}$
Shear damping coefficient $\gamma_{\mathrm{s}}$ | $1.098\times 10^{4}$ kg s$^{-1}$ m$^{-1/2}$
Excitation amplitude $U$ | 0.0045 m
Time step $\delta t$ | $8.75\times 10^{-8}$ s
Simulation time | 13.12 s
Particle radius | 0.003 m
Total particle mass $m_{\mathrm{p}}$ | 0.227 kg
Table 1: Material properties of the particles and simulation parameters.
We have obtained the frequency response function (FRF) for different enclosure
heights $L_{z}$. The initial condition for each simulation consists of a
simple deposition of the particles inside the enclosure, starting from a
dilute random arrangement, before applying the base excitation.
## 4 Data analysis
As shown in Table 1, we have simulated the vibration of the system for $13.12$
s. After an initial transient, the system reaches a steady state. This steady
state can display either regular or chaotic behavior, depending on the
excitation frequency, box height, etc. In all cases, the final 10% of the time
of the simulations has been used for the data analysis, which has proved to be
sufficient to ensure that the steady state has been reached. We have studied
excitation frequencies in the range $0.5$–$30.0$ Hz.
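A minimal sketch of the amplitude extraction just described: only the final 10% of each run is kept, and the response amplitude is taken as half the peak-to-peak displacement over that window. The function name and the synthetic test signal are illustrative assumptions.

```python
import numpy as np

def steady_state_amplitude(t, z, fraction=0.10):
    """Amplitude from the last `fraction` of a displacement time series."""
    n_tail = max(1, int(fraction * len(z)))
    tail = z[-n_tail:]
    return 0.5 * (tail.max() - tail.min())

# synthetic check: a decaying transient on top of a harmonic steady state
t = np.linspace(0.0, 13.12, 100_000)
z = 0.05 * np.cos(2 * np.pi * 15.0 * t) + 0.02 * np.exp(-t) * np.cos(40.0 * t)
print(steady_state_amplitude(t, z))   # ~0.05, the steady-state amplitude
```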
We carry out a simple evaluation of the effective mass and effective damping
of the PD by fitting the FRF to that of a SDoF system with no particles in the
enclosure. Other approximate methods such as the power flow method used by
Yang [11] present numerical instabilities and can yield negative effective
masses (see e.g. [19]).
Figure 2: Examples of the FRF of the SDoF system with PD for different
$L_{z}$. Green squares: $L_{z}=0.057$ m, blue circles: $L_{z}=0.1225$ m and
red diamonds: $L_{z}=0.372$ m. These results correspond to the simulations
with $C=7.6$ N s m$^{-1}$. The solid lines are fits of the FRF with an equivalent
mass–spring–dashpot model without PD [see Eq. (4)].
The amplitude of the response $X$ of a system with no PD is given by
$X=U\left[\frac{K^{2}+(C_{\mathrm{eff}}\omega)^{2}}{(K-M_{\mathrm{eff}}\omega^{2})^{2}+(C_{\mathrm{eff}}\omega)^{2}}\right]^{1/2}$
(4)
We carry out a least-squares curve fitting of the DEM data with Eq. (4). The
values of $K$ and $U$ are fixed to the corresponding values in our simulations
and $C_{\mathrm{eff}}$ and $M_{\mathrm{eff}}$ are fitting parameters.
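A hedged sketch of such a fit is shown below. The text does not name a fitting tool, so `scipy.optimize.curve_fit` is an assumed choice here, and the "data" are generated from Eq. (4) itself plus noise purely so that the example runs; with real FRF data the synthetic block would be replaced by the measured $(f, X)$ pairs.

```python
import numpy as np
from scipy.optimize import curve_fit

K, U = 21500.0, 0.0045                     # fixed to the simulation values

def frf_amplitude(f, M_eff, C_eff):
    """Response amplitude of Eq. (4) at excitation frequency f [Hz]."""
    w = 2.0 * np.pi * f
    num = K**2 + (C_eff * w) ** 2
    den = (K - M_eff * w**2) ** 2 + (C_eff * w) ** 2
    return U * np.sqrt(num / den)

rng = np.random.default_rng(0)
f_data = np.linspace(0.5, 30.0, 60)        # excitation sweep [Hz]
X_data = frf_amplitude(f_data, 2.6, 40.0) * (1 + 0.02 * rng.standard_normal(60))

(M_eff, C_eff), cov = curve_fit(frf_amplitude, f_data, X_data, p0=[2.37, 7.6])
print(M_eff, C_eff, np.sqrt(np.diag(cov)))  # best-fit values and asymptotic errors
```

The square roots of the diagonal of the covariance matrix are the asymptotic standard errors used as error bars in Figs. 4 and 5.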
It is important to mention that for certain enclosure heights, the FRF is not
a smooth function as the one corresponding to Eq. (4). However, the overall
shape is well described by this fit. For small and large enclosures, the shape
of the FRF of the PD is rather smooth and the fits are warranted. Examples of
the quality of the fits are shown in Fig. 2 for three different $L_{z}$. An
improvement of the fits for the FRF at intermediate $L_{z}$ for which some
fluctuations are present could be achieved by using models with extra degrees
of freedom (e.g. a tuned mass damper). However, the characteristic
frequencies of the PD are not well separated and the equivalent
mass–spring–dashpot model constitutes a good first-order approximation.
## 5 Results
Figure 3: Frequency response function for the SDoF system with PD. Each curve
corresponds to a different box height $L_{z}$. Red squares: $L_{z}=0.057$ m,
green circles: $L_{z}=0.1225$ m and blue diamonds: $L_{z}=0.372$ m. The black
continuous line corresponds to the SDoF system without particles in the
enclosure. The black dashed line corresponds to an equivalent system with an
added mass $m_{\mathrm{p}}$ equal to the mass of the particles. These results
correspond to the simulations with $C=7.6$ N s m$^{-1}$. The green circles correspond
to the value of $L_{z}$ for which the maximum effective damping is obtained.
Examples of the FRF of the PD can be seen in Fig. 3 for a few box sizes
$L_{z}$. It is important to note that, in general, the gap size and not
$L_{z}$ controls the dynamics. In this work all simulations are carried out
with the same number of particles and particle size. Hence, the gap size is
directly obtained as $L_{z}-h$; with $h=0.039$ m the approximate height of the
granular bed at rest. For a study on the dependence of the dynamics on the
number of particles and particle size see Refs. [7, 20].
The general trends observed in Fig. 3 are consistent with previous
experimental and simulation works (see e.g. [7], [9] and [10]). For small gaps
($L_{z}<0.087$ m) the FRF tends to the response of a system without particles
but with an added mass equivalent to the mass $m_{\mathrm{p}}$ of the
particles. For very tall boxes ($L_{z}>0.222$ m), the response follows the one
expected for an empty container. For intermediate values of $L_{z}$,
significant granular damping is obtained with damped resonant frequencies
intermediate between the two asymptotic cases. However, these intermediate
values of $L_{z}$ yield more complex FRFs with the presence of more than one
peak [10].
### 5.1 Effective damping and effective mass
A simple evaluation of the effective damping provided by the particles can be
done by fitting the FRF to that of a SDoF system including only a viscous damper,
as discussed in Section 4. The effective damping, $C_{\mathrm{eff}}$, is plotted
as a function of $L_{z}$ in Fig. 4. A clear optimum value of $L_{z}$ is
predicted as in several previous works [7, 20]. Notice that an increase of the
viscous damping $C$ leads to a weaker influence of the PD on the
effective damping $C_{\mathrm{eff}}$. This is due to the fact that the damping
due to the dashpot of the SDoF system reduces the transfer of energy to the
particles inside the enclosure.
Figure 4: Effective damping $C_{\mathrm{eff}}$ relative to the viscous damping
$C$ as a function of $L_{z}$. The blue line corresponds to the SDoF system
with $C=7.6$ N s m$^{-1}$ and the green line to the system with $C=26.365$ N s m$^{-1}$. The
error bars correspond to the asymptotic standard error of the
$C_{\mathrm{eff}}$ best fit.
The two limiting cases (small $L_{z}$ and large $L_{z}$) have been explained
[9, 11, 21] in terms of the proportion of the time that particles spend in
contact with the enclosure. For small $L_{z}$, particles are essentially fixed
in their positions since the constraint imposed by the walls impedes their
relative motion. Therefore, the particles behave as a simple mass added to the
primary mass $M$ of the system and provide no extra damping (in this limit the
effective damping equals the viscous damping). Conversely, if the enclosure
leaves sufficient room for the motion of the particles, the granular bed will
reach a gas-like state (at the large excitation levels achieved near
resonance) in which most of the time particles are in the air and only
occasionally collide against the floor and ceiling of the box. In this case,
the particles will barely influence the motion of the primary system and
$C_{\mathrm{eff}}$ falls back to the baseline, $C$, imposed by the viscous
damper.
In Fig. 5 we plot the effective mass, $M_{\mathrm{eff}}$, obtained from the
fits. The expected limiting masses $M$ and $M+m_{\mathrm{p}}$ are shown as
horizontal lines for reference. Our data surveys a larger number of box
heights in comparison with previous studies. We can see that, in contrast with
the suggestion of previous studies, the two limiting cases are not approached
monotonically. The system reaches effective masses above
$M+m_{\mathrm{p}}$ as $L_{z}$ is decreased and finally falls towards the limit
value. Similarly, for large $L_{z}$, an increase of the box height leads to
effective masses below $M$ before the limit value is approached. It is worth
mentioning that a similar behavior has been observed in some experiments [11].
The author reports that the resonant frequency $f_{\mathrm{res}}$ (which is simply related to our
effective mass by $M_{\mathrm{eff}}=K/(2\pi f_{\mathrm{res}})^{2}$) can reach values
above that expected for an empty enclosure.
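The relation between effective mass and resonant frequency quoted above can be evaluated directly; the short snippet below is only a numerical illustration of the two limiting masses of Section 3.

```python
import numpy as np

K, M, m_p = 21500.0, 2.37, 0.227    # spring stiffness, primary and particle masses

def f_res(M_eff):                   # resonant frequency of the equivalent model
    return np.sqrt(K / M_eff) / (2.0 * np.pi)

def M_eff(f):                       # inverse relation used in the text
    return K / (2.0 * np.pi * f) ** 2

print(f_res(M), f_res(M + m_p))     # ~15.16 Hz and ~14.48 Hz
print(M_eff(15.16))                 # recovers ~2.37 kg
```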
Figure 5: Effective mass as a function of $L_{z}$. The blue line corresponds
to the SDoF system with $C=7.6$ N s m$^{-1}$ and the green line to the system with
$C=26.365$ Nsm-1. The error bars correspond to the asymptotic standard error
of the $M_{\mathrm{eff}}$ best fit.
The presence of these overshoots when approaching the two limiting cases (zero
gap size and infinite gap size) has been overlooked or received little
attention in the past and is somewhat counterintuitive. In general, studies
of PD are focused on the region of gap sizes where the maximum effective
damping is observed. There, the effective mass is, as expected, intermediate
between $M$ and $M+m_{\mathrm{p}}$. However, design constraints may require a
PD to work at gap sizes off the optimum damping and in the region of effective
mass overshoot. In what follows, we discuss the origin of such behavior in the
response of the PD by considering the internal motion of the granular bed.
### 5.2 Internal motion of the granular bed
In Fig. 6 we plot the trajectory of the floor and ceiling of the enclosure
over a few periods of excitation in the steady state regime. We have chosen
frequencies close to the resonant frequency for each $L_{z}$ considered. This
is because the effective values of mass and damping obtained by fitting are
determined to a large extent by the frequencies with larger amplitude of
motion. In Fig. 6, we have indicated the position of the granular bed inside
the enclosure by a band limited by the $z$-coordinates of the uppermost and
lowermost particle at any given time. Notice that such a representation gives
an indication of the density of the granular bed and the approximate time
of impact with the box top and bottom walls. However, if the granular bed is
somewhat dilute, the time of impact of the uppermost or lowermost particle
does not coincide with the time at which the most substantial momentum
exchange happens between the grains and the enclosure. For this reason we also
plot in Fig. 6 the total force exerted by the grains on the enclosure in the
$z$-direction (i.e. $F_{\mathrm{part}}$, see Eq. (3)) which not only provides
a more precise assessment of the time of impact but also of the intensity of such
impacts.
We recall here that the effective mass of an equivalent mass–spring–dashpot
model is associated with the response of the PD in phase with the spring
force, whereas the effective damping will be associated with the response in
phase with the viscous force.
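This decomposition can be illustrated with a short sketch: projecting the steady-state wall force onto the components in phase with the displacement and with the velocity gives an added-mass and an added-damping estimate. The sign conventions below assume $z(t)=Z\cos(\omega t)$, the force signal is a synthetic stand-in for the DEM output, and this is only an interpretive illustration, not the fitting procedure actually used (which is the FRF fit of Section 4).

```python
import numpy as np

f, Z = 14.5, 0.02                       # drive frequency [Hz], response amplitude [m]
w = 2.0 * np.pi * f
t = np.linspace(0.0, 10.0 / f, 4000, endpoint=False)   # ten full periods
z = Z * np.cos(w * t)                   # assumed steady-state displacement

# stand-in wall force: partly in phase with z, partly opposing the velocity
F_part = 150.0 * np.cos(w * t) + 60.0 * np.sin(w * t)

F_c = 2.0 * np.mean(F_part * np.cos(w * t))   # first-harmonic cosine amplitude
F_s = 2.0 * np.mean(F_part * np.sin(w * t))   # first-harmonic sine amplitude
dM = F_c / (w**2 * Z)                         # added mass [kg]
dC = F_s / (w * Z)                            # added damping [N s/m]
print(dM, dC)
```

A positive cosine component (force in phase with the displacement) acts as extra inertia and raises the effective mass, while a positive sine component opposes the velocity and adds damping, which is the mechanism invoked in the discussion that follows.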
As we mentioned, for very small gaps, the response of the system is similar to
the response of an equivalent SDoF system where the total mass of the
particles is simply added to the primary mass (i.e.,
$M_{\mathrm{eff}}=M+m_{\mathrm{p}}$). This is due to the fact that the
particles are not able to move in such reduced enclosures. As we slightly
increase $L_{z}$, particles behave as a dense pack that travels between the
floor and ceiling of the enclosure (see Fig. 6(a)). However, within an
oscillation, the granular bed is in full contact with the ceiling or the floor
for a significant fraction of the time. During such periods, the granular bed
can be considered essentially as an added mass $m_{\mathrm{p}}$. Notice that
after leaving the ceiling (floor) the grains hit the floor (ceiling) before
the primary system has reached its maximum displacement. The particles
transfer momentum to the enclosure against the direction of the spring force.
As a consequence, the effective inertia of the system during the impact is
equivalent to a sudden mass increase. This effective mass increase exceeds the
loss due to the short periods in which the granular bed is detached from the
floor (ceiling) (see the zero force segments in Fig. 6(a)). The overall result
is an effective mass $M_{\mathrm{eff}}$ above $M+m_{\mathrm{p}}$.
If we further increase $L_{z}$, the period of detachment within an oscillation
increases, but the transfer of momentum at impact also increases leading to an
overall increase of $M_{\mathrm{eff}}$ (see Fig. 6(b)). Eventually, the
transfer of momentum at impact is exactly balanced by the loss of added mass
due to the detachment periods. This makes the system again exhibit
$M_{\mathrm{eff}}=M+m_{\mathrm{p}}$ (see Fig. 6(c)). Interestingly, this
crossover occurs when the granular bed hits the floor (ceiling) at the point
of maximum displacement.
Figure 6: Displacement and force of particles against the enclosure for
different $L_{z}$ with $C=7.6$ N s m$^{-1}$. The solid lines show the position of
floor and ceiling of the enclosure and the colored area indicates the limits
of the granular bed defined as the position of the uppermost and lowermost
particle. (a) $L_{z}=0.042$ m and $f=14.5$ Hz. (b) $L_{z}=0.057$ m and
$f=14.5$ Hz. (c) $L_{z}=0.087$ m and $f=14.5$ Hz. (d) $L_{z}=0.1095$ m and
$f=15.0$ Hz. (e) $L_{z}=0.1225$ m and $f=15.0$ Hz. (f) $L_{z}=0.147$ m and
$f=15.5$ Hz. (g) $L_{z}=0.372$ m and $f=15.0$ Hz. (h) Same as in Fig. 5 with
arrows indicating the values of $L_{z}$ corresponding to each panel (a)-(g).
There exists a range of values of $L_{z}$ for which
$M<M_{\mathrm{eff}}<M+m_{\mathrm{p}}$. In such cases the effective damping is
rather high. This is mainly due to the fact that the granular bed hits the
enclosure out of phase. In particular, the grains hit the base when both the
primary mass is moving upward and the spring force is pulling upward (see Fig.
6(d)). This results in a strong reduction of the maximum displacement of the
system and in an effective added mass with respect to $M$. However, since the
periods of detachment from the enclosure are significantly long, the average
added mass due to the impacts is smaller than $m_{\mathrm{p}}$.
If the impacts always happen at the time when no spring force is
applied to the primary mass, then the transfer of momentum will not result in
an effective added mass. This happens when the primary mass passes through the
equilibrium point of the spring (i.e., zero displacement). Indeed, we find
that there exists a particular value of $L_{z}$ at which the granular bed hits
the floor (ceiling) at this point and the effective mass obtained by fitting
corresponds to $M$ (i.e., no added mass, see Fig. 6(e)). It is important to
realize that this is also the value of $L_{z}$ at which the maximum effective
damping is obtained (see Fig. 4 and Refs. [11, 22, 23]). Therefore, at the
optimum $L_{z}$ where maximum damping is achieved, the effective mass
coincides with the primary mass. This means that the addition of particles to
create the PD does not affect the resonant frequency of the system if the
optimum $L_{z}$ is chosen, which implies that under such conditions there is
no need for compensation of the mass of the particles during design.
As expected, an increase of $L_{z}$ beyond the optimum damping leads the
granular bed to hit the enclosure in phase with the spring force (see Fig.
6(f)). That is, the grains hit the floor when the spring pulls the system
downward. The effect in the apparent inertial response is as if the system
suffered a sudden mass decrease. Therefore, the effective mass is smaller than
the primary mass (notice that here the granular bed is only in contact with
the enclosure during the impacts).
Much larger gap sizes lead the granular bed to expand significantly. The
granular sample enters a gas-like state with only a few particles colliding
with the enclosure in each oscillation. This transfers little momentum to the
primary mass and the system presents an $M_{\mathrm{eff}}$ which is close to
$M$ (see Fig. 6(g)).
## 6 Conclusions
We have studied a PD by means of simulations via a DEM. We have considered the
effective mass and effective damping of the entire SDoF system by fitting the
FRF to a simple mass–spring–dashpot system. In particular, we study the effect
of the height $L_{z}$ of the enclosure.
We have observed that the effective mass of the system reaches the two limits
described in the literature for small and large enclosures. However, those
limits are not approached monotonically and clear overshoots appear. For
small gap sizes, the system presents effective masses above the direct sum of
the primary mass $M$ and the particle mass $m_{\mathrm{p}}$. For large
enclosures, the effective masses fall below $M$.
We have observed that such behavior can be explained by considering both the
period of time over which the granular bed is in full contact with the
enclosure and the inertial effects due to the grains hitting the floor or
ceiling in or out of phase with the spring force.
Interestingly, we found that the value of $L_{z}$ at which $M_{\mathrm{eff}}$
crosses $M$ coincides with the optimum damping value described in the
literature. Since the optimum $L_{z}$ should be simple to interpolate from the
intersection with the horizontal $M$ level in a plot of $M_{\mathrm{eff}}$ vs
$L_{z}$, we suggest that such estimation can be a more suitable approach than
the search for a maximum in a plot of $C_{\mathrm{eff}}$ versus $L_{z}$.
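A minimal sketch of the suggested estimation follows: locate the value of $L_{z}$ at which $M_{\mathrm{eff}}$ crosses the primary mass $M$ by linear interpolation. The $(L_{z}, M_{\mathrm{eff}})$ points below are made-up placeholders standing in for a curve like the one in Fig. 5.

```python
import numpy as np

M = 2.37                                                       # primary mass [kg]
L_z   = np.array([0.090, 0.105, 0.120, 0.135, 0.150])          # placeholder heights [m]
M_eff = np.array([2.55,  2.46,  2.39,  2.33,  2.28])           # placeholder fits [kg]

# find the bracketing pair and interpolate M_eff(L_z) = M
i = np.where(np.diff(np.sign(M_eff - M)))[0][0]
L_opt = L_z[i] + (M - M_eff[i]) * (L_z[i + 1] - L_z[i]) / (M_eff[i + 1] - M_eff[i])
print(L_opt)   # estimated optimum enclosure height [m]
```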
The overshoot effects described are present outside the range of maximal
damping performance and might be considered of secondary interest in
industrial applications at first sight. However, design constraints may require
a PD to work off the optimum damping and in one of the overshoot regions. In
particular, for enclosures somewhat taller than the one corresponding to the
optimum damping, one achieves effective masses only slightly below the primary
mass. This implies that the resonant frequency is almost unaltered upon
addition of the particles. Moreover, such values of $L_{z}$ achieve a
remarkable damping (although not maximal) while still presenting a FRF with a
shape very similar to the one observed for a simple SDoF mass–spring–dashpot
system. This may simplify the prediction of the behavior of the PD under such
conditions. On the other hand, if the primary system has more degrees of
freedom, the off-design resonant frequencies may fall in either overshoot
regime and the side effects should be taken into consideration.
Although we have studied a PD driven in the direction of gravity, similar
results are expected if a horizontal setup is considered. It has been shown
that near the resonant frequency the response of an impact damper does not
depend on the relative direction between the motion of the system and gravity
[5]. Since the effective mass is largely determined by the response
near the resonant frequency, the same general trends should be found in
horizontally driven PD.
## Acknowledgments
LAP acknowledges financial support from CONICET (Argentina).
## References
* [1] H. V. Panossian, Structural damping enhancement via non-obstructive particle damping technique. Journal of Vibration and Acoustics 114 (1992) 101-105. doi:10.1115/1.2930221
* [2] S. S. Simonian, Particle beam damper. Proceedings of the SPIE Conference on Passive Damping, Vol. 2445, Newport Beach, CA, (1995), 149-160.
* [3] Z. Xu, M. Y. Wang, T. Chen, Particle damper for vibration and noise reduction. Journal of Sound and Vibration 270 (2004) 1033-1040. doi:10.1016/S0022-460X(03)00503-0
* [4] K. S. Marhadi, V. K. Kinra, Particle impact damping: effect of mass ratio, material, and shape. Journal of Sound and Vibration 283 (2005) 433-448. doi:10.1016/j.jsv.2004.04.013
* [5] M. R. Duncan, C. R. Wassgren and C. M. Krousgrill, The damping performance of a single particle impact damper. Journal of Sound and Vibration 286 (2005) 123-144. doi:10.1016/j.jsv.2004.09.028
* [6] K. Mao, M. Y. Wang, Z. Xu and T. Chen, DEM simulation of particle damping. Powder Technology 142(2-3) (2004) 154-165. doi:10.1016/j.powtec.2004.04.031
* [7] M. Saeki, Impact damping with granular materials in a horizontally vibration system. Journal of Sound and Vibration 251(1) (2002) 153-161. doi:10.1006/jsvi.2001.3985
* [8] X. M. Bai, L. M. Keer, Q. J. Wang and R. Q. Snurr, Investigation of particle damping mechanism via particle dynamics simulations. Granular Matter 11 (2009) 417-429. doi:10.1007/s10035-009-0150-6
* [9] X. Fang and J. Tang, Granular Damping in Forced Vibration: Qualitative and Quantitative Analyses. Journal of Vibration and Acoustics 128 (2006) 489-500. doi:10.1115/1.2203339
* [10] W. Liu, G. R. Tomlinson, J. A. Rongong, The dynamic characterisation of disk geometry particle dampers. Journal of Sound and Vibration 280 (2005) 849-861. doi:10.1016/j.jsv.2003.12.047
* [11] M. Y. Yang, Development of master design curves for particle impact dampers, PhD Thesis, The Pennsylvania State University, 2003.
* [12] P. A. Cundall and O. D. L. Strack, A discrete numerical model for granular assemblies. Geotechnique 29 (1979) 47-65.
* [13] T. Pöschel and T. Schwager, Computational granular dynamics: Models and algorithms, Springer-Verlag, Berlin-Heidelberg (2005).
* [14] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids. Oxford Science Publications, 1989.
* [15] H. Goldstein, Classical Mechanics (3rd Edition). Addison Wesley, 2002.
* [16] J. Schäfer, S. Dippel and D. E. Wolf, Force schemes in simulations of granular materials. Journal de Physique I 6 (1996) 5-20.
* [17] H. Kruggel-Emden, E. Simsek, S. Rickelt, S. Wirtz and V. Scherer, Review and extension of normal force models for the Discrete Element Method. Powder Technology 171 (2007) 157-173. doi:10.1016/j.powtec.2006.10.004
* [18] H. Kruggel-Emden, S. Wirtz, and V. Scherer, A study on tangential force laws applicable to the discrete element method (DEM) for materials with viscoelastic or plastic behavior. Chemical Engineering Science 63 (2008) 1523-1541. doi:10.1016/j.ces.2007.11.025
* [19] C. X. Wong, A. B. Spencer and J. A. Rongong, Effects of Enclosure Geometry on Particle Damping Performance. 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Palm Springs, California (2009)
* [20] M. Sánchez and L. A. Pugnaloni, Modelling of a granular damper, Mecánica Computacional XXIX (2010) 1849-1859.
* [21] S. Brennan, S. S. Simonian, Parametric test results on Particle Dampers. 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Material Conference, Honolulu, Hawaii (2007)
* [22] R. D. Friend and V. K. Kinra, Particle impact damping. Journal of Sound and Vibration 233(1) (2000) 93-118. doi:10.1006/jsvi.1999.2795
* [23] Z. Lu, X. Lu and S. F. Masri, Studies of the performance of particle dampers under dynamic loads. Journal of Sound and Vibration 329 (2010) 5415-5433. doi:10.1016/j.jsv.2010.06.027
arXiv:1105.0304 (submitted 2011-05-02). Authors: Martín Sánchez, Luis A. Pugnaloni. License: Creative Commons Attribution 3.0. https://arxiv.org/abs/1105.0304
# Submillimeter continuum observations of Sagittarius B2 at subarcsecond spatial resolution
S.-L. Qin, P. Schilke, R. Rolffs, C. Comito, D. C. Lis and Q. Zhang
1 I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Köln, Germany (qin@ph1.uni-koeln.de, schilke@ph1.uni-koeln.de)
2 Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
3 California Institute of Technology, Cahill Center for Astronomy and Astrophysics 301-17, Pasadena, CA 91125, USA
4 Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge MA 02138, USA
We report the first high spatial resolution submillimeter continuum
observations of the Sagittarius B2 cloud complex using the Submillimeter Array
(SMA). With the subarcsecond resolution provided by the SMA, the two massive
star-forming clumps Sgr B2(N) and Sgr B2(M) are resolved into multiple compact
sources. In total, twelve submillimeter cores are identified in the Sgr B2(M)
region, while only two components are observed in the Sgr B2(N) clump. The gas
mass and column density are estimated from the dust continuum emission. We
find that most of the cores have gas masses in excess of 100 M$_{\odot}$ and column
densities above $10^{25}$ cm$^{-2}$. The very fragmented appearance of Sgr B2(M), in
contrast to the monolithic structure of Sgr B2 (N), suggests that the former
is more evolved. The density profile of the Sgr B2(N)-SMA1 core is well fitted
by a Plummer density distribution. This would lead one to believe that in the
evolutionary sequence of the Sgr B2 cloud complex, a massive star forms first
in a homogeneous core, and the rest of the cluster forms subsequently in the
then fragmenting structure.
###### Key Words.:
ISM: clouds: radio continuum — ISM: individual objects (Sgr B2) — stars: formation
## 1 Introduction
The Sagittarius B2 star-forming region is located $\sim$ 100 pc from Sgr A∗,
within the $\sim 400$ pc wide dense central molecular zone (CMZ) of the
Galactic center, at a distance of $\sim$8 kpc from the Sun (Reid et al. 2009).
It is the strongest submillimeter continuum source in the CMZ (Schuller et al.
2009). It contains dense cores, Sgr B2(N) and Sgr B2(M), hosting clusters of
compact Hii regions (Gaume et al. 1995; de Pree, Goss & Gaume 1998). It has
been suggested that these two hot cores are at different evolutionary stages
(Reid et al 2009; Lis et al. 1993; Hollis et al. 2003; Qin et al. 2008).
Spectral observations in centimeter and millimeter regimes have been conducted
towards Sgr B2 (e.g. Carlstrom & Vogel 1989; Mehringer & Menten 1997; Nummelin
et al. 1998; Liu & Snyder 1999; Hollis et al. 2003; Friedel et al. 2004; Jones
et al. 2008; Belloche et al. 2008), suggesting that Sgr B2(N) is chemically
more active. Nearly half of all known interstellar molecules were first
identified in Sgr B2(N), although sulphur-bearing molecules are more abundant
in Sgr B2(M) than in Sgr B2(N).
The differences between Sgr B2(N) and Sgr B2(M), in terms of both kinematics
and chemistry, may originate from different physical conditions and thus
different chemical histories, or may simply be an evolutionary effect. A
clearer understanding of the small-scale source structure and the exact origin
of the molecular line emission is needed to distinguish between these two
possibilities. In this Letter, we present high spatial resolution
submillimeter continuum observations of Sgr B2(N) and Sgr B2(M), using the
SMA (the Submillimeter Array is a joint project between the Smithsonian
Astrophysical Observatory and the Academia Sinica Institute of Astronomy and
Astrophysics, and is funded by the Smithsonian Institution and the Academia
Sinica). The observations presented here resolve both of the Sgr B2 clumps
into multiple submillimeter components. Combining with the SMA spectral line
data cubes and ongoing Herschel/HIFI complete spectral surveys towards Sgr
B2(N) and Sgr B2(M) in the HEXOS key project, these observations will help us
to answer fundamental questions about the chemical composition and physical
conditions in Sgr B2(N) and Sgr B2(M).
## 2 Observations
The SMA observations of Sgr B2 presented here were carried out using seven
antennas in the compact configuration on 2010 June 11, and using eight
antennas in the very extended configuration on 2010 July 11. The phase
tracking centers were
$\alpha\,(\mathrm{J}\,2000.0)=17^{\mathrm{h}\,}47^{\mathrm{m}\,}19.883^{\mathrm{s}\,},\delta\,(\mathrm{J}\,2000.0)=-28^{\circ}22^{\prime}18.4^{\prime\prime}$
for Sgr B2(N) and
$\alpha\,(\mathrm{J}\,2000.0)=17^{\mathrm{h}\,}47^{\mathrm{m}\,}20.158^{\mathrm{s}\,},\delta\,(\mathrm{J}\,2000.0)=-28^{\circ}23^{\prime}05.0^{\prime\prime}$
for Sgr B2(M). Both tracks were observed in double-bandwidth mode with a 4 GHz
bandwidth in each of the lower sideband (LSB) and upper sideband (USB). The
spectral resolution was 0.8125 MHz per channel, corresponding to a velocity
resolution of $\sim$0.7 km s$^{-1}$. The observations covered rest frequencies from
342.2 to 346.2 GHz (LSB), and from 354.2 to 358.2 GHz (USB). Observations of
QSOs 1733-130 and 1924-292 were evenly interleaved with the array pointings
toward Sgr B2(N) and Sgr B2(M) during the observations in both configurations,
to perform antenna gain calibration.
For the compact configuration observations, the typical system temperature was
273 K. Mars, QSOs 3c454.3, and 3c279 were observed for bandpass calibration.
The flux calibration was based on the observations of Neptune ($\sim$1.1′′).
For the very extended array observations, the typical system temperature was
292 K. Both 3c454.3 and 3c279 were used for bandpass calibration, and Uranus
($\sim$1.7′′) was used for flux calibration. The absolute flux scale is
estimated to be accurate to within 20%.
The calibration and imaging were performed in Miriad (Sault, Teuben & Wright
1995). We note that there are spectral-window-based bandpass errors in both
amplitude and phase on some baselines in the compact array data, which were
corrected by use of a bright point source using the BLCAL task. The system
temperature measurements for antennas 2 and 7 in the very extended array data
were not recorded properly and were corrected using the SMAFIX task. As done
by Qin et al. (2008), we selected line-free channels; these may still contain
some contribution from weak, densely spaced blended lines, but obvious line
contributions were excluded. The continuum images were
constructed from these channels, combining the LSB and USB data of both the compact and
very extended array observations. We performed a self-calibration on the
continuum data using the model from ‘CLEANed’ components for a few iterations
to remove residual errors. Using the model from ‘CLEANed’ components, the
self-calibration in our case did not introduce errors in the source structure,
but improved the image quality and minimized the gain calibration errors. The
final images were corrected for the primary beam response. The projected
baselines ranged from 9 to 80 k$\lambda$ in the compact configuration and from
27 to 590 k$\lambda$ in the very extended configuration. The resulting
synthesized beam is 0.4′′ $\times$ 0.24′′ (PA = 14.4°) using uniform weighting, and the
1$\sigma$ rms noise levels are 21 and 31 mJy for Sgr B2(M) and Sgr B2(N)
images, respectively. The difference in rms noise is caused by having more
line free channels for continuum images in Sgr B2(M) (2417 channels) than in
Sgr B2(N) (740 channels). Since there are no systematic offsets between the
submillimeter and cm sources, we believe that the absolute astrometry is as
good as 0.1′′.
## 3 Results
Continuum images of Sgr B2(N) and Sgr B2(M) at 850 $\mu$m are shown in Figure
1. Multiple submillimeter continuum cores are clearly detected and resolved
towards both Sgr B2(M) and Sgr B2(N) with a spatial resolution of
0.4′′ $\times$ 0.24′′. Unlike in radio observations at 1.3 cm with
a comparable resolution, which detected the UC Hii regions K1, K2, K3, and K4
(Gaume et al. 1995), only two submillimeter continuum sources, SMA1 and SMA2,
are observed in Sgr B2(N) (Fig. 1). The bright component Sgr B2(N)-SMA1 is
situated close to the UC Hii region K2, and Sgr B2(N)-SMA2 is located
$\sim$5′′ north of Sgr B2(N)-SMA1. The observations showed that large
saturated molecules exist only within a small region ($<5^{\prime\prime}$) of
Sgr B2(N) called the Large Molecule Heimat, Sgr B2(N-LMH) (Snyder, Kuan & Miao
1994). Our current observations indicate that Sgr B2(N-LMH) coincides with Sgr
B2(N)-SMA1. Sgr B2(N)-SMA2 was also detected in continuum emission at 7 mm and
3 mm (Rolffs et al. 2011; Liu & Snyder 1999) and molecular lines of CH3OH and
C2H5CN at 7 mm (Mehringer & Menten 1997; Hollis et al. 2003). Lower resolution
continuum observations at 1.3 mm (see note to Table 1 of Qin et al. 2008)
suggested that the Sgr B2(N)K1-K3 clump could not be fitted with a single
Gaussian component, and another component existed at Sgr B2(N)-SMA2 position.
Our observations resolve Sgr B2(N)-SMA2 as a separate source, confirming it to
be a high-mass core.
Figure 1: Continuum maps of Sgr B2 at 850 $\mu$m, with a synthesized beam of
0.4′′ $\times$ 0.24′′, PA = 14.4° (lower-right corner in each panel).
The left panel shows the image of Sgr B2(N), with contour levels ($-$1, 1,
…14)$\times$4$\sigma$ (1$\sigma$=0.031 Jy beam$^{-1}$). The cross symbols indicate
the positions of UC Hii regions detected in the 1.3 cm continuum (Gaume et al.
1995). The right panel presents the image of Sgr B2(M), with contour levels
($-$1, 1, 2, 3, 4, 4.5, 5, 5.5, 6, 7, 8, 10, …28)$\times$4$\sigma$
(1$\sigma$=0.021 Jy beam$^{-1}$). The cross symbols indicate the positions of UC
Hii regions detected in the 7 mm continuum (de Pree, Goss & Gaume 1998). In each
panel, the filled circles mark the peak positions of the submillimeter continuum
sources.
The continuum image of Sgr B2(M) (Fig. 1, right panel) shows a complicated
morphology, with a roughly north-south extending envelope encompassing several
compact components. In total, twelve submillimeter sources are resolved in Sgr
B2(M). Using the Very Large Array (VLA), Gaume et al. (1995) detected four
bright UC Hii regions (F1–F4) at 1.3 cm within 2′′ in Sgr B2(M). The highest
resolution 7 mm VLA image with a position accuracy of $0.1^{\prime\prime}$ (de
Pree, Goss & Gaume 1998) resolved nineteen UC Hii regions in the central
region of Sgr B2(M) (F1–F4). Five submillimeter components, Sgr B2(M)-SMA1 to
SMA5, are detected in the central region of Sgr B2(M) in our observations.
Outside the central region, seven components, Sgr B2(M)-SMA6 to SMA12 are
identified. Given the positional accuracies of our observations and the
observations by de Pree, Goss & Gaume (1998), the projected positions of Sgr
B2(M)-SMA2, SMA6, SMA11 and SMA12 coincide with those of the UC HII regions
F1, F10.37, F10.30, and F10.318, respectively. No centimeter source is
detected towards Sgr B2(M)-SMA10, which is located south-west of the central
region and displays an extended structure.
Table 1: Properties of the Continuum Sources
Source | $\alpha$ (J2000.0) | $\delta$ (J2000.0) | Deconvolved size | Peak Intensity | Flux Density | $M_{\rm H_{2}}$ | $N_{\rm H_{2}}$
---|---|---|---|---|---|---|---
| | | | (Jy beam$^{-1}$) | (Jy) | ($10^{2}$ M$_{\odot}$) | ($10^{25}$ cm$^{-2}$)
Sgr B2(N)-SMA1 | 17 47 19.889 | –28 22 18.22 | 1.72′′ $\times$ 1.28′′ ($-7.7^{\circ}$) | 1.79$\pm$0.039 | 47.48$\pm$1.034 | 27.31$\pm$0.59 | 4.54$\pm$0.1
Sgr B2(N)-SMA2 | 17 47 19.885 | –28 22 13.29 | 1.44′′ $\times$ 1.02′′ ($-24.5^{\circ}$) | 0.601$\pm$0.031 | 10.12$\pm$0.521 | 5.82$\pm$0.38 | 1.45$\pm$0.1
Sgr B2(M)-SMA1 | 17 47 20.157 | –28 23 04.53 | 1.51′′ $\times$ 0.59′′ ($10.6^{\circ}$) | 2.39$\pm$0.138 | 20.9$\pm$1.21 | 12.02$\pm$0.7 | 4.94$\pm$0.29
Sgr B2(M)-SMA2 | 17 47 20.133 | –28 23 04.06 | 0.98′′ $\times$ 0.58′′ ($14.7^{\circ}$) | 1.75$\pm$0.12 | 12.51$\pm$0.859 | 7.19$\pm$0.49 | 4.63$\pm$0.32
Sgr B2(M)-SMA3 | 17 47 20.098 | –28 23 03.9 | 0.93′′ $\times$ 0.3′′ ($-9.9^{\circ}$) | 0.866$\pm$0.09 | 4.147$\pm$0.431 | 2.38$\pm$0.25 | 3.13$\pm$0.33
Sgr B2(M)-SMA4 | 17 47 20.150 | –28 23 03.26 | 0.55′′ $\times$ 0.32′′ ($-1.5^{\circ}$) | 0.483$\pm$0.021 | 1.443$\pm$0.063 | 0.83$\pm$0.04 | 1.73$\pm$0.08
Sgr B2(M)-SMA5 | 17 47 20.205 | –28 23 04.78 | 0.66′′ $\times$ 0.52′′ ($15.9^{\circ}$) | 0.52$\pm$0.051 | 2.488$\pm$0.244 | 1.43$\pm$0.14 | 1.53$\pm$0.15
Sgr B2(M)-SMA6 | 17 47 20.171 | –28 23 05.91 | 0.75′′ $\times$ 0.43′′ ($-24.3^{\circ}$) | 0.71$\pm$0.067 | 3.05$\pm$0.288 | 1.75$\pm$0.17 | 1.99$\pm$0.19
Sgr B2(M)-SMA7 | 17 47 20.11 | –28 23 06.18 | 0.72′′ $\times$ 0.37′′ ($14.9^{\circ}$) | 0.6$\pm$0.051 | 2.366$\pm$0.201 | 1.36$\pm$0.12 | 1.87$\pm$0.16
Sgr B2(M)-SMA8 | 17 47 20.216 | –28 23 06.48 | 0.52′′ $\times$ 0.45′′ ($-63.4^{\circ}$) | 0.31$\pm$0.048 | 1.246$\pm$0.193 | 0.71$\pm$0.11 | 1.12$\pm$0.17
Sgr B2(M)-SMA9 | 17 47 20.238 | –28 23 06.87 | 0.83′′ $\times$ 0.42′′ ($23.9^{\circ}$) | 0.369$\pm$0.032 | 1.922$\pm$0.164 | 1.11$\pm$0.09 | 1.16$\pm$0.1
Sgr B2(M)-SMA10 | 17 47 19.989 | –28 23 05.83 | 3.06′′ $\times$ 1.0′′ ($-25.8^{\circ}$) | 0.183$\pm$0.015 | 4.321$\pm$0.363 | 2.49$\pm$0.29 | 0.3$\pm$0.03
Sgr B2(M)-SMA11 | 17 47 20.106 | –28 23 03.01 | 0.97′′ $\times$ 0.48′′ ($-27.7^{\circ}$) | 0.695$\pm$0.058 | 4.423$\pm$0.37 | 2.54$\pm$0.21 | 2$\pm$0.17
Sgr B2(M)-SMA12 | 17 47 20.132 | –28 23 02.24 | 0.82′′ $\times$ 0.52′′ ($0.9^{\circ}$) | 0.394$\pm$0.018 | 2.629$\pm$0.121 | 1.51$\pm$0.08 | 1.3$\pm$0.07
Note: Units of right ascension are hours, minutes, and seconds, and units of
declination are degrees, arcminutes, and arcseconds.
Multi-component Gaussian fits were carried out towards both the Sgr B2(N) and
Sgr B2 (M) clumps using the IMFIT task. The residual fluxes after fitting are
4 and 11 Jy for Sgr B2(N) and Sgr B2(M), respectively. The large residual
error in the Sgr B2(M) clump is most likely caused by its complicated source
structure. The peak positions, deconvolved angular sizes (FWHM), peak
intensities, and total flux densities of the continuum components are
summarized in Table 1. The total flux densities of the Sgr B2(N) and Sgr B2
(M) cores are $\sim$58 and 61 Jy, while the peak fluxes measured by the
bolometer array LABOCA at the 12-m telescope APEX are 150 and 138 Jy in a
$18.2^{\prime\prime}$ beam, respectively (Schuller, priv. comm.), indicating
that $\sim$60% of the flux is filtered out and that our SMA observations only
pick up the densest parts of the Sgr B2 cores.
Assuming that the 850 $\mu$m continuum is due to optically thin dust emission
and using an average grain radius of 0.1 $\mu$m, grain density of 3 g cm-3,
and a gas-to-dust ratio of 100 (Hildebrand 1983; Lis, Carlstrom & Keene 1991),
the mass and column density can be calculated using the formulae given in Lis,
Carlstrom & Keene (1991). We adopt $Q({\nu})=4\times 10^{-5}$ at 850 $\mu$m
(Hildebrand 1983; Lis, Carlstrom & Keene 1991) and a dust temperature of 150 K
(Carlstrom & Vogel 1989; Lis et al. 1993) in the calculation. On the basis of
flux densities at 1.3 cm and 7 mm (Gaume et al. 1995; de Pree, Goss & Gaume
1998; Rolffs et al. 2011 ), most K and F subcomponents (except for K2 and F3)
at 7 mm have fluxes less than or comparable with those at 22.4 GHz, which
produces descending spectra and is indicative of optically thin Hii regions at
short wavelengths. The contributions of the free-free emission to the flux
densities of the submillimeter components are smaller than 0.7% for K2 and F3
and smaller than 0.1% for other components, and can safely be ignored. The
estimated clump masses and column densities are given in Table 1. The flux
density of the fourteen detected submillimeter components ranges from 1.2 to
47 Jy, corresponding to gas masses from 71 to 2731 $M_{\odot}$. The column
densities are a few times $10^{25}$ cm$^{-2}$. Under the Rayleigh-Jeans approximation, 1
Jy beam$^{-1}$ in our SMA observations corresponds to a brightness temperature of
113 K. The peak brightness temperatures of Sgr B2(M)-SMA1 and Sgr B2(N)-SMA1
are 270 and 200 K respectively, which place a lower limit on the dust
temperatures at the peak position of the continuum. In this paper, the column
densities and masses are estimated by use of source-averaged continuum fluxes.
Based on the model fitting (Lis et al. 1991, 1993), the adopted $Q({\nu})$,
dust temperature, and optically thin approximation are reasonable guesses for
the Sgr B2(N) and Sgr B2(M) cloud complexes. The total masses of Sgr B2(N) and
(M), determined by summing up over the components, are comparable, 3313 and
3532 M$_{\odot}$, respectively, in spite of very different morphologies. We
consider the adopted gas temperature of 150 K a reasonable guess for the
average core temperatures, although some of the massive cores show higher peak
brightness temperatures. For those, the optically thin assumption is probably
also not justified, and we would be underestimating their masses.
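The mass estimate described above can be re-computed with a short hedged sketch. It assumes optically thin dust at $T_{\rm d}=150$ K, a grain radius of 0.1 $\mu$m, a grain density of 3 g cm$^{-3}$, a gas-to-dust ratio of 100, $Q(\nu)=4\times10^{-5}$ at 850 $\mu$m and a distance of 8 kpc, as stated in the text; the function name and the exact numerical constants are assumptions of this sketch, so the result only reproduces Table 1 to within a few per cent.

```python
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10      # cgs constants
Msun, pc = 1.989e33, 3.086e18

def planck(nu, T):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * h * nu**3 / c**2 / (np.exp(h * nu / (k * T)) - 1.0)

def gas_mass(S_jy, wav_um=850.0, T_d=150.0, D_pc=8000.0,
             a=1e-5, rho=3.0, Q=4e-5, gas_to_dust=100.0):
    """Gas mass [Msun] from an observed flux density S_jy [Jy]."""
    nu = c / (wav_um * 1e-4)
    kappa_dust = 3.0 * Q / (4.0 * a * rho)    # dust opacity [cm^2 g^-1], ~1 here
    D = D_pc * pc
    M_dust = S_jy * 1e-23 * D**2 / (kappa_dust * planck(nu, T_d))
    return gas_to_dust * M_dust / Msun

# Sgr B2(N)-SMA1: 47.48 Jy -> ~2.7e3 Msun, consistent with Table 1
print(gas_mass(47.48))
```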
## 4 Modeling
A striking feature in the maps is the appearance of Sgr B2(N)-SMA1, which,
although well resolved, does not appear to be fragmented. We used the three-
dimensional radiative-transfer code RADMC-3D
(http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d), developed by C. Dullemond, to
model the continuum emission of Sgr B2(N)-SMA1. Figure 2 shows three example
models, whose radial profiles were obtained by Fourier-transforming the
computed dust continuum maps, ‘observing’ with the uv coverage of the data,
and averaging the image in circular annuli. All models are heated by the stars
in the UC Hii regions K2 and K3, assumed to have luminosities of $\approx
10^{5}$ L⊙, which is uncertain, but should give the right order of magnitude
(Rolffs et al. 2011). The dust mass opacity (0.6 cm$^{2}$ g$^{-1}$) is interpolated from
Ossenkopf & Henning (1994) without grain mantles or coagulation, which
corresponds to a $Q({\nu})$ of a few times $10^{-5}$ at 850 $\mu$m and is
consistent with the value used in our calculation of masses and column
densities. The model with a density distribution that follows the Plummer
profile given by $n=1.7\times 10^{8}\left(1+\left(\frac{r}{11500\,{\rm AU}}\right)^{2}\right)^{-2.5}$ H$_2$ cm$^{-3}$
(half-power radius 6500 AU) provides
the best fit. We also show in Fig. 2 a Gaussian model with central density
$2\times 10^{8}$ H$_2$ cm$^{-3}$ and half-power radius 6500 AU, and a model whose
density follows a radial power law, $n=10^{9}\left(\frac{r}{1000\,{\rm AU}}\right)^{-1.5}$ H$_2$ cm$^{-3}$
(outside of 1000 AU, the radius of the Hii region
K2). The latter model reproduces the peak flux measured by the bolometer array
LABOCA at the 12-m telescope APEX, which is 150 Jy in a $18.2^{\prime\prime}$
beam, but does not fit the inner regions observed by the SMA very well. The
Plummer and Gauss models fit Sgr B2(N)-SMA1 well, in terms of both shape and
absolute flux. This implies that an extended component exists, which is
filtered out in the SMA map but picked up by LABOCA.
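For reference, the three trial density distributions compared in Fig. 2 are written out below as functions of radius. Only the functional forms and parameters are taken from the text; producing the model fluxes additionally requires the RADMC-3D radiative transfer and the SMA uv sampling, which are not reproduced in this sketch.

```python
import numpy as np

def n_plummer(r_au):                  # best-fitting Plummer model [H2 cm^-3]
    return 1.7e8 * (1.0 + (r_au / 11500.0) ** 2) ** -2.5

def n_gauss(r_au):                    # central density 2e8, half-power radius 6500 AU
    return 2.0e8 * np.exp(-np.log(2.0) * (r_au / 6500.0) ** 2)

def n_powerlaw(r_au):                 # r^-1.5 outside the 1000 AU Hii region K2
    return 1.0e9 * (np.maximum(r_au, 1000.0) / 1000.0) ** -1.5

for r in (1000.0, 6500.0, 11500.0, 50000.0):
    print(r, n_plummer(r), n_gauss(r), n_powerlaw(r))
```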
Figure 2: Radial profile of the Sgr B2(N)-SMA1 component, with errorbars
denoting the rms in circular annuli. Overlaid are three models with different
density distributions (red: Plummer; blue: Gaussian; green: power-law). The
beam is depicted as a dashed Gaussian.
It remains unclear whether the Plummer profile, which is also used to describe
density profiles of star clusters, has any relevance to an evolving star
cluster, or whether the cluster loses its memory of the gas density profile in
the subsequent dynamical evolution. The model has a mass of around 3000 M⊙
inside a radius of 11500 AU (0.056 pc), which represents an average density of
$4\times 10^{6}$ M⊙ pc-3. This is only the mass contained in the gas, and does
not include the mass of the already formed compact objects providing the
luminosity. Sgr B2(N) might represent a very young, embedded stage in the
formation of a massive star cluster.
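The gas mass and average density quoted above can be checked by integrating the Plummer profile numerically, as in the short sketch below; a mean molecular mass of 2.8 $m_{\rm H}$ per H$_2$ molecule (i.e. including helium) is an assumption of this sketch.

```python
import numpy as np

AU, pc, Msun, mH = 1.496e13, 3.086e18, 1.989e33, 1.673e-24   # cgs
a = 11500.0 * AU                                             # outer radius

r = np.linspace(0.0, a, 20001)
n = 1.7e8 * (1.0 + (r / a) ** 2) ** -2.5        # H2 number density [cm^-3]
integrand = 4.0 * np.pi * r**2 * n * 2.8 * mH   # mass per unit radius [g cm^-1]
M_gas = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)) / Msun

R_pc = a / pc
rho_mean = M_gas / (4.0 / 3.0 * np.pi * R_pc**3)
print(M_gas, rho_mean)    # ~3e3 Msun and ~4e6 Msun pc^-3
```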
## 5 Discussion
Our SMA observations resolve the Sgr B2(M) and (N) cores into fourteen compact
submillimeter continuum components. The two cores display very different
morphologies. The source Sgr B2(N)-SMA1 is located north-east of the
centimeter source K2, with an offset of $\sim$0.2′′. The continuum
observations at 1.3 cm having a resolution of 0.25′′ (Gaume et al. 1995) and
at 7 mm a resolution of 0.1′′ (Rolffs et al. 2011) also detected a compact
component centered on K2. We failed to detect any submillimeter continuum
emission associated with sources K1, K3, and K4. Sgr B2(N)-SMA2 is a high-mass
dust core.
In contrast to Sgr B2(N), a very fragmented cluster of high mass submillimeter
sources is detected in Sgr B2(M). In addition to the two brightest and most
massive components, Sgr B2(M)-SMA1 and Sgr B2(M)-SMA2, situated in the central
region of Sgr B2(M), ten additional sources are detected, which indicates that
there has been a high degree of fragmentation. The sensitivity of 0.021 Jy
beam$^{-1}$ in our observations corresponds to a detectable gas mass of 1.2 M$_{\odot}$, but
the observations are likely to be dynamic-range-limited, so it is difficult to
determine the clump mass function in Sgr B2(M) down to smaller masses.
The estimated column densities ($10^{25}$ cm$^{-2}$ = 33.4 g cm$^{-2}$) in both the
homogeneous star-forming region Sgr B2(N) and the clustered Sgr B2(M) region
are well in excess of the 1 g cm$^{-2}$ threshold above which cloud
fragmentation is suppressed and massive stars can form (Krumholz & McKee 2008). The
source sizes and masses derived from Gaussian fitting, assuming a spherical
source, lead to volume densities in excess of $10^{7}$ cm$^{-3}$ for all
submillimeter sources detected in the SMA images. Assuming a gas temperature
of 150 K, the thermal Jeans masses are less than 10 M$_{\odot}$, but the turbulent
support is considerable. The large column densities, the gas masses, and the
velocity field (Rolffs et al. 2010) suggest that the submillimeter components
in the two regions are gravitationally unstable and in the process of forming
massive stars. This process seems more advanced in Sgr B2(M), which is also
reflected in the large number of embedded UC Hii regions found there. However,
star formation in Sgr B2(N) does not appear to have progressed very far, a
conclusion also supported by the presence of only one UC Hii region embedded
in one of the two clumps studied here. The observations also showed that
massive star formation is taking place in the two clumps, with outflow ages of
$\sim 10^{3}$ and $\sim 10^{4}$ years for the Sgr B2(N) and Sgr B2(M) clumps,
respectively (Lis et al. 1993). If one can generalize these two examples, and
if they provide snapshots in time of the evolution of basically equal cores,
it seems that a massive star forms first in a relatively homogeneous core,
another example with Plummer density profile and without fragmentation being
the high-mass core G10.47 (Rolffs et al., in prep.), followed by fragmentation
or at least visible break-up of the core and subsequent star formation,
perhaps aided by radiative or outflow feedback from the first star. This
scenario may well apply to extremely high-mass cluster forming cores or to
special environments only, since high-mass IRDCs have been shown to fragment
early (Zhang et al. 2009).
## References
* Belloche, A., Menten, K. M., Comito, C., et al. 2008, A&A, 482, 179
* Carlstrom, J. E., & Vogel, S. N. 1989, A&A, 377, 408
* de Pree, C. G., Goss, W. M., & Gaume, R. A. 1998, ApJ, 500, 847
* Friedel, D. N., Snyder, L. E., Turner, B. E., & Remijan, A. 2004, ApJ, 600, 234
* Gaume, R. A., Claussen, M. J., de Pree, C. G., Goss, W. M., & Mehringer, D. M. 1995, ApJ, 438, 776
* Hildebrand, R. H. 1983, QJRAS, 24, 267
* Hollis, J. M., Pedelty, J. A., Boboltz, D. A., et al. 2003, ApJ, 596, L235
* Jones, P. A., Burton, M. G., Cunningham, M. R., et al. 2008, MNRAS, 386, 117
* Krumholz, M. R., & McKee, C. F. 2008, Nature, 451, 1082
* Lis, D. C., Carlstrom, J. E., & Keene, J. 1991, ApJ, 380, 429
* Lis, D. C., Goldsmith, P. F., Carlstrom, J. E., & Scoville, N. Z. 1993, ApJ, 402, 238
* Liu, S.-Y., & Snyder, L. E. 1999, ApJ, 523, 683
* Mehringer, D. M., & Menten, K. M. 1997, ApJ, 474, 346
* Nummelin, A., Bergman, P., Hjalmarson, A., et al. 1998, ApJS, 117, 427
* Ossenkopf, V., & Henning, T. 1994, A&A, 291, 943
* Qin, S.-L., Zhao, J.-H., Moran, J. M., et al. 2008, ApJ, 677, 353
* Reid, M. J., Menten, K. M., Zheng, X. W., et al. 2009, ApJ, 705, 1548
* Rolffs, R., Schilke, P., Comito, C., et al. 2010, A&A, 521, L46
* Rolffs, R., Schilke, P., Wyrowski, F., et al. 2011, A&A, 529, 76
* Sault, R. J., Teuben, P. J., & Wright, M. C. H. 1995, in ASP Conf. Ser. 77, Astronomical Data Analysis Software and Systems IV, ed. R. A. Shaw, H. E. Payne, & J. J. E. Hayes (San Francisco, CA: ASP), 433
* Schuller, F., et al. 2009, A&A, 504, 415
* Snyder, L. E., Kuan, Y.-J., & Miao, Y. 1994, in The Structure and Content of Molecular Clouds, ed. T. L. Wilson & K. J. Johnston (Berlin: Springer), 187
* Zhang, Q., Wang, Y., Pillai, T., & Rathborne, J. 2009, ApJ, 696, 268
|
arxiv-papers
| 2011-05-02T14:40:45 |
2024-09-04T02:49:18.533749
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "S.-L. Qin, P. Schilke, R. Rolffs, C. Comito, D.C. Lis and Q. Zhang",
"submitter": "Shengli Qin",
"url": "https://arxiv.org/abs/1105.0344"
}
|
1105.0386
|
# Fourier and Gegenbauer expansions for a fundamental solution of the
Laplacian in the hyperboloid model of hyperbolic geometry
H S Cohl1,2 and E G Kalnins3 1Information Technology Laboratory, National
Institute of Standards and Technology, Gaithersburg, MD, USA 2Department of
Mathematics, University of Auckland, Auckland, New Zealand 3Department of
Mathematics, University of Waikato, Hamilton, New Zealand hcohl@nist.gov
###### Abstract
Due to the isotropy of $d$-dimensional hyperbolic space, there exists a
spherically symmetric fundamental solution for its corresponding Laplace-
Beltrami operator. On the $R$-radius hyperboloid model of $d$-dimensional
hyperbolic geometry with $R>0$ and $d\geq 2$, we compute azimuthal Fourier
expansions for a fundamental solution of Laplace’s equation. For $d\geq 2$, we
compute a Gegenbauer polynomial expansion in geodesic polar coordinates for a
fundamental solution of Laplace’s equation on this negative-constant sectional
curvature Riemannian manifold. In three dimensions, an addition theorem for
the azimuthal Fourier coefficients of a fundamental solution for Laplace’s
equation is obtained through comparison with its corresponding Gegenbauer
expansion.
###### pacs:
02.30.Em, 02.30.Gp, 02.30.Jr, 02.30.Nw, 02.40.Ky, 02.40.Vh
###### ams:
31C12, 32Q45, 33C05, 33C45, 35A08, 35J05, 42A16
## 1 Introduction
In this paper we discuss eigenfunction expansions for a fundamental solution
of Laplace’s equation in the hyperboloid model of $d$-dimensional hyperbolic
geometry. In particular, for a fixed $R\in(0,\infty)$ and $d\geq 2$, we derive
and discuss Fourier cosine and Gegenbauer polynomial expansions in
rotationally invariant coordinate systems, for a previously derived (see Cohl
& Kalnins (2011) [8]) spherically symmetric fundamental solution of the
Laplace-Beltrami operator in the hyperboloid model of hyperbolic geometry.
Useful background material relevant for this paper can be found in Vilenkin
(1968) [29], Thurston (1997) [27], Lee (1997) [20] and Pogosyan & Winternitz
(2002) [26].
This paper is organized as follows. In section 2, for the hyperboloid model of
$d$-dimensional hyperbolic geometry, we describe some of its global
properties, such as its geodesic distance function, geodesic polar
coordinates, Laplace-Beltrami operator (Laplacian), radial harmonics, and
several previously derived equivalent expressions for a radial fundamental
solution of Laplace’s equation. In section 3, for $d\geq 2$, we derive and
discuss Fourier cosine series for a fundamental solution of Laplace’s equation
about an appropriate azimuthal angle in rotationally invariant coordinate
systems, and show how the resulting Fourier coefficients compare to those
in Euclidean space. In section 4, for $d\geq 2$, we compute Gegenbauer
polynomial expansions in geodesic polar coordinates, for a fundamental
solution of Laplace’s equation in the hyperboloid model of hyperbolic
geometry. In section 5 we discuss possible directions of research in this
area.
Throughout this paper we rely on the following definitions. For
$a_{1},a_{2},\ldots\in{\mathbf{C}}$, if $i,j\in{\mathbf{Z}}$ and $j<i$ then
$\sum_{n=i}^{j}a_{n}=0$ and $\prod_{n=i}^{j}a_{n}=1$. The set of natural
numbers is given by ${\mathbf{N}}:=\\{1,2,\ldots\\}$, the set
${\mathbf{N}}_{0}:=\\{0,1,2,\ldots\\}={\mathbf{N}}\cup\\{0\\}$, and the set
${\mathbf{Z}}:=\\{0,\pm 1,\pm 2,\ldots\\}.$ The set ${\mathbf{R}}$ represents
the real numbers.
## 2 Global analysis on the hyperboloid
### 2.1 The hyperboloid model of hyperbolic geometry
Hyperbolic space, developed independently by Lobachevsky and Bolyai around
1830 (see Trudeau (1987) [28]), is a fundamental example of a space exhibiting
hyperbolic geometry. Hyperbolic geometry is analogous to Euclidean geometry,
but such that Euclid’s parallel postulate is no longer assumed to hold. There
are several models of $d$-dimensional hyperbolic space ${\mathbf{H}}_{R}^{d}$,
including the Klein, Poincaré, hyperboloid, upper-half space and hemisphere
models (see Thurston (1997) [27]). The hyperboloid model for $d$-dimensional
hyperbolic geometry (hereafter referred to as the hyperboloid model, or more
simply, the hyperboloid), is closely related to the Klein and Poincaré models:
each can be obtained projectively from the others. The upper-half space and
hemisphere models can be obtained from one another by inversions with the
Poincaré model (see section 2.2 in Thurston (1997) [27]). The model of
hyperbolic geometry on which we focus in this paper is the hyperboloid model.
Minkowski space ${\mathbf{R}}^{d,1}$ is a $(d+1)$-dimensional pseudo-
Riemannian manifold which is a real finite-dimensional vector space, with
Cartesian coordinates given by ${\bf x}=(x_{0},x_{1},\ldots,x_{d})$. The
hyperboloid model, also known as the Minkowski or Lorentz model, represents
points in this space by the upper sheet $(x_{0}>0)$ of a two-sheeted
hyperboloid embedded in the Minkowski space ${\mathbf{R}}^{d,1}$. It is
equipped with a nondegenerate, symmetric bilinear form, the Minkowski bilinear
form
$[{\bf x},{\mathbf{y}}]=x_{0}y_{0}-x_{1}y_{1}-\ldots-x_{d}y_{d}.$
The above bilinear form is symmetric, but not positive-definite, so it is not
an inner product. It is defined analogously with the Euclidean inner product
for ${\mathbf{R}}^{d+1}$
$({\bf x},{\mathbf{y}})=x_{0}y_{0}+x_{1}y_{1}+\ldots+x_{d}y_{d}.$
The variety $[{\bf x},{\bf x}]=x_{0}^{2}-x_{1}^{2}-\ldots-x_{d}^{2}=R^{2}$,
for ${\bf x}\in{\mathbf{R}}^{d,1}$, using the language of Beltrami (1869) [3]
(see also p. 504 in Vilenkin (1968) [29]), defines a pseudo-sphere of radius
$R$. Points on the pseudo-sphere with zero radius coincide with a cone. Points
on the pseudo-sphere with radius greater than zero lie within this cone, and
points on the pseudo-sphere with purely imaginary radius lie outside the cone.
For a fixed $R\in(0,\infty),$ the $R$-radius hyperboloid model is a maximally
symmetric, simply connected, $d$-dimensional Riemannian manifold with
negative-constant sectional curvature (given by $-1/R^{2}$, see for instance
p. 148 in Lee (1997) [20]), whereas Euclidean space ${\mathbf{R}}^{d}$
equipped with the Pythagorean norm, is a Riemannian manifold with zero
sectional curvature. The hypersphere ${\mathbf{S}}^{d}$, is an example of a
space (submanifold) with positive-constant sectional curvature (given by
$1/R^{2}$).
In our discussion of a fundamental solution for Laplace’s equation in the
hyperboloid model ${\mathbf{H}}_{R}^{d}$, we focus on the positive radius
pseudo-sphere which can be parametrized through subgroup-type coordinates,
i.e. those which correspond to a maximal subgroup chain $O(d,1)\supset\ldots$
(see for instance Pogosyan & Winternitz (2002) [26]). There exist separable
coordinate systems which parametrize points on the positive radius pseudo-
sphere (i.e. such as those which are analogous to parabolic coordinates, etc.)
which cannot be constructed using maximal subgroup chains (we will no longer
discuss these).
Geodesic polar coordinates are coordinates which correspond to the maximal
subgroup chain given by $O(d,1)\supset O(d)\supset\ldots$. What we will refer
to as standard geodesic polar coordinates correspond to the subgroup chain
given by $O(d,1)\supset O(d)\supset O(d-1)\supset\cdots\supset O(2).$ Standard
geodesic polar coordinates (see Olevskiĭ (1950) [23]; Grosche, Pogosyan &
Sissakian (1997) [16]), similar to standard hyperspherical coordinates in
Euclidean space, can be given by
$\left.\begin{array}[]{rcl}x_{0}&=&R\cosh r\\\\[0.56917pt] x_{1}&=&R\sinh
r\cos\theta_{1}\\\\[2.84544pt] x_{2}&=&R\sinh
r\sin\theta_{1}\cos\theta_{2}\\\\[2.84544pt] &\vdots&\\\\[0.56917pt]
x_{d-2}&=&R\sinh r\sin\theta_{1}\cdots\cos\theta_{d-2}\\\\[2.84544pt]
x_{d-1}&=&R\sinh r\sin\theta_{1}\cdots\sin\theta_{d-2}\cos\phi\\\\[2.84544pt]
x_{d}&=&R\sinh
r\sin\theta_{1}\cdots\sin\theta_{d-2}\sin\phi,\end{array}\quad\right\\}$ (1)
where $r\in[0,\infty)$, $\phi\in[0,2\pi)$, and $\theta_{i}\in[0,\pi]$ for
$i\in\\{1,\ldots,d-2\\}$.
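As a quick numerical illustration of (1) (a sketch that is not part of the text; it assumes Python with NumPy), one can check that the parametrization indeed lands on the pseudo-sphere $[{\bf x},{\bf x}]=R^{2}$:

```python
# Illustrative check: the standard geodesic polar parametrization (1)
# satisfies [x, x] = R^2.
import numpy as np

def embed(R, r, thetas, phi):
    """Map (r, theta_1, ..., theta_{d-2}, phi) to Minkowski coordinates."""
    d = len(thetas) + 2
    x = np.empty(d + 1)
    x[0] = R * np.cosh(r)
    s = R * np.sinh(r)
    for i, th in enumerate(thetas):      # x_1, ..., x_{d-2}
        x[i + 1] = s * np.cos(th)
        s *= np.sin(th)
    x[d - 1], x[d] = s * np.cos(phi), s * np.sin(phi)
    return x

def minkowski(x, y):
    return x[0] * y[0] - np.dot(x[1:], y[1:])

rng = np.random.default_rng(0)
R, d, r, phi = 2.5, 5, 1.3, 0.7
x = embed(R, r, rng.uniform(0, np.pi, d - 2), phi)
print(minkowski(x, x), R**2)             # both ~6.25
```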
In order to study fundamental solutions on the hyperboloid, we need to
describe how one computes distances in this space. One may naturally compare
distances on the positive radius pseudo-sphere through analogy with the
$R$-radius hypersphere. Distances on the hypersphere are simply given by arc
lengths, angles between two arbitrary vectors, from the origin, in the ambient
Euclidean space. We consider the $d$-dimensional hypersphere embedded in
${\mathbf{R}}^{d+1}$. Points on the hypersphere can be parametrized using
hyperspherical coordinate systems. Any parametrization of the hypersphere
${\mathbf{S}}^{d}$, must have $({\bf x},{\bf
x})=x_{0}^{2}+\ldots+x_{d}^{2}=R^{2}$, with $R>0$. The distance between two
points on the hypersphere is given by
$d({\bf x},{{\bf x}^{\prime}})=R\gamma=R\cos^{-1}\left(\frac{({\bf x},{{\bf x}^{\prime}})}{({\bf x},{\bf x})^{1/2}({{\bf x}^{\prime}},{{\bf x}^{\prime}})^{1/2}}\right)=R\cos^{-1}\left(\frac{1}{R^{2}}({\bf x},{{\bf x}^{\prime}})\right).$ (2)
This is evident from the fact that the geodesics on ${\mathbf{S}}^{d}$ are
great circles (i.e. intersections of ${\mathbf{S}}^{d}$ with planes through
the origin) with constant speed parametrizations (see p. 82 in Lee (1997)
[20]).
Accordingly, we now look at the geodesic distance function on the
$d$-dimensional positive radius pseudo-sphere ${\mathbf{H}}_{R}^{d}.$
Distances between two points on the positive radius pseudo-sphere are given by
the hyperangle between two arbitrary vectors, from the origin, in the ambient
Minkowski space. Any parametrization of the hyperboloid
${\mathbf{H}}_{R}^{d}$, must have $[{\bf x},{\bf x}]=R^{2}$. The geodesic
distance $d({\bf x},{{\bf x}^{\prime}})\in[0,\infty)$ between two points ${\bf x},{{\bf
x}^{\prime}}\in{\mathbf{H}}_{R}^{d}$ is given by
$d({\bf x},{{\bf x}^{\prime}})=R\cosh^{-1}\left(\frac{[{\bf x},{{\bf x}^{\prime}}]}{[{\bf x},{\bf x}]^{1/2}[{{\bf x}^{\prime}},{{\bf x}^{\prime}}]^{1/2}}\right)=R\cosh^{-1}\left(\frac{1}{R^{2}}[{\bf x},{{\bf x}^{\prime}}]\right),$ (3)
where the inverse hyperbolic cosine with argument $x\in(1,\infty)$ is given by
(see (4.37.19) in Olver et al. (2010) [25])
$\cosh^{-1}x=\log\left(x+\sqrt{x^{2}-1}\right).$ Geodesics on
${\mathbf{H}}_{R}^{d}$ are great hyperbolas (i.e. intersections of
${\mathbf{H}}_{R}^{d}$ with planes through the origin) with constant speed
parametrizations (see p. 84 in Lee (1997) [20]). We also define a global
function $\rho:{\mathbf{H}}^{d}\times{\mathbf{H}}^{d}\to[0,\infty)$ which
represents the projection of the global geodesic distance function (3) on
${\mathbf{H}}_{R}^{d}$ onto the corresponding unit radius hyperboloid
${\mathbf{H}}^{d}$, namely
$\rho({\widehat{\bf x}},{\widehat{\bf x}}^{\prime}):=d({\bf x},{{\bf
x}^{\prime}})/R,$ (4)
where ${\widehat{\bf x}}={\bf x}/R$ and ${\widehat{\bf x}}^{\prime}={{\bf
x}^{\prime}}/R$. Note that when we refer to $d({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})$ below, we specifically mean that projected distance given by
(4).
### 2.2 Laplace’s equation and harmonics on the hyperboloid
Parametrizations of a submanifold embedded in either a Euclidean or Minkowski
space are given in terms of coordinate systems whose coordinates are
curvilinear. These are coordinates based on some transformation that converts
the standard Cartesian coordinates in the ambient space to a coordinate system
with the same number of coordinates as the dimension of the submanifold in
which the coordinate lines are curved.
The Laplace-Beltrami operator (Laplacian) in curvilinear coordinates
${\mathbf{\xi}}=(\xi^{1},\ldots,\xi^{d})$ on a $d$-dimensional Riemannian
manifold (a manifold together with a Riemannian metric $g$) is given by
$\Delta=\sum_{i,j=1}^{d}\frac{1}{\sqrt{|g|}}\frac{\partial}{\partial\xi^{i}}\left(\sqrt{|g|}g^{ij}\frac{\partial}{\partial\xi^{j}}\right),$
(5)
where $|g|=|\det(g_{ij})|,$ the infinitesimal distance is given by
$ds^{2}=\sum_{i,j=1}^{d}g_{ij}d\xi^{i}d\xi^{j},\ $ (6)
and
$\sum_{i=1}^{d}g_{ki}g^{ij}=\delta_{k}^{j},$
where $\delta_{i}^{j}\in\\{0,1\\}$ with $i,j\in{\mathbf{Z}}$, is the Kronecker
delta defined such that
$\delta_{i}^{j}:=\left\\{\begin{array}[]{ll}\displaystyle 1&\qquad\mathrm{if}\
i=j,\\\\[2.84544pt] \displaystyle 0&\qquad\mathrm{if}\ i\neq
j.\end{array}\right.$
For a submanifold, the relation between the metric tensor in the ambient space
and $g_{ij}$ of (5) and (6) is
$g_{ij}({\mathbf{\xi}})=\sum_{k,l=0}^{d}G_{kl}\frac{\partial
x^{k}}{\partial\xi^{i}}\frac{\partial x^{l}}{\partial\xi^{j}}.$
The ambient space for the hyperboloid is Minkowski, and therefore
$G_{ij}=\mathrm{diag}(1,-1,\ldots,-1)$.
The set of all geodesic polar coordinate systems corresponds to the many ways
one can put coordinates on a hyperbolic hypersphere, i.e., the Riemannian
submanifold $U\subset{\mathbf{H}}_{R}^{d}$ defined for a fixed ${{\bf
x}^{\prime}}\in{\mathbf{H}}_{R}^{d}$ such that $d({\bf x},{{\bf
x}^{\prime}})=b=const,$ where $b\in(0,\infty)$. These are coordinate systems
which correspond to subgroup chains starting with $O(d,1)\supset
O(d)\supset\cdots$, with standard geodesic polar coordinates given by (1)
being only one of them. (For a thorough description of these see section X.5
in Vilenkin (1968) [29].) They all share the property that they are described
by $d$ variables: $r\in[0,\infty)$ plus $(d-1)$ angles, each taking values in
$[0,2\pi)$, $[0,\pi]$, $[-\pi/2,\pi/2]$ or $[0,\pi/2]$ (see Izmest’ev
et al. (1999, 2001) [17, 18]).
In any of the geodesic polar coordinate systems, the geodesic distance between
two points on the submanifold is given by (cf. (3))
$d({\bf x},{{\bf x}^{\prime}})=R\cosh^{-1}\bigl{(}\cosh r\cosh
r^{\prime}-\sinh r\sinh r^{\prime}\cos\gamma\bigr{)},$ (7)
where $\gamma$ is the unique separation angle given in each hyperspherical
coordinate system. For instance, the separation angle in standard geodesic
polar coordinates (1) is given by the formula
$\displaystyle\cos\,\gamma=\cos(\phi-\phi^{\prime})\prod_{i=1}^{d-2}\sin\theta_{i}{\sin\theta_{i}}^{\prime}+\sum_{i=1}^{d-2}\cos\theta_{i}{\cos\theta_{i}}^{\prime}\prod_{j=1}^{i-1}\sin\theta_{j}{\sin\theta_{j}}^{\prime}.$
(8)
Corresponding separation angle formulae for any geodesic polar coordinate
system can be computed using (2), (3), and the associated formulae for the
appropriate inner-products.
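The consistency of (7) and (8) with the ambient formula (3) can be verified numerically; the following sketch (assuming Python with NumPy, and not part of the original text) compares $\cosh r\cosh r^{\prime}-\sinh r\sinh r^{\prime}\cos\gamma$ with $[{\bf x},{{\bf x}^{\prime}}]/R^{2}$ for random points in standard geodesic polar coordinates.

```python
# Illustrative check that (7)-(8) reproduce (3) on H_R^d.
import numpy as np

R, d = 1.7, 4
rng = np.random.default_rng(1)

def embed(r, thetas, phi):
    x = np.empty(d + 1)
    x[0] = R * np.cosh(r)
    s = R * np.sinh(r)
    for i, th in enumerate(thetas):
        x[i + 1] = s * np.cos(th)
        s *= np.sin(th)
    x[d - 1], x[d] = s * np.cos(phi), s * np.sin(phi)
    return x

def cos_gamma(th, phi, thp, phip):
    # separation angle, equation (8)
    total = np.cos(phi - phip) * np.prod(np.sin(th) * np.sin(thp))
    for i in range(len(th)):
        total += (np.cos(th[i]) * np.cos(thp[i])
                  * np.prod(np.sin(th[:i]) * np.sin(thp[:i])))
    return total

r, rp = 0.9, 1.4
th, thp = rng.uniform(0, np.pi, (2, d - 2))
phi, phip = rng.uniform(0, 2 * np.pi, 2)
x, xp = embed(r, th, phi), embed(rp, thp, phip)

lhs = np.cosh(r) * np.cosh(rp) - np.sinh(r) * np.sinh(rp) * cos_gamma(th, phi, thp, phip)
rhs = (x[0] * xp[0] - np.dot(x[1:], xp[1:])) / R**2
print(lhs, rhs)                          # should agree
```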
The infinitesimal distance in a geodesic polar coordinate system on this
submanifold is
$ds^{2}=R^{2}(dr^{2}+\sinh^{2}r\ d\gamma^{2}),$ (9)
where an appropriate expression for $\gamma$ in a curvilinear coordinate
system is given. If one combines (1), (5), (8) and (9), then in a particular
geodesic polar coordinate system, Laplace’s equation on ${\mathbf{H}}_{R}^{d}$
is
$\Delta f=\frac{1}{R^{2}}\left[\frac{\partial^{2}f}{\partial r^{2}}+(d-1)\coth
r\frac{\partial f}{\partial
r}+\frac{1}{\sinh^{2}r}\Delta_{{\mathbf{S}}^{d-1}}f\right]=0,$ (10)
where $\Delta_{{\mathbf{S}}^{d-1}}$ is the corresponding Laplace-Beltrami
operator on ${\mathbf{S}}^{d-1}$ with unit radius.
From this point onwards, ${\mathbf{S}}^{d-1}$ will always refer to the
$(d-1)$-dimensional unit hypersphere, which is a compact Riemannian
submanifold with positive constant sectional curvature, embedded in
${\mathbf{R}}^{d}$ and given by the variety $x_{1}^{2}+\ldots+x_{d}^{2}=1$.
Geodesic polar coordinate systems partition ${\mathbf{H}}_{R}^{d}$ into a
family of concentric $(d-1)$-dimensional hyperspheres, each with a radius
$r\in(0,\infty),$ on which all possible hyperspherical coordinate systems for
${\mathbf{S}}^{d-1}$ may be used (see for instance, in Vilenkin (1968) [29]).
One then must also consider the limiting case for $r=0$ to fill out all of
${\mathbf{H}}_{R}^{d}$. In standard geodesic polar coordinates one can compute
the normalized hyperspherical harmonics in this space by solving the Laplace
equation using separation of variables; the resulting general procedure is
given explicitly in Izmest’ev et al. (1999, 2001) [17, 18]. These
angular harmonics are given as general expressions involving trigonometric
functions, Gegenbauer polynomials and Jacobi polynomials.
The harmonics in geodesic polar coordinate systems are given in terms of a
radial solution multiplied by the angular harmonics. The angular harmonics are
eigenfunctions of the Laplace-Beltrami operator on ${\mathbf{S}}^{d-1}$ with
unit radius which satisfy the following eigenvalue problem
$\Delta_{{\mathbf{S}}^{d-1}}Y_{l}^{K}({\widehat{\bf
x}})=-l(l+d-2)Y_{l}^{K}({\widehat{\bf x}}),$ (11)
where ${\widehat{\bf x}}\in{\mathbf{S}}^{d-1}$, $Y_{l}^{K}({\widehat{\bf x}})$
are normalized hyperspherical harmonics, $l\in{\mathbf{N}}_{0}$ is the angular
momentum quantum number, and $K$ stands for the set of $(d-2)$-quantum numbers
identifying degenerate harmonics for each $l$. The degeneracy, as a function
of the dimension $d$, gives the number of linearly independent solutions for
a particular $l$ value. The hyperspherical harmonics are normalized such that
$\int_{{\mathbf{S}}^{d-1}}Y_{l}^{K}({\widehat{\bf
x}})\overline{Y_{l^{\prime}}^{K^{\prime}}({\widehat{\bf
x}})}d\omega=\delta_{l}^{l^{\prime}}\delta_{K}^{K^{\prime}},$
where $d\omega$ is a volume measure on ${\mathbf{S}}^{d-1}$ which is invariant
under the isometry group $SO(d)$ (cf. (13)), and, for $z=x+iy\in{\mathbf{C}}$,
$\overline{z}=x-iy$ denotes complex conjugation. The generalized Kronecker
delta $\delta_{K}^{K^{\prime}}$ (cf. (2.2)) is defined such that it equals 1
if all of the $(d-2)$-quantum numbers identifying degenerate harmonics for
each $l$ coincide, and equals zero otherwise.
Since the angular solutions (hyperspherical harmonics) are well-known (see
Chapter IX in Vilenkin (1968) [29]; Chapter 11 in Erdélyi et al. (1981) [12]),
we will now focus on the radial solutions, which satisfy the following
ordinary differential equation
$\frac{d^{2}u}{dr^{2}}+(d-1)\coth
r\frac{du}{dr}-\frac{l(l+d-2)}{\sinh^{2}r}u=0.$
Four solutions to this ordinary differential equation
$u_{1\pm}^{d,l},u_{2\pm}^{d,l}:(1,\infty)\to{\mathbf{C}}$ are given by
${\displaystyle u_{1\pm}^{d,l}(\cosh
r)=\frac{1}{\sinh^{d/2-1}r}P_{d/2-1}^{\pm(d/2-1+l)}(\cosh r)},$
and
${\displaystyle u_{2\pm}^{d,l}(\cosh
r)=\frac{1}{\sinh^{d/2-1}r}Q_{d/2-1}^{\pm(d/2-1+l)}(\cosh r)},$
where $P_{\nu}^{\mu},Q_{\nu}^{\mu}:(1,\infty)\to{\mathbf{C}}$ are associated
Legendre functions of the first and second kind respectively (see for instance
Chapter 14 in Olver et al. (2010) [25]).
### 2.3 Fundamental solution of Laplace’s equation on the hyperboloid
Due to the fact that the space ${\mathbf{H}}_{R}^{d}$ is homogeneous with
respect to its isometry group, the pseudo-orthogonal group $SO(d,1)$, and
therefore an isotropic manifold, we expect that there exists a fundamental
solution on this space with spherically symmetric dependence. We specifically
expect these solutions to be given in terms of associated Legendre functions
of the second kind with argument given by $\cosh r$. This associated Legendre
function naturally fits our requirements because it is singular at $r=0$ and
vanishes at infinity, whereas the associated Legendre functions of the first
kind, with the same argument, are regular at $r=0$ and singular at infinity.
In computing a fundamental solution of the Laplacian on
${\mathbf{H}}_{R}^{d}$, we know that
$-\Delta{\mathcal{H}}_{R}^{d}({\bf x},{{\bf x}^{\prime}})=\delta_{g}({\bf
x},{{\bf x}^{\prime}}),$ (12)
where $g$ is the Riemannian metric on ${\mathbf{H}}_{R}^{d}$ and
$\delta_{g}({\bf x},{{\bf x}^{\prime}})$ is the Dirac delta function on the
manifold ${\mathbf{H}}_{R}^{d}$. The Dirac delta function is defined for an
open set $U\subset{\mathbf{H}}_{R}^{d}$ with ${\bf x},{{\bf
x}^{\prime}}\in{\mathbf{H}}_{R}^{d}$ such that
$\int_{U}\delta_{g}({\bf x},{{\bf
x}^{\prime}})d\mbox{vol}_{g}=\left\\{\begin{array}[]{ll}\displaystyle
1&\qquad\mathrm{if}\ {{\bf x}^{\prime}}\in U,\\\\[2.84544pt] \displaystyle
0&\qquad\mathrm{if}\ {{\bf x}^{\prime}}\notin U,\end{array}\right.$
where $d\mbox{vol}_{g}$ is a volume measure, invariant under the isometry
group $SO(d,1)$ of the Riemannian manifold ${\mathbf{H}}_{R}^{d}$, given (in
standard geodesic polar coordinates) by
$d\mbox{vol}_{g}=R^{d}\sinh^{d-1}r\,dr\,d\omega:=R^{d}\sinh^{d-1}r\,dr\,\sin^{d-2}\theta_{d-1}\cdots\sin\theta_{2}\,d\theta_{1}\cdots
d\theta_{d-1}.$ (13)
Notice that as $r\to 0^{+}$, $d\mbox{vol}_{g}$ reduces to the Euclidean
measure in spherical coordinates, which is invariant under the Euclidean
motion group $E(d)$. Therefore, in spherical coordinates, we have the following
$\delta_{g}({\bf x},{{\bf
x}^{\prime}})=\frac{\delta(r-r^{\prime})}{R^{d}\sinh^{d-1}r^{\prime}}\frac{\delta(\theta_{1}-\theta_{1}^{\prime})\cdots\delta(\theta_{d-1}-\theta_{d-1}^{\prime})}{\sin\theta_{2}^{\prime}\cdots\sin^{d-2}\theta_{d-1}^{\prime}}.$
(14)
In general since we can add any harmonic function to a fundamental solution
for the Laplacian and still have a fundamental solution, we will use this
freedom to make our fundamental solution as simple as possible. It is
reasonable to expect that there exists a particular spherically symmetric
fundamental solution ${\mathcal{H}}_{R}^{d}({\bf x},{{\bf x}^{\prime}})$ on
the hyperboloid with purely radial dependence on $\rho({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})=d({\bf x},{{\bf x}^{\prime}})/R$ (cf. (4)) and constant angular
dependence (invariant under rotations centered about the origin), due to the
influence of the point-like nature of the Dirac delta function. For a
spherically symmetric solution to the Laplace equation, the corresponding
$\Delta_{{\mathbf{S}}^{d-1}}$ term vanishes since only the $l=0$ term
survives. In other words, we expect there to exist a fundamental solution of
Laplace’s equation such that ${\mathcal{H}}_{R}^{d}({\bf x},{{\bf
x}^{\prime}})=f(\rho)$.
In Cohl & Kalnins (2011) [8], we have proven that on the $R$-radius
hyperboloid ${\mathbf{H}}_{R}^{d}$, a fundamental solution of Laplace’s
equation can be given as follows.
###### Theorem 2.1.
Let $d\in\\{2,3,\ldots\\}.$ Define
${\mathcal{I}}_{d}:(0,\infty)\to{\mathbf{R}}$ as
${\mathcal{I}}_{d}(\rho):=\int_{\rho}^{\infty}\frac{dx}{\sinh^{d-1}x},$
${\bf x},{{\bf x}^{\prime}}\in{\mathbf{H}}_{R}^{d}$, and
${\mathcal{H}}_{R}^{d}:({\mathbf{H}}_{R}^{d}\times{\mathbf{H}}_{R}^{d})\setminus\\{({\bf
x},{\bf x}):{\bf x}\in{\mathbf{H}}_{R}^{d}\\}\to{\mathbf{R}}$ defined such
that
${\mathcal{H}}_{R}^{d}({\bf x},{\bf
x}^{\prime}):={\displaystyle\frac{\Gamma\left(d/2\right)}{2\pi^{d/2}R^{d-2}}{\mathcal{I}}_{d}(\rho)},$
where $\rho:=\cosh^{-1}\left([{\widehat{\bf x}},{\widehat{\bf
x}}^{\prime}]\right)$ is the geodesic distance between ${\widehat{\bf x}}$ and
${\widehat{\bf x}}^{\prime}$ on the pseudo-sphere of unit radius
${\mathbf{H}}^{d}$, with ${\widehat{\bf x}}={\bf x}/R,$ ${\widehat{\bf
x}}^{\prime}={{\bf x}^{\prime}}/R$, then ${\mathcal{H}}_{R}^{d}$ is a
fundamental solution for $-\Delta$ where $\Delta$ is the Laplace-Beltrami
operator on ${\mathbf{H}}_{R}^{d}$. Moreover,
$\displaystyle{\mathcal{I}}_{d}(\rho)=\left\\{\begin{array}[]{ll}\displaystyle(-1)^{d/2-1}\frac{(d-3)!!}{(d-2)!!}\Biggl{[}\log\coth\frac{\rho}{2}+\cosh\rho\sum_{k=1}^{d/2-1}\frac{(2k-2)!!(-1)^{k}}{(2k-1)!!\sinh^{2k}\rho}\Biggr{]}&\mathrm{if}\
d\ \mathrm{even},\\\\[17.07182pt]
\left\\{\begin{array}[]{l}\displaystyle(-1)^{(d-1)/2}\Biggl{[}\frac{(d-3)!!}{(d-2)!!}\\\\[11.38092pt]
\displaystyle\hskip
34.14322pt+\left(\frac{d-3}{2}\right)!\sum_{k=1}^{(d-1)/2}\frac{(-1)^{k}\coth^{2k-1}\rho}{(2k-1)(k-1)!((d-2k-1)/2)!}\Biggr{]},\\\\[12.80365pt]
\mathrm{or}\\\\[0.0pt]
\displaystyle(-1)^{(d-1)/2}\frac{(d-3)!!}{(d-2)!!}\left[1+\cosh\rho\sum_{k=1}^{(d-1)/2}\frac{(2k-3)!!(-1)^{k}}{(2k-2)!!\sinh^{2k-1}\rho}\right],\end{array}\right\\}&\mathrm{if}\
d\ \mathrm{odd}.\end{array}\right.$
$\displaystyle=\frac{1}{(d-1)\cosh^{d-1}\rho}\,{}_{2}F_{1}\left(\frac{d-1}{2},\frac{d}{2};\frac{d+1}{2};\frac{1}{\cosh^{2}\rho}\right),$
$\displaystyle=\frac{1}{(d-1)\cosh\rho\,\sinh^{d-2}\rho}\,{}_{2}F_{1}\left(\frac{1}{2},1;\frac{d+1}{2};\frac{1}{\cosh^{2}\rho}\right),$
$\displaystyle=\frac{e^{-i\pi(d/2-1)}}{2^{d/2-1}\Gamma\left(d/2\right)\sinh^{d/2-1}\rho}\,Q_{d/2-1}^{d/2-1}(\cosh\rho),$
where $!!$ is the double factorial, ${}_{2}F_{1}$ is the Gauss hypergeometric
function, and $Q_{\nu}^{\mu}$ is the associated Legendre function of the
second kind.
For a proof of this theorem, see Cohl & Kalnins (2011) [8].
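A quick numerical sanity check of Theorem 2.1 (an illustrative sketch assuming Python with mpmath; it is not part of the proof) compares the defining integral ${\mathcal{I}}_{d}(\rho)$ with the Gauss hypergeometric representation quoted in the theorem.

```python
# Illustrative check of Theorem 2.1: the integral I_d(rho) versus the
# hypergeometric closed form.
from mpmath import mp, quad, sinh, cosh, hyp2f1, inf

mp.dps = 30

def I_integral(d, rho):
    return quad(lambda x: 1 / sinh(x)**(d - 1), [rho, inf])

def I_hyp(d, rho):
    return hyp2f1((d - 1) / 2, d / 2, (d + 1) / 2,
                  1 / cosh(rho)**2) / ((d - 1) * cosh(rho)**(d - 1))

rho = mp.mpf("0.8")
for d in (2, 3, 4, 5):
    print(d, I_integral(d, rho), I_hyp(d, rho))
# For d = 3 both reduce to coth(rho) - 1, in agreement with the odd-d
# finite summation expression.
```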
## 3 Fourier expansions for a Green’s function on the hyperboloid
Now we compute the Fourier expansions for a fundamental solution of the
Laplace-Beltrami operator on ${\mathbf{H}}_{R}^{d}$.
### 3.1 Fourier expansion for a fundamental solution of the Laplacian on
${\mathbf{H}}_{R}^{2}$
The generating function for Chebyshev polynomials of the first kind (Fox &
Parker (1968) [14], p. 51) is given as
$\frac{1-z^{2}}{1+z^{2}-2xz}=\sum_{n=0}^{\infty}\epsilon_{n}T_{n}(x)z^{n},$
(17)
where $|z|<1,$ $T_{n}:[-1,1]\to{\mathbf{R}}$ is the Chebyshev polynomial of
the first kind defined as $T_{l}(x):=\cos(l\cos^{-1}x),$ and
$\epsilon_{n}:=2-\delta_{n}^{0}$ is the Neumann factor (see p. 744 in Morse &
Feshbach (1953) [22]), commonly occurring in Fourier cosine series. If we
substitute $z=e^{-\eta}$ with $\eta\in(0,\infty)$ in (17), then we obtain
$\frac{\sinh\eta}{\cosh\eta-\cos\psi}=\sum_{n=0}^{\infty}\epsilon_{n}\cos(n\psi)e^{-n\eta}.$
(18)
Integrating both sides of (18) with respect to $\eta$, we obtain the following
formula (cf. Magnus, Oberhettinger & Soni (1966) [21], p. 259)
$\log\left(1+z^{2}-2z\cos\psi\right)=-2\sum_{n=1}^{\infty}\frac{\cos(n\psi)}{n}z^{n}.$
(19)
In Euclidean space ${\mathbf{R}}^{d}$, a Green’s function for Laplace’s
equation (fundamental solution for the Laplacian) is well-known and is given
in the following theorem (see Folland (1976) [13]; p. 94, Gilbarg & Trudinger
(1983) [15]; p. 17, Bers et al. (1964) [4], p. 211).
###### Theorem 3.1.
Let $d\in{\mathbf{N}}$. Define
${\mathcal{G}}^{d}({\bf x},{\bf
x}^{\prime})=\left\\{\begin{array}[]{ll}\displaystyle\frac{\Gamma(d/2)}{2\pi^{d/2}(d-2)}\|{\bf
x}-{\bf x}^{\prime}\|^{2-d}&\qquad\mathrm{if}\ d=1\mathrm{\ or\ }d\geq
3,\\\\[10.0pt] \displaystyle\frac{1}{2\pi}\log\|{\bf x}-{\bf
x}^{\prime}\|^{-1}&\qquad\mathrm{if}\ d=2,\end{array}\right.$
then ${\mathcal{G}}^{d}$ is a fundamental solution for $-\Delta$ in Euclidean
space ${\mathbf{R}}^{d}$, where $\Delta$ is the Laplace operator in
${\mathbf{R}}^{d}$.
Therefore if we take $z=r_{<}/r_{>}$ in (19), where
$r_{\lessgtr}:={\min\atop\max}\\{r,r^{\prime}\\}$ with
$r,r^{\prime}\in[0,\infty),$ then using polar coordinates, we can derive the
Fourier expansion for a fundamental solution of the Laplacian in Euclidean
space for $d=2$ (cf. Theorem 3.1), namely
${\mathfrak{g}}^{2}:=\log\|{\bf x}-{{\bf x}^{\prime}}\|=\log
r_{>}-\sum_{n=1}^{\infty}\frac{\cos(n(\phi-\phi^{\prime}))}{n}\biggl{(}\frac{r_{<}}{r_{>}}\biggr{)}^{n},$
(20)
where ${\mathfrak{g}}^{2}=-2\pi{\mathcal{G}}^{2}$ (cf. Theorem 3.1). On the
hyperboloid for $d=2$ we have a fundamental solution of Laplace’s equation
given by
${\mathfrak{h}}^{2}:=\log\coth\frac{1}{2}d({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})=\frac{1}{2}\log\frac{\cosh d({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})+1}{\cosh d({\widehat{\bf x}},{\widehat{\bf x}}^{\prime})-1},$
where ${\mathfrak{h}}^{2}=2\pi{\mathcal{H}}_{R}^{2}$ (cf. Theorem 2.1 and (32)
below). Note that because of the $R^{d-2}$ dependence of a fundamental
solution of Laplace’s equation for $d=2$ in Theorem 2.1, there is no strict
dependence on $R$ for ${\mathcal{H}}_{R}^{2}$ or ${\mathfrak{h}}^{2}$, but we
will retain the notation nonetheless. In standard geodesic polar coordinates
on ${\mathbf{H}}_{R}^{2}$ (cf. (1)), using (7) and
$\cos\gamma=\cos(\phi-\phi^{\prime})$ (cf. (8)) produces
$\cosh d({\widehat{\bf x}},{\widehat{\bf x}}^{\prime})=\cosh r\cosh
r^{\prime}-\sinh r\sinh r^{\prime}\cos(\phi-\phi^{\prime}),$
therefore
${\mathfrak{h}}^{2}=\frac{1}{2}\log\frac{\cosh r\cosh r^{\prime}+1-\sinh
r\sinh r^{\prime}\cos(\phi-\phi^{\prime})}{\cosh r\cosh r^{\prime}-1-\sinh
r\sinh r^{\prime}\cos(\phi-\phi^{\prime})}.$
Replacing $\psi=\phi-\phi^{\prime}$ and rearranging the logarithms yields
${\mathfrak{h}}^{2}=\frac{1}{2}\log\frac{\cosh r\cosh r^{\prime}+1}{\cosh
r\cosh
r^{\prime}-1}+\frac{1}{2}\log\left(1-z_{+}\cos\psi\right)-\frac{1}{2}\log\left(1-z_{-}\cos\psi\right),$
where
$z_{\pm}:=\frac{\sinh r\sinh r^{\prime}}{\cosh r\cosh r^{\prime}\pm 1}.$
Note that $z_{\pm}\in(0,1)$ for $r,r^{\prime}\in(0,\infty)$. We have the
following MacLaurin series
$\log(1-x)=-\sum_{n=1}^{\infty}\frac{x^{n}}{n},$
where $x\in[-1,1)$. Therefore away from the singularity at ${\bf x}={{\bf
x}^{\prime}}$ we have
$\lambda_{\pm}:=\log\left(1-z_{\pm}\cos\psi\right)=-\sum_{k=1}^{\infty}\frac{z_{\pm}^{k}}{k}\cos^{k}\psi.$
(21)
We can expand the powers of cosine using the following trigonometric identity
$\cos^{k}\psi=\frac{1}{2^{k}}\sum_{n=0}^{k}\left(\begin{array}[]{c}\displaystyle\\!\\!k\\\\[1.0pt]
\displaystyle\\!\\!n\end{array}\\!\\!\right)\cos[(2n-k)\psi],$
which is the standard expansion for powers using Chebyshev polynomials (see
for instance p. 52 in Fox & Parker (1968) [14]). Inserting this expression in
(21), we obtain the following double-summation expression
$\lambda_{\pm}=-\sum_{k=1}^{\infty}\sum_{n=0}^{k}\frac{z_{\pm}^{k}}{2^{k}k}\left(\begin{array}[]{c}\displaystyle\\!\\!k\\\\[1.0pt]
\displaystyle\\!\\!n\end{array}\\!\\!\right)\cos[(2n-k)\psi].$ (22)
Now we perform a double-index replacement in (22). We break this sum into two
separate sums, one for $k\leq 2n$ and another for $k\geq 2n$. There is an
overlap when both sums satisfy the equality, and in that situation we must
halve after we sum over both sums. If $k\leq 2n$, make the substitution
$k^{\prime}=k-n$ and $n^{\prime}=2n-k$. It follows that
$k=2k^{\prime}+n^{\prime}$ and $n=n^{\prime}+k^{\prime}$, therefore
$\left(\begin{array}[]{c}\displaystyle\\!\\!k\\\\[1.0pt]
\displaystyle\\!\\!n\end{array}\\!\\!\right)=\left(\begin{array}[]{c}\displaystyle\\!\\!2k^{\prime}+n^{\prime}\\\\[1.0pt]
\displaystyle\\!\\!n^{\prime}+k^{\prime}\end{array}\\!\\!\right)=\left(\begin{array}[]{c}\displaystyle\\!\\!2k^{\prime}+n^{\prime}\\\\[1.0pt]
\displaystyle\\!\\!k^{\prime}\end{array}\\!\\!\right).$
If $k\geq 2n$ make the substitution $k^{\prime}=n$ and $n^{\prime}=k-2n$. Then
$k=2k^{\prime}+n^{\prime}$ and $n=k^{\prime}$, therefore
$\left(\begin{array}[]{c}\displaystyle\\!\\!k\\\\[1.0pt]
\displaystyle\\!\\!n\end{array}\\!\\!\right)=\left(\begin{array}[]{c}\displaystyle\\!\\!2k^{\prime}+n^{\prime}\\\\[1.0pt]
\displaystyle\\!\\!k^{\prime}\end{array}\\!\\!\right)=\left(\begin{array}[]{c}\displaystyle\\!\\!2k^{\prime}+n^{\prime}\\\\[1.0pt]
\displaystyle\\!\\!k^{\prime}+n^{\prime}\end{array}\\!\\!\right),$
where the equalities of the binomial coefficients are confirmed using the
following identity
$\left(\begin{array}[]{c}\displaystyle\\!\\!n\\\\[1.0pt]
\displaystyle\\!\\!k\end{array}\\!\\!\right)=\left(\begin{array}[]{c}\displaystyle\\!\\!n\\\\[1.0pt]
\displaystyle\\!\\!n-k\end{array}\\!\\!\right),$
where $n,k\in{\mathbf{Z}}$, except where $k<0$ or $n-k<0$. To take into
account the double-counting which occurs when $k=2n$ (which occurs when
$n^{\prime}=0$), we introduce a factor of $\epsilon_{n^{\prime}}/2$ into the
expression (and relabel $k^{\prime}\mapsto k$ and $n^{\prime}\mapsto n$). We
are left with
$\lambda_{\pm}=-\frac{1}{2}\sum_{k=1}^{\infty}\frac{z_{\pm}^{2k}}{2^{k}k}\left(\begin{array}[]{c}\displaystyle\\!\\!2k\\\\[1.0pt]
\displaystyle\\!\\!k\end{array}\\!\\!\right)-2\sum_{n=1}^{\infty}\cos(n\psi)\sum_{k=0}^{\infty}\frac{z_{\pm}^{2k+n}}{2^{2k+n}(2k+n)}\left(\begin{array}[]{c}\displaystyle\\!\\!2k+n\\\\[1.0pt]
\displaystyle\\!\\!k\end{array}\\!\\!\right).$ (23)
If we substitute
$\left(\begin{array}[]{c}\displaystyle\\!\\!2k\\\\[1.0pt]
\displaystyle\\!\\!k\end{array}\\!\\!\right)=\frac{2^{2k}\left(\frac{1}{2}\right)_{k}}{k!}$
into the first term of (23), then we obtain
$I_{\pm}:=-\frac{1}{2}\sum_{k=1}^{\infty}\frac{\left(\frac{1}{2}\right)_{k}z_{\pm}^{2k}}{k!k}=-\int_{0}^{z_{\pm}}\frac{dz_{\pm}^{\prime}}{z_{\pm}^{\prime}}\sum_{k=1}^{\infty}\frac{\left(\frac{1}{2}\right)_{k}{z_{\pm}^{\prime}}^{2k}}{k!}=-\int_{0}^{z_{\pm}}\frac{d{z_{\pm}^{\prime}}}{z_{\pm}^{\prime}}\left[\frac{1}{\sqrt{1-{z_{\pm}^{\prime}}^{2}}}-1\right].$
We are left with
$I_{\pm}=-\log 2+\log\left(1+\sqrt{1-z_{\pm}^{2}}\right)=-\log
2+\log\left(\frac{(\cosh r_{>}\pm 1)(\cosh r_{<}+1)}{\cosh r\cosh
r^{\prime}\pm 1}\right).$
If we substitute
$\displaystyle\left(\begin{array}[]{c}\displaystyle\\!\\!2k+n\\\\[1.0pt]
\displaystyle\\!\\!k\end{array}\\!\\!\right)=\frac{\displaystyle
2^{2k}\left(\frac{n+1}{2}\right)_{k}\left(\frac{n+2}{2}\right)_{k}}{k!(n+1)_{k}},$
into the second term of (23), then the Fourier coefficient reduces to
$\displaystyle J_{\pm}$ $\displaystyle:=$
$\displaystyle\frac{1}{2^{n-1}}\sum_{k=0}^{\infty}\frac{\displaystyle\left(\frac{n+1}{2}\right)_{k}\left(\frac{n+2}{2}\right)_{k}}{\displaystyle
k!(n+1)_{k}}\frac{z_{\pm}^{2k+n}}{2k+n}$ $\displaystyle=$
$\displaystyle\frac{1}{2^{n-1}}\int_{0}^{z_{\pm}}dz_{\pm}^{\prime}{z_{\pm}^{\prime}}^{n-1}\sum_{k=0}^{\infty}\frac{\displaystyle\left(\frac{n+1}{2}\right)_{k}\left(\frac{n+2}{2}\right)_{k}}{k!(n+1)_{k}}{z_{\pm}^{\prime}}^{2k}.$
The series in the integrand is a Gauss hypergeometric function which can be
given as
$\sum_{k=0}^{\infty}\frac{\displaystyle\left(\frac{n+1}{2}\right)_{k}\left(\frac{n+2}{2}\right)_{k}}{k!(n+1)_{k}}z^{2k}=\frac{2^{n}n!}{z^{n}\sqrt{1-z^{2}}}P_{0}^{-n}\left(\sqrt{1-z^{2}}\right),$
where $P_{0}^{-n}$ is an associated Legendre function of the first kind with
vanishing degree and order given by $-n$. This is a consequence of
${}_{2}F_{1}\left(a,b;a+b-\frac{1}{2};x\right)=2^{a+b-3/2}\Gamma\left(a+b-\frac{1}{2}\right)\frac{x^{(3-2a-2b)/4}}{\sqrt{1-x}}P_{b-a-1/2}^{3/2-a-b}\left(\sqrt{1-x}\right),$
where $x\in(0,1)$ (see for instance Magnus, Oberhettinger & Soni (1966) [21],
p. 53), and the Legendre function is evaluated using (cf. (8.1.2) in
Abramowitz & Stegun (1972) [1])
$P_{0}^{-n}(x)=\frac{1}{n!}\left(\frac{1-x}{1+x}\right)^{n/2},$
where $n\in{\mathbf{N}}_{0}$. Therefore the Fourier coefficient is given by
$J_{\pm}=2\int_{\sqrt{1-z_{\pm}^{2}}}^{1}\frac{dz_{\pm}^{\prime}}{1-{z_{\pm}^{\prime}}^{2}}\left(\frac{1-z_{\pm}^{\prime}}{1+z_{\pm}^{\prime}}\right)^{n/2}=\frac{2}{n}\left[\frac{1-\sqrt{1-z_{\pm}^{2}}}{1+\sqrt{1-z_{\pm}^{2}}}\right]^{n/2}.$
Finally we have
$\displaystyle\lambda_{\pm}=-\log 2+\log\left(\frac{(\cosh r_{>}\pm 1)(\cosh
r_{<}+1)}{\cosh r\cosh r^{\prime}\pm 1}\right)$ $\displaystyle\hskip
36.98866pt-2\sum_{n=1}^{\infty}\frac{\cos(n\psi)}{n}\left[\frac{(\cosh
r_{>}\mp 1)(\cosh r_{<}-1)}{(\cosh r_{>}\pm 1)(\cosh r_{<}+1)}\right]^{n/2},$
and the Fourier expansion for a fundamental solution of Laplace’s equation for
the $d=2$ hyperboloid is given by
$\displaystyle{\mathfrak{h}}^{2}=\frac{1}{2}\log\frac{\cosh r_{>}+1}{\cosh
r_{>}-1}$
$\displaystyle{}\displaystyle+\sum_{n=1}^{\infty}\frac{\cos(n(\phi-\phi^{\prime}))}{n}\left[\frac{\cosh
r_{<}-1}{\cosh r_{<}+1}\right]^{n/2}\left\\{\left[\frac{\cosh r_{>}+1}{\cosh
r_{>}-1}\right]^{n/2}-\left[\frac{\cosh r_{>}-1}{\cosh
r_{>}+1}\right]^{n/2}\right\\}.$ (24)
This exactly matches up to the Euclidean Fourier expansion
${\mathfrak{g}}^{2}$ (20) as $r,r^{\prime}\to 0^{+}$.
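The agreement can also be tested directly; the following sketch (assuming Python with NumPy, not part of the original text) sums the truncated series (24) and compares it with the closed form ${\mathfrak{h}}^{2}=\log\coth\frac{1}{2}d({\widehat{\bf x}},{\widehat{\bf x}}^{\prime})$.

```python
# Illustrative numerical check of the Fourier expansion (24) on H_R^2.
import numpy as np

def h2_closed(r, rp, dphi):
    cosh_d = np.cosh(r) * np.cosh(rp) - np.sinh(r) * np.sinh(rp) * np.cos(dphi)
    return 0.5 * np.log((cosh_d + 1) / (cosh_d - 1))

def h2_series(r, rp, dphi, nmax=200):
    r_less, r_great = min(r, rp), max(r, rp)
    a = (np.cosh(r_less) - 1) / (np.cosh(r_less) + 1)
    b = (np.cosh(r_great) + 1) / (np.cosh(r_great) - 1)
    total = 0.5 * np.log(b)
    for n in range(1, nmax + 1):
        total += np.cos(n * dphi) / n * a**(n / 2) * (b**(n / 2) - b**(-n / 2))
    return total

print(h2_closed(0.7, 1.1, 2.0), h2_series(0.7, 1.1, 2.0))   # should agree
```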
### 3.2 Fourier expansion for a fundamental solution of the Laplacian on
${\mathbf{H}}_{R}^{3}$
The Fourier expansion for a fundamental solution of the Laplacian in three-
dimensional Euclidean space (here given in standard spherical coordinates
${\bf x}=(r\sin\theta\cos\phi,r\sin\theta\sin\phi,r\cos\theta)$) is given by
(cf. Theorem 3.1, and see (1.3) in Cohl et al. (2001) [9])
$\displaystyle{\mathcal{G}}^{3}\simeq{\mathfrak{g}}^{3}:=\frac{1}{\|{\bf
x}-{{\bf x}^{\prime}}\|}$
$\displaystyle=\frac{1}{\pi\sqrt{rr^{\prime}\sin\theta\sin\theta^{\prime}}}\sum_{m=-\infty}^{\infty}e^{im(\phi-\phi^{\prime})}Q_{m-1/2}\left(\frac{r^{2}+{r^{\prime}}^{2}-2rr^{\prime}\cos\theta\cos\theta^{\prime}}{2rr^{\prime}\sin\theta\sin\theta^{\prime}}\right).$
These associated Legendre functions, toroidal harmonics, are given in terms of
complete elliptic integrals of the first and second kind (cf. (22–26) in Cohl
& Tohline (1999) [10]). Since $Q_{-1/2}(z)$ is given through (cf. (8.13.3) in
Abramowitz & Stegun (1972) [1])
$Q_{-1/2}(z)=\sqrt{\frac{2}{z+1}}K\left(\sqrt{\frac{2}{z+1}}\right),$
the $m=0$ component for ${\mathfrak{g}}^{3}$ is given by
$\left.{\mathfrak{g}}^{3}\right|_{m=0}=\frac{2}{\pi\sqrt{r^{2}+{r^{\prime}}^{2}-2rr^{\prime}\cos(\theta+\theta^{\prime})}}K\left(\sqrt{\frac{4rr^{\prime}\sin\theta\sin\theta^{\prime}}{r^{2}+{r^{\prime}}^{2}-2rr^{\prime}\cos(\theta+\theta^{\prime})}}\right).$
(25)
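Expression (25) is easy to verify numerically; the following sketch (assuming Python with NumPy and SciPy, not part of the original text) compares it with direct integration of $1/\|{\bf x}-{{\bf x}^{\prime}}\|$ over the azimuthal separation, noting that SciPy's ellipk takes the parameter $m=k^{2}$.

```python
# Illustrative check of the m = 0 azimuthal Fourier component (25) in R^3.
import numpy as np
from scipy.special import ellipk
from scipy.integrate import quad

r, rp, th, thp = 1.0, 1.6, 0.8, 1.9

def inv_dist(psi):
    cosg = np.cos(th) * np.cos(thp) + np.sin(th) * np.sin(thp) * np.cos(psi)
    return 1.0 / np.sqrt(r**2 + rp**2 - 2 * r * rp * cosg)

direct = quad(inv_dist, 0, np.pi)[0] / np.pi          # m = 0 Fourier coefficient

denom = r**2 + rp**2 - 2 * r * rp * np.cos(th + thp)
m = 4 * r * rp * np.sin(th) * np.sin(thp) / denom     # parameter m = k^2
closed = 2 * ellipk(m) / (np.pi * np.sqrt(denom))     # right-hand side of (25)
print(direct, closed)                                 # should agree
```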
A fundamental solution of the Laplacian in standard geodesic polar coordinates
on ${\mathbf{H}}_{R}^{3}$ is given by (cf. Theorem 2.1 and (32) below).
$\displaystyle\displaystyle{\mathfrak{h}}^{3}({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime}):=\coth d({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})-1=\frac{\cosh d({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})}{\sqrt{\cosh^{2}d({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})-1}}-1$ $\displaystyle\hskip 93.89418pt=\frac{\cosh r\cosh
r^{\prime}-\sinh r\sinh r^{\prime}\cos\gamma}{\sqrt{(\cosh r\cosh
r^{\prime}-\sinh r\sinh r^{\prime}\cos\gamma)^{2}-1}}-1,$
where ${\mathfrak{h}}^{3}=4\pi R{\mathcal{H}}_{R}^{3},$ and ${\bf x},{{\bf
x}^{\prime}}\in{\mathbf{H}}_{R}^{3}$, such that ${\widehat{\bf x}}={\bf x}/R$
and ${\widehat{\bf x}}^{\prime}={{\bf x}^{\prime}}/R$. In standard geodesic
polar coordinates (cf. (8)) we have
$\cos\gamma=\cos\theta\cos\theta^{\prime}+\sin\theta\sin\theta^{\prime}\cos(\phi-\phi^{\prime}).$
(26)
Replacing $\psi=\phi-\phi^{\prime}$ and defining
$A:=\cosh r\cosh r^{\prime}-\sinh r\sinh
r^{\prime}\cos\theta\cos\theta^{\prime},$
and
$B:=\sinh r\sinh r^{\prime}\sin\theta\sin\theta^{\prime},$
we have in the standard manner, the Fourier coefficients ${\sf
H}_{m}^{1/2}:[0,\infty)^{2}\times[0,\pi]^{2}\to{\mathbf{R}}$ of the expansion
(cf. (33) below)
${\mathfrak{h}}^{3}({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})=\sum_{m=0}^{\infty}\cos(m(\phi-\phi^{\prime})){\sf
H}_{m}^{1/2}(r,r^{\prime},\theta,\theta^{\prime}),$ (27)
defined by
${\displaystyle{\sf
H}_{m}^{1/2}(r,r^{\prime},\theta,\theta^{\prime}):=-\delta_{m}^{0}+\frac{\epsilon_{m}}{\pi}}{\displaystyle\int_{0}^{\pi}\frac{\left(A/B-\cos\psi\right)\cos(m\psi)d\psi}{\sqrt{\left(\cos\psi-\frac{A+1}{B}\right)\left(\cos\psi-\frac{A-1}{B}\right)}}}.$
(28)
If we make the substitution $x=\cos\psi$, this integral can be converted to
${\displaystyle{\sf
H}_{m}^{1/2}(r,r^{\prime},\theta,\theta^{\prime})=-\delta_{m}^{0}+\frac{\epsilon_{m}}{\pi}}{\displaystyle\int_{-1}^{1}\frac{\left(A/B-x\right)T_{m}(x)dx}{\sqrt{(1-x)(1+x)\left(x-\frac{A+1}{B}\right)\left(x-\frac{A-1}{B}\right)}}},$
(29)
where $T_{m}$ is the Chebyshev polynomial of the first kind. Since $T_{m}(x)$
is expressible as a finite sum over powers of $x$, (29) involves the square
root of a quartic multiplied by a rational function of $x$, which by
definition is an elliptic integral (see for instance Byrd & Friedman (1954)
[5]). We can directly compute (29) using Byrd & Friedman (1954) ([5],
(253.11)). If we define
$d:=-1,\ y:=-1,\ c:=1,\ b:=\frac{A-1}{B},\ a:=\frac{A+1}{B},$ (30)
(clearly $d\leq y<c<b<a$), then we can express the Fourier coefficient (29),
as a linear combination of integrals, each of the form (see Byrd & Friedman
(1954) [5], (253.11))
$\int_{y}^{c}\frac{x^{p}dx}{\sqrt{(a-x)(b-x)(c-x)(x-d)}}=c^{p}g\int_{0}^{u_{1}}\left[\frac{1-\alpha_{1}^{2}\mathrm{sn}^{2}u}{1-\alpha^{2}\mathrm{sn}^{2}u}\right]^{p}du,$
(31)
where $p\in\\{0,\ldots,m+1\\}$. In this expression $\mathrm{sn}$ is a Jacobi
elliptic function (see for instance Chapter 22 in Olver et al. (2010) [25]).
Byrd & Friedman (1954) [5] give a procedure for computing (31) for all
$m\in{\mathbf{N}}_{0}$. These integrals will be given in terms of complete
elliptic integrals of the first three kinds (see the discussion in Byrd &
Friedman (1954) [5], p. 201, 204, and p. 205). To this effect, we have the
following definitions from (253.11) in Byrd & Friedman (1954) [5], namely
$\alpha^{2}=\frac{c-d}{b-d}<1,$ $\alpha_{1}^{2}=\frac{b(c-d)}{c(b-d)},$
$g=\frac{2}{\sqrt{(a-c)(b-d)}},$
$\varphi=\sin^{-1}\sqrt{\frac{(b-d)(c-y)}{(c-d)(b-y)}},$ $u_{1}=F(\varphi,k),$
$k^{2}=\frac{(a-b)(c-d)}{(a-c)(b-d)},$
with $k^{2}<\alpha^{2}$. For our specific choices in (30), these reduce to
$\alpha^{2}=\frac{2B}{A+B-1},$ $\alpha_{1}^{2}=\frac{2(A-1)}{A+B-1},$
$g=\frac{2B}{\sqrt{(A+B-1)(A-B+1)}},$ $k^{2}=\frac{4B}{(A+B-1)(A-B+1)},$
$\varphi=\frac{\pi}{2},$
and
$u_{1}=K(k).$
Specific cases include
$\int_{y}^{c}\frac{dx}{\sqrt{(a-x)(b-x)(c-x)(x-d)}}=gK(k)$
(Byrd & Friedman (1954) [5], (340.00)) and
$\int_{y}^{c}\frac{xdx}{\sqrt{(a-x)(b-x)(c-x)(x-d)}}=\frac{cg}{\alpha^{2}}\left[\alpha_{1}^{2}K(k)+(\alpha^{2}-\alpha_{1}^{2})\Pi(\alpha,k)\right]$
(Byrd & Friedman (1954) [5], (340.01)).
In general we have
$\int_{y}^{c}\frac{x^{p}dx}{\sqrt{(a-x)(b-x)(c-x)(x-d)}}=\frac{c^{p}g\alpha_{1}^{2p}p!}{\alpha^{2p}}\sum_{j=0}^{p}\frac{(\alpha^{2}-\alpha_{1}^{2})^{j}}{\alpha_{1}^{2j}j!(p-j)!}V_{j}$
(Byrd & Friedman (1954) [5], (340.04)), where
$V_{0}=K(k),$ $V_{1}=\Pi(\alpha,k),$
$V_{2}=\frac{1}{2(\alpha^{2}-1)(k^{2}-\alpha^{2})}\left[(k^{2}-\alpha^{2})K(k)+\alpha^{2}E(k)+(2\alpha^{2}k^{2}+2\alpha^{2}-\alpha^{4}-3k^{2})\Pi(\alpha,k)\right],$
and larger values of $V_{j}$ can be computed using the following recurrence
relation
$\displaystyle V_{m+3}=\frac{1}{2(m+2)(1-\alpha^{2})(k^{2}-\alpha^{2})}$
$\displaystyle\hskip
51.21504pt\times\bigl{[}(2m+1)k^{2}V_{m}+2(m+1)(\alpha^{2}k^{2}+\alpha^{2}-3k^{2})V_{m+1}$
$\displaystyle\hskip
102.43008pt+(2m+3)(\alpha^{4}-2\alpha^{2}k^{2}-2\alpha^{2}+3k^{2})V_{m+2}\bigr{]}$
(see Byrd & Friedman (1954) [5], (336.00–03)). For instance,
$\eqalign{\int_{y}^{c}\frac{x^{2}dx}{\sqrt{(a-x)(b-x)(c-x)(x-d)}}\cr\hskip
28.45274pt=\frac{c^{2}g}{\alpha^{4}}\left[\alpha_{1}^{4}K(k)+2\alpha_{1}^{2}(\alpha^{2}-\alpha_{1}^{2})\Pi(\alpha,k)+(\alpha^{2}-\alpha_{1}^{2})^{2}V_{2}\right]}$
(see Byrd & Friedman (1954) [5], (340.02)).
In general, the Fourier coefficients for ${\mathfrak{h}}^{3}$ will be given in
terms of complete elliptic integrals of the first three kinds. Let us directly
compute the $m=0$ component, in which (29) reduces to
${\displaystyle{\sf
H}_{0}^{1/2}(r,r^{\prime},\theta,\theta^{\prime})=-1+\frac{1}{\pi}}{\displaystyle\int_{-1}^{1}\frac{\left(A/B-x\right)dx}{\sqrt{(1-x)(1+x)\left(x-\frac{A+1}{B}\right)\left(x-\frac{A-1}{B}\right)}}}.$
Therefore using the above formulae, we have
$\displaystyle{\mathfrak{h}}^{3}|_{m=0}={\sf
H}_{0}^{1/2}(r,r^{\prime},\theta,\theta^{\prime})$
$\displaystyle=-1+\frac{2K(k)}{\pi\sqrt{(A-B+1)(A+B-1)}}+\frac{2(A-B-1)\Pi(\alpha,k)}{\pi\sqrt{(A-B+1)(A+B-1)}}$
$\displaystyle=-1+\frac{2}{\pi}\left\\{K(k)+\left[\cosh r\cosh
r^{\prime}-\sinh r\sinh
r^{\prime}\cos(\theta-\theta^{\prime})-1\right]\Pi(\alpha,k)\right\\}$
$\displaystyle\hskip 73.97733pt\times\left[\cosh r\cosh r^{\prime}-\sinh
r\sinh r^{\prime}\cos(\theta-\theta^{\prime})+1\right]^{-1/2}$
$\displaystyle\hskip 73.97733pt\times\left[\cosh r\cosh r^{\prime}-\sinh
r\sinh r^{\prime}\cos(\theta+\theta^{\prime})-1\right]^{-1/2}.$
Note that the Fourier coefficients
${\mathfrak{h}}^{3}|_{m=0}\to{\mathfrak{g}}^{3}|_{m=0},$
in the limit as $r,r^{\prime}\rightarrow 0^{+}$, where
${\mathfrak{g}}^{3}|_{m=0}$ is given in (25). This is expected since
${\mathbf{H}}_{R}^{3}$ is a manifold.
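The closed form for ${\sf H}_{0}^{1/2}$ obtained above can be checked against direct numerical integration of (29); the sketch below (assuming Python with mpmath, and assuming that $\Pi(\alpha,k)$ denotes the complete elliptic integral of the third kind in the Byrd & Friedman convention, evaluated here as ellippi($\alpha^{2}$, $k^{2}$)) is not part of the original text.

```python
# Illustrative check of the m = 0 Fourier coefficient on H_R^3: elliptic
# closed form versus direct integration of (29).
from mpmath import mp, mpf, quad, sqrt, pi, cosh, sinh, cos, sin, ellipk, ellippi

mp.dps = 25
r, rp, th, thp = mpf("0.9"), mpf("1.3"), mpf("0.6"), mpf("2.1")
A = cosh(r) * cosh(rp) - sinh(r) * sinh(rp) * cos(th) * cos(thp)
B = sinh(r) * sinh(rp) * sin(th) * sin(thp)

# direct integration of (29) with m = 0 (T_0(x) = 1)
integrand = lambda x: (A / B - x) / sqrt(
    (1 - x) * (1 + x) * ((A + 1) / B - x) * ((A - 1) / B - x))
H0_direct = -1 + quad(integrand, [-1, 1]) / pi

# elliptic closed form, with k^2 and alpha^2 as defined in the text
k2 = 4 * B / ((A - B + 1) * (A + B - 1))
alpha2 = 2 * B / (A + B - 1)
H0_closed = (-1 + 2 * (ellipk(k2) + (A - B - 1) * ellippi(alpha2, k2))
             / (pi * sqrt((A - B + 1) * (A + B - 1))))
print(H0_direct, H0_closed)                           # should agree
```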
### 3.3 Fourier expansion for a fundamental solution of the Laplacian on
${\mathbf{H}}_{R}^{d}$
For the $d$-dimensional Riemannian manifold ${\mathbf{H}}_{R}^{d}$, with
$d\geq 2$, one can expand a fundamental solution of the Laplace-Beltrami
operator in an azimuthal Fourier series. One may Fourier expand, in terms of
the azimuthal coordinate, a fundamental solution of the Laplace-Beltrami
operator in any rotationally-invariant coordinate systems which admits
solutions via separation of variables. In Euclidean space, there exist non-
subgroup-type rotationally invariant coordinate systems which are separable
for Laplace’s equation. All separable coordinate systems for Laplace’s
equation in $d$-dimensional Euclidean space ${\mathbf{R}}^{d}$ are known. In
fact, this is also true for separable coordinate systems on
${\mathbf{H}}_{R}^{d}$ (see Kalnins (1986) [19]). Considerable work has been
done in two and three dimensions; however, much remains to be done toward a
detailed analysis of fundamental solutions.
We define an unnormalized fundamental solution of Laplace’s equation on the
unit hyperboloid
${\mathfrak{h}}^{d}:({\mathbf{H}}^{d}\times{\mathbf{H}}^{d})\setminus\\{({\bf
x},{\bf x}):{\bf x}\in{\mathbf{H}}^{d}\\}\to{\mathbf{R}}$ such that
${\mathfrak{h}}^{d}({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime}):={\mathcal{I}}_{d}(\rho({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime}))=\frac{2\pi^{d/2}R^{d-2}}{\Gamma(d/2)}{\mathcal{H}}_{R}^{d}({\bf
x},{{\bf x}^{\prime}}).$ (32)
In our current azimuthal Fourier analysis, we therefore will focus on the
relatively easier case of separable subgroup-type coordinate systems on
${\mathbf{H}}_{R}^{d}$, and specifically for geodesic polar coordinates. In
these coordinates the Riemannian metric is given by (9) and we further
restrict our attention by adopting standard geodesic polar coordinates (1).
In these coordinates we would like to expand a fundamental solution of
Laplace’s equation on the hyperboloid in an azimuthal Fourier series, namely
${\mathfrak{h}}^{d}({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})=\sum_{m=0}^{\infty}\cos(m(\phi-\phi^{\prime})){\sf
H}_{m}^{d/2-1}(r,r^{\prime},\theta_{1},\ldots,\theta_{d-2},\theta_{1}^{\prime},\ldots,\theta_{d-2}^{\prime})$
(33)
where ${\sf H}_{m}^{d/2-1}:[0,\infty)^{2}\times[0,\pi]^{2d-4}\to{\mathbf{R}}$
is defined such that
${\sf
H}_{m}^{d/2-1}(r,r^{\prime},\theta_{1},\ldots,\theta_{d-2},\theta_{1}^{\prime},\ldots,\theta_{d-2}^{\prime}):=\frac{\epsilon_{m}}{\pi}\int_{0}^{\pi}{\mathfrak{h}}^{d}({\widehat{\bf
x}},{\widehat{\bf
x}}^{\prime})\cos(m(\phi-\phi^{\prime}))d(\phi-\phi^{\prime})$ (34)
(see for instance Cohl & Tohline (1999) [10]). According to Theorem 2.1 and
(32), we may write ${\mathfrak{h}}^{d}({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})$ in terms of associated Legendre functions of the second kind as
follows
${\mathfrak{h}}^{d}({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime})=\frac{e^{-i\pi(d/2-1)}}{2^{d/2-1}\Gamma(d/2)\,(\sinh
d({\widehat{\bf x}},{\widehat{\bf
x}}^{\prime}))^{d/2-1}}Q_{d/2-1}^{d/2-1}\left(\cosh d({\widehat{\bf
x}},{\widehat{\bf x}}^{\prime})\right).$ (35)
By (3) we know that in any geodesic polar coordinate system
$\cosh d({\widehat{\bf x}},{\widehat{\bf x}}^{\prime})=\cosh r\cosh
r^{\prime}-\sinh r\sinh r^{\prime}\cos\gamma,$ (36)
and therefore through (34), (35), and (36), in standard geodesic polar
coordinates, the azimuthal Fourier coefficient can be given by
$\eqalign{{\sf
H}_{m}^{d/2-1}(r,r^{\prime},\theta_{1},\ldots,\theta_{d-2},\theta_{1}^{\prime},\ldots,\theta_{d-2}^{\prime})\cr\hskip
71.13188pt=\frac{\epsilon_{m}e^{-i\pi(d/2-1)}}{2^{d/2-1}\pi\Gamma(d/2)}\int_{0}^{\pi}\frac{Q_{d/2-1}^{d/2-1}\left(A-B\cos\psi\right)\cos(m\psi)}{\left[(A-B\cos\psi)^{2}-1\right]^{(d-2)/4}}d\psi,}$
(37)
where $\psi:=\phi-\phi^{\prime},$
$A,B:[0,\infty)^{2}\times[0,\pi]^{2d-4}\to{\mathbf{R}}$ are defined through
(8) and (36) as
$\displaystyle
A(r,r^{\prime},\theta_{1},\ldots,\theta_{d-2},\theta_{1}^{\prime},\ldots,\theta_{d-2}^{\prime}):=\cosh
r\cosh r^{\prime}-\sinh r\sinh
r^{\prime}\sum_{i=1}^{d-2}\cos\theta_{i}{\cos\theta_{i}}^{\prime}\prod_{j=1}^{i-1}\sin\theta_{j}{\sin\theta_{j}}^{\prime},$
and
$\displaystyle
B(r,r^{\prime},\theta_{1},\ldots,\theta_{d-2},\theta_{1}^{\prime},\ldots,\theta_{d-2}^{\prime}):=\sinh
r\sinh r^{\prime}\prod_{i=1}^{d-2}\sin\theta_{i}{\sin\theta_{i}}^{\prime}.$
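As a small consistency check (an illustrative sketch assuming Python with NumPy, not part of the original text), one can verify that $A-B\cos\psi$ with these definitions reproduces $\cosh d({\widehat{\bf x}},{\widehat{\bf x}}^{\prime})$ computed from (8) and (36).

```python
# Illustrative check that A - B cos(psi) equals cosh(d(x,x')) from (8), (36).
import numpy as np

rng = np.random.default_rng(2)
d, r, rp = 5, 1.1, 0.6
th, thp = rng.uniform(0, np.pi, (2, d - 2))
phi, phip = rng.uniform(0, 2 * np.pi, 2)
psi = phi - phip

cosg = np.cos(psi) * np.prod(np.sin(th) * np.sin(thp))      # equation (8)
for i in range(d - 2):
    cosg += np.cos(th[i]) * np.cos(thp[i]) * np.prod(np.sin(th[:i]) * np.sin(thp[:i]))
cosh_dist = np.cosh(r) * np.cosh(rp) - np.sinh(r) * np.sinh(rp) * cosg   # (36)

A = (np.cosh(r) * np.cosh(rp) - np.sinh(r) * np.sinh(rp)
     * sum(np.cos(th[i]) * np.cos(thp[i]) * np.prod(np.sin(th[:i]) * np.sin(thp[:i]))
           for i in range(d - 2)))
B = np.sinh(r) * np.sinh(rp) * np.prod(np.sin(th) * np.sin(thp))
print(cosh_dist, A - B * np.cos(psi))                       # should agree
```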
Even though (37) is a compact expression for the Fourier coefficient of a
fundamental solution of Laplace’s equation on ${\mathbf{H}}_{R}^{d}$ for
$d\in\\{2,3,4,\ldots\\},$ it may be informative to use any of the
representations of a fundamental solution of the Laplacian on
${\mathbf{H}}_{R}^{d}$ from Theorem 2.1 to express the Fourier coefficients.
For instance, if one uses the finite-summation expression in odd dimensions,
one can write the Fourier coefficients as a linear combination of integrals of
the form
$\int_{-1}^{1}\frac{\left[(a+b)/2-x\right]^{2k-1}x^{p}dx}{(a-x)^{k-1}(b-x)^{k-1}\sqrt{(a-x)(b-x)(c-x)(x-d)}},$
where $x=\cos\psi$, $k\in\\{1,\ldots,(d-1)/2\\}$, $p\in\\{0,\ldots,m\\},$ and
we have used the nomenclature of section 3.2. This integral is a rational
function of $x$ multiplied by an inverse square-root of a quartic in $x$.
Because of this and due to the limits of integration, we see that by
definition, these are all given in terms of complete elliptic integrals. The
special functions which represent the azimuthal Fourier coefficients on
${\mathbf{H}}_{R}^{d}$ are unlike the odd-half-integer degree, integer-order,
associated Legendre functions of the second kind which appear in Euclidean
space ${\mathbf{R}}^{d}$ for $d$ odd (see Cohl (2010) [6]; Cohl & Dominici
(2010) [7]), in that they include complete elliptic integrals of the third
kind (in addition to complete elliptic integrals of the first and second kind)
(cf. section 3.2) in their basis functions. For $d\geq 2$, through (4.1) in
Cohl & Dominici (2010) [7] and that ${\mathbf{H}}_{R}^{d}$ is a manifold (and
therefore must locally represent Euclidean space), the functions ${\sf
H}_{m}^{d/2-1}$ are generalizations of associated Legendre functions of the
second kind with odd-half-integer degree and order given by either an odd-
half-integer or an integer.
## 4 Gegenbauer expansion in geodesic polar coordinates
In this section we derive an eigenfunction expansion for a fundamental
solution of Laplace’s equation on the hyperboloid in geodesic polar
coordinates for $d\in\\{3,4,\ldots\\}.$ Since the spherical harmonics for
$d=2$ are just trigonometric functions with argument given in terms of the
azimuthal angle, this case has already been covered in section 3.1.
In geodesic polar coordinates, Laplace’s equation is given by (cf. (10))
$\Delta f=\frac{1}{R^{2}}\left[\frac{\partial^{2}f}{\partial r^{2}}+(d-1)\coth
r\frac{\partial f}{\partial
r}+\frac{1}{\sinh^{2}r}\Delta_{{\mathbf{S}}^{d-1}}\right]f=0,$ (38)
where $f:{\mathbf{H}}_{R}^{d}\to{\mathbf{R}}$ and
$\Delta_{{\mathbf{S}}^{d-1}}$ is the corresponding Laplace-Beltrami operator
on the $(d-1)$-dimensional unit sphere ${\mathbf{S}}^{d-1}$. Eigenfunctions
$Y_{l}^{K}:{\mathbf{S}}^{d-1}\to{\mathbf{C}}$ of the Laplace-Beltrami operator
$\Delta_{{\mathbf{S}}^{d-1}}$, where $l\in{\mathbf{N}}_{0}$ and $K$ is a set
of quantum numbers which label representations for $l$ in separable subgroup
type coordinate systems on ${\mathbf{S}}^{d-1}$ (i.e. angular momentum type
quantum numbers, see Izmest’ev et al. (2001) [18]), are given by solutions to
the eigenvalue problem (11).
In standard geodesic polar coordinates (1),
$K=(k_{1},\ldots,k_{d-3},|k_{d-2}|)\in{\mathbf{N}}_{0}^{d-2}$ with
$k_{0}=l\geq k_{1}\geq\ldots\geq k_{d-3}\geq|k_{d-2}|\geq 0$, and in
particular $k_{d-2}\in\\{-k_{d-3},\ldots,k_{d-3}\\}$. A positive fundamental
solution
${\mathcal{H}}_{R}^{d}:({\mathbf{H}}_{R}^{d}\times{\mathbf{H}}_{R}^{d})\setminus\\{({\bf
x},{\bf x}):{\bf x}\in{\mathbf{H}}_{R}^{d}\\}\to{\mathbf{R}}$ on the
$R$-radius hyperboloid satisfies (12). The completeness relation for
hyperspherical harmonics in standard hyperspherical coordinates is given by
$\sum_{l=0}^{\infty}\sum_{K}Y_{l}^{K}(\theta_{1},\ldots,\theta_{d-1})\overline{Y_{l}^{K}(\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime})}=\frac{\delta(\theta_{1}-\theta_{1}^{\prime})\ldots\delta(\theta_{d-1}-\theta_{d-1}^{\prime})}{\sin^{d-2}\theta_{d-1}^{\prime}\ldots\sin\theta_{2}^{\prime}},$
where $K=(k_{1},\ldots,k_{d-2})$ and $l=k_{0}\in{\mathbf{N}}_{0}$. Therefore
through (14), we can write
$\delta_{g}({\bf x},{{\bf
x}^{\prime}})=\frac{\delta(r-r^{\prime})}{R^{d}\sinh^{d-1}r^{\prime}}\sum_{l=0}^{\infty}\sum_{K}Y_{l}^{K}(\theta_{1},\ldots,\theta_{d-1})\overline{Y_{l}^{K}(\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime})}.$
(39)
For fixed $r,r^{\prime}\in[0,\infty)$ and
$\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime}\in[0,\pi]$, since
${\mathcal{H}}_{R}^{d}$ is harmonic on its domain, its restriction is in
$C^{2}({\mathbf{S}}^{d-1})$, and therefore has a unique expansion in
hyperspherical harmonics, namely
${\mathcal{H}}_{R}^{d}({\bf x},{{\bf
x}^{\prime}})=\sum_{l=0}^{\infty}\sum_{K}u_{l}^{K}(r,r^{\prime},\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime})Y_{l}^{K}(\theta_{1},\ldots,\theta_{d-1}),$
(40)
where $u_{l}^{K}:[0,\infty)^{2}\times[0,\pi]^{d-1}\to{\mathbf{C}}$. If we
substitute (39) and (40) into (12) and use (38) and (11), we obtain
$\displaystyle\sum_{l=0}^{\infty}\sum_{K}Y_{l}^{K}(\theta_{1},\ldots,\theta_{d-1})\left[\frac{d^{2}}{dr^{2}}+(d-1)\coth
r\frac{d}{dr}-\frac{l(l+d-2)}{\sinh^{2}r}\right]u_{l}^{K}(r,r^{\prime},\theta_{1}^{\prime},\dots,\theta_{d-1}^{\prime})$
$\displaystyle\hskip
5.69046pt=-\sum_{l=0}^{\infty}\sum_{K}Y_{l}^{K}(\theta_{1},\ldots,\theta_{d-1})\overline{Y_{l}^{K}(\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime})}\cdot\frac{\delta(r-r^{\prime})}{R^{d-2}\sinh^{d-1}r^{\prime}}.$
(41)
This indicates that for $u_{l}:[0,\infty)^{2}\to{\mathbf{R}}$,
$u_{l}^{K}(r,r^{\prime},\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime})=u_{l}(r,r^{\prime})\overline{Y_{l}^{K}(\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime})},$
(42)
and from (40) the expression for a fundamental solution of the Laplace-Beltrami
operator in hyperspherical coordinates on the hyperboloid is given by
${\mathcal{H}}_{R}^{d}({\bf x},{{\bf
x}^{\prime}})=\sum_{l=0}^{\infty}u_{l}(r,r^{\prime})\sum_{K}Y_{l}^{K}(\theta_{1},\ldots,\theta_{d-1})\overline{Y_{l}^{K}(\theta_{1}^{\prime},\ldots,\theta_{d-1}^{\prime})}.$
(43)
The above expression can be simplified using the addition theorem for
hyperspherical harmonics (see for instance Wen & Avery (1985) [30], section
10.2.1 in Fano & Rau (1996), Chapter 9 in Andrews, Askey & Roy (1999) [2] and
especially Chapter XI in Erdélyi et al. Vol. II (1981) [12]), which is given
by
$\sum_{K}Y_{l}^{K}({\widehat{\bf x}})\overline{Y_{l}^{K}({\widehat{\bf
x}}^{\prime})}=\frac{\Gamma(d/2)}{2\pi^{d/2}(d-2)}(2l+d-2)C_{l}^{d/2-1}(\cos\gamma),$
(44)
where $\gamma$ is the angle between two arbitrary vectors ${\widehat{\bf
x}},{\widehat{\bf x}}^{\prime}\in{\mathbf{S}}^{d-1}$ given in terms of (2).
The Gegenbauer polynomials $C_{l}^{\mu}:[-1,1]\to{\mathbf{R}}$,
$l\in{\mathbf{N}}_{0}$, $\mbox{Re}\,\mu>-1/2$, can be defined in terms of the
Gauss hypergeometric function as
$C_{l}^{\mu}(x):=\frac{(2\mu)_{l}}{l!}\,{}_{2}F_{1}\left(-l,l+2\mu;\mu+\frac{1}{2};\frac{1-x}{2}\right).$
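This hypergeometric definition can be compared with a standard implementation; the short sketch below (assuming Python with SciPy, whose eval_gegenbauer is expected to use the same standard normalization) is included only as an illustration.

```python
# Illustrative check of the 2F1 definition of the Gegenbauer polynomials.
from scipy.special import eval_gegenbauer, hyp2f1, poch, factorial

def gegenbauer_2f1(l, mu, x):
    return poch(2 * mu, l) / factorial(l) * hyp2f1(-l, l + 2 * mu, mu + 0.5, (1 - x) / 2)

x = 0.37
for l in range(6):
    for mu in (0.5, 1.0, 1.5):        # mu = d/2 - 1 for d = 3, 4, 5
        print(l, mu, gegenbauer_2f1(l, mu, x), eval_gegenbauer(l, mu, x))
```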
The above expression (43) can be simplified using (44), therefore
${\mathcal{H}}_{R}^{d}({\bf x},{{\bf
x}^{\prime}})=\frac{\Gamma(d/2)}{2\pi^{d/2}(d-2)}\sum_{l=0}^{\infty}u_{l}(r,r^{\prime})(2l+d-2)C_{l}^{d/2-1}(\cos\gamma).$
(45)
Now we compute the exact expression for $u_{l}(r,r^{\prime})$. By separating
the angular dependence in (41) and using (42), we obtain the differential
equation
$\frac{d^{2}u_{l}}{dr^{2}}+(d-1)\coth
r\frac{du_{l}}{dr}-\frac{l(l+d-2)u_{l}}{\sinh^{2}r}=-\frac{\delta(r-r^{\prime})}{R^{d-2}\sinh^{d-1}r^{\prime}}.$
(46)
Away from $r=r^{\prime}$, solutions to the differential equation (46) must be
given by solutions to the homogeneous equation, which are given in section
2.2. Therefore, the solution to (46) is given by
$u_{l}(r,r^{\prime})=\frac{A}{\left(\sinh r\sinh
r^{\prime}\right)^{d/2-1}}P_{d/2-1}^{-(d/2-1+l)}(\cosh
r_{<})Q_{d/2-1}^{d/2-1+l}(\cosh r_{>}),$ (47)
such that $u_{l}(r,r^{\prime})$ is continuous at $r=r^{\prime}$, where
$A\in{\mathbf{R}}.$
In order to determine the constant $A$, we first make the substitution
$v_{l}(r,r^{\prime})=(\sinh r\sinh r^{\prime})^{(d-1)/2}u_{l}(r,r^{\prime}).$
(48)
This converts (46) into the following differential equation
$\frac{\partial^{2}v_{l}(r,r^{\prime})}{\partial
r^{2}}-\frac{1}{4}\left[\frac{(d-1+2l)(d-3+2l)}{\sinh^{2}r}+(d-1)^{2}\right]v_{l}(r,r^{\prime})=-\frac{\delta(r-r^{\prime})}{R^{d-2}},$
which we then integrate over $r$ from $r^{\prime}-\epsilon$ to
$r^{\prime}+\epsilon$, and take the limit as $\epsilon\to 0^{+}$. We are left
with a discontinuity condition for the derivative of $v_{l}(r,r^{\prime})$
with respect to $r$ evaluated at $r=r^{\prime}$, namely
$\lim_{\epsilon\to
0^{+}}\left.\frac{dv_{l}(r,r^{\prime})}{dr}\right|_{r^{\prime}-\epsilon}^{r^{\prime}+\epsilon}=\frac{-1}{R^{d-2}}.$
(49)
After inserting (47) with (48) into (49), substituting $z=\cosh r^{\prime}$,
evaluating at $r=r^{\prime}$, and making use of the Wronskian relation (e.g.
p. 165 in Magnus, Oberhettinger & Soni (1966) [21])
$W\left\\{P_{\nu}^{-\mu}(z),Q_{\nu}^{\mu}(z)\right\\}=-\frac{e^{i\pi\mu}}{z^{2}-1},$
which is equivalent to
$W\left\\{P_{\nu}^{-\mu}(\cosh r^{\prime}),Q_{\nu}^{\mu}(\cosh
r^{\prime})\right\\}=-\frac{e^{i\pi\mu}}{\sinh^{2}r^{\prime}},$
we obtain
$A=\frac{e^{-i\pi(d/2-1+l)}}{R^{d-2}},$
and hence
$u_{l}(r,r^{\prime})=\frac{e^{-i\pi(d/2-1+l)}}{R^{d-2}(\sinh r\sinh
r^{\prime})^{d/2-1}}P_{d/2-1}^{-(d/2-1+l)}(\cosh
r_{<})Q_{d/2-1}^{d/2-1+l}(\cosh r_{>}),$
and therefore through (45), we have
$\displaystyle{\mathcal{H}}_{R}^{d}({\bf x},{{\bf
x}^{\prime}})=\frac{\Gamma(d/2)}{2\pi^{d/2}R^{d-2}(d-2)}\frac{e^{-i\pi(d/2-1)}}{(\sinh
r\sinh r^{\prime})^{d/2-1}}$
$\displaystyle\times\sum_{l=0}^{\infty}(-1)^{l}(2l+d-2)P_{d/2-1}^{-(d/2-1+l)}(\cosh
r_{<})Q_{d/2-1}^{d/2-1+l}(\cosh r_{>})C_{l}^{d/2-1}(\cos\gamma).$ (50)
As an alternative check of our derivation, we can do the asymptotics for the
product of associated Legendre functions $P_{d/2-1}^{-(d/2-1+l)}(\cosh
r_{<})Q_{d/2-1}^{d/2-1+l}(\cosh r_{>})$ in (50) as $r,r^{\prime}\to 0^{+}$.
The appropriate asymptotic expressions for $P$ and $Q$ respectively can be
found on p. 171 and p. 173 in Olver (1997) [24]. For the associated Legendre
function of the first kind there is
$P_{\nu}^{-\mu}(z)\sim\frac{\left[(z-1)/2\right]^{\mu/2}}{\Gamma(\mu+1)},$
as $z\to 1$, $\mu\neq-1,-2,\ldots$, and for the associated Legendre function
of the second kind there is
$Q_{\nu}^{\mu}(z)\sim\frac{e^{i\pi\mu}\Gamma(\mu)}{2\left[(z-1)/2\right]^{\mu/2}},$
as $z\to 1^{+}$, $\mbox{Re}\ \,\mu>0$, and $\nu+\mu\neq-1,-2,-3,\ldots$. To
second order the hyperbolic cosine is given by $\cosh r\simeq 1+r^{2}/2$.
Therefore to lowest order we can insert $\cosh r_{<}\simeq 1+r_{<}^{2}/2$ and
$\cosh r_{>}\simeq 1+r_{>}^{2}/2$ into the above expressions yielding
$P_{d/2-1}^{-(d/2-1+l)}(\cosh
r_{<})\sim\frac{(r_{<}/2)^{d/2-1+l}}{\Gamma(d/2+l)},$
and
$Q_{d/2-1}^{d/2-1+l}(\cosh
r_{>})\sim\frac{e^{i\pi(d/2-1+l)}\Gamma(d/2-1+l)}{2(r_{>}/2)^{d/2-1+l}},$
as $r,r^{\prime}\to 0^{+}$. Therefore the asymptotics for the product of
associated Legendre functions in (50) is given by
$P_{d/2-1}^{-(d/2-1+l)}(\cosh r_{<})Q_{d/2-1}^{d/2-1+l}(\cosh
r_{>})\sim\frac{e^{i\pi(d/2-1+l)}}{2l+d-2}\left(\frac{r_{<}}{r_{>}}\right)^{l+d/2-1}$
(51)
(the factor $2l+d-2$ is a term which one encounters regularly with
hyperspherical harmonics). Gegenbauer polynomials obey the following
generating function
$\frac{1}{\left(1+z^{2}-2zx\right)^{\mu}}=\sum_{l=0}^{\infty}C_{l}^{\mu}(x)z^{l},$
(52)
where $x\in[-1,1]$ and $|z|<1$ (see for instance, p. 222 in Magnus,
Oberhettinger & Soni (1966) [21]). The generating function for Gegenbauer
polynomials (52) can be used to expand a fundamental solution of Laplace’s
equation in Euclidean space ${\mathbf{R}}^{d}$ (for $d\geq 3$, cf. Theorem
3.1) in hyperspherical coordinates, namely
$\frac{1}{\|{\bf x}-{{\bf
x}^{\prime}}\|^{d-2}}=\sum_{l=0}^{\infty}\frac{r_{<}^{l}}{r_{>}^{l+d-2}}C_{l}^{d/2-1}(\cos\gamma),$
(53)
where $\gamma$ was defined in (44). Using (53) and Theorem 3.1, since
${\mathcal{H}}_{R}^{d}\to{\mathcal{G}}^{d}$, $\sinh r,\sinh r^{\prime}\to
r,r^{\prime}$ and (51) is satisfied to lowest order as $r,r^{\prime}\to
0^{+}$, we see that (50) obeys the correct asymptotics and our fundamental
solution expansion locally reduces to the appropriate expansion for Euclidean
space, as it should since ${\mathbf{H}}_{R}^{d}$ is a manifold.
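The Euclidean expansion (53) used in this check is itself straightforward to verify numerically; the sketch below (arbitrary dimension and test points, with the series truncated) illustrates this.

```python
# Sketch: numerical check of the Gegenbauer expansion (53) of 1/||x - x'||^(d-2)
# for an arbitrary dimension d >= 3 and arbitrary radii and separation angle.
from mpmath import mp, mpf, gegenbauer, almosteq

mp.dps = 30
d = 5
r, rp, cg = mpf("0.4"), mpf("1.3"), mpf("0.25")   # r, r', cos(gamma)
rl, rg = min(r, rp), max(r, rp)                   # r_<, r_>

lhs = (r**2 + rp**2 - 2 * r * rp * cg) ** (-(d - 2) / mpf(2))
rhs = sum(rl**l / rg**(l + d - 2) * gegenbauer(l, d / mpf(2) - 1, cg) for l in range(120))
assert almosteq(lhs, rhs)   # the truncation error ~ (r_</r_>)^120 is negligible here
```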
Note that (50) can be further expanded over the remaining $(d-2)$-quantum
numbers in $K$ in terms of a simply separable product of normalized harmonics
$Y_{l}^{K}({\widehat{\bf x}})\overline{Y_{l}^{K}({\widehat{\bf
x}}^{\prime})}$, where ${\widehat{\bf x}},{\widehat{\bf
x}}^{\prime}\in{\mathbf{S}}^{d-1}$, using the addition theorem for
hyperspherical harmonics (44) (see Cohl (2010) [6] for several examples).
It is intriguing to observe how one might obtain the Fourier expansion for
$d=2$ (24) from the expansion (50), which is strictly valid for $d\geq 3$.
If one makes the substitution $\mu=d/2-1$ in (50) then we obtain the following
conjecture (which matches up to the generating function for Gegenbauer
polynomials in the Euclidean limit $r,r^{\prime}\to 0^{+}$)
$\displaystyle\frac{1}{\sinh^{\mu}\rho}Q_{\mu}^{\mu}(\cosh\rho)$
$\displaystyle=$ $\displaystyle\frac{2^{\mu}\Gamma(\mu+1)}{(\sinh r\sinh
r^{\prime})^{\mu}}$ (54) $\displaystyle\hskip
5.69046pt\times\sum_{n=0}^{\infty}(-1)^{n}\frac{n+\mu}{\mu}P_{\mu}^{-(\mu+n)}(\cosh
r_{<})Q_{\mu}^{\mu+n}(\cosh r_{>})C_{n}^{\mu}(\cos\gamma),$
for all $\mu\in{\mathbf{C}}$ such that $\mbox{Re}\,\mu>-1/2$. If we take the
limit as $\mu\to 0$ in (54) and use
$\lim_{\mu\to 0}\frac{{n}+\mu}{\mu}C_{n}^{\mu}(x)=\epsilon_{n}T_{n}(x)$ (55)
(see for instance (6.4.13) in Andrews, Askey & Roy (1999) [2]), where
$T_{n}:[-1,1]\to{\mathbf{R}}$ is the Chebyshev polynomial of the first kind
defined as $T_{l}(x):=\cos(l\cos^{-1}x),$ then we obtain the following formula
$\frac{1}{2}\log\frac{\cosh\rho+1}{\cosh\rho-1}=\sum_{n=0}^{\infty}\epsilon_{n}(-1)^{n}P_{0}^{-n}(\cosh
r_{<})Q_{0}^{n}(\cosh r_{>})\cos(n(\phi-\phi^{\prime})),$
where $\cosh\rho=\cosh r\cosh r^{\prime}-\sinh r\sinh
r^{\prime}\cos(\phi-\phi^{\prime})$. By taking advantage of the following
formulae
$P_{0}^{-n}(z)=\frac{1}{n!}\left[\frac{z-1}{z+1}\right]^{n/2},$ (56)
for $n\geq 0,$
$Q_{0}(z)=\frac{1}{2}\log\frac{z+1}{z-1}$ (57)
(see (8.4.2) in Abramowitz & Stegun (1972) [1]), and
$Q_{0}^{n}(z)=\frac{1}{2}(-1)^{n}(n-1)!\left\\{\left[\frac{z+1}{z-1}\right]^{n/2}-\left[\frac{z-1}{z+1}\right]^{n/2}\right\\},$
(58)
for $n\geq 1$ then (24) is reproduced. The representation (56) follows easily
from the Gauss hypergeometric representation of the associated Legendre
function of the first kind (see (8.1.2) in Abramowitz & Stegun (1972) [1])
$P_{\nu}^{\mu}(z)=\frac{1}{\Gamma(1-\mu)}\left[\frac{z+1}{z-1}\right]^{\mu/2}{}_{2}F_{1}\left(-\nu,\nu+1;1-\mu;\frac{1-z}{2}\right).$
(59)
One way to derive the representation of the associated Legendre function of
the second kind (58) is to use the Whipple formula for associated Legendre functions
(cf. (8.2.7) in Abramowitz & Stegun (1972) [1])
$Q_{\nu}^{\mu}(z)=\sqrt{\frac{\pi}{2}}\,\Gamma(\nu+\mu+1)(z^{2}-1)^{-1/4}e^{i\pi\mu}P_{-\mu-1/2}^{-\nu-1/2}\left(\frac{z}{\sqrt{z^{2}-1}}\right),$
and (8.6.9) in Abramowitz & Stegun (1972) [1], namely
$P_{\nu}^{-1/2}(z)=\sqrt{\frac{2}{\pi}}\frac{(z^{2}-1)^{-1/4}}{2\nu+1}\left\\{\left[z+\sqrt{z^{2}-1}\right]^{\nu+1/2}-\left[z+\sqrt{z^{2}-1}\right]^{-\nu-1/2}\right\\},$
for $\nu\neq-1/2$.
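As a further numerical sanity check, the $d=2$ formula above can be tested using only the elementary representations (56)–(58); in the sketch below (arbitrary sample points) the factors $(-1)^{n}$ and the factorials from (56) and (58) have already been cancelled by hand.

```python
# Sketch: check the d = 2 Fourier formula using only (56)-(58); the factors
# (-1)^n and the factorials cancel, leaving t_<^n (t_>^(-n) - t_>^n)/n per mode.
from mpmath import mp, mpf, cosh, sinh, cos, log, almosteq

mp.dps = 30
r, rp, dphi = mpf("0.7"), mpf("1.1"), mpf("0.4")  # r, r', phi - phi'
zl, zg = cosh(min(r, rp)), cosh(max(r, rp))       # cosh r_<, cosh r_>

cosh_rho = cosh(r) * cosh(rp) - sinh(r) * sinh(rp) * cos(dphi)
lhs = log((cosh_rho + 1) / (cosh_rho - 1)) / 2

tl = ((zl - 1) / (zl + 1)) ** mpf("0.5")          # [(cosh r_< - 1)/(cosh r_< + 1)]^(1/2)
tg = ((zg - 1) / (zg + 1)) ** mpf("0.5")
rhs = log((zg + 1) / (zg - 1)) / 2                # the n = 0 term, Q_0(cosh r_>)
rhs += sum(tl**n * (tg**(-n) - tg**n) / n * cos(n * dphi) for n in range(1, 200))
assert almosteq(lhs, rhs)
```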
### 4.1 Addition theorem for the azimuthal Fourier coefficient on
${\mathbf{H}}_{R}^{3}$
One can compute addition theorems for the azimuthal Fourier coefficients of a
fundamental solution for Laplace’s equation on ${\mathbf{H}}_{R}^{d}$ for
$d\geq 3$ by relating directly obtained Fourier coefficients to the expansion
over hyperspherical harmonics for the same fundamental solution. By using the
expansion of ${\mathcal{H}}_{R}^{d}({\bf x},{{\bf x}^{\prime}})$ in terms of
Gegenbauer polynomials (50) in combination with the addition theorem for
hyperspherical harmonics (44) expressed in, for instance, one of Vilenkin’s
polyspherical coordinates (see section IX.5.2 in Vilenkin (1968) [29];
Izmest’ev et al. (1999,2001) [17, 18]), one can obtain through series
rearrangement a multi-summation expression for the azimuthal Fourier
coefficients. Vilenkin’s polyspherical coordinates are simply subgroup-type
coordinate systems which parametrize points on ${\mathbf{S}}^{d-1}$ (for a
detailed discussion of these coordinate systems see chapter 4 in Cohl (2010)
[6]). In this section we will give an explicit example of just such an
addition theorem on ${\mathbf{H}}_{R}^{3}$.
The azimuthal Fourier coefficients on ${\mathbf{H}}_{R}^{3}$ expressed in
standard hyperspherical coordinates (1) are given by the functions ${\sf
H}_{m}:[0,\infty)^{2}\times[0,\pi]^{2}\to{\mathbf{R}}$ which are defined by
(28). By expressing (50) in $d=3$ we obtain
$\displaystyle{\mathcal{H}}_{R}^{3}({\bf x},{{\bf x}^{\prime}})=\frac{-i}{4\pi
R\sqrt{\sinh r\sinh r^{\prime}}}$ $\displaystyle\hskip
11.38092pt\times\sum_{l=0}^{\infty}(-1)^{l}(2l+1)P_{1/2}^{-(1/2+l)}(\cosh
r_{<})Q_{1/2}^{1/2+l}(\cosh r_{>})P_{l}(\cos\gamma),$ (60)
where $P_{l}:[-1,1]\to{\mathbf{R}}$ is the Legendre polynomial defined by
$P_{l}(x)=C_{l}^{1/2}(x)$, or through (59) with $\mu=0$ and
$\nu\in{\mathbf{N}}_{0}$. By using the addition theorem for hyperspherical
harmonics (44) with $d=3$, parametrizing points on ${\mathbf{S}}^{2}$ by
$(\cos\theta,\sin\theta\cos\phi,\sin\theta\sin\phi)$, and recalling that the
normalized spherical harmonics are
$Y_{l,m}(\theta,\phi)=(-1)^{m}\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}P_{l}^{m}(\cos\theta)e^{im\phi},$
we obtain the addition theorem for spherical harmonics, namely
$P_{l}(\cos\gamma)=\sum_{m=-l}^{l}\frac{(l-m)!}{(l+m)!}P_{l}^{m}(\cos\theta)P_{l}^{m}(\cos\theta^{\prime})e^{im(\phi-\phi^{\prime})},$
(61)
where $\cos\gamma$ is given by (26). By combining (60) and (61), reversing the
order of the two summation symbols, and comparing the result with (27) we
obtain the following single summation addition theorem for the azimuthal
Fourier coefficients of a fundamental solution of Laplace’s equation on
${\mathbf{H}}_{R}^{3}$, namely since ${\mathfrak{h}}^{3}=4\pi
R{\mathcal{H}}_{R}^{3}$,
$\displaystyle{\sf
H}_{m}^{1/2}(r,r^{\prime},\theta,\theta^{\prime})=\frac{-i\epsilon_{m}}{\sqrt{\sinh
r\sinh r^{\prime}}}\sum_{l=|m|}^{\infty}(-1)^{l}(2l+1)\frac{(l-m)!}{(l+m)!}$
$\displaystyle\hskip 56.9055pt\times
P_{l}^{m}(\cos\theta)P_{l}^{m}(\cos\theta^{\prime})P_{1/2}^{-(1/2+l)}(\cosh
r_{<})Q_{1/2}^{1/2+l}(\cosh r_{>}).$
This addition theorem reduces to the corresponding result ((2.4) in Cohl et
al. (2001) [9]) in the Euclidean ${\mathbf{R}}^{3}$ limit as $r,r^{\prime}\to
0^{+}$.
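Since the classical addition theorem (61) is the key ingredient of this reduction, we note that it too is easily verified numerically. The sketch below uses the equivalent real form of (61), so that only non-negative orders appear, with arbitrary angles and degree and with $\cos\gamma$ computed from the standard two-sphere expression; the choice of Condon–Shortley phase is immaterial because $P_{l}^{m}$ enters only through products of equal order.

```python
# Sketch: numerical check of the spherical harmonic addition theorem (61),
# written in its real form so that only non-negative orders m appear.
from math import factorial, sin, cos
from scipy.special import lpmv, eval_legendre

theta, theta_p, dphi, l = 0.9, 2.1, 0.6, 7
cos_gamma = cos(theta) * cos(theta_p) + sin(theta) * sin(theta_p) * cos(dphi)

lhs = eval_legendre(l, cos_gamma)
rhs = eval_legendre(l, cos(theta)) * eval_legendre(l, cos(theta_p)) \
    + 2 * sum(factorial(l - m) / factorial(l + m)
              * lpmv(m, l, cos(theta)) * lpmv(m, l, cos(theta_p)) * cos(m * dphi)
              for m in range(1, l + 1))
assert abs(lhs - rhs) < 1e-10
```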
## 5 Discussion
Re-arrangement of the multi-summation expressions in section 4 is possible
through modification of the order in which the countably infinite space of
quantum numbers is summed over in a standard hyperspherical coordinate system,
namely
$\displaystyle\sum_{l=0}^{\infty}\sum_{K}=\sum_{l=0}^{\infty}\sum_{k_{1}=0}^{l}\sum_{k_{2}=0}^{k_{1}}\cdots\sum_{k_{d-4}=0}^{k_{d-5}}\sum_{k_{d-3}=0}^{k_{d-4}}\sum_{k_{d-2}=-k_{d-3}}^{k_{d-3}}$
$\displaystyle\hskip
39.83368pt=\sum_{k_{d-2}=-\infty}^{\infty}\sum_{k_{d-3}=|k_{d-2}|}^{\infty}\sum_{k_{d-4}=k_{d-2}}^{\infty}\cdots\sum_{k_{2}=k_{3}}^{\infty}\sum_{k_{1}=k_{2}}^{\infty}\sum_{k_{0}=k_{1}}^{\infty}.$
Similar multi-summation re-arrangements have been accomplished previously for
azimuthal Fourier coefficients of fundamental solutions for the Laplacian in
Euclidean space (see for instance Cohl et al. (2000) [11]; Cohl et al. (2001)
[9]). Comparison of the azimuthal Fourier expansions in section 3 (and in
particular (37)) with re-arranged Gegenbauer expansions in section 4 (and in
particular (50)) will yield new addition theorems for the special functions
representing the azimuthal Fourier coefficients of a fundamental solution of
the Laplacian on the hyperboloid. These implied addition theorems will provide
new special function identities for the azimuthal Fourier coefficients, which
are hyperbolic generalizations of particular associated Legendre functions of
the second kind. In odd-dimensions, these special functions reduce to toroidal
harmonics.
## Acknowledgements
Much thanks to A. Rod Gover, Tom ter Elst, Shaun Cooper, and Willard Miller,
Jr. for valuable discussions. I would like to express my gratitude to Carlos
Criado Cambón in the Facultad de Ciencias at Universidad de Málaga for his
assistance in describing the global geodesic distance function in the
hyperboloid model. We would also like to acknowledge two anonymous referees
whose comments helped improve this paper. I acknowledge funding for time to
write this paper from the Dean of the Faculty of Science at the University of
Auckland in the form of a three month stipend to enhance University of
Auckland 2012 PBRF Performance. Part of this work was conducted while H. S.
Cohl was a National Research Council Research Postdoctoral Associate in the
Information Technology Laboratory at the National Institute of Standards and
Technology, Gaithersburg, Maryland, U.S.A.
## References
* [1] M. Abramowitz and I. A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55 of National Bureau of Standards Applied Mathematics Series. U.S. Government Printing Office, Washington, D.C., 1972.
* [2] G. E. Andrews, R. Askey, and R. Roy. Special functions, volume 71 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1999.
* [3] E. Beltrami. Essai d’interprétation de la géométrie noneuclidéenne. Trad. par J. Hoüel. Annales Scientifiques de l’École Normale Supérieure, 6:251–288, 1869.
* [4] L. Bers, F. John, and M. Schechter. Partial differential equations. Interscience Publishers, New York, N.Y., 1964.
* [5] P. F. Byrd and M. D. Friedman. Handbook of elliptic integrals for engineers and physicists. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete. Bd LXVII. Springer-Verlag, Berlin, 1954.
* [6] H. S. Cohl. Fourier and Gegenbauer expansions for fundamental solutions of the Laplacian and powers in ${\mathbf{R}}^{d}$ and ${\mathbf{H}}^{d}$. PhD thesis, The University of Auckland, 2010. xiv+190 pages.
* [7] H. S. Cohl and D. E. Dominici. Generalized Heine’s identity for complex Fourier series of binomials. Proceedings of the Royal Society A, 467:333–345, 2010.
* [8] H. S. Cohl and E. G. Kalnins. Fundamental solution of the Laplacian in the hyperboloid model of hyperbolic geometry. In review, Journal of Physics A: Mathematical and Theoretical, 2011.
* [9] H. S. Cohl, A. R. P. Rau, J. E. Tohline, D. A. Browne, J. E. Cazes, and E. I. Barnes. Useful alternative to the multipole expansion of $1/r$ potentials. Physical Review A: Atomic and Molecular Physics and Dynamics, 64(5):052509, Oct 2001.
* [10] H. S. Cohl and J. E. Tohline. A Compact Cylindrical Green’s Function Expansion for the Solution of Potential Problems. The Astrophysical Journal, 527:86–101, December 1999.
* [11] H. S. Cohl, J. E. Tohline, A. R. P. Rau, and H. M. Srivastava. Developments in determining the gravitational potential using toroidal functions. Astronomische Nachrichten, 321(5/6):363–372, 2000.
* [12] A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi. Higher transcendental functions. Vol. II. Robert E. Krieger Publishing Co. Inc., Melbourne, Fla., 1981.
* [13] G. B. Folland. Introduction to partial differential equations. Number 17 in Mathematical Notes. Princeton University Press, Princeton, 1976.
* [14] L. Fox and I. B. Parker. Chebyshev polynomials in numerical analysis. Oxford University Press, London, 1968.
* [15] D. Gilbarg and N. S. Trudinger. Elliptic partial differential equations of second order. Number 224 in Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin etc., second edition, 1983.
* [16] C. Grosche, G. S. Pogosyan, and A. N. Sissakian. Path-integral approach for superintegrable potentials on the three-dimensional hyperboloid. Physics of Particles and Nuclei, 28(5):486–519, 1997.
* [17] A. A. Izmest’ev, G. S. Pogosyan, A. N. Sissakian, and P. Winternitz. Contractions of Lie algebras and separation of variables. The $n$-dimensional sphere. Journal of Mathematical Physics, 40(3):1549–1573, 1999.
* [18] A. A. Izmest’ev, G. S. Pogosyan, A. N. Sissakian, and P. Winternitz. Contractions of Lie algebras and the separation of variables: interbase expansions. Journal of Physics A: Mathematical and General, 34(3):521–554, 2001.
* [19] E. G. Kalnins. Separation of variables for Riemannian spaces of constant curvature, volume 28 of Pitman Monographs and Surveys in Pure and Applied Mathematics. Longman Scientific & Technical, Harlow, 1986.
* [20] J. M. Lee. Riemannian manifolds, volume 176 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1997.
* [21] W. Magnus, F. Oberhettinger, and R. P. Soni. Formulas and theorems for the special functions of mathematical physics. Third enlarged edition. Die Grundlehren der mathematischen Wissenschaften, Band 52. Springer-Verlag New York, Inc., New York, 1966.
* [22] P. M. Morse and H. Feshbach. Methods of theoretical physics. 2 volumes. McGraw-Hill Book Co., Inc., New York, 1953.
* [23] M. N. Olevskiĭ. Triorthogonal systems in spaces of constant curvature in which the equation $\Delta_{2}u+\lambda u=0$ allows a complete separation of variables. Matematicheskiĭ Sbornik, 27(69):379–426, 1950. (in Russian).
* [24] F. W. J. Olver. Asymptotics and special functions. AKP Classics. A K Peters Ltd., Wellesley, MA, 1997. Reprint of the 1974 original [Academic Press, New York].
* [25] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST handbook of mathematical functions. Cambridge University Press, Cambridge, 2010.
* [26] G. S. Pogosyan and P. Winternitz. Separation of variables and subgroup bases on $n$-dimensional hyperboloids. Journal of Mathematical Physics, 43(6):3387–3410, 2002.
* [27] W. P. Thurston. Three-dimensional geometry and topology. Vol. 1, volume 35 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1997. Edited by Silvio Levy.
* [28] R. J. Trudeau. The non-Euclidean revolution. Birkhäuser, Boston, 1987.
* [29] N. Ja. Vilenkin. Special functions and the theory of group representations. Translated from the Russian by V. N. Singh. Translations of Mathematical Monographs, Vol. 22. American Mathematical Society, Providence, R. I., 1968.
* [30] Z. Y. Wen and J. Avery. Some properties of hyperspherical harmonics. Journal of Mathematical Physics, 26(3):396–403, 1985.
1105.0438
# Performance improvement of an optical network providing services based on
multicast
Vincent Reinhard1,2 Johanne Cohen2, Joanna Tomasik1,
Dominique Barth2, Marc-Antoine Weisser1
(1) SUPELEC Systems Sciences, Computer Science Dpt.,
91192 Gif sur Yvette, France
email : FirstName.LastName@supelec.fr
(2) PRiSM, University of Versailles,
45 avenue des Etats-Unis, 78035 Versailles, France
email : FirstName.LastName@prism.uvsq.fr
###### Abstract
Operators of networks covering large areas are confronted with demands from
some of their customers who are virtual service providers. These providers may
call for a connectivity service which fulfils the specific requirements of their
services, for instance a multicast transmission with allocated bandwidth. On the
other hand, network operators want to make a profit by selling the connectivity
service of the requested quality to their customers while limiting their
infrastructure investments (or not investing anything at all).
We focus on circuit switching optical networks and work on repetitive
multicast demands whose source and destinations are a priori known by an
operator. He may therefore have corresponding trees “ready to be allocated”
and adapt his network infrastructure according to these recurrent
transmissions. This adjustment consists in setting available branching routers
in the selected nodes of a predefined tree. The branching nodes are opto-
electronic nodes which are able to duplicate data and retransmit it in several
directions. These nodes are, however, more expensive and more energy consuming
than transparent ones.
In this paper we are interested in the choice of nodes of a multicast tree
where the limited number of branching routers should be located in order to
minimize the amount of required bandwidth. After formally stating the problem
we solve it by proposing a polynomial algorithm whose optimality we prove. We
perform exhaustive computations to show an operator gain obtained by using our
algorithm. These computations are made for different methods of the multicast
tree construction. We conclude by giving dimensioning guidelines and outline
our further work.
## 1 Introduction
Optical networks have become a dominant technology in modern networks covering
large areas. Their advantage consists in providing an ultra-high bit rate
obtained with slight energy consumption. All-optical networks are particularly
interesting from economic and ecological point of view because a cost of
transparent routers is low and their energy consumption is negligible [6].
Modern networks face a growing demand on the part of service providers. New
offered services are more complex than the simple connectivity service assured
traditionally by network operators. Providers sell services like storage and
computation together with connectivity service to their customers. The part of
this market ensuring on-the-fly resource allocation, called for commercial
reasons Cloud Computing [1], is under a rapid development. In order to meet
the demands of their customers, virtual service providers have to purchase a
guaranteed connectivity service from network operators. At the same time,
network operators can deal with numerous virtual service providers. They are
interested in using their network resources as efficiently as possible and in this
way minimizing the cost of a prospective extension of their existing
infrastructure.
We studied the mechanisms to execute distributed applications in an optical
mesh network in the context of the CARRIOCAS project [2, 23]. Unlike a
customary approach applied in Grids where applications benefit from a
dedicated network infrastructure [9], this project went into the study of the
coexistence of massive distributed applications in a network whose operator
should make financial profit. With GMPLS [17] deployed, the CARRIOCAS network
has to ensure both unicast and multicast transmissions. Routers which are able
to duplicate data and send it in several directions allow a network operator
to lower the bandwidth amount necessary to construct a multicast tree. On the
other hand, these branching nodes are more expensive and more energy consuming
than the transparent ones. The realistic assumption is thus that only a subset
of routers is equipped with the duplicating functionality. In [19] we
presented our solution to the problem consisting in the construction of a tree
to any multicast request with minimization of the amount of used bandwidth
under assumption of a limited number of branching nodes. The solution is
heuristic because we proved that this problem is NP-complete. It turned out to
be the most effective when the branching nodes were placed in the most
homogeneous way in a network. The most homogeneous placement of $k$ branching
nodes represents in fact a solution to the $k$-centre problem which is also
NP-complete [10].
Our study mentioned above inspired us to explore certain special cases of
multicast demands. A network operator can know in advance recurrent multicast
transmissions which require a lot of bandwidth. Being aware of frequent
demands for identical (or almost identical) multicast transmissions an
operator may have corresponding trees “ready to be allocated” and adapt his
network infrastructure according to these recurrent transmissions. This
adjustment may consist in setting available branching routers in the selected
nodes of the predefined tree. In this paper we are interested in the choice of
nodes of a multicast tree where the branching routers should be located in
order to minimize the amount of required bandwidth. This approach allows an
operator to make his network more efficient without any additional cost.
In the following section we make a survey of existing solutions to multicast
tree allocation and explain the specificity of branching routers. In Section 3
our problem is stated in the formal way. We also formulate (Section 4) the
solution properties. Next, we propose an algorithm to solve our problem,
compute its complexity, and prove that it gives an optimal solution. Our
problem is evidenced to be polynomial. Section 6 presents the results of
bandwidth requirements for multicast trees depending on the number of
available branching routers. The multicast trees which are subject of this
analysis have been obtained by two methods, the first one based on the
shortest path approach and the second one based on the Steiner tree approach.
In the final section we give the conclusions and outline our further work.
## 2 Multicast tree construction
There are several schemes for multicasting data in networks [21, 12]. We
present here the schemes adapted to optical circuit switching networks. The
first one is to construct virtual circuits from the multicast source to each
destination. Such a scheme is equivalent to multiple unicasts (Fig. 1) and the
network bandwidth used by a large multicast group may become unacceptable
[16].
Figure 1: A multicast with source $A$ and destinations $G$, $H$, $K$ built up
as a set of unicasts (without branching nodes)
Figure 2: A multicast with source $A$, destinations $G$, $H$, $K$ and
branching nodes $C$, $F$
In another scheme the multicast source sends data to the first destination and
each destination acts as a source for the next destination until all
destinations receive the data flow. In yet another scheme, intermediate
routers make copies of data packets and dispatch them to their successors in
the multicast tree. This solution allows the multicast transmission to share
bandwidth on the common links. Numerous multicast tree algorithms, which
follow the latter scheme, have been proposed and can roughly be classified
into two categories [21]. The first category contains the algorithms based on
the shortest path while minimizing the weight of the path from the multicast
source to each destination. The second category contains algorithms based on
the Steiner tree problem [3, 5, 11, 14] which we formally define in Section 3.
The algorithms derived from the Steiner tree problem minimize the total weight
of the multicast tree. They are heuristic because the Steiner tree problem is
NP-complete [11].
From the technological point of view, routers able to duplicate packets
introduce a supplementary delay due to O/E/O conversions and are more
expensive. For these reasons network operators want to limit the number of
such routers which we call “diffusing nodes” or “branching nodes”. The
diffusing nodes which we consider are not equipped with the functionality
“drop-and-continue” [25] as this operation mode is nowadays applied in
practice exclusively in border routers. In Fig. 2 we go back to the example
illustrated in Fig. 1. This time there are two branching nodes which allow one
to reduce the amount of used bandwidth. Contrary to the solution built up of
unicasts, in the one with branching nodes the bandwidth is used only once in
each link.
## 3 Formalization of optimization problem
An optical network is modelled by a directed connected symmetrical graph [4]
$G=(V,E)$. A multicast request is a pair $\epsilon=(e,R)$, where $e\in V$ is a
multicast source and $R\subset V$ is a set of multicast destinations. We
suppose that all multicast requests which we deal with can be transmitted in
the network as a set of unicast transmissions (see Section 2). Therefore, we
do not have to make precise the amount of data to transfer. For a given
multicast request $\epsilon$ we first determine its tree,
$A_{\epsilon}=(V_{A_{\epsilon}},E_{A_{\epsilon}})$. This tree is a subgraph of
$G$ rooted in $e$, whose leaves are in the set $R$ and whose arcs are directed
from the root towards the leaves. We note $D_{A_{\epsilon}}$ the diffusing
nodes in $A_{\epsilon}$, $D_{A_{\epsilon}}\subseteq V_{A_{\epsilon}}$. Their
allowed number is written as $k$. We now try to determine the choice of
diffusing nodes in order to minimize the bandwidth consumption.
We will adopt, as a metric of the bandwidth used by a multicast request, the total
number of arc traversals in its tree, taking into account the fact that an
arc may transport the same data more than once. To define this metric formally
we start by determining the situations in which a request $\epsilon$ is
satisfied by a set of paths in the multicast tree,
$\mathcal{S}(D_{A_{\epsilon}})$. These situations are as follows:
* •
every node of $R$ is the final extremity of exactly one path in
$\mathcal{S}(D_{A_{\epsilon}})$,
* •
every node of $D_{A_{\epsilon}}$ is the final extremity of at most one path in
$\mathcal{S}(D_{A_{\epsilon}})$,
* •
the origin of a path in $\mathcal{S}(D_{A_{\epsilon}})$ is either $e$ or a
node of $D_{A_{\epsilon}}$; in the latter case, the node of $D_{A_{\epsilon}}$
is also the final extremity of a path in $\mathcal{S}(D_{A_{\epsilon}})$,
* •
any node $a\in D_{A_{\epsilon}}$ is in a path
$p\in\mathcal{S}(D_{A_{\epsilon}})$ only if it is the final extremity or the
origin of $p$.
The metric $\mbox{load}_{A_{\epsilon}}$ is defined as a sum of lengths of all
paths in $\mathcal{S}(D_{A_{\epsilon}})$. The optimization problem which
consists in placing $k$ diffusing nodes can be thus formalized as:
###### Problem 1
Diffusing Nodes in Multicast Tree Problem (DNMTP)
Data: a directed connected symmetrical graph $G=(V,E)$, a multicast request
$\epsilon=(e,R)$, a rooted multicast tree corresponding to this request
$A_{\epsilon}$, and a natural number $k$.
Goal: Find $D_{A_{\epsilon}}$, $|D_{A_{\epsilon}}|\leq k$ so that
$\mbox{load}_{A_{\epsilon}}$ is minimal.
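Before analysing the structure of the solutions, it may help to have a concrete, brute-force reference for DNMTP. The sketch below (in Python, with a hypothetical toy tree) relies on the observation that the last condition above forces every fed node to be served from its nearest strict ancestor in $D_{A_{\epsilon}}\cup\\{e\\}$, so that $\mbox{load}_{A_{\epsilon}}$ becomes a sum of distances; it is an exhaustive reference, not the polynomial algorithm of Section 5.

```python
from itertools import combinations

def load(parent, depth, e, R, D):
    # Every destination and every diffusing node (except e) is fed by one path
    # starting at its nearest strict ancestor in D ∪ {e}; the load is the total
    # number of arcs traversed by these paths.
    total = 0
    for v in set(R) | set(D):
        if v == e:
            continue
        u = parent[v]
        while u != e and u not in D:
            u = parent[u]
        total += depth[v] - depth[u]
    return total

def dnmtp_bruteforce(parent, depth, e, R, k):
    # Exhaustive search over all placements of at most k diffusing nodes.
    nodes = list(parent)                      # every non-root node is a candidate
    best = load(parent, depth, e, R, ())
    for size in range(1, k + 1):
        for D in combinations(nodes, size):
            best = min(best, load(parent, depth, e, R, D))
    return best

# Hypothetical toy tree: e -> a -> {b -> {r1, r2}, c -> r3}, destinations r1, r2, r3.
parent = {"a": "e", "b": "a", "c": "a", "r1": "b", "r2": "b", "r3": "c"}
depth = {"e": 0, "a": 1, "b": 2, "c": 2, "r1": 3, "r2": 3, "r3": 3}
print(dnmtp_bruteforce(parent, depth, "e", {"r1", "r2", "r3"}, k=1))   # prints 7
```

On this toy tree, a single diffusing router reduces the load from $9$ (pure unicasts) to $7$.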
## 4 Properties of the solution induced by the subset of vertices $D$
We now focus on a given multicast $\epsilon$ and we omit the subscript
$\epsilon$ in the formulæ for their clarity. This section is devoted to
studying properties of the solution induced by a set $D$ of diffusing nodes,
$\mathcal{S}(D)$. We introduce the notation used for its description. For any
$u$, $u\in V_{A}$ in $A$ we define $A^{u}$ as a sub-tree of $A$ rooted in $u$.
We also define three parameters of $u$ in $A$. A set $D^{u}$ is a set of
diffusing nodes in tree $A^{u}$ ($D^{u}\subseteq D$). A set $R^{u}$ is a set
of destination nodes in tree $A^{u}$ ($R^{u}\subseteq R$). $a^{u}$ is the arc
connecting $A^{u}$ to the remainder of $A$. We propose:
###### Definition 1
Let $D$ be a set of vertices in $A$. Let $u$ be a vertex in $A$. The _path
number_ $\mbox{pn}(u)$ is a number of paths in a solution $\mathcal{S}(D)$
spanned on $A$ which pass through $u$ or which terminate in $u$. The window of
the solution $\mathcal{S}(D)$ on arc $a^{u}$ is a triplet of integers
$(\beta,d,load)$ where $\beta$ is its path number $\mbox{pn}(u)$, $d=|D^{u}|$
and $load$ represents the load of $\mathcal{S}(D)$ in tree $A^{u}$.
We can notice that the solution induced by a set of diffusing nodes $D$
is completely described by the windows on all arcs of tree $A$.
###### Lemma 1
Let $D$ be a set of vertices in $A$. Let $u$ be a node having one child
$u_{1}$ of $A$. The window on arc $a^{u}$ is equal to
$\left\\{\begin{array}[]{llll}(1,d+1,load+1)&\mbox{\rm if}&u\in D&\\\
(b+1,d,load+b+1)&\mbox{\rm if}&u\notin D&\mbox{\rm and }u\in R\\\
(b,d,load+b)&\mbox{\rm if}&u\notin D&\mbox{\rm and }u\notin R\\\
\end{array}\right.$ (1)
where the window on arc $a^{u_{1}}$ is $(b,d,load)$.
Proof: First, we assume that $u\in D$ ($u$ is a diffusing node). By definition
of path number, arc $a^{u}$ has a path number equal to one. Since the window
on arc $a^{u_{1}}$ is $(b,d,load)$, tree $A^{u}$ contains one more diffusing
node than tree $A^{u_{1}}$.
Second, we assume that $u\notin D$. So, $u$ is not a diffusing node and tree
$A^{u}$ contains exactly the same set of diffusing nodes as tree $A^{u_{1}}$.
If $u\in R$, then $u$ is the final extremity of exactly one path. So the path
number on arc $a^{u}$ is equal to the path number on arc $a^{u_{1}}$ plus one.
If $u\notin R$, then the path number on arc $a^{u}$ is equal to the path
number on arc $a^{u_{1}}$.
From these statements, we can compute the load of the solution $\mathcal{S}$
induced by $D$ in tree $A^{u}$. The load increases by the path number on arc
$a^{u}$.
$\Box$
Now, using the same arguments in the proof of Lemma 1, we extend it when $u$
has several children.
###### Lemma 2
Let $u$ be a node of $A$ having $\ell$ children $u_{1}\dots,u_{\ell}$. The
window on arc $a^{u}$ is equal to
$\left\\{\begin{array}[]{llllll}(1,&1+\sum_{i=1}^{\ell}d_{i},&\sum_{i=1}^{\ell}load_{i}+1)&\mbox{\rm
if}&u\in D&\\\
(1+\sum_{i=1}^{\ell}b_{i},&\sum_{i=1}^{\ell}d_{i},&\sum_{i=1}^{\ell}(load_{i}+b_{i})+1)&\mbox{\rm
if}&u\notin D&\mbox{ and }u\in R\\\
(\sum_{i=1}^{\ell}b_{i},&\sum_{i=1}^{\ell}d_{i},&\sum_{i=1}^{\ell}(load_{i}+b_{i}))&\mbox{\rm
if}&u\notin D&\mbox{ and }u\notin R\\\ \end{array}\right.$ (2)
where the window on arc $a^{u_{i}}$ is $(b_{i},d_{i},load_{i})$ for any $i$,
$1\leq i\leq\ell$.
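A direct transcription of this recurrence gives a compact way to compute the window on every arc for a given set $D$. The sketch below (Python, reusing the hypothetical toy tree introduced in Section 3 above) is only meant to illustrate the bookkeeping; it assumes, as everywhere in this paper, that every tree leaf is a destination.

```python
def window(children, u, R, D):
    # Window (path number, number of diffusing nodes, load) on the arc a^u,
    # computed bottom-up exactly as in Lemmas 1 and 2; a destination leaf
    # reduces to the "unitary" window (1, 0, 1).
    b = d = load = 0
    for v in children.get(u, ()):
        bv, dv, lv = window(children, v, R, D)
        b, d, load = b + bv, d + dv, load + lv
    if u in D:
        return 1, d + 1, load + 1
    if u in R:
        return b + 1, d, load + b + 1
    return b, d, load + b

def total_load(children, e, R, D):
    # No arc a^e exists above the source, so the load of S(D) on the whole tree
    # is the sum of the window loads of the arcs leaving e.
    return sum(window(children, v, R, D)[2] for v in children[e])

children = {"e": ["a"], "a": ["b", "c"], "b": ["r1", "r2"], "c": ["r3"]}
print(total_load(children, "e", {"r1", "r2", "r3"}, {"a"}))   # prints 7, as in the brute force above
```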
Now, we want to compare two solutions by introducing a partial order for each
node.
###### Definition 2
Let $D$ and $D^{\prime}$ be two subsets of vertices in $A$. Let $v$ be a
vertex. $\mathcal{S}(D)\preceq_{v}\mathcal{S}(D^{\prime})$ if and only if the
following three conditions are simultaneously satisfied: (i) $b\leq
b^{\prime}$; (ii) $d\leq d^{\prime}$ (iii) $load\leq load^{\prime}$, where the
window of the solution $\mathcal{S}(D)$ (respectively
$\mathcal{S}(D^{\prime})$) on arc $a^{v}$ is $(b,d,load)$ (respectively
$(b^{\prime},d^{\prime},load^{\prime})$).
###### Property 1
Let $u$ be a vertex in $A$. Let $D$ and $D^{\prime}$ be two subsets of
vertices in $A$ such that $\mathcal{S}(D)\preceq_{u}\mathcal{S}(D^{\prime})$.
Then the solution induced by $D^{\prime\prime}$, where
$D^{\prime\prime}=(D^{\prime}\setminus D^{\prime u})\cup D^{u}$, satisfies the
following property
$\mathcal{S}(D^{\prime\prime})\preceq_{v}\mathcal{S}(D^{\prime})$ for all
nodes $v$ not in $A^{u}$
Proof: Let $P$ be the path between root $e$ and vertex $v$.
First, we focus on vertices $v$ outside $A^{u}$ and not in $P$. Since
${D^{\prime\prime v}}={D^{\prime v}}$, arc $a^{v}$ has the same window in the
solution $\mathcal{S}(D^{\prime\prime})$ as in the solution $\mathcal{S}(D^{\prime})$.
Second, we focus on vertices $v$ in $P$. By definition of the partial order
$\preceq_{u}$, we have (i) $b\leq b^{\prime}$, (ii) $d\leq d^{\prime}$, and
(iii) $load\leq load^{\prime}$, where the window of the solution
$\mathcal{S}(D)$ (respectively $\mathcal{S}(D^{\prime})$) on arc $a^{u}$ is
$(b,d,load)$ (respectively $(b^{\prime},d^{\prime},load^{\prime})$). Now, we
can compute the window of the solution $\mathcal{S}(D^{\prime\prime})$ on arc
$a^{t}$ where $t$ is the father of node $u$. Let
$(b^{\prime}_{t},d^{\prime}_{t},load^{\prime}_{t})$ be the window of the
solution $\mathcal{S}(D^{\prime})$ on arc $a^{t}$.
From Lemma 2, if $t\in D^{\prime}$, then the window of the solution
$\mathcal{S}(D^{\prime\prime})$ on arc $a^{t}$ is
$(1,d^{\prime}_{t}-d^{\prime}+d,load^{\prime}_{t}-load^{\prime}+load)$. Thus
$\mathcal{S}(D^{\prime\prime})\preceq_{t}\mathcal{S}(D^{\prime})$. We can
apply the same arguments as previously for the other case. The same reasoning
goes for each vertex of this path starting from the father of $t$ until the
root. This completes the proof of Property 1. $\Box$
###### Definition 3
Let $u$ be a vertex in $A$. Let $D$ be a subset of vertices in $A$. $D$ is
sub-optimal for $A^{u}$ if and only if for any $D^{\prime}$ which is a subset
of vertices in $A$ such that $d=d^{\prime}$ and $b=b^{\prime}$, we have
$load\leq load^{\prime}$ where the window of the solution $\mathcal{S}(D)$
(respectively $\mathcal{S}(D^{\prime})$) on arc $a^{u}$ is $(b,d,load)$
(respectively $(b^{\prime},d^{\prime},load^{\prime})$).
###### Property 2
Let $u$ be a vertex of $A$ having $\ell$ children $u_{1}\dots,u_{\ell}$. Let
$D$ be a subset of vertices in $A$. If $D$ is sub-optimal for node $u$, then
$D$ is also sub-optimal for node $u_{i}$, for any integer $i$, $1\leq
i\leq\ell$.
Proof: We can prove this property by contradiction. Assume that there is at
least one child $u_{i}$ of $u$ such that $D$ is not sub-optimal for node
$u_{i}$. This implies that there exists a subset $D^{\prime}$ which is
sub-optimal for node $u_{i}$ and such that $d=d^{\prime}$, $b=b^{\prime}$ and
$load^{\prime}<load$, where the window of the solution
$\mathcal{S}(D)$ (respectively $\mathcal{S}(D^{\prime})$) on arc $a^{u_{i}}$ is
$(b,d,load)$ (respectively $(b^{\prime},d^{\prime},load^{\prime})$). So, using
Property 1, we can construct the subset
$D^{\prime\prime}=(D\setminus D^{u_{i}})\cup D^{\prime u_{i}}$, which satisfies
$\mathcal{S}(D^{\prime\prime})\preceq_{u}\mathcal{S}(D)$ and, by Lemma 2, has a
strictly smaller load on arc $a^{u}$. Hence $D$ is not sub-optimal for node
$u$, which is a contradiction.
$\Box$
## 5 Algorithm, its complexity and optimality
Our algorithm is based on the dynamic approach. We introduce the notation used
for its description. For any $u$, $u\in V_{A_{\epsilon}}$ in $A_{\epsilon}$ we
define $A_{\epsilon}^{u}$ as a sub-tree of $A_{\epsilon}$ rooted in $u$. We
also define two parameters of $u$ in $A_{\epsilon}$. The height $h(u)$ is a
distance between $u$ and $e$ in $A_{\epsilon}$. We also note
$h_{\max}=\max_{u\in V_{A_{\epsilon}}}h(u)$. The path number $\mbox{pn}(u)$ is
a number of paths in a solution $\mathcal{S}(D_{A_{\epsilon}})$ with a given
set of diffusing nodes spanned on $A_{\epsilon}$ which pass through $u$ or
which terminate in $u$. It is obvious that if $u$ is a branching node then
$\mbox{pn}(u)=1$.
The idea of our algorithm is to compute for any $u$, $u\in V(A)$, some sub-
optimal sets $D$ of diffusing nodes for $A^{u}$ where the window of the
solution $\mathcal{S}(D)$ on arc $a^{u}$ is $(b,d,load)$. One set $D$ is
constructed for any value $b$, $1\leq b\leq|R^{u}|$, any value $d$, $0\leq
d\leq k$. As the reader might already guess, a sub-optimal set $D$ for the
root $e$ gives a solution to our problem. We want therefore to find these sets
starting from the leaves and ending up in the root of $A$. As $u$ may be or
may not be a diffusing node, we have to know how to compute the two sets for
both the cases.
Procedure Mat_Vec_Filling
$1.$ If $u$ is a leaf then attribute the “unitary” $M(u)$ and $L(u)$ to $u$ endIf
$2.$ If $u$ is not a leaf then
$3.$ choose arbitrarily $v$ which is one of the successors of $u$ in $A_{\epsilon}^{u}$;
$4.$ First_Succ_Mat_Vec(u,v); mark $v$;
$5.$ While there is a successor of $u$ in $A_{\epsilon}^{u}$ which has not been marked yet do
$6.$ choose arbitrarily $w$ among the non-marked successors of $u$ in $A_{\epsilon}^{u}$;
$7.$ Others_Succ_Mat_Vec(u,w); mark $w$
$8.$ endWhile
$9.$ endIf
Figure 3: The procedure Mat_Vec_Filling
As $u$ may not be equipped with the branching property, the minimal load of
the sub-optimal set for it should be stored in the matrix $M(u)$ whose rows
are indexed by the number of diffusing nodes deployed in $A^{u}$ (these indices
are $0,1,\ldots,k$) and whose columns are indexed by $\mbox{pn}(u)$ (these
indices are $1,2,\ldots,|R|$). If a solution does not exist, the
corresponding matrix element is equal to zero.
As $u$ may become a branching node, the minimal load of the sub-optimal set
can be stored in a line vector $L(u)$ because the path number of a diffusing
node is always equal to one.
In a nutshell: $M_{i,j}(u)=\alpha\neq 0$ ($L_{i}(u)=\alpha\neq 0$,
respectively) if and only if a sub-optimal set $D$ exists in $A^{u}$ having
its window on arc $a^{u}$ equal to $(j,i,\alpha)$. (respectively to
$(1,i,\alpha)$). For computational reasons the destinations $u$, which are
leaves of $A$, have “unitary” matrix and vector attributed: $M_{0,1}(u)=1$,
$L_{1}(u)=1$ and all other elements are zero.
As we have said above, our algorithm to solve the DNMTP attributes to each
node $u$ its $M(u)$ and $L(u)$ starting from the leaves whose height is
$H=h_{\max}$ and performing the bottom-up scheme with $H=H-1$ until the root
is reached ($H=0$). The attribution of $M(u)$ and $L(u)$ to $u$ is realised by
the procedure `Mat_Vec_Filling` (Fig. 3). This procedure takes a node $u$ and
its corresponding sub-tree as data. Intuitively speaking, this is a modified
breadth-first search [13] in which one arbitrarily chosen successor, treated
first, computes its matrix and vector (the `First_Succ_Mat_Vec` procedure) in
a different way from its brothers (the `Others_Succ_Mat_Vec` procedure). The
leaves have the “unitary” matrix and vector assigned.
Procedure First_Succ_Mat_Vec(u,v)
$1.$ $L_{1}(u)=1+\min^{+}_{j}(M_{0,j}(v))$
$2.$ ForAll $i$ such that $0<i\leq k$ do
$3.$ $L_{i}(u)=1+\min^{+}(\min^{+}_{j}(M_{i-1,j}(v)),L_{i-1}(v))$
$4.$ endForAll
$5.$ ForAll $i$ such that $0\leq i\leq k$ do
$6.$ ForAll $j$ such that $0<j\leq|R|$ do
$7.$ If $j==1$ then $\mbox{\tt elT}=1+\min^{+}(M_{i,1}(v),L_{i}(v))$
$8.$ else $\mbox{\tt elT}=j+M_{i,j}(v)$
$9.$ endIf;
$10.$ If $u$ is a destination of multicast $\epsilon$ then
$11.$ $M_{i,j+1}(u)=\mbox{\tt elT}+1$ else $M_{i,j}(u)=\mbox{\tt elT}$
$12.$ endIf
$13.$ endForAll
$14.$ endForAll
Figure 4: The procedure First_Succ_Mat_Vec
The procedure `First_Succ_Mat_Vec` operates on a node $u$ and one of its
successors $v$ for which $M(v)$ and $L(v)$ are already known as
`Mat_Vec_Filling` follows a bottom-up approach (Fig. 4). It uses the variable
`elT` to store the non-zero elements in row $i$ of $M(u)$ and in $L(u)$. The
procedure executes the function $\min^{+}$ whose two arguments are natural numbers. It
returns the minimum of these two values except in the case in which one of
the arguments is zero; the other, positive, argument is then returned. The main
idea is based on the observation that the weight of the multicast tree in
$A^{v}\cup\\{u\\}$ is equal to the multicast weight in $A^{v}$ incremented by
the weight of reaching $u$ which is itself equal to $\mbox{pn}(u)$. Let us
remind the reader that $\mbox{pn}(u)=1$ when $u$ is a diffusing node and
$\mbox{pn}(u)$ is a matrix row index otherwise.
### Remark 1:
From Lemma 1 and Property 1, we can deduce, that if $u$ has one child $v$ for
any $i,\ j$, $0\leq i\leq k$ and $1\leq j\leq|R|$
* •
$L_{i}(u)=1+\min^{+}(L_{i-1}(v),\min^{+}\\{M_{i-1,j}(v):1\leq j\leq|R|\\})$
* •
$M_{i,1}(u)=1+\min^{+}(L_{i}(v),M_{i,1}(v))$
* •
$M_{i,j}(u)=j+M_{i,j^{\prime}}(v)$ where $j\neq 1$, and $j^{\prime}=j-1$ if
$u\in R$, otherwise $j^{\prime}=j$
The procedure `First_Succ_Mat_Vec` computes the formulæ here above.
Procedure Others_Succ_Mat_Vec(u,w)
$1.$ ForAll $i$ such that $0<i\leq k$ do
$2.$ $\mbox{\tt elT}=\infty$;
$3.$ ForAll $(x,y)$ such that $(x,y)\in\\{1,2,\ldots,k\\}\times\\{1,2,\ldots,k\\}$ and $x+y=i$ do
$4.$ If $(L_{x}(u)+L_{y}(w))<\mbox{\tt elT}$ then $\mbox{\tt elT}=L_{x}(u)+L_{y}(w)$ endIf;
$5.$ If $(L_{x}(u)+\min^{+}_{j}(M_{y,j}(w)))<\mbox{\tt elT}$ then $\mbox{\tt elT}=L_{x}(u)+\min^{+}_{j}(M_{y,j}(w))$
$6.$ endIf;
$7.$ endForAll
$8.$ $L^{\prime}_{i}(u)=\mbox{\tt elT}$
$9.$ endForAll;
$10.$ ForAll $i$ such that $0\leq i\leq k$ do
$11.$ $\mbox{\tt elT}=\infty$;
$12.$ ForAll $(x,y)$ such that $(x,y)\in\\{0,1,\ldots,k\\}\times\\{0,1,\ldots,k\\}$ and $x+y=i$ do
$13.$ ForAll $j$ such that $0<j\leq|R|$ do
$14.$ $\mbox{\tt elT}=\infty$;
$15.$ ForAll $(a,b)$ such that $(a,b)\in\\{0,1,\ldots,|R|\\}\times\\{0,1,\ldots,|R|\\}$ and $a+b=j$ do
$16.$ If $M_{x,a}(u)+M_{y,b}(w)+b<\mbox{\tt elT}$ then
$17.$ $\mbox{\tt elT}=M_{x,a}(u)+M_{y,b}(w)+b$
$18.$ endIf
$19.$ endForAll;
$20.$ $M^{\prime}_{i,j}(u)=\min^{+}(\mbox{\tt elT},M_{x,j-1}(u)+L_{y}(w)+1)$
$21.$ endForAll
$22.$ endForAll
$23.$ endForAll
$24.$ $L(u)\leftarrow L^{\prime}(u)$; $M(u)\leftarrow M^{\prime}(u)$;
Figure 5: The procedure Others_Succ_Mat_Vec
In lines 1–4 of `First_Succ_Mat_Vec`, $L(u)$ is computed for $u$ seen as a diffusing node. On the
$i^{\mbox{\scriptsize th}}$ step the smallest positive weight is chosen
between the weights of its successor $v$ seen as a diffusing and a non-diffusing
node. These weights are taken for $v$ with one diffusing node less because $u$
itself is diffusing. This weight is increased by the weight of reaching $u$
which is one as $u$ is diffusing. Lines 5–9 fill up $M(u)$ when $u$ is seen as
non-diffusing. Line 7 treats the case in which only one path passes through or
terminates in $u$. The successor of $u$ can be either a diffusing or non-
diffusing node. Otherwise (line 8) its successor has to be a non-diffusing
node. The case in which $u$ is a destination despite the fact that it is not a
leaf in ${A}$ is treated in lines 10–12 as the weight of the access to $u$ has
to be added.
`Others_Succ_Mat_Vec` (Fig. 5) operates on a node $u$ and its successors $w$
different from $v$ which has already been examined in `First_Succ_Mat_Vec`.
The procedure uses the variable `elT` as `First_Succ_Mat_Vec` does.
Furthermore, the procedure makes use of the auxiliary variables
$M^{\prime}(u)$ and $L^{\prime}(u)$ to store the new values of $M(u)$ and
$L(u)$ as the current elements of $M(u)$ and $L(u)$ are still in use. The
procedure `Others_Succ_Mat_Vec` is built up on the same principle as the
previous one. Lines 1–9 treat the filling up of $L(u)$ and lines 10–23 treat
the filling up of $M(u)$. The important difference consists in traversing all
the couples $(x,y),x,y=1,2\ldots,k$ or $x,y=0,1,\ldots,k$ such that $x+y=i$.
It leads from the fact that this time the weight of the multicast tree in
$A^{v}\cup A^{v}\cup\\{u\\}$ is equal to the sum of the multicast weights in
$A^{v}\cup\\{u\\}$ and in $A^{w}$ with the branching nodes deployed in both
$A^{v}\cup\\{u\\}$ and $A^{w}$. The matrix computation also requires an
appropriate path number in order to determine the additional tree weight
(lines 15–19).
### Remark 2:
From Lemma 2 and Property 2, we can deduce, that if $u$ has $\ell$ children
$u_{1},\dots,u_{\ell}$ for any $i,\ j$, $0\leq i\leq k$ and $1\leq j\leq|R|$
* •
$L^{\ell}_{i}(u)=\min^{+}\\{L^{\ell-1}_{i^{\prime}}(u)+L_{i^{\prime\prime}}(u_{\ell}),L^{\ell-1}_{i^{\prime}}(u)+\min^{+}_{j:1\leq
j\leq|R|}M_{i^{\prime\prime},j}(u_{\ell}):i^{\prime}+i^{\prime\prime}=i\\}$
* •
$M^{\ell}_{i,j}(u)=\min^{+}\left(\begin{array}[]{@{}ll@{}}\min^{+}\\{M^{\ell-1}_{i^{\prime},j^{\prime}}(u)+M_{i^{\prime\prime},j^{\prime\prime}}(u_{\ell})+j^{\prime\prime}&:i^{\prime}+i^{\prime\prime}=i\land\
j^{\prime}+j^{\prime\prime}=j\\},\\\
\min^{+}\\{M^{\ell-1}_{i^{\prime},j-1}(u)+L_{i^{\prime\prime}}(u_{\ell})+1&:i^{\prime}+i^{\prime\prime}=i\\},\\\
\end{array}\right)$
where $M^{f}(u)$ and $L^{f}(u)$ correspond to the matrix and the vector
computed by the algorithm for the sub-tree of $A^{u}$ where $u$ has only $f$
children $u_{1},\dots,u_{f}$.
The procedure `Others_Succ_Mat_Vec` computes the formulæ here above.
###### Theorem 1
The optimal set of diffusing nodes is obtained from the configuration associated
with $\min_{i:1\leq i\leq k}L_{i}(e)$. The complexity of the algorithm is ${\cal
O}(k^{2}|R|^{2}|V_{A}|)$.
Proof: From Remarks 1 and 2, we can deduce that for any $u$ in $V(A)$ the
algorithm `Mat_Vec_Filling` computes the vector $L(u)$ and the matrix $M(u)$ such
that, for all $b$, $1\leq b\leq|R|$, and all $d$, $0\leq d\leq k$, they store the
loads of two sub-optimal sets of diffusing nodes: one has load $L_{d}(u)$ and the other
has load $M_{d,b}(u)$. $\Box$
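For completeness, the whole dynamic programme can be condensed into a few lines once the children of a node are combined by a small knapsack over the pairs (number of diffusing nodes, path number). The sketch below is a free re-formulation of the recurrences of Remarks 1 and 2, not a line-by-line transcription of `First_Succ_Mat_Vec` and `Others_Succ_Mat_Vec`; in particular, the source $e$ is treated as a free path origin with no arc above it, which is our reading of the role of $L(e)$ in Theorem 1.

```python
INF = float("inf")

def dnmtp_dp(children, root, R, k):
    # children: node -> list of children in the multicast tree A rooted at `root`
    # R: set of destinations, k: maximal number of diffusing nodes.
    # Every leaf of A is assumed to be a destination.
    def combine(u):
        # Knapsack over the children of u: (d, b) -> minimal total load of the
        # children windows, d diffusing nodes used below u, b paths crossing into u.
        comb = {(0, 0): 0}
        for v in children.get(u, ()):
            Lv, Mv = tables(v)
            opts = [(dv, 1, c) for dv, c in Lv.items()] + \
                   [(dv, bv, c) for (dv, bv), c in Mv.items()]
            new = {}
            for (d0, b0), c0 in comb.items():
                for d1, b1, c1 in opts:
                    if d0 + d1 <= k:
                        key = (d0 + d1, b0 + b1)
                        new[key] = min(new.get(key, INF), c0 + c1)
            comb = new
        return comb

    def tables(u):
        # L[d]      : minimal window load on a^u with u diffusing (d nodes in A^u),
        # M[(d, b)] : minimal window load with u not diffusing and path number b.
        if not children.get(u):
            return {1: 1}, {(0, 1): 1}           # "unitary" tables of a destination leaf
        L, M = {}, {}
        for (d, b), c in combine(u).items():
            if d + 1 <= k:                       # u diffusing: window (1, d+1, c+1)
                L[d + 1] = min(L.get(d + 1, INF), c + 1)
            bb = b + 1 if u in R else b          # u not diffusing: window (bb, d, c+bb)
            M[(d, bb)] = min(M.get((d, bb), INF), c + bb)
        return L, M

    return min(combine(root).values())           # the source originates paths for free

children = {"e": ["a"], "a": ["b", "c"], "b": ["r1", "r2"], "c": ["r3"]}
print(dnmtp_dp(children, "e", {"r1", "r2", "r3"}, k=1))   # prints 7, matching the brute force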
## 6 Numerical results
Our algorithm determines the optimal locations for $k$ diffusing nodes in
a multicast tree which has already been created for a request
$\epsilon=(e,R)$. As we have signalled in Section 2 there are numerous methods
of construction of these trees. We selected two heuristic methods in order to
observe their impact on the efficiency of our algorithm. The first one
establishes a shortest path (ShP) between $e$ and each $r\in R$. The
corresponding multicast tree $A_{\epsilon}^{\mbox{\scriptsize ShP}}$ is a
union of these shortest paths. The second method, which is based on the
$2$-approximation algorithm for the Steiner tree problem proposed in [22], gives
the $A_{\epsilon}^{\mbox{\scriptsize StT}}$ tree. The Steiner tree problem, formalized
in terms of a multicast demand, can be written as:
###### Problem 2
Steiner Tree Problem (StTP)
Data: a connected undirected graph $G=(V,E)$, a multicast request
$\epsilon=(e,R)$, and a natural number $k$.
Question: Does a rooted tree $A_{\epsilon}$ exist such that the number of its
arcs is less than or equal to $k$?
The heuristic algorithm of [22] couples the polynomial algorithms for a minimum-
weight spanning tree [15] and for shortest paths [8].
To generate a graph of $200$ nodes we apply the Waxman model [24] of BRITE
[18] (with default parameters). We estimate, with $5$% precision at the
significance level $\alpha=0.05$, the average weight of the multicast tree as a
function of the number of destinations for both algorithms which construct a
tree. For each number of destinations we choose uniformly in $V$ a multicast
source $e$ and next, we select the destinations of this source according to
the uniform distribution in $V-\\{e\\}$.
Figure 6: Average $A_{\epsilon}^{\mbox{\scriptsize ShP}}$ weight as a function
of the number of destinations with and without diffusing nodes
Figure 7: Average $A_{\epsilon}^{\mbox{\scriptsize StT}}$ weight as a function
of the number of destinations with and without diffusing nodes
In order to perceive the impact of diffusing nodes on the tree weight we
perform the computations with four diffusing nodes placed by our algorithm, and without
them. In Fig. 6 we observe that the weight reduction obtained for ShP with the
diffusing nodes is significant (about $31$% for $32$ destinations). The
improvement obtained by the introduction of diffusing nodes into the trees
built with StT (Fig. 7) is even more substantial than in the previous case
(about $65$% for $16$ destinations). These two figures exhibit that ShP
generates trees whose weight is less than those generated by StT. This fact is
not astonishing as ShP always chooses a shortest path between the source and
any destination.
In Fig. 8, in which the relative difference between
$A_{\epsilon}^{\mbox{\scriptsize ShP}}$ and $A_{\epsilon}^{\mbox{\scriptsize
StT}}$ weights is depicted as a percentage of the StT tree weight, we notice, however,
that this tendency is inverted for multicast trees with few destinations (up to
$18$). To explain this phenomenon we notice that 1) with a small number of
destinations the shortest paths identified by ShP are disjoint, and 2)
typically, the edges of a tree obtained by shortest paths are more numerous
than those of a Steiner tree computed for an identical multicast demand.
Figure 8: Difference between weights of $A_{\epsilon}^{\mbox{\scriptsize
ShP}}$ and $A_{\epsilon}^{\mbox{\scriptsize StT}}$ as
$A_{\epsilon}^{\mbox{\scriptsize StT}}$ weight percentage with four diffusing
nodes as a function of the number of destinations
We now fix the number of destinations to $20$ and we estimate the weights of
trees obtained with the ShP and StT algorithms as a function of the number of
branching nodes. We remind the reader that for $20$ destinations and $4$
diffusing nodes ShP turned out to be slightly more efficient than StT. Figs.
9 and 10 also show the average weight of ShP and StT trees estimated in the
absence of diffusing nodes. In accordance with the comment made above for the
case without diffusing nodes, ShP trees are almost twice as good
as StT ones.
Figure 9: Average $A_{\epsilon}^{\mbox{\scriptsize ShP}}$ weight as a function
of the number of diffusing nodes for $20$ multicast destinations
Figure 10: Average $A_{\epsilon}^{\mbox{\scriptsize StT}}$ weight as a
function of the number of diffusing nodes for $20$ multicast destinations
The introduction of three diffusing nodes reduces the weight of
$A_{\epsilon}^{\mbox{\scriptsize ShP}}$ by about $20$% (Fig. 9). Further
additions allow one to lower the tree weight by almost $40$% for $15$
branching nodes. The influence of the branching nodes on the reduction of the
tree weight in the StT case is striking (Fig. 10): an improvement of almost
$60$% for three branching nodes, rising to almost $75$% for $15$ of
them. Confronting the results of ShP and StT with diffusing nodes we observe
that StT, despite starting from a worse position, reaches a tree
weight of $40$ where ShP only reaches $48$.
In Fig. 11 we observe the relative difference of ShP and StT tree weights for
$20$ multicast destinations as a function of the number of diffusing nodes. It
is not surprising that for this relatively large number of destinations and
few diffusing nodes StT exhibits better performance than ShP. When the number
of branching nodes increases and approaches the number of destinations, ShP
trees become lighter than StT ones for the same reasons as those mentioned in
the comments on Fig. 8.
The next question we ask ourselves concerns the detection of the numbers of
diffusing nodes and destinations up to which StT is more advantageous than
ShP. For the network investigated above the critical point is $(4,18)$. In
Fig. 12 we mark critical points starting from which the ShP tree gives
“lighter” solutions. For the points above the line we recommend ShP (for
example, for three branching nodes and $30$ destinations), for those below the
line we recommend StT.
Figure 11: Difference between weights of ShP and StT trees as a percentage of
StT tree weight as a function of the number of diffusing nodes for $20$
multicast destinations
As the critical points depicted in Fig. 12 form a straight line whose slope is
five, we are now interested in what this gradient depends on. One may guess
that it is determined by the average degree of the network. Indeed, as we see
in Fig. 13, the gradient decreases as the average node degree increases.
Consequently, the slope of the line seen in Fig. 12 decreases as the average degree
grows. Therefore we conclude that StT is more favourable for loosely
connected graphs and ShP is better for dense networks.
Figure 12: Critical point line determining the utility of ShP and StT trees
Figure 13: Gradient of the critical point line as a function of the average graph
degree
## 7 Conclusions and further work
We studied a problem of infrastructural design of a commercial optical meshed
network with a circuit switching routing protocol deployed. This problem was
stated within the context of virtual services based on multicast transmission.
It concerns frequent and voluminous multicast demands whose source and
destinations are a priori known and its solution determines the locations
of branching nodes (i.e. routers with higher cost and energy consumption but
which allow one to duplicate data and retransmit it in different directions).
A solution to this problem allows a network operator to use his available
resources more efficiently and make more profit with less, or even without
any, investment.
After formally stating the problem we proposed an algorithm to solve it. Next,
we proved its optimality and computed its complexity which is polynomial. We
computed a gain in terms of the used bandwidth compared with multicast trees
without any diffusing nodes. Among the two heuristic algorithms which we used
to deploy multicast trees, the first is based on the shortest path approach
(ShP) and the second one exploits a solution to the Steiner tree problem (StT)
in undirected graphs. We performed exhaustive computations in order to compare
the efficiency of our algorithm for multicast trees built with ShP and StT. We
observed the dependency of their efficiency on the numbers of diffusing nodes
and destinations. This dependency is influenced by the average network degree.
StT works better in loosely connected networks whereas ShP is more efficient
for strongly connected ones.
Generally speaking, we found ShP more efficient than StT in finding a multicast
tree. We should not forget, however, that we used a $2$-approximation
algorithm. It is not excluded that a more precise StT algorithm (for example
[7, 20]) may give better results. We consider implementing these algorithms in
order to verify their performance for our purposes.
We plan to continue this work in order to determine a specific solution in
particular graphs (for example having bounded treewidth). We conjecture that
our algorithm could be extended to this kind of graph. On the other hand we
consider pursuing our work on optimal multicast deployment by studying the
Steiner problem in certain oriented graphs.
## References
* [1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. A view of Cloud Computing. Comm. ACM, 53:50–58, April 2010.
* [2] O. Audouin. CARRIOCAS description and how it will require changes in the network to support Grids. In 20th Open Grid Forum, 2007.
  * [3] J. Beasley. An SST-based algorithm for the Steiner problem in graphs. Networks, 19, 1989.
* [4] C. Berge. The theory of graphs and its applications. Wiley, 1966.
* [5] K. Bharath-Kumar and J. M. Jaffe. Routing to multiple destinations in computer networks. IEEE Trans. Commun., 31:343–351, March 1983.
* [6] E. Bonetto, L. Chiaraviglio, D. Cuda, G. Gavilanes Castillo, and F. Neri. Optical technologies can improve the energy efficiency of networks. In ECOC, 2009.
* [7] J. Byrka, F. Grandoni, T. Rothvoß, and L. Sanita. An improved LP-based approximation for Steiner tree. In ACM-STOC, 2010.
* [8] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.
* [9] I. T. Foster, C. Kesselman, J. M. Nick, and S. Tuecke. Grid services for distributed system integration. IEEE Computer, 35(6):37–46, 2002.
* [10] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and company, 25th edition, 1979.
* [11] F. K. Hwang and D. S. Richards. Steiner tree problems. Networks, 22(1), 1992.
* [12] M. Jeong, C. Qiao, Y. Xiong, H. C. Cankaya, and M. Vandenhoute. Efficient multicast schemes for optical burst-switched WDM networks. In ICC, 2000.
* [13] D. E. Knuth. The Art Of Computer Programming, vol. 1. Addison-Wesley, 1997.
* [14] V. P. Kompella, J. Pasquale, and G. C. Polyzos. Multicasting for multimedia applications. In INFOCOM, 1992.
* [15] J. B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proc. of AMS, 7(1):48–50, February 1956.
* [16] R. Malli, X. Zhang, and C. Qiao. Benefit of multicasting in all-optical networks. In SPIE All Optical Networking, pages 209–220, 1998.
* [17] E. Mannie. RFC 3945 — GMPLS, October 2004.
* [18] A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE: An approach to universal topology generation. In MASCOTS, Cincinnati, OH, USA, August 2001.
  * [19] V. Reinhard, J. Tomasik, D. Barth, and M-A. Weisser. Bandwidth optimisation for multicast transmissions in virtual circuits networks. In IFIP Networking, 2009.
* [20] G. Robins and A. Zelikovsky. Improved Steiner tree approximation in graphs. In ACM-SIAM, 2000.
* [21] H. F. Salama, D. S. Reeves, and Y. Viniotis. Evaluation of multicast routing algorithms for real-time communication on high-speed networks. IEEE J. on Sel. Areas in Comm., 15(3):332–345, 1997.
* [22] H. Takahashi and A. Matsuyama. An approximate solution for the Steiner problem in graphs. Math. Jap., 24(6):573–577, 1980.
* [23] D. Verchère, O. Audouin, B. Berde, A. Chiosi, R. Douville, H. Pouylau, P. Primet, M. Pasin, S. Soudan, D. Barth, C. Caderé, V. Reinhard, and J. Tomasik. Automatic network services aligned with grid application requirements in CARRIOCAS project. In GridNets, pages 196–205, 2008.
* [24] B. M. Waxman. Routing of multipoint connections. J-SAC, 6(9), 1988.
* [25] X. Zhang, J. Y. Wei, and C. Qiao. Constrained multicast routing in WDM networks with sparse light splitting. J. of Lightwave Tech., 18(12):1917–1927, 2000.
|
arxiv-papers
| 2011-05-02T20:49:00 |
2024-09-04T02:49:18.547733
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Vincent Reinhard, Johanne Cohen, Joanna Tomasik, Dominique Barth,\n Marc-Antoine Weisser",
"submitter": "Johanne Cohen",
"url": "https://arxiv.org/abs/1105.0438"
}
|
1105.0577
|
# Hadronic Rapidity Spectra in Heavy Ion Collisions at SPS and AGS energies in
a Quark Combination Model††thanks: Supported by National Natural Science
Foundation of China (10775089,10947007,10975092)
SUN Le-Xue1 WANG Rui-Qin2 SONG Jun2 SHAO Feng-Lan1;1) shaofl@mail.sdu.edu.cn 1
Department of Physics, Qufu Normal University, Shandong 273165, China
2 Department of Physics, Shandong University, Shandong 250100, China
###### Abstract
The quark combination mechanism of hadron production is applied to nucleus-
nucleus collisions at the CERN Super Proton Synchrotron (SPS) and BNL
Alternating Gradient Synchrotron (AGS). The rapidity spectra of identified
hadrons and their spectrum widths are studied. The data of $\pi^{-}$,
$K^{\pm}$, $\phi$, $\Lambda$, $\overline{\Lambda}$, $\Xi^{-}$, and
$\overline{\Xi}^{+}$ at 80 and 40 AGeV, in particular at 30 and 20 AGeV where
the onset of deconfinement is suggested to happen, are consistently described
by the quark combination model. However, at AGS 11.6 AGeV, below the onset, the
spectra of $\pi^{\pm}$, $K^{\pm}$ and $\Lambda$ cannot be simultaneously
explained, indicating the disappearance of the intrinsic correlation of their
production at the constituent quark level. The collision-energy dependence of
the rapidity spectrum widths of constituent quarks and the strangeness of the
hot and dense quark matter produced in heavy ion collisions are obtained and
discussed.
###### keywords:
relativistic heavy ion collisions, rapidity spectra, quark combination
###### pacs:
25.75.Dw, 25.75.Nq, 25.75.-q
## 1 Introduction
The production of quark gluon plasma (QGP) and its properties are hot topics
in relativistic heavy ion collisions. A huge number of possible QGP signals
were proposed and measured, and many unexpected novel phenomena were observed
at RHIC and SPS [2, 3, 4, 5, 6]. These experimental data greatly contribute to
the identification of QGP and the understanding of its properties and
hadronization from different aspects. In particular, there is a class of
phenomena of special interest, e.g. the abnormally high ratio of baryons to
mesons and the quark number scaling of hadron elliptic flows in the
intermediate $p_{T}$ range [7, 8]. They reveal the novel features of
hadron production in relativistic heavy ion collisions.
In quark combination/coalescence scenario, hadrons are combined from quarks
and antiquarks, i.e., a quark-antiquark pair merges into a meson and three
quarks into a baryon. The production difference between baryon and meson
mainly results from their different constituent quark numbers. It is shown
that such a simple quark number counting can naturally explain those striking
features of hadron production observed at RHIC [9, 10, 11], while the
fragmentation mechanism cannot.
The highlights at RHIC mainly concern hadron production in the transverse
direction, where the quark combination scenario shows most clearly. In fact, the longitudinal
rapidity distribution of hadrons is also a good tool for testing the
hadronization mechanism. In previous work[12, 13], we have used the quark
combination model to successfully describe the rapidity spectra of various
hadrons at RHIC $\sqrt{s_{NN}}=200$ GeV and top SPS $E_{beam}=158$ AGeV. At
other collision energies where the QGP may be produced, e.g., at lower SPS and
higher LHC energies, does the quark combination mechanism still work well? The
Beam Energy Scan at RHIC and the NA49 Collaboration have provided abundant
data on hadron production in the energy region from 6 GeV to 20 GeV. In this
paper, we extend the quark combination model to systematically study the
rapidity distributions of various identified hadrons in heavy ion collisions
at SPS $E_{beam}=80,~{}40,~{}30,~{}20$ AGeV and AGS $E_{beam}=11.6$ AGeV and
test the applicability of the quark combination mechanism.
## 2 A brief introduction to the quark combination model
The quark combination model deals with how quarks and antiquarks convert to
color-singlet hadrons as the partonic matter evolves to the interface of
hadronization. The basic idea is to line up all the quarks and antiquarks
in a one-dimensional order in phase space, e.g., in rapidity, and let them
combine into primary hadrons one by one following a combination rule based on
QCD and on near-correlation requirements in phase space. See Sec. II of
Ref.[13] for detailed description of such a rule. Here, we consider only the
production of SU(3) ground states, i.e. 36-plets of mesons and 56-plets of
baryons. The flavor SU(3) symmetry with strangeness suppression in the yields
of initially produced hadrons is fulfilled in the model. The decay of
short-lived hadrons is systematically taken into account to get the spectra
comparable to the data. The model has reproduced the experimental data for
hadron multiplicity ratios, momentum distributions and the elliptic flows of
identified hadrons, etc., in heavy ion collisions at RHIC and top SPS
energies[14, 15, 17, 16, 13], and addressed the entropy issue[18] and the
exotic state production[19].
## 3 Rapidity spectra of constituent quarks
The rapidity spectra of constituent quarks just before hadronization are
needed as the input of the quark combination model. Considering that the
collision energies studied here are much lower than RHIC energies, and in
particular 30-20 AGeV is the possible region for the onset of deconfinement,
it is not sure whether the hot and dense quark matter is exactly produced at
these energies, so applying a model or theory for the evolution of the hot and
dense quark matter, e.g., relativistic hydrodynamics, to get the quark spectra
before hadronization may be uneconomic or infeasible. In this paper, assuming
the hot quark matter has been created, we parameterize the rapidity
distribution of constituent quarks and extract the parameter values from the
experimental data of final state hadrons. The rapidity distribution of newborn
quarks is taken to be a Gaussian-like form, i.e.,
$\frac{dN_{q}}{dy}=N_{q}f_{q}(y)=\frac{N_{q}}{A}\Big{(}e^{-|y|^{a}/2\sigma^{2}}-C\Big{)}.$
(1)
Here $C=\exp[{-|y_{beam}|^{a}/2\sigma^{2}}]$ which means the constituent
quarks that form hadrons via combination are within $[-y_{beam},y_{beam}]$ in
the center-of-mass frame. $A$ is the normalization factor, satisfying
$A=\int^{y_{beam}}_{-y_{beam}}\big{(}e^{-|y|^{a}/2\sigma^{2}}-C\big{)}dy$. $a$
and $\sigma$ are the parameters depicting the shape of the spectrum. $N_{q}$
denotes the total number of newborn quarks with type $q$ in the full rapidity
region. For the net quarks coming from the colliding nuclei, their evolution
is generally different from the newborn quarks due to complicated collision
transparency [20]. We fix the rapidity spectrum of net-quarks before
hadronization by the data of the rapidity distribution of (net-)protons[21],
and the results are shown in Fig. 3.
Figure 3: The rapidity distributions of net-quarks.
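As an illustration of Eq. (1), the minimal sketch below evaluates the normalized newborn-quark rapidity density for one parameter set; the 80 AGeV light-quark values are taken from Table 4.1 and the corresponding $y_{beam}=2.57$ from the caption of Fig. 4.2 (this is not the code of the model itself, only a numerical rendering of the formula).

```python
import numpy as np
from scipy.integrate import quad

def quark_dndy(y, N_q, a, sigma, y_beam):
    """Newborn-quark rapidity density of Eq. (1), normalized to N_q on
    the interval [-y_beam, y_beam]."""
    C = np.exp(-abs(y_beam)**a / (2.0 * sigma**2))
    shape = lambda yy: np.exp(-abs(yy)**a / (2.0 * sigma**2)) - C
    A, _ = quad(shape, -y_beam, y_beam)      # normalization factor A
    return N_q * shape(y) / A

# light quarks at 80 AGeV: a = 1.90, sigma = 1.19, N = 275.2, y_beam = 2.57
print(quark_dndy(0.0, N_q=275.2, a=1.90, sigma=1.19, y_beam=2.57))
```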
## 4 Hadronic rapidity spectra at SPS and AGS
In this section, we first use the quark combination model to systematically
study hadron rapidity spectra and their widths at SPS energies. Then we
present the energy dependence of the strangeness and spectrum width of
constituent quarks. Finally, we show the results at AGS 11.6 AGeV.
### 4.1 Rapidity spectra of hadrons at SPS energies
Figure 4.1: Rapidity distributions of identified hadrons in central Pb+Pb
collisions at $E_{beam}=80,~40,~30,~20$ AGeV. The symbols are the experimental
data [22, 23, 24, 25, 26] and lines are the calculated results. The open
symbols are reflections of the measured data (solid symbols) about midrapidity.
The results for $\phi$ at 30 and 20 AGeV are scaled by proper constant factors
for comparing their shapes with the data.
The calculated rapidity distributions of identified hadrons in central Pb+Pb
collisions at $E_{beam}=$ 80, 40, 30, 20 AGeV are shown in Fig. 4.1. The
parameter values for newborn quarks are listed in Table 4.1, and
$\chi^{2}/ndf$ is given as a measure of the quality of the global fit. Note
that the experimental pion data beyond $y_{beam}$ are not included in the
$\chi^{2}/ndf$ calculation, because the limiting fragmentation behavior that
becomes prominent around $y_{beam}$ is beyond the scope of the quark
combination treatment in this paper. The results show that the quark
combination model can reproduce the experimental data for the various
identified hadrons except $\phi$ at 30 and 20 AGeV. The yields of $\phi$ at 30
and 20 AGeV deviate from the data, but their shapes are still in agreement
with the data after scaling by proper constant factors.
Table 4.1: Parameters of newborn light and strange quarks and $\chi^{2}/ndf$.

energy | $a_{u/d}$ | $a_{s}$ | $\sigma_{u/d}$ | $\sigma_{s}$ | $N_{u/d}$ | $N_{s}$ | $\chi^{2}/ndf$
---|---|---|---|---|---|---|---
80 AGeV | 1.90 | 2.35 | 1.19 | 1.40 | 275.2 | 159.6 | 0.98
40 AGeV | 1.90 | 2.25 | 1.12 | 1.40 | 171.3 | 116.5 | 1.01
30 AGeV | 1.80 | 2.05 | 1.20 | 1.16 | 139.0 | 101.5 | 0.96
20 AGeV | 1.80 | 1.95 | 1.10 | 0.85 | 105.5 | 77.0 | 1.90
The deviation of the $\phi$ yields at 30 and 20 AGeV is related to the
pronounced rescattering effect. It has been shown that at higher SPS and RHIC
energies the production of $\phi$ mainly comes from the contribution of the
partonic phase, i.e., the $\phi$ directly produced by hadronization, while at
lower SPS energies kaon coalescence may be the dominant production mechanism
[26]. In addition, the $\phi$ directly produced at hadronization may be
destroyed by the scattering of its daughter kaons with other produced hadrons.
The absence of these two hadronic-stage effects in our calculations is the
main reason for the deviation in the $\phi$ yields.
### 4.2 Widths of rapidity spectra for hadrons
From the above experimental data one can see that the widths of the rapidity
spectra of different hadron species are generally different. This difference,
in other words the correlation of longitudinal hadron production, can be used
to quantitatively test various hadronization models. In the quark combination
mechanism, the rapidity distribution of a specific hadron is the convolution
of the rapidity spectra of its constituent quarks with the combination
probability function (given by the combination rule in our model). Since the
rapidity distributions of different-flavor quarks are different (see Table 4.1
and Fig. 3), in particular between newborn quarks and net-quarks, the shapes
of the rapidity spectra of hadrons with different quark flavor contents are
generally different and correlated with each other through the constituent
quarks. Here, we calculate the spectrum widths of various hadrons to clarify
this feature.
Considering that the rapidity region covered by the data for different hadron
species or at different collision energies is not the same, we define the
variance of the rapidity distribution for a specified hadron in a finite
rapidity region limited by the discrete experimental data,
$\langle
y^{2}\rangle^{(L)}=\frac{\sum\limits_{i=1}^{n}y_{i}^{2}\frac{dN_{i}}{dy}}{\sum\limits_{i=1}^{n}\frac{dN_{i}}{dy}}.$
(2)
Here $n$ is the number of experimental data points; $y_{i}$ and $dN_{i}/dy$
are the rapidity position and the corresponding yield density measured
experimentally, respectively. Replacing $dN_{i}/dy$ with the model results, we
obtain $\langle y^{2}\rangle^{(L)}|_{model}$ and compare it with the
experimental value $\langle y^{2}\rangle^{(L)}|_{data}$ to test the
applicability of the model, without any arbitrariness caused by rapidity
regions not yet covered by the experimental data. We further extrapolate
$\langle y^{2}\rangle^{(L)}|_{data}$ to the full rapidity region
$[-y_{beam},y_{beam}]$
via the relation
$\langle y^{2}\rangle^{(F)}|_{data}=\frac{\langle
y^{2}\rangle^{(L)}|_{data}}{\langle y^{2}\rangle^{(L)}|_{model}}\langle
y^{2}\rangle^{(F)}|_{model},$ (3)
where $\langle y^{2}\rangle^{(F)}|_{model}$ is the variance calculated by the
model in the full rapidity region. The degree of agreement between $\langle
y^{2}\rangle^{(F)}|_{model}$ and the experimental value $\langle
y^{2}\rangle^{(F)}|_{data}$ still reflects the original descriptive ability of
the model, while the $\langle y^{2}\rangle$ of different hadron species can be
directly compared and their energy dependence recovered. The spectrum widths
of various hadrons, i.e. $D(y)^{(F)}\equiv\sqrt{\langle y^{2}\rangle^{(F)}}$,
are calculated and the results are shown in Fig. 4.2 (158 AGeV is also
included). One can see that the $D(y)$ of the various hadrons given by the
model are clearly distinguished. The agreement between the data and the
calculated results supports the theoretical explanation (the quark
recombination mechanism) of the widths of the hadronic rapidity distributions.
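A minimal sketch of Eqs. (2) and (3) is given below; it assumes that the data and the model are sampled at the same rapidity points $y_{i}$, as in the procedure described above.

```python
import numpy as np

def variance_limited(y, dndy):
    """<y^2>^(L) of Eq. (2) from discrete (y_i, dN_i/dy) points."""
    y, dndy = np.asarray(y), np.asarray(dndy)
    return np.sum(y**2 * dndy) / np.sum(dndy)

def variance_full_data(y, dndy_data, dndy_model, var_full_model):
    """Extrapolated <y^2>^(F)|_data of Eq. (3): the model's full-region
    variance rescaled by the data/model ratio in the measured region."""
    return (variance_limited(y, dndy_data) /
            variance_limited(y, dndy_model)) * var_full_model

# the spectrum width is then D(y)^(F) = sqrt(variance_full_data(...))
```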
Figure 4.2: Widths of rapidity distributions $D(y)^{(F)}$ at
$E_{beam}(y_{beam})=158,~80,~40,~30,~20$ AGeV (2.91, 2.57, 2.22, 2.08, 1.88).
The right panel includes the results of $K^{+}K^{-}$ coalescence for $\phi$
production. Model results are connected by the solid lines to guide the eye.
As stated above, at lower SPS energies kaon coalescence may be an important
mechanism for $\phi$ production. The right panel of Fig. 4.2 shows the
$\phi$'s $D(y)^{(F)}|_{data}$ and $D(y)^{(F)}|_{model}$ as well as the results
for $K^{+}K^{-}$ coalescence, $D(y)^{(F)}|_{coal}$. Similar to Ref. [26],
considering the ideal case of coalescence of two kaons with the same rapidity,
we use the measured kaon rapidity distributions $f_{K^{\pm}}(y)$ to obtain the
spectrum of $\phi$ via $f_{\phi}^{coal}(y)\propto f_{K^{+}}(y)f_{K^{-}}(y)$
and then compute $D(y)^{(F)}|_{coal}$. One can see that at collision energies
above 20 AGeV, $D(y)^{(F)}|_{coal}$ is much smaller than the data, similar to
the results in Ref. [26], while our results nearly agree with the data. This
clearly shows that $\phi$ production at these energies is dominated by
hadronization. At 20 AGeV, $D(y)^{(F)}|_{coal}$ is nearly equal to the data
and the model result is also in agreement with the data. This suggests that
$\phi$ production at the lowest SPS energy can have different explanations; in
other words, even though kaon coalescence is significant, the $\phi$ directly
produced from hadronization may not be negligible.
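The coalescence estimate amounts to forming $f_{\phi}^{coal}(y)\propto f_{K^{+}}(y)f_{K^{-}}(y)$ and taking its width. The short sketch below uses hypothetical Gaussian kaon spectra (the actual calculation uses the measured distributions) only to illustrate why the coalescence width is systematically narrower than either kaon width.

```python
import numpy as np

def coalescence_width(y, f_kplus, f_kminus):
    """D(y) of phi from ideal equal-rapidity K+ K- coalescence,
    f_phi^coal(y) ~ f_{K+}(y) * f_{K-}(y)."""
    y = np.asarray(y)
    f_phi = np.asarray(f_kplus) * np.asarray(f_kminus)
    mean_y2 = np.sum(y**2 * f_phi) / np.sum(f_phi)   # uniform y grid assumed
    return np.sqrt(mean_y2)

# hypothetical Gaussian kaon spectra, purely for illustration
y = np.linspace(-3.0, 3.0, 601)
f_kp = np.exp(-y**2 / (2 * 0.9**2))
f_km = np.exp(-y**2 / (2 * 0.8**2))
print(coalescence_width(y, f_kp, f_km))   # ~0.60, below both kaon widths
```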
### 4.3 The strangeness and spectrum width of constituent quarks
Let us turn to the extracted rapidity distributions of newborn light and
strange quarks. Their properties can be characterized by two quantities, i.e.
the ratio of the strange quark number $N_{s}$ to the light quark number
$N_{u/d}$, called the strangeness suppression factor $\lambda_{s}$, and the
width of the rapidity spectrum.
The left panel of Fig. 4.3 shows the strangeness suppression factor
$\lambda_{s}$ at different collision energies. Note that $\lambda_{s}$ defined
here is in terms of quark numbers in the full phase space, so the values are
slightly different from those in terms of midrapidity quark number densities
in the previous Ref. [13]. For comparison with SPS, we also present
$\lambda_{s}$ at RHIC $\sqrt{s_{NN}}=200,~{}62.4$ GeV calculated by the model,
and at AGS 11.6 AGeV obtained by counting the numbers of light and strange
valence quarks hidden in the observed pions, kaons and $\Lambda$, which are
the most abundant hadron species carrying the light and strange flavor content
of the system. One can see that the value of $\lambda_{s}$ exhibits a peak at
lower SPS energies. This behavior has been reported by the NA49 Collaboration
as a signal of the onset of deconfinement.
Figure 4.3: The strangeness suppression factor $\lambda_{s}$ and $D(y)$ of
newborn light and strange quarks at different collision energies. The results
are connected by the solid lines to guide the eye.
The width of rapidity spectrum is characterized by $D(y)\equiv\sqrt{\langle
y^{2}\rangle-\langle y\rangle^{2}}=\sqrt{\langle y^{2}\rangle}$. The right
panel of Fig. 4.3 shows the $D(y)$ of newborn light and strange quarks at
different collision energies. The results at RHIC $\sqrt{s_{NN}}=200,~{}62.4$
GeV and SPS $E_{beam}=158$ AGeV are included. One can see that with increasing
collision energy the $D(y)$ of newborn light and strange quarks both increase
steadily, indicating the stronger collective flow formed at higher energies.
At collision energies of 40 AGeV and above, the $D(y)$ of newborn light quarks
is always smaller than that of strange quarks, while at 30 and 20 AGeV the
situation is reversed. The wider spectrum of strange quarks,
relative to light quarks, has been verified at RHIC both in the longitudinal
and transverse directions[13, 17, 27], and the explanation is that strange
quarks acquire stronger collective flow during the evolution in the partonic
phase. As the collision energy decreases to 30 and 20 AGeV, the widths of the
rapidity distributions of light and strange quarks (see Table 4.1 and Fig.
4.3) are all quite narrow, which means that the collective flow formed in the
partonic phase is much weaker than at higher SPS and RHIC energies. Therefore,
the partonic bulk matter, even if produced, as our results indicate via
still-active constituent quark degrees of freedom, should be in the vicinity
of the phase boundary, and the extracted momentum distributions of quarks keep
the memory of their original excitation. If thermal equilibrium is reached in
heavy ion collisions, the quark occupation function in momentum space follows
as $\exp[-E/T]=\exp[-m_{T}\cosh(y)/T]$ in the case of no collective flow.
Taking the hadronization temperature $T=165$ MeV and constituent masses
$m_{q}=340$ MeV for light quarks and $m_{s}=500$ MeV for strange quarks, the
quark rapidity spectrum is of Gaussian form with width $\sigma_{q}=0.6$ for
light quarks and $\sigma_{s}=0.52$ for strange quarks, the latter narrower
because of the heavier mass. The tighter spread of strange quarks in rapidity
space can thus be qualitatively understood from quark production with
thermal-like excitation.
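A rough numerical check of these numbers is sketched below. It assumes the standard thermal form $dN/dy\propto\int p_{T}\,dp_{T}\,m_{T}\cosh(y)\,e^{-m_{T}\cosh(y)/T}$; the precise spectral form behind the quoted widths may differ slightly, so the result is only expected to land close to $\sigma_{q}\simeq 0.6$ and $\sigma_{s}\simeq 0.52$.

```python
import numpy as np
from scipy.integrate import quad

def thermal_width(m, T=0.165, y_max=4.0):
    """RMS rapidity width of a Boltzmann source with mass m (GeV):
    dN/dy ~ cosh(y) * int_m^inf m_T^2 exp(-m_T cosh(y)/T) dm_T."""
    def dndy(y):
        c = np.cosh(y)
        aT = T / c   # the m_T integral has a closed form in powers of T/c
        return c * np.exp(-m * c / T) * (m**2 * aT + 2*m*aT**2 + 2*aT**3)
    norm, _ = quad(dndy, -y_max, y_max)
    mom2, _ = quad(lambda y: y**2 * dndy(y), -y_max, y_max)
    return np.sqrt(mom2 / norm)

print(thermal_width(0.340))   # light quarks: close to sigma_q ~ 0.6
print(thermal_width(0.500))   # strange quarks: close to sigma_s ~ 0.52
```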
### 4.4 Results at AGS 11.6 AGeV
What happens at lower AGS energies? We further use the model to calculate the
rapidity distributions of various hadrons at 11.6 AGeV. The results are shown
in Fig. 4.4 and are compared with the experimental data. The parameter values
($a,\sigma,N_{q}$) for the quark spectra are taken to be (2.1, 0.88, 71) for
newborn light quarks and (2.0, 0.83, 42) for strange quarks, respectively. The
rapidity distribution of net-quarks is extracted from the proton data of the
E802 Collaboration [29, 31], and the data of the E877 Collaboration [28] at
10.8 AGeV are used to guide the extrapolation of the net-quark spectrum in the
forward rapidity region. One can see that the results for pions and kaons are
in agreement with the data, but the result for $\Lambda$ cannot reproduce the
experimental data: the spectrum width given by the model is much wider than
that of the data. This suggests that there is no intrinsic correlation at the
constituent quark level between the production of kaons and $\Lambda$ at AGS
11.6 AGeV. In addition, the rapidity distributions of $\phi$, $K_{s}^{0}$,
$\overline{p}$, $\overline{\Lambda}$ and $\Xi^{-}(\overline{\Xi}^{+})$ are
presented as predictions to be tested by future data.
Figure 4.4: Rapidity distributions of identified hadrons in central Au+Au
collisions at $E_{beam}=$ 11.6 AGeV. The symbols are experimental data from
Refs. [28, 29, 30, 31, 32, 33] and lines are the calculated results.
## 5 Summary
In this paper, we have investigated, with the quark combination model, the
rapidity distributions of identified hadrons and their widths in central A+A
collisions at SPS and AGS energies. Assuming in advance the existence of
constituent quark degrees of freedom, we parameterize the rapidity spectra of
quarks before hadronization, then test whether such a set of light and strange
quark spectra can self-consistently explain the data of $\pi^{-}$, $K^{\pm}$,
$\phi$, $\Lambda(\overline{\Lambda})$, $\Xi^{-}(\overline{\Xi}^{+})$, etc., at
these energies. The results of hadronic rapidity spectra are in agreement with
the data at 80 and 40 AGeV. At 30 and 20 AGeV where the onset of deconfinement
is suggested to happen, the model can still basically describe the production
of various hadrons. The study of the rapidity-spectrum widths of hadrons,
particularly of $\phi$ via comparison with the results of kaon coalescence in
the hadronic rescattering stage, clearly shows that hadron production at
collision energies above 20 AGeV is dominated by hadronization. The
energy dependence of the rapidity-spectrum widths of constituent quarks and
the strangeness of the hot and dense quark matter are obtained. It is shown
that the strangeness peaks around 30 AGeV, and below (above) this energy the
width of the strange quark spectrum becomes smaller (greater) than that of
light quarks. As the collision energy decreases to AGS 11.6 AGeV, it is found
that the production of $\pi^{\pm}$, $K^{\pm}$ and $\Lambda$ cannot be
consistently explained by the model, which suggests that there is no intrinsic
correlation for their production at the constituent quark level. If the
applicability of the quark combination mechanism can be regarded as a
criterion for QGP creation, our results imply that the threshold for the onset
of deconfinement is located in the energy region 11.6-20 AGeV, which is
consistent with the report of the NA49 Collaboration.
###### Acknowledgements.
The authors would like to thank professor XIE Qu-Bing for fruitful
discussions.
## References
* [2] Adams J et al (STAR Collaboration). Nucl. Phys. A, 2005, 757: 102.
* [3] Adcox K et al (PHENIX Collaboration). Nucl. Phys. A, 2005, 757: 184.
* [4] Arsene I et al (BRAHMS Collaboration). Nucl. Phys. A, 2005, 757: 1.
* [5] Back B B et al (PHOBOS Collaboration). Nucl. Phys. A, 2005, 757: 28.
  * [6] Alt C et al (NA49 Collaboration). Phys. Rev. C, 2008, 77: 024903.
* [7] Adare A et al (PHENIX Collaboration). Phys. Rev. Lett., 2007, 98: 162301.
* [8] Adcox K et al (PHENIX Collaboration). Phys. Rev. Lett., 2002, 88: 242301.
* [9] Greco V, Ko C M, and Lévai P, Phys. Rev. Lett., 2003, 90: 202302.
* [10] Fries R J, Müller B, Nonaka C, and Bass S A. Phys. Rev. Lett., 2003, 90: 202303.
* [11] Dénes Molnár and Sergei A Voloshin. Phys. Rev. Lett., 2003, 91: 092301.
  * [12] SONG J, SHAO F L, and XIE Q B. Int. J. Mod. Phys. A, 2009, 24: 1161.
* [13] SHAO C E, SONG J, SHAO F L, and XIE Q B. Phys. Rev. C, 2009, 80: 014909.
* [14] SHAO F L, XIE Q B, and WANG Q. Phys. Rev. C, 2005, 71: 044903.
* [15] SHAO F L, YAO T, and XIE Q B. Phys. Rev. C, 2007, 75: 034904.
* [16] YAO T, ZHOU W, and XIE Q B. Phys. Rev. C, 2008, 78: 064911.
* [17] WANG Y F, SHAO F L, SONG J, WEI D M, and XIE Q B. Chin. Phys. C, 2008, 32: 976.
* [18] SONG J, LIANG Z T, LIU Y X, SHAO F L, and WANG Q. Phys. Rev. C, 2010, 81: 057901.
* [19] HAN W, LI S Y, SHANG Y H, SHAO F L, and YAO T. Phys. Rev. C, 2009, 80: 035202.
* [20] Bearden I G et al (BRAHMS Collaboration). Phys. Rev. Lett., 2004, 93: 102301.
* [21] Alt C et al (NA49 Collaboration). Presented at Critical Point and Onset of Deconfinement 4th International Workshop, 2007, PoSCPOD07, 024. arXiv: nucl-ex, 2007, 0709.3030v1.
* [22] Afanasiev S V et al (NA49 Collaboration). Phys. Rev. C, 2002, 66: 054902.
* [23] Afanasiev S V et al (NA49 Collaboration). Phys. Lett. B, 2000, 491: 59.
* [24] Alt C et al (NA49 Collaboration). Phys. Rev. C, 2008, 78: 034918.
* [25] Mischke A for the NA49 Collaboration. Nucl. Phys. A, 2003, 715: 453c.
* [26] Alt C et al (NA49 Collaboration). Phys. Rev. C, 2008, 78: 044907 and references therein.
* [27] CHEN J H et al. Phys. Rev. C, 2008, 78: 034907.
* [28] Lacasse R et al (E877 Collaboration). Nucl. Phys. A, 1996, 610: 153c.
* [29] Ahle L et al (E802 Collaboration). Phys. Rev. C, 1999, 59: 2173.
* [30] Ahle L et al (E802 Collaboration). Phys. Rev. C, 1998, 58: 3523.
* [31] Ahle L et al (E802 Collaboration). Phys. Rev. C, 1998, 57: R466.
* [32] Albergo S et al (E896 Collaboration). Phys. Rev. Lett., 2002, 88: 062301.
* [33] Barrette J (E877 Collaboration). Phys. Rev. C, 2000, 63: 014902.
|
arxiv-papers
| 2011-05-03T13:05:15 |
2024-09-04T02:49:18.555925
|
{
"license": "Public Domain",
"authors": "Le-xue Sun, Rui-qin Wang, Jun Song and Feng-lan Shao",
"submitter": "FengLan Shao",
"url": "https://arxiv.org/abs/1105.0577"
}
|
1105.0623
|
# Lie symmetries of radiation natural convection flow past an inclined surface
M. Nadjafikhah , M. Abdolsamadi, E. Oftadeh School of Mathematics, Iran
University of Science and Technology, Narmak, Tehran, 1684613114, I.R. IRAN.
e-mail: m_nadjafikhah@iust.ac.irFaculty of Mathematics, North Tehran Branch of
Islamic Azad University , Pasdaran, Tehran, I.R.IRAN. e-mail:
m.abdolsamadi@gmail.com, ela.oftadeh@gmail.com
###### Abstract
Applying the Lie symmetry method, we find the most general Lie point symmetry
group of the radiation natural convection flow (RNC) equations. Examining the
adjoint representation of the obtained symmetry group on its Lie algebra, we
find a preliminary classification of its group-invariant solutions.
Keywords: Lie-point symmetries, invariant, optimal system of Lie sub-algebras.
Mathematics Subject Classification: 70G65, 58K70, 34C14.
## 1 Introduction
This paper can be viewed as a continuation of the work in [7], where the
authors, using Lie group analysis for the PDE system corresponding to
radiation effects on natural convection heat transfer past an inclined surface
(RNC), reduced this system to an ODE system and obtained numerical solutions
by applying the Runge-Kutta method.
There are various techniques for finding solutions of differential equations,
but most of them are useful only for a few classes of equations and cannot be
applied to unfamiliar equations. Fortunately, symmetries of differential
equations remove this problem and give exact solutions.
Symmetry methods for differential equations were first discussed by S. Lie.
One of Lie's most important findings was the relationship between continuous
transformation groups and their infinitesimal generators, which allows the
invariance conditions of differential equations under a group action,
complicated because of nonlinearity, to be reduced to linear ones.
The main result of applying the Lie symmetry method to differential equations
is the construction of group-invariant solutions. A useful way to reduce an
equation is to choose a subgroup of the symmetry group and write the invariant
solutions with respect to the corresponding subalgebra. The reduced equation
has fewer variables and is easier to solve. In fact, for many important
equations arising from geometry and physics these invariant solutions are the
only ones which can be studied thoroughly.
Equations of this type, such as the (RNC) system, play an important role in
engineering and industrial fields. In this paper the Lie symmetry method is
applied to find the most general Lie symmetry group and an optimal system of
(RNC). Finally, we obtain group-invariant solutions of the reduced system.
## 2 Symmetries of RNC System
Consider the (RNC) system:
$\displaystyle\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0,$
$\displaystyle u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial
y}=\frac{\partial^{2}u}{\partial y^{2}}+Gr\theta\cos\alpha,$ (1)
$\displaystyle u\frac{\partial\theta}{\partial
x}+v\frac{\partial\theta}{\partial
y}=\frac{1}{Pr}(1+4R)\frac{\partial^{2}\theta}{\partial y^{2}}.$
where $\displaystyle Gr=\frac{g\beta(T_{W}-T_{\infty})\nu}{U_{\infty}^{3}}$ is
the Grashof number, $\displaystyle Pr=\frac{\rho c_{p}\nu}{k}$ is the Prandtl
number and $\displaystyle R=\frac{4\sigma_{0}T_{\infty}^{3}}{3k^{8}}$ is the
radiation parameter [7]. Now the infinitesimal method is applied as follows.
The infinitesimal generator X on the total space, of the form
$\displaystyle\textbf{X}=\sum_{i=1}^{p}\xi^{i}(x,u)\partial_{x^{i}}+\sum_{j=1}^{q}\eta^{j}(x,u)\partial_{u^{j}},$
(2)
has the $n^{th}$ order prolongation $\textrm{Pr}^{(n)}X$ ([6], Th 4.16).
Applying the fundamental infinitesimal symmetry criterion ([6], Th 6.5) to
$X$,
$\displaystyle\textrm{Pr}^{(n)}\textbf{X}\big{[}\Delta_{\nu}(x,u^{(n)})\big{]}=0,\quad\nu=1,\cdots,l,\quad\textrm{whenever}\quad\Delta_{\nu}(x,u^{(n)})=0.$
(3)
yields the infinitesimal generators of the symmetry group.
The vector field associated with the RNC system is of the form
$X=\xi_{1}(x,y,u,v)\partial_{x}+\xi_{2}(x,y,u,v)\partial_{y}+\varphi_{1}(x,y,u,v)\partial_{u}+\varphi_{2}(x,y,u,v)\partial_{v}+\varphi_{3}(x,y,u,v)\partial_{t}.$ (4)
Since second order derivatives appear in the RNC system, the symmetry
generators are obtained by applying (3) to the second prolongation of $X$.
The vector field X generates a one-parameter symmetry group of RNC
$=(\Delta_{1},\Delta_{2},\Delta_{3},\Delta_{4})$ if and only if equations (3)
hold for $\nu=1,2,3,4$. So the symmetry group of the RNC system is spanned by
the following infinitesimal generators:
$X_{1}=\partial_{x},\qquad X_{2}=\partial_{y},\qquad
X_{3}=x\,\partial_{x}+u\,\partial_{u}+t\,\partial_{t},\qquad
X_{4}=2x\,\partial_{x}+y\,\partial_{y}-v\,\partial_{v}-2t\,\partial_{t}.$
The commutator table of the Lie algebra ${g}$ of the RNC system is given below:

$[X_{i},X_{j}]$ | $X_{1}$ | $X_{2}$ | $X_{3}$ | $X_{4}$
---|---|---|---|---
$X_{1}$ | $0$ | $0$ | $X_{1}$ | $2X_{1}$
$X_{2}$ | $0$ | $0$ | $0$ | $X_{2}$
$X_{3}$ | $-X_{1}$ | $0$ | $0$ | $0$
$X_{4}$ | $-2X_{1}$ | $-X_{2}$ | $0$ | $0$
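This commutator table can be checked mechanically. The sketch below (an illustrative verification, not part of the original derivation) represents each generator by its coefficient functions on $(x,y,u,v,t)$ and evaluates $[X_{i},X_{j}]$ componentwise.

```python
import sympy as sp

x, y, u, v, t = sp.symbols('x y u v t')
coords = (x, y, u, v, t)

# each generator is stored as its coefficients on (d_x, d_y, d_u, d_v, d_t)
X1 = (1, 0, 0, 0, 0)
X2 = (0, 1, 0, 0, 0)
X3 = (x, 0, u, 0, t)
X4 = (2*x, y, 0, -v, -2*t)

def commutator(A, B):
    """k-th component of [A, B]: sum_i A_i d(B_k)/dx_i - B_i d(A_k)/dx_i."""
    return tuple(sp.simplify(
        sum(A[i]*sp.diff(B[k], coords[i]) - B[i]*sp.diff(A[k], coords[i])
            for i in range(len(coords)))
    ) for k in range(len(coords)))

gens = {'X1': X1, 'X2': X2, 'X3': X3, 'X4': X4}
for n1, A in gens.items():
    for n2, B in gens.items():
        print(f'[{n1},{n2}] =', commutator(A, B))
# e.g. [X1,X3] = (1,0,0,0,0) = X1 and [X2,X4] = (0,1,0,0,0) = X2
```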
### Theorem 1.
If $g_{k}(h)$ is the one-parameter group generated by $X_{k}$, $k=1,\cdots,4$,
then
$g_{1}:(x,y,u,v,t)\longmapsto(x+h,\,y,\,u,\,v,\,t),$
$g_{2}:(x,y,u,v,t)\longmapsto(x,\,y+h,\,u,\,v,\,t),$
$g_{3}:(x,y,u,v,t)\longmapsto(xe^{h},\,y,\,ue^{h},\,v,\,te^{h}),$
$g_{4}:(x,y,u,v,t)\longmapsto(xe^{2h},\,ye^{h},\,u,\,ve^{-h},\,te^{-2h}).$ (11)
## 3 Optimal System
The aim of Lie theory is to classify the invariant solutions and reduce the
equations. Since a Lie algebra may contain many different subalgebras,
classifying them plays an important role in transforming equations into
simpler ones. We therefore classify these subalgebras up to the adjoint
representation, finding an optimal system of subalgebras instead of an optimal
system of subgroups.
The adjoint action is given by the Lie series
$\displaystyle\mathrm{Ad}(\exp(\varepsilon
X_{i})X_{j})=X_{j}-\varepsilon[X_{i},X_{j}]+\frac{\varepsilon^{2}}{2}[X_{i},[X_{i},X_{j}]]-\cdots,$
(12)
where $[X_{i},X_{j}]$ is the commutator for the Lie algebra, $\varepsilon$ is
a parameter, and $i,j=1,\cdots,4$ ([5], ch 3.3).
### Theorem 2.
A one-dimensional optimal system of the RNC symmetry algebra is given by:
1) $X_{1}$, 2) $X_{2}$, 3) $X_{3}$, 4) $X_{1}+X_{2}$, 5) $X_{2}-X_{1}$,
6) $X_{2}+X_{3}$, 7) $X_{3}-X_{2}$, 8) $X_{3}+X_{4}$, 9) $X_{4}-X_{3}$. (22)
Proof: Consider the symmetry algebra ${g}$ of the RNC system, whose adjoint
representation is shown in the table below:
${\mathrm{Ad}}(\exp(\varepsilon X_{i}))X_{j}$ | $X_{1}$ | $X_{2}$ | $X_{3}$ | $X_{4}$
---|---|---|---|---
$X_{1}$ | $X_{1}+\varepsilon X_{3}+2\varepsilon X_{4}$ | $X_{2}$ | $X_{3}$ | $X_{4}$
$X_{2}$ | $X_{1}$ | $X_{2}+\varepsilon X_{4}$ | $X_{3}$ | $X_{4}$
$X_{3}$ | $e^{-\varepsilon}X_{1}$ | $X_{2}$ | $X_{3}$ | $X_{4}$
$X_{4}$ | $e^{-2\varepsilon}X_{1}$ | $e^{-\varepsilon}X_{2}$ | $X_{3}$ | $X_{4}$
Let
$\displaystyle X=a_{1}X_{1}+a_{2}X_{2}+a_{3}X_{3}+a_{4}X_{4}.$ (24)
At this stage our goal is to simplify as many of the coefficients $a_{i}$ as
possible. The following results are obtained by applying the adjoint action to
$X$:
* 1.
Let $a_{4}\neq 0$. Scaling $X$ if necessary, suppose that $a_{4}=1$. Then
$\displaystyle X^{\prime}=a_{1}X_{1}+a_{2}X_{2}+a_{3}X_{3}+X_{4}.$ (25)
If we act on $X^{\prime}$ with Ad(exp($a_{3}X_{3}$)), the coefficient of
$X_{1}$ can be made to vanish:
$\displaystyle X^{\prime\prime}=a_{2}X_{2}+a_{3}X_{3}+X_{4}.$ (26)
By applying Ad(exp($a_{4}X_{4}$)) on $X^{\prime\prime}$, we will obtain
$X^{\prime\prime\prime}$ as follow:
$\displaystyle X^{\prime\prime\prime}=a_{3}X_{3}+X_{4}.$ (27)
* 1-a.
If $a_{3}\neq 0$, then the coefficient of $X_{3}$ can be -1 or 1. Therefore
every one-dimensional subalgebra generated by $X$ in this case is equivalent
to $X_{3}+X_{4}$, $X_{4}-X_{3}$.
* 1-b.
If $a_{3}=0$, then every one-dimensional subalgebra generated by $X$ in this
case is equivalent to $X_{4}$.
* 2.
The remaining one-dimensional subalgebras are spanned by vector fields of the
form $X$ with $a_{4}=0$.
* 2-a.
If $a_{3}\neq 0$, let $a_{3}=1$. By acting with Ad(exp($a_{3}X_{3}$)) on $X$,
we will have
$\displaystyle\hat{X}=a_{2}X_{2}+X_{3}.$ (28)
* 2-a-1.
If $a_{2}\neq 0$, then the coefficient of $X_{2}$ can be -1 or 1. So every
one-dimensional subalgebra generated by $X$ in this case is equivalent to
$X_{2}+X_{3}$, $X_{3}-X_{2}$.
* 2-a-2.
If $a_{2}=0$, then every one-dimensional subalgebra generated by $X$ in this
case is equivalent to $X_{3}$.
* 2-b.
Let $a_{4}=0,a_{3}=0$.
* 2-b-1.
If $a_{2}\neq 0$, then we can make the coefficient of $X_{1}$ either -1 or 1
or 0. Every one-dimensional subalgebra generated by $X$ is equivalent to
$X_{1}+X_{2}$, $X_{2}-X_{1}$, $X_{2}$.
* 2-b-2.
If $a_{2}=0$, then every one-dimensional subalgebra generated by $X$ is
equivalent to $X_{1}$. $\Box$
## 4 Invariant Solutions and Reduction
Now, to find invariant solutions, we use the characteristic method [2]:
$\displaystyle Q_{\alpha}\mid_{u=u(x,t)}\equiv
X[u^{\alpha}-u^{\alpha}(x,t)]\mid_{u=u(x,t)}\quad\alpha=1,\cdots,M$ (29)
where $M$ is the number of dependent variables. The invariant solutions for
RNC are obtained as follows.
Consider $X_{1}=\partial_{x}$. Applying characteristic method on $X_{1}$
obtains: $Q_{u}=-{\frac{\partial}{\partial
x}}u\left(x,y\right),Q_{v}=-{\frac{\partial}{\partial
x}}v\left(x,y\right),Q_{t}=-{\frac{\partial}{\partial x}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u={F_{1}}\left(y\right),v={F_{1}}\left(y\right),t={F_{1}}\left(y\right)$.
Consider $X_{2}=\partial_{y}$. Applying characteristic method on $X_{2}$
obtains: $Q_{u}=-{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}=-{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}=-{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u={F_{1}}\left(x\right),v={F_{1}}\left(x\right),t={F_{1}}\left(x\right)$.
Consider $X_{3}=x\partial_{x}+u\partial_{u}+t\partial_{t}$. Applying
characteristic method on $X_{3}$ obtains:
$Q_{u}=u\left(x,y\right)-x{\frac{\partial}{\partial
x}}u\left(x,y\right),Q_{v}=-x{\frac{\partial}{\partial
x}}v\left(x,y\right),Q_{t}=t\left(x,y\right)-x{\frac{\partial}{\partial
x}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u=x{F_{1}}\left(y\right),v={F_{1}}\left(y\right),t=x{F_{1}}\left(y\right)$.
Consider $X_{4}=2x\partial_{x}+y\partial_{y}-v\partial_{v}-2t\partial_{t}$.
Applying characteristic method on $X_{4}$ obtains:
$Q_{u}=-2\,x{\frac{\partial}{\partial
x}}u\left(x,y\right)-y{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}=-v\left(x,y\right)-2\,x{\frac{\partial}{\partial
x}}v\left(x,y\right)-y{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}=-2\,t\left(x,y\right)-2\,x{\frac{\partial}{\partial
x}}t\left(x,y\right)-y{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u={F_{1}}\left({\frac{y}{\sqrt{x}}}\right),v={F_{1}}\left({\frac{y}{\sqrt{x}}}\right){\frac{1}{\sqrt{x}}},t={F_{1}}\left({\frac{y}{\sqrt{x}}}\right){x}^{-1}$.
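As a quick consistency check of the $X_{4}$ case (a sketch only, with $F$, $G$, $H$ denoting independent arbitrary functions of the similarity variable), one can verify symbolically that the characteristics $Q_{u}$, $Q_{v}$, $Q_{t}$ vanish on the stated invariant forms.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
F, G, H = sp.Function('F'), sp.Function('G'), sp.Function('H')

xi = y / sp.sqrt(x)            # similarity variable of the X_4 case
u = F(xi)
v = G(xi) / sp.sqrt(x)
t = H(xi) / x

# characteristics of X_4 = 2x d_x + y d_y - v d_v - 2t d_t
Q_u = -2*x*sp.diff(u, x) - y*sp.diff(u, y)
Q_v = -v - 2*x*sp.diff(v, x) - y*sp.diff(v, y)
Q_t = -2*t - 2*x*sp.diff(t, x) - y*sp.diff(t, y)

print([sp.simplify(q) for q in (Q_u, Q_v, Q_t)])   # expect [0, 0, 0]
```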
Consider $X_{1}+X_{2}=\partial_{x}+\partial_{y}$. Applying characteristic
method on $X_{1}+X_{2}$ obtains: $Q_{u}={\frac{\partial}{\partial
x}}u\left(x,y\right)+{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}={\frac{\partial}{\partial
x}}v\left(x,y\right)+{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}={\frac{\partial}{\partial
x}}t\left(x,y\right)+{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u={F_{1}}\left(-x+y\right),v={F_{1}}\left(-x+y\right),t={F_{1}}\left(-x+y\right)$.
Consider $X_{2}-X_{1}=\partial_{y}-\partial_{x}$. Applying characteristic
method on $X_{2}-X_{1}$ obtains: $Q_{u}={\frac{\partial}{\partial
x}}u\left(x,y\right)-{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}={\frac{\partial}{\partial
x}}v\left(x,y\right)-{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}={\frac{\partial}{\partial
x}}t\left(x,y\right)-{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u={F_{1}}\left(x+y\right),v={F_{1}}\left(x+y\right),t={F_{1}}\left(x+y\right)$.
Consider $X_{2}+X_{3}=x\partial_{x}+\partial_{y}+u\partial_{u}+t\partial_{t}$.
Applying characteristic method on $X_{2}+X_{3}$ obtains:
$Q_{u}=u\left(x,y\right)-x{\frac{\partial}{\partial
x}}u\left(x,y\right)-{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}=-x{\frac{\partial}{\partial
x}}v\left(x,y\right)-{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}=t\left(x,y\right)-x{\frac{\partial}{\partial
x}}t\left(x,y\right)-{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u=x{F_{1}}\left(-\ln\left(x\right)+y\right),v={F_{1}}\left(-\ln\left(x\right)+y\right),t=x{F_{1}}\left(-\ln\left(x\right)+y\right)$.
Consider $X_{3}-X_{2}=x\partial_{x}-\partial_{y}+u\partial_{u}+t\partial_{t}$.
Applying characteristic method on $X_{3}-X_{2}$ obtains:
$Q_{u}=u\left(x,y\right)-x{\frac{\partial}{\partial
x}}u\left(x,y\right)+{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}=-x{\frac{\partial}{\partial
x}}v\left(x,y\right)+{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}=t\left(x,y\right)-x{\frac{\partial}{\partial
x}}t\left(x,y\right)+{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u=x{F_{1}}\left(\ln\left(x\right)+y\right),v={F_{1}}\left(\ln\left(x\right)+y\right),t=x{F_{1}}\left(\ln\left(x\right)+y\right)$.
Consider
$X_{3}+X_{4}=3x\partial_{x}+y\partial_{y}+u\partial_{u}-v\partial_{v}-t\partial_{t}$.
Applying characteristic method on $X_{3}+X_{4}$ obtains:
$Q_{u}=u\left(x,y\right)-3\,x{\frac{\partial}{\partial
x}}u\left(x,y\right)-y{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}=-v\left(x,y\right)-3\,x{\frac{\partial}{\partial
x}}v\left(x,y\right)-y{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}=-t\left(x,y\right)-3\,x{\frac{\partial}{\partial
x}}t\left(x,y\right)-y{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u={F_{1}}\left({\frac{y}{\sqrt[3]{x}}}\right)\sqrt[3]{x},v={F_{1}}\left({\frac{y}{\sqrt[3]{x}}}\right){\frac{1}{\sqrt[3]{x}}},t={F_{1}}\left({\frac{y}{\sqrt[3]{x}}}\right){\frac{1}{\sqrt[3]{x}}}$.
Consider
$X_{4}-X_{3}=x\partial_{x}+y\partial_{y}-u\partial_{u}-v\partial_{v}-3t\partial_{t}$.
Applying characteristic method on $X_{4}-X_{3}$ obtains:
$Q_{u}=-u\left(x,y\right)-x{\frac{\partial}{\partial
x}}u\left(x,y\right)-y{\frac{\partial}{\partial
y}}u\left(x,y\right),Q_{v}=-v\left(x,y\right)-x{\frac{\partial}{\partial
x}}v\left(x,y\right)-y{\frac{\partial}{\partial
y}}v\left(x,y\right),Q_{t}=-3\,t\left(x,y\right)-x{\frac{\partial}{\partial
x}}t\left(x,y\right)-y{\frac{\partial}{\partial y}}t\left(x,y\right)$.
Solutions of this system are of the form:
$u={F_{1}}\left({\frac{y}{x}}\right){x}^{-1},v={F_{1}}\left({\frac{y}{x}}\right){x}^{-1},t={F_{1}}\left({\frac{y}{x}}\right){x}^{-3}$.
## 5 Conclusion
In this paper, by applying infinitesimal symmetry methods, we found an optimal
system and finally reduced the RNC system and obtained invariant solutions.
## References
* [1] G.W. Bluman and S. Kumei, Symmetries and Differential Equations, Springer, New York, 1989.
  * [2] P. Hydon, Symmetry Methods for Differential Equations: A Beginner's Guide, Cambridge University Press, Cambridge.
  * [3] M. Nadjafikhah, Classification of similarity solutions for inviscid Burgers' equation, Adv. Appl. Clifford Alg. 20 (2010), 71-77.
* [4] M. Nadjafikhah, Lie Symmetries of Inviscid Burgers Equation, Adv. Appl. Clifford Alg., 19. (2009) 101-112.
* [5] P.J. Olver, Applications of Lie Groups to Differential Equations, Springer, New York, 1993.
* [6] P.J. Olver, Equivalence, Invariants, and Symmetry, Cambridge University Press, 1995.
* [7] S. Sivasankaran, M. Bhuvaneswari, P. Kandaswamy and E.K. Ramasami, Lie group analysis of radiation natural convection flow past an inclined surface, Communications in Nonlinear Science and Numerical Simulation 13 (2008) 269-276.
|
arxiv-papers
| 2011-05-03T16:21:51 |
2024-09-04T02:49:18.562731
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mehdi Nadjafikhah, Maryam Abdolsamadi and Elahe Oftadeh",
"submitter": "Mehdi Nadjafikhah",
"url": "https://arxiv.org/abs/1105.0623"
}
|
1105.0625
|
# Lie symmetry analysis of nonlinear evolution equation for description
nonlinear waves in a viscoelastic tube
Mehdi Nadjafikhah Corresponding author: Department of Mathematics, Islamic
Azad University, Karaj Branch, Karaj, Iran. e-mail: m_nadjafikhah@iust.ac.ir
Vahid Shirvani-Sh e-mail: v.shirvani@kiau.ac.ir
###### Abstract
In this paper, the Lie symmetry method is applied to the nonlinear evolution
equation describing nonlinear waves in a viscoelastic tube. We find one- and
two-dimensional optimal systems of Lie subalgebras. Furthermore, a preliminary
classification of its group-invariant solutions is investigated.
Keywords. Lie symmetry, Optimal system, Group-invariant solutions, Nonlinear
evolution equation.
## 1 Introduction
The theory of Lie symmetry groups of differential equations was developed by
Sophus Lie [10]. Such Lie groups are invertible point transformations of both
the dependent and independent variables of the differential equations.
Symmetry group methods provide an ultimate arsenal for the analysis of
differential equations and are of great importance for understanding and
constructing solutions of differential equations. Several applications of Lie
groups in the theory of differential equations were discussed in the
literature; the most important ones are: reduction of the order of ordinary
differential equations, construction of invariant solutions, mapping solutions
to other solutions and the detection of linearizing transformations (for many
other applications of Lie symmetries see [11], [2] and [1]).
In the present paper, we study the following fifth-order nonlinear evolution
equation
$\displaystyle u_{t}+auu_{x}+bu_{x^{3}}+cu_{x^{4}}+du_{x^{5}}=eu_{x^{2}}.$ (1)
where $a$, $b$, $c$, $d$ and $e$ are positive constants. This equation was
introduced recently by Kudryashov and Sinelshchikov [8] and is a
generalization of the famous Kawahara equation; they obtained equation (1) by
using the reductive perturbation method. The study of nonlinear wave processes
in viscoelastic tubes is an important problem, since such tubes are similar to
large arteries (see [4], [9] and [3]).
In this paper, by using the Lie point symmetry method, we investigate equation
(1) and examine the adjoint representation of the obtained symmetry group on
its Lie algebra. We find a preliminary classification of group-invariant
solutions and then reduce equation (1) to an ordinary differential equation.
This work is organized as follows. In section 2 we recall some results needed
to construct Lie point symmetries of a given system of differential equations.
In section 3, we give the general form of an infinitesimal generator admitted
by equation (1) and find transformed solutions. Section 4 is devoted to the
construction of the group-invariant solutions and their classification, which
provides in each case reduced forms of equation (1).
## 2 Method of Lie Symmetries
In this section, we recall the general procedure for determining symmetries
for any system of partial differential equations (see [11] and [2]). To begin,
let us consider the general case of a nonlinear system of partial differential
equations of order $n$th in $p$ independent and $q$ dependent variables is
given as a system of equations
$\displaystyle\Delta_{\nu}(x,u^{(n)})=0,\;\;\;\;\;\nu=1,\cdots,l,$ (2)
involving $x=(x^{1},\cdots,x^{p})$, $u=(u^{1},\cdots,u^{q})$ and the
derivatives of $u$ with respect to $x$ up to $n$, where $u^{(n)}$ represents
all the derivatives of $u$ of all orders from $0$ to $n$. We consider a one-
parameter Lie group of infinitesimal transformations acting on the independent
and dependent variables of the system (2)
$\tilde{x}^{i}=x^{i}+s\xi^{i}(x,u)+O(s^{2}),\qquad i=1,\cdots,p,$
$\tilde{u}^{j}=u^{j}+s\eta^{j}(x,u)+O(s^{2}),\qquad j=1,\cdots,q,$
where $s$ is the parameter of the transformation and $\xi^{i}$, $\eta^{j}$ are
the infinitesimals of the transformations for the independent and dependent
variables, respectively. The infinitesimal generator ${\mathbf{v}}$ associated
with the above group of transformations can be written as
$\displaystyle{\mathbf{v}}=\sum_{i=1}^{p}\xi^{i}(x,u)\partial_{x^{i}}+\sum_{j=1}^{q}\eta^{j}(x,u)\partial_{u^{j}}.$
(4)
A symmetry of a differential equation is a transformation which maps solutions
of the equation to other solutions. The invariance of the system (2) under the
infinitesimal transformations leads to the invariance conditions (Theorem 2.36
of [11])
$\displaystyle\textrm{Pr}^{(n)}{\mathbf{v}}\big{[}\Delta_{\nu}(x,u^{(n)})\big{]}=0,\;\;\;\;\;\nu=1,\cdots,l,\;\;\;\;\mbox{whenever}\;\;\;\;\;\Delta_{\nu}(x,u^{(n)})=0,$
(5)
where $\textrm{Pr}^{(n)}$ is called the $n^{th}$ order prolongation of the
infinitesimal generator given by
$\displaystyle\textrm{Pr}^{(n)}{\mathbf{v}}={\mathbf{v}}+\sum^{q}_{\alpha=1}\sum_{J}\varphi^{J}_{\alpha}(x,u^{(n)})\partial_{u^{\alpha}_{J}},$
(6)
where $J=(j_{1},\cdots,j_{k})$, $1\leq j_{k}\leq p$, $1\leq k\leq n$ and the
sum is over all $J$’s of order $0<\\#J\leq n$. If $\\#J=k$, the coefficient
$\varphi_{J}^{\alpha}$ of $\partial_{u_{J}^{\alpha}}$ will only depend on
$k$-th and lower order derivatives of $u$, and
$\displaystyle\varphi_{\alpha}^{J}(x,u^{(n)})=D_{J}(\varphi_{\alpha}-\sum_{i=1}^{p}\xi^{i}u_{i}^{\alpha})+\sum_{i=1}^{p}\xi^{i}u^{\alpha}_{J,i},$
(7)
where $u_{i}^{\alpha}:=\partial u^{\alpha}/\partial x^{i}$ and
$u_{J,i}^{\alpha}:=\partial u_{J}^{\alpha}/\partial x^{i}$.
One of the most important properties of these infinitesimal symmetries is that
they form a Lie algebra under the usual Lie bracket.
## 3 Lie symmetries of the equation (1)
We consider the one parameter Lie group of infinitesimal transformations on
$(x^{1}=x,x^{2}=t,u^{1}=u)$,
$\tilde{x}=x+s\xi(x,t,u)+O(s^{2}),\qquad\tilde{t}=t+s\eta(x,t,u)+O(s^{2}),\qquad\tilde{u}=u+s\varphi(x,t,u)+O(s^{2}),$ (8)
where $s$ is the group parameter and $\xi^{1}=\xi$, $\xi^{2}=\eta$ and
$\varphi^{1}=\varphi$ are the infinitesimals of the transformations for the
independent and dependent variables, respectively. The associated vector field
is of the form:
$\displaystyle{\mathbf{v}}=\xi(x,t,u)\partial_{x}+\eta(x,t,u)\partial_{t}+\varphi(x,t,u)\partial_{u}.$
(9)
and, by (6), its fifth prolongation is
$\displaystyle\textrm{Pr}^{(5)}{\mathbf{v}}$ $\displaystyle=$
$\displaystyle{\mathbf{v}}+\varphi^{x}\,\partial_{u_{x}}+\varphi^{t}\,\partial_{u_{t}}+\varphi^{x^{2}}\,\partial_{u_{x^{2}}}+\varphi^{xt}\,\partial_{u_{xt}}+\cdots$
(10)
$\displaystyle\cdots+\varphi^{t^{2}}\,\partial_{u_{t^{2}}}+\varphi^{xt^{4}}\,\partial_{u_{xt^{4}}}+\varphi^{t^{5}}\,\partial_{u_{t^{5}}}.$
where, for instance by (7) we have
$\varphi^{x}=D_{x}(\varphi-\xi\,u_{x}-\eta\,u_{t})+\xi\,u_{x^{2}}+\eta\,u_{xt},$
$\varphi^{t}=D_{t}(\varphi-\xi\,u_{x}-\eta\,u_{t})+\xi\,u_{xt}+\eta\,u_{t^{2}},$ (11)
$\ldots$
$\varphi^{x^{5}}=D^{5}_{x}(\varphi-\xi\,u_{x}-\eta\,u_{t})+\xi\,u_{x^{6}}+\eta\,u_{x^{5}t},$
where $D_{x}$ and $D_{t}$ are the total derivatives with respect to $x$ and
$t$ respectively. By (5) the vector field ${\mathbf{v}}$ generates a one
parameter symmetry group of equation (1) if and only if
$\textrm{Pr}^{(5)}{\mathbf{v}}\big{[}u_{t}+auu_{x}+bu_{x^{3}}+cu_{x^{4}}+du_{x^{5}}-eu_{x^{2}}\big{]}=0\quad\textrm{whenever}\quad u_{t}+auu_{x}+bu_{x^{3}}+cu_{x^{4}}+du_{x^{5}}-eu_{x^{2}}=0.$
(14)
The condition (14) is equivalent to
$au_{x}\varphi+au\varphi^{x}+\varphi^{t}-e\varphi^{x^{2}}+b\varphi^{x^{3}}+c\varphi^{x^{4}}+d\varphi^{x^{5}}=0\quad\textrm{whenever}\quad u_{t}+auu_{x}+bu_{x^{3}}+cu_{x^{4}}+du_{x^{5}}-eu_{x^{2}}=0.$
(17)
Substituting (11) into (17) and equating the coefficients of the various
monomials in the partial derivatives with respect to $x$ and the various
powers of $u$, we find the determining equations for the symmetry group of
equation (1). Solving these equations, we get the following forms of the
coefficient functions
$\displaystyle\xi=c_{2}at+c_{3},\quad\eta=c_{1},\quad\varphi=c_{2}.$ (18)
where $c_{1}$, $c_{2}$ and $c_{3}$ are arbitrary constants. Thus, the Lie
algebra of infinitesimal symmetries of equation (1) is spanned by the three
vector fields:
vector fields:
$\displaystyle\textbf{v}_{1}=\partial_{x},\quad\textbf{v}_{2}=\partial_{t},\quad\textbf{v}_{3}=t\,\partial_{x}+\frac{1}{a}\partial_{u}.$
(19)
The commutation relations between these vector fields are given in the Table
1.
Table 1: The commutator table $[{\mathbf{v}}_{i},{\mathbf{v}}_{j}]$ | ${\mathbf{v}}_{1}$ | ${\mathbf{v}}_{2}$ | ${\mathbf{v}}_{3}$
---|---|---|---
${\mathbf{v}}_{1}$ | 0 | 0 | 0
${\mathbf{v}}_{2}$ | 0 | 0 | ${\mathbf{v}}_{1}$
${\mathbf{v}}_{3}$ | 0 | $-{\mathbf{v}}_{1}$ | 0
### Theorem 3.1
The Lie algebra $\pounds_{3}$ spanned by $v_{1},v_{2},v_{3}$ is of the second
Bianchi class type and is solvable and nilpotent [7].
To obtain the group transformations generated by the infinitesimal generators
$\textbf{v}_{i}$, $i=1,2,3$, we need to solve the three systems of first order
ordinary differential equations
$\displaystyle\displaystyle\frac{d\tilde{x}(s)}{ds}$ $\displaystyle=$
$\displaystyle\xi_{i}(\tilde{x}(s),\tilde{t}(s),\tilde{u}(s)),\quad\tilde{x}(0)=x,$
$\displaystyle\displaystyle\frac{d\tilde{t}(s)}{ds}$ $\displaystyle=$
$\displaystyle\eta_{i}(\tilde{x}(s),\tilde{t}(s),\tilde{u}(s)),\quad\tilde{t}(0)=t,\qquad
i=1,2,3$ (20) $\displaystyle\displaystyle\frac{d\tilde{u}(s)}{ds}$
$\displaystyle=$
$\displaystyle\varphi_{i}(\tilde{x}(s),\tilde{t}(s),\tilde{u}(s)),\quad\tilde{u}(0)=u.$
Exponentiating the infinitesimal symmetries of (1), we get the one-parameter
groups $G_{i}(s)$ generated by $\textbf{v}_{i}$ for $i=1,2,3$
$\displaystyle G_{1}:(t,x,u)$ $\displaystyle\longmapsto$
$\displaystyle(x+s,t,u),$ $\displaystyle G_{2}:(t,x,u)$
$\displaystyle\longmapsto$ $\displaystyle(x,t+s,u),$ (21) $\displaystyle
G_{3}:(t,x,u)$ $\displaystyle\longmapsto$ $\displaystyle(x+ts,t,u+s/a).$
Consequently,
### Theorem 3.2
If $u=f(x,t)$ is a solution of (1), so are the functions
$\displaystyle G_{1}(s)\cdot f(x,t)$ $\displaystyle=$ $\displaystyle
f(x-s,t),$ $\displaystyle G_{2}(s)\cdot f(x,t)$ $\displaystyle=$
$\displaystyle f(x,t-s),$ (22) $\displaystyle G_{3}(s)\cdot f(x,t)$
$\displaystyle=$ $\displaystyle f(x-ts,t)+s/a.$
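To illustrate how the systems (20) produce the groups in (21), the flow of $\textbf{v}_{3}$ can be integrated directly. The following sympy sketch is an illustrative addition of ours; it assumes nothing beyond the data of (20) for $i=3$ ($\xi=t$, $\eta=0$, $\varphi=1/a$).

```python
# Integrate (20) for v3: since eta = 0, t is constant along the flow, and the
# remaining equations integrate at once, reproducing G3 in (21).
import sympy as sp

s, x, t, u, a = sp.symbols('s x t u a')
X, U = sp.Function('X'), sp.Function('U')

sol_x = sp.dsolve(sp.Eq(X(s).diff(s), t), X(s), ics={X(0): x})
sol_u = sp.dsolve(sp.Eq(U(s).diff(s), 1/a), U(s), ics={U(0): u})
print(sol_x, sol_u)   # X(s) = x + s*t,  U(s) = u + s/a
```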
## 4 Optimal system and invariant solutions of (1)
In this section, we obtain the optimal system and reduced forms of
equation (1) by using the symmetry group properties obtained in the previous section.
Since the original partial differential equation has two independent
variables, it transforms into an ordinary differential equation after
reduction.
### Definition 4.1
Let $G$ be a Lie group with Lie algebra ${g}$. An optimal system of
$s$-parameter subgroups is a list of conjugacy inequivalent $s$-parameter
subgroups with the property that any other subgroup is conjugate to
precisely one subgroup in the list. Similarly, a list of $s$-parameter
subalgebras forms an optimal system if every $s$-parameter subalgebra of ${g}$
is equivalent to a unique member of the list under some element of the adjoint
representation: $\overline{{h}}={\mathrm{Ad}}(g)\cdot{{h}}$ [11].
### Theorem 4.2
Let $H$ and $\overline{H}$ be connected $s$-dimensional Lie subgroups of the Lie
group $G$ with corresponding Lie subalgebras ${h}$ and $\overline{{h}}$ of the
Lie algebra ${g}$ of $G$. Then $\overline{H}=gHg^{-1}$ are conjugate
subgroups if and only if $\overline{{h}}={\mathrm{Ad}}(g)\cdot{{h}}$ are
conjugate subalgebras [11].
By Theorem 4.2, the problem of finding an optimal system of subgroups is
equivalent to that of finding an optimal system of subalgebras. For one-
dimensional subalgebras, this classification problem is essentially the same
as the problem of classifying the orbits of the adjoint representation, since
each one-dimensional subalgebra is determined by a nonzero vector in ${g}$. This
problem is attacked by the naïve approach of taking a general element
${\mathbf{V}}$ in ${g}$ and subjecting it to various adjoint transformations so
as to "simplify" it as much as possible. Thus we will deal with the
construction of the optimal system of subalgebras of ${g}$.
To compute the adjoint representation, we use the Lie series
$\displaystyle{\mathrm{Ad}}(\exp(\varepsilon{\mathbf{v}}_{i})){\mathbf{v}}_{j}={\mathbf{v}}_{j}-\varepsilon[{\mathbf{v}}_{i},{\mathbf{v}}_{j}]+\frac{\varepsilon^{2}}{2}[{\mathbf{v}}_{i},[{\mathbf{v}}_{i},{\mathbf{v}}_{j}]]-\cdots,$
(23)
where $[{\mathbf{v}}_{i},{\mathbf{v}}_{j}]$ is the commutator of the Lie
algebra, $\varepsilon$ is a parameter, and $i,j=1,2,3$. This yields Table
2.
Table 2: Adjoint representation table of the infinitesimal generators ${\mathbf{v}}_{i}$ ${\mathrm{A}d}(\exp(\varepsilon{\mathbf{v}}_{i})){\mathbf{v}}_{j}$ | ${\mathbf{v}}_{1}$ | ${\mathbf{v}}_{2}$ | ${\mathbf{v}}_{3}$
---|---|---|---
${\mathbf{v}}_{1}$ | ${\mathbf{v}}_{1}$ | ${\mathbf{v}}_{2}$ | ${\mathbf{v}}_{3}$
${\mathbf{v}}_{2}$ | ${\mathbf{v}}_{1}$ | ${\mathbf{v}}_{2}$ | ${\mathbf{v}}_{3}-\varepsilon{\mathbf{v}}_{1}$
${\mathbf{v}}_{3}$ | ${\mathbf{v}}_{1}$ | ${\mathbf{v}}_{2}+\varepsilon{\mathbf{v}}_{1}$ | ${\mathbf{v}}_{3}$
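Because $[\textbf{v}_{2},[\textbf{v}_{2},\textbf{v}_{3}]]=[\textbf{v}_{2},\textbf{v}_{1}]=0$, the series (23) terminates after the linear term for this algebra. The following sympy sketch (an illustrative addition of ours) reproduces the entry $\mathrm{Ad}(\exp(\varepsilon\textbf{v}_{2}))\textbf{v}_{3}=\textbf{v}_{3}-\varepsilon\textbf{v}_{1}$ of Table 2 from the truncated series, working with coordinate vectors in the basis $(\textbf{v}_{1},\textbf{v}_{2},\textbf{v}_{3})$.

```python
# Truncated Lie series (23) for Ad(exp(eps*v2)) acting on v3, using only the
# structure constant [v2, v3] = v1 read off from Table 1.
import sympy as sp

eps = sp.symbols('epsilon')
ad_v2 = sp.Matrix([[0, 0, 1],    # column 3: coordinates of ad(v2)v3 = v1
                   [0, 0, 0],
                   [0, 0, 0]])
# Series (23): I - eps*ad(v2) + (eps**2/2)*ad(v2)**2 - ... ; here ad(v2)**2 = 0.
Ad = sp.eye(3) - eps*ad_v2 + sp.Rational(1, 2)*eps**2*ad_v2**2
v3 = sp.Matrix([0, 0, 1])        # coordinates of v3
print(Ad*v3)                     # Matrix([[-epsilon], [0], [1]]) = v3 - epsilon*v1
```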
### Theorem 4.3
An optimal system of one-dimensional Lie subalgebras of equation (1) is
provided by 1) $\;\textbf{v}_{2}$, 2) $\;\textbf{v}_{3}+\alpha\textbf{v}_{2}$, where $\alpha\in{{R}}$.
### Proof:
Consider the symmetry algebra ${g}$ of the equation (1) whose adjoint
representation was determined in table 2 and
$\displaystyle{\mathbf{V}}=a_{1}{\mathbf{v}}_{1}+a_{2}{\mathbf{v}}_{2}+a_{3}{\mathbf{v}}_{3}.$
(24)
is a nonzero vector field in ${g}$. We will simplify as many of the
coefficients $a_{i}$ as possible through judicious applications of adjoint
maps to ${\mathbf{V}}$. Suppose first that $a_{3}\neq 0$. Scaling
${\mathbf{V}}$ if necessary we can assume that $a_{3}=1$. Referring to table
2, if we act on such a ${\mathbf{V}}$ by $Ad(\exp(a_{1}{\mathbf{v}}_{2}))$, we
can make the coefficient of ${\mathbf{v}}_{1}$ vanish and the vector field
${\mathbf{V}}$ takes the form
$\displaystyle{\mathbf{V}^{\prime}}=Ad(\exp(a_{1}{\mathbf{v}}_{2})){\mathbf{V}}=a^{\prime}_{2}{\mathbf{v}}_{2}+{\mathbf{v}}_{3}.$
(25)
for a certain scalar ${a^{\prime}}_{2}$. So, depending on the sign of
${a^{\prime}}_{2}$, we can make the coefficient of ${\mathbf{v}}_{2}$ either
$+1$, $-1$ or $0$. In other words, every one-dimensional subalgebra generated by a
${\mathbf{V}}$ with $a_{3}\neq 0$ is equivalent to one spanned by either
$\textbf{v}_{3}+\textbf{v}_{2}$, $\textbf{v}_{3}-\textbf{v}_{2}$ or
$\textbf{v}_{3}$.
The remaining one-dimensional subalgebras are spanned by vectors of the above
form with $a_{3}=0$. If $a_{2}\neq 0$, we scale to make $a_{2}=1$, and then
the vector field ${\mathbf{V}}$ takes the form
$\displaystyle{\mathbf{V}^{\prime\prime}}={a^{\prime\prime}}_{1}{\mathbf{v}}_{1}+{\mathbf{v}}_{2}.$
(26)
for a certain scalar ${a^{\prime\prime}}_{1}$. Similarly, we can make
${a^{\prime\prime}}_{1}$ vanish, so every one-dimensional subalgebra generated by a
${\mathbf{V}}$ with $a_{3}=0$ is equivalent to the subalgebra spanned by
$\textbf{v}_{2}$. $\mathchar 1027\relax$
### Theorem 4.4
An optimal system of two-dimensional Lie subalgebras of equation (1) is
provided by
$<\alpha\textbf{v}_{2}+\textbf{v}_{3},\beta\textbf{v}_{1}+\gamma\textbf{v}_{3}>$.
The symmetry group method will now be applied to equation (1) in order to reduce it
directly to ordinary differential equations. To do this, particular linear
combinations of infinitesimals are considered and their corresponding
invariants are determined.
Equation (1) is expressed in the coordinates $(x,t,u)$, so to reduce this
equation we search for its form in specific new coordinates. Those coordinates
will be constructed by searching for independent invariants $(\chi,\zeta)$
corresponding to the infinitesimal generator. Then, using the chain rule, the
expression of the equation in the new coordinates yields the reduced
equation.
In what follows, we begin the reduction process of equation (1).
### 4.5 Galilean-Invariant Solutions.
First, consider $\textbf{v}_{3}=t\,\partial_{x}+\frac{1}{a}\partial_{u}$. To
determine the independent invariants $I$, we need to solve the first-order partial
differential equation $\textbf{v}_{3}(I)=0$; that is, the invariants $\zeta$ and
$\chi$ can be found by integrating the corresponding characteristic system,
which is
$\displaystyle\frac{dt}{0}=\frac{dx}{t}=\frac{a\,du}{1}.$ (27)
The obtained invariants are given by
$\displaystyle\chi=t,\qquad\zeta=u-\frac{x}{a\,t}.$ (28)
Therefore, a solution of our equation in this case takes the form
$\displaystyle u=\zeta(\chi)+\frac{x}{a\,t}.$ (29)
The derivatives of $u$ are given in terms of $\zeta$ and $\chi$ as
$\displaystyle u_{x}=\frac{1}{a\,t},\quad
u_{x^{2}}=u_{x^{3}}=u_{x^{4}}=u_{x^{5}}=0,\quad
u_{t}=\zeta_{\chi}-\frac{1}{a\,t^{2}}\,x.$ (30)
Substituting (30) into equation (1), we obtain the first-order ordinary
differential equation
$\displaystyle\zeta_{\chi}+\frac{1}{\chi}\,\zeta=0.$ (31)
The solution of this equation is $\zeta=\frac{c_{1}}{\chi}$. Consequently, we
obtain that
$\displaystyle u(x,t)=\frac{x+a\,c_{1}}{a\,t}.$ (32)
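As an independent cross-check (an addition of ours, not part of the original text), one can verify with sympy that (32) indeed satisfies equation (1):

```python
# Check that u(x,t) = (x + a*c1)/(a*t) satisfies
# u_t + a*u*u_x + b*u_{xxx} + c*u_{xxxx} + d*u_{xxxxx} - e*u_{xx} = 0.
import sympy as sp

x, t, a, b, c, d, e, c1 = sp.symbols('x t a b c d e c1')
u = (x + a*c1)/(a*t)   # the Galilean-invariant solution (32)

lhs = (sp.diff(u, t) + a*u*sp.diff(u, x) + b*sp.diff(u, x, 3)
       + c*sp.diff(u, x, 4) + d*sp.diff(u, x, 5) - e*sp.diff(u, x, 2))
print(sp.simplify(lhs))   # prints 0
```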
### 4.6 Travelling wave solutions.
The invariants of
$\textbf{v}_{2}+c_{0}\,\textbf{v}_{1}=c_{0}\,\partial_{x}+\partial_{t}$ are
$\chi=x-c_{0}\,t$ and $\zeta=u$ so the reduced form of equation (1) is
$\displaystyle-
c_{0}\,\zeta_{\chi}+a\,\zeta\,\zeta_{\chi}+b\,\zeta_{\chi^{3}}+c\,\zeta_{\chi^{4}}+d\,\zeta_{\chi^{5}}-e\,\zeta_{\chi^{2}}=0.$
(33)
The family of periodic solutions of Eq. (33) when $a=1$ takes the following
form (see [8]):
$\displaystyle\zeta=a_{0}+A\,sn^{4}\\{m\,\chi,k\\}+B\,sn\\{m\,\chi,k\\}\,\frac{d}{d\chi}sn\\{m\,\chi,k\\},$
(34)
where $sn\\{m\,\chi,k\\}$ is the Jacobi elliptic function.
### 4.7
The invariants of
$\textbf{v}_{3}+\beta\textbf{v}_{2}=t\,\partial_{x}+\beta\partial_{t}+\frac{1}{a}\partial_{u}$
are $\chi=x-\frac{t^{2}}{2\beta}$ and $\zeta=u-\frac{t}{a\beta}$ so the
reduced form of equation (1) is
$\displaystyle\frac{1}{a\beta}+a\,\zeta\,\zeta_{\chi}+b\,\zeta_{\chi^{3}}+c\,\zeta_{\chi^{4}}+d\,\zeta_{\chi^{5}}-e\,\zeta_{\chi^{2}}=0.$
(35)
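As a cross-check of this reduction (an addition of ours), substituting $u=\zeta(\chi)+\frac{t}{a\beta}$ with $\chi=x-\frac{t^{2}}{2\beta}$ into (1) with sympy shows that the contributions containing $t$ explicitly cancel between $u_{t}$ and $a\,u\,u_{x}$, so the reduced equation involves $\chi$ only:

```python
# Substitute u = zeta(chi) + t/(a*beta), chi = x - t**2/(2*beta), into (1).
import sympy as sp

x, t, a, b, c, d, e, beta = sp.symbols('x t a b c d e beta')
zeta = sp.Function('zeta')
chi = x - t**2/(2*beta)
u = zeta(chi) + t/(a*beta)

lhs = (sp.diff(u, t) + a*u*sp.diff(u, x) + b*sp.diff(u, x, 3)
       + c*sp.diff(u, x, 4) + d*sp.diff(u, x, 5) - e*sp.diff(u, x, 2))
print(sp.simplify(lhs))
# prints 1/(a*beta) + a*zeta*zeta' + b*zeta''' + c*zeta'''' + d*zeta'''''
# - e*zeta'', with every derivative evaluated at chi = x - t**2/(2*beta)
```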
### 4.8
The invariants of $\textbf{v}_{2}=\partial_{t}$ are $\chi=x$ and $\zeta=u$
then the reduced form of equation (1) is
$\displaystyle
a\,\zeta\,\zeta_{\chi}+b\,\zeta_{\chi^{3}}+c\,\zeta_{\chi^{4}}+d\,\zeta_{\chi^{5}}-e\,\zeta_{\chi^{2}}=0.$
(36)
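A side remark (ours): the left-hand side of (36) is an exact $\chi$-derivative, so (36) can be integrated once to $\frac{a}{2}\,\zeta^{2}+b\,\zeta_{\chi^{2}}+c\,\zeta_{\chi^{3}}+d\,\zeta_{\chi^{4}}-e\,\zeta_{\chi}=k_{0}$, with $k_{0}$ an integration constant. A quick sympy confirmation:

```python
# Differentiating the once-integrated form with respect to chi reproduces (36).
import sympy as sp

chi, a, b, c, d, e = sp.symbols('chi a b c d e')
z = sp.Function('zeta')(chi)

first_integral = (a*z**2/2 + b*z.diff(chi, 2) + c*z.diff(chi, 3)
                  + d*z.diff(chi, 4) - e*z.diff(chi))
eq36 = (a*z*z.diff(chi) + b*z.diff(chi, 3) + c*z.diff(chi, 4)
        + d*z.diff(chi, 5) - e*z.diff(chi, 2))
print(sp.simplify(first_integral.diff(chi) - eq36))   # prints 0
```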
### 4.9
The invariants of $\textbf{v}_{1}=\partial_{x}$ are $\chi=t$ and $\zeta=u$,
so the reduced form of equation (1) is $\zeta_{\chi}=0$, and the solution
of this equation is $u(x,t)=\mathrm{const}$.
## Acknowledgment
This research was supported by the Islamic Azad University, Karaj Branch.
## References
* [1] G.W. Bluman and J.D. Cole, Similarity Methods for Differential Equations, Applied Mathematical Sciences, No.13, Springer, New York, 1974.
* [2] G.W. Bluman and S. Kumei, Symmetries and Differential Equations, Springer, New York, 1989.
* [3] C.G. Caro, T.J. Pedly, R.C. Schroter and W.A. Seed, The mechanics of the circulation, Oxford: Oxford University Press, 1978.
* [4] Y.C. Fung, Biomechanics: mechanical properties of living tissues, New York, Springer-Verlag, 1993.
* [5] I.L. Freire and A.C. Gilli Martins, Symmetry coefficients of semilinear PDEs, arXiv: 0803.0865v1, 2008.
* [6] N.H. Ibragimov, (Editor), CRC Handbook of Lie Group Analysis of Differential Equations, Vol. 1, Symmetries, Exact Solutions and Conservation Laws, CRC Press, Boca Raton, 1994.
* [7] S. V. Khabirov, Classification of Three-Dimensional Lie Algebras in $R^{3}$ and Their Second-Order Differential Invariants, Lobachevskii Journal of Mathematics, 31(2)(2010), 152-156.
* [8] N.A. Kudryashov and D.I. Sinelshchikov, Nonlinear evolution equation for describing waves in a viscoelastic tube, Commun Nonlinear Sci Numer Simulat, 16 (2011), 2390-2396.
* [9] N.A. Kudryashov and I.L. Chernyavskii, Nonlinear waves in fluid flow through a viscoelastic tube, Fluid Dynam, 41(1)(2006), 49-62.
* [10] S. Lie, Theories der Tranformationgruppen, Dritter und Letzter Abschnitt, Teubner, Leipzig, 1893.
* [11] P.J. Olver, Applications of Lie Groups to Differential Equations, Springer, New York, 1993.
* [12] A.D. Polyanin, and V.F. Zaitsev, Handbook of Nonlinear Partial Differential Equations, Chapman & Hall/CRC, Boca Raton, 2004.
|
arxiv-papers
| 2011-05-03T16:26:21 |
2024-09-04T02:49:18.566793
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mehdi Nadjafikhah and Vahid Shirvani-Sh",
"submitter": "Mehdi Nadjafikhah",
"url": "https://arxiv.org/abs/1105.0625"
}
|
1105.0629
|
# Classical and Nonclassical symmetries of the
(2+1)-dimensional Kuramoto-Sivashinsky equation
Mehdi Nadjafikhah m_nadjafikhah@iust.ac.ir Fatemeh Ahangari
fa_ahanagari@iust.ac.ir School of Mathematics, Iran University of Science and
Technology, Narmak, Tehran 1684613114, Iran.
###### Abstract
In this paper, we have studied the problem of determining the largest possible
set of symmetries for an important example of nonlinear dynamical system: the
Kuramoto-Sivashinsky (K-S) model in two spatial and one temporal dimensions.
By applying the classical symmetry method for the K-S model, we have found the
classical symmetry operators. Also, the structure of the Lie algebra of
symmetries is discussed and the optimal system of subalgebras of the equation
is constructed. The Lie invariants associated to the symmetry generators as
well as the corresponding similarity reduced equations are also pointed out.
By applying the nonclassical symmetry method to the K-S model, we concluded
that the analyzed model does not admit supplementary symmetries of nonclassical
type. Using this procedure, only the classical Lie operators were
generated.
###### keywords:
(2+1)-dimensional Kuramoto-Sivashinsky equation, Classical Symmetries,
Invariant Solutions, Optimal System, Similarity Reduced Equations,
Nonclassical Symmetries.
††thanks: Corresponding author: Tel. +9821-73913426. Fax +9821-77240472.
## 1 Introduction
As is well known, the growth process of thin films commonly takes place far from
equilibrium and involves the interactions of a large number of particles.
Notable attention, both in theory and experiment, has been devoted to the
study of the mechanism of thin film growth (see [8] and references therein).
In order to describe the growth process mathematically with partial
differential equations for the surface height with respect to time and spatial
coordinates, several models have been proposed by researchers. These models are
based on the collective and statistical behaviour of deposited particles. One of
them is the Kuramoto-Sivashinsky (K-S) model, in which surface
tension, diffusion, nonlinear effects and random deposition are taken into
account.
The K-S equation has been derived, e.g., in the context of chemical turbulence
[9]. As mentioned above, it is also important because it can describe the
flow of a falling fluid film [16]. The solutions of the K-S equation can
clarify the features of the evolution of surface morphology, which is
instructive for investigations of the mechanism of thin film growth.
The two-dimensional Kuramoto-Sivashinsky (K-S) model represents a nonlinear
dynamical system defined in a two-dimensional space $\\{x,y\\}$; the
dependent variable $h=h(x,y,t)$ satisfies a fourth-order partial differential
equation of the following form:
$\displaystyle\frac{\partial h}{\partial
t}=\nu\nabla^{2}h-\kappa\nabla^{4}h+\lambda{\mid\nabla
h\mid}^{2},\;\;\;\;\;\;\;\nabla=(\frac{\partial}{\partial
x},\frac{\partial}{\partial y}).$ (1.1)
where $h$ indicates the height of the interface. The prefactor $\nu$ is
proportional to the surface tension coefficient and $\nu\nabla^{2}h$ is
referred to as the surface tension term. The term $\kappa\nabla^{4}h$ is the
result of surface diffusion, which arises from the curvature-induced chemical
potential gradient; the prefactor $\kappa$ is proportional to the surface
diffusion coefficient. The
term $\lambda{\mid\nabla h\mid}^{2}$ represents the existence of overhangs and
vacancies during the deposition process (when $\lambda>0$). The combination of
the $\nu\nabla^{2}h$ and $\lambda{\mid\nabla h\mid}^{2}$ terms models the
desorption effect of the deposited atoms [8].
Some important results concerning the K-S equation are already known. For
example, fractal tracer distributions of particles advected in the velocity
field of the K-S equation were found in [7]. Hong-Ji et al. in [8] have studied
the evolution of (2+1)-dimensional surface morphology in the Kuramoto-Sivashinsky
(K-S) model by using a numerical simulation approach.
The symmetry group method plays a fundamental role in the analysis of
differential equations. The theory of Lie symmetry groups of differential
equations was first developed by Sophus Lie [12] at the end of the nineteenth
century, in what is now called the classical Lie method. Nowadays, application of Lie
transformation group theory for constructing the solutions of nonlinear
partial differential equations (PDEs) can be regarded as one of the most
active fields of research in the theory of nonlinear PDEs and applications.
The fact that symmetry reductions for many PDEs are unobtainable by applying
the classical symmetry method motivated the creation of several
generalizations of the classical Lie group method for symmetry reductions. The
nonclassical symmetry method of reduction was devised originally by Bluman and
Cole in 1969 [2], to find new exact solutions of the heat equation. The
description of the method is presented in [6, 11]. Many authors have used the
nonclassical method to solve PDEs. In [5] Clarkson and Mansfield have proposed
an algorithm for calculating the determining equations associated to the
nonclassical method. A new procedure for finding nonclassical symmetries has
been proposed by B$\hat{\mbox{i}}$l$\breve{\mbox{a}}$ and Niesen in [1].
Classical and nonclassical symmetries of nonlinear PDEs may be applied to
reduce the number of independent variables of the PDEs. In particular, the PDEs
can be reduced to ODEs. The ODEs may also have symmetries which enable us to
reduce the order of the equation, and we can integrate to obtain exact
solutions. In [4], R. Cimpoiasu et al. have discussed the classical and
nonclassical symmetries of the K-S model in the special case
$\nu=\kappa=\lambda=1$, but there are some problems because of some
computational mistakes. In this paper, we will try to analyze the problem of
the symmetries of the K-S equation in the general case by considering all
three prefactors $\nu,\kappa,\lambda$. Meanwhile, since this paper is a
generalization of [4], the computational mistakes that exist in [4] will be
corrected and some further facts are added.
The structure of the present paper is as follows: In section 2, using the
basic Lie symmetry method, the most general Lie point symmetry group of the K-S
equation is determined. In section 3, some results following from the structure of
the Lie algebra of symmetries are given. Section 4 is devoted to obtaining the
one-parameter subgroups and the most general group-invariant solutions of the K-S
equation. In section 5, we construct the optimal system of one-dimensional
subalgebras. Lie invariants and similarity reduced equations corresponding to
the infinitesimal symmetries of equation (1.1) are obtained in section 6.
Section 7 is devoted to the nonclassical symmetries of the K-S model,
i.e., symmetries generated when a supplementary condition, the invariant surface
condition, is imposed. Some concluding remarks are presented at the end of the
paper.
## 2 Classical symmetries of the K-S Equation
In this section, the classical Lie symmetry method is applied to the K-S
equation. First, we recall the general procedure for determining
symmetries for an arbitrary system of partial differential equations [8, 9].
To begin, consider a general system of partial differential equation
containing $q$ dependent and $p$ independent variables as follows
$\displaystyle\Delta_{\mu}(x,u^{(n)})=0\ ,\ \ \ \ \ \ \mu=1,...,r,$ (2.2)
where $u^{(n)}$ represents all the derivatives of $u$ of all orders from 0 to
$n$. The one-parameter Lie group of transformations
$\displaystyle\bar{x}^{i}=x^{i}+\varepsilon\xi^{i}(x,u)+O(\varepsilon^{2}),\quad\bar{u}^{\alpha}=u^{\alpha}+\varepsilon\varphi^{\alpha}(x,u)+O(\varepsilon^{2}),\qquad
i=1,...,p,\;\;\alpha=1,...,q.$ (2.3)
where
$\xi^{i}=\frac{\partial\bar{x}^{i}}{\partial\varepsilon}|_{\varepsilon=0}$ and
$\varphi^{\alpha}=\frac{\partial\bar{u}^{\alpha}}{\partial\varepsilon}|_{\varepsilon=0}$,
are given. The action of the Lie group can be recovered from that of its
infinitesimal generators acting on the space of independent and dependent
variables. Hence, we consider the following general vector field
$\displaystyle V=\sum_{i=1}^{p}\xi^{i}(x,u)\frac{\rm\partial}{\partial
x^{i}}+\sum_{\alpha=1}^{q}\varphi^{\alpha}(x,u)\frac{\partial}{\partial
u^{\alpha}}$ (2.4)
The characteristic of the vector field $V$ is given by the function
$\displaystyle
Q^{\alpha}(x,u^{(1)})=\varphi^{\alpha}(x,u)-\sum_{i=1}^{p}\xi^{i}(x,u)\frac{\partial
u^{\alpha}}{\partial x^{i}},\ \ \alpha=1,...,q.$ (2.5)
Assume that the symmetry generator associated to (1.1) is given by
$\displaystyle
V:=\xi^{1}(x,y,t,h)\partial_{x}+\xi^{2}(x,y,t,h)\partial_{y}+\xi^{3}(x,y,t,h)\partial_{t}+\varphi(x,y,t,h)\partial_{h}$
(2.6)
The fourth prolongation of $V$ is the vector field
$\displaystyle
V^{(4)}=V+\varphi^{x}\partial_{h_{x}}+\varphi^{y}\partial_{h_{y}}+\varphi^{t}\partial_{h_{t}}+\varphi^{xx}\partial_{h_{xx}}+\varphi^{xt}\partial_{h_{xt}}+...+\varphi^{tttt}\partial_{h_{tttt}}$
(2.7)
with coefficients given by
$\displaystyle\varphi^{\iota}=D_{\iota}Q+\xi^{1}h_{x\iota}+\xi^{2}h_{y\iota}+\xi^{3}h_{t\iota},\quad\quad\quad\varphi^{\iota\jmath}=D_{\imath}(D_{\jmath}Q)+\xi^{1}h_{x\imath\jmath}+\xi^{2}h_{y\imath\jmath}+\xi^{3}h_{t\imath\jmath},$
(2.8)
where $Q=\varphi-\xi^{1}h_{x}-\xi^{2}h_{y}-\xi^{3}h_{t}$ is the characteristic
of the vector field $V$ given by (2.5), $D_{i}$ represents the total derivative,
and subscripts of $h$ denote derivatives with respect to the respective coordinates.
The indices $\imath$ and $\jmath$ above range over the coordinates $x$, $y$ and $t$. By
the infinitesimal criterion of invariance (see, e.g., [13]), the invariance condition for the K-S equation is given by the
relation:
$\displaystyle\mathbf{V}^{(4)}[h_{t}+\nu(h_{2x}+h_{2y})+\kappa(h_{4x}+2h_{(2x)(2y)}+h_{4y})-\lambda(h_{x}^{2}+h_{y}^{2})]=0$
(2.9)
Hence, the invariance condition (2.9) is equivalent to the following equation:
$\displaystyle\varphi^{t}+\nu(\varphi^{2x}+\varphi^{2y})+\kappa(\varphi^{4x}+2\varphi^{(2x)(2y)}+\varphi^{4y})-2\lambda(\varphi^{x}h_{x}+\varphi^{y}h_{y})=0$
Substituting (2.8) into the invariance condition above, we are left with a polynomial
equation involving the various derivatives of $h(x,y,t)$ whose coefficients
are certain derivatives of $\xi^{1}$, $\xi^{2}$, $\xi^{3}$ and $\varphi$.
Since $\xi^{1}$, $\xi^{2}$, $\xi^{3}$ and $\varphi$ depend only on $x$, $y$,
$t$, $h$, we can equate the individual coefficients to zero, leading to the
complete set of determining equations:
$\displaystyle\xi^{1}_{h}$
$\displaystyle=0,\quad\quad\xi^{1}_{x}=0,\quad\quad\xi^{1}_{y}+\xi^{2}_{x}=0,\quad\quad\xi^{1}_{2t}=0$
$\displaystyle\xi^{2}_{h}$
$\displaystyle=0,\quad\quad\xi^{2}_{y}=0,\quad\quad\xi^{2}_{2t}=0,\quad\quad\xi^{2}_{2x}=0$
$\displaystyle\xi^{3}_{t}$
$\displaystyle=0,\quad\quad\xi^{3}_{h}=0,\quad\quad\xi^{3}_{x}=0,\quad\quad\xi^{3}_{y}=0,\quad\quad\xi^{2}_{xt}=0$
$\displaystyle\varphi_{t}$
$\displaystyle=0,\quad\quad\varphi_{h}=0,\quad\quad(2\lambda-2)\varphi_{x}-\xi^{1}_{t}=0,\quad\quad(2\lambda-2)\varphi_{y}-\xi^{2}_{t}=0$
By solving this system of PDEs, we find that:
###### Theorem 1.
The Lie group of point symmetries of K-S (1.1) has a Lie algebra generated by
the vector fields $\mathbf{V}=\xi\frac{\partial}{\partial
x}+\tau\frac{\partial}{\partial t}+\varphi\frac{\partial}{\partial h}$, where
$\displaystyle\xi^{1}(x,y,t,h)$
$\displaystyle=c_{4}y+c_{2}t+c_{3},\quad\quad\xi^{2}(x,y,t,h)=-c_{4}x+c_{5}t+c_{6},$
$\displaystyle\xi^{3}(x,y,t,h)$
$\displaystyle=c_{1},\quad\quad\varphi(x,y,t,h)=\big{(}\frac{x}{2\lambda}\big{)}c_{2}+\big{(}\frac{y}{2\lambda}\big{)}c_{5}+c_{7}.$
and $c_{i},\ i=1,...,7$ are arbitrary constants.
###### Corollary 2.
Infinitesimal generators of every one parameter Lie group of point symmetries
of the K-S are:
$\displaystyle\mathbf{V}_{1}$ $\displaystyle=\frac{\partial}{\partial
x},\;\;\;\;\;\mathbf{V}_{2}=\frac{\partial}{\partial
y},\;\;\;\;\;\mathbf{V}_{3}=\frac{\partial}{\partial
t},\;\;\;\;\;\mathbf{V}_{4}=\frac{\partial}{\partial h},\;\;\;\;\;$
$\displaystyle\mathbf{V}_{5}$ $\displaystyle=y\frac{\partial}{\partial
x}-x\frac{\partial}{\partial
y},\;\;\;\;\;\mathbf{V}_{6}=t\frac{\partial}{\partial
x}+\frac{x}{2\lambda}\frac{\partial}{\partial
h},\;\;\;\;\;\mathbf{V}_{7}=t\frac{\partial}{\partial
y}+\frac{y}{2\lambda}\frac{\partial}{\partial h}$
The commutator table of symmetry generators of the K-S is given in Table 1,
where the entry in the $i^{\mbox{th}}$ row and $j^{\mbox{th}}$ column is
defined as $[V_{i},V_{j}]=V_{i}V_{j}-V_{j}V_{i},\ \ i,j=1,...,7.$
$\;\;\;\;\;\;\;\quad\quad\quad\Box$
Table 1: Commutation relations satisfied by infinitesimal generators $\displaystyle\begin{array}[]{l | l l l l l l l}\hline\cr&\hskip 17.07182ptV_{1}&V_{2}&V_{3}&V_{4}&V_{5}&V_{6}&V_{7}\\\ \hline\cr V_{1}&\hskip 17.07182pt0&0&0&0&{\bf V_{2}}&\big{(}\frac{1}{2\lambda}\big{)}{\bf V_{4}}&0\\\ V_{2}&\hskip 17.07182pt0&0&0&0&{\bf V_{1}}&0&\big{(}\frac{1}{2\lambda}\big{)}{\bf V_{4}}\\\ V_{3}&\hskip 17.07182pt0&0&0&0&0&{\bf V_{1}}&{\bf V_{2}}\\\ V_{4}&\hskip 17.07182pt0&0&0&0&0&0&0\\\ V_{5}&\hskip 14.22636pt{\bf V_{2}}&-{\bf V_{1}}&0&0&0&{\bf V_{7}}&-{\bf V_{6}}\\\ V_{6}&\hskip 4.26773pt\big{(}\frac{1}{2\lambda}\big{)}{\bf V_{4}}&0&-{\bf V_{1}}&0&-{\bf V_{7}}&0&0\\\ V_{7}&\hskip 17.07182pt0&\big{(}\frac{-1}{2\lambda}\big{)}{\bf V_{4}}&-{\bf V_{2}}&0&{\bf V_{6}}&0&0\\\ \hline\cr\end{array}$
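Individual entries of Table 1 can be spot-checked symbolically; the following sympy script is an illustrative addition of ours and verifies, for instance, $[V_{1},V_{6}]=\frac{1}{2\lambda}V_{4}$ and $[V_{5},V_{6}]=V_{7}$ by letting the generators of Corollary 2 act on a test function.

```python
# Spot-check [V1, V6] = (1/(2*lambda))*V4 and [V5, V6] = V7 for the K-S generators.
import sympy as sp

x, y, t, h, lam = sp.symbols('x y t h lambda')
F = sp.Function('F')(x, y, t, h)   # arbitrary test function

def vf(xi1, xi2, xi3, phi):
    return lambda G: (xi1*sp.diff(G, x) + xi2*sp.diff(G, y)
                      + xi3*sp.diff(G, t) + phi*sp.diff(G, h))

V1 = vf(1, 0, 0, 0)
V4 = vf(0, 0, 0, 1)
V5 = vf(y, -x, 0, 0)
V6 = vf(t, 0, 0, x/(2*lam))
V7 = vf(0, t, 0, y/(2*lam))

def comm(A, B):
    return sp.simplify(A(B(F)) - B(A(F)))

print(sp.simplify(comm(V1, V6) - V4(F)/(2*lam)))   # prints 0
print(sp.simplify(comm(V5, V6) - V7(F)))           # prints 0
```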
## 3 The Structure of the Lie algebra of Symmetries
In this part, we determine the structure of symmetry Lie algebra of the K-S
equation.
${g}$ has no non-trivial Levi decomposition of the form ${g}={r}\mathchar
9582\relax{g}_{1}$, because the radical of ${g}$ is ${g}$ itself, i.e. if
${r}$ is the radical of ${g}$, then ${g}={r}$.
The Lie algebra ${g}$ is solvable and non-semisimple. It is solvable, because
if ${g}^{(1)}=[{g},{g}]=<[V_{i},V_{j}]>$, we have:
$\displaystyle{g}^{(1)}=[{g},{g}]=<-V_{2},-\frac{1}{2\lambda}V_{4},V_{1},V_{7},-V_{6}>,\
\mbox{and}$
$\displaystyle{g}^{(2)}=[{g}^{(1)},{g}^{(1)}]=<\frac{1}{2\lambda}V_{4}>,$
so we have the following chain of ideals
${g}^{(1)}\supset{g}^{(2)}\supset\\{0\\}$. Also, ${g}$ is not semisimple,
because its killing form
$\displaystyle\pmatrix{0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0\cr
0&0&0&0&0&0&0\cr 0&0&0&0&-4&0&0\cr 0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0}$
is degenerate.
Taking into account the table of commutators, ${g}$ has two abelian subalgebras,
of dimensions four and two, spanned by $<V_{1},V_{2},V_{3},V_{4}>$
and $<V_{6},V_{7}>$, respectively, such that the first one is an ideal in
${g}$.
## 4 Reduction of the Equation
The equation (1.1) can be regarded as a submanifold of the jet space
$J^{4}({R}^{3},{R})$. So we can find the most general group of invariant
solutions of equation (1.1). To obtain the group transformation which is
generated by the infinitesimal generators ${\bf
V_{i}}=\xi^{1}_{i}{\partial_{x}}+\xi^{2}_{i}{\partial_{y}}+\xi^{3}_{i}\partial_{t}+\varphi_{i}\partial_{h}$
for $i=1,...,7$, we need to solve the seven systems of first order ordinary
differential equations,
$\displaystyle\frac{d\bar{x}(s)}{ds}$
$\displaystyle=\xi^{1}_{i}(\bar{x}(s),\bar{y}(s),\bar{t}(s),\bar{h}(s)),\ \ \
\ \ \bar{x}(0)=x,$ (4.11) $\displaystyle\frac{d\bar{y}(s)}{ds}$
$\displaystyle=\xi^{2}_{i}(\bar{x}(s),\bar{y}(s),\bar{t}(s),\bar{h}(s)),\ \ \
\ \ \bar{y}(0)=y,$ $\displaystyle\frac{d\bar{t}(s)}{ds}$
$\displaystyle=\xi^{3}_{i}(\bar{x}(s),\bar{y}(s),\bar{t}(s),\bar{h}(s)),\ \ \
\ \ \bar{t}(0)=t,$ $\displaystyle\frac{d\bar{u}(s)}{ds}$
$\displaystyle=\varphi_{i}(\bar{x}(s),\bar{y}(s),\bar{t}(s),\bar{u}(s)),\ \ \
\ \ \bar{u}(0)=u,\ \ \ \ \ \ \ \ i=1,...,7.$
Exponentiating the infinitesimal symmetries of equation (1.1), we get the one-parameter
groups $G_{k}(s)$ generated by $V_{k}$ for $k=1,...,7$.
###### Theorem 3.
The one-parameter groups $G_{i}(s):M\longrightarrow M$ generated by the
$V_{i},\ i=1,...,7$, are given in the following table:
$\displaystyle G_{1}(s)$ $\displaystyle\ :\ (x,y,t,u)\longmapsto(x+s,y,t,u),$
$\displaystyle G_{2}(s)$ $\displaystyle\ :\ (x,y,t,u)\longmapsto(x,y+s,t,u),$
$\displaystyle G_{3}(s)$ $\displaystyle\ :\ (x,y,t,u)\longmapsto(x,y,t+s,u),$
$\displaystyle G_{4}(s)$ $\displaystyle\ :\ (x,y,t,u)\longmapsto(x,y,t,u+s),$
$\displaystyle G_{5}(s)$ $\displaystyle\ :\
(x,y,t,u)\longmapsto(x\cos(s)+y\sin(s),y\cos(s)-x\sin(s),t,u)$ $\displaystyle
G_{6}(s)$ $\displaystyle\ :\
(x,y,t,u)\longmapsto(x+st,y,t,\big{(}\frac{1}{4\lambda-4}\big{)}(ts^{2}+2xs+4u\lambda-4u)),$
$\displaystyle G_{7}(s)$ $\displaystyle\ :\
(x,y,t,u)\longmapsto(x,y+st,t,\big{(}\frac{1}{4\lambda-4}\big{)}(ts^{2}+2xs+4u\lambda-4u)).$
where the entries give the transformed point
$\exp(sV_{i})(x,y,t,u)=(\bar{x},\bar{y},\bar{t},\bar{u}).$
These operators, which generate invariance of the evolution equation (1.1), correspond
to the following transformations: $V_{1}$ and $V_{2}$ generate spatial translations, $V_{3}$
a temporal translation, $V_{4}$ a gauge transformation (a constant shift of $h$), $V_{5}$ a
spatial rotation, and $V_{6}$ and $V_{7}$ Galilean boosts.
Recall that, in general, to each one-parameter subgroup of the full symmetry
group of a system there corresponds a family of solutions, called invariant
solutions. Consequently, we can state the following theorem:
###### Theorem 4.
If $u=f(x,y,t)$ is a solution of equation (1.1), so are the functions
$\displaystyle G_{1}(s)$ $\displaystyle.f(x,y,t)=f(x+s,y,t),$ $\displaystyle
G_{2}(s)$ $\displaystyle.f(x,y,t)=f(x,y+s,t),$ $\displaystyle G_{3}(s)$
$\displaystyle.f(x,y,t)=f(x,y,t+s),$ $\displaystyle G_{4}(s)$
$\displaystyle.f(x,y,t)=f(x,y,t)-s,$ $\displaystyle G_{5}(s)$
$\displaystyle.f(x,y,t)=f(x\cos(s)+y\sin(s),y\cos(s)-x\sin(s),t),$
$\displaystyle G_{6}(s)$
$\displaystyle.f(x,y,t)=f(x+st,y,t)-\big{(}\frac{2xs+ts^{2}}{4\lambda-4}\big{)},$
$\displaystyle G_{7}(s)$
$\displaystyle.f(x,y,t)=f(x,y+st,t)-\big{(}\frac{2ys+ts^{2}}{4\lambda-4}\big{)}.$
Thus, from above theorem we conclude that:
###### Corollary 5.
For an arbitrary combination $V=\sum_{i=1}^{7}\epsilon_{i}V_{i}\in{g}$, the K-S equation
has the following solution
$\displaystyle
u=e^{2\epsilon_{7}}f(xe^{\epsilon_{7}}+t\epsilon_{5}+\epsilon_{1},ye^{\epsilon_{7}}\cos(\epsilon_{6})+z\sin(\epsilon_{6})+\epsilon_{2},-yze^{\epsilon_{7}}\sin(\epsilon_{6})+z\cos(\epsilon_{6})+\epsilon_{3},te^{3\epsilon_{7}}+\epsilon_{4})-\frac{\epsilon_{5}}{\alpha}$
(4.14)
where $\epsilon_{i}$ are arbitrary real numbers.
## 5 Classification of Subalgebras for the K-S equation
Let $\Delta$ be a system of differential equations with the symmetry Lie group
$G$. Now, $G$ operates on the set of the solutions of $\Delta$ denoted by $S$.
Let $s\cdot G$ be the orbit of $s$, and $H$ be an $r-$dimensional subgroup of
$G$. Hence, $H$-invariant solutions $s\in S$ are characterized by the equality
$s\cdot H=\\{s\\}$. If $h\in G$ is a transformation and $s\in S$, then
$h\cdot(s\cdot H)=(h\cdot s)\cdot(hHh^{-1})$. Consequently, every
$H-$invariant solution $s$ transforms into an $hHh^{-1}-$ invariant solution
(Proposition 3.6 of [13]).
Therefore, different invariant solutions are found from similar subgroups of
$G$. Thus, classification of $H-$invariant solutions is reduced to the problem
of classification of subgroups of $G$, up to similarity. An optimal system of
$r-$dimensional subgroups of $G$ is a list of conjugacy inequivalent
$r-$dimensional subgroups of $G$ with the property that any other subgroup is
conjugate to precisely one subgroup in the list. Similarly, a list of
$r-$dimensional subalgebras forms an optimal system if every $r-$dimensional
subalgebra of ${g}$ is equivalent to a unique member of the list under some
element of the adjoint representation: $\tilde{{h}}={\rm Ad}(g)\cdot{{h}},\
g\in G$.
Let $H$ and $\tilde{H}$ be connected, $r-$dimensional Lie subgroups of the Lie
group $G$ with corresponding Lie subalgebras ${{h}}$ and $\tilde{{h}}$ of the
Lie algebra ${{g}}$ of $G$. Then $\tilde{H}=gHg^{-1}$ are conjugate subgroups
if and only if $\tilde{{h}}={\rm Ad}(g)\cdot{{h}}$ are conjugate subalgebras
(Proposition 3.7 of [13]). Thus, the problem of finding an optimal system of
subgroups is equivalent to that of finding an optimal system of subalgebras,
and so we concentrate on it.
### 5.1 Optimal system of one-dimensional subalgebras of the K-S equation
There is clearly an infinite number of one-dimensional subalgebras of the K-S
Lie algebra, ${g}$, each of which may be used to construct a special solution
or class of solutions. So, it is impossible to use all the one-dimensional
subalgebras of the K-S equation to construct invariant solutions. However, a well-known
standard procedure [15] allows us to classify all the one-dimensional
subalgebras into subsets of conjugate subalgebras. This involves constructing
the adjoint representation group, which introduces a conjugacy relation in the
set of all one-dimensional subalgebras. In fact, for one-dimensional
subalgebras, the classification problem is essentially the same as the problem
of classifying the orbits of the adjoint representation. If we take only one
representative from each family of equivalent subalgebras, an optimal set of
subalgebras is created. The corresponding set of invariant solutions is then
the minimal list from which we can get all other invariant solutions of one-
dimensional subalgebras simply via transformations.
Each $V_{i},\ i=1,...,7$, of the basis symmetries generates an adjoint
representation (or interior automorphism) $\mathrm{Ad}(\exp(\varepsilon
V_{i}))$ defined by the Lie series
$\displaystyle\mathrm{Ad}(\exp(\varepsilon V_{i}))V_{j}=V_{j}-\varepsilon[V_{i},V_{j}]+\frac{\varepsilon^{2}}{2}[V_{i},[V_{i},V_{j}]]-\cdots$
(5.15)
where $[V_{i},V_{j}]$ is the commutator of the Lie algebra, $\varepsilon$ is a
parameter, and $i,j=1,\cdots,7$ ([13], page 199). In Table 2 we give all the
adjoint representations of the K-S Lie group, with the $(i,j)$ entry
indicating $\mathrm{Ad}(\exp(\varepsilon V_{i}))V_{j}$. Essentially, these
adjoint representations simply permute amongst "similar" one-dimensional
subalgebras. Hence, they are used to identify similar one-dimensional
subalgebras.
We can expect to simplify a given arbitrary element,
$\displaystyle
V=a_{1}V_{1}+a_{2}V_{2}+a_{3}V_{3}+a_{4}V_{4}+a_{5}V_{5}+a_{6}V_{6}+a_{7}V_{7}.$
(5.16)
of the K-S Lie algebra ${g}$. Note that the elements of ${g}$ can be
represented by vectors $a=(a_{1},...,a_{7})\in{{R}}^{7}$ since each of them
can be written in the form (5.16) for some constants $a_{1},...,a_{7}$. Hence,
the adjoint action can be regarded as (in fact is) a group of linear
transformations of the vectors $(a_{1},...,a_{7})$.
Table 2: Adjoint representation generated by the basis symmetries of the K-S Lie algebra $\displaystyle\begin{array}[]{l | l l l l l l l}\hline\cr Ad&\hskip 42.67912ptV_{1}&V_{2}&V_{3}&V_{4}\\\ \hline\cr V_{1}&\hskip 39.83368ptV_{1}&V_{2}&V_{3}&V_{4}\\\ V_{2}&\hskip 19.91684pt\quad\quad V_{1}&V_{2}&V_{3}&V_{4}\\\ V_{3}&\hskip 19.91684pt\quad\quad V_{1}&V_{2}&V_{3}&V_{4}\\\ V_{4}&\hskip 19.91684pt\quad\quad V_{1}&V_{2}&V_{3}&V_{4}\\\ V_{5}&\hskip 14.22636pt\cos(\varepsilon)V_{1}-\sin(\varepsilon)V_{2}&\cos(\varepsilon)V_{2}+\sin(\varepsilon)V_{1}&V_{3}&V_{4}\\\ V_{6}&\hskip 17.07182ptV_{1}+\big{(}\frac{\varepsilon}{2\lambda}\big{)}{V_{4}}&V_{2}&V_{3}+\varepsilon V_{1}+\big{(}\frac{\varepsilon^{2}}{4\lambda-4}\big{)}V_{4}&V_{4}\\\ V_{7}&\hskip 34.14322ptV_{1}&V_{2}+\big{(}\frac{\varepsilon}{2\lambda}\big{)}{V_{4}}&V_{3}+\varepsilon V_{2}+\big{(}\frac{\varepsilon^{2}}{4\lambda-4}\big{)}V_{4}&V_{4}\\\ \hline\cr\end{array}$
$\displaystyle\begin{array}[]{l | l l l }\hline\cr Ad&\hskip 42.67912ptV_{5}&V_{6}&V_{7}\\\ \hline\cr V_{1}&\hskip 28.45274ptV_{5}+\varepsilon V_{2}&V_{6}-\big{(}\frac{\varepsilon}{2\lambda}\big{)}{V_{4}}&V_{7}\\\ V_{2}&\hskip 28.45274ptV_{5}-\varepsilon V_{1}&V_{6}&V_{7}-\big{(}\frac{\varepsilon}{2\lambda}\big{)}{V_{4}}\\\ V_{3}&\hskip 39.83368ptV_{5}&V_{6}-\varepsilon V_{1}&V_{7}-\varepsilon V_{2}\\\ V_{4}&\hskip 39.83368ptV_{5}&V_{6}&V_{7}\\\ V_{5}&\hskip 39.83368ptV_{5}&\cos(\varepsilon)V_{6}-\sin(\varepsilon)V_{7}&\cos(\varepsilon)V_{7}+\sin(\varepsilon)V_{6}\\\ V_{6}&\hskip 28.45274ptV_{5}+\varepsilon V_{7}&V_{6}&V_{7}\\\ V_{7}&\hskip 28.45274ptV_{5}-\varepsilon V_{6}&{V_{6}}&{V_{7}}\\\ \hline\cr\end{array}$
Therefore, we can state the following theorem:
###### Theorem 6.
A one-dimensional optimal system of the K-S Lie algebra ${g}$ is given by
$\displaystyle(1)$ $\displaystyle:\ V_{2}+aV_{6},\hskip 102.43008pt(4):\
aV_{3}+V_{7},$ $\displaystyle(2)$ $\displaystyle:\ aV_{3}+bV_{5},\hskip
96.73918pt(5):\ V_{1}+aV_{3}+bV_{7},$ $\displaystyle(3)$ $\displaystyle:\
aV_{3}+V_{6},\hskip 102.43008pt(6):\ aV_{3}+V_{4}+bV_{5},$
where $a,b\in{{R}}$ and $a\neq 0$.
Proof: $F^{s}_{i}:{g}\to{g}$ defined by
$V\mapsto\mathrm{Ad}(\exp(s_{i}V_{i}))V$ is a linear map, for $i=1,\cdots,7$.
The matrix $M^{s}_{i}$ of $F^{s}_{i}$, $i=1,\cdots,7$, with respect to basis
$\\{V_{1},\cdots,V_{7}\\}$ is
$\displaystyle M_{1}^{s}=\pmatrix{1&0&0&0&0&0&0\cr 0&1&0&0&0&0&0\cr
0&0&1&0&0&0&0\cr 0&0&0&1&0&0&0\cr 0&s&0&0&1&0&0\cr 0&0&0&-s\zeta&0&1&0\cr
0&0&0&0&0&0&1}\ M_{2}^{s}=\pmatrix{1&0&0&0&0&0&0\cr 0&1&0&0&0&0&0\cr
0&0&1&0&0&0&0\cr 0&0&0&1&0&0&0\cr-s&0&0&0&1&0&0\cr 0&0&0&0&0&1&0\cr
0&0&0&-s\zeta&0&0&1}\ M_{3}^{s}=\pmatrix{1&0&0&0&0&0&0\cr 0&1&0&0&0&0&0\cr
0&0&1&0&0&0&0\cr 0&0&0&1&0&0&0\cr 0&0&0&0&1&0&0\cr-s&0&0&0&0&1&0\cr
0&-s&0&0&0&0&1}$ $\displaystyle\hskip
56.9055ptM_{4}^{s}=\pmatrix{1&0&0&0&0&0&0\cr 0&1&0&0&0&0&0\cr 0&0&1&0&0&0&0\cr
0&0&0&1&0&0&0\cr 0&0&0&0&1&0&0\cr 0&0&0&0&0&1&0\cr 0&0&0&0&0&0&1}\quad\quad
M_{5}^{s}=\pmatrix{C&-S&0&0&0&0&0\cr S&C&0&0&0&0&0\cr 0&0&1&0&0&0&0\cr
0&0&0&1&0&0&0\cr 0&0&0&0&1&0&0\cr 0&0&0&0&0&C&-S\cr 0&0&0&0&0&S&C}$
$\displaystyle\hskip 56.9055ptM_{6}^{s}=\pmatrix{1&0&0&s\zeta&0&0&0\cr
0&1&0&0&0&0&0\cr s&0&1&s^{2}\frac{\zeta}{2}&0&0&0\cr 0&0&0&1&0&0&0\cr
0&0&0&0&1&0&s\cr 0&0&0&0&0&1&0\cr 0&0&0&0&0&0&1}\quad\quad
M_{7}^{s}=\pmatrix{1&0&0&0&0&0&0\cr 0&1&0&s\zeta&0&0&0\cr
0&s&1&s^{2}\frac{\zeta}{2}&0&0&0\cr 0&0&0&1&0&0&0\cr 0&0&0&0&1&-s&0\cr
0&0&0&0&0&1&0\cr 0&0&0&0&0&0&1}$
respectively, where $S=\sin s$, $C=\cos s$ and $\zeta=\frac{1}{2\lambda}$. Let
$V=\sum_{i=1}^{7}a_{i}V_{i}$, then
$\displaystyle F^{s_{7}}_{7}\circ F^{s_{6}}_{6}\circ\cdots\circ
F^{s_{1}}_{1}\;:\;V\;\mapsto\;$ (5.19) $\displaystyle\hskip
11.38109pt\big{(}\cos(s_{5})a_{1}-\sin(s_{5})a_{2}+(\zeta
s_{6}\cos(s_{5})-\zeta s_{7}\sin(s_{5}))a_{4}\big{)}V_{1}$
$\displaystyle+\big{(}\sin(s_{5})a_{1}+\cos(s_{5})a_{2}+(\zeta
s_{7}\cos(s_{5})+\zeta s_{6}\sin(s_{5}))a_{4}\big{)}V_{2}$
$\displaystyle+\big{(}s_{6}a_{1}+s_{7}a_{2}+a_{3}+\frac{\zeta}{2}(s_{7}^{2}+s_{6}^{2})a_{4}\big{)}V_{3}+a_{4}V_{4}$
$\displaystyle+\bigg{(}\big{(}-s_{2}\cos(s_{5})+s_{1}\sin(s_{5})\big{)}a_{1}+\big{(}s_{2}\sin(s_{5})+s_{1}\cos(s_{5})\big{)}a_{2}$
$\displaystyle+\big{(}(s_{2}\sin(s_{5})+s_{1}\cos(s_{5}))\zeta
s_{7}+(-s_{2}\cos(s_{5})+s_{1}\sin(s_{5}))\zeta
s_{6}\big{)}a_{4}+a_{5}-s_{7}a_{6}+s_{6}a_{7}\bigg{)}V_{5}$
$\displaystyle+\bigg{(}-s_{3}\cos(s_{5})a_{1}+s_{3}\sin(s_{5})a_{2}+\big{(}\zeta
s_{3}s_{7}\sin(s_{5})-\zeta s_{3}s_{6}\cos(s_{5})-\zeta
s_{1}\big{)}a_{4}+\cos(s_{5})a_{6}-\sin(s_{5})a_{7}\bigg{)}V_{6}$
$\displaystyle+\bigg{(}-s_{3}\sin(s_{5})a_{1}-s_{3}\cos(s_{5})a_{2}+\big{(}-\zeta
s_{3}s_{7}\cos(s_{5})-\zeta s_{3}s_{6}\sin(s_{5})-\zeta
s_{2}\big{)}a_{4}+\sin(s_{5})a_{6}+\cos(s_{5})a_{7}\bigg{)}V_{7}.$
Now, we can simplify $V$ as follows:
If $a_{4}\neq 0$, we can make the coefficients of $V_{1}$, $V_{2}$, $V_{6}$ and $V_{7}$
vanish by $F_{1}^{s_{1}}$, $F_{2}^{s_{2}}$, $F_{6}^{s_{6}}$ and
$F_{7}^{s_{7}}$, setting $s_{1}=\frac{a_{6}}{\zeta a_{4}}$,
$s_{2}=\frac{a_{7}}{\zeta a_{4}}$, $s_{6}=-\frac{a_{1}}{\zeta a_{4}}$ and
$s_{7}=-\frac{a_{2}}{\zeta a_{4}}$, respectively. Scaling $V$ if necessary, we
can assume that $a_{4}=1$. So, $V$ is reduced to the case (6).
If $a_{4}=0$ and $a_{1}\neq 0$ then we can make the coefficients of $V_{2}$,
$V_{5}$ and $V_{6}$ vanish by $F_{5}^{s_{5}}$, $F_{3}^{s_{3}}$ and
$F_{2}^{s_{2}}$. By setting $s_{5}=-\arctan(\frac{a_{2}}{a_{1}})$,
$s_{3}=\frac{a_{6}}{a_{1}}$ and $s_{2}=\frac{a_{5}}{a_{1}}$, respectively.
Scaling $V$ if necessary, we can assume that $a_{1}=1$. So, $V$ is reduced to
the case (5).
If $a_{4}=a_{1}=0$ and $a_{2}\neq 0$ then we can make the coefficients of
$V_{3}$, $V_{7}$ and $V_{5}$ vanish by $F_{7}^{s_{7}}$, $F_{3}^{s_{3}}$ and
$F_{1}^{s_{1}}$. By setting $s_{7}=-\frac{a_{3}}{a_{2}}$,
$s_{3}=\frac{a_{7}}{a_{2}}$ and $s_{1}=-\frac{a_{5}}{a_{2}}$, respectively.
Scaling $V$ if necessary, we can assume that $a_{2}=1$. So, $V$ is reduced to
the case (1).
If $a_{1}=a_{2}=a_{4}=0$ and $a_{6}\neq 0$ then we can make the coefficients
of $V_{7}$ and $V_{5}$ vanish by $F_{5}^{s_{5}}$ and $F_{7}^{s_{7}}$. By
setting $s_{5}=-\arctan(\frac{a_{7}}{a_{6}})$ and $s_{7}=\frac{a_{5}}{a_{6}}$,
respectively. Scaling $V$ if necessary, we can assume that $a_{6}=1$. So, $V$
is reduced to the case (3).
If $a_{1}=a_{2}=a_{4}=a_{6}=0$ and $a_{7}\neq 0$ then we can make the
coefficients of $V_{5}$ vanish by $F_{6}^{s_{6}}$. By setting
$s_{6}=-\frac{a_{5}}{a_{7}}$, respectively. Scaling $V$ if necessary, we can
assume that $a_{7}=1$. So, $V$ is reduced to the case (4).
If $a_{1}=a_{2}=a_{4}=a_{6}=a_{7}=0$ then $V$ is reduced to the case (2).
$\;\;\;\;\;\;\;\Box$
### 5.2 Two-dimensional optimal system
The next step is constructing the two-dimensional optimal system, i.e.,
classification of two-dimensional subalgebras of ${g}$. This process is
performed by selecting one of the vector fields as stated in Theorem 6. Let
us consider $V_{1}$ (or $V_{i},i=2,3,4,5,6,7$). Corresponding to it, a vector
field $V=a_{1}V_{1}+\cdots+a_{7}V_{7}$, where the $a_{i}$'s are real constants,
is chosen, so we must have
$\displaystyle[V_{1},V]=\vartheta V_{1}+\varpi V.$ (5.20)
Equation (5.20) leads us to the system
$\displaystyle C^{i}_{jk}\alpha_{j}a_{k}=\vartheta
a_{i}+\varpi\alpha_{i}\hskip 56.9055pt(i=1,\cdots,7).$ (5.21)
The solutions of the system (5.21) give one of the two-dimensional generators,
and the second generator is $V_{1}$ or, if selected, $V_{i},i=2,3,4,5,6,7$.
After the construction of all two-dimensional subalgebras, for every vector
field of Theorem 6, they need to be simplified by the action of the adjoint
matrices in a manner analogous to that of the one-dimensional optimal system.
Thus the two-dimensional optimal system of ${g}$ consists of the following
combinations of members of ${g}$:
$\displaystyle<$
$\displaystyle\alpha_{1}V_{1}+\alpha_{2}V_{2},\beta_{1}V_{3}+\beta_{2}V_{4}+\beta_{3}V_{5}>,$
$\displaystyle<$
$\displaystyle\alpha_{1}V_{1}+\alpha_{2}V_{3},\beta_{1}V_{2}+\beta_{2}V_{4}+\beta_{3}V_{5}>,$
$\displaystyle<$ $\displaystyle V_{1}+\alpha
V_{3},V_{7}>\;\;\;,\;\;\;<V_{1}+\alpha V_{2},V_{7}>$ $\displaystyle<$
$\displaystyle
V_{1}+\alpha_{1}V_{6},\beta_{1}V_{4}+\beta_{2}V_{5}>\;\;\;\,;\;\;\;<V_{4}+\alpha_{1}V_{5}+\alpha_{2}V_{6},V_{1}>.$
where $\alpha_{i},\ i=1,2$ and $\beta_{j},\ j=1,..,3$ are real numbers and
$\alpha$ is a real nonzero constant. All of these sub-algebras are abelian.
### 5.3 Three-dimensional optimal system
This system can be developed by the method of expansion of the two-dimensional
optimal system. For this, take any two-dimensional subalgebra of (5.2); let us
consider the first two vector fields of (5.2) and call them $Y_{1}$ and
$Y_{2}$. Thus, we have a subalgebra with basis $\\{Y_{1},Y_{2}\\}$; we then seek a
vector field $Y=a_{1}\textbf{V}_{1}+\cdots+a_{7}\textbf{V}_{7}$, where the
$a_{i}$'s are real constants, such that the triple
$\\{Y_{1},Y_{2},Y\\}$ generates a basis of a three-dimensional algebra. For
that, it is necessary and sufficient that the vector field $Y$ satisfies the
equations
$\displaystyle[Y_{1},Y]=\vartheta_{1}Y+\varpi_{1}Y_{1}+\rho_{1}Y_{2},\qquad[Y_{2},Y]=\vartheta_{2}Y+\varpi_{2}Y_{1}+\rho_{2}Y_{2},$
(5.23)
and following from (5.23), we obtain the system
$\displaystyle
C^{i}_{jk}\beta_{r}^{j}a_{k}=\vartheta_{1}a_{i}+\varpi_{1}\beta_{r}^{i}+\rho_{1}\beta_{s}^{i},\qquad
C^{i}_{jk}\beta_{s}^{j}a_{k}=\vartheta_{2}a_{i}+\varpi_{2}\beta_{r}^{i}+\rho_{2}\beta_{s}^{i}.$
(5.24)
The solutions of system (5.24) that are linearly independent of $\\{Y_{1},Y_{2}\\}$
give a three-dimensional subalgebra. This process is repeated for the other
pairs of vector fields in (5.2).
By performing the above procedure for all pairs of vector fields in
(5.2), we conclude that $Y=\beta_{1}Y_{1}+\beta_{2}Y_{2}$. By a suitable
change of the basis of ${g}$, we can assume that $Y=0$, so that no
$3$-dimensional subalgebra arises. Thus, we infer that:
###### Corollary 7.
The K-S equation Lie algebra ${g}$ has no three-dimensional Lie subalgebra.
## 6 Similarity Reduction of K-S equation
The K-S equation (1.1) is expressed in the coordinates $(x,y,t,u)$, so we
ought to search for this equation’s form in specific coordinates in order to
reduce it. Those coordinates will be constructed by looking for independent
invariants $(z,w,r)$ corresponding to the infinitesimal symmetry generator.
Hence, by applying the chain rule, the expression of the equation in the new
coordinate leads to the reduced equation.
We can now compute the invariants associated with the symmetry operators. They
can be obtained by integrating the characteristic equations. For example for
the operator, $H_{5}:=V_{6}=t\frac{\partial}{\partial
x}-\frac{1}{2\lambda}x\frac{\partial}{\partial h}$ this means:
$\displaystyle\frac{dx}{t}=\frac{dy}{0}=\frac{dt}{0}=\frac{-2\lambda dh}{x}$
(6.25)
The corresponding invariants are as follows: $z=y,\ w=t,\
r=h+\frac{x^{2}}{4\lambda t}$.
Taking into account the last invariant, we assume a similarity solution of the
form $h=f(z,w)-\frac{x^{2}}{4\lambda t}$ and substitute it into the K-S equation to
determine the form of the function $f(z,w)$. We obtain that $f(z,w)$ has to be
a solution of the following differential equation:
$\displaystyle f_{w}+\nu f_{zz}+kf_{4z}-\lambda f_{z}^{2}-\frac{\nu}{2\lambda
w}=0$ (6.26)
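This reduction can be reproduced symbolically. The sympy sketch below is an illustrative addition of ours: it substitutes the similarity ansatz into the bracketed operator of (2.9) (with the symbols $\nu$, $k$, $\lambda$ of the text) and, after renaming $y\to z$, $t\to w$, recovers (6.26).

```python
# Substitute h = f(y, t) - x**2/(4*lambda*t) into the operator of (2.9) and
# compare with the reduced equation (6.26).
import sympy as sp

x, y, t, z, w, nu, k, lam = sp.symbols('x y t z w nu k lambda')
f = sp.Function('f')
h = f(y, t) - x**2/(4*lam*t)

KS = (sp.diff(h, t) + nu*(sp.diff(h, x, 2) + sp.diff(h, y, 2))
      + k*(sp.diff(h, x, 4) + 2*sp.diff(h, x, 2, y, 2) + sp.diff(h, y, 4))
      - lam*(sp.diff(h, x)**2 + sp.diff(h, y)**2))

reduced = (sp.diff(f(z, w), w) + nu*sp.diff(f(z, w), z, 2)
           + k*sp.diff(f(z, w), z, 4) - lam*sp.diff(f(z, w), z)**2
           - nu/(2*lam*w))
print(sp.simplify(KS.subs({y: z, t: w}) - reduced))   # prints 0
```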
Having determined the infinitesimals, the Lie invariants $z_{j}$,
$w_{j}$, $r_{j}$ and the similarity solutions $h_{j}$ are listed in Table 3. In Table 4 we list the reduced
forms of the K-S equation corresponding to the infinitesimal symmetries.
Table 3: Lie Invariants and Similarity Solutions $\displaystyle\begin{array}[]{l | l l l l l l l}\hline\cr J&\hskip 14.22636ptH_{j}&z_{j}&w_{j}&r_{j}&h_{j}\\\ \hline\cr 1&\hskip 14.22636pt{\bf V_{1}}&y&t&h&f(z,w)\\\ 2&\hskip 14.22636pt{\bf V_{2}}&x&t&h&f(z,w)\\\ 3&\hskip 14.22636pt{\bf V_{3}}&x&y&h&f(z,w)\\\ 4&\hskip 14.22636pt{\bf V_{5}}&x^{2}+y^{2}&t&h&f(z,w)\\\ 5&\hskip 14.22636pt{\bf V_{6}}&y&t&h+\frac{x^{2}}{4\lambda t}&f(z,w)-\frac{x^{2}}{4\lambda t}\\\ 6&\hskip 14.22636pt{\bf V_{7}}&x&t&h+\frac{y^{2}}{4\lambda t}&f(z,w)-\frac{y^{2}}{4\lambda t}\\\ 7&\hskip 5.69046pt{\bf V_{2}+V_{6}}&t&y-\frac{x}{t}&h+\frac{x^{2}}{4\lambda t}&f(z,w)-\frac{x^{2}}{4\lambda t}\\\ 8&\hskip 5.69046pt{\bf V_{3}+V_{5}}&x^{2}+y^{2}&t-\arctan(\frac{x}{y})&h&f(z,w)\\\ 9&\hskip 8.5359pt{\bf V_{3}+V_{6}}&y&-2x+t^{2}&h+\frac{3tx-t^{3}}{6\lambda}&f(z,w)-\frac{3tx-t^{3}}{6\lambda}\\\ 10&\hskip 5.69046pt{\bf V_{3}+V_{7}}&x&-2y+t^{2}&h+\frac{3ty-t^{3}}{6\lambda}&f(z,w)-\frac{3ty-t^{3}}{6\lambda}\\\ 11&\hskip 5.69046pt{\bf V_{1}+V_{3}+V_{7}}&t-x&y+\frac{1}{2}x^{2}-tx&h+\frac{x^{3}-3x^{2}t+6xy}{12\lambda}&f(z,w)-\frac{x^{3}-3x^{2}t+6xy}{12\lambda}\\\ 12&\hskip 5.69046pt{\bf V_{3}+V_{4}+V_{5}}&x^{2}+y^{2}&t-\arctan(\frac{x}{y})&h-\arctan(\frac{x}{y})&f(z,w)+\arctan(\frac{x}{y})\\\ \\\ \hline\cr\end{array}$ Table 4: Reduced equations corresponding to infinitesimal symmetries $\displaystyle\begin{array}[]{l | l l l l l l l}\hline\cr J&\hskip 19.91684pt\mbox{Similarity Reduced Equations}\\\ \hline\cr 1&\hskip 25.6073ptf_{w}+\nu f_{zz}+kf_{4z}-\lambda f_{z}^{2}=0\\\ 2&\hskip 19.91684ptf_{w}+\nu f_{zz}+kf_{4z}-\lambda f_{z}^{2}=0\\\ 3&\hskip 19.91684pt\nu(f_{zz}+f_{ww})+k(f_{4z}+2f_{zzww}+f_{4w})-\lambda(f_{z}^{2}+f_{w}^{2})=0\\\ 4&\hskip 19.91684ptf_{w}+4z\nu f_{zz}+4\nu f_{z}+16kz^{2}f_{4z}+64zf_{zzz}+32kf_{zz}-4\lambda zf_{z}^{2}=0\\\ 5&\hskip 19.91684ptf_{w}+\nu f_{zz}+kf_{4z}-\lambda f_{z}^{2}-\frac{\nu}{2\lambda w}=0\\\ 6&\hskip 19.91684ptf_{w}+\nu f_{zz}+kf_{4z}-\lambda f_{z}^{2}-\frac{\nu}{2\lambda t}=0\\\ 7&\hskip 19.91684pt2\lambda z^{4}f_{z}+2\nu\lambda(z^{2}+z^{4})f_{ww}+2k\lambda(z+1)^{2}f_{4w}-2\lambda^{2}(z^{2}+z^{4})f_{w}^{2}=0\\\ 8&\hskip 19.91684pt(4k+\nu z)f_{ww}-\lambda zf_{w}^{2}-4\lambda z^{3}f_{z}^{2}+64kz^{3}f_{3z}+16kz^{4}f_{4z}+(4\nu z^{3}+32kz^{2})f_{zz}+4\nu z^{2}f_{z}+kf_{4w}+z^{2}f_{w}+8kz^{2}f_{zzww}=0\\\ 9&\hskip 19.91684pt16\nu\lambda f_{ww}-4\nu\lambda f_{zz}-64k\lambda f_{4w}-32k\lambda f_{zzww}-4k\lambda f_{4z}+16\lambda^{2}f_{w}^{2}+4\lambda^{2}f_{z}^{2}-w=0\\\ 10&\hskip 19.91684pt-4\nu\lambda f_{zz}-16\nu\lambda f_{ww}-4k\lambda f_{4z}-32k\lambda f_{zzww}-64k\lambda f_{4w}+4\lambda^{2}f_{z}^{2}+16\lambda^{2}f_{w}^{2}-w=0\\\ 11&\hskip 19.91684pt32k\lambda(z^{2}+z^{4})f_{4w}+(96k\lambda z^{2}+32k\lambda)f_{3w}+(16z^{2}+48k\lambda+16\nu\lambda)f_{ww}+(96k\lambda z^{2}+32k\lambda)f_{zzww}\\\ &\hskip 19.91684pt+64k\lambda(z+z^{3})f_{zwww}+16\nu\lambda f_{zz}+(16\lambda-8\lambda w)f_{z}-16\lambda^{2}(z^{2}+1)f_{w}^{2}+(16\nu\lambda-16\lambda zw)f_{w}\\\ &\hskip 19.91684pt-32\lambda^{2}zf_{w}f_{z}+192k\lambda zf_{zww}+64k\lambda zf_{zzzw}+32\nu\lambda zf_{zw}+8\nu z+4w^{2}=0\\\ 12&\hskip 19.91684pt(4\nu z^{3}+32kz^{2})f_{zz}+(z^{2}+2\lambda z)f_{w}+16kz^{4}f_{4z}+(\nu z+4k)f_{ww}+kf_{4w}-4\lambda z^{3}f_{z}^{2}+64kz^{3}f_{3z}\\\ &\hskip 19.91684pt+4\nu z^{2}f_{z}+8kz^{2}f_{zzww}-\lambda zf_{w}^{2}-\lambda z=0\\\ \\\ \hline\cr\end{array}$
## 7 Nonclassical Symmetries of K-S Equation
In this section, we will apply the so-called nonclassical symmetry method [2].
Besides the classical symmetries, the nonclassical symmetry method can be used
to find some other solutions for a system of PDEs and ODEs. The nonclassical
symmetry method has become the focus of a lot of research and of many
applications to physically important partial differential equations, as in [6,
11, 5, 1]. Here, we follow the method used by Cai et al. [3] for
obtaining the nonclassical symmetries of the Burgers-Fisher equation, based on the
compatibility of evolution equations. For the nonclassical method, we
must add the invariant surface condition to the given equation, and then
apply the classical symmetry method. This can also be conveniently written as:
$\displaystyle V^{(4)}\Delta_{1}|_{\Delta_{1}=0,\Delta_{2}=0}=0,$ (7.29)
where $V$ is defined in (2.6) and $\Delta_{1}$ and $\Delta_{2}$ are given as:
$\displaystyle\Delta_{1}:=h_{t}+\nu(h_{2x}+h_{2y})+\kappa(h_{4x}+2h_{(2x)(2y)}+h_{4y})-\lambda(h_{x}^{2}+h_{y}^{2}),\hskip
28.45274pt\Delta_{2}:=\varphi-\xi^{1}h_{x}-\xi^{2}h_{y}-\xi^{3}h_{t}$
Without loss of generality we choose $\xi^{3}=1$. In this case using
$\Delta_{2}$ we have:
$\displaystyle h_{t}=\varphi-\xi^{1}h_{x}-\xi^{2}h_{y}.$ (7.30)
First, total differentiation $D_{t}$ of the equation gives
$\displaystyle D_{t}(h_{t})=D_{t}(-\nu h_{2x}-\nu
h_{2y}-kh_{4x}-2kh_{(2x)(2y)}-kh_{4y}+\lambda h_{x}^{2}+\lambda h_{y}^{2})$
$\displaystyle D_{t}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})$ $\displaystyle=$
$\displaystyle-\nu h_{2xt}-\nu
h_{2yt}-kh_{4xt}-2kh_{(2x)(2yt)}-kh_{4yt}+2\lambda h_{x}h_{xt}+2\lambda
h_{y}h_{yt}$ $\displaystyle=$ $\displaystyle-\nu
D_{xx}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})-\nu
D_{yy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})-kD_{xxxx}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})$
$\displaystyle-$ $\displaystyle
kD_{yyyy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})-2kD_{xxyy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+2\lambda
h_{x}D_{x}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})$ $\displaystyle+$ $\displaystyle
2\lambda h_{y}D_{y}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}).$
Substituting $\xi^{1}u_{xt}$ and $\xi^{2}u_{yt}$ to both sides, we can get
$\displaystyle\varphi^{t}$ $\displaystyle=$
$\displaystyle-\nu\varphi^{xx}-\nu\varphi^{yy}-k\varphi^{xxxx}-2k\varphi^{xxyy}-k\varphi^{yyyy}+2\lambda
h_{x}\varphi^{x}+2\lambda h_{y}\varphi^{y}+\xi^{1}u_{xt}+\xi^{2}u_{yt}$
$\displaystyle+$
$\displaystyle\xi^{1}u_{xxx}+\xi^{2}u_{xxy}+\xi^{1}u_{yyx}+\xi^{2}u_{yyy}+k\xi^{1}u_{xxxxx}+k\xi^{2}u_{xxxxy}+2k\xi^{1}u_{xxxyy}$
$\displaystyle+$
$\displaystyle\xi^{2}u_{xxyyy}-2\lambda\xi^{1}u_{x}u_{xx}-2\lambda\xi^{2}u_{x}u_{xy}-2\lambda
u_{y}\xi^{1}u_{xy}-2\lambda u_{y}\xi^{2}u_{yy}$
By virtue of
$\displaystyle D_{x}(h_{t})=D_{x}(-\nu h_{2x}-\nu
h_{2y}-kh_{4x}-2kh_{(2x)(2y)}-kh_{4y}+\lambda h_{x}^{2}+\lambda h_{y}^{2})$
$\displaystyle D_{y}(h_{t})=D_{y}(-\nu h_{2x}-\nu
h_{2y}-kh_{4x}-2kh_{(2x)(2y)}-kh_{4y}+\lambda h_{x}^{2}+\lambda h_{y}^{2})$
gives
$\displaystyle h_{xt}$ $\displaystyle=-\nu h_{xxx}-\nu
h_{2yx}-kh_{xxxxx}-2kh_{(2x)(2yx)}-kh_{4yx}+2\lambda h_{x}h_{xx}+2\lambda
h_{y}h_{yx}$ $\displaystyle h_{yt}$ $\displaystyle=-\nu h_{2xy}-\nu
h_{yyy}-kh_{4xy}-2kh_{(2x)(yyy)}-kh_{yyyyy}+2\lambda h_{x}h_{xy}+2\lambda
h_{y}h_{yy}$
so it gives the governing equation
$\displaystyle\varphi^{t}+\varphi^{2x}+\varphi^{2y}+\varphi^{4x}+2\varphi^{(2x)(2y)}+\varphi^{4y}-2\lambda(\varphi^{x}u_{x}+\varphi^{y}u_{y})=0$
where $\varphi^{t}$, $\varphi^{x}$ are given by
$\displaystyle\varphi^{t}$ $\displaystyle=$ $\displaystyle
D_{t}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xt}+\xi^{2}u_{yt}+\xi^{3}u_{tt}=D_{t}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xt}+\xi^{2}u_{yt},$
$\displaystyle\varphi^{x}$ $\displaystyle=$ $\displaystyle
D_{x}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xx}+\xi^{2}u_{xy}+\xi^{3}u_{xt}=D_{x}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xx}+\xi^{2}u_{xy},$
$\displaystyle\varphi^{y}$ $\displaystyle=$ $\displaystyle
D_{y}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xy}+\xi^{2}u_{yy}+\xi^{3}u_{ty}=D_{y}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xy}+\xi^{2}u_{yy},$
$\displaystyle\varphi^{xx}$ $\displaystyle=$ $\displaystyle
D_{xx}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xxx}+\xi^{2}u_{xxy}+\xi^{3}u_{xxt}=D_{xx}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xxx}+\xi^{2}u_{xxy},$
$\displaystyle\varphi^{yy}$ $\displaystyle=$ $\displaystyle
D_{yy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xyy}+\xi^{2}u_{yyy}+\xi^{3}u_{tyy}=D_{yy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xyy}+\xi^{2}u_{yyy},$
$\displaystyle\varphi^{xxxx}$ $\displaystyle=$ $\displaystyle
D_{xxxx}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xxxxx}+\xi^{2}u_{yxxxx}+\xi^{3}u_{txxxx}$
$\displaystyle=$ $\displaystyle
D_{xxxx}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xxxxx}+\xi^{2}u_{yxxxx},$
$\displaystyle\varphi^{xxyy}$ $\displaystyle=$ $\displaystyle
D_{xxyy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xxxyy}+\xi^{2}u_{xxyyy}+\xi^{3}u_{xxyyt}$
$\displaystyle=$ $\displaystyle
D_{xxyy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xxxyy}+\xi^{2}u_{xxyyy},$
$\displaystyle\varphi^{yyyy}$ $\displaystyle=$ $\displaystyle
D_{yyyy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y}-\xi^{3}u_{t})+\xi^{1}u_{xyyyy}+\xi^{2}u_{yyyyy}+\xi^{3}u_{tyyyy}$
$\displaystyle=$ $\displaystyle
D_{yyyy}(\varphi-\xi^{1}u_{x}-\xi^{2}u_{y})+\xi^{1}u_{xyyyy}+\xi^{2}u_{yyyyy}.$
Substituting them into the governing equation, we can get the determining
equations for the symmetries of the K-S equation. Substituting $\xi^{3}=1$
into the determining equations we obtain the determining equations of the
nonclassical symmetries of the original equation (1.1). Solving the system
obtained by this procedure, the only solutions we found were exactly the
solution obtained through the classical symmetry approach (theorem 1). This
means that no supplementary symmetries, of non-classical type, are specific
for our model.
## Conclusion
In this paper, by applying the criterion of invariance of the equation under
the prolonged infinitesimal generators, we have found the most general
Lie point symmetry group of the K-S equation. Also, we have constructed the
optimal system of one-dimensional subalgebras of the K-S equation. The latter
provides a preliminary classification of the group-invariant solutions. The Lie
invariants and similarity reduced equations corresponding to the infinitesimal
symmetries are obtained. By applying the nonclassical symmetry method to the
K-S model, we concluded that the analyzed model does not admit supplementary
symmetries of nonclassical type. Using this procedure, only the classical Lie
operators were generated.
## References
* [1] N. B$\hat{\mbox{i}}$l$\breve{\mbox{a}}$, J. Niesen, On a new procedure for finding nonclassical symmetries, Journal of Symbolic Computation 38 (2004) 1523-1533.
* [2] GW. Bluman, JD. Cole, The general similarity solutions of the heat equation, Journal of Mathematics and Mechanics 18(1969) 1025-1042.
* [3] G. Cai, Y. Wang, F. Zhang, Nonclassical symmetries and group invariant solutions of Burgers-Fisher equations, World Journal of Modelling and Simulation 3 (2007) No. 4, pp. 305-309.
* [4] R. Cimpoiasu, V. Cimpoiasu, R. Constantinescu, Classical and Non-classical Symmetries for the 2D-Kuramoto-Sivanshsky Model, Physica AUC, 18(2008), 214-218.
* [5] PA. Clarkson, EL. Mansfield, Algorithms for the nonclassical method of symmetry reductions, SIAM Journal on Applied Mathematics 55(1994) 1693-1719.
* [6] PA. Clarkson, Nonclassical symmetry reductions of the Boussinesq equation, Chaos, Solitons, Fractals 5(1995) 2261-2301.
* [7] J. L. Hansen and T. Bohr, Fractal tracer distributions in turbulent field theories, arXiv:chao-dyn/9709008v1.
* [8] Q. Hong-Ji, J. Yong-Hao, C. Chuan-Fu, H. Li-Hua, Y. Kui, S. Jian-Da, Dynamic Scaling Behaviour in (2+1)-Dimensional Kuramoto-Sivashinsky Model, CHIN.PHYS.LETT. Vol. 20, 5 (2003) 622-625.
* [9] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence, Springer Verlag, 1984.
* [10] A. Kushner, V. Lychagin, V. Rubstov, Contact geometry and nonlinear differential equations, Cambridge University Press, Cambridge 2007.
* [11] D. Levi D, P. Winternitz, Nonclassical symmetry reduction: example of the Boussinesq equation, Journal of Physics A 22(1989) 2915-2924.
* [12] S. Lie, On integration of a class of linear partial differential equations by means of definite integrals, Arch. for Math.6, 328 (1881). translation by N.H. Ibragimov.
* [13] P.J. Olver, Applications of Lie Groups to Differential Equations, New York Springer, 1986.
* [14] P.J. Olver, Equivalence, Invariants and Symmetry, Cambridge University Press, 1995.
* [15] L. V. Ovsiannikov, Group Analysis of Differential Equations, Academic Press, New York, 1982.
* [16] G. I. Sivashinsky, Acta Astronautica 4(1977), 1177-1206.
|
arxiv-papers
| 2011-05-03T16:33:12 |
2024-09-04T02:49:18.571546
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mehdi Nadjafikhah, Fatemeh Ahangari",
"submitter": "Mehdi Nadjafikhah",
"url": "https://arxiv.org/abs/1105.0629"
}
|
1105.0729
|
# Low Mach number limit for the multi-dimensional Full magnetohydrodynamic
equations
Song Jiang LCP, Institute of Applied Physics and Computational Mathematics,
P.O. Box 8009, Beijing 100088, P.R. China jiang@iapcm.ac.cn, Qiangchang Ju
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009-28,
Beijing 100088, P.R. China qiangchang_ju@yahoo.com and Fucai Li Department
of Mathematics, Nanjing University, Nanjing 210093, P.R. China fli@nju.edu.cn
###### Abstract.
The low Mach number limit for the multi-dimensional full magnetohydrodynamic
equations, in which the effect of thermal conduction is taken into account, is
rigorously justified in the framework of classical solutions with small
density and temperature variations. Moreover, we show that for sufficiently
small Mach number, the compressible magnetohydrodynamic equations admit a
smooth solution on the time interval where the smooth solution of the
incompressible magnetohydrodynamic equations exists. In addition, the low Mach
number limit for the ideal magnetohydrodynamic equations with small entropy
variation is also investigated. The convergence rates are obtained in both
cases.
###### Key words and phrases:
Full MHD equations, smooth solution, low Mach number limit
###### 2000 Mathematics Subject Classification:
76W05, 35B40
## 1\. Introduction
The magnetohydrodynamic (MHD) equations govern the motion of compressible
quasi-neutrally ionized fluids under the influence of electromagnetic fields.
The full three-dimensional compressible MHD equations read as (see, e.g., [12,
15, 22, 23])
$\displaystyle\partial_{t}\rho+{\rm div}(\rho{\mathbf{u}})=0,$ (1.1)
$\displaystyle\partial_{t}(\rho{\mathbf{u}})+{\rm
div}\left(\rho{\mathbf{u}}\otimes{\mathbf{u}}\right)+{\nabla
p}=\frac{1}{4\pi}(\nabla\times\mathbf{H})\times\mathbf{H}+{\rm div}\Psi,$
(1.2)
$\displaystyle\partial_{t}\mathbf{H}-\nabla\times({\mathbf{u}}\times\mathbf{H})=-\nabla\times(\nu\nabla\times\mathbf{H}),\quad{\rm
div}\mathbf{H}=0,$ (1.3) $\displaystyle\partial_{t}{\mathcal{E}}+{\rm
div}\left({\mathbf{u}}({\mathcal{E}}^{\prime}+p)\right)=\frac{1}{4\pi}{\rm
div}(({\mathbf{u}}\times\mathbf{H})\times\mathbf{H})$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\,\,+{\rm
div}\Big{(}\frac{\nu}{4\pi}\mathbf{H}\times(\nabla\times\mathbf{H})+{\mathbf{u}}\Psi+\kappa\nabla\theta\Big{)}.$
(1.4)
Here $x\in\Omega$, and $\Omega$ is assumed to be the whole $\mathbb{R}^{3}$ or
the torus $\mathbb{T}^{3}$. The unknown $\rho$ denotes the density,
${\mathbf{u}}=(u_{1},u_{2},u_{3})\in{\mathbb{R}}^{3}$ the velocity,
$\mathbf{H}=(H_{1},H_{2},H_{3})\in{\mathbb{R}}^{3}$ the magnetic field, and
$\theta$ the temperature, respectively; $\Psi$ is the viscous stress tensor
given by
$\Psi=2\mu\mathbb{D}({\mathbf{u}})+\lambda{\rm
div}{\mathbf{u}}\;\mathbf{I}_{3}$
with
$\mathbb{D}({\mathbf{u}})=(\nabla{\mathbf{u}}+\nabla{\mathbf{u}}^{\top})/2$,
$\mathbf{I}_{3}$ being the $3\times 3$ identity matrix, and
$\nabla{\mathbf{u}}^{\top}$ the transpose of the matrix $\nabla{\mathbf{u}}$;
${\mathcal{E}}$ is the total energy given by
${\mathcal{E}}={\mathcal{E}}^{\prime}+|\mathbf{H}|^{2}/({8\pi})$ and
${\mathcal{E}}^{\prime}=\rho\left(e+|{\mathbf{u}}|^{2}/2\right)$ with $e$
being the internal energy, $\rho|{\mathbf{u}}|^{2}/2$ the kinetic energy, and
$|\mathbf{H}|^{2}/({8\pi})$ the magnetic energy. The viscosity coefficients
$\lambda$ and $\mu$ of the flow satisfy $2\mu+3\lambda>0$ and $\mu>0$; $\nu>0$
is the magnetic diffusion coefficient of the magnetic field, and $\kappa>0$ is
the heat conductivity. For simplicity, we assume that $\mu,\lambda,\nu$ and
$\kappa$ are constants. The equations of state $p=p(\rho,\theta)$ and
$e=e(\rho,\theta)$ relate the pressure $p$ and the internal energy $e$ to the
density $\rho$ and the temperature $\theta$ of the flow.
The MHD equations have attracted much attention from physicists and
mathematicians because of their physical importance, complexity, rich
phenomena, and mathematical challenges; see, for example, [2, 4, 5, 12, 6, 8,
9, 11, 23, 13, 31] and the references cited therein. One of the important
topics on the equations (1.1)–(1.4) is the study of their low Mach number
limit. For the
isentropic MHD equations, the low Mach number limit has been rigorously proved
in [20, 14, 16, 17]. Nevertheless, it is more significant and difficult to
study the limit for the non-isentropic models from both physical and
mathematical points of view.
The main purpose of this paper is to present the rigorous justification of the
low Mach number limit for the full MHD equations (1.1)-(1.4) in the framework
of classical solutions.
Now, we rewrite the energy equation (1.4) in the form of the internal energy.
Multiplying (1.2) by ${\mathbf{u}}$ and (1.3) by $\mathbf{H}/({4\pi})$, and
summing them together, we obtain
$\displaystyle\partial_{t}\big{(}\frac{1}{2}\rho|{\mathbf{u}}|^{2}+\frac{1}{8\pi}|\mathbf{H}|^{2}\big{)}+\frac{1}{2}{\rm
div}\Big{(}\rho|{\mathbf{u}}|^{2}{\mathbf{u}}\Big{)}+\nabla
p\cdot{\mathbf{u}}$ $\displaystyle\quad={\rm
div}\Psi\cdot{\mathbf{u}}+\frac{1}{4\pi}(\nabla\times\mathbf{H})\times\mathbf{H}\cdot{\mathbf{u}}+\frac{1}{4\pi}\nabla\times({\mathbf{u}}\times\mathbf{H})\cdot\mathbf{H}$
$\displaystyle\,\,\qquad-\frac{\nu}{4\pi}\nabla\times(\nabla\times\mathbf{H})\cdot\mathbf{H}.$
(1.5)
Using the identities
${\rm
div}(\mathbf{H}\times(\nabla\times\mathbf{H}))=|\nabla\times\mathbf{H}|^{2}-\nabla\times(\nabla\times\mathbf{H})\cdot\mathbf{H}$
(1.6)
and
${\rm
div}(({\mathbf{u}}\times\mathbf{H})\times\mathbf{H})=(\nabla\times\mathbf{H})\times\mathbf{H}\cdot{\mathbf{u}}+\nabla\times({\mathbf{u}}\times\mathbf{H})\cdot\mathbf{H},$
(1.7)
and subtracting (1.5) from (1.4), we obtain the internal energy equation
$\partial_{t}(\rho e)+{\rm div}(\rho{\mathbf{u}}e)+({\rm
div}{\mathbf{u}})p=\frac{\nu}{4\pi}|\nabla\times\mathbf{H}|^{2}+\Psi:\nabla{\mathbf{u}}+\kappa\Delta\theta,$
(1.8)
where $\Psi:\nabla{\mathbf{u}}$ denotes the scalar product of two matrices:
$\Psi:\nabla{\mathbf{u}}=\sum^{3}_{i,j=1}\frac{\mu}{2}\left(\frac{\partial
u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial
x_{i}}\right)^{2}+\lambda|{\rm
div}{\mathbf{u}}|^{2}=2\mu|\mathbb{D}({\mathbf{u}})|^{2}+\lambda(\mbox{tr}\mathbb{D}({\mathbf{u}}))^{2}.$
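The vector identities (1.6) and (1.7) used above are purely algebraic and can
be checked symbolically. The following minimal sketch (our own illustration,
not part of the original derivation) verifies both with SymPy; the component
functions are generic placeholders:

```python
# Symbolic check of identities (1.6) and (1.7); illustrative sketch only.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
coords = (N.x, N.y, N.z)
H1, H2, H3 = (sp.Function(f'H{i}')(*coords) for i in (1, 2, 3))
u1, u2, u3 = (sp.Function(f'u{i}')(*coords) for i in (1, 2, 3))
H = H1*N.i + H2*N.j + H3*N.k
u = u1*N.i + u2*N.j + u3*N.k

# (1.6): div(H x curl H) = |curl H|^2 - (curl curl H) . H
lhs16 = divergence(H.cross(curl(H)))
rhs16 = curl(H).dot(curl(H)) - curl(curl(H)).dot(H)
print(sp.simplify(sp.expand(lhs16 - rhs16)))  # expected output: 0

# (1.7): div((u x H) x H) = ((curl H) x H) . u + (curl(u x H)) . H
lhs17 = divergence((u.cross(H)).cross(H))
rhs17 = (curl(H).cross(H)).dot(u) + curl(u.cross(H)).dot(H)
print(sp.simplify(sp.expand(lhs17 - rhs17)))  # expected output: 0
```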
In this paper, we shall focus our study on the ionized fluid obeying the
perfect gas relations
$\displaystyle p=\mathfrak{R}\rho\theta,\quad e=c_{V}\theta,$ (1.9)
where the constants $\mathfrak{R},c_{V}\\!>\\!0$ are the gas constant and the
heat capacity at constant volume, respectively. We point out here that our
analysis below can be applied to more general equations of state for $p$ and
$e$ with minor modifications of the arguments.
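For orientation, we record how (1.8) takes the temperature form used below for
the perfect gas (1.9); this short computation is our own rewriting and is not
needed elsewhere. Using the continuity equation (1.1) together with
$e=c_{V}\theta$ and $p=\mathfrak{R}\rho\theta$, one has
$\partial_{t}(\rho e)+{\rm div}(\rho{\mathbf{u}}e)=c_{V}\rho(\partial_{t}\theta+{\mathbf{u}}\cdot\nabla\theta),\qquad({\rm div}{\mathbf{u}})p=\mathfrak{R}\rho\theta\,{\rm div}{\mathbf{u}},$
so that (1.8) becomes
$\rho(\partial_{t}\theta+{\mathbf{u}}\cdot\nabla\theta)+\frac{\mathfrak{R}}{c_{V}}\rho\theta\,{\rm div}{\mathbf{u}}=\frac{1}{c_{V}}\Big(\frac{\nu}{4\pi}|\nabla\times\mathbf{H}|^{2}+\Psi:\nabla{\mathbf{u}}+\kappa\Delta\theta\Big),$
and $\mathfrak{R}/c_{V}=\gamma-1$ is exactly the coefficient that appears in
(1.13) after nondimensionalization.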
To study the low Mach number limit of the system (1.1)–(1.3) and (1.8), we use
its appropriate dimensionless form as follows (see the Appendix for the
details)
$\displaystyle\partial_{t}\rho+{\rm div}(\rho{\mathbf{u}})=0,$ (1.10)
$\displaystyle\rho(\partial_{t}{\mathbf{u}}+{\mathbf{u}}\cdot\nabla{\mathbf{u}})+\frac{\nabla(\rho\theta)}{\epsilon^{2}}=(\nabla\times\mathbf{H})\times\mathbf{H}+{\rm
div}\Psi,$ (1.11)
$\displaystyle\partial_{t}\mathbf{H}-\nabla\times({\mathbf{u}}\times\mathbf{H})=-\nabla\times(\nu\nabla\times\mathbf{H}),\quad{\rm
div}\mathbf{H}=0,$ (1.12)
$\displaystyle\rho(\partial_{t}\theta+{\mathbf{u}}\cdot\nabla\theta)+(\gamma-1)\rho\theta{\rm
div}{\mathbf{u}}=\epsilon^{2}\nu|\nabla\times\mathbf{H}|^{2}+\epsilon^{2}\Psi:\nabla{\mathbf{u}}+\kappa\Delta\theta,$
(1.13)
where $\epsilon=M$ is the Mach number and the coefficients $\mu,\lambda,\nu$
and $\kappa$ are the scaled parameters. $\gamma=1+\mathfrak{R}/c_{V}$ is the
ratio of specific heats. Note that we have kept the same notation and assumed
that the coefficients $\mu,\lambda,\nu$ and $\kappa$ are independent of
$\epsilon$ for simplicity. Also, we have set the Cowling number to one in the
equations (1.10)–(1.13), since it does not create any mathematical
difficulties in our analysis.
We shall study the limit as $\epsilon\to 0$ of the solutions to (1.10)–(1.13).
We further restrict ourselves to small density and temperature variations,
i.e.
$\displaystyle\rho=1+\epsilon q,\quad\theta=1+\epsilon\phi.$ (1.14)
We first give a formal analysis. Putting (1.14) and (1.9) into the system
(1.10)–(1.13), and using the identities
$\displaystyle{\rm curl\,}{\rm curl\,}{\mathbf{H}}=\nabla\,{\rm
div}{\mathbf{H}}-\Delta{\mathbf{H}},$
$\displaystyle\nabla(|{\mathbf{H}}|^{2})=2{\mathbf{H}}\cdot\nabla{\mathbf{H}}+2{\mathbf{H}}\times{\rm
curl\,}{\mathbf{H}},$ (1.15) $\displaystyle{\rm
curl\,}({{\mathbf{u}}}\times{\mathbf{H}})={{\mathbf{u}}}({\rm
div}{\mathbf{H}})-{\mathbf{H}}({\rm
div}{{\mathbf{u}}})+{\mathbf{H}}\cdot\nabla{{\mathbf{u}}}-{{\mathbf{u}}}\cdot\nabla{\mathbf{H}},$
(1.16)
we can rewrite (1.10)–(1.13) as
$\displaystyle\partial_{t}q^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla
q^{\epsilon}+\frac{1}{\epsilon}(1+\epsilon q^{\epsilon}){\rm
div}{\mathbf{u}}^{\epsilon}=0,$ (1.17) $\displaystyle(1+\epsilon
q^{\epsilon})(\partial_{t}{\mathbf{u}}^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla{\mathbf{u}}^{\epsilon})+\frac{1}{\epsilon}\big{[}(1+\epsilon
q^{\epsilon})\nabla\phi^{\epsilon}+(1+\epsilon\phi^{\epsilon})\nabla
q^{\epsilon}\big{]}$
$\displaystyle\qquad\qquad-\mathbf{H}^{\epsilon}\cdot\nabla\mathbf{H}^{\epsilon}+\frac{1}{2}\nabla(|\mathbf{H}^{\epsilon}|^{2})=2\mu{\rm
div}(\mathbb{D}({\mathbf{u}}^{\epsilon}))+\lambda\nabla(\mbox{tr}\mathbb{D}({\mathbf{u}}^{\epsilon})),$
(1.18)
$\displaystyle\partial_{t}\mathbf{H}^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla\mathbf{H}^{\epsilon}+{\rm
div}{\mathbf{u}}^{\epsilon}\mathbf{H}^{\epsilon}-\mathbf{H}^{\epsilon}\cdot\nabla{\mathbf{u}}^{\epsilon}=\nu\Delta\mathbf{H}^{\epsilon},\quad{\rm
div}\mathbf{H}^{\epsilon}=0,$ (1.19) $\displaystyle(1+\epsilon
q^{\epsilon})(\partial_{t}\phi^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla\phi^{\epsilon})+\frac{\gamma-1}{\epsilon}(1+\epsilon
q^{\epsilon})(1+\epsilon\phi^{\epsilon}){\rm div}{\mathbf{u}}^{\epsilon}$
$\displaystyle\qquad\qquad=\kappa\Delta\phi^{\epsilon}+\epsilon(2\mu|\mathbb{D}({\mathbf{u}}^{\epsilon})|^{2}+\lambda(\mbox{tr}\mathbb{D}({\mathbf{u}}^{\epsilon}))^{2})+\nu\epsilon|\nabla\times\mathbf{H}^{\epsilon}|^{2}.$
(1.20)
Here we have added the superscript $\epsilon$ to the unknowns to stress their
dependence on the parameter $\epsilon$. Therefore, the formal limit as
$\epsilon\rightarrow 0$ of (1.17)–(1.19) is the following incompressible MHD
system (supposing that the limits
${{\mathbf{u}}}^{\epsilon}\rightarrow{\mathbf{w}}$ and
${\mathbf{H}}^{\epsilon}\rightarrow{\mathbf{B}}$ exist):
$\displaystyle\partial_{t}{\mathbf{w}}+{\mathbf{w}}\cdot\nabla{\mathbf{w}}+\nabla\pi+\frac{1}{2}\nabla(|{\mathbf{B}}|^{2})-{\mathbf{B}}\cdot\nabla{\mathbf{B}}=\mu\Delta{\mathbf{w}},$
(1.21)
$\displaystyle\partial_{t}{\mathbf{B}}+{\mathbf{w}}\cdot\nabla{\mathbf{B}}-{\mathbf{B}}\cdot\nabla{\mathbf{w}}=\nu\Delta{\mathbf{B}},$
(1.22) $\displaystyle{\rm div}{\mathbf{w}}=0,\;\;\;\;{\rm div}{\mathbf{B}}=0.$
(1.23)
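To see how the singular terms in (1.17)–(1.18) arise, note that under the
ansatz (1.14) the pressure gradient in (1.11) expands as (a short computation
we include for the reader's convenience)
$\frac{\nabla(\rho\theta)}{\epsilon^{2}}=\frac{\nabla\big((1+\epsilon q)(1+\epsilon\phi)\big)}{\epsilon^{2}}=\frac{1}{\epsilon}\big[(1+\epsilon\phi)\nabla q+(1+\epsilon q)\nabla\phi\big],$
which is precisely the $O(1/\epsilon)$ term in (1.18); similarly,
$\partial_{t}\rho+{\rm div}(\rho{\mathbf{u}})=\epsilon(\partial_{t}q+{\mathbf{u}}\cdot\nabla q)+(1+\epsilon q){\rm div}{\mathbf{u}}$
gives (1.17) after division by $\epsilon$.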
In this paper we shall establish the above limit rigorously. Moreover, we
shall show that for sufficiently small Mach number, the compressible flows
admit a smooth solution on the time interval where the smooth solution of the
incompressible MHD equations exists. In addition, we shall also study the low
Mach number limit of the ideal compressible MHD equations (namely,
$\mu=\lambda=\nu=\kappa=0$ in (1.1)–(1.4)) for which small pressure and
entropy variations are assumed. The convergence rates are obtained in both
cases.
We should point out that it remains an open problem to rigorously prove
the low Mach number limit of the ideal or full non-isentropic MHD
equations with large temperature variations in the framework of classical
solutions, even in the whole space case, although the corresponding problems
for the non-isentropic Euler and the full Navier-Stokes equations were solved
in the whole space [25, 1] and in bounded domains [27]. The reason is that
the presence of the magnetic field and its interaction with the hydrodynamic
motion in MHD flows with large oscillations causes serious difficulties. We
cannot directly apply the techniques developed in [27, 25, 1] for the Euler and
Navier-Stokes equations to obtain uniform estimates for the solutions to
the ideal or full non-isentropic MHD equations. In the present paper, however,
we employ an alternative approach, based on energy
estimates for symmetrizable quasilinear hyperbolic-parabolic systems and the
convergence-stability lemma for singular limit problems [30, 3], to deal with
the ideal or full non-isentropic MHD equations. This approach has two
advantages. First, we can rigorously prove the
incompressible limit on the time interval where the limiting system admits a
smooth solution. Second, the estimates we obtain do not
depend on the viscosity and thermodynamic coefficients, in contrast with the
results in [10], where all-time existence of smooth solutions to the full
Navier-Stokes equations was discussed and the estimates depend intimately on
the parameters.
For large entropy variation and general initial data, the authors have
rigorously proved the low Mach number limit of the non-isentropic MHD
equations with zero magnetic diffusivity in [18] by adapting and modifying the
approach developed in [25]. We mention that the coupled singular limit problem
for the full MHD equations in the framework of the so-called variational
solutions was studied recently in [21, 26].
Before ending the introduction, we give the notation used throughout this
paper. We use the letter $C$ to denote various positive constants
independent of $\epsilon$. For convenience, we denote by $H^{l}\equiv
H^{l}(\Omega)$ ($l\in\mathbb{R}$) the standard Sobolev spaces and write
$\|\cdot\|_{l}$ for the standard norm of $H^{l}$ and $\|\cdot\|$ for
$\|\cdot\|_{0}$.
This paper is organized as follows. In Section 2 we state our main results.
The proof for the full MHD equations and the ideal MHD equations is presented
in Section 3 and Section 4, respectively. Finally, an appendix is given to
derive briefly the dimensionless form of the full compressible MHD equations.
## 2\. Main results
We first recall the local existence of strong solutions to the incompressible
MHD equations (1.21)–(1.23) in the domain $\Omega$. The proof can be found in
[7, 28]. Recall here that $\Omega=\mathbb{R}^{3}$ or $\Omega=\mathbb{T}^{3}$.
###### Proposition 2.1 ([7, 28]).
Let $s>3/2+2$. Assume that the initial data
$({\mathbf{w}},{\mathbf{B}})|_{t=0}$ $=({\mathbf{w}}_{0},{\mathbf{B}}_{0})$
satisfy ${\mathbf{w}}_{0}\in{H}^{s},{\mathbf{B}}_{0}\in{H}^{s}$, and ${\rm
div}\,{\mathbf{w}}_{0}=0$, ${\rm div}{\mathbf{B}}_{0}=0$. Then, there exist a
$\hat{T}^{*}\in(0,\infty]$ and a unique solution
$({\mathbf{w}},{\mathbf{B}})\in L^{\infty}(0,\hat{T}^{*};{H}^{s})$ to the
incompressible MHD equations (1.21)–(1.23), and for any $0<T<\hat{T}^{*}$,
$\sup_{0\leq t\leq
T}\\!\big{\\{}||({\mathbf{w}},{\mathbf{B}})(t)||_{H^{s}}+||(\partial_{t}{\mathbf{w}},\partial_{t}{\mathbf{B}})(t)||_{H^{s-2}}+||\nabla\pi(t)||_{H^{s-2}}\big{\\}}\leq
C.$
Denoting
$U^{\epsilon}=(q^{\epsilon},{\mathbf{u}}^{\epsilon},\mathbf{H}^{\epsilon},\phi^{\epsilon})^{\top}$,
we rewrite the system (1.17)–(1.20) in the vector form
$\displaystyle
A_{0}(U^{\epsilon})\partial_{t}U^{\epsilon}+\sum_{j=1}^{3}A_{j}(U^{\epsilon})\partial_{j}U^{\epsilon}=Q(U^{\epsilon}),$
(2.1)
where
$Q(U^{\epsilon})=\big{(}0,{F}({\mathbf{u}}^{\epsilon}),\nu\Delta\mathbf{H}^{\epsilon},\kappa\Delta\phi^{\epsilon}+\epsilon(L({\mathbf{u}}^{\epsilon})+G(\mathbf{H}^{\epsilon}))\big{)}^{\top},$
with
$\displaystyle{F}({\mathbf{u}}^{\epsilon})$ $\displaystyle=$ $\displaystyle
2\mu{\rm
div}(\mathbb{D}({\mathbf{u}}^{\epsilon}))+\lambda\nabla(\mbox{tr}\mathbb{D}({\mathbf{u}}^{\epsilon})),$
$\displaystyle L({\mathbf{u}}^{\epsilon})$ $\displaystyle=$ $\displaystyle
2\mu|\mathbb{D}({\mathbf{u}}^{\epsilon})|^{2}+\lambda(\mbox{tr}\mathbb{D}({\mathbf{u}}^{\epsilon}))^{2},$
$\displaystyle G(\mathbf{H}^{\epsilon})$ $\displaystyle=$
$\displaystyle\nu|\nabla\times\mathbf{H}^{\epsilon}|^{2},$
and the matrices $A_{j}(U^{\epsilon})$ ($0\leq j\leq 3$) are given by
$\displaystyle\qquad\qquad\ \ \quad
A_{0}(U^{\epsilon})=\mbox{diag}(1,1+\epsilon q^{\epsilon},1+\epsilon
q^{\epsilon},1+\epsilon q^{\epsilon},1,1,1,1+\epsilon q^{\epsilon}),$
$\displaystyle\qquad\qquad\ \ \quad A_{1}(U^{\epsilon})=$
$\displaystyle{\small\left(\begin{array}[]{cccccccc}u_{1}^{\epsilon}&\frac{1+\epsilon
q^{\epsilon}}{\epsilon}&0&0&0&0&0&0\\\
\frac{1+\epsilon\phi^{\epsilon}}{\epsilon}&u_{1}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&0&0&H_{2}^{\epsilon}&H_{3}^{\epsilon}&\frac{1+\epsilon
q^{\epsilon}}{\epsilon}\\\ 0&0&u_{1}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&0&-H^{\epsilon}_{1}&0&0\\\ 0&0&0&u_{1}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&0&-H_{1}^{\epsilon}&0\\\ 0&0&0&0&u_{1}^{\epsilon}&0&0&0\\\
0&H_{2}^{\epsilon}&-H_{1}^{\epsilon}&0&0&u_{1}^{\epsilon}&0&0\\\
0&H_{3}^{\epsilon}&0&-H_{1}^{\epsilon}&0&0&u_{1}^{\epsilon}&0\\\
0&\frac{(\gamma-1)(1+\epsilon
q^{\epsilon})(1+\epsilon\phi^{\epsilon})}{\epsilon}&0&0&0&0&0&(1+\epsilon
q^{\epsilon})u_{1}^{\epsilon}\end{array}\right),}$ $\displaystyle\qquad\qquad\
\ \quad A_{2}(U^{\epsilon})=$
$\displaystyle{\small\left(\begin{array}[]{cccccccc}u_{2}^{\epsilon}&0&\frac{1+\epsilon
q^{\epsilon}}{\epsilon}&0&0&0&0&0\\\ 0&u_{2}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&0&-H_{2}^{\epsilon}&0&0&0\\\
\frac{1+\epsilon\phi^{\epsilon}}{\epsilon}&0&u_{2}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&H_{1}^{\epsilon}&0&H_{3}^{\epsilon}&\frac{1+\epsilon
q^{\epsilon}}{\epsilon}\\\ 0&0&0&u_{2}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&0&-H_{2}^{\epsilon}&0\\\
0&-H_{2}^{\epsilon}&H_{1}^{\epsilon}&0&u_{2}^{\epsilon}&0&0&0\\\
0&0&0&0&0&u_{2}^{\epsilon}&0&0\\\
0&0&H_{3}^{\epsilon}&-H_{2}^{\epsilon}&0&0&u_{2}^{\epsilon}&0\\\
0&0&\frac{(\gamma-1)(1+\epsilon
q^{\epsilon})(1+\epsilon\phi^{\epsilon})}{\epsilon}&0&0&0&0&(1+\epsilon
q^{\epsilon})u_{2}^{\epsilon}\end{array}\right),}$ $\displaystyle\qquad\qquad\
\ \quad A_{3}(U^{\epsilon})=$
$\displaystyle{\small\left(\begin{array}[]{cccccccc}u_{3}^{\epsilon}&0&0&\frac{1+\epsilon
q^{\epsilon}}{\epsilon}&0&0&0&0\\\ 0&u_{3}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&0&-H_{3}^{\epsilon}&0&0&0\\\ 0&0&u_{3}^{\epsilon}(1+\epsilon
q^{\epsilon})&0&0&-H_{3}^{\epsilon}&0&0\\\
\frac{1+\epsilon\phi^{\epsilon}}{\epsilon}&0&0&u_{3}^{\epsilon}(1+\epsilon
q^{\epsilon})&H_{1}^{\epsilon}&H_{2}^{\epsilon}&0&\frac{1+\epsilon
q^{\epsilon}}{\epsilon}\\\
0&-H_{3}^{\epsilon}&0&H_{1}^{\epsilon}&u_{3}^{\epsilon}&0&0&0\\\
0&0&-H_{3}^{\epsilon}&H_{2}^{\epsilon}&0&u_{3}^{\epsilon}&0&0\\\
0&0&0&0&0&0&u_{3}^{\epsilon}&0\\\ 0&0&0&\frac{(\gamma-1)(1+\epsilon
q^{\epsilon})(1+\epsilon\phi^{\epsilon})}{\epsilon}&0&0&0&(1+\epsilon
q^{\epsilon})u_{3}^{\epsilon}\end{array}\right).}$
It is easy to see that the matrices $A_{j}(U^{\epsilon})$ ($0\leq j\leq 3$)
can be symmetrized by choosing
$\hat{A}_{0}(U^{\epsilon})=\mbox{diag}\big{(}(1+\epsilon\phi^{\epsilon})(1+\epsilon
q^{\epsilon})^{-1},1,1,1,1,1,1,[(\gamma-1)(1+\epsilon\phi^{\epsilon})]^{-1}\big{)}.$
Moreover, for $U^{\epsilon}\in\bar{G}_{1}\subset\subset G$ with $G$ being the
state space for the system (2.1), $\hat{A}_{0}(U^{\epsilon})$ is a positive
definite symmetric matrix for sufficiently small $\epsilon$.
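As a quick consistency check (our own remark, for one representative pair of
entries of $\hat{A}_{0}(U^{\epsilon})A_{1}(U^{\epsilon})$): the $(1,2)$ entry
of $A_{1}$ is $\frac{1+\epsilon q^{\epsilon}}{\epsilon}$ and the first row of
$\hat{A}_{0}$ scales it by $(1+\epsilon\phi^{\epsilon})(1+\epsilon
q^{\epsilon})^{-1}$, giving $\frac{1+\epsilon\phi^{\epsilon}}{\epsilon}$, which
equals the $(2,1)$ entry of $A_{1}$ (left unscaled); likewise the $(8,2)$ entry
$\frac{(\gamma-1)(1+\epsilon q^{\epsilon})(1+\epsilon\phi^{\epsilon})}{\epsilon}$
becomes $\frac{1+\epsilon q^{\epsilon}}{\epsilon}$ after multiplication by
$[(\gamma-1)(1+\epsilon\phi^{\epsilon})]^{-1}$, matching the $(2,8)$ entry.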
Assume that the initial data
${U}^{\epsilon}(x,0)={U}_{0}^{\epsilon}(x)=({q^{\epsilon}_{0}}(x),{{\mathbf{u}}^{\epsilon}_{0}}(x),{\mathbf{H}^{\epsilon}_{0}}(x),{\phi^{\epsilon}_{0}}(x))^{\top}\in
H^{s}$ and ${U}_{0}^{\epsilon}(x)\in G_{0}$ with $\bar{G}_{0}\subset\subset G$.
The main theorem of the present paper is the following.
###### Theorem 2.2.
Let $s>{3}/{2}+2$. Suppose that the initial data ${U}_{0}^{\epsilon}(x)$
satisfy
$\left\|{U}_{0}^{\epsilon}(x)-\left(0,{\mathbf{w}}_{0}(x),{\mathbf{B}}_{0}(x),0\right)^{\top}\right\|_{s}=O(\epsilon).$
Let $({\mathbf{w}},{\mathbf{B}},\pi)$ be a smooth solution to (1.21)–(1.23)
obtained in Proposition 2.1. If $({\mathbf{w}},\pi)\in
C([0,T^{\ast}],H^{s+2})\cap C^{1}([0,T^{\ast}],H^{s})$ with $T^{\ast}>0$
finite, then there exists a constant $\epsilon_{0}>0$ such that, for all
$\epsilon\leq\epsilon_{0}$, the system (2.1) with initial data
${U}^{\epsilon}_{0}(x)$ has a unique smooth solution ${U}^{\epsilon}(x,t)\in
C([0,T^{\ast}],H^{s})$. Moreover, there exists a positive constant $K>0$,
independent of $\epsilon$, such that, for all $\epsilon\leq\epsilon_{0}$,
$\displaystyle\sup_{t\in[0,T^{\ast}]}\left\|U^{\epsilon}(\cdot,t)-\left(\frac{\epsilon}{2}\pi,{\mathbf{w}},{\mathbf{B}},\frac{\epsilon}{2}\pi\right)^{\top}\right\|_{s}\leq
K\epsilon.$ (2.2)
###### Remark 2.1.
From Theorem 2.2, we know that for sufficiently small $\epsilon$ and well-
prepared initial data, the full MHD equations (1.1)–(1.4) admit a unique
smooth solution on the same time interval where the smooth solution of the
incompressible MHD equations exists. Moreover, the solution can be
approximated as shown in (2.2).
###### Remark 2.2.
We remark that the constant $K$ in (2.2) is also independent of the
coefficients $\mu,\nu$ and $\kappa$. This is quite different from the results
by Hagstrom and Lorenz in [10], where the estimates depend intimately on
$\mu$.
Our approach is still valid for the ideal compressible MHD equations. However,
we will give a particular analysis for the ideal model with more general
pressure by using the entropy form of the energy equation rather than the
thermal energy equation in (1.8).
The ideal compressible MHD equations can be written as
$\displaystyle\partial_{t}\rho+{\rm div}(\rho{\mathbf{u}})=0,$ (2.3)
$\displaystyle\partial_{t}(\rho{\mathbf{u}})+{\rm
div}\left(\rho{\mathbf{u}}\otimes{\mathbf{u}}\right)+{\nabla
p}=\frac{1}{4\pi}(\nabla\times\mathbf{H})\times\mathbf{H},$ (2.4)
$\displaystyle\partial_{t}\mathbf{H}-\nabla\times({\mathbf{u}}\times\mathbf{H})=0,\quad{\rm
div}\mathbf{H}=0,$ (2.5) $\displaystyle\partial_{t}{\mathcal{E}}+{\rm
div}\left({\mathbf{u}}({\mathcal{E}}^{\prime}+p)\right)=\frac{1}{4\pi}{\rm
div}\big{(}({\mathbf{u}}\times\mathbf{H})\times\mathbf{H}\big{)}.$ (2.6)
With the help of the Gibbs relation
$\theta\mathrm{d}S=\mathrm{d}e+p\,\mathrm{d}\left(\frac{1}{\rho}\right)$
and the identity (1.7), the energy balance equation (2.6) is replaced by
$\partial_{t}(\rho S)+{\rm div}(\rho S{\mathbf{u}})=0,$ (2.7)
where $S$ denotes the entropy. We reconsider the equation of state as a
function of $S$ and $p$, i.e. $\rho=R(S,p)$ for some positive smooth function
$R$ defined for all $S$ and $p>0$, and satisfying $\frac{\partial R}{\partial
p}>0$. Then, by utilizing (2.3), (1.15) and (1.16), together with the
constraint ${\rm div}{\mathbf{H}}=0$, the system (2.3)–(2.5) and (2.7) can be
written in the dimensionless form as follows (after applying the arguments
similar to those in the Appendix):
$\displaystyle
A(S^{\epsilon},p^{\epsilon})(\partial_{t}p^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla
p^{\epsilon})+{\rm div}{\mathbf{u}}^{\epsilon}=0,$ (2.8) $\displaystyle
R(S^{\epsilon},p^{\epsilon})(\partial_{t}{\mathbf{u}}^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla{\mathbf{u}}^{\epsilon})+\frac{\nabla
p^{\epsilon}}{\epsilon^{2}}-\mathbf{H}^{\epsilon}\cdot\nabla\mathbf{H}^{\epsilon}+\frac{1}{2}\nabla(|\mathbf{H}^{\epsilon}|^{2})=0,$
(2.9)
$\displaystyle\partial_{t}{\mathbf{H}^{\epsilon}}+{\mathbf{u}}^{\epsilon}\cdot\nabla\mathbf{H}^{\epsilon}+{\rm
div}{\mathbf{u}}^{\epsilon}\mathbf{H}^{\epsilon}-\mathbf{H}^{\epsilon}\cdot\nabla{\mathbf{u}}^{\epsilon}=0,\quad{\rm
div}\mathbf{H}^{\epsilon}=0,$ (2.10)
$\displaystyle\partial_{t}S^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla
S^{\epsilon}=0,$ (2.11)
where
$A(S^{\epsilon},p^{\epsilon})=\frac{1}{R(S^{\epsilon},p^{\epsilon})}\frac{\partial
R(S^{\epsilon},p^{\epsilon})}{\partial p^{\epsilon}}$.
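As a concrete illustration (our own example, not needed in the proofs), for
the perfect gas one may take $S=c_{V}\ln(p\rho^{-\gamma})$ up to an additive
constant, so that
$\rho=R(S,p)=p^{1/\gamma}e^{-S/(\gamma c_{V})},\qquad A(S,p)=\frac{1}{R}\frac{\partial R}{\partial p}=\frac{1}{\gamma p},$
which is indeed positive and smooth for all $S$ and $p>0$.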
To study the low Mach number limit of the above system, we introduce the
transformation
$\displaystyle p^{\epsilon}(x,t)=\underline{p}e^{\epsilon
q^{\epsilon}(x,t)},\;\;S^{\epsilon}(x,t)=\underline{S}+\epsilon\Theta^{\epsilon}(x,t),$
(2.12)
where $\underline{p}$ and $\underline{S}$ are positive constants, to obtain
that
$\displaystyle a(\underline{S}+\epsilon\Theta^{\epsilon},\epsilon
q^{\epsilon})(\partial_{t}q^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla
q^{\epsilon})+\frac{1}{\epsilon}{\rm div}{\mathbf{u}}^{\epsilon}=0,$ (2.13)
$\displaystyle r(\underline{S}+\epsilon\Theta^{\epsilon},\epsilon
q^{\epsilon})(\partial_{t}{\mathbf{u}}^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla{\mathbf{u}}^{\epsilon})+\frac{1}{\epsilon}\nabla
q^{\epsilon}-\mathbf{H}^{\epsilon}\cdot\nabla\mathbf{H}^{\epsilon}+\frac{1}{2}\nabla(|\mathbf{H}^{\epsilon}|^{2})=0,$
(2.14)
$\displaystyle\partial_{t}{\mathbf{H}}^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla\mathbf{H}^{\epsilon}+{\rm
div}{\mathbf{u}}^{\epsilon}\mathbf{H}^{\epsilon}-\mathbf{H}^{\epsilon}\cdot\nabla{\mathbf{u}}^{\epsilon}=0,\quad{\rm
div}\mathbf{H}^{\epsilon}=0,$ (2.15)
$\displaystyle\partial_{t}\Theta^{\epsilon}+{\mathbf{u}}^{\epsilon}\cdot\nabla\Theta^{\epsilon}=0,$
(2.16)
where
$\displaystyle a(S^{\epsilon},\epsilon
q^{\epsilon})=A(S^{\epsilon},\underline{p}e^{\epsilon
q^{\epsilon}})\underline{p}e^{\epsilon
q^{\epsilon}}=\frac{\underline{p}e^{\epsilon
q^{\epsilon}}}{R(S^{\epsilon},\underline{p}e^{\epsilon
q^{\epsilon}})}\cdot\frac{\partial R(S^{\epsilon},s)}{\partial
s}\Big{|}_{s=\underline{p}e^{\epsilon q^{\epsilon}}},$ $\displaystyle
r(S^{\epsilon},\epsilon
q^{\epsilon})=\frac{R(S^{\epsilon},\underline{p}e^{\epsilon
q^{\epsilon}})}{\underline{p}e^{\epsilon q^{\epsilon}}}.$
Making use of the fact that ${\rm curl\,}\nabla=0$ and letting
$\epsilon\rightarrow 0$ in (2.13) and (2.14), we formally deduce that ${\rm
div}{\mathbf{v}}=0$ and
$\displaystyle{\rm
curl\,}\big{(}r(\underline{S},0)(\partial_{t}{\mathbf{v}}+{\mathbf{v}}\cdot\nabla{\mathbf{v}})-(\nabla\times{{\mathbf{J}}})\times{{\mathbf{J}}}\big{)}=0,$
where we have supposed that the limits
${{\mathbf{u}}}^{\epsilon}\rightarrow{\mathbf{v}}$ and
${\mathbf{H}}^{\epsilon}\rightarrow{\mathbf{J}}$ exist. Thus, we can expect
that the limiting system of (2.13)–(2.16) takes the form
$\displaystyle
r(\underline{S},0)(\partial_{t}{\mathbf{v}}+{\mathbf{v}}\cdot\nabla{\mathbf{v}})-(\nabla\times{{\mathbf{J}}})\times{{\mathbf{J}}}+\nabla\Pi=0,$
(2.17)
$\displaystyle\partial_{t}{{\mathbf{J}}}+{{\mathbf{v}}}\cdot\nabla{{\mathbf{J}}}-{{\mathbf{J}}}\cdot\nabla{{\mathbf{v}}}=0,$
(2.18) $\displaystyle{\rm div}{\mathbf{v}}=0,\quad{\rm div}{{\mathbf{J}}}=0$
(2.19)
for some function $\Pi$.
In order to state our result, we first recall the local existence of strong
solutions to the ideal incompressible MHD equations (2.17)–(2.19) in the
domain $\Omega$. The proof can be found in [7, 28].
###### Proposition 2.3 ([7, 28]).
Let $s>3/2+1$. Assume that the initial data
$({\mathbf{v}},{\mathbf{J}})|_{t=0}$ $=({\mathbf{v}}_{0},{\mathbf{J}}_{0})$
satisfy ${\mathbf{v}}_{0}\in{H}^{s},{\mathbf{J}}_{0}\in{H}^{s}$, and ${\rm
div}\,{\mathbf{v}}_{0}=0$, ${\rm div}{\mathbf{J}}_{0}=0$. Then, there exist a
$\tilde{T}^{*}\in(0,\infty]$ and a unique smooth solution
$({\mathbf{v}},{\mathbf{J}})\in L^{\infty}(0,\tilde{T}^{*};{H}^{s})$ to the
incompressible MHD equations (1.21)–(1.23), and for any $0<T<\tilde{T}^{*}$,
$\sup_{0\leq t\leq
T}\\!\big{\\{}||({\mathbf{v}},{\mathbf{J}})(t)||_{H^{s}}+||(\partial_{t}{\mathbf{v}},\partial_{t}{\mathbf{J}})(t)||_{H^{s-1}}+||\nabla\Pi(t)||_{H^{s-1}}\big{\\}}\leq
C.$
In vector form, writing
$V^{\epsilon}=(q^{\epsilon},{\mathbf{u}}^{\epsilon},\mathbf{H}^{\epsilon},\Theta^{\epsilon})^{\top}$,
we arrive at
$\displaystyle A_{0}(\epsilon\Theta^{\epsilon},\epsilon
q^{\epsilon})\partial_{t}V^{\epsilon}+\sum_{j=1}^{3}\Big{\\{}u^{\epsilon}_{j}A_{0}(\epsilon\Theta^{\epsilon},\epsilon
q^{\epsilon})+\epsilon^{-1}C_{j}+B_{j}(\mathbf{H}^{\epsilon})\Big{\\}}\partial_{j}V^{\epsilon}=0,$
(2.20)
where
$A_{0}(\epsilon\Theta^{\epsilon},\epsilon
q^{\epsilon})=\mbox{diag}(a(S^{\epsilon},\epsilon
q^{\epsilon}),r(S^{\epsilon},\epsilon q^{\epsilon}),r(S^{\epsilon},\epsilon
q^{\epsilon}),r(S^{\epsilon},\epsilon q^{\epsilon}),1,1,1,1),$
and each $C_{j}$ is a constant symmetric matrix, while $B_{j}(\mathbf{H}^{\epsilon})$
is a symmetric matrix depending on $\mathbf{H}^{\epsilon}$.
Assume that the initial data for the equations (2.20) satisfy
${V}^{\epsilon}_{0}(x)=(\tilde{q}_{0}^{\epsilon}(x),\tilde{{\mathbf{u}}}_{0}^{\epsilon}(x),\tilde{\mathbf{H}}_{0}^{\epsilon}(x),\tilde{\Theta}_{0}^{\epsilon}(x))^{\top}\in
H^{s},\;\mbox{ and }\;V_{0}^{\epsilon}(x)\in
G_{0},\;\;\bar{G}_{0}\subset\subset G$
with $G$ being the state space for (2.20). Our result on the ideal
compressible MHD equations then reads as follows.
###### Theorem 2.4.
Let $s>{3}/{2}+1$. Suppose that the initial data ${V}_{0}^{\epsilon}(x)$
satisfy
$\left\|{V}_{0}^{\epsilon}(x)-\left(0,{{\mathbf{v}}}_{0}(x),{\mathbf{J}}_{0}(x),0\right)^{\top}\right\|_{s}=O(\epsilon).$
Let $({\mathbf{v}},{\mathbf{J}},\Pi)$ be a smooth solution to (2.17)–(2.19)
obtained in Proposition 2.3. If $({\mathbf{v}},\Pi)\in
C([0,\bar{T}_{\ast}],H^{s+1})\cap C^{1}([0,\bar{T}_{\ast}]$, $H^{s})$ with
$\bar{T}_{\ast}>0$ finite, then there exists a constant $\epsilon_{1}>0$ such
that, for all $\epsilon\leq\epsilon_{1}$, the system (2.20) with initial data
${V}_{0}^{\epsilon}(x)$ has a unique solution $V^{\epsilon}(x,t)\in
C([0,\bar{T}_{\ast}],H^{s})$. Moreover, there exists a positive constant
$K_{1}>0$ such that, for all $\epsilon\leq\epsilon_{1}$,
$\displaystyle\sup_{t\in[0,\bar{T}_{\ast}]}\left\|V^{\epsilon}(\cdot,t)-(\epsilon\Pi,{\mathbf{v}},{\mathbf{J}},\epsilon\Pi)^{\top}\right\|_{s}\leq
K_{1}\epsilon.$ (2.21)
## 3\. Proof of Theorem 2.2
This section is devoted to proving Theorem 2.2. First, following the proof of
the local existence theory for the initial value problem of symmetrizable
hyperbolic-parabolic systems by Volpert and Hudjaev in [29], we obtain that
there exists a time interval $[0,T]$ with $T>0$, so that the system (2.1) with
initial data ${U}_{0}^{\epsilon}(x)$ has a unique classical solution
$U^{\epsilon}(x,t)\in C([0,T],H^{s})$ and $U^{\epsilon}(x,t)\in G_{2}$ with
$\bar{G}_{2}\subset\subset G$. We remark that the crucial step in the proof of
the local existence result is to establish the uniform boundedness of the
solutions. See also [19] for some related results.
Now, define
$\displaystyle T_{\epsilon}=\sup\\{T>0:U^{\epsilon}(x,t)\in
C([0,T],H^{s}),U^{\epsilon}(x,t)\in
G_{2},\forall\,(x,t)\in\Omega\times[0,T]\\}.$
Note that $T_{\epsilon}$ depends on $\epsilon$ and may tend to zero as
$\epsilon$ goes to $0$.
To show that $\underline{\lim}_{\epsilon\rightarrow 0}T_{\epsilon}>0$, we
shall make use of the convergence-stability lemma which was established in
[30, 3] for hyperbolic systems of balance laws. It is also implied in [30]
that a convergence-stability lemma can be formulated as a part of (local)
existence theories for any evolution equations. For the hyperbolic-parabolic
system (2.1), we have the following convergence-stability lemma.
###### Lemma 3.1.
Let $s>{3}/{2}+2$. Suppose that ${U}^{\epsilon}_{0}(x)\in
G_{0},\bar{G}_{0}\subset\subset G,$ and ${U}^{\epsilon}_{0}(x)\in H^{s}$, and
the following convergence assumption (A) holds.
(A) There exists $T_{\star}>0$ and $U_{\epsilon}\in
L^{\infty}(0,T_{\star};H^{s})$ for each $\epsilon$, satisfying
$\displaystyle\bigcup_{x,t,\epsilon}\\{U_{\epsilon}(x,t)\\}\subset\subset G,$
such that for $t\in[0,\min\\{T_{\star},T_{\epsilon}\\})$,
$\sup_{x,t}|U^{\epsilon}(x,t)-U_{\epsilon}(x,t)|=o(1),\;\;\sup_{t}\|U^{\epsilon}(x,t)-U_{\epsilon}(x,t)\|_{s}=O(1),\quad\mbox{as
}\epsilon\to 0.$
Then, there exists an $\bar{\epsilon}>0$ such that, for all
$\epsilon\in(0,\bar{\epsilon}]$, it holds that
$\displaystyle T_{\epsilon}>T_{\star}.$
To apply Lemma 3.1, we construct the approximation
$U_{\epsilon}=(q_{\epsilon},{\mathbf{v}}_{\epsilon},{\mathbf{B}}_{\epsilon},\phi_{\epsilon})^{\top}$
with
$q_{\epsilon}=\epsilon\pi/2,{\mathbf{v}}_{\epsilon}={\mathbf{w}},{\mathbf{B}}_{\epsilon}={\mathbf{B}},$
and $\phi_{\epsilon}={\epsilon}\pi/2$, where $({\mathbf{w}},{\mathbf{B}},\pi)$
is the classical solution to the system (1.21)–(1.23) obtained in Proposition
2.1. It is easy to verify that $U_{\epsilon}$ satisfies
$\displaystyle\partial_{t}q_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla
q_{\epsilon}+\frac{1}{\epsilon}(1+\epsilon q_{\epsilon}){\rm
div}{\mathbf{v}}_{\epsilon}=\frac{\epsilon}{2}(\pi_{t}+{\mathbf{w}}\cdot\nabla\pi),$
(3.1) $\displaystyle(1+\epsilon
q_{\epsilon})(\partial_{t}{\mathbf{v}}_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla{\mathbf{v}}_{\epsilon})+\frac{1}{\epsilon}\big{[}(1+\epsilon
q_{\epsilon})\nabla\phi_{\epsilon}+(1+\epsilon\phi_{\epsilon})\nabla
q_{\epsilon}\big{]}$
$\displaystyle\qquad\qquad-{\mathbf{B}}_{\epsilon}\cdot\nabla{\mathbf{B}}_{\epsilon}+\frac{1}{2}\nabla(|{\mathbf{B}}_{\epsilon}|^{2})=\mu\Delta{\mathbf{v}}_{\epsilon}+\frac{\epsilon^{2}}{2}\pi({\mathbf{w}}_{t}+{\mathbf{w}}\cdot\nabla{\mathbf{w}}+\nabla\pi),$
(3.2)
$\displaystyle\partial_{t}{\mathbf{B}}_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla{\mathbf{B}}_{\epsilon}+{\rm
div}{\mathbf{v}}_{\epsilon}{\mathbf{B}}_{\epsilon}-{\mathbf{B}}_{\epsilon}\cdot\nabla{\mathbf{v}}_{\epsilon}=\nu\Delta{\mathbf{B}}_{\epsilon},\quad{\rm
div}{\mathbf{B}}_{\epsilon}=0,$ (3.3) $\displaystyle(1+\epsilon
q_{\epsilon})(\partial_{t}\phi_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla\phi_{\epsilon})+\frac{\gamma-1}{\epsilon}(1+\epsilon
q_{\epsilon})(1+\epsilon\phi_{\epsilon}){\rm div}{\mathbf{v}}_{\epsilon}$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad=\left(\frac{\epsilon}{2}+\frac{\epsilon^{3}}{4}\pi\right)(\pi_{t}+{\mathbf{w}}\cdot\nabla\pi).$
(3.4)
We rewrite the system (3.1)–(3.4) in the following vector form
$\displaystyle
A_{0}(U_{\epsilon})\partial_{t}U_{\epsilon}+\sum_{j=1}^{3}A_{j}(U_{\epsilon})\partial_{j}U_{\epsilon}=S(U_{\epsilon})+R,$
(3.5)
with
$S(U_{\epsilon})=(0,\mu\Delta{\mathbf{v}}_{\epsilon},\nu\Delta{\mathbf{B}}_{\epsilon},0)^{\top}$
and
$R=\left(\begin{array}[]{c}\frac{\epsilon}{2}(\pi_{t}+{\mathbf{w}}\cdot\nabla\pi)\\\
\frac{\epsilon^{2}}{2}\pi({\mathbf{w}}_{t}+{\mathbf{w}}\cdot\nabla{\mathbf{w}}+\nabla\pi)\\\
\big{(}\frac{\epsilon}{2}+\frac{\epsilon^{3}}{4}\pi\big{)}(\pi_{t}+{\mathbf{w}}\cdot\nabla\pi)\\\
0\end{array}\right).$
Due to the regularity assumptions on $({\mathbf{w}},\pi)$ in Theorem 2.2, we
have
$\displaystyle\max_{t\in[0,T^{\ast}]}\|R(t)\|_{s}\leq C\epsilon.$
To prove Theorem 2.2, it suffices to prove the error estimate in (2.2) for
$t\in[0,\min\\{T^{\ast},T_{\epsilon}\\})$ thanks to Lemma 3.1. To this end,
introducing
$E=U^{\epsilon}-U_{\epsilon}\quad\mbox{ and
}\quad\mathcal{A}_{j}(U)=A_{0}^{-1}(U)A_{j}(U),$
and using (2.1) and (3.5), we see that
$\displaystyle E_{t}+\sum_{j=1}^{3}\mathcal{A}_{j}(U^{\epsilon})E_{x_{j}}=$
$\displaystyle\sum_{j=1}^{3}(\mathcal{A}_{j}(U_{\epsilon})-\mathcal{A}_{j}(U^{\epsilon}))U_{\epsilon
x_{j}}+A_{0}^{-1}(U^{\epsilon})Q(U^{\epsilon})$ $\displaystyle-
A_{0}^{-1}(U_{\epsilon})(S(U_{\epsilon})+R).$ (3.6)
For any multi-index $\alpha$ satisfying $|\alpha|\leq s$, we apply the operator
$D^{\alpha}$ to (3.6) to obtain
$\displaystyle\partial_{t}D^{\alpha}E+\sum_{j=1}^{3}\mathcal{A}_{j}(U^{\epsilon})\partial_{x_{j}}D^{\alpha}E=P_{1}^{\alpha}+P_{2}^{\alpha}+Q^{\alpha}+R^{\alpha}$
(3.7)
with
$\displaystyle
P_{1}^{\alpha}=\sum_{j=1}^{3}\\{\mathcal{A}_{j}(U^{\epsilon})\partial_{x_{j}}D^{\alpha}E-D^{\alpha}(\mathcal{A}_{j}(U^{\epsilon})\partial_{x_{j}}E)\\},$
$\displaystyle
P_{2}^{\alpha}=\sum_{j=1}^{3}D^{\alpha}\\{(\mathcal{A}_{j}(U_{\epsilon})-\mathcal{A}_{j}(U^{\epsilon}))U_{\epsilon
x_{j}}\\},$ $\displaystyle
Q^{\alpha}=D^{\alpha}\\{A_{0}^{-1}(U^{\epsilon})Q(U^{\epsilon})-A_{0}^{-1}(U_{\epsilon})S(U_{\epsilon})\\},$
$\displaystyle R^{\alpha}=D^{\alpha}\\{A_{0}^{-1}(U_{\epsilon})R\\}.$
Define
$\tilde{A}_{0}(U^{\epsilon})=\mbox{diag}\Big{(}\frac{1+\epsilon\phi^{\epsilon}}{(1+\epsilon
q^{\epsilon})^{2}},1,1,1,\frac{1}{1+\epsilon q^{\epsilon}},\frac{1}{1+\epsilon
q^{\epsilon}},\frac{1}{1+\epsilon
q^{\epsilon}},\frac{1}{(\gamma-1)(1+\epsilon\phi^{\epsilon})}\Big{)},$
and the canonical energy by
$\displaystyle\|E\|_{\mathrm{e}}^{2}:=\int\langle\tilde{A}_{0}(U^{\epsilon})E,E\rangle
dx.$
Note that $\tilde{A}_{0}(U^{\epsilon})$ is a positive definite symmetric
matrix and $\tilde{A}_{0}(U^{\epsilon})\mathcal{A}_{j}(U^{\epsilon})$ is
symmetric. Now, if we multiply (3.7) by $\tilde{A}_{0}(U^{\epsilon})$ and
take the inner product between the resulting system and $D^{\alpha}E$, we
arrive at
$\displaystyle\frac{d}{dt}\|D^{\alpha}E\|_{\mathrm{{e}}}^{2}=$ $\displaystyle
2\int\langle\Gamma D^{\alpha}E,D^{\alpha}E\rangle dx$
$\displaystyle+2\int(D^{\alpha}E)^{\top}\tilde{A}_{0}(U^{\epsilon})(P_{1}^{\alpha}+P_{2}^{\alpha}+Q^{\alpha}+R^{\alpha})\,dx,$
(3.8)
where
$\displaystyle\Gamma=(\partial_{t},\nabla)\cdot\Big{(}\tilde{A}_{0},\tilde{A}_{0}(U^{\epsilon})\mathcal{A}_{1}(U^{\epsilon}),\tilde{A}_{0}(U^{\epsilon})\mathcal{A}_{2}(U^{\epsilon}),\tilde{A}_{0}(U^{\epsilon})\mathcal{A}_{3}(U^{\epsilon})\Big{)}.$
Next, we estimate various terms on the right-hand side of (3.8). Note that our
estimates only need to be done for $t\in[0,\min\\{T^{\ast},T_{\epsilon}\\})$,
in which both $U^{\epsilon}$ and $U_{\epsilon}$ are regular enough and take
values in a convex compact subset of the state space. Thus, we have
$\displaystyle
C^{-1}\int|D^{\alpha}E|^{2}\leq\|D^{\alpha}E\|_{\mathrm{e}}^{2}\leq
C\int|D^{\alpha}E|^{2}$ (3.9)
and
$\displaystyle|(D^{\alpha}E)^{\top}\tilde{A}_{0}(U^{\epsilon})(P_{1}^{\alpha}+P_{2}^{\alpha}+R^{\alpha})|\leq
C(|D^{\alpha}E|^{2}+|P_{1}^{\alpha}|^{2}+|P_{2}^{\alpha}|^{2}+|R^{\alpha}|^{2}).$
To estimate $\Gamma$, we write
$\mathcal{A}_{j}(U^{\epsilon})=u^{\epsilon}_{j}\mathbf{I}_{8}+\bar{\mathcal{A}}_{j}(U^{\epsilon})$.
Notice that $\bar{\mathcal{A}}_{j}(U^{\epsilon})$ depends only on
$q^{\epsilon},\phi^{\epsilon}$ and $\mathbf{H}^{\epsilon}$. Thus using (1.17)
and (1.19), we have
$\displaystyle|\Gamma|=$ $\displaystyle\left|\frac{\partial}{\partial
t}\tilde{A}_{0}+{\mathbf{u}}^{\epsilon}\cdot\nabla\tilde{A}_{0}+\tilde{A}_{0}{\rm
div}{\mathbf{u}}^{\epsilon}+{\rm
div}(\tilde{A}_{0}\bar{\mathcal{A}}_{j}(U^{\epsilon}))\right|$
$\displaystyle=$ $\displaystyle\big{|}\tilde{A}_{0}\,{\rm
div}{\mathbf{u}}^{\epsilon}-\tilde{A}_{0\eta_{1}}^{{}^{\prime}}(1+\epsilon
q^{\epsilon}){\rm
div}{\mathbf{u}}^{\epsilon}-\tilde{A}_{0\eta_{2}}^{{}^{\prime}}[(1+\epsilon\phi^{\epsilon}){\rm
div}{\mathbf{u}}^{\epsilon}+\kappa(1+\epsilon
q^{\epsilon})^{-1}\Delta\phi^{\epsilon}$
$\displaystyle+\epsilon^{2}(L({\mathbf{u}}^{\epsilon})+G(\mathbf{H}^{\epsilon}))]+{\rm
div}(\tilde{A}_{0}\bar{\mathcal{A}}_{j}(U^{\epsilon}))\big{|}$
$\displaystyle\leq$ $\displaystyle C+C(|\nabla{\mathbf{u}}^{\epsilon}|+|\nabla
q^{\epsilon}|+|\nabla\phi^{\epsilon}|+|\nabla\mathbf{H}^{\epsilon}|+|\Delta\phi^{\epsilon}|+|\nabla{\mathbf{u}}^{\epsilon}|^{2}+|\nabla\mathbf{H}^{\epsilon}|^{2})$
$\displaystyle\leq$ $\displaystyle C+C(|\nabla E|+|\nabla
E|^{2})+C|\Delta(\phi^{\epsilon}-\phi_{\epsilon})|+C(|\nabla
U_{\epsilon}|+|\nabla U_{\epsilon}|^{2})$ $\displaystyle\leq$ $\displaystyle
C+C(\|E\|_{s}+\|E\|_{s}^{2}),$
where we have used Sobolev’s embedding theorem and the fact that
$s>{3}/{2}+2$, and the symbols $\tilde{A}_{0\eta_{1}}^{{}^{\prime}}$ and
$\tilde{A}_{0\eta_{2}}^{{}^{\prime}}$ denote the differentiation of
$\tilde{A}_{0}$ with respect to $\rho^{\epsilon}$ and $\theta^{\epsilon}$,
respectively.
Since
$\displaystyle\mathcal{A}_{j}(U^{\epsilon})\partial_{x_{j}}D^{\alpha}E-D^{\alpha}(\mathcal{A}_{j}(U^{\epsilon})\partial_{x_{j}}E)=-\sum_{0<\beta\leq\alpha}\binom{\alpha}{\beta}\partial^{\beta}\mathcal{A}_{j}(U^{\epsilon})\partial^{\alpha-\beta}E_{x_{j}}$
$\displaystyle\qquad=-\sum_{0<\beta\leq\alpha}\binom{\alpha}{\beta}\partial^{\beta}[u^{\epsilon}_{j}\mathbf{I}_{8}+\bar{\mathcal{A}}_{j}(U^{\epsilon})]\partial^{\alpha-\beta}E_{x_{j}},$
we obtain, with the help of the Moser-type calculus inequalities in Sobolev
spaces, that
$\displaystyle\|P_{1}^{\alpha}\|\leq$ $\displaystyle
C\left\\{(1+\|({\mathbf{u}}^{\epsilon},\mathbf{H}^{\epsilon})\|_{s})\|E_{x_{j}}\|_{|\alpha|-1}+\|\epsilon^{-1}(\partial^{\beta}f(q^{\epsilon},\phi^{\epsilon})\partial^{\alpha-\beta}E_{x_{j}})\|\right\\}$
$\displaystyle+C\|\partial^{\beta}[(1+\epsilon
q^{\epsilon})^{-1}(\mathbf{H}^{\epsilon}-{\mathbf{B}}_{\epsilon})+((1+\epsilon
q^{\epsilon})^{-1}-(1+\epsilon
q_{\epsilon})^{-1}){\mathbf{B}}_{\epsilon}]\partial^{\alpha-\beta}E_{x_{j}}\|$
$\displaystyle\leq$ $\displaystyle
C(1+\|E\|_{s}+\|(q^{\epsilon},\phi^{\epsilon})\|_{s}^{s})\|E_{x_{j}}\|_{|\alpha|-1}$
$\displaystyle\leq$ $\displaystyle
C(1+\|E\|_{s}^{s})\|E_{x_{j}}\|_{|\alpha|},$
where $f(q^{\epsilon},\phi^{\epsilon})=(1+\epsilon
q^{\epsilon})+(\gamma-1)(1+\epsilon\phi^{\epsilon})+(1+\epsilon
q^{\epsilon})^{-1}(1+\epsilon\phi^{\epsilon}).$
Similarly, utilizing the boundedness of $\|U_{\epsilon}\|_{s+1}$, the term
$P_{2}^{\alpha}$ can be bounded as follows:
$\displaystyle\|P_{2}^{\alpha}\|$ $\displaystyle\leq C\|U_{\epsilon
x_{j}}\|_{s}\|\mathcal{A}_{j}(U_{\epsilon})-\mathcal{A}_{j}(U^{\epsilon})\|_{|\alpha|}$
$\displaystyle\leq C\|(u_{j}^{\epsilon}-v_{\epsilon
j})\mathbf{I}_{8}+\bar{\mathcal{A}}_{j}(U^{\epsilon})-\bar{\mathcal{A}}_{j}(U_{\epsilon})\|_{|\alpha|}$
$\displaystyle\leq
C(1+\|{\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon}\|_{|\alpha|}+\|\mathbf{H}^{\epsilon}-{\mathbf{B}}_{\epsilon}\|_{|\alpha|})+C\|\epsilon^{-1}(f(q^{\epsilon},\phi^{\epsilon})-f(q_{\epsilon},\phi_{\epsilon}))\|_{|\alpha|}$
$\displaystyle\leq
C(1+\|q_{\epsilon}+\eta_{3}(q^{\epsilon}-q_{\epsilon})+\phi_{\epsilon}+\eta_{4}(\phi^{\epsilon}-\phi_{\epsilon})\|_{s}^{s})\|E\|_{|\alpha|}$
$\displaystyle\leq C(1+\|E\|_{s}^{s})\|E\|_{|\alpha|},$
where $0\leq\eta_{3},\eta_{4}\leq 1$ are constants.
The estimate of
$\int(D^{\alpha}E)^{\top}\tilde{A}_{0}(U^{\epsilon})Q^{\alpha}$ is more
complex and delicate. First, we can rewrite
$\int(D^{\alpha}E)^{\top}\tilde{A}_{0}(U^{\epsilon})Q^{\alpha}$ as
$\displaystyle\int(D^{\alpha}E)^{\top}\tilde{A}_{0}(U^{\epsilon})Q^{\alpha}=\int
D^{\alpha}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})D^{\alpha}\big{[}(1+\epsilon
q^{\epsilon})^{-1}F({\mathbf{u}}^{\epsilon})-\mu(1+\epsilon
q_{\epsilon})^{-1}\Delta{\mathbf{v}}_{\epsilon}\big{]}$
$\displaystyle\quad+\nu\int
D^{\alpha}(\mathbf{H}^{\epsilon}-{\mathbf{B}}_{\epsilon})(1+\epsilon
q^{\epsilon})^{-1}D^{\alpha}(\Delta\mathbf{H}^{\epsilon}-\Delta{\mathbf{B}}_{\epsilon})$
$\displaystyle\quad+\kappa(\gamma-1)^{-1}\int
D^{\alpha}(\phi^{\epsilon}-\phi_{\epsilon})(1+\epsilon\phi^{\epsilon})^{-1}D^{\alpha}\\{(1+\epsilon
q^{\epsilon})^{-1}\Delta\phi^{\epsilon}-(1+\epsilon
q_{\epsilon})^{-1}\Delta\phi_{\epsilon}\\}$
$\displaystyle\quad+\epsilon(\gamma-1)^{-1}\int
D^{\alpha}(\phi^{\epsilon}-\phi_{\epsilon})(1+\epsilon\phi^{\epsilon})^{-1}D^{\alpha}\\{(1+\epsilon
q^{\epsilon})^{-1}(L({\mathbf{u}}^{\epsilon})+G(\mathbf{H}^{\epsilon}))\\}$
$\displaystyle=\mathcal{Q}_{u}+\mathcal{Q}_{H}+\mathcal{Q}_{\phi_{1}}+\mathcal{Q}_{\phi_{2}}.$
By integration by parts, the Cauchy and Moser-type inequalities, and Sobolev’s
embedding theorem, we find that $\mathcal{Q}_{u}$ can be controlled as
follows:
$\displaystyle\mathcal{Q}_{u}=$ $\displaystyle\int
D^{\alpha}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})D^{\alpha}\\{(1+\epsilon
q^{\epsilon})^{-1}\mu\Delta({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})+(\mu+\lambda)\nabla{\rm
div}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\\}$
$\displaystyle+\mu\int
D^{\alpha}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})D^{\alpha}\\{[(1+\epsilon
q^{\epsilon})^{-1}-(1+\epsilon
q_{\epsilon})^{-1}]\Delta{\mathbf{v}}_{\epsilon}\\}$ $\displaystyle\leq$
$\displaystyle-\int\frac{\mu}{1+\epsilon
q^{\epsilon}}|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}-\int\frac{\mu+\lambda}{1+\epsilon
q^{\epsilon}}|D^{\alpha}{\rm
div}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}$
$\displaystyle+\int
D^{\alpha}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\sum_{0<\beta\leq\alpha}D^{\beta}[(1+\epsilon
q^{\epsilon})^{-1}]D^{\alpha-\beta}\big{\\{}\mu\Delta({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})$
$\displaystyle+(\mu+\lambda)\nabla{\rm
div}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\big{\\}}+C\|E\|_{|\alpha|}^{2}+C\|E\|_{s}^{4}+C\epsilon\|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\|^{2}$
$\displaystyle\leq$
$\displaystyle-C\int\mu|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}-C\int(\mu+\lambda)|D^{\alpha}{\rm
div}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}+C\|E\|_{|\alpha|}^{2}$
$\displaystyle+C\epsilon\|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\|^{2}+C\|E\|_{s}^{4}+C\|E\|_{s}^{2}\|E\|_{|\alpha|}^{2}+\int
D^{\alpha}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\cdot$
$\displaystyle\ \ \ \ \sum_{1<\beta\leq\alpha}D^{\beta}[(1+\epsilon
q^{\epsilon})^{-1}]D^{\alpha-\beta}\\{\mu\Delta({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})+(\mu+\lambda)\nabla{\rm
div}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\\}$ $\displaystyle\leq$
$\displaystyle-C\int\mu|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}-C\int(\mu+\lambda)|D^{\alpha}{\rm
div}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}$
$\displaystyle+C\epsilon\|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})\|^{2}+C\|E\|_{s}^{4}+C\|E\|_{|\alpha|}^{2}.$
Similarly, the terms $\mathcal{Q}_{H}$, $\mathcal{Q}_{\phi_{1}}$ and
$\mathcal{Q}_{\phi_{2}}$ can be bounded as follows:
$\displaystyle\mathcal{Q}_{H}\leq-C\nu\int|D^{\alpha}\nabla(\mathbf{H}^{\epsilon}-{\mathbf{B}}_{\epsilon})|^{2}+C\epsilon\|D^{\alpha}\nabla(\mathbf{H}^{\epsilon}-{\mathbf{B}}_{\epsilon})\|^{2}+C\|E\|_{s}^{4}+C\|E\|_{|\alpha|}^{2},$
$\displaystyle\mathcal{Q}_{\phi_{1}}\leq-C\kappa\int|D^{\alpha}\nabla(\phi^{\epsilon}-\phi_{\epsilon})|^{2}+C\epsilon\|D^{\alpha}\nabla(\phi^{\epsilon}-\phi_{\epsilon})\|^{2}+C\|E\|_{s}^{4}+C\|E\|_{|\alpha|}^{2}$
and
$\mathcal{Q}_{\phi_{2}}\leq
C\epsilon\|D^{\alpha}\nabla(\phi^{\epsilon}-\phi_{\epsilon})\|^{2}+C\|E\|_{s}^{4}+C\|E\|_{|\alpha|}^{2}.$
Putting all the above estimates into (3.8) and taking $\epsilon$ small enough,
we obtain that
$\displaystyle\frac{d}{dt}\|D^{\alpha}E\|_{\mathrm{e}}^{2}+\xi\int|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}+\nu\int|D^{\alpha}\nabla(\mathbf{H}^{\epsilon}-{\mathbf{B}}_{\epsilon})|^{2}$
$\displaystyle\qquad+\kappa\int|D^{\alpha}\nabla(\phi^{\epsilon}-\phi_{\epsilon})|^{2}\leq
C\|R^{\alpha}\|^{2}+C(1+\|E\|_{s}^{2s})\|E\|_{|\alpha|}^{2}+\|E\|_{s}^{4},$
(3.10)
where we have used the following estimate
$\displaystyle\mu\int|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}+(\mu+\lambda)\int|D^{\alpha}{\rm
div}({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}\geq\xi\int|D^{\alpha}\nabla({\mathbf{u}}^{\epsilon}-{\mathbf{v}}_{\epsilon})|^{2}$
for some positive constant $\xi>0$.
Using (3.9), we integrate the inequality (3.10) over $(0,t)$ with
$t<\min\\{T_{\epsilon},T^{\ast}\\}$ to obtain
$\displaystyle\|D^{\alpha}E(t)\|^{2}\leq$ $\displaystyle
C\|D^{\alpha}E(0)\|^{2}+C\int_{0}^{t}\|R^{\alpha}(\tau)\|^{2}d\tau$
$\displaystyle+C\int_{0}^{t}\big{\\{}(1+\|E\|_{s}^{2s})\|E\|_{|\alpha|}^{2}+\|E\|_{s}^{4}\big{\\}}(\tau)d\tau.$
Summing up this inequality for all $\alpha$ with $|\alpha|\leq s$, we get
$\displaystyle\|E(t)\|_{s}^{2}\leq
C\|E(0)\|_{s}^{2}+C\int_{0}^{T^{\ast}}\|R(\tau)\|_{s}^{2}d\tau+C\int_{0}^{t}\big{\\{}(1+\|E\|_{s}^{2s})\|E\|_{s}^{2}\big{\\}}(\tau)d\tau.$
With the help of Gronwall’s lemma and the fact that
$\|E(0)\|_{s}^{2}+\int_{0}^{T^{\ast}}\|R(t)\|_{s}^{2}dt=O(\epsilon^{2}),$
we conclude that
$\displaystyle\|E(t)\|_{s}^{2}\leq
C\epsilon^{2}\mbox{exp}\left\\{C\int_{0}^{t}(1+\|E(\tau)\|_{s}^{2s})d\tau\right\\}\equiv\Phi(t).$
It is easy to see that $\Phi(t)$ satisfies
$\displaystyle\Phi^{\prime}(t)=C(1+\|E(t)\|_{s}^{2s})\Phi(t)\leq
C\Phi(t)+C\Phi^{s+1}(t).$
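For the reader's convenience, we sketch one way to read this step (a standard
bootstrap argument with schematic constants, added here as a comment): as long
as $\Phi(t)\leq 1$ on the interval under consideration, the inequality above
gives $\Phi^{\prime}\leq 2C\Phi$, hence
$\Phi(t)\leq\Phi(0)e^{2Ct}\leq C\epsilon^{2}e^{2CT^{*}},$
which is smaller than $1$ for all sufficiently small $\epsilon$; the a priori
assumption $\Phi\leq 1$ therefore propagates, and
$\|E(t)\|_{s}\leq\sqrt{\Phi(t)}\leq K\epsilon$ with $K$ independent of
$\epsilon$.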
Thus, employing the nonlinear Gronwall-type inequality, we conclude that there
exists a constant $K$, independent of $\epsilon$, such that
$\displaystyle\|E(t)\|_{s}\leq K\epsilon,$
for all $t\in[0,\min\\{T_{\epsilon},T^{*}\\})$, provided
$\Phi(0)=C\epsilon^{2}<\mbox{exp}(-CT^{*})$. Thus, the proof is completed.
## 4\. Proof of Theorem 2.4
The proof of Theorem 2.4 is essentially similar to that of Theorem 2.2, and we
only give some explanations here. The local existence of a classical solution to
the system (2.20) is given by the proof of Theorem 2.1 in [24]. For each fixed
$\epsilon$, we assume that the maximal time interval of existence is
$[0,T^{\epsilon})$. To prove Theorem 2.4, it is crucial to obtain the error
estimates in (2.21). For this purpose, we construct the approximation
$V_{\epsilon}=(q_{\epsilon},{\mathbf{v}}_{\epsilon},{\mathbf{J}}_{\epsilon},\Theta_{\epsilon})^{\top}$
with
$q_{\epsilon}={\epsilon}\Pi,{\mathbf{v}}_{\epsilon}={\mathbf{v}},{\mathbf{J}}_{\epsilon}={\mathbf{J}},$
and $\Theta_{\epsilon}=\epsilon\Pi$. It is then easy to verify that
$V_{\epsilon}$ satisfies
$\displaystyle a(\underline{S}+\epsilon\Theta_{\epsilon},\epsilon
q_{\epsilon})(\partial_{t}q_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla
q_{\epsilon})+\frac{1}{\epsilon}{\rm div}{\mathbf{v}}_{\epsilon}$
$\displaystyle\qquad\qquad\qquad\qquad=\epsilon
a(\underline{S}+\epsilon^{2}\Pi,\epsilon^{2}\Pi)(\Pi_{t}+{\mathbf{v}}\cdot\nabla\Pi),$
(4.1) $\displaystyle r(\underline{S}+\epsilon\Theta_{\epsilon},\epsilon
q_{\epsilon})(\partial_{t}{\mathbf{v}}_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla{\mathbf{v}}_{\epsilon})+\frac{1}{\epsilon}\nabla
q_{\epsilon}-{\mathbf{J}}_{\epsilon}\cdot\nabla{\mathbf{J}}_{\epsilon}+\frac{1}{2}\nabla(|{\mathbf{J}}_{\epsilon}|^{2})$
$\displaystyle\qquad\qquad\qquad\qquad=[r(\underline{S}+\epsilon\Theta_{\epsilon},\epsilon
q_{\epsilon})-r(\underline{S},0)]({\mathbf{v}}_{t}+{\mathbf{v}}\cdot\nabla{\mathbf{v}}),$
(4.2)
$\displaystyle\partial_{t}{\mathbf{J}}_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla{\mathbf{J}}_{\epsilon}+{\rm
div}{\mathbf{v}}_{\epsilon}{\mathbf{J}}_{\epsilon}-{\mathbf{J}}_{\epsilon}\cdot\nabla{\mathbf{v}}_{\epsilon}=0,\quad{\rm
div}{\mathbf{J}}_{\epsilon}=0,$ (4.3)
$\displaystyle\partial_{t}\Theta_{\epsilon}+{\mathbf{v}}_{\epsilon}\cdot\nabla\Theta_{\epsilon}=\epsilon(\Pi_{t}+{\mathbf{v}}\cdot\nabla\Pi).$
(4.4)
Thus we can rewrite (4.1)–(4.4) in the vector form of (2.20) with a source
term. Letting $E=V^{\epsilon}-V_{\epsilon}$, we can perform the energy
estimates similar to those in the proof of Theorem 2.2 to show Theorem 2.4.
Here we omit the details of the proof for conciseness.
## 5\. Appendix
We give a dimensionless form of the system (1.1)-(1.3) and (1.8) for the
ionized fluid obeying the perfect gas relations (1.9) by following the spirit
of [12]. Introduce the new dimensionless quantities:
$\displaystyle{x}_{\star}=\frac{{x}}{L_{0}},\;\;t_{\star}=\frac{t}{L_{0}/u_{0}},\;\;{\mathbf{u}}_{\star}=\frac{{\mathbf{u}}}{u_{0}},$
$\displaystyle\mathbf{H}_{\star}=\frac{\mathbf{H}}{H_{0}},\;\;\rho_{\star}=\frac{\rho}{\rho_{0}},\;\;\theta_{\star}=\frac{\theta}{\theta_{0}},$
where the subscripts $0$ denote the corresponding typical values and $\star$
denotes dimensionless quantities. For convenience, all the coefficients are
assumed to be constants. Thus, the dimensionless form of the system
(1.1)–(1.3) and (1.8) is obtained by a direct computation:
$\displaystyle\frac{\partial\rho_{\star}}{\partial t_{\star}}+{\rm
div}_{\star}(\rho_{\star}{\mathbf{u}}_{\star})=0,$
$\displaystyle\rho_{\star}\frac{d{\mathbf{u}}_{\star}}{dt_{\star}}+\frac{1}{M^{2}}\nabla_{\star}(\rho_{\star}\theta_{\star})=C(\nabla_{\star}\times\mathbf{H}_{\star})\times\mathbf{H}_{\star}+\frac{1}{R}{\rm
div}_{\star}\Psi_{\star},$
$\displaystyle\rho_{\star}\frac{d\theta_{\star}}{dt_{\star}}+(\gamma-1)\rho_{\star}\theta_{\star}{\rm
div}_{\star}{\mathbf{u}}_{\star}=\frac{(\gamma-1)}{R_{m}}CM^{2}|\nabla_{\star}\times\mathbf{H}_{\star}|^{2}$
$\displaystyle\qquad\quad+\frac{(\gamma-1)M^{2}}{R}\Psi_{\star}:\nabla_{\star}{\mathbf{u}}_{\star}+\frac{\gamma}{RP_{r}}\Delta_{\star}\theta_{\star},$
$\displaystyle\frac{\partial\mathbf{H}_{\star}}{\partial
t_{\star}}-\nabla_{\star}\times({\mathbf{u}}_{\star}\times\mathbf{H}_{\star})=-\frac{1}{R_{m}}\nabla_{\star}\times(\nabla_{\star}\times\mathbf{H}_{\star}),\quad{\rm
div}_{\star}\mathbf{H}_{\star}=0,$
where we have used the material derivative
$\displaystyle\frac{d}{dt_{\star}}=\frac{\partial}{\partial
t_{\star}}+{\mathbf{u}}_{\star}\cdot\nabla_{\star},$
and the new viscous stress tensor
$\Psi_{\star}=2\mathbb{D}_{\star}({\mathbf{u}}_{\star})+\frac{\lambda}{\mu}{\rm
div}_{\star}{\mathbf{u}}_{\star}\;\mathbf{I}_{3}$
with
$\mathbb{D}_{\star}({\mathbf{u}}_{\star})=(\nabla_{\star}{\mathbf{u}}_{\star}+\nabla_{\star}{\mathbf{u}}_{\star}^{\top})/{2}$.
In the above dimensionless system, there are following dimensionless
characteristic parameters:
$\displaystyle\mbox{Reynolds number:
}R=\frac{\rho_{0}u_{0}L_{0}}{\mu},\qquad\mbox{Mach number:
}M=\frac{u_{0}}{a_{0}},$ $\displaystyle\mbox{Prandtl number:
}P_{r}=\frac{c_{p}\mu}{\kappa},\qquad\mbox{magnetic Reynolds number:
}R_{m}=\frac{u_{0}L_{0}}{\nu},$ $\displaystyle\mbox{Cowling number:
}C=\frac{H_{0}^{2}/(4\pi\rho_{0})}{u_{0}^{2}},$
where $c_{p}$ is the specific heat at constant pressure and
$a_{0}=\sqrt{\mathfrak{R}\theta_{0}}$ is the sound speed. Note that
$\mathfrak{R}=c_{p}-c_{V}$ and $\gamma={c_{p}}/{c_{V}}$.
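Purely as an illustration of these definitions (the numerical values below are
arbitrary placeholders, not taken from the paper), the characteristic
parameters can be evaluated from the reference scales as follows:

```python
# Illustrative evaluation of the dimensionless parameters; all values are placeholders.
import math

# Reference scales (hypothetical, CGS-Gaussian units).
rho0, u0, L0, theta0, H0 = 1.0e-3, 1.0e4, 1.0e2, 3.0e2, 1.0e2
# Material coefficients (hypothetical).
mu, nu, kappa = 1.0e-2, 1.0e3, 1.0e3
R_gas, c_V = 2.9e6, 7.2e6           # gas constant and heat capacity per unit mass
c_p = R_gas + c_V                   # since R = c_p - c_V
gamma = c_p / c_V
a0 = math.sqrt(R_gas * theta0)      # reference sound speed used in the text

Re   = rho0 * u0 * L0 / mu          # Reynolds number
Mach = u0 / a0                      # Mach number
Pr   = c_p * mu / kappa             # Prandtl number
Rm   = u0 * L0 / nu                 # magnetic Reynolds number
Cow  = (H0**2 / (4 * math.pi * rho0)) / u0**2   # Cowling number

print(f"Re={Re:.3g}, M={Mach:.3g}, Pr={Pr:.3g}, Rm={Rm:.3g}, C={Cow:.3g}")
```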
Acknowledgements: The authors are very grateful to the referees for their
helpful suggestions. This work was partially done when Fucai Li was visiting
the Institute of Applied Physics and Computational Mathematics in Beijing. He
would like to thank the institute for its hospitality. Jiang was supported by NSFC
(Grant No. 40890154) and the National Basic Research Program under the Grant
2011CB309705. Ju was supported by NSFC (Grant No. 40890154 and 11171035). Li
was supported by NSFC (Grant No. 10971094), PAPD, NCET, and the Fundamental
Research Funds for the Central Universities.
## References
* [1] T. Alazard, Low Mach number limit of the full Navier-Stokes equations, Arch. Ration. Mech. Anal. 180 (2006), 1-73.
* [2] A. Blokhin, Yu. Trakhinin, Stability of strong discontinuities in fluids and MHD. In: Handbook of mathematical fluid dynamics. S. Friedlander, D. Serre, (eds.), vol. 1, 545-652, Elsevier, Amsterdam, 2002.
* [3] Y. Brenier, W.-A. Yong, Derivation of particle, string, and membrane motions from the Born-Infeld electromagnetism, J. Math. Phys., 46 (2005), 062305, 17pp.
* [4] G.Q. Chen and D.H. Wang, Global solutions of nonlinear magnetohydrodynamics with large initial data, J. Differential Equations, 182 (2002), no.1, 344-376.
* [5] G.Q. Chen and D.H. Wang, Existence and continuous dependence of large solutions for the magnetohydrodynamic equations, Z. Angew. Math. Phys. 54 (2003), 608-632.
* [6] B. Ducomet and E. Feireisl, The equations of magnetohydrodynamics: on the interaction between matter and radiation in the evolution of gaseous stars, Comm. Math. Phys. 266 (2006), 595-629.
* [7] G. Duvaut, J.L. Lions, Inéquations en thermoélasticité et magnéto-hydrodynamique, Arch. Rational Mech. Anal. 46 (1972) 241-279.
* [8] J. Fan, S. Jiang and G. Nakamura, Vanishing shear viscosity limit in the magnetohydrodynamic equations, Comm. Math. Phys. 270 (2007), 691-708.
* [9] H. Freistühler and P. Szmolyan, Existence and bifurcation of viscous profiles for all intermediate magnetohydrodynamic shock waves, SIAM J. Math. Anal., 26 (1995), 112-128.
* [10] T. Hagstrom, J. Lorenz, On the stability of approximate solutions of hyperbolic-parabolic systems and the all-time existence of smooth, slightly compressible flows, Indiana Univ. Math. J., 51 (2002), 1339-1387.
* [11] D. Hoff and E. Tsyganov, Uniqueness and continuous dependence of weak solutions in compressible magnetohydrodynamics, Z. Angew. Math. Phys. 56 (2005), 791-804.
* [12] W. R. Hu, Cosmic Magnetohydrodynamics, Scientific Press, 1987 (in Chinese).
* [13] X. P. Hu and D. H. Wang, Global solutions to the three-dimensional full compressible magnetohydrodynamic flows, Comm. Math. Phys. 283 (2008), 255-284.
* [14] X. P. Hu and D. H. Wang, Low Mach number limit of viscous compressible magnetohydrodynamic flows, SIAM J. Math. Anal., 41 (2009), 1272–1294.
* [15] A. Jeffrey and T. Taniuti, Non-Linear Wave Propagation. With applications to physics and magnetohydrodynamics. Academic Press, New York, 1964.
* [16] S. Jiang, Q. C. Ju and F. C. Li, Incompressible limit of the compressible Magnetohydrodynamic equations with periodic boundary conditions, Comm. Math. Phys., 297 (2010), 371-400.
* [17] S. Jiang, Q. C. Ju and F. C. Li, Incompressible limit of the compressible magnetohydrodynamic equations with vanishing viscosity coefficients, SIAM J. Math. Anal., 42 (2010), 2539-2553.
* [18] S. Jiang, Q. C. Ju and F. C. Li, Incompressible limit of the compressible non-isentropic magnetohydrodynamic equations with zero magnetic diffusivity, available at: arXiv:1111.2926v1 [math.AP].
* [19] Q. C. Ju, F. C. Li and H. L. Li, The quasineutral limit of compressible Navier-Stokes-Poisson system with heat conductivity and general initial data, J. Differential Equations, 247 (2009), 203-224.
* [20] S. Klainerman, A. Majda, Singular perturbations of quasilinear hyperbolic systems with large parameters and the incompressible limit of compressible fluids, Comm. Pure Appl. Math. 34 (1981) 481-524.
* [21] P. Kuku$\rm\check{c}$ka, Singular Limits of the Equations of Magnetohydrodynamics, J. Math. Fluid Mech., 13 (2011), 173-189.
* [22] A. G. Kulikovskiy and G. A. Lyubimov, Magnetohydrodynamics, Addison-Wesley, Reading, Massachusetts, 1965.
* [23] L. D. Laudau and E. M. Lifshitz, Electrodynamics of Continuous Media, 2nd ed., Pergamon, New York, 1984.
* [24] A. Majda, Compressible Fluid Flow and Systems of Conservation Laws in Several space variables, Springer-Verlag, New York, 1984
* [25] G. Métivier, S. Schochet, The incompressible limit of the non-isentropic Euler equations, Arch. Ration. Mech. Anal. 158 (2001), 61-90..
* [26] A. Novotny, M. Ruzicka and G. Thäter, Singular limit of the equations of magnetohydrodynamics in the presence of strong stratification, Math. Models Methods Appl. Sci., 21 (2011), 115-147.
* [27] S. Schochet, The compressible Euler equations in a bounded domain: existence of solutions and the incompressible limit, Comm. Math. Phys., 104 (1986), 49-75.
* [28] M. Sermange, R. Temam, Some mathematical questions related to the MHD equations, Comm. Pure Appl. Math. 36 (1983), 635-664.
* [29] A. I. Volpert, S. I. Hudjaev, On the Cauchy problem for composite systems of nonlinear differential equations. Mat. Sbornik 87 (1972), 504-528(Russian) [English transl.: Math. USSR, Sbornik 16(1973), 517-544.
* [30] W. A. Yong, Basic aspects of hyperbolic relaxation systems, in Advances in the Theory of Shock Waves, H. Freistühler and A. Szepessy, eds., Progr. Nonlinear Differential Equations Appl. 47, Birkhäuser Boston, 2001, 259-305.
* [31] J.W. Zhang, S. Jiang, F. Xie, Global weak solutions of an initial boundary value problem for screw pinches in plasma physics, Math. Models Methods Appl. Sci. 19 (2009), 833-875.
|
arxiv-papers
| 2011-05-04T03:12:37 |
2024-09-04T02:49:18.582165
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Song Jiang, Qiangchang Ju, Fucai Li",
"submitter": "Fucai Li",
"url": "https://arxiv.org/abs/1105.0729"
}
|
1105.0812
|
# Compression of flow can reveal overlapping modular organization in networks
Alcides Viamontes Esquivel a.viamontes.esquivel@physics.umu.se Integrated
Science Lab, Umeå University, Sweden. Martin Rosvall Integrated Science Lab,
Umeå University, Sweden.
(04/28/2011)
###### Abstract
To better understand the overlapping modular organization of large networks
with respect to flow, here we introduce the map equation for overlapping
modules. In this information-theoretic framework, we use the correspondence
between compression and regularity detection. The generalized map equation
measures how well we can compress a description of flow in the network when we
partition it into modules with possible overlaps. When we minimize the
generalized map equation over overlapping network partitions, we detect
modules that capture flow and determine which nodes at the boundaries between
modules should be classified in multiple modules and to what degree. With a
novel greedy search algorithm, we find that some networks, for example, the
neural network of _C. Elegans_ , are best described by modules dominated by
hard boundaries, but that others, for example, the sparse European road
network, have a highly overlapping modular organization.
###### pacs:
89.70.Cf,89.65.Ef
## I Introduction
To discern higher levels of organization in large social and biological
networks albert2002 ; pastor2004evolution ; boccaletti2006complex ;
ratti2010redrawing ; alba1978elite , researchers have used hard clustering
algorithms to aggregate highly interconnected nodes into non-overlapping
modules fortunato ; girvan ; newman-fast because they have assumed that each
node only plays a single modular role in a network. Recently, because
researchers have realized that nodes can play many roles in a network, they
have detected overlapping modules in networks using three approaches: a hard
clustering algorithm that is run multiple times wilkinson2004method ; gfeller
; a local clustering method that generates independent and intersecting
modules palla2007 ; gregory2008fast ; Lancichinetti2011 ;
whitney2011clustering ; and link clustering that assigns boundary nodes to
multiple modules evans2009line ; Ahn:2010p167 ; kim2011 . However, all these
approaches have limitations. The first and second approaches require several
steps or tunable parameters to infer overlapping modules and the third
approach necessarily overlaps all neighboring modules. To find simultaneously
the number of modules in a network, which nodes belong to which modules, and
which nodes should belong to multiple modules and to what degree, we use an
information-theoretic approach and the map equation rosvall2009map .
We are interested in the dynamics on networks and what role nodes on the
boundaries between modules play with respect to flow through the system. For
example, in Fig. 1(a), Keflavik airport in Reykjavik connects Europe and North
America in the global air traffic network. When we summarize the network in
modules with long flow persistence times, should Reykjavik belong to Europe,
North America, or both? In our framework, the answer depends on the traffic
flow. That is, Reykjavik’s role in the network depends on to what degree
passengers visit Iceland as tourists versus to what degree they use Keflavik
as a transit between North America and Europe. If we assign the boundary node
to both modules, for returning flow we can increase the time the flow stays in
the modules and decrease the transition rate between the modules, but for
transit flow, the transition rate does not decrease and a single module
assignment is preferable. By generalizing the information theoretic clustering
method called the map equation RosvallBergstrom08 to overlapping structures,
we can formalize this observation and use the level of compression of a
modular description of the flow through the system to resolve the fuzzy
boundaries between modules. With this approach, modules will overlap if they
correspond to separate flow systems with shared nodes.
In the next section, we review the map equation framework, introduce the map
equation for overlapping modules, and explain how it exploits returning flow
near module boundaries. The mathematical framework works for both generalized
and empirical flow, but here we illustrate the method by exploring the
overlapping modular structure of several real-world networks based on the
probability flow of a random walker. We also test the performance on synthetic
networks and compare the results with other clustering algorithms. Finally, in
the Materials and Methods section, we provide complete descriptions of the map
equation for overlapping modules and the novel search algorithm.
## II Results and discussion
### II.1 The map equation
The mathematics of the map equation are designed to take advantage of
regularities in the flow that connects a system’s components and generates
their interdependence. The flow can be, for example, passengers traveling
between airports, money transferred between banks, gossip exchanged among
friends, people surfing the web, or, what we use here as a proxy for real
flow, a random walker on a network guided by the (weighted directed) links of
the network. Specifically, the map equation measures how well different
partitions of a network can be used to compress descriptions of flow on the
network and utilizes the rationale of the minimum description length
principle. Quoting Peter Grünwald grunwald2007minimum : “ _…[E]very regularity
in the data can be used to compress the data, i.e., to describe it using fewer
symbols than the number of symbols needed to describe the data literally._ ”
That is, the map equation gauges how successful different network partitions
are at finding regularities in the flow on the network.
We employ two regularities for compressing flow on a network. First, we use
short code words for nodes visited often and, by necessity, long code words
for nodes visited rarely, such that the average code word length will be as
short as possible. Second, we use a two-level code for module movements and
within-module movements, such that we can reuse short node code words between
modules with long persistence times.
Because we are not interested in the actual code words, but only in the
theoretical limit of compression, we use Shannon’s source coding theorem
shannon , which establishes the Shannon entropy $H(\mathbf{p})$ as the lower
limit of the average number of bits per code word necessary to encode a
message, given the probability distribution $\mathbf{p}$ of the code words,
$H(\mathbf{p})=-\sum_{i}p_{i}\log_{2}p_{i}.$
For example, if there is a message “ABABBAAB…” for which the symbols “A” and
“B” occur randomly with the same frequency, that is, “A” and “B” are
independent and identically distributed, the source coding theorem states that
no binary language can describe the message with less than
$-\frac{1}{2}\log_{2}\frac{1}{2}-\frac{1}{2}\log_{2}\frac{1}{2}=1$ bit per
symbol. However, if “A” occurs twice as often as “B”, the regularity can be
exploited and the message compressed to
$-\frac{1}{3}\log_{2}\frac{1}{3}-\frac{2}{3}\log_{2}\frac{2}{3}\approx 0.92$
bit per symbol. To measure the per-step minimum average description length of
flow on a network, we collect the mapping from symbols “A” and “B” or, in our
case, node names, to code words in a codebook, and calculate the Shannon
entropy based on the node-visit frequencies.
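As a small, hedged illustration (not part of the original text), the source-coding bound for the two-symbol example can be checked numerically; the function name below is ours:

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# "A" and "B" equally frequent and i.i.d.: 1 bit per symbol.
print(shannon_entropy([0.5, 0.5]))   # 1.0
# "A" twice as frequent as "B": about 0.92 bit per symbol.
print(shannon_entropy([1/3, 2/3]))   # ~0.918
```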
Figure 1: The map equation for overlapping modules can exploit regularities in
the boundary flow between modules. The three lines in (b) show the description
length as a function of the proportion of returning passengers for three
different partitions: North America, Europe, and Reykjavik in one big module
(green); North America and Europe in two non-overlapping modules with
Reykjavik in either of the modules (black); and North America and Europe in
two overlapping modules with Reykjavik in both modules (blue).
But flow, or a random walker, does not visit nodes independently. For example, if
a network has a modular structure, once a random walker enters a tightly
interconnected region in the network, in the next step she will most likely
visit a node in the same tightly interconnected region, and she tends to stay
in the region for a long time. To take advantage of this regularity and
further compress the description of the walk, we use multiple module
codebooks, each with an extra exit code that is used when the random walker
exits the module, and an index codebook that is used after the exit code to
specify which module codebook is to be used next. Now we can make use of
higher-order structure in a network. For a modular network, we can describe
flow on the network without ambiguities in fewer bits, using a two-level code,
than we could do with only one codebook, because we only use the index
codebook for movements between modules and can reuse short code words in the
smaller module codebooks.
Given a network partition $\mathsf{M}$, it is now straightforward to calculate
the per-step minimum description length $L(\textsf{M})$ of flow on the
network. We use the Shannon entropy to calculate the average description
length of each codebook and weight the average lengths by their rates of use.
For a modular partition M with $m$ modules, the map equation takes the form:
$L(\mathsf{\mathsf{M}})=q_{\curvearrowright}H(\mathcal{Q})+\sum_{i=1}^{m}p_{\circlearrowright}^{i}H\left(\mathcal{P}^{i}\right).$
(1)
For between-module movements, we use $q_{\curvearrowright}$ for the rate of
use of the index codebook with module code words used according to the
probability distribution $\mathcal{Q}$. For within-module movements, we use
$p_{\circlearrowright}^{i}$ for the rate of use of the $i$-th codebook with
node and exit code words used according to the probability distribution
$\mathcal{P}^{i}$.
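For concreteness, a minimal sketch of Eq. (1); the data layout (plain lists of codebook rates and normalized code-word distributions) is an assumption made here for illustration, not the paper's implementation:

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a normalized distribution."""
    return -sum(x * math.log2(x) for x in dist if x > 0)

def map_equation(q_index_rate, Q, module_rates, module_dists):
    """Per-step description length L(M) of Eq. (1).

    q_index_rate : rate of use of the index codebook
    Q            : normalized distribution of module code words
    module_rates : rate of use p^i of each module codebook
    module_dists : normalized node/exit code-word distribution P^i per module
    """
    L = q_index_rate * entropy(Q)
    for rate, dist in zip(module_rates, module_dists):
        L += rate * entropy(dist)
    return L
```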
By minimizing the map equation over network partitions, we can resolve how
many modules we should use and which nodes should be in which modules to best
capture the dynamics on the network. See http://www.mapequation.org for a
dynamic visualization of the mechanics of the map equation. Because the map
equation only depends on the rates of node visits and module transitions, it
is universal to all flow for which the rates of node visits and module
transitions can be measured or calculated. The code structure of the map
equation can also be generalized to make use of higher-order structures. In
ref. RosvallMultilevel2011 , we show how a multilevel code structure can
reveal hierarchical organization in networks, and in the next section, we show
that we can capitalize on overlapping structures by releasing the constraint
that a node can only belong to one module codebook.
### II.2 The map equation for overlapping modules
The code structure of the map equation framework is flexible and can be
modified to uncover different structures of a network as long as flow on the
network can be unambiguously coded and decoded. As we will show here, by
releasing the constraint that a node can only belong to one module codebook
and allowing nodes to be _information free ports_ , we can reveal overlapping
modular organization in networks. To see how, let us again study the air
traffic between North America and Europe in Fig. 1(a). Suppose that cities in
North America and Europe belong to two different modules, for simplicity
identical in size and composition, and we are to assign membership to
Reykjavik between North America and Europe. For a hard partition, we would
assign Reykjavik to the module that most passengers travel to and from, and if
the traffic flow were the same, we could choose either module. But if the flow
to and from Reykjavik were dominated by American and European tourists
visiting Iceland for sightseeing before returning to their home continent,
both Americans and Europeans would consider Iceland as part of their
territory. We can accommodate this view if we allow nodes to belong to
multiple module codebooks; depending on the origin of the flow, we use
different code words for the same node.
With the map equation for overlapping modules, we can measure the description
length of flow on the network with nodes assigned to multiple modules. By
minimizing the map equation for overlapping modules, we can not only resolve
into how many modules a network is organized and which nodes belong to which
modules, but also which nodes belong to multiple modules and to what degree.
The pattern of flow, returning tourists to Iceland or in-transit businessmen
on intercontinental trips, determines whether we should assign Reykjavik to
North America, Europe, or both. Or, conversely, when we decide whether
Reykjavik should be assigned to North America, Europe, or both, we reveal the
pattern of boundary flow between modules, as Fig. 1 illustrates. In this
hypothetical example, assigning cities to two non-overlapping modules is
always better than assigning all cities to one module. But for a sufficiently
high proportion of returning flow, the overlapping modular solution with
Reykjavik in both modules as a free port provides the most efficient partition
to describe flow on the network.
The map equation for overlapping modules can take advantage of regularities in
the boundary flow between modules. To measure the length of an overlapping
modular description of flow on a network, we must decide how the flow switches
modules to calculate the node-visit rates from different modules of multiply
assigned nodes. In the Materials and Methods section, we provide a detailed
description of how a random walker moves in an overlapping modular structure,
but the rule is simple: when a random walker arrives at a node assigned to
multiple modules, the walker remains in the same module if possible.
Otherwise, the random walker switches, with equal probability, to one of the
modules to which the node is assigned.
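A sketch of this stay-if-possible rule in code (names are ours, not from the paper):

```python
import random

def next_module(current_module, target_modules):
    """Module the walker occupies after stepping onto a node assigned to
    target_modules: stay in the current module if possible, otherwise
    switch with equal probability to one of the node's modules."""
    if current_module in target_modules:
        return current_module
    return random.choice(sorted(target_modules))
```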
Figure 2 illustrates the code structure of a hard and a fuzzy partition of an
example network with the dynamics derived from a random walker. For this
network, the figure shows that an overlapping modular description allows us to
describe the path of a random walker with fewer bits than we could do with a
hard network partition. With overlapping modules, we halve the use of the
index codebook, since the rate of module switching halves. Because we
consequently use the exit codes in the now identical module codebooks less
often, the description of movements within modules also becomes shorter, even
if the average code word length increases. Turning the reasoning around again,
given the overlapping modular organization, we have learned that returning
flow characterizes the boundary flow between the modules.
With the mathematical foundation in place, we need an algorithm that can
discover the best partition of the network. In particular, which nodes should
belong to multiple modules and to what degree? For this optimization problem,
we have developed a greedy search algorithm that we call Fuzzy infomap and
detail in the Materials and Methods section. Here we give a short summary of
Fuzzy infomap designed to provide good approximate solutions for large
networks. We start from Infomap’s hard clustering of the network and then
execute the two-step algorithm. In the first step, we measure the change in
the description length when we assign boundary nodes, one by one, to multiple
modules. This calculation is fast, but aggregating the changes in the second
step is expensive and often requires recalculating all node-visit rates.
Therefore, we rank the individual multiple module assignments and, in a greedy
fashion, aggregate the individual best ones to minimize the description
length.
Figure 2: The code structure of the map equation (a) without and (b) with
overlapping modules. The color of a node in the networks and of the
corresponding block in the code structures represents the module assignment,
the width of a block represents the node-visit rate, and the height of the
blocks represents the average codelength of code words in the codebooks.
### II.3 Overlapping modular organization in real-world networks
To illustrate our flow-based approach, we have clustered a number of real-
world networks. Figure 3 shows researchers organized in overlapping research
groups in network science. The underlying co-authorship network is derived
from the reference lists in the three review articles albert2002 ; newmanSIAM
; fortunato . In this weighted undirected network, we connect two researchers
with a weighted link if they have co-authored one or more research papers. For
every co-authored paper, we add to the total weight of the link a weight
inversely proportional to the number of authors on the paper. Our premise is
that two persons who have co-authored a paper have exchanged information,
information they can subsequently share with other researchers and induce a
flow of information on the network. The map equation can capitalize on
regularities in this flow, and Fig. 3 highlights one area of the co-authorship
network with several overlapping research groups. For example, assigning Jure
Leskovec to four research groups contributes to maximal compression of a
description of a random walker on the network. Based on this co-authorship
network, Leskovec is strongly associated with Dasgupta, Mahoney, Lang, and
Backstrom, but also with groups at Cornell University, Carnegie Mellon
University, Stanford University, and Yahoo Research. The size of the modules
and the fraction of returning flow at the boundary nodes determine whether
hard or fuzzy boundaries between research groups lead to optimal compression
of flow on the network.
Figure 3: Network scientists organized in overlapping research groups. The
colors of the nodes represent overlapping research groups identified by the
map equation and the pie charts represent the fractional association with the
different research groups.
Table 1 shows the level of compression and overlap of a number of real-world
networks. The networks are sorted from highest to lowest compression gain when
allowing for overlaps. We find the highest compression gain in the European
roads network, which is a sparse network with intersections as nodes and roads
as links. Many intersections at boundaries between modules are classified in
multiple modules, because intersections only connect a few roads and the
return rate of the random flow is relatively high.
By contrast, compressing random flow in overlapping modules only gives a
marginal gain over hard clustering in the highly interconnected and directed
network of _C. Elegans_ , where less than three percent of the neurons are
classified in multiple modules. Even if there is evidence that the neural
network is modular, we most likely underestimate the degree of overlap with a
random walk model of flow.
In the middle of the table, the world air routes network shows a relatively
low compression gain, given the many cities classified in multiple modules.
For this network, the compression gain would be much higher if, instead of
random flow on the links, we were to describe real passenger flow with a
higher return rate.
Table 1: The overlapping organization and the level of compression of eight real-world networks. For each network with $n$ nodes and $l$ links, we report the hard partition compression $C$ with Infomap, the additional compression with Fuzzy infomap, and the fraction of nodes that are assigned to multiple modules.

Network | $n$ | $l$ | $C$ | $\triangle C_{\textrm{fuzzy}}$ | $N_{\textrm{fuzzy}}/N$
---|---|---|---|---|---
European roads network eroadsnet | 1018 | 1274 | $46.2\%$ | $10.4\%$ | $35.5\%$
Western states power grid watts1998collective | 4941 | 6994 | $53.4\%$ | $8.84\%$ | $27.5\%$
Human diseases network goh2007human | 516 | 1188 | $46.4\%$ | $2.87\%$ | $15.3\%$
Coauthorship network coauthor | 552 | 1317 | $48.9\%$ | $2.47\%$ | $14.6\%$
World air routes guimera2005worldwide | 3618 | 14142 | $31.1\%$ | $1.24\%$ | $13.9\%$
U.S. political blogs adamic2005political | 1222 | 16714 | $4.13\%$ | $0.35\%$ | $5.81\%$
Swedish political blogs blogs | 855 | 10315 | $0.50\%$ | $0.18\%$ | $4.79\%$
Neural net. of _C. Elegans_ watts1998collective | 297 | 2345 | $1.16\%$ | $0.13\%$ | $2.69\%$
### II.4 Comparing the map equation for overlapping modules with other
methods
Depending on the system being studied and the research question at hand,
researchers develop clustering algorithms for overlapping modules based on
different principles. For example, while some researchers take a statistical
approach and see modules as non-random features of a network, other researchers
use a local definition and identify independent and intersecting modules, or
take a link perspective and assign all boundary nodes to multiple modules.
Consequently, the final partitions are quite different, and it is interesting
to contrast our information theoretic and flow-based approach, implemented in
Fuzzy infomap, with these approaches, here represented by OSLOM
Lancichinetti2011 , Clique Percolation palla2005uncovering , and Link
Communities Ahn:2010p167 .
OSLOM defines a module as the set of nodes that maximizes a local statistical
significance metric. In other words, OSLOM identifies possibly overlapping
modules that are unlikely to be found in a random network. Clique percolation
identifies clusters by sliding fully connected k-cliques to adjacent k-cliques
that share k-1 vertices with each other. A module is defined as the maximal
set of nodes that can be visited in chained iterations of this operation, and
the overlaps consist of the shared nodes between modules that do not support
the slide operation across the boundary. Finally, the Link Communities
approach creates highly overlapping modules by aggregating nodes that are part
of a link community. The link communities themselves are built using a
similarity measure between links, the primal actors of the method.
To compare the methods at different degrees of overlap, we used a set of
synthetic networks presented in ref. lancichinetti2009benchmarks . In Table 2,
we included six statistics for the four methods applied to synthetic networks
with 1000 nodes and three different degrees of overlap (see table caption for
details). The first group of partition numbers describe the number of detected
modules, the number of nodes that are assigned to multiple modules, and the
total number of assignments. To interpret the results from a flow perspective,
we included the index, module, and total codelength for describing a random
walker on the network given the network partition.
Method | modules | overlaps | assignments | index (bits) | module (bits) | total (bits)
---|---|---|---|---|---|---
_Low overlap_ | | | | | |
Fuzzy Infomap | 44 | 105 | 1228 | 1.7 | 5.9 | 7.6
OSLOM | 44 | 89 | 1089 | 1.8 | 5.8 | 7.6
Clique Percolation | 43 | 104 | 1108 | 1.7 | 6.0 | 7.7
Link Communities | 3415 | 1000 | 9215 | 8.1 | 3.5 | 12
_Medium overlap_ | | | | | |
Fuzzy Infomap | 53 | 303 | 1830 | 2.2 | 6.0 | 8.2
OSLOM | 54 | 276 | 1277 | 2.3 | 5.9 | 8.2
Clique Percolation | 55 | 268 | 1283 | 2.3 | 6.1 | 8.3
Link Communities | 4457 | 1000 | 11628 | 8.7 | 3.5 | 14
_High overlap_ | | | | | |
Fuzzy Infomap | 56 | 398 | 1676 | 2.6 | 6.1 | 8.8
OSLOM | 61 | 462 | 1465 | 2.8 | 6.0 | 8.8
Clique Percolation | 73 | 388 | 1429 | 2.9 | 6.1 | 9.0
Link Communities | 4298 | 1000 | 11063 | 10 | 3.7 | 11
Table 2: Comparing four different overlapping clustering methods. We run
Fuzzy infomap, OSLOM, and Link communities with their default settings and use
clique size four for the Clique percolation method. All values are averaged
over ten instantiations of random undirected and unweighted networks with 1000
nodes and predefined community structure, generated with three different
degrees of overlap lancichinetti2009benchmarks : Low overlap corresponds to
100, medium overlap corresponds to 300, and high overlap corresponds to 500
nodes in multiple modules. All other parameters were held constant: The number
of modules that multiply-assigned nodes are assigned to was set to two; each
cluster consisted of on average 20 nodes with a minimum of $10$ and a maximum
of $50$ nodes; and the power law exponent was set to $-2$ for the node
degree distribution and $-1$ for the module size distribution. Finally, the
mixing parameter that controls the proportion of links within and between
modules was set to $0.1$.
Table 2 shows that Fuzzy infomap and OSLOM generate similar partitions for low
and medium degrees of overlap, but the trend when going to higher degrees of
overlap indicates fundamental differences. By assigning boundary nodes to more
modules than OSLOM prefers, Fuzzy infomap identifies modules with longer
persistence times. The shorter index codelength resulting from the fewer
transitions compensates for the longer module codelength from the larger
modules. As a result, with the overlapping partitions generated by Fuzzy
infomap, random flow can be described with fewer bits. But the difference is
small and shows up only in the second decimal place when up to half of all the
nodes are assigned to multiple modules.
Clique percolation generates partitions with more modules but fewer
assignments than both Fuzzy infomap and OSLOM. From a flow perspective,
smaller modules with less overlap give more module switches that cannot be
compensated for by a shorter module codelength. The strength of the Clique
percolation method is the simple definition that allows for easy
interpretation of the results.
Because Link Communities is designed with links as the primal actors to identify pervasive overlap in networks, its results are quite different. For example,
independent of the degree of overlap of the synthetic networks, each node
belongs to on average ten modules. From the perspective of a random flow
model, the persistence time is short in the many small modules, and the
information necessary to encode the many transitions is much larger than for
the other methods. This result is expected, as Link Communities is tailored to
identify pervasive overlap in social networks in which people belong to
several modules and information flow is far from random.
Often the performance is an important aspect to consider when choosing a
clustering method. Therefore, we measured the time it took to cluster the
synthetic networks with the different clustering algorithms. We stress that we
used presumably non-optimized research code made available online by its
developers and that the performance, of course, depends on the network. Per
1000 node synthetic network used in our comparison, Fuzzy infomap used on
average 1.7 seconds for a single iteration of module growth and 240 seconds
for multiple growths, OSLOM used 330 seconds, the Clique Percolation method
1.5 seconds, and link communities were identified in 2.4 seconds.
We conclude this comparison by stressing that the research question at hand
must be considered when choosing a clustering method. Fuzzy infomap provides
fast results that, for a random flow model, are similar to results generated
by OSLOM and the Clique percolation method, at least for moderate degrees of
overlap. On the other hand, for identifying pervasive overlap, researchers
should consider Link Communities or a generalized flow model with longer
persistence times in smaller, highly overlapping modules.
## III Materials and methods
Here we detail the map equation for overlapping modules and describe our
greedy search algorithm.
### III.1 The map equation for overlapping modules
Below we explain in detail how we derive the transition rates of a random
walker between overlapping modules. We also derive the conditional
probabilities for nodes assigned to multiple modules. We then express the map
equation (Eq. 1) in terms of these rates, which allows for fast updates in the
search algorithm.
#### III.1.1 Movements between nodes assigned to multiple modules
To calculate the map equation for overlapping modules, we need the visit rates
$p_{\alpha_{i}}$ for all modules $i\in M_{\alpha}$ a node $\alpha$ is assigned
to and the inflow $q_{i\curvearrowleft}$ and the outflow
$q_{i\curvearrowright}$ of all modules. We derive these quantities from the
weighted and directed links $W_{\alpha\beta}$, which we normalize such that
$w_{\alpha\beta}$ corresponds to the probability of the random walker moving to
node $\beta$ once at node $\alpha$:
$w_{\alpha\beta}=\begin{cases}0,&\text{if there is no link from }\alpha\text{ to }\beta,\\ \frac{W_{\alpha\beta}}{\sum_{\beta}W_{\alpha\beta}},&\text{otherwise}.\end{cases}$
(2)
When necessary, we use random teleportation to guarantee a unique steady state
distribution google . That is, for directed networks, at rate $\tau$, or
whenever the random walker arrives at a node with no out-links, the random
walker teleports to a random node in the network. To simplify the notation, we
set $w_{\alpha\beta}=1/n$ for all nodes $\alpha$ without out-links to all $n$
nodes $\beta$ in the network.
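A minimal sketch, assuming the weighted links are held in a dense matrix, of the normalization in Eq. (2) together with the dangling-node convention just described; the function and variable names are illustrative:

```python
import numpy as np

def transition_matrix(W):
    """Row-normalize a weighted adjacency matrix W so that w[a, b] is the
    probability of stepping from node a to node b; nodes without out-links
    are given uniform weights 1/n, following the convention above."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    out_strength = W.sum(axis=1, keepdims=True)
    safe = np.where(out_strength > 0, out_strength, 1.0)
    return np.where(out_strength > 0, W / safe, 1.0 / n)
```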
The movements between multiply assigned nodes and overlapping modules are
straightforward. Whenever the random walker arrives at a node that is assigned
to multiple modules, she remains in the same module if possible or switches to
a random module if not possible. For example, assuming that the random walker
is in module $i$, she remains in module $i$ when moving to node $\beta$ if
node $\beta$ is assigned to module $i$, $i\in M_{\beta}$. But if node $\beta$
is not assigned to module $i$, $i\notin M_{\beta}$, she switches with equal
probability $1/\left|M_{\beta}\right|$ to any of the modules to which node
$\beta$ is assigned (see Fig. 4). If we define the transition function
$\delta_{\alpha_{i}\beta_{j}}=\begin{cases}1,&\text{if }i=j,\\ \frac{1}{\left|M_{\beta}\right|},&\text{if }i\neq j\text{ and }i\notin M_{\beta},\\ 0,&\text{if }i\neq j\text{ and }i\in M_{\beta},\end{cases}$ (3)
we can now define the visit rates by the equation system
$p_{\alpha_{i}}=\underset{\beta}{\sum}\underset{j\in
M_{\beta}}{\sum}p_{\beta_{j}}\delta_{\alpha_{i}\beta_{j}}\left[\left(1-\tau\right)w_{\beta\alpha}+\tau\frac{1}{n}\right].$
(4)
We solve for the unknown visit rates with the fast iterative algorithm
BiCGStab BicGSTAB . Since every node in module $i$ guides a fraction
$\left(1-\tau\right)\sum_{\beta\notin i}w_{\alpha\beta}$ and teleports a
fraction $\tau\frac{n-n_{i}}{n}$ of its conditional probability
$p_{\alpha_{i}}$ to nodes outside of module $i$, the exit probability of
module $i$ is
$q_{i\curvearrowright}=\underset{\alpha\in
i}{\sum}p_{\alpha_{i}}\left[\left(1-\tau\right)\underset{\beta\notin
i}{\sum}w_{\alpha\beta}+\tau\frac{n-n_{i}}{n}\right],$ (5)
where $n_{i}$ is the number of nodes assigned to module $i$.
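The paper solves this linear system with BiCGStab; purely as a hedged illustration, the sketch below obtains the same fixed point of Eqs. (4)–(5) by plain power iteration. All names are ours (`w` is the row-normalized transition matrix, `modules[a]` the set of modules node `a` is assigned to), and the stay-if-possible switching rule is coded directly rather than through the transition function:

```python
import numpy as np

def conditional_visit_rates(w, modules, tau=0.15, n_iter=2000, tol=1e-12):
    """Conditional visit rates p[(node, module)] of Eq. (4), by power iteration."""
    n = w.shape[0]
    states = [(a, i) for a in range(n) for i in sorted(modules[a])]
    index = {s: k for k, s in enumerate(states)}
    p = np.full(len(states), 1.0 / len(states))
    for _ in range(n_iter):
        p_new = np.zeros_like(p)
        for (b, j), k in index.items():
            for a in range(n):
                step = (1.0 - tau) * w[b, a] + tau / n   # move b -> a
                if step == 0.0:
                    continue
                if j in modules[a]:                      # stay in module j
                    p_new[index[(a, j)]] += p[k] * step
                else:                                    # switch uniformly
                    share = p[k] * step / len(modules[a])
                    for i in modules[a]:
                        p_new[index[(a, i)]] += share
        if np.abs(p_new - p).sum() < tol:
            p = p_new
            break
        p = p_new
    return index, p

def exit_rates(w, modules, index, p, tau=0.15):
    """Exit probability q_i of each module, Eq. (5)."""
    n = w.shape[0]
    mods = sorted({i for m in modules for i in m})
    n_i = {i: sum(1 for m in modules if i in m) for i in mods}
    q = {i: 0.0 for i in mods}
    for (a, i), k in index.items():
        outside = sum(w[a, b] for b in range(n) if i not in modules[b])
        q[i] += p[k] * ((1.0 - tau) * outside + tau * (n - n_i[i]) / n)
    return q
```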
Figure 4: Movements between nodes possibly assigned to multiple modules. (a)
and (b) Assuming that the random walker is in module $i$, she remains in
module $i$ when moving to node $\beta$ if node $\beta$ is assigned to module
$i$. (b) But if node $\beta$ is not assigned to module $i$, she switches with
equal probability to any of the modules node $\beta$ is assigned to.
#### III.1.2 The expanded map equation for overlapping modules
To make explicit which terms must be updated in a given step of a search
algorithm, here we expand the entropies of the map equation (Eq. 1) in terms
of the visit and transition rates $p_{\alpha_{i}}$, $q_{i\curvearrowleft}$,
and $q_{i\curvearrowright}$. When teleportation is included in the description
length as above, the outflow of modules balances the inflow, but here we
derive for the general case when $q_{i\curvearrowleft}\neq
q_{i\curvearrowright}$.
We use the per-step probabilities of entering the modules
$q_{i\curvearrowleft}$ to calculate the average code word length of the index
code words weighted by their rates of use, which is given by the entropy for
the index codebook
$H\left(\mathcal{Q}\right)=-\sum_{i=1}^{m}\frac{q_{i\curvearrowleft}}{\sum_{j=1}^{m}q_{j\curvearrowleft}}\log_{2}\frac{q_{i\curvearrowleft}}{\sum_{j=1}^{m}q_{j\curvearrowleft}},$
(6)
where the sum runs over the $m$ modules of the modular partition. The
contribution to the average description length from the index codebook is the
entropy $H\left(\mathcal{Q}\right)$ weighted by its rate of use
$q_{\curvearrowleft}$,
$q_{\curvearrowleft}=\sum_{j=1}^{m}q_{j\curvearrowleft}.$ (7)
Substituting Eq. 7 into Eq. 6, we can express the contribution to the per-step
average description length from the index codebook as
$q_{\curvearrowleft}H\left(\mathcal{Q}\right)=-q_{\curvearrowleft}\sum_{i=1}^{m}\frac{q_{i\curvearrowleft}}{q_{\curvearrowleft}}\log_{2}\frac{q_{i\curvearrowleft}}{q_{\curvearrowleft}}=-\sum_{i=1}^{m}q_{i\curvearrowleft}\left[\log_{2}q_{i\curvearrowleft}-\log_{2}q_{\curvearrowleft}\right]=q_{\curvearrowleft}\log_{2}q_{\curvearrowleft}-\sum_{i=1}^{m}q_{i\curvearrowleft}\log_{2}q_{i\curvearrowleft}.$ (8)
We use the per-step probabilities of exiting the modules
$q_{i\curvearrowright}$ and the visit rates $p_{\alpha_{i}}$ to calculate the
entropy of each module codebook:
$H\left(\mathcal{P}^{i}\right)=-\frac{q_{i\curvearrowright}}{q_{i\curvearrowright}+\sum_{\beta\in i}p_{\beta_{i}}}\log_{2}\frac{q_{i\curvearrowright}}{q_{i\curvearrowright}+\sum_{\beta\in i}p_{\beta_{i}}}-\sum_{\alpha\in i}\frac{p_{\alpha_{i}}}{q_{i\curvearrowright}+\sum_{\beta\in i}p_{\beta_{i}}}\log_{2}\frac{p_{\alpha_{i}}}{q_{i\curvearrowright}+\sum_{\beta\in i}p_{\beta_{i}}}=-\frac{1}{p_{\circlearrowright}^{i}}\left[q_{i\curvearrowright}\log_{2}q_{i\curvearrowright}+\sum_{\alpha\in i}p_{\alpha_{i}}\log_{2}p_{\alpha_{i}}-p_{\circlearrowright}^{i}\log_{2}p_{\circlearrowright}^{i}\right],$ (9)
with $p_{\circlearrowright}^{i}$ for the rate of use of the $i$-th module
codebook,
$p_{\circlearrowright}^{i}=q_{i\curvearrowright}+\sum_{\beta\in
i}p_{\beta_{i}}.$ (10)
Finally, summing over all module codebooks, the description length given by
the overlapping module partition $\mathsf{M}$ is
$L\left(\mathsf{M}\right)=q_{\curvearrowleft}\log_{2}q_{\curvearrowleft}-\sum_{i=1}^{m}q_{i\curvearrowleft}\log_{2}q_{i\curvearrowleft}-\sum_{i=1}^{m}q_{i\curvearrowright}\log_{2}q_{i\curvearrowright}-\sum_{i=1}^{m}\sum_{\alpha\in i}p_{\alpha_{i}}\log_{2}p_{\alpha_{i}}+\sum_{i=1}^{m}p_{\circlearrowright}^{i}\log_{2}p_{\circlearrowright}^{i}.$
The only visible difference between this expression and the map equation for
non-overlapping modules is the sum over conditional probabilities for nodes
assigned to multiple modules, which is no longer independent of the
overlapping module partition $\mathsf{M}$. But since the transition rates
depend on the conditional probabilities (see Eq. 5), all terms depend on the
overlapping configuration.
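A hedged sketch of the expanded expression above, written directly in terms of the entry, exit, and conditional visit rates; the argument layout is our own choice:

```python
import math

def plogp(x):
    """x * log2(x), with the 0 log 0 = 0 convention."""
    return x * math.log2(x) if x > 0 else 0.0

def expanded_map_equation(q_enter, q_exit, p_visit):
    """Expanded description length L(M).

    q_enter : per-module entry rates q_i
    q_exit  : per-module exit rates q_i
    p_visit : per-module lists of conditional node-visit rates p_{alpha_i}
    """
    L = plogp(sum(q_enter))
    L -= sum(plogp(q) for q in q_enter)
    L -= sum(plogp(q) for q in q_exit)
    for qi, rates in zip(q_exit, p_visit):
        L -= sum(plogp(p) for p in rates)
        L += plogp(qi + sum(rates))   # module codebook rate p^i
    return L
```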
### III.2 The greedy search algorithm for overlapping modules
Figure 5: General scheme of the two-step greedy search algorithm for
overlapping modules. (a) Pseudocode with first step (c) and second step (d) of
the algorithm that can be iterated as shown in (b). Starting from a hard
partition generated by Infomap rosvall2009map , each iteration successively
increases the overlap between modules to minimize the map equation for
overlapping modules. In the first step (c), one by one, each boundary node is
assigned to adjacent modules. In the second step (d), we first sort the local
changes from best to worst and then iteratively apply quadratic fitting to
find the number of best local changes that minimizes the map equation.
To detect the overlapping modular organization of a network, ultimately we
want to find the global minimum of the map equation over all possible
overlapping modular configurations of the network, but only with an exhaustive
enumeration of all possible solutions can we guarantee the optimal solution.
This procedure is, of course, impractical for all but the smallest networks.
However, we can construct an algorithm that finds a good approximation. Figure
5 explains the concept of our algorithm, which builds on an iterative two-step
procedure.
In the first step, we individually assess which nodes are most likely to be
assigned to multiple modules. Starting from a hard partition generated by
Infomap rosvall2009map in the first iteration, we go through all nodes at the
boundary between modules and assign each boundary node to adjacent modules.
That is, one node and one adjacent module at a time, we assign the node to the
extra module, measure the map equation change, and then return to the previous
configuration (see Fig. 5(c)). Because the multiply assigned nodes only
connect to singly assigned nodes in the first iteration, the conditional
probabilities and the change in the map equation can be updated quickly
without a full recalculation of the visit rates. This first step produces
3-tuples of local changes of the form _(node, extra-module, map-equation-
change)_.
In the second step, we combine a fraction of all local changes generated in
the first step into a global solution. Every time two or more multiply
assigned nodes are connected, we need to solve a linear system to calculate
the conditional probabilities. When a majority of nodes are assigned to
multiple modules, this can take as long as calculating the steady-state
distribution of random walkers in the first place. For good performance, we
therefore try to test as few combinations of local changes as possible. After
testing several different approaches, we have opted for a heuristic method in
which we first sort the tuples from best to worst in terms of map equation
change and then determine the number of best tuples that minimizes the map
equation. The method works well, because good local changes often are good
globally.
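As a naive illustration of this second step (assumed names; not the paper's implementation), one could scan the sorted tuples exhaustively; the quadratic-fit search sketched after the next paragraph is what makes this affordable by evaluating only a few prefix lengths:

```python
def best_prefix(tuples, evaluate):
    """Sort local changes (node, extra_module, delta_L) from best to worst
    and return the prefix whose combined application gives the shortest
    description length. evaluate(k) must return L(M) after applying the k
    best changes, which in general requires re-solving the visit rates."""
    ordered = sorted(tuples, key=lambda t: t[2])
    best_k, best_L = 0, evaluate(0)
    for k in range(1, len(ordered) + 1):
        L = evaluate(k)
        if L < best_L:
            best_k, best_L = k, L
    return ordered[:best_k], best_L
```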
As a side remark, the map equation for link community kim2011 allows for
straightforward and fast calculation of all conditional probabilities and
transition rates, since each link belongs to only one module. But this
constraint enforces module switches between boundary nodes that belong to the
same module, because all boundary nodes belong to multiple modules in the link
community approach.
Figure 5(d) shows the value of the map equation as a function of the number of
aggregated tuples ordered from best to worst. Combinations of tuples that
individually generate longer description lengths can generate a shorter
description length if they are applied together. This fact, together with the
greedy order in which we aggregate the tuples, generates noise in the curve.
To quickly approach the global minimum, we must overcome bad local minima
caused by the noise and evaluate as few aggregations as possible. Therefore,
we iteratively fit a quadratic polynomial to the curve by selecting new points
at the minimum of the polynomial. A quadratic polynomial only requires three
points to be fully specified, but in order to deal with the noise, we use a
moving local least squares fit. In practice, we evaluate around ten points for
each quadratic fit and repeat this procedure a few times to obtain a good
solution.
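A rough sketch, under our own assumptions about the sampling strategy, of such an iterative quadratic fit on a noisy curve; here `evaluate(k)` stands for the (expensive) description length after applying the k best tuples:

```python
import numpy as np

def quadratic_min_search(evaluate, n_total, n_rounds=5, n_points=10, seed=0):
    """Approximate the minimizer of a noisy curve evaluate(k), 0 <= k <= n_total,
    by repeatedly least-squares fitting a quadratic to a handful of sampled
    points and resampling around the fitted minimum."""
    rng = np.random.default_rng(seed)
    ks = np.unique(np.linspace(0, n_total, n_points, dtype=int))
    k_star = 0
    for _ in range(n_rounds):
        Ls = np.array([evaluate(int(k)) for k in ks])
        a, b, c = np.polyfit(ks, Ls, 2)
        if a > 0:
            k_star = int(np.clip(-b / (2 * a), 0, n_total))
        else:                       # concave fit: fall back to the best sample
            k_star = int(ks[np.argmin(Ls)])
        spread = max(1, (ks.max() - ks.min()) // 4)
        ks = np.unique(np.clip(k_star + rng.integers(-spread, spread + 1, n_points),
                               0, n_total))
    return k_star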
Step 1 and step 2 can now be repeated, each time starting from the obtained
solution with overlapping modules from the previous iteration. Figure 5(b)
illustrates that by repeating the two steps, we sometimes can extend the
overlap between modules, but this comes at a cost. After the first iteration
of the algorithm, step 1 also can involve solving a linear system to calculate
the conditional probabilities. Thus, the first step is no longer guaranteed to
be as fast as in the first iteration. Still, for medium-sized networks,
multiple iterations are feasible. For example, for the networks presented in
Table 1, the first iteration took a few seconds and multiple iterations until
the point of no further improvements took less than two minutes on a normal
laptop. We have made the code available here:
https://sites.google.com/site/alcidesve82/.
## IV Conclusions
In this paper, we have introduced the map equation for overlapping modules.
When we allow nodes to belong to multiple module codebooks and minimize the
map equation over possibly overlapping network partitions, we can determine
which nodes belong to multiple modules and to what degree. Compared to hard
partitions detected by the map equation, we have further compressed
descriptions of a random walker on all tested real-world networks, and
therefore revealed more regularities in the flow on the networks. We find the
highest overlapping modular organization in sparse infrastructure networks,
but this result depends on our random-walk model of flow. Since the
mathematical framework is not limited to random flow, it would be interesting
to compare our results with results derived from empirical flow.
###### Acknowledgements.
We are grateful to Klas Markström and Daniel Andrén for several good
algorithmic suggestions. MR was supported by the Swedish Research Council
grant 2009-5344.
## References
* (1) R. Albert and A.L. Barabási, “Statistical mechanics of complex networks,” Rev Mod Phys 74, 47–97 (2002)
* (2) R. Pastor-Satorras and A. Vespignani, _Evolution and structure of the Internet: A statistical physics approach_ (Cambridge Univ Pr, 2004) ISBN 0521826985
* (3) S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.U. Hwang, “Complex networks: Structure and dynamics,” Physics reports 424, 175–308 (2006), ISSN 0370-1573
* (4) C. Ratti, S. Sobolevsky, F. Calabrese, C. Andris, J. Reades, and O. Sporns, “Redrawing the Map of Great Britain from a Network of Human Interactions,” PLoS ONE 5, e14248 (2010)
* (5) R.D. Alba and G. Moore, “Elite social circles,” Sociological Methods & Research 7, 167 (1978)
* (6) S. Fortunato, “Community detection in graphs,” Phys. Rep. 486, 75–174 (2010)
* (7) M. E. J. Newman and M. Girvan, “Finding and evaluating community structure in networks,” Phys Rev E 69, 026113 (2004)
* (8) M. E. J. Newman, “Fast algorithm for detecting community structure in networks,” Phys Rev E 69, 066133 (2004)
* (9) D.M. Wilkinson and B.A. Huberman, “A method for finding communities of related genes,” Proceedings of the National Academy of Sciences of the United States of America 101, 5241 (2004)
* (10) D. Gfeller, J.-C. Chappelier, and P. De Los Rios, “Finding instabilities in the community structure of complex networks,” Phys Rev E 72, 056135 (2005)
* (11) G. Palla, A.L. Barabási, and T. Vicsek, “Quantifying social group evolution,” Nature 446, 664 (2007)
* (12) S. Gregory, “A fast algorithm to find overlapping communities in networks,” Machine Learning and Knowledge Discovery in Databases, 408–423(2008)
* (13) Andrea Lancichinetti, Filippo Radicchi, José J. Ramasco, and Santo Fortunato, “Finding statistically significant communities in networks,” PLoS ONE 6, e18961 (04 2011), http://dx.doi.org/10.1371%2Fjournal.pone.0018961
* (14) J. Whitney, J. Koh, M. Costanzo, G. Brown, C. Boone, and M. Brudno, “Clustering with overlap for genetic interaction networks via local search optimization,” in _Proceedings of the 11th international conference on Algorithms in bioinformatics_ (Springer-Verlag, 2011) pp. 326–338
* (15) TS Evans and R. Lambiotte, “Line graphs, link partitions, and overlapping communities,” Physical Review E 80, 16105 (2009), ISSN 1550-2376
* (16) Yong-Yeol Ahn, James P Bagrow, and Sune Lehmann, “Link communities reveal multiscale complexity in networks,” Nature 466, 761–764 (2010)
* (17) Y. Kim and H. Jeong, “The map equation for link community,” arXiv:1105.0257 (May 2011)
* (18) M. Rosvall, D. Axelsson, and Carl T. Bergstrom, “The map equation,” Eur. Phys. J. Special Topics 178, 13–23 (2009)
* (19) M. Rosvall and Carl T. Bergstrom, “Maps of information flow reveal community structure in complex networks,” Proc Natl Acad Sci USA 105, 1118–1123 (2008)
* (20) P.D. Grünwald, _The minimum description length principle_ (The MIT Press, 2007) ISBN 0262072815
* (21) C. E. Shannon and W. Weaver, _The mathematical theory of communication_ (Univ of Illinois Press, 1949)
* (22) Martin Rosvall and Carl T. Bergstrom, “Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems,” PLoS ONE 6, e18209 (04 2011)
* (23) M. E. J. Newman, “The structure and function of complex networks,” SIAM Review 45, 167–256 (2003)
* (24) (2010), we compiled the road network data from this source http://europe.aaroads.com/eroads/erdlist.htm
* (25) D.J. Watts and S.H. Strogatz, “Collective dynamics of ‘small-world’networks,” Nature 393, 440–442 (1998)
* (26) K.I. Goh, M.E. Cusick, D. Valle, B. Childs, M. Vidal, and A.L. Barabási, “The human disease network,” Proc Natl Acad Sci USA 104, 8685 (2007)
* (27) (2010), we have compiled the network based on the coauthorships in refs. 1-3.
* (28) R. Guimera, S. Mossa, A. Turtschi, and L.A.N. Amaral, “The worldwide air transportation network: Anomalous centrality, community structure, and cities’ global roles,” Proc Natl Acad Sci USA 102, 7794 (2005)
* (29) L.A. Adamic and N. Glance, “The political blogosphere and the 2004 US election: divided they blog,” in _Proceedings of the 3rd international workshop on Link discovery_ (ACM, 2005) pp. 36–43, ISBN 1595932151
* (30) (2010), we have compiled the network from data provided by Twingly.com.
* (31) G. Palla, I. Derényi, I. Farkas, and T. Vicsek, “Uncovering the overlapping community structure of complex networks in nature and society,” Nature 435, 814–818 (2005)
* (32) A. Lancichinetti and S. Fortunato, “Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities,” Physical Review E 80, 16118 (2009)
* (33) S. Brin and L. Page, “The anatomy of a large-scale hypertextual web search engine,” Computer networks and ISDN systems 33, 107–117 (1998)
* (34) H. A. van der Vorst, “Bi-cgstab: A fast and smoothly converging variant of bi-cg for the solution of nonsymmetric linear systems,” SIAM J. on Scientific and Statistical Computing 13, 631–644 (1991)
|
arxiv-papers
| 2011-05-04T12:57:22 |
2024-09-04T02:49:18.590275
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Alcides Viamontes Esquivel, Martin Rosvall",
"submitter": "Alcides Viamontes Esquivel",
"url": "https://arxiv.org/abs/1105.0812"
}
|
1105.0886
|
# Exotic leaky wave radiation from anisotropic epsilon near zero metamaterials
Klaus Halterman Simin Feng Naval Air Warfare Center, Michelson Laboratory,
Physics Division, China Lake, California 93555, USA Viet Cuong Nguyen
Photonics Research Centre, School of Electrical and Electronics Engineering,
Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
###### Abstract
We investigate the emission of electromagnetic waves from biaxial
subwavelength metamaterials. For tunable anisotropic structures that exhibit a
vanishing dielectric response along a given axis, we find remarkable variation
in the launch angles of energy associated with the emission of leaky wave
radiation. We write closed form expressions for the energy transport velocity
and corresponding radiation angle $\varphi$, defining the cone of radiation
emission, both as functions of frequency and of material and geometrical
parameters. Full wave simulations exemplify the broad range of directivity
that can be achieved in these structures.
###### pacs:
81.05.Xj,42.25.Bs,42.82.Et
Metamaterials are composite structures engineered with subwavelength
components, and with the purpose of manipulating and directing electromagnetic
(EM) radiation. Recently, many practical applications have emerged, and structures have been fabricated related to cloaking, metamaterial perfect absorbers liu2 , and chirality zhang2 ; menzel . For metamaterials, the desired EM response
to the incident electric (${\bm{E}}$) and magnetic (${\bm{H}}$) fields,
typically involves tuning the permittivity, $\epsilon$, and permeability,
$\mu$ in rather extraordinary ways. This includes double negative index media
(negative real parts of both $\epsilon$ and $\mu$), single negative index
media (negative real part of $\epsilon$ or $\mu$), matched impedance zero
index media zio ; cuong (real part of $\epsilon$ and $\mu$ is near zero), and
epsilon near zero (ENZ) media (real part of $\epsilon$ is near zero).
Scenarios involving ENZ media in particular have gained prominence lately as
useful components to radiative systems over a broad range of the EM spectrum
enoch ; eng ; alu .
In conjunction with ENZ developments, there have also been advances in
infrared metamaterials, where thermal emitters laroche , optical switches shen
, and negative index metamaterials zhang ; zhang2 have been fabricated. Due
to the broad possibilities in sensing technologies, this EM band is of
considerable importance. Smaller scale metamaterial devices can also offer
more complex and interesting scenarios, including tunable devices souk ,
filters alek , and nanoantennas liu . For larger scale ENZ metamaterials, high
directivity of an incident beam has been demonstrated enoch . This can be
scaled down and extended to composites containing an array of nanowires,
yielding a birefringent response with only one direction possessing ENZ
properties alek . A metamaterial grating can be designed to also have
properties akin to ENZ media mocella .
Oftentimes, the structure being modeled is assumed to be isotropic. Although this
offers simplifications, anisotropy is an inextricable feature of metamaterials
that plays a crucial role in their EM response. For instance, at optical and
infrared frequencies, incorporating anisotropy into a thin planar
(nonmagnetic) waveguide can result in behavior indicative of double negative
index media pod . Anisotropic metamaterial structures can now be created that
contain elements that possess extreme electric and magnetic responses to an
incident beam. The inclusion of naturally anisotropic materials that are also
frequency dispersive (e.g., liquid crystals), allows additional control in
beam direction. It has also been shown that metamaterial structures requiring
anisotropic permittivity and permeability can be created using tapered
waveguides smol . By assimilating anisotropic metamaterial leaky wave
structures within conventional radiative systems, the possibility exists to
further control the emission characteristics.
Prompted by submicron experimental developments, and potential beam
manipulation involving metamaterials with vanishing dielectric response, we
investigate a planar anisotropic system with an ENZ response at near-ir
frequencies along a given (longitudinal) direction. By “freezing” the phase in
the longitudinal direction and tuning the electric and magnetic responses in
the transverse directions, we will demonstrate the ability to achieve
remarkable emission control and directivity. When excited by a source, the
direction of energy flow can be due to the propagation of localized surface
waves. There can also exist leaky waves, whereby the energy radiatively
“leaks” from the structure while attenuating longitudinally. Indeed, there can
be a complex interplay between the different type of allowed modes whether
radiated or guided, or some other mechanism involving material absorption.
Through a judicious choice of parameters, the admitted modes for the
metamaterial can result in radiation launched within a narrow cone spanned by
the outflow of energy flux.
Some of the earliest works involving conventional leaky wave systems reported
narrow beamwidth antennas with prescribed radiation angles tamir , and
forward/backward leaky wave propagation in planar multilayered structures
tamir2 . In the microwave regime, photonic crystals colak ; micco ; laroche
and transmission lines can also serve as leaky wave antennas lim . More
recently, a leaky wave metamaterial antenna exhibited broad side scanning at a
single frequency lim . The leaky wave characteristics have also been studied
for grounded single and double negative metamaterial slabs bacc . Directive
emission in the microwave regime was demonstrated for magnetic metamaterials
in which one of the components of $\bm{\mu}$ is small yuan . Nonmagnetic slabs
can also yield varied beam directivity slab2 .
To begin our investigation, a harmonic time dependence $\exp(-i\omega t)$ for
the TM fields is assumed. The planar structure contains a central biaxial
anisotropic metamaterial of width $2d$ sandwiched between the bulk superstrate
and substrate, each of which can in general be anisotropic. The material in
each region is assumed linear with a biaxial permittivity tensor,
$\bm{\epsilon}_{i}=\epsilon_{i}^{xx}\hat{\bf x}\hat{\bf
x}+\epsilon_{i}^{yy}\hat{\bf y}\hat{\bf y}+\epsilon_{i}^{zz}\hat{\bf
z}\hat{\bf z}$. Similarly, the biaxial magnetic response is represented via
${\bm{\mu}}_{i}=\mu_{i}^{xx}\hat{\bf x}\hat{\bf x}+\mu_{i}^{yy}\hat{\bf
y}\hat{\bf y}+\mu_{i}^{zz}\hat{\bf z}\hat{\bf z}$. The translational
invariance in the $y$ and $z$ directions allows the magnetic field in the
$i$th layer, ${\bf H}_{i}$, to be written ${\bf
H}_{i}=\hat{\bm{y}}h_{i}^{y}(x)e^{i(\gamma z-\omega t)}$, and the electric
field as ${\bf
E}_{i}=[\hat{\bm{x}}e_{i}^{x}(x)+\hat{\bm{z}}e_{i}^{z}(x)]e^{i(\gamma z-\omega
t)}$. Here, $\gamma\equiv\beta+i\alpha$ is the complex longitudinal
propagation constant. We focus on wave propagation occurring in the positive
$x$ direction, and nonnegative $\beta$ and $\alpha$. Upon matching the
tangential $\bm{E}$ and $\bm{H}$ fields at the boundary, we arrive at the
general dispersion equation that governs the allowed modes for this structure,
$\epsilon^{zz}_{2}k_{\perp,2}(\epsilon^{zz}_{3}k_{\perp,1}+\epsilon^{zz}_{1}k_{\perp,3})+\left[(\epsilon^{zz}_{2})^{2}k_{\perp,1}k_{\perp,3}-\epsilon^{zz}_{1}\epsilon^{zz}_{3}k^{2}_{\perp,2}\right]\tan(2dk_{\perp,2})=0,$ (1)
where the transverse wavevector in the superstrate (referred to as region 1),
$k_{\perp,1}$, is,
$k_{\perp,1}=\pm\sqrt{\epsilon_{1}^{zz}/\epsilon_{1}^{xx}(\beta^{2}-\alpha^{2})-k_{0}^{2}\mu_{1}^{yy}\epsilon_{1}^{zz}+2i\alpha\beta\epsilon_{1}^{zz}/\epsilon_{1}^{xx}}.$ (2)
For the metamaterial region (region 2), we write
$k_{\perp,2}=\pm\sqrt{k_{0}^{2}\mu_{2}^{yy}\epsilon_{2}^{zz}-\gamma^{2}\epsilon_{2}^{zz}/\epsilon_{2}^{xx}}$,
and for the substrate (region 3),
$k_{\perp,3}=\pm\sqrt{\gamma^{2}\epsilon_{3}^{zz}/\epsilon_{3}^{xx}-k_{0}^{2}\mu_{3}^{yy}\epsilon_{3}^{zz}}$.
The choice of sign in regions 1 and 3 plays an important role in determining
the physical nature of the mode solutions that arise. The two roots associated
with $k_{\perp,2}$ result in the same solutions to Eq. (1). The dispersion
relation, Eq. (1), is also obtained from the poles
of the reflection coefficient for a plane wave incident from above on the
structure. The transverse components of the ${\bm{E}}$ field in region 1 are,
$e_{1}^{z}=-ik_{\perp,1}/(k_{0}\epsilon^{zz}_{1})H_{1}e^{-k_{\perp,1}(x-d)}$,
$e_{1}^{x}=\gamma/(k_{0}\epsilon^{xx}_{1})h_{1}^{y}$, and
$h_{1}^{y}=H_{1}e^{-k_{\perp,1}(x-d)}$, where $H_{1}$ is a constant
coefficient.
Next, to disentangle the evanescent and leaky wave fields, we separate the
wavevector $k_{\perp,1}$ into its real and imaginary parts:
$k_{\perp,1}=\pm(q^{-}+iq^{+})$, with $q^{+}$ and $q^{-}$ real. The
$k_{\perp,1}$, $q^{-}$, and $q^{+}$ are in general related, depending on
$\operatorname{sgn}(\epsilon_{1}^{zz}\alpha\beta/\epsilon_{1}^{xx})$. For
upward wave propagation ($+x$ direction), clearly we have $q^{+}q^{-}\geq 0$.
It is also apparent that the parameter $q^{-}$ represents the inverse length
scale of wave increase along the transverse $x$-direction. We are mainly
concerned with the $k_{\perp,1}$ that correspond to exponential wave increase
in the transverse direction while decaying in $z$, a hallmark of leaky waves.
Although leaky wave modes are not localized, they can be excited by a point or
line source, which gives rise to limited regions of space in which the EM wave
amplitude increases before eventually decaying. When explicitly decomposing $k_{\perp,1}$
into its real and imaginary parts, there is an intricate interdependence among
$\gamma$, $\bm{\epsilon}_{i}$, and $\bm{\mu}_{i}$ (for $\alpha\neq 0$):
$q^{\pm}=1/\sqrt{2}(\sqrt{{\cal A}^{2}+{\cal B}^{2}}\mp{\cal A})^{1/2}$, where
${\cal
A}=\epsilon_{1}^{zz}/\epsilon_{1}^{xx}(\beta^{2}-\alpha^{2})-k_{0}^{2}\mu_{1}^{yy}\epsilon_{1}^{zz}$,
and ${\cal B}=2\alpha\beta\epsilon_{1}^{zz}/\epsilon_{1}^{xx}$. We will see
below that $q^{+}$ is the root of interest in determining leaky wave emission
for our structure. At this point the surrounding media can have frequency
dispersion in $\bm{\epsilon}_{i}$, and $\bm{\mu}_{i}$, while the anisotropic
metamaterial region can be dispersive and absorptive.
We are ultimately interested in anisotropic metamaterials with an ENZ response
along the axial direction ($z$-axis). In the limit of vanishing
$\epsilon_{2}^{zz}$, and perfectly conducting ground plane, Eq. (1) can be
solved analytically for the complex propagation constant, $\gamma$. The result
is
$\gamma^{\pm}=\dfrac{1}{\sqrt{2}}\dfrac{\sqrt{(\epsilon_{2}^{xx})^{2}+8(k_{0}d)^{2}\epsilon_{1}^{xx}\epsilon_{1}^{zz}\epsilon_{2}^{xx}\mu_{2}^{yy}\pm\left|\epsilon_{2}^{xx}\right|\sqrt{(\epsilon_{2}^{xx})^{2}+(4k_{0}d)^{2}\epsilon_{1}^{zz}\epsilon_{1}^{xx}(\mu_{2}^{yy}\epsilon_{2}^{xx}-\mu_{1}^{yy}\epsilon_{1}^{xx})}}}{2k_{0}d\sqrt{\epsilon_{1}^{xx}\epsilon_{1}^{zz}}}.$
(3)
The two possible roots correspond to distinct dispersion branches (seen
below). There are, in all, four solutions, $\gamma^{\pm}$, and
$-\gamma^{\pm}$. The geometrical and material dependence contained in Eq. (3),
determines the entire spectrum of the leaky radiation fields that may exist in
our system.
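For readers who want to explore this dependence numerically, the short Python sketch below evaluates the right-hand side of Eq. (3) for the two sign choices. The material and geometric values in the example are illustrative assumptions, not values taken from the text.

```python
import cmath
import math

def gamma_pm(eps1xx, eps1zz, mu1yy, eps2xx, mu2yy, k0, d):
    """Evaluate the two roots of Eq. (3) for the complex propagation constant."""
    kd = k0 * d
    inner = cmath.sqrt(eps2xx**2
                       + (4 * kd)**2 * eps1zz * eps1xx
                       * (mu2yy * eps2xx - mu1yy * eps1xx))
    common = eps2xx**2 + 8 * kd**2 * eps1xx * eps1zz * eps2xx * mu2yy
    denom = 2 * kd * cmath.sqrt(eps1xx * eps1zz)
    gamma_plus = cmath.sqrt(common + abs(eps2xx) * inner) / (math.sqrt(2) * denom)
    gamma_minus = cmath.sqrt(common - abs(eps2xx) * inner) / (math.sqrt(2) * denom)
    return gamma_plus, gamma_minus

# Example with assumed values: vacuum surroundings, f = 280 THz,
# half-width d = 0.05 um, nonmagnetic slab with eps_2^xx = 0.05.
k0 = 2 * math.pi * 280e12 / 3e8   # vacuum wavevector in 1/m
print(gamma_pm(1.0, 1.0, 1.0, 0.05, 1.0, k0, 0.05e-6))
```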
There are numerous quantities one can study in order to effectively
characterize leaky wave emission. One physically meaningful quantity is the
energy transport velocity, ${\bm{v}}_{T}$, which is the velocity at which EM
energy is transported through a medium loudon ; ruppin . It is intuitively
expressed as the ratio of the time-averaged Poynting vector, ${\bm{S}}_{\rm
avg}$, to the energy density, $U$: ${\bm{v}}_{T}\equiv{\bm{S}}_{\rm avg}/U$.
Properly accounting for frequency dispersion that may be present, we can thus
write the energy velocity for EM radiation emitted above the structure,
$\displaystyle{\bm{v}}_{T}=\dfrac{c/(8\pi){\rm
Re}[{\bm{E}}_{1}\times{\bm{H}}_{1}^{*}]}{1/(16\pi)\Bigl{[}{\bm{E}}_{1}^{\dagger}\cdot\dfrac{d(\omega{\bm{\epsilon}}_{1})}{d\omega}{\bm{E}}_{1}+{\bm{H}}_{1}^{\dagger}\cdot\dfrac{d(\omega{\bm{\mu}}_{1})}{d\omega}{\bm{H}}_{1}\Bigr{]}},$
(4)
where the conventional definition shitz of $U$ has been extended to include
anisotropy. Inserting the calculated EM fields, we get the compact expression
(assuming no dispersion in the superstrate),
$\displaystyle{\bm{v}}_{T}=\omega\dfrac{\bigl{(}\epsilon_{1}^{xx}q^{+}\hat{\bm{x}}+\epsilon_{1}^{zz}\beta\hat{\bm{z}}\bigr{)}}{\epsilon_{1}^{zz}\beta^{2}+\epsilon_{1}^{xx}(q^{+})^{2}}.$
(5)
The corresponding direction of energy outflow is straightforwardly extracted
from the vector directionality in Eq. (5),
$\varphi=\tan^{-1}\bigl{(}\dfrac{\epsilon_{1}^{xx}q^{+}}{\epsilon_{1}^{zz}\beta}\bigr{)},$
(6)
which holds in the case of loss and frequency dispersion in the metamaterial.
It is evident that Eq. (6) satisfies $\varphi\rightarrow 0$ as
$\alpha\rightarrow 0$, corresponding to the disappearance of the radiation
cone and possible emergence of guided waves. In this limit,
${\bm{v}}_{T}=\hat{\bm{z}}\omega/\beta$, which corresponds to the expected
phase velocity, or velocity at which plane wavefronts travel along the
$z$-direction. There is also angular symmetry, where
$\varphi(\epsilon_{2}^{xx})\rightarrow\varphi(-\epsilon_{2}^{xx})$, when
$\mu_{2}^{yy}\rightarrow-\mu_{2}^{yy}$. For high refractive index media
($\epsilon_{2}^{xx}$ or $\mu_{2}^{yy}$ $\rightarrow\infty$), we moreover
recover the expected result that $\varphi$ tends toward broadside
($\varphi=0$).
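As a numerical check on Eqs. (2) and (6), the transverse decomposition and the launch angle can be computed directly from a given $\gamma=\beta+i\alpha$. The sketch below assumes a lossless, dispersionless superstrate, consistent with the derivation of Eq. (5).

```python
import math

def launch_angle(beta, alpha, eps1xx, eps1zz, mu1yy, k0):
    """Return (q_plus, q_minus, phi) for region 1, following Eqs. (2) and (6)."""
    A = eps1zz / eps1xx * (beta**2 - alpha**2) - k0**2 * mu1yy * eps1zz
    B = 2.0 * alpha * beta * eps1zz / eps1xx
    mag = math.hypot(A, B)                              # sqrt(A^2 + B^2)
    q_plus = math.sqrt((mag - A) / 2.0)
    q_minus = math.sqrt((mag + A) / 2.0)
    phi = math.atan2(eps1xx * q_plus, eps1zz * beta)    # Eq. (6)
    return q_plus, q_minus, phi
```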
Figure 1: (Color online). The real ($\beta$) and imaginary ($\alpha$) parts of
the complex propagation constant $\gamma^{+}$, normalized by the vacuum
wavevector $k_{0}$ at $f=280$ THz ($\mu_{2}^{yy}=1$). The figures (a) and (c)
are 3D global views depicting $\alpha$ and $\beta$ as functions of
$\epsilon_{2}^{xx}$ and the thickness parameter $d$. Figures (b) and (d)
represent the normalized $\beta$ and $\alpha$, respectively, as functions of
$\epsilon_{2}^{xx}$ and for $d=0.01$ µm (solid curve), $d=0.05$ µm (dotted
curve), and $d=0.1$ µm (dashed curve).
Leaky wave emission from an anisotropic metamaterial in vacuum is
characterized in Figs. 1(a) and (c), where 3-D views of the normalized
$\beta$ (${\rm Re}[\gamma^{+}]$), and $\alpha$ (${\rm Im}[\gamma^{+}]$), are
shown as functions of the transverse dielectric response, $\epsilon_{2}^{xx}$,
and thickness parameter, $d$ (the width = $2d$). In Fig. 1 (b) and (d), 2D
slices depict the normalized $\beta$ and $\alpha$ as functions of
$\epsilon_{2}^{xx}$. Only the positive root, $\gamma^{+}$, is shown,
corresponding to the leaky wave case of interest, with $\alpha\geq 0$. The
slight kinks in the curves (see Fig. 1(b)) are at points where the
$\gamma^{-}$ solutions would emerge (for $\alpha<0$). Both panels on the left
clearly demonstrate how $\beta$ rises considerably with increasing
$|\epsilon_{2}^{xx}|$. For subwavelength widths ($k_{0}d\ll 1$), and to lowest
order, the propagation constant varies linearly in $\epsilon_{2}^{xx}$, as
$\beta/k_{0}\approx\epsilon_{2}^{xx}/(2k_{0}d\sqrt{\epsilon_{1}^{xx}\epsilon_{1}^{zz}})$.
As the dielectric response $\epsilon_{2}^{xx}$ vanishes, corresponding to an
isotropic ENZ slab, we see from the figures (and Eq. (3)) that
$\beta\rightarrow 0$ (long wavelength limit), and emission is subsequently
perpendicular to the interface (see below). It is also interesting that the
important parameter $\alpha$ characterizing leaky waves rapidly increases from
zero at $\epsilon_{2}^{xx}=0$ and peaks at differing values, depending on the
width of the emitting structure (Figs. 1 (c) and (d)), until eventually
returning to zero at the two points,
$\epsilon_{2}^{xx}=4[-2(k_{0}d)^{2}\pm\sqrt{(k_{0}d)^{2}+4(k_{0}d)^{4}}]$.
This illustrates that $\alpha$ is spread over a greater range of
$\epsilon_{2}^{xx}$ for larger widths, but as previously discussed in
conjunction with Fig. 1, $\alpha$ simultaneously suffers a dramatic reduction.
For small $d/\lambda$, the extremum of Eq. (3) reveals that the strengths of
the $\alpha$ peaks, $\alpha_{\rm max}$, are given by $\alpha_{\rm max}\approx
1/2\pm k_{0}d$.
Figure 2: (Color online) Leaky wave launch angle, $\varphi$, as a function of
permittivity, $\epsilon_{2}^{xx}$ (Figs. (a) and (b)) for eight different
thicknesses in succession, starting with $d=1$ µm (dotted orange curve), and
subsequent values of $d$ (in µm), equaling $1/5,1/10,1/20,1/30,1/40,1/50$, and
$1/60$. Other parameters are as in Fig. 1. In (c) the emission angle is shown
as a function of frequency for the same thicknesses in (a) and (b). In (d),
the effects of geometrical variation are presented for
$\epsilon_{2}^{xx}=0.001,0.01,0.05,0.2,0.4,0.6$, and $0.8$. The curves with
larger overall $\varphi$ correspond to smaller $\epsilon_{2}^{xx}$ in
succession.
Next, in Fig. 2, we show the angle, $\varphi$, which defines the radiation
cone from the surface of the metamaterial structure, as a function of
$\epsilon_{2}^{xx}$, frequency, and the thickness parameter $d$. In panel (a), the
variation in $\varphi$ is shown over a broad range of $\epsilon_{2}^{xx}$ for
nonmagnetic media ($\mu_{2}^{yy}=1$), while panel (b) is for a metamaterial
with vanishing $\mu_{2}^{yy}$, representative of a type of matched impedance
cuong . The eight curves in Fig. 2 (a) and (b) represent different widths,
identified in the caption. We see that for $\epsilon_{2}^{xx}\rightarrow 0$,
we recover the isotropic result of nearly normal emission ($\varphi\approx
90^{\rm o}$), discussed and demonstrated in the millimeter regime enoch . This
behavior can be understood in our system, at least qualitatively, from a
geometrical optics perspective and a generalization of Snell’s Law for
bianisotropic media kong . When the magnetic response vanishes (Fig. 2 (b)),
the emission angle becomes symmetric with respect to $\epsilon_{2}^{xx}$,
dropping from $\varphi=\pi/2$ for zero $\epsilon_{2}^{xx}$, to broadside
($\varphi=0$) when $\epsilon_{2}^{xx}=\pm 4k_{0}d$. Thus thinner widths result
in more rapid beam variation as a function of $\epsilon_{2}^{xx}$. In Fig.
2(c) we show how the emission angle varies as a function of frequency, with
the transverse response obeying the Drude form,
$\epsilon_{2}^{xx}=1-\omega_{p}^{2}/(\omega^{2}+i\Gamma\omega)$. Here,
$\omega_{p}=(2\pi)120$ THz and $\Gamma=0$, to isolate leaky wave effects. With
increasing frequency, we observe similar trends found in the previous figures,
where a larger dielectric response pulls the beam towards the metamaterial. In
(d), a geometrical study illustrates how the emission angle varies with
thickness: for $\epsilon_{2}^{xx}\mu_{2}^{yy}<1$, the emission angle rises
abruptly with increased $d$, before leveling off at
$\varphi=\tan^{-1}(\sqrt{1/(\epsilon_{2}^{xx}\mu_{2}^{yy})-1})$. Physically, as
the slab increases in size, the complex propagation constant becomes purely
real, $\gamma\rightarrow\sqrt{\epsilon_{2}^{xx}\mu_{2}^{yy}}$, and
$q^{+}\rightarrow\sqrt{1-\epsilon_{2}^{xx}\mu_{2}^{yy}}$. This is consistent
with what was discussed previously involving the depletion of $\alpha$ with
$d$; for thick ENZ slabs, leaky wave radiation is replaced by conventional
propagating modes. For fixed $\epsilon_{2}^{xx}$, there is also a critical
thickness, $d^{*}$, below which no leaky waves are emitted, which by Eq. (3)
is, $d^{*}=\epsilon_{2}^{xx}/(4k_{0}\sqrt{1-\epsilon_{2}^{xx}\mu_{2}^{yy}})$.
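The two closed-form estimates quoted above, the $\epsilon_{2}^{xx}$ values at which $\alpha$ returns to zero and the critical half-thickness $d^{*}$, are easy to evaluate; the minimal sketch below simply transcribes those expressions, with any numerical inputs left as illustrative assumptions.

```python
import math

def alpha_zero_points(k0, d):
    """The two eps_2^xx values at which alpha returns to zero (see text)."""
    kd = k0 * d
    root = math.sqrt(kd**2 + 4 * kd**4)
    return 4 * (-2 * kd**2 + root), 4 * (-2 * kd**2 - root)

def critical_half_thickness(eps2xx, mu2yy, k0):
    """Critical d* below which no leaky waves are emitted, for fixed eps_2^xx."""
    return eps2xx / (4 * k0 * math.sqrt(1 - eps2xx * mu2yy))
```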
Figure 3: Normalized field profiles illustrating broad angular variation in
beam emission. The arrows along the interfaces depict the Poynting vector. The
top left and right panels correspond to $\epsilon_{2}^{xx}=0.05$, and
$\epsilon_{2}^{xx}=0.05+0.02i$, respectively. The bottom left and right
panels are for $\epsilon_{2}^{xx}=0.66$ and $\epsilon_{2}^{xx}=0.66+0.02i$,
respectively. The metamaterial is subwavelength ($d=1/20$ µm) and nonmagnetic
($\mu_{2}^{yy}=1$). Coordinates are given in units of $(\times 10)$ µm.
These results are consistent with simulations from a commercial finite element
software package comsol . In Fig. 3, we show the normalized $|{\bm{H}}|$
arising from a source excitation (at $f=280$ THz) within the metamaterial for
$d=1/20$ µm. The left two panels are for $\Gamma=0$, and the right two have
absorption present. The full wave simulations agree with Fig. 2(a) (dashed
green curve), where the leaky-wave energy outflow spans a broad angular range
when $0\lesssim\epsilon_{2}^{xx}\lesssim 0.68$. The right two panels exemplify
the robustness of this effect for moderate amounts of loss present in the
metamaterial.
In conclusion, we have demonstrated leaky wave radiation in subwavelength
biaxial metamaterials with vanishing permittivity along the longitudinal
direction. The leaky-wave radiation cone illustrated broad directionality
through variations in the transverse EM response. Utilizing nanodeposition
techniques, such structures can be fabricated as an array of metallic
nanowires embedded in a self-organized porous nanostructured material alek .
###### Acknowledgements.
K.H. is supported in part by ONR and a grant of HPC resources as part of the
DOD HPCMP.
## References
* (1) X. Liu, et al., Phys. Rev. Lett. 104, 207403 (2010).
* (2) S. Zhang, et al., Phys. Rev. Lett. 102, 023901 (2009).
* (3) C. Menzel, et al., Phys. Rev. Lett. 104, 253902 (2010).
* (4) R. W. Ziolkowski, Phys. Rev. E 70, 046608 (2004).
* (5) V. C. Nguyen, et al., Phys. Rev. Lett. 105, 233908 (2010).
* (6) S. Enoch, et al., Phys. Rev. Lett. 89, 213902 (2002).
* (7) M. Silveirinha and N. Engheta, Phys. Rev. Lett. 97, 157403 (2006).
* (8) A. Alù, et al., Phys. Rev. B 65, 155410 (2007).
* (9) M. Laroche, et al., Phys. Rev. Lett. 96, 123903 (2006).
* (10) N.-H. Shen, et al., Phys. Rev. Lett. 106, 037403 (2011).
* (11) S. Zhang, et al., Phys. Rev. Lett. 95, 137404 (2005).
* (12) N.-H. Shen, et al., Phys. Rev. Lett. 106, 037403 (2011).
* (13) L. V. Alekseyev, et al., Appl. Phys. Lett. 97, 131107 (2010).
* (14) X.-X. Liu and A. Alù, Phys. Rev. B 82, 144305 (2010).
* (15) V. Mocella, et al., Opt. Express 18, 25068 (2010).
* (16) V. A. Podolskiy and E. E. Narimanov, Phys. Rev. B 71, 201101(R) (2005).
* (17) I. I. Smolyaninov, et al., Phys. Rev. Lett. 102, 213901 (2009).
* (18) R. E. Collin and F. J. Zucker, Antenna Theory, Part II, McGraw-Hill (1969).
* (19) T. Tamir and F. Y. Kou, IEEE J. Quantum Electron. 22, 544 (1986).
* (20) E. Colak, et al., Opt. Express 17, 9879 (2009).
* (21) A. Micco, et al., Phys. Rev. B 79, 075110 (2009).
* (22) S. Lim, IEEE Trans. Microw. Theory Tech. 52, 2678 (2004).
* (23) P. Baccarelli, et al., IEEE Trans. Microw. Theory Tech. 53, 32 (2005).
* (24) Y. Yuan, et al., Phys. Rev. A 77, 053821 (2008).
* (25) H. Liu and K. J. Webb, Phys. Rev. B 81, 201404(R) (2010).
* (26) R. Loudon, J. Phys. A 3, 233 (1970).
* (27) R. Ruppin, Phys. Lett. A 299, 309 (2002).
* (28) L. D. Landau, E. M. Lifshitz, and L. P. Pitaevskii, Electrodynamics of Continuous Media (Butterworth-Heinemann, Oxford, 1984), 2nd ed.
* (29) T. M. Grzegorczyk, et al., IEEE Trans. Microw. Theory Tech. 53, 1443 (2005).
* (30) COMSOL Multiphysics, http://www.comsol.com
1105.0902
# Modeling Network Evolution Using Graph Motifs
Drew Conway
###### Abstract
Network structures are extremely important to the study of political science.
Much of the data in its subfields are naturally represented as networks. This
includes trade, diplomatic, and conflict relationships. The social structure of
several organizations is also of interest to many researchers, such as the
affiliations of legislators or the relationships among terrorists. A key aspect
of studying social networks is understanding the evolutionary dynamics and the
mechanisms by which these structures grow and change over time. While current
methods are well suited to describing static features of networks, they are less
capable of specifying models of change and simulating network evolution. In
the following paper I present a new method for modeling network growth and
evolution. This method relies on graph motifs to generate simulated network
data with particular structural characteristics. This technique departs notably
from current methods both in form and function. Rather than a closed-form
model, or stochastic implementation from a single class of graphs, the
proposed “graph motif model” provides a framework for building flexible and
complex models of network evolution. The method is computationally based,
relying on graph theoretic and machine learning techniques to grow networks.
The paper proceeds as follows: first a brief review of the current literature
on network modeling is provided to place the graph motif model in context.
Next, the graph motif model is introduced, and a simple example is provided.
As a proof of concept, three classic random graph models are recovered using
the graph motif modeling method: the Erdős-Rènyi binomial random graph, the
Watts-Strogatz “small world” model, and the Barabási-Albert preferential
attachment model. In the final section I discuss the results of these
simulations and the advantages and disadvantages of using this technique to
model social networks.
The study of networks is one of the most interdisciplinary fields in
contemporary scholarship. With its origins in graph theory, and migration to
the social science largely via sociology (Freeman, 2004), many political
scientists have discovered the value of these methods. The primary reason for
this is that much of the data relevant to political science can be represented
as a network. In network science the primary unit of analysis is the edge, or
link between two actors. Likewise, many subfields in political science study
interactions and organizations that are naturally modeled as a network. At the
macro-level in international relations this includes trade, diplomatic and
conflict relationships, while at micro-level networks can be used to study the
structure of terrorist organizations. For comparative politics this may
include government coalitions networks, or party affiliations. Finally, in
American politics this can include campaign finance contributions or
legislative co-sponsorship networks.
Given the breadth of possible applications, network analysis is a growing
methodological subfield within political science. The work within this niche
can be crudely divided into two applications of network analysis: structurally
descriptive, or networks as dependent variables. Research in the former
category has a relatively long and rich history, with well-established methods
for describing the structure of networks. 111The most complete reference on
statistically descriptive methods is _Social Network Analysis: Methods and
Applications_ , by Wasserman and Faust (Wasserman and Faust, 1994). These
methods include measures of actor centrality, whereby the relative position or
role of actors is based on their number and type of edges within the network.
In political science these methods have most frequently been applied to
international relations studies. For example, structurally descriptive methods
have been used to illustrate how social capital transferred through inter-
governmental organizations’ membership networks can create conflicts between
states (Hafner-Burton and Montgomery, 2006). Centrality-based methods have
also been used to identify key actors in the conflict in Chechnya, and to show
that these central actors vary in type, i.e., civilian, military, etc. (Hämmerli et
al., 2006). Additionally, structural similarities within a network of conflict
dyads among countries have even been used to describe international conflict
patterns (Maoz et al., 2006). In American politics, structurally descriptive
methods have been used to identify the most influential members of the U.S.
Congress based on their co-sponsorship networks (Fowler, 2006).
In many applications of network analysis in political science authors attempt
to explain a behavior or observed outcome from the structural features of the
network being studied. In the example of the co-sponsorship network study, the
position of the legislators inferred by their centrality is used to predict
the number of legislative amendments proposed by members. This type of
inference is common in descriptive network analysis. It is difficult, however,
to know the direction of causality. Structurally descriptive work is often
based on static snapshots of network data, or large aggregations over time, as
is the case in the aforementioned legislator study, which aggregates co-
sponsorship data from 1973-2004. From this perspective it is difficult to
parse whether legislative influence comes as a result of co-sponsorship
behavior, or that co-sponsorship behavior is affected by influence. The same
can be said for the other studies. In each case, and in most examples of
structurally descriptive work both in and outside of political science, the
network is given.
Descriptive techniques rarely give any insight as to the data generation
process that resulted in the network being studied 222It should be noted that
in (Hämmerli et al., 2006) the authors explicitly point out the limitations of
structurally descriptive techniques for analyzing network time-series. An
alternative viewpoint is to assume that network structure is endogenous to the
preferences of actors, and then to estimate how various structural features or
actor attributes contribute to network structure. As a simple example, consider
membership in a terrorist network. One may assume that members of al-Qaeda
were more likely to make ties with other known al-Qaeda affiliates. Using a
statistical technique known as exponential random graph models (ERGM) one
could estimate a coefficient to measure this in- and out-group effect given
the appropriate data (Hunter et al., 2008; Robins et al., 2007; Snijders et
al., 2006). This type of network analysis treats networks as the dependent
variable, and is the second common application of network analysis in
political science.
The innovation presented by ERGM models is disentangling the highly
interrelated dependency structures of network relationships in order to
generate unbiased estimates of network parameters. The application of
exponential random graph models, however, has only very recently emerged
within political science. Some work goes only so far as to suggest that ERGM
models be used to correct for the dependencies present in international
relations networks (Hafner-Burton et al., 2009). There is, however, some very
recent research on causality on networks wherein the authors use an ERGM
design to determine if there is systematic bias in reported contacts among
lobbyists (Fowler et al., 2011). In fact, most of the application of ERGM
models in political science remain unpublished working papers.333For further
examples of working papers using this technique see the recent submission to
the Political Networks section of APSA
(http://www.apsanet.org/content_69102.cfm). Given current research trends,
ERGM models will continue to proliferate throughout the discipline.
A clear advantage of these techniques over structurally descriptive analyses
is that the direction of the effect being measured is explicit. Consider again
the example of inferring the role of members of the U.S. Congress based on
their structural position. Using an ERGM design the problem is approached in
reverse. The model estimates separate coefficients for the centrality metrics
and number of amendments from each member given the co-sponsorship network.
From this, one could potentially determine which had a larger or more
significant effect on the structure of the network. It is much more difficult,
however, to model how these parameters will affect the nature of co-
sponsorship relationships among legislators going forward.
Analyzing longitudinal networks is a small but vibrant area of research within
the network analysis community. Much of this work has been done by Tom A.B.
Snijders, and has focused on generating maximum likelihood estimators (MLE)
for network panel data (Snijders, 2008; Snijders et al., 2010; Steglich et
al., 2010). These dynamic network models make two assumptions. First, each
actor is a singular agent determining its own relationships. Second, these
relationships are a function of a time-varying continuous random variable.
Then, a MLE is specified to estimate future states of the network given time
interval network data, i.e., the structure of the network at
$t_{0},t_{1},...,$ and so on. These models are extremely powerful, and have
pushed the envelope of this discipline by incorporating network dynamics to a
discipline dominated by static models. Unfortunately, given the current state-
of-the-art, these methods also have significant limitations, particularly with
respect to variations in the type and availability of network data.
At present, there does not exist a flexible set of tools for modeling the
growth and change of networks over time. ERGM and dynamic MLE methods have
introduced many new tools, however, methods for generating random networks
from sparse and heterogeneous network data remain elusive. Such techniques
would be extremely valuable to social science research—and especially
political science.
Suppose a researcher was interested in studying recruitment into an
organization engaged in illicit activity, such as a terrorist group. In view of
the literature on recruitment into terrorist organizations (Sageman, 2008;
Hegghammer, 2006), one could assume that individuals’ social networks play a
large role in the evolution of these groups. In practice, gathering
survey data from terrorists is difficult or impossible. There does exist,
however, sparse historical data on terrorist networks (Carley et al.; Krebs,
2002). Using this limited data, it may be useful to test theories of terrorist
recruitment given the roles and positions of the actors in the network.
Alternatively, one might simulate a terrorist network and assign actor roles
as some random or stochastic process. Then, as with the real data, attempt to
study recruitment within the networks. Unfortunately, given the current set of
tools this is not possible.
Ideally, one could learn something about the state of this organization from
the nature of its structure and the types of actors in it, then posit a model
for change and simulate future states of the network. In the case of terrorist
recruitment, such a model might assume that new
members join as a function of specific personal attributes and always at the
periphery. With the inherent internal security concerns of terrorists this
seems reasonable. One could also assume that recruitment only occurs with
actors assigned that role in the network, and thus growth and change is
localized to these areas of the network. With a method that was able to
leverage data in this way both of these models could be tested, and the
resulting simulations could be analyzed and compared.
There also exists a burgeoning literature on the effects of social networks on
political outcomes and collective actions. This research suggests a strong
relationship between the two (McClurg, 2003; Scholz and Wang, 2006; Siegel,
2009). As such, it may be the case that individuals are using social networks
to overcome problems inherent in collective action. Informational or
efficiency gains may be made through the repeating of certain network
structures in various contexts. There are however, many theoretical
considerations for how social networks affect collective action (Siegel,
2009). Up to this point, much of the applied research in this area has focused
on experimental work and cooperative games (Jackson, 2008; Easley and
Kleinberg, 2010). A classic example from this literature is having actors vote
on a color, such as red or blue. Players are incentivized to match the color
chosen by everyone in the network but can only see the votes of their
immediate neighbors. In this case, the networks are treated as exogenous and
the experiments attempt to study the types of networks that reach equilibrium.
One may, however, be interested in studying how variation in votes cast
alters the trajectory of network structure. Rather than studying only the
collective act of voting for a color, by using the network’s structure and
color votes one could specify a model for how this network grows as a function
of these two variables.
In the following sections I propose a technique for modeling network evolution
in precisely this way. Using information ascertained from a base set of
network relationships, the evolutionary process is modeled using a set of
graph motifs, or small constituent components of a network. The proposed
method attempts to fill the void in network modeling tools for social
scientists elaborated above. The remainder of this paper proceeds as follows.
First, a motivation is given for why using graph motifs in this way is a useful
approach for modeling network growth. Next, the formal specification of the
graph motif modeling technique is described. This abstract framework has been
implemented in a software package. This software is then used to provide a
proof of concept for graph motif modeling. This is done by recovering three
classic random graph models: the Erdős-Rènyi binomial random graph, the Watts-
Strogatz “small world”, and the Barabási-Albert preferential attachment. In
the final section, the results of these simulations are discussed, noting the
strengths and weaknesses of the technique.
### Modeling random network growth
The research and development of random graph models that consistently
characterize structural phenomena observed empirically in social and complex
networks dates back to the work of Paul Erdős and Alfrèd Rènyi in their
seminal work on binomial random graph models (Erdős and Rènyi, 1959). In the
intervening decades there has been an explosion of research in this area. The
so-called “small-world” network model was introduced by Watts and Strogatz
(Watts and Strogatz, 1998), and was predicated on two important observations
in social networks: short average path length between nodes and a high level
of localized clustering. These structural phenomena were often observed in
relatively small networks, but as technology improved so did the ability to
study large complex networks. Following the Watts-Strogatz model was the work
of Barabási and Albert (Albert and Barabási, 2002), which noted that structure
within complex networks exhibited “preferential attachment,” meaning a limited
number of nodes drew in disproportionally more edges than the vast majority of
others. This process generated networks with “heavy-tailed” degree
distributions (Barabási and Albert, 1999; Albert and Barabási, 2002; Clauset
et al., 2009). Later, these models will be used as examples of the types of
structures that can be modeled using graph motifs.
As mentioned, ERGM techniques have emerged as a preferred method for modeling
networks. Random networks can also be generated with these techniques using
Markov-Chain Monte Carlo (MCMC) simulations. Rather than estimating
coefficients for some network parameters given a network, random graphs can be
generated given estimated parameters. This class of models retains the
structural consistency of previously developed models. ERGMs, however, assume a
fixed number of nodes, and structure is modeled as random variables in a
stochastic process (Robins et al., 2007).
In addition to these general models, over the past several years an explosion
of highly tailored network models have appeared. These models were developed
to address specific structural features of networks. Some of these models are
more closely related to graph theory, and attempt to bridge the gap between
the classical concepts of Erdős and Rènyi and observed features in social
networks (Bollobás, 2001; Newman, 2003). Similarly, an alternative class of
contemporary models takes an agent-level approach. These simulate structural
growth as a decision process occurring endogenously through the nodes
themselves (Leskovec et al., 2005; Steglich et al., 2010). While the literature
on random graph models has provided enormous insight into the general
structural dynamics of networks, these models remain limited, specifically in
both their underlying assumptions about the means by which structure is
generated and the types of networks they can simulate.
Using a model specifically designed to approximate the dynamics of interest
may be adequate. The rigidity of these models, however, makes them much less
useful for modeling poorly understood network dynamics. The ERGM class of
models can theoretically model any countable graph, which itself constitutes a
monumental and unifying contribution. In practice, however, the MCMC
simulations used to generate random graphs from ERGM models achieve a much
sparser set of graphs. The problem of model degeneracy is well known in the
ERGM literature, and attempts have been made to address these problems
(Handcock, 2003). Unfortunately, the practical implications are quite
limiting, with many models of interest degenerating into complete (fully
connected) or empty (edgeless) graphs.
The primary shortcoming of these models is their treatment of the atomistic
component of a network—the node. In all of the models mentioned above, and in
fact in the vast majority of random graph models of social networks, actors
are modeled as entering the system in a vacuum. That is, nodes enter free of
any pre-existing structure. In real networks, however, this is not the case.
Except in the simplest of cases, whenever an actor enters a network system
that actor is bringing some degree of exogenous structure. This structure will
have an immediate impact on the growth trajectory of that network. This is
particularly true of human social networks, which exist in a rich, complex,
and often hidden fabric of social ties.
Consider the network dynamics when two people meet each other for the first
time. Upon meeting, these individuals have changed the structure of their
social networks by creating an edge between them. With that structure they
have also brought with them their pre-existing social structure. All of the
people they already know: friends, family, co-workers, competitors, etc. This
meeting has not simply created a dyad existing in isolation, but rather it has
connected two large components. It may have also increased the probability
that the single bridge created by this dyad will in turn become a cluster of
shared relationships. Figure 1 visually depicts the difference between these
concepts.
(a) Dyadic model
(b) Motif model
Figure 1: Competing models of social interaction
There is considerable nuance and ambiguity with respect to how to model the
relationships in the right panel of Figure 1. By contrast, the dyadic
relationship on the left is simply a binary event. The dependencies related to these ties
can be a function of structurally descriptive metrics. These could include
network-level metrics, such as diameter, centralization, or density. Likewise,
node- and edge-level attributes could be used, such as centrality metrics or
node type. The plethora of potential modeling parameters have led to a
literature full of rigorous, yet limited models for network growth. Current
random graph models of social networks are useful, but are limited by
oversimplified assumptions that ignore the inherent complexities of social
structure.
This research attempts to close the gap between the theoretical assumptions of
current models and the self-evident reality of natural network interactions by
providing a more flexible framework capable of modeling a much larger set of
networks by leveraging graph motifs.
## Graph motif model
To overcome the limitations of current random graph models of social networks
the concept of a graph motif model (GMM) is introduced. This new framework is
predicated on two key assumptions that distinguish it from other network
modeling techniques. First, new actors entering a network do not do so in
isolation. That is, actors bring exogenous structure to a network when
entering it. Models of social networks, therefore, should build new structure
in an analogous way. To model networks this way it is necessary to posit
assumptions about these exogenous structures and the process by which they
will enter the network. For GMM the assumption is that new network structure
will resemble currently observable structure in type and frequency.
With this assumption, current structure can be used to form the necessary
beliefs. This, however, forces a strict requirement for GMM that is not shared
by other random graph models: the need for some base structure from which to
derive beliefs about the network being modeled. One could argue
that all random graph models require base structure in that they all require
some fixed number of nodes to model. A set of nodes without structure still
constitutes a base graph, despite its degenerate form. This is particularly
true of the Barabási-Albert model of preferential attachment, which always
begins with the same base structure.444In practice this is often modeled as a
single dyad or a three-node line graph.
The observation that networks perpetuate self-similarity as they grow has been
noted several times in the empirical literature. In fact, complex networks
exhibit significant fractal scaling (Song et al., 2005; Kim et al., 2006; Kim
and Jeong, 2006). That is, as their size increases, so too does the amount of
self-similarity. This observation forms the critical bridge between the first
and second assumptions. To be clear, there are several assumptions that could
be used to form beliefs about network structure. These include those mentioned
earlier, such as node-level metrics, stochastic processes, etc. Self-
similarity is preferable because it is empirically supported and not dependent
on a graph’s type. If a model were based on a node-level metric it could only
describe networks for which that metric was relevant. For example, many
metrics are only defined for directed or undirected graphs, weighted or
unweighted graphs, weakly connected or strongly connected graphs, and so on.
The GMM framework described here applies to undirected and directed graphs
with an arbitrary set of node or edge attributes. While this allows for an
extremely rich set of possible models, it precludes some graph forms.
Specifically, this restricts the proposed model from describing multigraphs
and hypergraphs. It may be possible to incorporate multigraph models into the
GMM framework; however, the abstract nature of hypergraphs makes their
applicability unclear. For example, consider a hypergraph wherein a single
edge is incident on many nodes. This is not a construction that models social
interactions naturally, and thus incorporating them into the model has limited
value.555A “multigraph” is defined as a graph where any two nodes may have
multiple edges between them. Conversely, a “hypergraph” is defined as a graph
where a single edge may be incident on any number of nodes.
With these restrictions, the model proceeds as follows: require some graph $G$
of arbitrary size and some integer $\tau>1$. Next, count all of the subgraph
isomorphisms in $G$ of graphs $i\in I$, where $I$ is the set of all _single-
component graphs formed by at most $\tau$ nodes_. The restriction that $\tau$ be
strictly greater than one accounts for the fact that $\tau=1$ would allow for a
singleton element. Allowing this would violate the first assumption of GMM of
exogenous network structure. These single-component graphs are the motifs on
which the entire GMM framework rests. For example, suppose $\tau=3$; then
$I=[\{V=2,E=1\},\{V=3,E=2\},\{V=3,E=3\}]$, where $V$ is the number of nodes and
$E$ the number of edges for graph $i\in I$. In this example
$\{V=3,E=1\}\notin I$, as this graph contains two components: a dyad and an
isolate. Also, note that these motifs have a natural ordering
given their number of nodes and edges. This ordering will become critical to
how new network structure enters the model.
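The motif set $I$ can be enumerated directly with NetworkX's atlas of small graphs. The sketch below is an illustrative reconstruction under the "at most $\tau$ nodes" reading used above, not the GMM package's own implementation.

```python
import networkx as nx

def motif_set(tau):
    """All non-isomorphic single-component graphs on 2..tau nodes, in natural order.
    The graph atlas covers graphs up to 7 nodes, so this assumes tau <= 7."""
    motifs = [g for g in nx.graph_atlas_g()
              if 2 <= g.number_of_nodes() <= tau and nx.is_connected(g)]
    motifs.sort(key=lambda g: (g.number_of_nodes(), g.number_of_edges()))
    return motifs

print([(g.number_of_nodes(), g.number_of_edges()) for g in motif_set(3)])
# -> [(2, 1), (3, 2), (3, 3)], matching the tau = 3 example above
```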
Next, let $f(i_{n},G)$ be a function that describes the number of subgraph
isomorphisms of $i_{n}$ contained in $G$. Then, let $S$ be an ordered
$n$-tuple where $S=\{i_{1},i_{2},...,i_{n}\}$, such that $i_{n}$ is
increasing in number of nodes and edges. For two graphs to be isomorphic there
must be a one-to-one correspondence among the nodes and edges of two graphs. A
subgraph isomorphism between two graphs _G_ and _H_ , therefore, is defined as
such a correspondence for graph _G_ in an induced subgraph of _H_. This
construction is very useful, as it allows for the quantification of motif
frequency in any given base structure. Put another way, this is the
composition of a graph given some set of possible constituent parts. While the
subgraph isomorphism problem is known to be NP-complete, certain cases can be
solved in polynomial time and several algorithmic approximations have been
proposed (Ullmann, 1976).666The term “NP-complete” refers to the complexity
class of a problem. Specifically, an NP-complete problem is one for which a
candidate solution can be verified in polynomial time and to which every other
such problem can be reduced. Simply, this represents a very difficult problem
to solve algorithmically. The ordered
tuple $S$ can then be used to generate beliefs about the type of structure
entering the network as it grows in size and complexity. In order to generate
these beliefs some function must be defined over the structures in $S$.
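Given a base graph, the counts $f(i_{n},G)$ can be obtained with the VF2 matcher that ships with NetworkX (discussed again in the implementation section below). The sketch below is a hedged illustration that counts induced subgraph isomorphisms, including symmetric mappings, and reuses `motif_set` from the previous sketch.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def motif_counts(G, motifs):
    """Ordered tuple S: subgraph-isomorphism counts of each motif in G (VF2)."""
    counts = []
    for motif in motifs:
        matcher = GraphMatcher(G, motif)
        counts.append(sum(1 for _ in matcher.subgraph_isomorphisms_iter()))
    return counts

S = motif_counts(nx.petersen_graph(), motif_set(3))  # motif_set from the sketch above
print(S)
```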
### Generating beliefs about structure
A critical aspect of modeling network growth using motifs is specifying what
types of motifs are evolving the graph. Subgraph isomorphism provides a means
for calculating the frequency of these motifs. It is still necessary, however,
to specify how these discrete counts are used to generate the beliefs about
the network’s evolution. A straightforward way to do so is to simply define a
discrete probability mass function over these counts.
Given $S$, define a probability mass function (PMF) such that
$\sum_{i\in S}Pr(X=i)=1$, where $i$ is an element of the tuple $S$; that is, the
probabilities assigned to the elements of $S$ sum to one.
As the number of elements in $S$ is dependent on $\tau$, it is not necessary
that the PMF relate exclusively to the elements of $S$. For the purposes of
models specified in this paper the PMF defined are both exclusive to this set,
i.e., only account for motifs defined by $\tau$. For example, recall the
simple set of motifs described for $\tau=3$. In this case, a GMM would not
require a PMF that satisfied the above requirement for the complete graph mode
of four nodes because this motif would be excluded from $S$ by definition.
This allows for a large set of possible PMF to determine the probability a
given motif will enter the network. This function may rely explicitly on the
subgraph isomorphism counts, wherein zero probability mass is defined for any
motif with no subgraph isomorphism in the base structure. Alternatively, it is
also possible to specify a PMF that models the probability of motifs as a
discrete probability distribution over all elements of $S$. Below I describe
two examples of PMF that could be used in a GMM. The first is an explicit
function over the elements in $S$, while the second function provides positive
probability mass for all elements of $S$. In the latter PMF, regardless of
whether a given motif was observed as subgraph isomorphisms in the base
structure it may still have positive probability mass.
$F(i)=\frac{S_{i}}{\displaystyle\sum_{n=1}^{|S|}S_{n}}$ (1) Figure 2: Explicit
PMF for motif probability
Equation 1 provides a discrete probability mass over $S$. This function states
that the probability $i_{n}$ will be the next structural component of $G$ is
the number of subgraph isomorphisms found for $i_{n}$ in $G$, normalized by
the total number of subgraph isomorphisms counted $\forall i\in S$. $F$ thus
provides the necessary prior beliefs to generate new structure in $G$. Again,
this function will assign zero probability mass to any motifs that are not
observed as subgraph isomorphisms in the base structure. This can be
problematic, as it presumes that certain motifs will never enter a graph. This
also clearly limits the possible networks it can model using this PMF. In
other cases, however, this limitation may be necessary. For example, in
bipartite networks certain motif structures cannot exist in order to maintain
the bipartite structure.777Bipartite networks are defined as having two mutually
exclusive node sets, wherein edges can only be formed between nodes from
different sets.
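A direct translation of Eq. (1) into code is straightforward; the minimal sketch below simply normalizes the counts in $S$, so unobserved motifs receive zero probability mass.

```python
def explicit_pmf(S):
    """Eq. (1): probability of each motif is its count divided by the total count."""
    total = float(sum(S))
    return [count / total for count in S]

probs = explicit_pmf(S)  # S as computed in the earlier sketch
```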
It may also be useful, therefore, to have a PMF that assigns positive
probability to all motifs. Here, I utilize the natural ordering of elements in
$S$ by their structural complexity to fit a canonical discrete probability
distribution to the set of motifs. Specifically, I define an alternative PMF
for the elements of $S$ in terms of the Poisson distribution in Equation 2
below.
$F(i;\lambda)=\frac{\lambda^{i}e^{-\lambda}}{i!}$ (2) Figure 3: Poisson PMF
for motif probability
In this specification the “mean” of the distribution, represented by the shape
parameter $\lambda$, is the mean of all motif counts in $S$. The natural
ordering of motifs by complexity fit the motivation of the Poisson
distribution to model event counts. Here I consider the occurrence of
increasingly complex motifs within a given base structure as an increasingly
rare event. Likewise, the most likely motifs to enter a graph may have
probabilities centered around the motif with mean complexity in the base
structure. If these assumptions do not reflect the data generating process
present in the base structure, however, such a specification is misplaced and
an alternative specification of the PMF should be used.
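One plausible reading of Eq. (2) is sketched below under stated assumptions: motifs are indexed by their complexity rank, $\lambda$ is the mean of the motif counts in $S$, and the resulting mass is renormalized over the finite set so it sums to one. These interpretive choices are assumptions for illustration, not necessarily those made in the GMM package.

```python
from scipy.stats import poisson

def poisson_pmf(S):
    """Eq. (2) over the motifs in S, renormalized to a proper finite PMF."""
    lam = sum(S) / float(len(S))                    # mean of all motif counts in S
    raw = [poisson.pmf(rank, lam) for rank in range(len(S))]
    total = sum(raw)
    return [mass / total for mass in raw]
```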
It is important to note that these, or any PMF defined over $S$, have a direct
effect on the nature of the GMM specified. The function defined in Equation 1
requires the fewest assumptions about the probability of motifs in the
model. It relies explicitly on subgraph isomorphism counts. This, however, is
limiting and alternative methods may be desirable. As such, it may be useful
to define a PMF from the canon of discrete probability mass functions. I have
done this using the Poisson distribution in Equation 2. In each case, the
given assumption must be relevant to the network growth process being modeled.
Once probabilities are defined over $S$, the next step in specifying a GMM is
to define methods for adding these structures to the graphs. This function
must map motifs from $S$ into the base structure. As before, this function can
take many forms.
### Generating new structure
The next step in the model is to draw some motif from $S$ using the
probability distribution and add it to the network structure by some growth
rule. This rule is denoted $R(\cdot)$ and is defined as a mapping
$R:i_{n}\rightarrow G$, which is restricted only by the graph theoretic
constructs assumed by $G$. Specifically, the decision rule must be applicable
to the fundamental constructs of $G$ and subgraph elements of $S$, but is
otherwise open to the particularities of a model’s design. For example, a
growth rule cannot assume a multigraph as input because this graph type is
restricted from the GMM framework. In the following section a simple growth
rule is defined.
After each iteration of growth the process for forming structural beliefs is
repeated, and the probability distribution recalculated. The continual
updating of beliefs as the network grows allows for a certain degree of path
dependence in the model, as probability mass may converge to the most likely
motifs as the network grows. This may or may not be viewed as an advantage of
the model, but future versions will allow for both static and dynamic
probability calculations over $S$. This process continues until the model has
satisfied some termination rule. This is denoted as $T(\cdot)$, and is
restricted in the same way as $R(\cdot)$.
The means by which the evolutionary process is modeled are intentionally left
open. The GMM framework is meant to support any number of possible growth
models. Beyond the few restrictions described above, the choice of growth and
termination functions are completely at the discretion of the modeler. This is
a dramatic philosophical departure from both the closed form models and
stochastic models discussed earlier. Unlike these strict models the GMM
framework retains flexibility. This allows the technique to describe a large
and nuanced set of graphs. In the following section the algorithmic
implementation of this method is described in detail, and one simple
implementation of a GMM is specified. Before proceeding, however, here is a
review of the core elements of the method for modeling networks using graph
motifs.
The framework for modeling network structure using graph motifs described
above attempts to overcome the limitations of current methods by proposing a
flexible modeling framework wherein a rich set of graphs can be described.
This is motivated by the need to not only describe the static structure of
networks, but also model how they evolve over time. To achieve this, two key
assumptions are made. First, some base structure upon which to form beliefs
about the type of graph being modeled must be provided. Second, the constituent
parts of this base structure—graph motifs—are assumed to represent a useful
proxy for the data generating process present in the network being modeled. Again, these
assumptions are in stark contrast to those of many traditional network models.
1. 1.
Require some base graph $G$ of arbitrary complexity
2. 2.
Given some integer $\tau>1$, the set $I$ contains all single-component
subgraphs formed by $\tau$ nodes
3. 3.
Define $S$ as an ordered $n$-tuple containing all $i\in I$
4. 4.
Define the function $f(i_{n})$ to count the number of subgraph isomorphisms of
$i_{n}\in G$, and a PMF over all elements in $S$
5. 5.
Draw structure from this probability distribution and add that structure to
the network by some growth rule $R(\cdot)$
6. 6.
Repeat steps 4-5 until some termination rule $T(\cdot)$ is satisfied
Figure 4: The basic steps of the GMM framework
The GMM framework brings with it a different set of limitations, many of which
will be discussed in the conclusion. As much of the model hinges on subgraph
isomorphisms counts, this model requires a sophisticated computational
implementation. In the following section an implementation is described using
the Python programming language to develop the GMM package for graph motif
modeling.
## Algorithmic implementation: the GMM Python Package
Before any implementation of the GMM can proceed, it will be necessary to have
a means for representing complex networks computationally in the Python
language.888For more information on the Python language see
http://www.python.org/ Fortunately, the NetworkX package is a highly-developed
Python package for the creation, manipulation, and study of the structure,
dynamics, and functions of complex networks (Hagberg et al., 2008).999The
NetworkX package exploits existing code from high-quality legacy software in
C, C++, Fortran, etc., is open-source, and fully unit-tested. For more
information on NetworkX see http://networkx.lanl.gov/ NetworkX is capable of
representing graphs of arbitrary complexity, including both node and edge
attribute data. These features make NetworkX ideally suited as the
computational foundation for an algorithmic implementation of the GMM
framework.
The GMM package consists of two object classes. The first is the “gmm” class
itself, which is the essential element of any model. This requires three
arguments: a NetworkX graph object as the model’s base structure, plus special
Python functions serving as the growth and termination rules. With these elements in
place, the “gmm” object can be used to simulate the evolution of the given
base structure. The remaining scaffolding built into this class is in place to
verify that all model parameters are valid to a GMM model, store these
parameters appropriately, and provide functionality for storing and retrieving
information about a given GMM simulation.
Figure 5: Implementation of GMM with dependencies
The second class is “algorithms,” which provides all of the functionality for
properly running a simulation and generating network structure from a given
“gmm” object. This contains all of the functions needed to create a set of
graph motifs, generate beliefs about how that structure enters the model, and
the actual generation of new network structure. With respect to generating
beliefs, the two PMF discussed in the previous section are included by
default. The Poisson function requires SciPy—a third-party Python scientific
computing package—to generate the correct probabilities. NetworkX also
requires this package as a dependency; therefore, its use in the GMM package
does not compound software requirements.101010For more information on SciPy
see http://www.scipy.org/
Finally, as stated previously, the subgraph isomorphism problem is known to be
NP-complete and requires a sophisticated approximation. In this case, the VF2
algorithm—the most commonly used algorithm to evaluate subgraph isomorphism—is
included in NetworkX and is used to perform the necessary calculation for
matching subgraph isomorphism (Junttila and Kaski, 2007; Cordella et al.,
2001). With these two simple classes, it is possible to specify a rich set of
GMM. Figure 5 above illustrates the basic computational framework described
here.
Following the example of high quality scientific Python packages, such as
NetworkX and SciPy, the GMM package is open-source and fully unit-tested. All
of the code is free to inspect and download at this website:
https://github.com/drewconway/gmm, which includes all unit-tests to verify each
function’s execution. The software is also listed in Python’s official package
index at http://pypi.python.org/pypi/GMM. Detailed descriptions of the
algorithms, their requirements, and additional examples are provided in the
GMM package documentation, which can be viewed here:
http://www.drewconway.com/gmm/. In the next section a very basic GMM model is
specified and simulated using this software.
### A simple GMM with random growth
As described, the first steps in specifying a GMM are to determine the base
structure and then the growth and termination rules to be used to simulate
growth. In this example I use the canonical Petersen graph as the base
structure, and two very simple rules.111111The Petersen graph is a ten-node
graph with uniform degree of three. It is a well-studied graph with many known
properties, such as being non-planar. The termination rule will be a “node
ceiling,” whereby the model will terminate growth once the network contains at
least 250 nodes. For growth, a random attachment rule is implemented. When a
graph motif enters the graph, a
random node from the motif will be connected to a random node from the current
base structure. These algorithms are implemented in pseudo-code below.
0: $G$
if $|V(G)|\geq 250$ then
return true
else
return false
end if
Algorithm 1 Pseudo-code “node ceiling” termination rule
0: $G,H$
$G=G[H]$ {Compose $H$ with $G$}
$r_{1}=RANDOM(G)$; $r_{2}=RANDOM(H)$ {Select random nodes from each graph}
$G=EDGE(G,r_{1},r_{2})$ {Create edge}
return $G$
Algorithm 2 Pseudo-code random growth rule
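For concreteness, the two rules above and the basic GMM loop can be realized with plain NetworkX as in the sketch below. This is a hedged reimplementation for illustration (reusing `motif_set`, `motif_counts`, and `explicit_pmf` from the earlier sketches, and assuming integer node labels), not the GMM package's own code.

```python
import random
import networkx as nx

def node_ceiling(G, ceiling=250):
    """Termination rule: stop once the graph holds at least `ceiling` nodes."""
    return G.number_of_nodes() >= ceiling

def random_growth(G, H):
    """Growth rule: compose motif H with G, then join them by one random edge."""
    base_nodes = list(G.nodes())
    # Relabel H so its nodes do not collide with the integer labels already in G
    H = nx.convert_node_labels_to_integers(H, first_label=max(base_nodes) + 1)
    G = nx.compose(G, H)
    G.add_edge(random.choice(base_nodes), random.choice(list(H.nodes())))
    return G

G = nx.petersen_graph()
motifs = motif_set(3)
while not node_ceiling(G):
    S = motif_counts(G, motifs)   # beliefs are recomputed after each iteration (slow for large graphs)
    motif = random.choices(motifs, weights=explicit_pmf(S), k=1)[0]
    G = random_growth(G, motif)
print(G.number_of_nodes(), G.number_of_edges())
```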
This example is not a model of any particular network growth mechanism. It is
a useful example in that it shows the power of the GMM framework even with
simple rules. The Petersen graph is made of all closed motifs, i.e., there are
no pendants or pendant chains present in the graph. As such, the random growth
rule connects these structures by single edges. This mechanism will create
simple chains of whatever motifs are drawn from the probability distribution.
Figure 6 below illustrates this, with the Petersen graph shown at the left,
and the resulting simulation on the right. The elongated structure in the
right panel shows the chains of motifs growing in different directions, as the
random selection of nodes caused growth to occur along several paths.
(a) Petersen graph as base structure (b) Simulated GMM results
Figure 6: Result of simple GMM with random growth rule
Using this same framework it is possible to model much more complex networks.
In the following section, three classic random graph models are recovered
using the graph motif modeling method: the Erdős-Rényi binomial random graph,
the Watts-Strogatz “small world” model, and the Barabási-Albert preferential
attachment model. This exercise is meant as a proof of concept for GMM, which
highlights many strengths and weaknesses of the technique.
## Recovering classic random graph models with GMM
To show the applicability of the GMM framework for modeling the growth of
networks over time I propose three specifications that attempt to recover
classic random graph models. Each of these models has already been mentioned
in earlier sections, and all are excellent benchmarks for GMM. A primary
reason for their utility is that each of the classic models described in this
section attempts to model distinctly different network growth mechanisms. As
the experiments will illustrate, one of the most powerful features of the GMM
technique is the ability to describe many different growth mechanisms within a
single modeling framework.
The first model these experiments attempt to recover is the most classic of
all random graph models: the Erdős-Rényi (ER) binomial random graph model
(Erdős and Rényi, 1959). The model specification is very simple: given some
number of nodes $n$, any two nodes from that set form an edge with probability
$p$. This stochastic process generates the random graph. If $p=0.5$, then for
each pair of nodes a coin toss determines whether those nodes form a
connection. The model is referred to as a binomial random graph because it
produces networks with degree distributions that fit binomial distributions
for the given $p$ and $n$.
To specify a GMM that will recover this model an appropriate base structure
and growth rule will be needed. The termination rule in this case will be a
simple “node ceiling” because the ER models networks using a fixed number of
nodes. In fact, all of the classic random graph models discussed in this
section use a fixed number of nodes. As such, a node ceiling termination rule
is used in all GMM described here. The growth rule for a GMM equivalent of the
ER model closely follows the random growth rule specified in Algorithm 2. The
difference is that in this case there is an additional parameter $p$, which is
the probability that the motif entering the network will form a tie to each of
the nodes in the base graph. For each node in the motif and each node in the
base structure, draw a random value from a uniform distribution on the unit
interval. If that value is less than or equal to $p$, create an edge between
the node in the motif and the node in the base structure. As before, a
pseudo-code implementation of this algorithm is provided in Algorithm 3 below.
0: $G,H,p$
$G=G[H]$ {Compose $H$ with $G$}
for $i$ in $H$ do
for $j$ in $G$ do
$ran=RANDOM(low=0,high=1)$ {Draw a random value from a uniform distribution}
if $ran<=p$ then
$G=EDGE(G,i,j)$ {If random draw less than $p$, create edge}
end if
end for
end for{For each node in $H$ test if it will connect to each node in $G$}
return $G$
Algorithm 3 Pseudo-code ER growth rule
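A sketch of this growth rule in Python, under the assumption that each incoming motif node tests a tie to every node that was already in the base structure, could read:

```python
import random
import networkx as nx

def er_growth(G, H, p=0.5):
    """Attach motif H to base G, creating each motif-base edge with probability p."""
    base_nodes = list(G.nodes())      # nodes present before the motif enters
    G = nx.compose(G, H)              # compose H with G
    for i in H.nodes():
        for j in base_nodes:
            if random.random() <= p:  # Bernoulli(p) trial per node pair
                G.add_edge(i, j)
    return G
```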
(a) E-R ($p=0.5$, $n=50$)
(b) GMM ($p=0.5$, $n=50$)
(c) E-R ($p=0.5$, $n=75$)
(d) GMM ($p=0.5$, $n=75$)
(e) E-R ($p=0.5$, $n=100$)
(f) GMM ($p=0.5$, $n=100$)
Figure 7: Degree distribution and binomial fit for both classic Erdős-Rényi
(ER) models and their GMM equivalents. The graphs in the left column represent
degree distributions for classic models, all with $p=0.5$ and varying sizes
$n=\{50,75,100\}$. In the right column are the degree distributions for
simulated GMM graphs using a growth rule that attempts to mimic the E-R model.
The grey bars are the densities of observed degrees in each network, and the
connected red points are the fitted binomial densities. In each case, $p=0.5$
and the base structure is a randomly generated ER graph with $n_{base}=n-25$.
For example, in panel (b) the base structure for the GMM was a random E-R graph
with $p=0.5$ and $n=25$. This pattern is consistent through all GMM simulations.
To run this experiment I generated four classic ER random binomial networks.
These networks were generated using the built-in functionality of NetworkX.
All subsequent classic models are also generated using this software. For all
networks $p=0.5$, but the number of nodes in each network increases from 25 to
100 at intervals of 25. These networks are used as the benchmarks to evaluate
the GMM simulations that follow.
To evaluate the results from the GMM simulations, a measure of their ability to
recover the data generating process described by the ER model will be needed.
The ER model describes a purely random data generating process. With $p=0.5$,
each node has an equal probability of creating an edge to every other node.
The degree distribution of the resulting networks, therefore, should follow a
binomial distribution where the number of trials is the number of nodes minus
one (no self-loops) and a probability of $0.5$. The evaluation of the
simulation results, and subsequent comparison to classic ER models, is based
on the goodness of fit of the observed degree distributions to theoretical
binomial distributions with the same specification.
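The comparison can be sketched in a few lines: tabulate the observed degree densities alongside the Binomial($n-1$, $p$) densities implied by the ER process. The generator and binning choices below are illustrative, not the exact code used for the experiments.

```python
import numpy as np
import networkx as nx
from scipy.stats import binom

def degree_density_vs_binomial(G, p):
    n = G.number_of_nodes()
    degrees = np.array([d for _, d in G.degree()])
    ks = np.arange(n)                                  # possible degrees 0..n-1
    observed = np.bincount(degrees, minlength=n) / n   # empirical degree density
    theoretical = binom.pmf(ks, n - 1, p)              # binomial density, no self-loops
    return observed, theoretical

G = nx.gnp_random_graph(100, 0.5)
obs, theo = degree_density_vs_binomial(G, 0.5)
```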
Figure 7 illustrates these fits graphically. In the left column of this figure
are the degree distributions for all of the classic ER models, with network
size increasing by 25 from top to bottom. The grey bars in the figure are the
observed degree distributions, and the connected red points are the
theoretical binomial densities. The GMM simulation for an equivalent network
is shown in the second column, with graph sizes matched left to right. As can
be seen from these visualizations, both the classic ER model and the GMM
equivalent produce degree distributions that approximate the theoretical
binomial densities well.
To statistically compare the goodness of fit between the ER and GMM
simulations a series of simple linear regressions are calculated wherein
fitted binomial densities are regressed on observed degree distribution
densities for all graphs. A standard approach to testing the goodness of fit
for count data is to use Pearson's Chi-squared test. In this case, however,
due to an abundance of zeroes in the tails of the observed degree
distributions, this technique is not robust. The linear models can be used to
assess both the quality of the fit via the $R^{2}$ and root mean squared error
(RMSE) values, as well as the quality of the models themselves using AIC. The
results are reported in the table in Figure 8.
Graph | Binom. Density | Std. Error | $R^{2}$ | RMSE | AIC
---|---|---|---|---|---
ER ($n=50$) | 0.9339 | 0.0827 | 0.7880 | 0.0183 | -174.6894
GMM ($n=50$) | 0.9028 | 0.0819 | 0.7798 | 0.0181 | -175.3879
ER ($n=75$) | 0.8712 | 0.0638 | 0.7911 | 0.0133 | -283.8167
GMM ($n=75$) | 0.9717 | 0.0713 | 0.7901 | 0.0149 | -272.6114
ER ($n=100$) | 0.8943 | 0.0766 | 0.6859 | 0.0152 | -342.4095
GMM ($n=100$) | 0.9939 | 0.0416 | 0.9032 | 0.0082 | -412.7616
Figure 8: This table reports the results of several linear models wherein
fitted binomial densities are regressed on observed degree distribution
densities. This is done to measure the quality of fit for the observed degree
distribution to a theoretical binomial for all graphs. The coefficients for
the theoretical densities are reported, with standard error. The $R^{2}$ and
root mean squared error (RMSE) values are reported as measures of the quality
of fit, and AIC as a measure of model quality. All models are statistically
significant at the 0.99 level.
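A compact sketch of this goodness-of-fit calculation is given below, assuming statsmodels is available (it is not a dependency of the GMM package): the theoretical binomial densities are regressed on the observed densities and the coefficient, $R^{2}$, RMSE, and AIC are read off the fitted model.

```python
import numpy as np
import statsmodels.api as sm

def fit_quality(observed, theoretical):
    """Regress fitted binomial densities on observed degree densities."""
    X = sm.add_constant(observed)             # observed densities as regressor
    res = sm.OLS(theoretical, X).fit()        # theoretical densities as response
    rmse = np.sqrt(np.mean(res.resid ** 2))   # root mean squared error
    return {"coef": res.params[1], "r2": res.rsquared,
            "rmse": rmse, "aic": res.aic}
```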
The table in Figure 8 reinforces the observation from Figure 7 that all
simulations produce good fits. In addition, these values provide some
numerical basis for comparing the results of the ER and GMM simulations. The
most important observation from this table and the preceding graphs is that
the GMM is able to successfully recover the classic ER model. In all cases,
the quality of fit for observed degree distribution is at least as good in the
GMM simulations as the ER. Only in the experiments where $n=100$ is the GMM
model noticeably better. This difference, however, would likely be diminished
if cumulative comparisons were made across many simulations with the same
parameterization.
These results are encouraging, as the GMM was able to recover very easily the
results from the ER model. That said, the ER does not describe a network
evolutionary process that is observed naturally. People do not form
relationships at random. A better test of the GMM is to attempt to recover a
classic model that is designed to describe such a natural process. The Watts-
Strogatz (WS) “small-world” model attempts to do this, and is the focus of the
second set of experiments.
As mentioned in the introduction, the WS model is motivated by the observation
that social networks often exhibit two structural features: short average path
length between nodes and a high level of localized clustering (Watts and
Strogatz, 1998). Short average path lengths indicate a network that is
relatively dense, where any two nodes have few intervening connections between
them. High localized clustering is observed in networks with many cliques,
where close nodes are densely connected. Both of these features lead to social
interactions that are associated with a small world effect, hence the name.
To produce a network of size $n$ with these structural features, the WS model
assumes a regular lattice of size $n$ and mean degree $k$. This regular
lattice is—in effect—the base structure used in the WS model. In order to
produce the small world effect an additional parameter $p$ is incorporated.
This is the probability that each edge in the regular lattice will be “re-
wired” to some other random node in the network. That is, given a regular
$n,k$-lattice, the model iterates over each edge and rewires it with
probability $p$.
This classic model achieves the desired result. To measure localized
clustering, WS uses the mean clustering coefficient for each graph. Average
shortest path length is used as the second measure, and both metrics are
normalized by the equivalent measure for a regular $n,k$-lattice with no
re-wiring. This latter transformation is used to make the results comparable
across variation in $p$ for fixed $n$ and $k$. An interesting footnote to the
original WS paper is that results were only reported for simulations on
networks with 10,000 nodes. This is a relatively large number of nodes, given
that the theoretical motivation is to model smaller social groups.
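The two statistics just described are straightforward to compute with NetworkX; the sketch below normalizes both by the unrewired lattice, as in the WS paper. Using NetworkX's built-in small-world generator as the "classic WS" benchmark is an assumption about how those networks were produced.

```python
import networkx as nx

def normalized_ws_stats(n, k, p):
    """Return C(p)/C(0) and L(p)/L(0) for a Watts-Strogatz graph."""
    lattice = nx.watts_strogatz_graph(n, k, 0)            # regular lattice, no rewiring
    rewired = nx.connected_watts_strogatz_graph(n, k, p)  # rewired small-world graph
    c_ratio = nx.average_clustering(rewired) / nx.average_clustering(lattice)
    l_ratio = (nx.average_shortest_path_length(rewired) /
               nx.average_shortest_path_length(lattice))
    return c_ratio, l_ratio

c, l = normalized_ws_stats(n=100, k=4, p=0.01)  # k=4 keeps the ring lattice even-degree
```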
Figure 9: The above graph compares the classic Watts-Strogatz (WS) model with a
GMM equivalent using both the normalized clustering coefficient ($C(p)/C(0)$)
and the normalized characteristic path length ($L(p)/L(0)$). These values have
been normalized by the clustering coefficient and characteristic path length of
a regular lattice with $n=100$ and $k=3$, as in the original WS paper. Each
point in the graph represents the mean value of the metric over 20 simulations
for both the classic WS and the GMM simulations. For all simulations $n=100$
and $k=3$. The y-axis represents variation in $p$ on a log-scale; this is the
probability of being rewired in the classic model, and the probability of
connecting to the main component in the GMM. Red points are clustering
coefficients, and blue points are characteristic path lengths. Solid lines
connect points from the classic WS model, while dotted lines connect points
from the GMM simulations.
In this experiment a GMM is specified that attempts to recover the WS model
within the framework of motif modeling. The original WS model does not allow for
new nodes to enter the network. It can be argued that this design poorly
describes real social networks. Regardless, the GMM specified for these
experiments attempts to remain as faithful to the original design as possible
by using the $k$ parameter to maintain mean degree in the base structure and
$p$ as the probability of connecting to the main component. The pseudo-code
for the GMM implementation is available in Appendix A.
The results of these simulations are reported in Figure 9. This graph follows
the specification used in the original WS paper, where both $n$ and $k$ are
fixed in all simulations and $p$ varies between zero and one on a log-scale.
Unlike the original paper, however, $n=100$ and $k=3$ as compared to the
values of $n=10,000$ and $k=10$. As mentioned, a 10,000 node network is very
large and not necessarily representative of the small-world networks this
model is meant to describe. Likewise, initializing the model with a regular
lattice with mean degree 10 assumes a lot of structure at the outset. The
smaller and sparser networks in this specification are a better benchmark for
comparing the GMM simulations with the classic WS results.
The results are not as straightforward as the ER simulations, but are quite
telling. Each point in Figure 9 represents the average of that value for that
network specification over 20 simulations. Thirteen variations of $p$ were
used in this experiment; therefore, the figure represents the results of 260
simulations. The solid lines correspond to the classic WS model, and the dashed
lines to the GMM simulations. Red dots represent mean values of the normalized
clustering coefficient, and blue dots the mean characteristic path length.
There are two striking observations from this data. First, the classic WS
model overall performs poorly at these small scales. The clustering
coefficients in the WS model are flatlined at zero, almost regardless of the
value of $p$. Moreover, for very low $p$ the average shortest path length in
the original WS model remains relatively high until $p>0.01$. By contrast,
the GMM models perform much better at generating networks with small-world
structure.
In fact, in all cases the GMM simulations outperform the classic WS model in
this experiment. The clustering coefficients are higher for all simulations,
and gradually increase as $p$ increases, unlike the WS model, wherein the
clustering coefficient is flat at zero. Interestingly, the average shortest
path length in the GMM simulations is low and only slightly affected by
increases in $p$. One of the reasons that the GMM results may be performing
better is that the framework is a much more natural way to model a small-world
structure. Rather than assuming a fixed initial structure and randomly
permuting edges, the GMM is effectively bringing in clustered groups through
the motif. The motifs create high localized structure, which in turn decreases
the average shortest path for networks generated this way.
As before, a key strength of the GMM modeling framework is its flexibility.
With the ability to specify tailored growth rules, GMM can describe a very
robust set of network evolutionary mechanisms. In this case, using graph
motifs to model growth is a much more intuitive way of generating small world
networks. In the third and final experiment a GMM is specified which attempts
to recover the classic Barabási-Albert (BA) preferential attachment model.
The BA model is motivated by the strong empirical observations of heavy-tailed
degree distributions in large networks. This comes from very few nodes acting
as massive hubs in large networks. The vast majority of nodes have few
connections, while a minority have very many. Networks with these features are
often referred to as “power-law networks,” because their degree distributions
are so heavily skewed. Some examples of networks that have been shown to
exhibit these features are the World Wide Web (Albert et al., 1999), protein
interaction networks (Ito et al., 2000), email networks (Ebel et al., 2002),
and country-to-country war dyads (Roberts and Turcotte, 1998).
In these networks nodes that have many edges tend to get more new edges faster
than those with fewer edges. This has been described as a “rich get richer”
dynamic, or preferential attachment. These nodes have some inherent attribute
that makes them more likely to get new connections. In the context of the
World Wide Web Google is a massive hub, primarily because other web sites want
to connect to Google in order to be found. In war, some countries are much
more aggressive than others and tend to get into more militarized disputes.
This dynamic is the centerpiece of the BA model.
The classic model takes two parameters: $n$ and $m$. The first, $n$, is simply
the number of nodes in the final graph. The $m$ parameter is how many edges
each new node will make when entering the network. The model always begins
with a simple base structure, usually a three node line graph. Then, nodes
iteratively enter the network forming $m$ ties to the nodes in the base graph
as a function of the number of edges each node in the base graph already has.
That is, nodes with more edges have a higher probability of forming ties with
new nodes, i.e., preferential attachment based on degree. With this framework
the BA model produces networks with heavy-tailed degree distributions.
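A minimal sketch of degree-preferential attachment in this spirit is given below; it is not the BA reference implementation, and the degree-weighted sampling helper is an illustrative choice.

```python
import random
import networkx as nx

def preferential_targets(G, m):
    """Pick m distinct nodes of G with probability proportional to degree."""
    nodes, degrees = zip(*G.degree())
    weights = [d + 1e-9 for d in degrees]  # guard against an all-zero start
    targets = set()
    while len(targets) < m:
        targets.add(random.choices(nodes, weights=weights, k=1)[0])
    return targets

def ba_growth(G, new_node, m=3):
    """Add one node with m degree-weighted edges to the existing graph."""
    targets = preferential_targets(G, m)
    G.add_node(new_node)
    for t in targets:
        G.add_edge(new_node, t)
    return G

# The classic benchmark networks can be generated directly with NetworkX.
G_classic = nx.barabasi_albert_graph(n=100, m=3)
```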
(a) $n=100$, $m=1$
(b) $n=100$, $m=3$
(c) $n=100$, $m=5$
(d) $n=100$, $m=7$
Figure 10: The above graphs compare kernel density plots of the power-law
scaling parameters calculated from the degree distributions of networks
generated with the classic Barabási-Albert (BA) random graph model with those
from a GMM equivalent. In all simulations $n=100$, and starting in the
upper-left panel $m=1$. Moving counter-clockwise through the panels, $m$
increases to 3, 5, and 7. These parameters were estimated graphically. In each
graph the red curve represents the density of the scaling parameters for 100
simulations of the classic BA model for that parameterization. The blue curves
are the cumulative results of GMM simulations with the same specifications,
but with varying base sizes. Each blue curve contains results from 25 GMM
simulations using BA models as the base with increasing size,
$n=\{20,40,60,80\}$.
(a) $n=100$, $m=1$
(b) $n=100$, $m=3$
(c) $n=100$, $m=5$
(d) $n=100$, $m=7$
Figure 11: The above graphs compare kernel density plots of the alpha
parameters from power-law fits to the degree distributions of networks
generated with the classic Barabási-Albert (BA) random graph model and those
from a GMM equivalent. In all simulations $n=100$, and starting in the
upper-left panel $m=1$. Moving counter-clockwise through the panels, $m$
increases to 3, 5, and 7. Unlike in Figure 10, where the scaling parameter is
estimated graphically, the alpha values here are estimated using maximum
likelihood. In each graph the red curve represents the density of the scaling
parameters for 100 simulations of the classic BA model for that
parameterization. Curves match Figure 10.
Specifying a GMM that recovers the BA model is simple. Using a small network
generated with the classic BA model as the base, the GMM takes an $m$ parameter
and connects the motif $m$ times to the base structure as a function of the
nodal degree in the base structure. As in the WS experiment, the size of the
networks in both the BA and GMM simulations is held constant at $n=100$.
Variation is done on the $m$ parameter, starting with $m=1$ and increasing to
3, 5, and 7. With the classic BA models, 100 simulations were generated for
each parameterization. For the GMM, 25 simulations were conducted using BA
models as the base with increasing size, $n=\{20,40,60,80\}$. This variation
was included to investigate whether the quality of the GMM results was a
function of the size of the base structure. A pseudo-code implementation of
the growth rule used in the GMM is included in Appendix A.
The ability of these models to produce power-law-like networks is tested by
fitting the resulting degree distributions to a power-law. A common method for
fitting these degree distributions is to do so graphically. This is done by
fitting a line to the normalized degree distributions in log-scale. The slope
of that line is an estimate of the power-law scaling parameter $\alpha$. Using
this method an estimate of $\alpha>2$ is considered a “good fit” to a power-
law. This is the method used in the original BA piece, and the results of this
test on the experimental networks are reported in Figure 10.
To compare the results from the BA model and the GMM simulations the scaling
parameters estimated for all graphs are grouped together and kernel density
plots are used to illustrate the frequency of scaling parameters. The red
curves in Figure 10 represent the densities of scaling parameters from 100 BA
simulations, and the blue curves the 100 GMM simulations with varying base
graph sizes. Starting in the upper-left panel $m=1$, and moving clockwise
through the panels $m$ increases to 3, 5, and 7. The results of these tests
highlight both strengths and weaknesses of the GMM framework in this context.
Panel (a) of Figure 10 shows that the GMM produces very similar scaling
parameters to the BA model. In this case $m=1$, so only one node from the
motif is being connected to the base structure during the GMM simulation. It
is also clear that even in panel (a) the GMM simulation produces more
variation in the scaling parameter. Moving through the other panels, the
contrast is much starker. The GMM is producing more varied and less similar
scaling parameters than the classic BA model. The peaks in the blue curve seem
to indicate that variation in the size of the base structure for the GMM does
affect the results. Further numerical investigation of these results confirms
this. What is interesting, however, is that despite this variation the GMM
simulations are producing better results in terms of power-law fit.
When directly comparing the results of the scaling parameters using the
graphical estimation technique the GMM does better. But, considering
$\alpha>2$ as the rule of thumb, in fact neither model is producing good
results. This may be because of the relatively small sizes of the simulated
networks. At this scale the GMM is on average producing networks that have a
better fit by this metric. Using a graphical estimate of a power-law fit,
however, is undesirable: graphical estimates, like the one used in the classic
BA piece and reported in Figure 10, are known to produce systematically biased
estimates (Clauset et al., 2009). A more precise
method is to use maximum-likelihood to estimate the scaling parameter. Such a
method for fitting power-laws has been proposed (Newman, 2005), and is used to
re-estimate the scaling parameters for all of the networks in Figure 11.
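Both estimators are easy to sketch. The graphical estimate fits a line to the log-log degree density; the maximum-likelihood estimate follows the continuous approximation $\alpha = 1 + n/\sum_{i}\ln(x_{i}/x_{\min})$ discussed by Newman (2005) and Clauset et al. (2009). Setting $x_{\min}=1$ below is an illustrative simplification.

```python
import numpy as np
import networkx as nx

def alpha_graphical(G):
    """Slope of a log-log linear fit to the degree density."""
    degrees = np.array([d for _, d in G.degree() if d > 0])
    values, counts = np.unique(degrees, return_counts=True)
    density = counts / counts.sum()
    slope, _ = np.polyfit(np.log(values), np.log(density), 1)
    return -slope

def alpha_mle(G, x_min=1):
    """Continuous maximum-likelihood estimate of the scaling parameter."""
    degrees = np.array([d for _, d in G.degree() if d >= x_min])
    return 1.0 + len(degrees) / np.sum(np.log(degrees / x_min))

G = nx.barabasi_albert_graph(100, 3)
print(alpha_graphical(G), alpha_mle(G))
```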
The most immediate difference between the results of the graphical estimates
and the maximum likelihood is the considerable reduction in variance of the
scaling parameter in the GMM simulations. Moreover, estimates using the MLE
method significantly increase the variance for scaling parameters from BA
networks. Unlike the results from the graphical estimates, in this case the
GMM simulations are producing networks with very similar scaling parameters.
This can be seen in the tight overlaps of the curves in all of the panels
in Figure 11. Finally, these re-estimates reinforce the observation that the
size of the base structure affects results in the GMM. This is particularly
noticeable in panels (c) and (d), where there are several small curves for the
GMM simulations.
The results of these experiments are very encouraging. As a proof of concept
exercise, the GMM used here were all able to successfully recover the data
generating processes described by these three classic models. In addition, in
many cases the GMM were able to produce networks that better achieved the
structural features described by the classic models. These results also reveal
a potential weakness of the GMM. The final experiment highlights the
sensitivity of the GMM to the size of the base structure.
## Conclusions
In this paper I have introduced an alternative technique for modeling network
evolution using graph motifs, i.e., the GMM. This method differs greatly from
current models in its core assumptions. First, the GMM framework assumes
networks evolve through the addition of nodes with exogenous structure. When
new actors enter a network they do so with some amount of preexisting
structure. This structure should therefore be used to model network growth.
Second, future structure in a network will resemble current structure. This
assumption relies on the observation that networks exhibit considerable
fractal scaling as they increase in complexity. To estimate this, the GMM uses
counts of subgraph isomorphisms of a set of graph motifs to measure the
frequency of various network structures within a given graph. Using these
assumptions, the GMM is constructed as a computational framework for
simulating network growth.
The basic GMM framework has been implemented as the GMM package in the Python
programming language. Relying on high-quality scientific computing packages
already available in Python, this package allows for the specification of a
near boundless set of GMM to model any number of networks. To test this
framework various GMM are specified that attempt to recover three classic
random graph models: the Erdős-Rényi binomial random graph, the Watts-Strogatz
“small world” model, and the Barabási-Albert preferential attachment model. In
each case the GMM is able to not only successfully recover these models, but
also often outperforms them. These results are very encouraging for the use of
GMM in modeling many different types of network growth dynamics.
This work has many potential contributions to political science. As stated,
much of the data studied in the social sciences can be modeled as a network.
More specifically, this data often represents relationships among people.
While there have been great advances in techniques for modeling these
relationships, current methods lack a flexible framework for modeling network
evolution. The dynamics of human interactions are both complex and subtle.
Attempting to force these complexities into overly simple models massively
limits the types of networks that can be studied. By using a more flexible
framework, such as the GMM proposed here, social science researchers may be
able to specify models that capture these elusive dynamics. Experiments like
the ones described above can be used to explore the ramifications of these
dynamics, and how they affect social outcomes.
It is important to note, however, that the technique proposed here also has
many limitations. As was drawn out in the experiments, the results of GMM
appear sensitive to the size and structure of the initial base graph.
Additional tests must be conducted to determine the nature of this
sensitivity. Also, in its current form the growth and termination rules are
treated as fixed for the entire simulation. Conceptually, this is useful
because it simplifies the construction of a model and provides a basis for
interpretation. In some cases, however, it may be useful to “endogenize” these
rules given the initial state of the base structure. Consider the case where
the base structure is unknown to the modeler at the outset. Here, we may want
rules that emerge as the result of this structure given the context of our
modeling task. In this case endogenous rule generation will be necessary.
Furthermore, the notion of “learning rules” is a potential extension. Growth
rules could be tuned given the evolution of the network through the iterations
of the model. These adaptations, however, make model interpretation more
difficult.
Finally, a more general observation is that the GMM is poorly suited to
modeling non-human networks. There are many networks for which exogenous growth may be a
contradiction, such as physical networks like transportation or
telecommunication. Also, the evolution of some biological networks may be
poorly modeled using graph motifs, such as protein-interaction or neural
networks. In cases where network evolution via graph motifs is not feasible
then the GMM should not be applied. That said, the primary motivation for this
work is to attempt to establish a framework for modeling the evolution of
human social networks.
Beyond these theoretical limitations, there are also some technical
limitations. A linchpin of the model is the need to count subgraph isomorphisms
in order to form beliefs about future network structure. As stated, this
problem is known to be NP-complete, and therefore the method scales very
poorly as either $\tau$ or the complexity of the base structure increases. In
practice both of these model parameters must be relatively small to compute
results in a reasonable amount of time. Improving the speed of the VF2
algorithm is a computer science problem, and therefore beyond the scope of
this research. With current technology, however, there are methods for
improving runtime as the networks scale. First, rather than recomputing the
probability distribution at every iteration, it could remain static, meaning
that subgraph isomorphisms would only need to be calculated once. Additionally,
these counts could easily be done in parallel in a high-performance computing
environment. Future versions of the GMM package will allow for such distributed
computing.
Additional improvements need to be made to future versions of the software,
including better accounting for simulation statistics such as runtime, growth
metrics, probability mass convergence, and iterative changes to the base
network. For example, given the potential path dependence of the model on the
base structure, it would be very useful to have some knowledge of the
distribution of motifs used in a given simulation. This information could be
used to compare and interpret results from multiple runs of the same model.
Finally, drawing on the ERGM literature, the quality of “model fit” within the
context of the GMM must be considered. A large advantage of ERGM models is the
ability to compare model fitness. In order to more fully understand how these
two modeling techniques differ, direct comparisons must be made across a large
class of networks. This type of research will therefore constitute a large
portion of future work.
## References
* Albert and Barabási (2002) Albert, R. and A. Barabási (2002). Statistical mechanics of complex networks. Reviews of Modern Physics 74(1), 47. Copyright (C) 2009 The American Physical Society; Please report any problems to prola@aps.org.
* Albert et al. (1999) Albert, R., H. Jeong, and A.-L. Barabási (1999, September). Diameter of the world-wide web. Nature, 130–131.
* Barabási and Albert (1999) Barabási, A. and R. Albert (1999, October). Emergence of scaling in random networks. cond-mat/9910332. Science 286, 509 (1999).
* Bollobás (2001) Bollobás, B. (2001). Random Graphs (Second ed.). Cambridge University Press.
* (5) Carley, K. M., J. Reminga, and N. Kamneva. Destabilizing terrorist networks. Institute for Software Research. Paper 45.
* Clauset et al. (2009) Clauset, A., C. R. Shalizi, and M. E. J. Newman (2009). Power-law distributions in empirical data. SIAM Review 51, 661–703.
* Cordella et al. (2001) Cordella, L. P., P. Foggia, C. Sansone, and M. Vento (2001). An improved algorithm for matching large graphs. In Proceedings of the 3rd IAPR TC-15 Workshop on Graph-based Representations in Pattern Recognition, pp. 149–159.
* Easley and Kleinberg (2010) Easley, D. and J. Kleinberg (2010). Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge University Press.
* Ebel et al. (2002) Ebel, H., L.-I. Mielsch, and S. Bornholdt (2002, Sep). Scale-free topology of e-mail networks. Phys. Rev. E 66(3), 035103.
* Erdős and Rényi (1959) Erdős, P. and A. Rényi (1959). On random graphs, I. Publicationes Mathematicae (Debrecen) 6, 290–297.
* Fowler (2006) Fowler, J. H. (2006). Connecting the congress: A study of cosponsorship networks. Political Analysis 14(4), 456–487.
* Fowler et al. (2011) Fowler, J. H., M. T. Heaney, D. W. Nickerson, J. F. Padgett, and B. Sinclair (2011). Causality in political networks. American Politics Research 39(2), 437–480.
* Freeman (2004) Freeman, L. C. (2004). The Development of Social Network Analysis: A Study in the Sociology of Science. Empirical Press.
* Hafner-Burton et al. (2009) Hafner-Burton, E. M., M. Kahler, and A. H. Montgomery (2009). Network analysis for international relations. International Organization 63(03), 559–592.
* Hafner-Burton and Montgomery (2006) Hafner-Burton, E. M. and A. H. Montgomery (2006). Power positions: International organizations, social networks, and conflict. The Journal of Conflict Resolution 50(1), pp. 3–27.
* Hagberg et al. (2008) Hagberg, A. A., D. A. Schult, and P. J. Swart (2008, August). Exploring network structure, dynamics, and function using networkx. In G. Varoquaux, T. Vaught, and J. Millman (Eds.), Proceedings of the 7th Python in Science Conference (SciPy2008), pp. 11–15.
* Handcock (2003) Handcock, M. S. (2003). Dynamic Social Network Modeling and Analysis, Chapter Statistical Models for Social Networks: Inference and Degeneracy, pp. 229–240.
* Hegghammer (2006) Hegghammer, T. (2006). Terrorist recruitment and radicalization in saudi arabia. Middle East Policy 13(4), 39–60.
* Hunter et al. (2008) Hunter, D. R., M. S. Handcock, C. T. Butts, S. M. Goodreau, and M. Morris (2008, 5). ergm: A package to fit, simulate and diagnose exponential-family models for networks. Journal of Statistical Software 24(3), 1–29.
* Hämmerli et al. (2006) Hämmerli, A., R. Gattiker, and R. Weyermann (2006). Conflict and cooperation in an actors’ network of chechnya based on event data. The Journal of Conflict Resolution 50(2), pp. 159–175.
* Ito et al. (2000) Ito, T., K. Tashiro, S. Muta, R. Ozawa, T. Chiba, M. Nishizawa, K. Yamamoto, S. Kuhara, and Y. Sakaki (2000). Toward a protein interaction map of the budding yeast: A comprehensive system to examine two-hybrid interactions in all possible combinations between the yeast proteins. Proceedings of the National Academy of Sciences 97(3), 1143–1147.
* Jackson (2008) Jackson, M. O. (2008). Social and Economic Networks. Princeton University Press.
* Junttila and Kaski (2007) Junttila, T. and P. Kaski (2007). Engineering an efficient canonical labeling tool for large and sparse graphs. In Proceedings of the Ninth Workshop on Algorithm Engineering and Experiments and the Fourth Workshop on Analytic Algorithms and Combinatorics.
* Kim and Jeong (2006) Kim, D.-H. and H. Jeong (2006). Inhomogeneous substructures hidden in random networks. Physical Review E (Statistical, Nonlinear, and Soft Matter Physics) 73(3), 037102.
* Kim et al. (2006) Kim, J. S., K. I. Goh, G. Salvi, E. Oh, B. Kahng, and D. Kim (2006, May). Fractality in complex networks: critical and supercritical skeletons. Phys. Rev. E 75, 016110 (2007).
* Krebs (2002) Krebs, V. (2002). Uncloaking terrorist networks. First Monday.
* Leskovec et al. (2005) Leskovec, J., J. Kleinberg, and C. Faloutsos (2005). Graphs over time: densification laws, shrinking diameters and possible explanations. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, KDD ’05, New York, NY, USA, pp. 177–187. ACM.
* Maoz et al. (2006) Maoz, Z., R. D. Kuperman, L. Terris, and I. Talmud (2006). Structural equivalence and international conflict. Journal of Conflict Resolution 50(5), 664–689.
* McClurg (2003) McClurg, S. D. (2003). Social Networks and Political Participation: The Role of Social Interaction in Explaining Political Participation. Political Research Quarterly 56(4), 449–464.
* Newman (2005) Newman, M. (2005). Power laws, pareto distributions and zipf’s law. Contemporary Physics 46, 323–351.
* Newman (2003) Newman, M. E. J. (2003). The structure and function of complex networks. SIAM Review 45, 167–256.
* Roberts and Turcotte (1998) Roberts, D. C. and D. L. Turcotte (1998). Fractality and self-organized criticality of wars. Fractals (4), 351–357.
* Robins et al. (2007) Robins, G., P. Pattison, Y. Kalish, and D. Lusher (2007). An introduction to exponential random graph (p*) models for social networks. Social Networks 29(2), 173 – 191. Special Section: Advances in Exponential Random Graph (p*) Models.
* Sageman (2008) Sageman, M. (2008). Leaderless Jihad: Terror Networks in the Twenty-First Century. University of Pennsylvania Press.
* Scholz and Wang (2006) Scholz, J. T. and C.-L. Wang (2006). Cooptation or transformation? local policy networks and federal regulatory enforcement. American Journal of Political Science 50(1).
* Siegel (2009) Siegel, D. A. (2009). Social networks and collective action. American Journal of Political Science 53(1).
* Snijders (2008) Snijders, T. A. B. (2008). Encyclopedia of Complexity and Systems Science, Chapter Longitudinal Methods of Network Analysis. Springer Reference.
* Snijders et al. (2010) Snijders, T. A. B., J. Koskinen, and M. Schweinberger (2010). Maximum likelihood estimation for social network dynamics. Annals of Applied Statistics 4(2), 567–588.
* Snijders et al. (2006) Snijders, T. A. B., P. E. Pattison, G. L. Robins, and M. S. Handcock (2006). New specifications for exponential random graph models. Sociological Methodology 36, pp. 99–153.
* Song et al. (2005) Song, C., S. Havlin, and H. A. Makse (2005). Self-similarity of complex networks. Nature 433(7024), 392–395.
* Steglich et al. (2010) Steglich, C., T. A. Snijders, and M. Pearson (2010). Dynamic networks and behavior: Separating selection from influence. Sociological Methodology 40, 329–393.
* Ullmann (1976) Ullmann, J. R. (1976). An algorithm for subgraph isomorphism. J. ACM 23(1), 31–42.
* Wasserman and Faust (1994) Wasserman, S. and K. Faust (1994, November). Social Network Analysis: Methods and Applications (1 ed.). Cambridge University Press.
* Watts and Strogatz (1998) Watts, D. J. and S. H. Strogatz (1998, June). Collective dynamics of ’small-world’ networks. Nature 393(6684), 440–442. PMID: 9623998.
## Appendix A: Growth rules for experiments
The following pseudo-code describes the growth rules used in the GMM specified
to generate the simulated networks described in the _Recovering classic
models_ section above.
0: $G,H,p$
$G=G[H]$ {Compose $H$ with $G$}
for $i$ in $H$ do
for $j$ in $G$ do
$ran=RANDOM(low=0,high=1)$ {Draw a random value from a uniform distribution}
if $ran<=p$ then
$G=EDGE(G,i,j)$ {If random draw less than $p$, create edge}
end if
end for
end for{For each node in $H$ test if it will connect to each node in $G$}
return $G$
Algorithm 4 Pseudo-code growth rule for Erdős-Rènyi binomial random graph
0: $G,H,k,p$
$G=G[H]$ {Compose $H$ with $G$}
$SHUFFLE(H)$ {Shuffle the nodes in $H$}
for $i$ in $H[0:k]$ do
for $j$ in $G$ do
$ran=RANDOM(low=0,high=1)$ {Draw a random value from a uniform distribution}
if $ran<=p$ then
$G=EDGE(G,i,j)$ {If random draw less than $p$, create edge}
end if
end for
end for{With probability $p$, connect $k$ nodes from $H$ to all nodes in $G$.}
$FULL(G)$ {Ensure that $G$ is fully connected.}
return $G$
Algorithm 5 Pseudo-code growth rule for Watts-Strogatz “small world” model
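A rough Python rendering of Algorithm 5, under the same assumptions as the pseudo-code ($k$ randomly chosen motif nodes each try to connect to every existing node with probability $p$, and the result is forced to stay connected), is sketched below; the final re-connection step is an illustrative stand-in for FULL(G).

```python
import random
import networkx as nx

def ws_growth(G, H, k, p):
    base_nodes = list(G.nodes())
    G = nx.compose(G, H)                        # compose H with G
    chosen = random.sample(list(H.nodes()), min(k, H.number_of_nodes()))
    for i in chosen:                            # k randomly chosen motif nodes
        for j in base_nodes:
            if random.random() <= p:
                G.add_edge(i, j)
    # crude stand-in for FULL(G): bridge stray components to the largest one
    components = sorted(nx.connected_components(G), key=len, reverse=True)
    anchor = next(iter(components[0]))
    for comp in components[1:]:
        G.add_edge(anchor, next(iter(comp)))
    return G
```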
0: $G,H,m$
$G=G[H]$ {Compose $H$ with $G$}
$DEG=DEGREE(G)$ {Calculate degree of nodes in $G$}
for $i$ in $H[0:m]$ do
$EDGE\\_MADE=FALSE$
while $EDGE\\_MADE==FALSE$ do
$ran=RANDOM(low=0,high=1)$ {Draw a random value from a uniform distribution}
$j=RAN\\_NODE(G)$ {Pick a random node in $G$}
if $ran<=DEG[j]/SUM(DEG)$ then
$G=EDGE(G,i,j)$; $EDGE\\_MADE=TRUE$ {If random draw less than normalized degree of $j$, create edge}
end if
end while
end for{Connect $m$ nodes from $H$ to $G$ as a function of degree in $G$}
return $G$
Algorithm 6 Pseudo-code growth rule for Barabási-Albert preferential
attachment model
|
arxiv-papers
| 2011-05-04T19:23:48 |
2024-09-04T02:49:18.603907
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Drew Conway",
"submitter": "Drew Conway",
"url": "https://arxiv.org/abs/1105.0902"
}
|
1105.1026
|
# Strains and pseudo-magnetic fields in circular graphene rings
Nima Abedpour School of Physics, Institute for Research in Fundamental
Sciences, IPM, Tehran 19395-5531, Iran Reza Asgari asgari@ipm.ir School of
Physics, Institute for Research in Fundamental Sciences, IPM, Tehran
19395-5531, Iran F. Guinea Instituto de Ciencia de Materiales de Madrid,
CSIC, Sor Juana Inés de la Cruz 3, E-28049 Madrid, Spain
###### Abstract
We demonstrate that a circular graphene ring under shear stress displays
strong pseudo-magnetic fields. We calculate the pseudo-magnetic field both
from continuum elasticity theory and from molecular dynamics simulations.
Stable wrinkles are induced by shear deformations and lead to an enhancement
of the pseudo-magnetic field. The strong pseudo-magnetic field found here can
be observed by imaging the graphene flake at the atomic level, e.g. through
scanning tunneling microscopy.
###### pacs:
61.48.Gh, 81.40.Jj, 07.55.Db
## I Introduction
Graphene has recently attracted intensive interest as a promising candidate
material for the new generation of electronics and spintronics geim . One of
the exciting aspects of graphene physics is the effect of strain exerted on
graphene samples vozmediano ; pereira ; guinea1 . It has been proposed that
strain can be utilized to generate various basic elements for all-graphene
electronics pereira .
Semiconductor quantum rings have been investigated by a number of groups bayer
; ribeiro ; Retal08 ; YPSR010 ; Hetal10 . In a graphene ring, the spectrum
reveals signatures of effective time-reversal symmetry breaking, and the
spectra are most naturally interpreted in terms of an effective magnetic flux
contained in the ring, even when no real flux is present. Quantum rings can be
considered prototypical devices in mesoscopic physics, as they show one of
the most basic coherence effects, namely the Aharonov-Bohm effect:
oscillations of the transmission as a function of the magnetic flux through
the ring. The reason for these oscillations is the phase difference between
electrons traveling along the different arms of the ring. Furthermore,
Benjamin and Pachos benjamin proposed creating a ring of single-layer
graphene in which they induce d-wave superconductivity via the proximity
effect or directly make graphene superconducting by doping. The quantum qubits
would be built around the $\pi$-junction that naturally occurs in graphene and
would not require bilayer structures. Aharonov-Bohm oscillations, on the
other hand, have been observed Hetal10 in a graphene ring, consisting of a
planar honeycomb lattice of carbon atoms in a ring shape, by changing the
voltage applied to the side gate or the back gate.
When the graphene sheet is under tension, the side contacts induce a
long-range elastic deformation which acts as a pseudo-magnetic field for its
massless charge carriers suzuura ; maes . This is because strain changes the
bond lengths between atoms and affects the way electrons move among them. The
pseudo-magnetic field would reveal itself through its effects on electron
orbits. The tension can be generated by the electrostatic force of the
underlying gate fogler , by interaction of graphene with the side walls bunch ,
as a result of thermal expansion bao , or by quenched height fluctuations
guinea . A particular strain geometry in graphene could lead to a uniform
pseudo-magnetic field and might open up interesting applications in graphene
nano-electronics with real magnetic fields low . Mechanical strains can thus
provide new settings for studying novel physics in graphene.
It is commonly believed that strains have an important influence on the
electronic structure of graphene neto . A graphene ring is a particularly
convenient geometry. Strains can possibly be manipulated efficiently in
samples with good adhesion to the substrate, such as graphene layers grown
epitaxially on SiC. Recently, the physical properties of graphene when its
hexagonal lattice is stretched out of equilibrium have been investigated by
many groups exp ; teague . Scanning tunneling microscopy studies on graphene
surfaces have indeed revealed a correlation between local strain and tunneling
conductance teague . Motivated by experiments pointing to a remarkable
stability of graphene with large strains, we have carried out a theoretical
analysis and molecular dynamics simulations to explore the pseudo-magnetic
field in a strained graphene ring.
In this paper we focus on a particular aspect of the physics of a graphene
ring, namely the appearance of gauge fields and the corresponding
pseudo-magnetic fields which arise in a circular graphene ring when a shear
force is applied to its boundary. The common belief is that the effects of the
graphene surface morphology under strain are negligible. The aim of this paper
is to show that, contrary to these expectations, the structure of a deformed
surface can lead to a strong pseudo-magnetic field.
The paper is organized as follows. In Sec. II we introduce our model and
formalism. In Sec. III, our numerical results for the strain and the
pseudo-magnetic field of the deformed graphene ring are presented. Finally, we conclude
in Sec. IV with a brief summary.
## II Model and Theory
We analyze both analytically and numerically the strains and pseudo-magnetic
fields in a circular graphene ring under shear stress. A shear force is
applied to the boundary, inducing shear deformations inside the graphene ring.
The properties of graphene presented here rely on the special character of its
low energy excitations, which obey a two dimensional massless Dirac equation.
The graphene ring with valley degree of freedom, $\tau=\pm 1$ for the
inequivalent $K$ and $K^{\prime}$ valleys, is modelled by the massless Dirac
Hamiltonian in the continuum model slon ; haldane
${\cal H}_{0}=\hbar v_{\rm F}\tau\left(\sigma_{1}\,k_{1}+\sigma_{2}\,k_{2}\right)$
where $k_{i}$ is an envelope function momentum operator, $v_{\rm F}\simeq 10^{6}$ m/s
is the Fermi velocity, and $\sigma_{i}$ ($i=x,y,z$) are the Pauli
matrices that act on the sublattice pseudospin degree of freedom. Hence it is
important to establish the robustness of the low energy description under
small lattice deformations. The concepts of gauge fields and covariant
derivatives can be translated into the language of differential geometry,
based on differential forms. When ideal graphene is distorted, the effective
Hamiltonian is changed into
$H=v_{\rm F}\,(\mathbf{p}-e\mathbf{A})\cdot\bm{\sigma}$ (1)
where the induced vector potential is defined through the deformations of the
sample and $\mathbf{p}$ is the momentum in polar coordinates. The induced
gauge fields can be calculated through the following expressions suzuura
$v_{\rm F}eA_{x}=\hbar g_{2}(u_{xx}-u_{yy}),\qquad v_{\rm F}eA_{y}=-2\hbar g_{2}u_{xy}$ (2)
with
$g_{2}=\frac{3\kappa\beta}{4}\,t,\qquad \kappa=\frac{\sqrt{2}\mu}{2B}$ (3)
where $t\approx 2.7$ eV is the nearest-neighbor hopping parameter and
$\beta=\partial\ln(t)/\partial\ln(a)\simeq 2$ is the electron Grüneisen
parameter. For the shear modulus $\mu$ and bulk modulus $B$ we have used
zakharchenko the values $\mu=9.95$ eV $\AA^{-2}$ and $B=12.52$ eV $\AA^{-2}$.
We thus find that $\kappa\approx 0.56$. Once the induced gauge field is
obtained, the pseudo-magnetic field can be calculated from
$\bm{B}=\nabla\times\bm{A}$.
When the graphene is deformed due to the forces exerted on the boundaries, the
strain tensor can be calculated from
$u_{\alpha\beta}=\frac{\partial_{\alpha}u_{\beta}+\partial_{\beta}u_{\alpha}}{2}+\frac{\partial_{\alpha}h\,\partial_{\beta}h}{2}$ (4)
where $\bm{u}$ is the atomic displacement field and $h$ is the out-of-plane
displacement. The contribution of the out-of-plane atomic displacements is
noticeable in our numerical calculations.
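As an illustration of how Eq. (4) is evaluated in practice, the following sketch builds the strain components from displacement fields sampled on a regular grid using finite differences; the grid spacing and array conventions are assumptions of this example, not the exact post-processing applied to the MD data.

```python
import numpy as np

def strain_tensor(u_x, u_y, h, dx, dy):
    """Strain components of Eq. (4) for fields stored as arrays indexed [y, x]."""
    dux_dy, dux_dx = np.gradient(u_x, dy, dx)   # derivatives of u_x
    duy_dy, duy_dx = np.gradient(u_y, dy, dx)   # derivatives of u_y
    dh_dy, dh_dx = np.gradient(h, dy, dx)       # derivatives of h
    u_xx = dux_dx + 0.5 * dh_dx ** 2
    u_yy = duy_dy + 0.5 * dh_dy ** 2
    u_xy = 0.5 * (dux_dy + duy_dx) + 0.5 * dh_dx * dh_dy
    return u_xx, u_yy, u_xy
```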
### II.1 Analytical expression for the pseudo-magnetic field
We consider the case of a graphene ring of outer radius $R$ and width
$W=R-R_{1}$, where the clamped outer and inner boundaries are circles of
radius $R$ and $R_{1}$, respectively. Additionally, we assume that the
out-of-plane atomic displacement is zero, $h=0$. The displacement at the outer
boundary is $u_{\theta}(R,\theta)=U_{\theta}$, $u_{r}(R,\theta)=0$, where the
shear deformation at the boundary is set by $U_{\theta}$, and at the inner
boundary we have $u_{\theta}(R_{1},\theta)=u_{r}(R_{1},\theta)=0$. The
displacements in the graphene ring are
$u_{r}(r,\theta)=0,\qquad u_{\theta}(r,\theta)=U_{\theta}\left[-\frac{R_{1}^{2}R}{r(R^{2}-R_{1}^{2})}+\frac{Rr}{R^{2}-R_{1}^{2}}\right].$ (5)
The second term in the expression for $u_{\theta}(r,\theta)$ is a pure
rotation, required to satisfy the boundary conditions. The only non-zero
component of the strain tensor is $u_{r\theta}$. The strain at radius $r$ is
$u_{r\theta}(r,\theta)=\frac{1}{2}\left(\frac{\partial u_{\theta}}{\partial r}+\frac{1}{r}\frac{\partial u_{r}}{\partial\theta}-\frac{u_{\theta}}{r}\right)=\frac{U_{\theta}R_{1}^{2}R}{r^{2}(R^{2}-R_{1}^{2})}$ (6)
The maximum strain, attained at the inner boundary, is given by
$\bar{u}=U_{\theta}R/(R^{2}-R_{1}^{2})$. Using polar coordinates $(r,\theta)$,
Eq. (2) can be rewritten as
$A_{r}=\Phi_{0}\frac{c\beta}{a}\left[\left(\frac{\partial u_{r}}{\partial r}-\frac{u_{r}}{r}-\frac{1}{r}\frac{\partial u_{\theta}}{\partial\theta}\right)\cos(3\theta)+\left(-\frac{\partial u_{\theta}}{\partial r}+\frac{u_{\theta}}{r}-\frac{1}{r}\frac{\partial u_{r}}{\partial\theta}\right)\sin(3\theta)\right],$
$A_{\theta}=\Phi_{0}\frac{c\beta}{a}\left[\left(-\frac{\partial u_{\theta}}{\partial r}+\frac{u_{\theta}}{r}-\frac{1}{r}\frac{\partial u_{r}}{\partial\theta}\right)\cos(3\theta)+\left(-\frac{\partial u_{r}}{\partial r}+\frac{u_{r}}{r}+\frac{1}{r}\frac{\partial u_{\theta}}{\partial\theta}\right)\sin(3\theta)\right],$ (7)
where $\Phi_{0}=h/2e$ is the quantum unit of magnetic flux and
$c=\sqrt{3}\kappa/(2\pi)$ is a constant. Furthermore, the induced gauge field
in polar coordinates is given by
$A_{r}=\Phi_{0}\frac{c\beta U_{\theta}}{a}\frac{2R_{1}^{2}R}{r^{2}(R^{2}-R_{1}^{2})}\sin(3\theta),\qquad A_{\theta}=\Phi_{0}\frac{c\beta U_{\theta}}{a}\frac{2R_{1}^{2}R}{r^{2}(R^{2}-R_{1}^{2})}\cos(3\theta)$ (8)
Eventually, the pseudo-magnetic field acting on the electrons is simply given
by
$B(r,\theta)=\Phi_{0}\frac{c\beta}{a}\frac{8U_{\theta}R_{1}^{2}R}{r^{3}(R^{2}-R_{1}^{2})}\cos(3\theta)$ (9)
It is worth mentioning that the pseudo-magnetic field diverges near
the clamped sites as $B(r)\approx 8c\Phi_{0}\beta\bar{u}R_{1}^{2}/(ar^{3})$.
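As a numerical check of Eq. (9), the short script below evaluates the field for the ring dimensions used later in the simulations ($R=7$ nm, $W=4$ nm, a $12^{\circ}$ boundary rotation). The values of $\beta$, $\kappa$ and $\Phi_{0}$ follow the text; the carbon-carbon distance $a=1.42$ Å is an assumed standard value. The resulting prefactor is of the same order as the $7.89\times10^{7}$ T (with $r$ in Å) quoted in Sec. III.

```python
import numpy as np

phi0 = 2.0678e-15      # flux quantum h/2e, in Wb
beta = 2.0             # electron Grueneisen parameter (from the text)
kappa = 0.56           # sqrt(2)*mu/(2B) with the moduli quoted above
c = np.sqrt(3) * kappa / (2 * np.pi)
a = 1.42e-10           # carbon-carbon distance in m (assumed)

R, R1 = 7e-9, 3e-9                   # outer and inner radii, in m
u_theta = R * np.pi * 12 / 180       # boundary displacement for a 12 degree rotation

def pseudo_field(r, theta):
    """Pseudo-magnetic field of Eq. (9), in tesla, for r in metres."""
    prefactor = phi0 * c * beta / a
    return (prefactor * 8 * u_theta * R1**2 * R
            / (r**3 * (R**2 - R1**2)) * np.cos(3 * theta))

print(pseudo_field(4e-9, 0.0))       # field at r = 4 nm, theta = 0 (order of 10^3 T)
```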
### II.2 Molecular dynamics simulation
We used molecular dynamics (MD) simulations to model a suspended circular
graphene ring, consisting of a planar honeycomb lattice of carbon atoms in a
ring shape. Rigidly clamped boundary conditions were employed. We simulated
the system at different temperatures by employing a Nosé-Hoover thermostat to
help the system reach equilibrium at a given temperature. Our present results
are limited to $T=50$ K. In this work, we used both Brenner's bond-order
potential brenner ; khodemoon , which incorporates second-nearest-neighbor
interactions through their dependence on the bond angles, and the
second-generation reactive empirical bond-order (REBO) potential rebo for the
carbon-carbon interactions (for more details see Appendix A). It is well
established that MD simulations using the Brenner potential predict the
correct mechanical properties and structures of carbon systems.
The number of carbon atoms in our simulations is $5125$, corresponding to a
graphene ring of radius $R=7$ nm and width $W=4$ nm. At the beginning of the
calculations, we simulate the circular graphene ring at a given temperature
with circular clamped boundary conditions. After reaching a stable
configuration, we rotate the outer boundary of the graphene ring by
$\theta_{0}=12^{\circ}$, as shown in Fig. 1 (left panel). For later reference,
we call this system case one. In this case, the displacement of the boundary
atoms is about $U_{\theta}=R\pi\theta_{0}/180$. For the sake of comparison, we
also consider another system in which the inner boundary is pulled down. In
the latter case, in order to avoid strong wrinkles, we reduce the rotation of
the outer boundary to $6^{\circ}$ and then slowly pull the central part of the
graphene sheet down by $z=-1.7$ nm, as shown in Fig. 1 (right panel). This
latter system is labeled case two.
## III Numerical Results
From the analytical calculations, the pseudo-magnetic field obtained in the
graphene ring geometry with $h=0$ is shown in Fig. 2. For the given ring
dimensions, $B=7.89\times 10^{7}\cos(3\theta)/r_{0}^{3}$ in units of tesla,
where $r_{0}$ is expressed in Angstrom. The maximum field occurs in the
vicinity of the inner boundary and decreases like $1/r^{3}$. Although the
analytical model described above can explain the behavior of the strains and
the magnetic field, we point out that the wrinkle structures represent an
important feature of the results.
We use two different systems in the numerical simulations. We first study a
graphene ring which is placed in the $x-y$ plane and equilibrated at a given
temperature while allowing surface fluctuations in the third dimension; then
the outer boundary of the graphene ring is rotated while the inner boundary is
clamped, as shown in Fig. 1 (left panel). In case one, we see noticeable
wrinkle structures around the inner boundary along the perpendicular direction
($z$-axis). In case two, on the other hand, the graphene ring surface is
smoother, but the average carbon-carbon distance is longer than the one
obtained in case one. In the following, we describe the two cases and the
pseudo-magnetic fields that emerge due to the strains. In our MD simulations,
the equations of motion are integrated with the Verlet algorithm using a time
step of $0.5$ fs. Our numerical results show that the formation of the
wrinkles is quantitatively sensitive to the details of the potential. We
simulate the system with the two aforementioned potentials and find that the
order of magnitude of the pseudo-magnetic fields is the same (for more
details, see Appendix A). Therefore, we mainly use the Brenner potential
brenner to calculate the pseudo-magnetic field.
### III.1 Deformed structure
As can be seen from the left panel of Fig. 1, the main part of the deformation
occurs in the area near the inner boundary and produces noticeable
fluctuations along the $z$-direction. Due to the appearance of wrinkles in the
system, the strain increases near the inner boundary and decreases in the far
regions. In contrast to the analytical calculations, the strain does not
behave as $r^{-2}$; the analytical model does not describe well a system with
out-of-plane wrinkles.
In the right panel of Fig. 1, we show case two, where the static wrinkles are
approximately washed out and the surface of the graphene ring is much smoother
than in case one. The structure of the surface displacements can also be seen
in Fig. 3. This figure clearly shows that the morphologies of the surfaces are
very different, and we expect a stronger pseudo-magnetic field for case one
than for case two.
Figure 1: (Color online) A representative atomic configuration in MD
simulations of the circular graphene ring at $T=50$ K. Left panel: case one,
in which the rotation of the outer boundary is $12^{\circ}$ in the $x-y$
plane. Right panel: case two, in which the outer boundary has been rotated by
$6^{\circ}$ and afterwards the central part has been pulled down by $z=-1.7$ nm.
It should be noted that the structure of the simulated samples depends only
weakly on temperature up to $T=300$ K.
### III.2 Pseudo-magnetic field
Since the structure of wrinkles, after the relaxation of graphene’s shape, is
static, we can calculate the gauge fields and also the pseudo-magnetic fields
from Eq. (2).
Our numerical results for the induced gauge field calculated from the strains
are illustrated in Fig. 4 for the two aforementioned cases in the $x-y$ plane.
We take a time average over the atom positions in order to average out thermal
fluctuations and study only the wrinkling structures. The length of each
vector denotes the magnitude of the gauge field. For case one, the induced
gauge fields are not uniform in the area where wrinkles appear.
The pseudo-magnetic field can be calculated from the induced gauge field, and
its distribution is shown in Fig. 5. It is worth mentioning that the non-zero
pseudo-magnetic field $B_{z}$ occurs mostly along the wrinkles; however, its
value and structure depend on each wrinkle. This is due to the fact that the
structure of the wrinkles is different along different lattice directions,
such as armchair and zigzag. The pseudo-magnetic field, whose value increases
with increasing curvature of the wrinkles, is largest around the central part.
The structure of the pseudo-magnetic field along one of the wrinkles, indicated in the inset, is illustrated in Fig. 6 (left panel) for case one. Notice that $B_{z}$ decreases for $x<0$ along the wrinkle. The curve can be fitted quite well by the expression $\exp(-\alpha x)$ with $\alpha=1.2\pm 0.1$, showing a fast decay of the pseudo-magnetic field towards the outer boundary. The pseudo-magnetic field behaves randomly for $x>0$; this is because the path is no longer along a specific wrinkle. As can be observed from the results, the pseudo-magnetic field is very large. Similarly large pseudo-magnetic fields, produced by highly strained nanobubbles that form when graphene is grown on a platinum surface, have been measured by Levy et al. levy .
For case two, in which the wrinkles have disappeared, the symmetry of the graphene lattice is clearly imprinted on $B_{z}$ (see Fig. 5, right panel).
In order to better understand the structure of the pseudo-magnetic fields, we calculate them for both cases along the azimuthal angle at a given radius, $r$. Fig. 7 shows the pseudo-magnetic field $B_{z}$ as a function of $\theta$ around a ring of radius $r=4$ nm for case one (left panel) and at $r=6.5$ nm for case two (right panel). In the latter case, the behavior is well described by $B_{z}\propto\cos(3\theta)$; a function proportional to $\cos(3\theta)$ is plotted as a solid line in this figure. The result for case one in the left panel behaves similarly to case two as a function of the azimuthal variable, but there are detectable fluctuations due to the appearance of the wrinkles.
The analytical expression for the pseudo-magnetic field acting on the electrons is $B(r,\theta)\propto\frac{1}{r^{3}}\cos(3\theta)$ when atomic out-of-plane displacements are ignored. This expression was obtained for a system that lies in the $x-y$ plane, $h=0$, with no wrinkle structures on the flake. Since case two, which has fewer wrinkle structures, is roughly similar to such a system, we find numerically the $\cos(3\theta)$ dependence of the pseudo-magnetic field. However, because the inner boundary is pulled down, the pseudo-magnetic field behaves differently as a function of $r$.
## IV Conclusions
In summary, we have investigated the strains and pseudo-magnetic fields in a circular graphene ring under shear stress. From elastic theory, we find the induced gauge field as a function of the maximum strain at the boundary, $\bar{u}$. The magnitude of the pseudo-magnetic field near the boundary is $B\approx 4\sqrt{3}\Phi_{0}\beta\kappa\bar{u}/(\pi aR)$. Moreover, the field diverges near the clamped sites as $B(r)\propto 1/r^{3}$, where $r$ is the distance to the site, while the strains diverge as $1/r^{2}$. From the numerical simulations, we find wrinkle structures on the graphene flakes, and the resulting pseudo-magnetic field is of the same order of magnitude as that obtained from the analytical calculations. We also find that the wrinkle structures are an important feature of the pseudo-magnetic field. In addition, the pseudo-magnetic field is shown to behave as $\cos(3\theta)$. These results are essential for understanding the electronic properties of graphene rings and their strain engineering for potential applications.
## V ACKNOWLEDGMENTS
We thank A. Fognini and A. Naji for their useful comments. This research was supported in part by the Project of Knowledge Innovation Program (PKIP) of the Chinese Academy of Sciences, Grant No. KJCX2.YW.W10. F. G. is supported by MICINN (Spain), grants FIS2008-00124 and CONSOLIDER CSD2007-00010.
## Appendix A Empirical potentials
We used both Brenner's bond-order potential brenner, which incorporates second-nearest-neighbor interactions through their dependence on the bond angles, and the second-generation reactive empirical bond-order (REBO) potential rebo for the carbon-carbon interactions. The latter potential is based on the empirical bond-order formalism and allows for covalent bond formation and breaking with the associated changes in atomic hybridization. Consequently, such a classical potential allows one to model complex chemistry in large many-atom systems. The Brenner bond-order potential can be written in the following general form for the binding energy,
$E_{b}=\sum_{i}\sum_{j>i}\left\{V^{R}(r_{ij})-b_{ij}V^{A}(r_{ij})\right\}$ (10)
The first term is repulsive, and the second one is attractive. $r_{ij}$ is the
distance between pairs of nearest-neighbor atoms $i$ and $j$. Although this
expression is a simple sum over bond energies, it is not a pair potential
since the $b_{ij}$, which is called the bond-order factor, is in essence a
many-body factor. The many-body nature of $b_{ij}$ makes the bond energy
depend on the local environment of the bond. This feature allows the Brenner
potential to predict correct geometries and energies for many different carbon
structures. The empirical bond-order function used here is written as a sum of
terms $b_{ij}=[b_{ij}^{\sigma-\pi}+b_{ji}^{\sigma-\pi}]/2+b_{ij}^{\pi}$ where
values for the functions $b_{ij}^{\sigma-\pi}$ and $b_{ji}^{\sigma-\pi}$
depend on the local coordination and bond angles for atoms $i$ and $j$,
respectively. The first term is a function of the bond angles similar to that
in the Brenner potential brenner while the second term incorporates the third
nearest neighbors via a bond-order term associated with the dihedral angles
and becomes nonzero upon bending of the graphene sheets. The values of all the parameters used in our calculations for these potentials can be found in Refs. brenner ; rebo and are therefore not listed here.
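To make the structure of Eq. (10) concrete, the following sketch (our illustration, not the actual Brenner/REBO implementation; the repulsive, attractive, and bond-order functions are placeholder stubs) accumulates the binding energy as a sum over bonds weighted by a many-body bond-order factor:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Placeholder stubs: the real Brenner/REBO forms depend on the full parameter
// sets of Refs. brenner/rebo; here they only fix the interfaces.
double VR(double r) { return std::exp(-2.0 * r); }   // repulsive pair term (stub)
double VA(double r) { return std::exp(-1.0 * r); }   // attractive pair term (stub)
double bondOrder(std::size_t i, std::size_t j,
                 const std::vector<std::array<double,3>>& pos) {
    return 1.0;                                       // many-body factor b_ij (stub)
}

// E_b = sum_i sum_{j>i} { V^R(r_ij) - b_ij V^A(r_ij) }, as in Eq. (10).
double bindingEnergy(const std::vector<std::array<double,3>>& pos) {
    double E = 0.0;
    for (std::size_t i = 0; i < pos.size(); ++i)
        for (std::size_t j = i + 1; j < pos.size(); ++j) {
            double dx = pos[i][0] - pos[j][0];
            double dy = pos[i][1] - pos[j][1];
            double dz = pos[i][2] - pos[j][2];
            double r = std::sqrt(dx*dx + dy*dy + dz*dz);
            E += VR(r) - bondOrder(i, j, pos) * VA(r);
        }
    return E;
}
```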
We used the Brenner potential khodemoon and, alternatively, the REBO potential, via the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) package lammps, to carry out the molecular dynamics simulations in this work. We calculated the pseudo-magnetic fields for the two potentials along the azimuthal angle at a given radius, $r\simeq 40$ Å. Fig. 8 shows the pseudo-magnetic field $B_{z}$ as a function of $\theta$ along a circular ring of radius $r=4$ nm. Note that the order of magnitude of the pseudo-magnetic field is the same in both cases and, moreover, the envelope behavior in the long-wavelength region is the same. However, the two potentials exhibit different oscillation modes at short wavelengths due to the way in which the bond order (the dihedral angle) is handled in the REBO potential.
## Appendix B Simulation methods
Computer simulations generate very detailed information at the microscopic level, and converting this information to the macroscopic level is the province of statistical mechanics. Molecular dynamics is an important tool for investigating microscopic behavior by integrating the motion of particles or particle clusters. In molecular dynamics, the trajectories of the atoms are determined by numerically solving Newton's equations of motion for a many-body interacting system, where the forces between the particles and their potential energy are defined by a chosen force field. We used the NVT ensemble, in which the number of particles (N), the volume (V), and the temperature (T) are held fixed. In the NVT ensemble, the energy of endothermic and exothermic processes is exchanged with a thermostat. A variety of thermostat methods is available to add and remove energy from the boundaries of an MD system in a nearly realistic way, approximating the canonical ensemble. A micro-canonical molecular dynamics trajectory, on the other hand, may be seen as an exchange between potential and kinetic energy, with the total energy conserved. At every time step, each particle's position and velocity are integrated with a method such as the Verlet algorithm; given the initial positions and velocities, all future positions and velocities can be calculated. If the number of atoms is large enough, the statistical temperature can be estimated from the instantaneous temperature, which is found by equating the kinetic energy of the system to $nk_{B}T/2$, where $n$ is the number of degrees of freedom of the system.
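As a simple illustration of that last relation, a sketch of the instantaneous-temperature estimate is given below; the variable names, unit conventions, and the neglect of constrained degrees of freedom are our own assumptions, not taken from the authors' code.

```cpp
#include <array>
#include <vector>

// Instantaneous temperature from 2*KE = n*kB*T, with n the number of degrees
// of freedom (here 3 per atom, ignoring constraints) and kB in matching units.
double instantaneousTemperature(const std::vector<std::array<double,3>>& vel,
                                double mass, double kB) {
    double kinetic = 0.0;
    for (const auto& v : vel)
        kinetic += 0.5 * mass * (v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    const double nDof = 3.0 * static_cast<double>(vel.size());
    return 2.0 * kinetic / (nDof * kB);
}
```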
We simulated the system at nonzero temperatures by employing a Nosé-Hoover thermostat, with the time step taken as $0.5$ fs. At the beginning of the simulation, we consider a circular graphene ring whose atoms lie in the $x-y$ plane. The initial structures are first optimized, giving a carbon-carbon bond length of $1.45$ Å, and all samples are initially relaxed at the desired temperature for a duration of $1.5$ ns. For the atoms at the boundaries, we enforce $z=0$ to prevent atomic motion along the $z$ direction and, furthermore, after some simulation run time, we set $x=y=\mathrm{const.}$ to clamp these atoms.
## References
* (1) A. K. Geim, K. S. Novoselov, Nature Materials 6, 183 (2007).
* (2) M. A. H. Vozmediano, M. I. Katsnelson, F. Guinea, Phys. Reports 496, 109 (2010) .
* (3) V. M. Pereira, A. H. Castro Neto, Phys. Rev. Lett. 103, 046801 (2009) .
* (4) F. Guinea, M. I. Katsnelson, A. K. Geim, Nature Physics 6, 30 (2010) .
* (5) M. Bayer, M. Korkusinski, P. Hawrylak, T. Gutbrod, M. Michel and A. Forchel, Phys. Rev. Lett. 90, 186801 (2003).
* (6) E. Ribeiro, A. O. Govorov, W. Carvalho, Jr. and G. Medeiro-Ribeiro, Phys. Rev. Lett. 92, 126402 (2004) .
* (7) Saverio Russo, Jeroen B. Oostinga, Dominique Wehenkel, Hubert B. Heersche, Samira Shams Sobhani, Lieven M. K. Vandersypen, and Alberto F. Morpurgo, Phys. Rev. B 77, 085413 (2008) .
* (8) Jai Seung Yoo, Yung Woo Park, Viera Skakalova, and Siegmar Roth, Appl. Phys. Lett. 96, 143112 (2010) .
* (9) Magdalena Huefner, Françoise Molitor, Arnhild Jacobsen, Alessandro Pioda, Christoph Stampfer, Klaus Ensslin, and Thomas Ihn, New Journ. Phys. 12, 043054 (2010) .
* (10) Colin Benjamin, Jiannis K. Pachos, Phys. Rev. B 79, 155431 (2009) .
* (11) H. Suzuura and T. Ando, Phys. Rev. B 65, 235412 (2002) .
* (12) J. L. Mañes, Phys. Rev. B 76, 045430 (2007).
* (13) M. M. Fogler, F. Guinea, and M. I. Katsnelson, Phys. Rev. Lett. 101, 226804 (2008) .
* (14) J. S. Bunch, S. S. Verbridge, J. S. Alden, A. M. van der Zande, J. M. Parpia, H. G. Craighead, and P. L. McEuen, Nano Lett. 8, 2458 (2008) .
* (15) W. Bao, F. Miao, Z. Chen, H. Zhang, W. Jang, C. Dames, and C. N. Lau, Nat. Nanotechnol. 4, 562 (2009) .
* (16) F. Guinea, Baruch Horovitz, and P. Le Doussal Phys. Rev. B 77, 205421 (2008) .
* (17) Tony Low, F. Guinea, Nano Lett. 10, 3551 (2010) .
* (18) A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim , Rev. Mod. Phys. 81, 109 (2009) .
* (19) Keun Soo Kim, Yue Zhao, Houk Jang, Sang Yoon Lee, Jong Min Kim, Kwang S. Kim, Jong-Hyun Ahn, Philip Kim, Jae-Young Choi, and Byung Hee Hong, Nature 457, 706 (2009); T. M. G. Mohiuddin, A. Lombardo, R. R. Nair, A. Bonetti, G. Savini, R. Jalil, N. Bonini, D. M. Basko, C. Galiotis, N. Marzari, K. S. Novoselov, A. K. Geim, and A. C. Ferrari, Phys. Rev. B 79, 205433 (2009); Changgu Lee, Xiaoding Wei, Jeffrey W. Kysar, and James Hone, Science 321, 385 (2008); Mingyuan Huang, Hugen Yan, Tony F. Heinz, and James Hone Nano Lett. 10, 4074 (2010) .
* (20) M. L. Teague,A. P. Lai, J. Velasco, C. R. Hughes, A. D. Beyer, M. W. Bockrath, C. N. Lau, N. C. Yeh, Nano Lett. 9, 2542 (2009) .
* (21) J. C. Slonczewski and P. R. Weiss, Phys. Rev. 109, 272 (1958) .
* (22) F. D. M. Haldane, Phys. Rev. Lett. 61, 2015 (1988) .
* (23) K.V. Zakharchenko, M.I. Katsnelson, and A. Fasolino, Phys. Rev. Lett. 102, 046808 (2009) .
* (24) D. W. Brenner, Phys. Rev. B 42, 9458 (1990) .
* (25) D. W. Brenner, O. A. Shenderova, J. A. Harrison, S. J. Stuart, B. Ni, and S. B. Sinnott, J. Phys.: Condens. Matter 14, 783 (2002) .
* (26) N. Abedpour, M. Neek-Amal, R. Asgari, F. Shahbazi, N. Nafari, and M.R. Tabar, Phys. Rev. B 76, 195407 (2007); N. Abedpour, R. Asgari, and M.R. Tabar, Phys. Rev. Lett. 104, 196804 (2010) .
* (27) N. Levy, S. A. Burke, K. L. Meaker, M. Panlasigui, A. Zettl, F. Guinea, A. H. Castro Neto and M. F. Crommie, Science 329, 554 (2010).
* (28) ”LAMMPS Molecular Dynamics Simulator”. Sandia National Laboratories.
Figure 2: (Color online) Pseudo-magnetic field, given by Eq. 9, in units of tesla, induced by shear strains in a circular graphene ring. Here, $R=70$, $R_{1}=30$ Å, and $\theta=12^{\circ}$.
Figure 3: (Color online) Projected atomic displacements for case one (left panel) and case two (right panel).
Figure 4: (Color online) Projected gauge field, $A_{\mu}$, for the two cases. Note that the length of each vector denotes the absolute value of the gauge field.
Figure 5: (Color online) Distribution of the pseudo-magnetic field $B_{z}$ on the graphene ring for the two cases. Green, red, and black correspond to positive, negative, and zero values of the pseudo-magnetic field, respectively. For case one, dark green (or dark red) corresponds to the strongest pseudo-magnetic field, which is about $1000$ T; for case two, it is about $200$ T.
Figure 6: (Color online) Pseudo-magnetic field $B_{z}$ along a specific direction indicated in the inset (left panel), and along the $y$-direction passing through the origin of the ring (right panel).
Figure 7: (Color online) Pseudo-magnetic field $B_{z}$ as a function of azimuthal angle for case one at $r=4$ nm (left panel) and for case two at $r=6.5$ nm (right panel). The solid curve denotes a function proportional to $\cos(3\theta)$.
Figure 8: (Color online) Pseudo-magnetic field $B_{z}$ as a function of azimuthal angle at $T=50$ K along a circular ring of radius $4$ nm when the Brenner potential brenner and the REBO potential rebo are used. Their long-wavelength behavior is very similar to the form $B_{z}\propto\cos(3\theta)$; however, their short-wavelength behavior is different due to the way in which the bond order is handled in the REBO potential. The outer boundary of the graphene ring is, in both cases, rotated by about $\theta_{0}=8^{\circ}$.
Particle Physics Department, Rutherford Appleton Laboratory, Didcot, United Kingdom.
# Unfolding algorithms and tests using RooUnfold
Tim Adye
###### Abstract
The RooUnfold package provides a common framework to evaluate and use
different unfolding algorithms, side-by-side. It currently provides
implementations or interfaces for the Iterative Bayes, Singular Value
Decomposition, and TUnfold methods, as well as bin-by-bin and matrix inversion
reference methods. Common tools provide covariance matrix evaluation and
multi-dimensional unfolding. A test suite allows comparisons of the
performance of the algorithms under different truth and measurement models.
Here I outline the package, the unfolding methods, and some experience of
their use.
## 1 RooUnfold package aims and features
The RooUnfold package [1] was designed to provide a framework for different
unfolding algorithms. This approach simplifies the comparison between
algorithms and has allowed common utilities to be written. Currently RooUnfold
implements or interfaces to the Iterative Bayes [2, 3], Singular Value
Decomposition (SVD) [4, 5, 6], TUnfold [7], bin-by-bin correction factors, and
unregularized matrix inversion methods.
The package is designed around a simple object-oriented approach, implemented
in C++, and using existing ROOT [8] classes. RooUnfold defines classes for the
different unfolding algorithms, which inherit from a common base class, and a
class for the response matrix. The response matrix object is independent of
the unfolding, so can be filled in a separate ‘training’ program.
RooUnfold can be linked into a stand-alone program, run from a ROOT/CINT
script, or executed interactively from the ROOT prompt. The response matrix
can be initialized using existing histograms or matrices, or filled with
built-in methods (these can take care of the normalization when inefficiencies
are to be considered). The results can be returned as a histogram with errors,
or a vector with full covariance matrix. The framework also takes care of
handling multi-dimensional distributions (with ROOT support for 1-, 2-, and 3-dimensional (1D, 2D, 3D) histograms), different binning for measured and truth
distributions, variable binning, and the option to include or exclude under-
and over-flows. It also supports different methods for calculating the errors
that can be selected with a simple switch: bin-by-bin errors with no
correlations, the full covariance matrix from the propagation of measurement
errors in the unfolding, or the covariance matrix calculated using Monte Carlo
(MC) toys.
All these details are handled by the framework, so do not have to be
implemented for each algorithm. However different bin layouts may not produce
good results for algorithms that rely on the global shape of the distribution
(SVD).
A toy MC test framework is provided, allowing selection of different MC
probability density functions (PDF) and parameters, comparing different
binning, and performing the unfolding with the different algorithms and
varying the unfolding regularization parameters. Tests can be performed with
1D, 2D, and 3D distributions. The results of a few such tests are presented in
section 4.
## 2 C++ classes
Figure 1 summarizes how the ROOT and RooUnfold classes are used together. The
RooUnfoldResponse object can be constructed using a 2D response histogram
(TH2D) and 1D truth and measured projections (these are required to determine
the effect of inefficiencies). Alternatively, RooUnfoldResponse can be filled
directly with the Fill($x_{\rm measured}$, $x_{\rm true}$) and Miss($x_{\rm
true}$) methods, where the Miss method is used to count an event that was not
measured and should be counted towards the inefficiency.
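As an illustration of this training step, a minimal sketch might look like the following; the toy truth PDF, smearing, and efficiency are placeholders (not the package's test suite), and the constructor from a bin count and range shown in the usage comment is one of the ways the response object can be created.

```cpp
#include "TRandom3.h"
#include "RooUnfoldResponse.h"

// Train a response matrix on a toy MC: each true value is either measured
// (after smearing) and passed to Fill(), or lost and recorded with Miss()
// so that the inefficiency is accounted for.
void trainResponse(RooUnfoldResponse& response, int nEvents) {
    TRandom3 rng(12345);
    for (int i = 0; i < nEvents; ++i) {
        double xTrue = rng.Gaus(0.0, 2.5);              // placeholder truth PDF
        bool detected = (rng.Uniform() < 0.8);          // placeholder efficiency
        if (detected)
            response.Fill(xTrue + rng.Gaus(-0.5, 0.5), xTrue);  // smear + shift
        else
            response.Miss(xTrue);
    }
}

// Usage:
//   RooUnfoldResponse response(40, -10.0, 10.0);   // 40 bins over [-10, 10]
//   trainResponse(response, 100000);
```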
Figure 1: The RooUnfold classes. The training truth, training measured,
measured data, and unfolded distributions can also be given as TH2D or TH3D
histograms.
The RooUnfoldResponse object can be saved to disk using the usual ROOT
input/output streamers. This makes it easy to separate the MC training from the unfolding step in different programs.
A RooUnfold object is constructed using a RooUnfoldResponse object and the
measured data. It can be constructed as a RooUnfoldBayes, RooUnfoldSvd,
RooUnfoldTUnfold, (etc) object, depending on the algorithm required.
The results of the unfolding can be obtained as ROOT histograms (TH1D, TH2D,
or TH3D) or as a ROOT vector (TVectorD) and covariance matrix (TMatrixD). The
histogram will include just the diagonal elements of the error matrix. This
should be used with care, given the significant correlations that can occur if
there is much bin-to-bin migration.
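A correspondingly minimal sketch of the unfolding step is given below; the measured histogram and the choice of four iterations are placeholders, and the error-treatment options are left at their defaults.

```cpp
#include "TH1D.h"
#include "RooUnfoldResponse.h"
#include "RooUnfoldBayes.h"

// Unfold a measured histogram with the iterative Bayes algorithm and return
// the unfolded distribution as a histogram (diagonal errors only).
TH1D* unfoldMeasured(RooUnfoldResponse& response, TH1* hMeasured) {
    RooUnfoldBayes unfold(&response, hMeasured, 4);  // 4 iterations (regularization)
    return (TH1D*) unfold.Hreco();
}
```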
## 3 Unfolding algorithms
### 3.1 Iterative Bayes’ theorem
The RooUnfoldBayes algorithm uses the method described by D’Agostini in [2].
Repeated application of Bayes’ theorem is used to invert the response matrix.
Regularization is achieved by stopping iterations before reaching the ‘true’
(but wildly fluctuating) inverse. The regularization parameter is just the
number of iterations. In principle, this has to be tuned according to the
sample statistics and binning. In practice, the results are fairly insensitive
to the precise setting used and four iterations are usually sufficient.
RooUnfoldBayes takes the training truth as its initial prior, rather than a
flat distribution, as described by D’Agostini. This should not bias the result once we have iterated, but it may reach an optimum after fewer iterations.
This implementation takes account of errors on the data sample but not, by
default, uncertainties in the response matrix due to finite MC statistics.
That calculation can be very slow, and usually the training sample is much
larger than the data sample.
RooUnfoldBayes does not normally do smoothing, since this has not been found
to be necessary and can, in principle, bias the distribution. Smoothing can be
enabled with an option.
### 3.2 Singular Value Decomposition
RooUnfoldSvd provides an interface to the TSVDUnfold class implemented in ROOT
by Tackmann [6], which uses the method of Höcker and Kartvelishvili [4]. The
response matrix is inverted using singular value decomposition, which allows
for a linear implementation of the unfolding algorithm. The normalization to
the number of events is retained in order to minimize uncertainties due to the
size of the training sample. Regularization is performed using a smooth cut-
off on small singular value contributions ($s_{i}^{2}\rightarrow
s_{i}^{2}/(s_{i}^{2}+s_{k}^{2})$, where the $k$th singular value defines the
cut-off), which correspond to high-frequency fluctuations.
The regularization needs to be tuned according to the distribution, binning,
and sample statistics in order to minimize the bias due to the choice of the
training sample (which dominates at small $k$) while retaining small
statistical fluctuations in the unfolding result (which grow at large $k$).
The unfolded error matrix includes the contribution of uncertainties on the
response matrix due to finite MC training statistics.
### 3.3 TUnfold
RooUnfoldTUnfold provides an interface to the TUnfold method implemented in
ROOT by Schmitt [7]. TUnfold performs a matrix inversion with 0-, 1-, or
2-order polynomial regularization of neighbouring bins. RooUnfold
automatically takes care of packing 2D and 3D distributions and creating the
appropriate regularization matrix required by TUnfold.
TUnfold can automatically determine an optimal regularization parameter
($\tau$) by scanning the ‘L-curve’ of $\log_{10}\chi^{2}$ vs $\log_{10}\tau$.
### 3.4 Unregularized algorithms
Two simple algorithms, RooUnfoldBinByBin, which applies MC correction factors
with no inter-bin migration, and RooUnfoldInvert, which performs unregularized
matrix inversion with singular value removal (TDecompSVD) are included for
reference. These methods are not generally recommended: the former risks
biases from the MC model, while the latter can give large bin-bin correlations
and magnify statistical fluctuations.
## 4 Examples
Examples of toy MC tests generated by RooUnfoldTest are shown in Figs. 2–4.
These provide a challenging test of the procedure. Completely different
training and test MC models are used: a single wide Gaussian PDF for training
and a double Breit-Wigner for testing. In both cases these are smeared,
shifted, and a variable inefficiency applied to produce the ‘measured’
distributions.
Figure 2: Unfolding with the Bayes algorithm. On the left, a double Breit-
Wigner PDF on a flat background (green curve) is used to generate a test
‘truth’ sample (upper histogram in blue). This is then smeared, shifted, and a
variable inefficiency applied to produce the ‘measured’ distribution (lower
histogram in red). Applying the Bayes algorithm with 4 iterations on this
latter gave the unfolded result (black points), shown with errors from the
diagonal elements of the error matrix. The bin-to-bin correlations from the
error matrix are shown on the right.
Figure 3: Unfolding with the SVD algorithm ($k=30$) on the same training and
test samples as described in Fig. 2.
Figure 4: Unfolding with the TUnfold algorithm ($\tau=0.004$) on the same
training and test samples as described in Fig. 2. Here we use two measurement
bins for each truth bin.
## 5 Unfolding errors
Regularization introduces inevitable correlations between bins in the unfolded
distribution. To calculate a correct $\chi^{2}$, one has to invert the
covariance matrix:
$\chi^{2}=(\mathbf{x}_{\mathrm{measured}}-\mathbf{x}_{\mathrm{true}})^{\mathrm{T}}\mathbf{V}^{-1}(\mathbf{x}_{\mathrm{measured}}-\mathbf{x}_{\mathrm{true}})$
(1)
However, in many cases, the covariance matrix is poorly conditioned, which
makes calculating the inverse problematic. Inverting a poorly conditioned
matrix involves subtracting large, but very similar numbers, leading to
significant effects due to the machine precision.
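For reference, Eq. (1) can be evaluated with generic ROOT linear-algebra classes as in the sketch below. This is our illustration, not a RooUnfold utility; for a poorly conditioned matrix a decomposition such as TDecompSVD would be preferable to a direct inversion.

```cpp
#include "TVectorD.h"
#include "TMatrixD.h"

// chi^2 = (x_meas - x_true)^T V^{-1} (x_meas - x_true), as in Eq. (1).
double chiSquared(const TVectorD& xMeasured, const TVectorD& xTrue,
                  const TMatrixD& V) {
    TVectorD resid = xMeasured - xTrue;
    TMatrixD Vinv(V);
    Vinv.Invert();                   // caution: unstable if V is poorly conditioned
    return resid * (Vinv * resid);   // vector . (matrix . vector)
}
```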
### 5.1 Unfolding errors with the Bayes method
As shown on the left-hand side of Fig. 5, the uncertainties calculated by
propagation of errors in the Bayes method were found to be significantly
underestimated compared to those given by the toy MC. This was found to be due
to an omission in the original method outlined by D’Agostini ([2] section 4).
Figure 5: Bayesian unfolding errors (lines) compared to toy MC RMS (points)
for 1, 2, 3, and 9 iterations on the Fig. 2 test. The left-hand plot shows the
errors using D’Agostini’s original method, ignoring any dependence on previous
iterations (only the $M_{ij}$ term in Eq. (3)). The right-hand plot shows the
full error propagation.
The Bayes method gives the unfolded distribution (‘estimated causes’),
$\hat{n}(\mathrm{C}_{i})$, as the result of applying the unfolding matrix,
$M_{ij}$, to the measurements (‘effects’), $n(\mathrm{E}_{j})$:
$\hat{n}(\mathrm{C}_{i})=\sum_{j=1}^{n_{\mathrm{E}}}M_{ij}n(\mathrm{E}_{j})\quad\mathrm{where}\quad
M_{ij}=\frac{P(\mathrm{E}_{j}|\mathrm{C}_{i})n_{0}(\mathrm{C}_{i})}{\epsilon_{i}\sum_{l=1}^{n_{\mathrm{C}}}P(\mathrm{E}_{j}|\mathrm{C}_{l})n_{0}(C_{l})}$
(2)
$P(\mathrm{E}_{j}|\mathrm{C}_{i})$ is the $n_{\mathrm{E}}\times
n_{\mathrm{C}}$ response matrix,
$\epsilon_{i}\equiv\sum_{j=1}^{n_{\mathrm{E}}}P(\mathrm{E}_{j}|\mathrm{C}_{i})$
are efficiencies, and $n_{0}(C_{l})$ is the prior distribution, initially arbitrary (e.g. flat or the MC model) but updated on subsequent iterations.
The covariance matrix, which here we call
$V(\hat{n}(\mathrm{C}_{k}),\hat{n}(\mathrm{C}_{l}))$, is calculated by error
propagation from $n(\mathrm{E}_{j})$, but $M_{ij}$ is assumed to be itself
independent of $n(\mathrm{E}_{j})$. That is only true for the first iteration.
For subsequent iterations, $n_{0}(\mathrm{C}_{i})$ is replaced by
$\hat{n}(\mathrm{C}_{i})$ from the previous iteration, and
$\hat{n}(\mathrm{C}_{i})$ depends on $n(\mathrm{E}_{j})$ (Eq. (2)).
To take this into account, we compute the error propagation matrix
$\frac{\partial{\hat{n}(\mathrm{C}_{i})}}{\partial{n(\mathrm{E}_{j})}}=M_{ij}+\sum_{k=1}^{n_{\mathrm{E}}}M_{ik}n(\mathrm{E}_{k})\left(\frac{1}{n_{0}(\mathrm{C}_{i})}\frac{\partial{n_{0}(\mathrm{C}_{i})}}{\partial{n(\mathrm{E}_{j})}}-\sum_{l=1}^{n_{\mathrm{C}}}\frac{\epsilon_{l}}{n_{0}(\mathrm{C}_{l})}\frac{\partial{n_{0}(\mathrm{C}_{l})}}{\partial{n(\mathrm{E}_{j})}}M_{lk}\right)$
(3)
This depends upon the matrix
$\frac{\partial{n_{0}(\mathrm{C}_{i})}}{\partial{n(\mathrm{E}_{j})}}$, which
is $\frac{\partial{\hat{n}(\mathrm{C}_{i})}}{\partial{n(\mathrm{E}_{j})}}$
from the previous iteration. In the first iteration, the second term vanishes
($\frac{\partial{n_{0}(\mathrm{C}_{i})}}{\partial{n(\mathrm{E}_{j})}}=0$) and
we get
$\frac{\partial{\hat{n}(\mathrm{C}_{i})}}{\partial{n(\mathrm{E}_{j})}}=M_{ij}$.
The error propagation matrix can be used to obtain the covariance matrix on
the unfolded distribution
$V(\hat{n}(\mathrm{C}_{k}),\hat{n}(\mathrm{C}_{l}))=\sum_{i,j=1}^{n_{\mathrm{E}}}\frac{\partial{\hat{n}(\mathrm{C}_{k})}}{\partial{n(\mathrm{E}_{i})}}V(n(\mathrm{E}_{i}),n(\mathrm{E}_{j}))\frac{\partial{\hat{n}(\mathrm{C}_{l})}}{\partial{n(\mathrm{E}_{j})}}$
(4)
from the covariance matrix of the measurements,
$V(n(\mathrm{E}_{i}),n(\mathrm{E}_{j}))$.
Without the new second term in Eq. (3), the error is underestimated if more
than one iteration is used, but agrees well with toy MC tests if the full
error propagation is used, as shown in Fig. 5.
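The final propagation in Eq. (4) is plain matrix algebra: given an error propagation matrix $J\equiv\partial\hat{n}(\mathrm{C})/\partial n(\mathrm{E})$ already accumulated over the iterations via Eq. (3), a sketch (our illustration, not the RooUnfoldBayes internals) is simply:

```cpp
#include "TMatrixD.h"

// V_C = J * V_E * J^T  (Eq. 4), with J the error propagation matrix of Eq. (3)
// and V_E the covariance matrix of the measurements.
TMatrixD propagateCovariance(const TMatrixD& J, const TMatrixD& VE) {
    TMatrixD JT(TMatrixD::kTransposed, J);
    return J * VE * JT;
}
```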
## 6 Status and plans
RooUnfold was first developed in the BABAR software environment and released
stand-alone in 2007. Since then, it has been used by physicists from many
different particle physics, particle-astrophysics, and nuclear physics groups.
Questions, suggestions, and bug reports from users have prompted new versions
with fixes and improvements.
Last year I started working with a small group hosted by the Helmholtz
Alliance, the Unfolding Framework Project[9]. The project is developing
unfolding experience, software, algorithms, and performance tests. It has
adopted RooUnfold as a framework for development.
Development and improvement of RooUnfold is continuing. In particular,
determination of the systematic errors due to uncertainties on the response
matrix, and due to correlated measurement bins will be added. The RooUnfold
package will be incorporated into the ROOT distribution, alongside the
existing TUnfold and TSVDUnfold classes.
## References
* [1] The RooUnfold package and documentation are available from
`http://hepunx.rl.ac.uk/~adye/software/unfold/RooUnfold.html`
* [2] G. D’Agostini, “A Multidimensional unfolding method based on Bayes’ theorem”, Nucl. Instrum. Meth. A 362 (1995) 487.
* [3] K. Bierwagen, “Bayesian Unfolding”, presented at PHYSTAT 2011 (CERN, Geneva, January 2011), to be published in a CERN Yellow Report.
* [4] A. Hocker and V. Kartvelishvili, “SVD Approach to Data Unfolding”, Nucl. Instrum. Meth. A 372 (1996) 469.
* [5] V. Kartvelishvili, “Unfolding with SVD”, presented at PHYSTAT 2011 (CERN, Geneva, January 2011), to be published in a CERN Yellow Report.
* [6] K. Tackmann, “SVD-based unfolding: implementation and experience”, presented at PHYSTAT 2011 (CERN, Geneva, January 2011), to be published in a CERN Yellow Report.
* [7] The TUnfold package is available in ROOT [8] and documented in
`http://www.desy.de/~sschmitt/tunfold.html`
* [8] R. Brun and F. Rademakers, “ROOT: An object oriented data analysis framework”, Nucl. Instrum. Meth. A 389 (1997) 81. See also `http://root.cern.ch/`.
* [9] For details of the Unfolding Framework Project, see
`https://www.wiki.terascale.de/index.php/Unfolding_Framework_Project`
# New Young Star Candidates in CG4 and Sa101
L. M. Rebull11affiliation: Spitzer Science Center/Caltech, M/S 220-6, 1200 E.
California Blvd., Pasadena, CA 91125 (luisa.rebull@jpl.nasa.gov) , C. H.
Johnson22affiliation: Breck School, Minneapolis, MN , V. Hoette33affiliation:
Yerkes Observatory, University of Chicago , J. S. Kim44affiliation: University
of Arizona, Tucson, AZ , S. Laine11affiliation: Spitzer Science
Center/Caltech, M/S 220-6, 1200 E. California Blvd., Pasadena, CA 91125
(luisa.rebull@jpl.nasa.gov) , M. Foster44affiliation: University of Arizona,
Tucson, AZ , R. Laher11affiliation: Spitzer Science Center/Caltech, M/S 220-6,
1200 E. California Blvd., Pasadena, CA 91125 (luisa.rebull@jpl.nasa.gov) , M.
Legassie11affiliation: Spitzer Science Center/Caltech, M/S 220-6, 1200 E.
California Blvd., Pasadena, CA 91125 (luisa.rebull@jpl.nasa.gov)
55affiliation: Raytheon, Pasadena, CA , C. R. Mallory66affiliation: Pierce
College, Woodland Hills, CA , K. McCarron77affiliation: Oak Park and River
Forest High School, Oak Park, IL W. H. Sherry88affiliation: NOAO/NSO, Tucson,
AZ
###### Abstract
The CG4 and Sa101 regions together cover a region of $\sim$0.5 square degree
in the vicinity of a “cometary globule” that is part of the Gum Nebula. There
are seven previously identified young stars in this region; we have searched
for new young stars using mid- and far-infrared data (3.6 to 70 microns) from
the Spitzer Space Telescope, combined with ground-based optical data and near-
infrared data from the Two-Micron All-Sky Survey (2MASS). We find infrared
excesses in all 6 of the previously identified young stars in our maps, and we
identify 16 more candidate young stars based on apparent infrared excesses.
Most (73%) of the new young stars are Class II objects. There is a tighter
grouping of young stars and young star candidates in the Sa101 region, in
contrast to the CG4 region, where there are fewer young stars and young star
candidates, and they are more dispersed. Few likely young objects are found in
the “fingers” of the dust being disturbed by the ionization front from the
heart of the Gum Nebula.
stars: formation – stars: circumstellar matter – stars: pre-main sequence –
infrared: stars
## 1 Introduction
Hawarden and Brand (1976) identified “several elongated, comet-like objects”
in the Gum Nebula. These objects have dense, dark, dusty heads and long, faint
tails, which are generally pointing away from the center of the Vela OB2
association. More such “Cometary Globules” (CGs) were subsequently identified
in the Gum Nebula (e.g., Sandqvist 1976, Reipurth 1983), but similar
structures had also been identified elsewhere (e.g., Orion, Rosette Nebula,
etc.) in the context of Bok Globules (e.g., Bok & Reilly 1947) and “elephant
trunks” (e.g., Osterbrock 1957). These objects are all thought to be related
in the following sense – certain regions of the molecular cloud are dense
enough to persist when the stellar winds and ionizing radiation from OB stars
powering an H II region move over them, initially forming elephant trunks and
then eventually cometary globules. These structures often also have bright
rims, thought to originate from the OB stars’ winds and radiation, and are
often actively forming stars (e.g., Reipurth 1983), most likely triggered by
the interaction with winds and radiation from the OB stars (e.g., Haikala et
al. 2010).
Cometary Globule 4 (CG4) in the Gum Nebula has a striking appearance (see
Figures 1–5), where the combination of the original dust distribution plus
ablation from the OB winds and radiation has resulted in a relatively
complicated structure. A serendipitous placement of background galaxy ESO 257-
G 019 just 0.15$\arcdeg$ to the East of the heart of CG4 adds to the drama of
the image (see Figures 1–5). About a half a degree to the West of the heart of
CG4 is another cloud, named Sa101. This region was initially recognized by
Sandqvist (1977) as an opacity class 5 (on a scale of 6, e.g., fairly dark)
dark cloud. This cloud appears to have been shadowed, at least partially, by
CG4 from the ionization front. (See discussion in Pettersson 2008.)
Reipurth & Pettersson (1993) studied the region of CG4+Sa101, finding several
H$\alpha$ emission stars; see Table 1. They point out that CG-H$\alpha$1 and 7
are not associated with dusty material, and, as such, may have been associated
with clumps that have already evaporated, as opposed to the stars still
projected onto molecular cloud material. They argue on the basis of H$\alpha$
equivalent widths that these cannot be foreground or background dMe stars, but
are instead likely young stars, members of the association. They then use this
to argue that this cloud is most likely a part of the Gum Nebula Complex (as
opposed to a foreground or background object). We therefore assume as well
that the stars associated with CG4+Sa101 are also associated with the Gum
Nebula.
Distances to CG4+Sa101 and even the Gum Nebula are uncertain, with values
between 300 and 500 pc appearing in the literature (e.g., Franco 1990). The
generally accepted source of strong ultraviolet (UV) radiation is
$\gamma^{2}$Velorum, which is taken to be 360-490 pc away (Pozzo et al. 2000).
However, the Gum Nebula is elongated along our line of sight, so the distance
to different parts of the nebula could be significantly different than the
distance to $\gamma^{2}$Velorum. Vela OB2 is $\sim$ 425 pc (Pozzo et al.
2000). In the context of this paper, we test the extrema of the distance
estimates of 300 and 500 pc, though we note that our results are not strongly
dependent on distance.
Since the CG4+Sa101 region contains some previously identified young stars, it
is likely that there are more young stars, perhaps lower mass or more embedded
than those discovered previously. Kim et al. (2003), using a preliminary
reduction of some of the optical data used here, discussed some additional
candidate young stars in this region. Since it is now commonly believed that
every low-mass star goes through a period of having a circumstellar disk,
young stars can be identified via an infrared (IR) excess, assumed to be due
to a circumstellar disk. A survey in the IR can be used to identify objects
having an IR excess and thus distinguish candidate young stars from most
foreground or background objects, at least those foreground or background
stars without circumstellar disks. The IR also more easily penetrates the
dusty environs of star-forming regions, particularly globules such as these
cometary globules in the Gum Nebula.
The Spitzer Space Telescope (Werner et al. 2004) observed the CG4+Sa101 region
with the Infrared Array Camera (IRAC; Fazio et al. 2004) at 3.6, 4.5, 5.8, and
8 $\mu$m, and with the Multiband Imaging Photometer for Spitzer (MIPS; Rieke
et al. 2004) at 24 and 70 $\mu$m. We used these data to search this region for
additional young stellar object (YSO) candidates. We combined these Spitzer
data with data from the near-infrared Two-Micron All-Sky Survey (2MASS;
Skrutskie et al. 2006) and from ground-based optical photometric data that we
have obtained, and used the multi-wavelength catalog to evaluate and rank our
list of Spitzer-selected YSO candidates.
The observations and data reduction are described in §2. We select YSO
candidates using Spitzer colors in §3, and discuss their overall properties in
§4. We include a few words on the serendipitously observed galaxy in §5.
Finally, we summarize our main points in §6.
Table 1: Previously identified young stars in the CG4+Sa101 region.$^{a}$
name | Region | RA (J2000) | Dec (J2000) | $U$ (mag) | $B$ (mag) | $V$ (mag) | $J$ (mag) | $H$ (mag) | $K$ (mag) | Spec. Type
---|---|---|---|---|---|---|---|---|---|---
CG-H$\alpha$ 1$^{b}$ | SA 101 | 07 30 37.6 | -47 25 06 | $\cdots$ | $\cdots$ | $>$17 | $\cdots$ | $\cdots$ | $\cdots$ | M3-4
CG-H$\alpha$ 2 | SA 101 | 07 30 57.5 | -46 56 11 | $\cdots$ | $\cdots$ | $>$17 | $\cdots$ | $\cdots$ | $\cdots$ | M2:
CG-H$\alpha$ 3 | SA 101 | 07 31 10.8 | -47 00 32 | 17.50 | 16.59 | 14.99 | 11.51 | 10.35 | 9.62 | K7
CG-H$\alpha$ 4 | SA 101 | 07 31 21.8 | -46 57 45 | 16.91 | 15.99 | 14.59 | 11.21 | 10.38 | 9.91 | K7-M0
CG-H$\alpha$ 5 | SA 101 | 07 31 36.6 | -47 00 13 | 16.74 | 16.51 | 15.25 | 11.73 | 10.64 | 9.96 | K2-5
CG-H$\alpha$ 6 | SA 101 | 07 31 37.4 | -47 00 21 | 16.53 | 15.63 | 14.21 | 10.42 | 9.52 | 9.06 | K7
CG-H$\alpha$ 7 | CG 4 | 07 33 26.8 | -46 48 42 | 16.00 | 15.16 | 13.97 | $\cdots$ | $\cdots$ | $\cdots$ | K5
$^{a}$Information tabulated here comes from Reipurth & Pettersson (1993), with positions updated to be J2000 and tied to the Spitzer and 2MASS coordinate system. We assumed the errors on the photometry to be $\sim$20% when plotting them in the spectral energy distributions (SEDs) in Figures 14–16.
$^{b}$Off the edge of the Spitzer maps discussed here.
## 2 Observations, Data Reduction, and Ancillary Data
Figure 1: Approximate location of optical, IRAC, MIPS coverage, superimposed
on a Palomar Observatory Sky Survey (POSS) image. Purple is optical, blue is
IRAC-1 and -3, green is IRAC-2 and -4, orange is MIPS-1, and red is MIPS-2.
The approximate locations of CG4 and Sa101 are also indicated. The galaxy ESO
257- G 019 is located on the left, partially obscured by the edge of the
optical coverage.
In this section, we discuss the IRAC and MIPS data acquisition and reduction.
We briefly summarize the optical ($BVR_{c}I_{c}$) data reduction, which will
be covered in more detail in Kim et al., in preparation. We also discuss
merging the photometric data across bands, and with the 2MASS near-IR catalog
($JHK_{s}$). The regions of the sky covered by IRAC, MIPS, and the optical
observations are indicated in Figure 1.
We note for completeness that the four channels of IRAC are 3.6, 4.5, 5.8, and
8 microns, and that the three channels of MIPS are 24, 70, and 160 microns.
These bands can be referred to equivalently by their channel number or
wavelength; the bracket notation, e.g., [24], denotes the measurement in
magnitudes rather than flux density units (e.g., Jy). Further discussion of
the bandpasses can be found in, e.g., the Instrument Handbooks, available from
the Spitzer Science Center (SSC) or the Infrared Science Archive (IRSA)
Spitzer Heritage Archive (SHA) websites.
### 2.1 IRAC Data
Figure 2: The IRAC 3.6 $\mu$m (channel 1) mosaic. Figure 3: The IRAC 4.5
$\mu$m mosaic (channel 2). Figure 4: The IRAC 5.8 $\mu$m mosaic (channel 3).
Figure 5: The IRAC 8 $\mu$m mosaic (channel 4).
We used the IRAC data for CG4 from program 462, AORKEY111An AOR is an
Astronomical Observation Request, the fundamental unit of Spitzer observing.
An AORKEY is the unique 8-digit identifier for the AOR, which can be used to
retrieve these data from the Spitzer Archive. 24250880; for Sa101, we used the
IRAC data from program 20714, AORKEY 16805888. The IRAC data from program 462
(for CG4) were taken on 2007-12-29 with 12 sec high-dynamic-range (HDR)
frames, so there are two exposures at each pointing, 0.6 and 12 sec, with 3
dithers per position, for a total integration time of 36 seconds (on average).
The IRAC data from program 20714 (for Sa101) were taken on 2006-03-27 with 30
sec HDR frames; there are also two exposures per pointing, but deeper, 1.2 and
30 seconds. For this observation, there are two dithers per position, for a
total integration time of 60 seconds (on average). Because of the different
integration times, we reduced the Sa101 and CG4 observations independently
even though they overlap on the sky (in Figures 2-5, the jagged-edged
observation is from the observation in program 462, and the smooth-edged
observation on the right is from the observation in program 20714).
We note that there are additional IRAC data in this region that we did not
use. IRAC data from program 202 were of a very small region centered on the
head of the globule. We did not include these data in an effort to make our
survey as uniform as possible over the entire surveyed region. IRAC data from
program 20714 for CG4 were taken in non-HDR mode, with 30 second exposures; as
a result of some very bright stars in the field of view, the instrumental
effects rendered these data very difficult to work with. Our science goals
near CG4 can be met with the total integration time from program 462 alone. In
addition, the data from program 462 cover a larger area ($\sim
0.8\arcdeg\times\sim 1\arcdeg$) than from program 20714 ($\sim
0.4\arcdeg\times\sim 0.5\arcdeg$). For these reasons, we did not incorporate
the CG4 IRAC data from program 20714 in this analysis.
We started with the corrected basic calibrated data (CBCDs) processed using
SSC pipeline version 18.7. Because of the very bright stars in the field of
view, and because the data from program 462 were taken with cluster targets,
we could not use the pipeline-processed mosaics. Moreover, the artifact
correction, which is normally done for individual cluster targets separately
in the SSC pipeline processing, is much improved when using the CBCD files
from program 462 all at once. We reprocessed the IRAC data from both program
462 (for CG4) and program 20714 (for Sa101), using MOPEX (Makovoz & Marleau
2005) to calculate overlap corrections and create mosaics with very much
reduced instrumental artifacts compared to the pipeline mosaics. The pixel
size for our mosaics was the same as the pipeline mosaics, 0.6 arcseconds,
half of the native pixel scale. We created separate mosaics for the long and
the short exposures at each channel for photometric analysis. For display
purposes, we further used MOPEX to combine the two long-frame observations
into one large mosaic per channel, as seen in Figures 2-5. The component
mosaics were properly weighted in terms of signal-to-noise and exposure time.
The total area covered by at least one IRAC channel (as seen in Figures 2-5)
is $\sim$0.5 square degrees.
To obtain photometry of sources in this region, we used the APEX-1frame module
from MOPEX to perform source detection on the resultant long and short mosaics
for each observation separately. We took those source lists and used the
aper.pro routine in IDL to perform aperture photometry on the mosaics with an
aperture of 3 native pixels (6 resampled pixels), and an annulus of 3-7 native
pixels (6-14 resampled pixels). The corresponding aperture corrections are,
for the four IRAC channels, 1.124, 1.127, 1.143, & 1.234, respectively, as
listed in the IRAC Instrument Handbook. As a check on the photometry, the
educators and students associated with this project (see Acknowledgments) used
the Aperture Photometry Tool (APT; Laher et al. 2011a,b) to confirm by hand
the measurements for all the targets of interest. To convert the flux
densities to magnitudes, we used the zero points as provided in the IRAC
Instrument Handbook: 280.9, 179.7, 115.0, and 64.13 Jy, respectively, for the
four channels. (No array-dependent color corrections nor regular color
corrections were applied.) We took the errors as produced by IDL to be the
best possible internal error estimates; to compare to flux densities from other sources, we added a flat 5% error estimate in quadrature.
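As a concrete example of the last step, the conversion from an aperture-corrected flux density to a magnitude with the quoted IRAC zero points could be sketched as follows (the function and array names are ours, not part of the reduction pipeline):

```cpp
#include <cmath>

// m = -2.5 log10(F / F_zero), after applying the channel's aperture correction.
// Zero points (Jy) and aperture corrections are the values quoted in the text.
const double IRAC_ZERO_POINT_JY[4] = {280.9, 179.7, 115.0, 64.13};
const double IRAC_APERTURE_CORR[4] = {1.124, 1.127, 1.143, 1.234};

double iracMagnitude(double fluxJy, int channel /* 1..4 */) {
    double corrected = fluxJy * IRAC_APERTURE_CORR[channel - 1];
    return -2.5 * std::log10(corrected / IRAC_ZERO_POINT_JY[channel - 1]);
}
```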
To obtain one source list per channel per observation, we then merged the
short and the long exposures for each channel separately, and for each
observation independently because of the different exposure times as noted
above. The crossover points between taking fluxes from the short and long exposures were taken from empirical studies of other star-forming regions, and were magnitudes of 9.5, 9.0, 8.0, & 7.0 for the four IRAC channels, respectively. We performed this merging via a strict by-position search,
looking for the closest match within 1$\arcsec$. This maximum radius for
matching was determined via experience with other star-forming regions (e.g.,
Rebull et al. 2010). The limiting magnitudes of these final source lists are
the same for both observations, and are [3.6]$\sim$17 mag, [4.5]$\sim$17 mag,
[5.8]$\sim$15.5 mag, and [8]$\sim$14.5 mag.
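A minimal sketch of the by-position merge is given below: a brute-force nearest-neighbor search using a small-angle separation formula and a 1$\arcsec$ matching radius. The catalog structure and names are placeholders, not the actual merging code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Source { double raDeg, decDeg; };   // placeholder catalog entry

// Small-angle separation in arcsec, compressing RA by cos(dec).
double separationArcsec(const Source& a, const Source& b) {
    const double degToRad = 3.14159265358979323846 / 180.0;
    double dra  = (a.raDeg - b.raDeg) * std::cos(b.decDeg * degToRad);
    double ddec = a.decDeg - b.decDeg;
    return std::sqrt(dra * dra + ddec * ddec) * 3600.0;
}

// Index of the closest catalog source within maxSep arcsec, or -1 if none.
int closestMatch(const Source& target, const std::vector<Source>& catalog,
                 double maxSep = 1.0) {
    int best = -1;
    double bestSep = maxSep;
    for (std::size_t i = 0; i < catalog.size(); ++i) {
        double s = separationArcsec(target, catalog[i]);
        if (s <= bestSep) { bestSep = s; best = static_cast<int>(i); }
    }
    return best;
}
```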
### 2.2 MIPS Data
Figure 6: The MIPS 24 $\mu$m mosaic. The 24 $\mu$m coverage consists of two
pointed photometry-mode small maps, plus the 24 $\mu$m data serendipitously
obtained during 70 $\mu$m photometry observations. Extended emission and point
sources are both apparent. Figure 7: The MIPS 70 $\mu$m mosaic. The 70 $\mu$m
coverage is two pointed photometry-mode small maps. Extended emission and
point sources are both apparent.
There are two MIPS AORs in the CG4 region and two MIPS AORs in the Sa101
region, all four of which were obtained as part of program 20714 (AORKEYs
16805632, 16807936, 16806144, 16808192) on 2006-05-08 or 2006-06-12; see
Figures 1, 6, and 7. The AORs were obtained in MIPS photometry mode, nominally
centered on 7:31:18.7, -46:57:45 for Sa101 and 7:33:48, -46:49:59.9 for CG4.
One of each pair of AORs is explicitly a MIPS-24 photometry observation, and
the other is a MIPS-70 photometry observation. During the 70 $\mu$m
observation, the 24 $\mu$m array is still turned on and is still collecting
valid data. We combined the prime 24 $\mu$m data from the MIPS-24 photometry
mode observations with the serendipitous 24 $\mu$m data from the MIPS-70
photometry mode observations to obtain larger maps at 24 $\mu$m. The original
explicitly 24 $\mu$m photometry observations are small photometry-mode maps,
with 3 s integration per pointing, but a net integration time of $\sim$210 s
over most of the resultant mosaic. The serendipitously obtained data averaged
$\sim$350 s net integration time; where the deliberate and serendipitous data
overlapped, integration times can be $\sim$450 s. The 70 $\mu$m photometry
observations were also small photometry-mode maps, with 10 s integration per
pointing; the net integration time over most of the mosaics was $\sim$400 s.
The data for 24 $\mu$m, like those for IRAC, were affected by the bright
objects, and required additional processing beyond what the online pipelines
could provide. We started with S18.13 enhanced BCDs (eBCDs) from the pipeline.
We implemented a self-flat for each AOR separately, as described in the MIPS
Instrument Handbook, available from the SSC or IRSA SHA websites. For each
pair of overlapping 24 micron maps, we then ran an overlap correction using
the overlap script that comes with MOPEX, and then created one mosaic for CG4
and one for Sa101, again using MOPEX. Our mosaics had the same pixel size as
the online mosaics, 2.45$\arcsec$. In order to combine the images into one
mosaic for display in Figure 6, the different overall background levels
between the two observations (having an origin in the different Zodiacal light
levels at the times of the two observations) were problematic. The brighter of
the two was artificially lowered via median subtraction to bring its dynamic
range into a similar regime as the fainter; this renders photometry on this
net mosaic invalid, but the morphology seen in Figure 6 is still valid. The
total area covered by the net 24 $\mu$m map is only $\sim$0.3 square degrees,
smaller than that of the IRAC map.
To obtain photometry at 24 $\mu$m, we ran APEX-1frame on each of the mosaics
(one per observation) and performed point-response-function (PRF) fitting
photometry using the SSC-provided PRF. Tests using the apex_qa module portion
of MOPEX suggest that our photometry is well within expected errors. For three
problematic sources, we used aperture photometry instead of the PRF-fitted
photometry, as they provided a better fit in apex_qa. We used the signal-to-
noise ratio (SNR) value returned by APEX-1frame as the best estimate of the
internal (statistical) errors, adding a 4% flux density error in quadrature as
a best estimate of the absolute uncertainty. The limiting magnitude of these
observations is [24]$\sim$10.5 mag. Note that we optimized our data reduction
to obtain measurements of the brighter sources and sources superimposed on the
nebulosity; many sources fainter than this are apparent in the image but not
included in our catalog, simply because our scientific goals are aimed at the
brighter objects. For one source of interest below (073243.5-464941, which was
considered and then rejected as a YSO candidate; see §3), an upper limit was
obtained at the given position by laying down an aperture as if a source were
there, and taking 3 times that value for the 3$\sigma$ limit. To convert the
flux densities to magnitudes, we used the zero point as found in the MIPS
Instrument Handbook, 7.14 Jy.
At 70 $\mu$m, there are viable observations from AORKEYs 16807936 and
16808192. We downloaded data processed with pipeline version S18.12.
online pipeline does a very good job of producing mosaics; see Figure 7, where
there are a handful of point sources and extended emission visible. The online
pipeline produces both filtered and unfiltered mosaics; the filtering
preserves the flux densities of the point sources and improves their signal-
to-noise, especially for faint sources, but destroys the flux density
information for the extended emission. The unfiltered mosaics are shown in
Figure 7, but we performed photometry on the filtered mosaics. The pipeline
mosaics have resampled 4$\arcsec$ pixels (as opposed to 5.3$\arcsec$ native
pixels), and the two observations together cover about 0.1 square degrees. We
used APEX-1frame to do PRF fitting on the pipeline filtered mosaics for the
point sources, using the SSC-provided PRF. For one problematic source,
aperture photometry provided a better flux density estimate. There are only 11
objects with 70 $\mu$m detections, and there is a large variation in
background levels, so quoting a limiting magnitude is difficult, but is very
approximately 3 mag. We assumed a conservative, flat 20% flux density error.
The zero point we used again came from the MIPS Instrument Handbook, 0.775 Jy.
Where there was 70 $\mu$m coverage for the sources of interest, we placed an
aperture at the expected location of the source, and performed photometry as
if there were a source there, taking 3 times that value as the 3$\sigma$ limit
that appears in Table 2 below.
### 2.3 Optical Data
The optical data will be discussed further in Kim et al., in preparation, but
we summarize the important aspects of the data reduction here.
The $BVR_{c}I_{c}$ photometry of the CG4+Sa101 region were obtained during
2001 March 6, 7, 9, 10, and 11 using the 2K$\times$2K CCD at the 0.9m
telescope at the Cerro Tololo Inter-American Observatory (CTIO). The images
have a pixel scale of 0.4″ in a 13.6′ field of view.
Bias and twilight sky flat fields were taken at the beginning and at the end
of each night. Long and short (300 sec and 30 sec) exposures were taken for
object fields. During every photometric night we observed Landolt (1992) standard stars in two or three fields several times per night for photometric
calibration.
We performed aperture photometry using multiple aperture sizes, and the photometry with the highest signal-to-noise (S/N) ratio was chosen as the final photometry for each star. For the standard stars, we used an aperture radius
of 17 pixels. We used IRAF222IRAF is distributed by the National Optical
Astronomy Observatory, which is operated by the Association of Universities
for Research in Astronomy (AURA) under cooperative agreement with the National
Science Foundation./PHOTCAL routines to solve for the zero point, extinction
and color terms of the standard star solution.
For the target stars in CG4 and Sa101, we used a custom IDL photometry
pipeline (written by W. H. Sherry), which was developed for the CTIO 0.9m
telescope. For each target, aperture photometry was performed using multiple aperture sizes ranging from 2 to 17 pixels. The
highest S/N photometry was chosen as the final photometry.
We used an aperture correction to place our photometry on the same photometry
system as our standard stars. The point-spread function (PSF) of the CTIO 0.9m
telescope varies noticeably as a function of location on the CCD. This is
insignificant for large apertures, but for aperture sizes of 2-3 pix, the
difference can be a few percent. We accounted for the spatial dependence of
the aperture correction in each image by fitting the aperture corrections for
stars with photometric errors better than 0.02 mag with a quadratic function.
The uncertainty of the aperture correction is about 0.01 magnitude. For each
aperture, the uncertainty on the instrumental magnitude is calculated
including the uncertainty on the aperture correction for each aperture.
We used the imwcs program written by D. Mink333Documentation and source code are
available at http://tdc-www.harvard.edu/software/wcstools/ (Mink 1997) to
determine the astrometric solution of each image. It fits the pixel
coordinates of our targets to the known positions of USNO A2.0 stars located
in each field. We then measured positions to an accuracy of $\sim$0$\farcs$3
relative to the USNO A2.0 reference frame.
For each pointing, we used a 1$\arcsec$ matching radius to match sources
within the optical $BVR_{c}I_{c}$ filters. The coordinates are averaged over different filters, and the typical average uncertainty is 0$\farcs$2–0$\farcs$3. To find duplicates between adjacent pointings, we
again looked for positional matches within 1$\arcsec$, and then took a
weighted mean of the available photometry.
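For concreteness, a sketch of an inverse-variance weighted mean of duplicate measurements is shown below; this is our own illustration of the standard combination, not the actual pipeline code.

```cpp
#include <cmath>
#include <vector>

// Inverse-variance weighted mean of duplicate measurements; the combined
// uncertainty is sigma = 1 / sqrt(sum_i 1/sigma_i^2).
struct Measurement { double value, sigma; };

Measurement weightedMean(const std::vector<Measurement>& duplicates) {
    double sumW = 0.0, sumWV = 0.0;
    for (const auto& m : duplicates) {
        double w = 1.0 / (m.sigma * m.sigma);
        sumW  += w;
        sumWV += w * m.value;
    }
    return {sumWV / sumW, 1.0 / std::sqrt(sumW)};
}
```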
Basically the entire IRAC map was covered west of 113.8 degrees RA (07:35:12);
see Figure 1. We have found no YSO candidates east of this. The completeness
limits over the field are as follows: $B\sim 19$ mag, $V\sim 18$ mag,
$R_{c}\sim 17.5$ mag, $I_{c}\sim 17$ mag. However, we note that there are
fewer sources per projected area detected (e.g., effectively shallower limits)
in the regions where there is molecular cloud material. The zero-points we
used for conversion of the magnitudes to flux densities (for inclusion in the
spectral energy distributions in Figures 14–16) were, respectively, 4000.87,
3597.28, 2746.63, and 2432.84 Jy.
### 2.4 Bandmerging
In summary, to bandmerge the available data, we first merged the photometry
from all four IRAC channels together with near-IR 2MASS data within each
observation (CG4 and Sa101), and then merged together those source lists from
each observation. We next included the MIPS data, and then the optical data.
We now discuss each of these steps in more detail. At the end of this section,
we discuss some aggregate statistics of the bandmerged catalog.
To merge the photometry from all four IRAC channels together, we started with
a source list from 2MASS. This source list includes $JHK_{s}$ photometry and
limits, with high-quality astrometry. We merged this source list by position
to the IRAC-1 source list, using a search radius of 1$\arcsec$, a value
empirically determined via experience with other star-forming regions (e.g.,
Rebull et al. 2010). Objects appearing in the IRAC-1 list but not the
$JHK_{s}$ list were retained as new potential sources. The master catalog was
then merged, in succession, to IRAC-2, 3, and 4, again using a matching radius
of 1$\arcsec$.
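The cascade of positional merges can be sketched as follows. This is a minimal illustration of the bookkeeping only (match within 1$\arcsec$, attach the new band's photometry, retain unmatched detections as new potential sources), not the actual merging software, and the catalog entries shown are invented.

```python
import math

def sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation (arcsec) between two positions given in degrees."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    return 3600.0 * math.hypot(dra, dec1 - dec2)

def merge_band(master, band_list, radius=1.0):
    """Merge one band's detections into the master catalog by position.

    master    : list of dicts with 'ra', 'dec' (deg) plus photometry keys
    band_list : list of dicts with 'ra', 'dec' and photometry for the new band
    Sources matched within `radius` arcsec have their photometry attached;
    unmatched detections are retained as new potential sources, as in the text.
    """
    for src in band_list:
        if master:
            seps = [sep_arcsec(src["ra"], src["dec"], m["ra"], m["dec"]) for m in master]
            k = min(range(len(master)), key=lambda i: seps[i])
            if seps[k] <= radius:
                master[k].update({b: v for b, v in src.items() if b not in ("ra", "dec")})
                continue
        master.append(dict(src))
    return master

# Start from the 2MASS list and merge the IRAC channels in succession (toy data).
catalog = [{"ra": 113.5000, "dec": -46.9500, "J": 12.9, "H": 12.0, "Ks": 11.4}]
irac1 = [{"ra": 113.50002, "dec": -46.94998, "I1": 10.9},
         {"ra": 113.6200, "dec": -46.9100, "I1": 15.2}]   # unmatched: new potential source
catalog = merge_band(catalog, irac1)
print(len(catalog), "sources after merging IRAC-1")
```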
Because we are primarily interested in objects detected by Spitzer, we dropped
any objects not having flux densities in at least one Spitzer band (e.g.,
objects off the edge of the Spitzer maps, having measurements only in 2MASS).
Because the source detection algorithm we used can be fooled by instrumental
artifacts, we also explicitly dropped objects seen only in one IRAC band as
likely artifacts.
For the sources of interest later in the paper, most have counterparts in all
three bands in 2MASS by this point in the merging. However, for two sources,
this matching failed, at least in part. For one source, 073121.8-465745 (=CG-
Ha4), the automatic merging found a counterpart with a measured $J$ magnitude,
but the $HK_{s}$ measurements are flagged with a photometric quality (ph_qual)
flag of ‘E’, denoting that the goodness of fit quality was very poor, or that
the photometry fit did not converge, or that there were insufficient
individual data frames for the measurement. Since there was a good measurement
at $J$, we assumed that there were sufficient frames at $HK_{s}$, and that
something had happened to the fit. We took the values as reported in the
catalog, assumed a large error bar, and used these values in the table and
plots below; as will be seen in the spectral energy distribution (SED) for
this object (Fig. 15, top center), these values are probably close to what is
most appropriate for this object. For source 073425.3-465409, the automatic
merging failed, most likely because the 2MASS counterpart is slightly extended,
and it may be slightly extended at 3.6 $\mu$m as well. The nearest match in
the catalog is $\sim 2\arcsec$ away, very large by comparison to other source
matches here, but manual inspection strongly suggests that this is the
appropriate counterpart to the source seen at Spitzer bands. See Appendix A.21
for more on this interesting source.
We then compared the source lists from the separate observations, CG4 and
Sa101, again using a matching radius of 1$\arcsec$. For objects detected in
more than one mosaic, we took a weighted average of the flux density at the
corresponding band.
Next, we merged the 70 $\mu$m source list to the 24 $\mu$m source list. The 70
$\mu$m point-spread function is large compared to the positional accuracy
needed, and astrophysically, each 70 $\mu$m source ought to have a counterpart
at 24 $\mu$m, given the sensitivity of these observations. We individually
verified that each of the 70 $\mu$m point sources had a 24 $\mu$m counterpart,
and then merged these two source lists. To have the software successfully match the sources that were clearly matches by eye, a matching radius of 2.5$\arcsec$ was required, consistent with our experience in other star-
forming regions (e.g., Rebull et al. 2010). Since the two MIPS observations do
not overlap with each other, no explicit merging of the MIPS source lists from
the two observations was required beyond simple concatenation.
To combine the merged MIPS source list into the merged 2MASS+IRAC catalog, we
used a positional source match radius of 2$\arcsec$, again determined via
experience with other star-forming regions (e.g., Rebull et al. 2010). The
MOPEX source detection algorithm can be fooled by structure in the nebulosity
in the image, and by inspection, this was the case for these data. To weed out
these false ‘sources’, we then dropped objects from the catalog that were
detections only at 24 $\mu$m and no other bands. Finally, to merge the $J$
through 70 $\mu$m catalog to the optical ($BVR_{c}I_{c}$) catalog, we looked
for nearest neighbors within 1$\arcsec$.
After this entire process, there are $\sim$21,000 sources with IRAC-1 (3.6
$\mu$m) or IRAC-2 (4.5 $\mu$m) detections, $\sim$9000 sources with IRAC-3 (5.8
$\mu$m) detections, and $\sim$4000 sources with IRAC-4 (8 $\mu$m) detections.
About 3000 ($\sim$15%) of the IRAC sources have viable data at all four IRAC
bands, nearly all of which have counterparts in 2MASS. The optical data do not
cover the entire IRAC map, but about half of the 4-band IRAC detections have
counterparts in the optical catalog. There are only $\sim$500 sources at 24
$\mu$m in our catalog and just 11 sources at 70 $\mu$m; note that the MIPS-24
map covers a much smaller area than the IRAC maps, and the MIPS-70 map is
smaller still (see Figures 1-7). Ten of the 11 MIPS-70 sources have
counterparts at all four IRAC bands; the one that does not is saturated at 2
of the IRAC bands. About 200 sources have all four IRAC bands plus MIPS-24.
## 3 Selection of YSO candidates with infrared excess
With our new multi-wavelength view of the CG4+Sa101 region, we can begin to
look for young stars. We focus on finding sources having an infrared excess
characteristic of YSOs surrounded by a dusty disk. There is no single Spitzer
color selection criterion (or set of criteria) that is 100% reliable in
separating members from non-member contaminants. Many have been considered in
the literature (e.g., Allen et al. 2004, Rebull et al. 2007, Harvey et al.
2007, Gutermuth et al. 2008, 2009, Rebull et al. 2010, Rebull et al. 2011).
Some make use of just MIPS bands, some make use of just IRAC bands, most use a
series of color criteria, and where possible, they make use of (sometimes
substantial) ancillary data. For the CG4+Sa101 region, we have some ancillary data, but the bulk of the data are IRAC+2MASS. In this case, the best choice for selecting YSO candidates is the approach developed by
Gutermuth et al. (2008, 2009) and adapted by Guieu et al. (2009, 2010) for the
case in which no extinction map is available. This selection method starts
from the set of objects detected at all four IRAC bands and uses 2MASS and
MIPS data where possible. It implements a series of color cuts to attempt to
remove contaminants such as background galaxies and knots of nebulosity.
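Schematically, the selection is a sequence of boolean color cuts applied to the four-band IRAC catalog. The sketch below shows only the structure of such a filter; the numerical thresholds are placeholders, not the actual Gutermuth et al. (2008, 2009) criteria, which include additional filters for PAH-bright galaxies, AGN, and knots of shock or PAH emission and should be taken from those papers.

```python
def is_yso_candidate(c12, c34, excess_cut=0.25, galaxy_cut=1.5):
    """Schematic two-color cut on [3.6]-[4.5] (c12) and [5.8]-[8.0] (c34).

    The thresholds here are placeholders for illustration only; the real
    selection applies the full series of cuts of Gutermuth et al. (2008, 2009).
    """
    has_excess = (c12 > excess_cut) or (c34 > excess_cut)          # redder than a bare photosphere
    not_galaxy_like = not (c34 > galaxy_cut and c12 < excess_cut)  # crude PAH-galaxy rejection
    return has_excess and not_galaxy_like

print(is_yso_candidate(0.45, 0.80))   # disk-like colors -> True
print(is_yso_candidate(0.05, 2.00))   # PAH-galaxy-like colors -> False
```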
Figure 8: [3.6]$-$[4.5] vs. [5.8]$-$[8] color-color diagram for CG4+Sa101.
Small dots are objects in the catalog; larger dots are objects identified as
contaminants, and large red diamonds highlight our YSO candidates. All of our
Spitzer-selected YSO candidates have colors in this diagram consistent with
known YSOs, but many contaminants do too.
When we impose these IRAC-based color cuts, we find 25 potential YSO
candidates. We then inspected each of these in all available images and color-
color and color-magnitude diagrams. On the basis of this inspection, we
dropped three of the 25 potential YSO candidates from our list, though
additional data will be needed to be sure that these objects are
extragalactic. Two of those dropped objects (073542.2-470126 and
073548.5-470727) have no available data other than IRAC, their SEDs are very
flat, and they are located near the edges of our images, far from other YSOs
and nebulosity. We suspect that these are extragalactic contaminants. The
third object, 073243.5-464941, is returned by the IRAC selection as having a
small excess at 5.8 and 8 $\mu$m. It is seen at 2MASS and IRAC bands, but is
undetected at 24 $\mu$m to a fairly stringent limit (971 $\mu$Jy, or 9.67
mag). If it has an excess at 24 $\mu$m, it is a very small excess. The 70
$\mu$m data do not cover this object, so there are no constraints (not even
limits) at 70 $\mu$m. Moreover, it is a relatively faint source next to a very
bright source, located far from any nebulosity. The wings of the bright source
are likely to adversely affect the photometric accuracy of the measurements
associated with this object. Because of this uncertainty and the very low
excess as measured, we have dropped this object from our YSO candidate list as
a likely foreground or background star.
The remaining 22 YSO candidates that pass the color cuts are shown in Figure
8. In this Figure, objects with zero color are likely foreground or background
stars (photospheres without disks), though some could be young stars that have
already shed their disks. Young stars with circumstellar disks are generally
red in both IRAC colors, but contaminants such as galaxies also may have these
colors. All of the 22 objects highlighted in this Figure have IRAC colors
consistent with young stars with disks.
Figure 9: [3.6] vs. [3.6]$-$[24] color-magnitude diagram for CG4+Sa101. Small
dots are objects in the catalog; larger dots are objects identified as
contaminants, and large red diamonds highlight our IRAC-selected YSO
candidates. All of the IRAC-selected YSO candidates have colors in this
diagram consistent with known YSOs. Figure 10: [24] vs. [24]$-$[70] color-
magnitude diagram for CG4+Sa101. As in prior plots, dots are objects
identified as contaminants, and large red diamonds highlight our IRAC-selected
YSO candidates. A red square highlights a very bright likely background star;
see the text. The arrows indicate the positions of the YSO candidates for
which we could obtain upper limits at 70 $\mu$m. The IRAC-selected YSO
candidates have colors in this diagram consistent with known YSOs.
The Gutermuth et al. (2008, 2009) selection criteria have provisions for
adding stars to the list of candidate YSOs based on properties at other bands,
such as MIPS bands. We now investigate the properties of objects in our
catalogs at the MIPS bands to see if we should add additional objects to our
list of YSO candidates. To summarize the rest of this section: while we find some interesting objects, in the end we do not add any more YSO candidates to
our list.
Young stars having inner disk holes and thus excesses at only the longest
bands can be revealed via comparison of the 24 $\mu$m measurement to a shorter
band, such as $K_{\rm s}$ or [3.6]. If the data are available, one should use
[3.6] vs. [3.6]$-$[24] rather than $K_{s}$ vs. $K_{s}-[24]$. There is an
intrinsic spread in $K_{s}-[24]$ photospheric colors that is not present in
[3.6]$-$[24] because late type stars are not colorless at $K_{s}-[24]$
(Gautier et al. 2007). The effects of reddening are stronger at $K_{\rm s}$
than at 3.6 $\mu$m. And, if 2MASS is the only source of $K_{\rm s}$, even
short 3.6 $\mu$m integrations can reach fainter sources than 2MASS does. In
our case of CG4+Sa101, the IRAC coverage is larger than the MIPS coverage, and
so we use [3.6] vs. [3.6]$-$[24] to look for any objects with an excess
starting at 24 $\mu$m.
Figure 9 shows this [3.6] vs. [3.6]$-$[24] diagram, with the same notation as
the prior figure. Ordinary stellar photospheres (likely foreground or
background stars) have $[3.6]-[24]\sim$0, and galaxies make up the large,
elongated source concentration near [3.6]$-$[24]$\sim$6, [3.6]$\sim$16.
Objects not in this region, e.g., the brighter and/or redder objects, are less
likely to be part of the Galactic or extragalactic backgrounds, and more
likely to be YSOs with a 24 $\mu$m excess. Most of the IRAC-identified YSO
candidates are indeed in the region of this diagram occupied by other known
YSOs (see, e.g., Rebull et al. 2010, 2011, Guieu et al. 2009, 2010). There is
one (073049.1-470209) that is among the reddest objects in this diagram, near
[3.6]$-$[24]$\sim$10; see Appendix A.1 for more on this
specific object. Most of the objects already ruled out as YSO candidates based
on their IRAC properties are in the extragalactic concentration of sources.
The objects with $[3.6]-[24]\sim$0 do not have apparent excesses, but there
are eight additional objects with [3.6]$-$[24]$>$1 and [3.6]$<$14.5 that seem
to have the right placement in this diagram to be YSO candidates. We
investigated each of these candidates, and none had evidence based on SED
shape, significance of excess, or appearance in the images compelling enough
to have us add them to our list of YSO candidates. The apparent small excesses
just at 24 $\mu$m are most likely due to source confusion at the lower
resolution 24 $\mu$m band, with either a background source or a low-mass
companion. The most compelling one based on the SED is 073355.0-464838, but it appears to be confused with a nearby object that emerges at 8 $\mu$m, and we strongly suspect that the 24 $\mu$m flux corresponds to that emerging object rather than to the point source seen at 8 $\mu$m and shorter bands. We do not add this source to our list of YSO candidates at
this time. Higher spatial resolution 24 $\mu$m observations would be required
to resolve this issue.
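The placement check described above amounts to a simple pair of cuts; a minimal sketch using the thresholds quoted in the text (the example magnitudes are invented):

```python
def has_possible_24um_excess(m36, m24, color_cut=1.0, bright_cut=14.5):
    """Flag sources placed like YSO candidates in [3.6] vs. [3.6]-[24].

    Uses the cuts quoted in the text: [3.6]-[24] > 1 and [3.6] < 14.5.
    Sources flagged this way still require individual inspection (SED shape,
    significance of the excess, appearance in the images) before being
    accepted as YSO candidates.
    """
    return (m36 - m24) > color_cut and m36 < bright_cut

print(has_possible_24um_excess(m36=12.3, m24=10.1))   # True: warrants inspection
print(has_possible_24um_excess(m36=15.8, m24=14.9))   # False: too faint / no excess
```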
Some of the brightest stars in the CG4+Sa101 region are saturated in at least
the first two IRAC bands, so neither of the YSO search mechanisms we have used
thus far would find them. However, these sources could be YSOs, and they are
not all saturated in MIPS bands. As our last attempt to search for YSO
candidates with infrared excesses, Figure 10 shows the [24] vs. [24]$-$[70]
diagram for our region. This diagram for this region is sparse, but a better-
populated diagram (see, e.g., Rebull et al. 2010) would basically resemble
Figure 9, with photospheres being bright and colorless (having
[24]$-$[70]$\sim$0), and galaxies red and faint. In Figure 10, the few available points include objects already ruled out as contaminants based on IRAC (galaxies), which fall in the extragalactic part of this diagram as well; six detections with colors consistent with YSOs; many limits for our YSO candidates; and one object (square in Figure 10) that is too bright to be a likely galaxy and does not appear to be particularly red in this diagram. This very bright object is 073339.7-464839,
and its SED suggests at first glance that it might be a YSO with a small
excess just at 24 and 70 $\mu$m. It is detected at $JHK_{s}$, [5.8], and [8]; the $K_{s}$, [5.8], and [8] measurements are all consistent with a single Rayleigh-Jeans tail, while the [24] and [70] measurements are offset on a different, redder Rayleigh-Jeans tail, as might be expected for a small, thermally heated dust disk. However, it is a very bright object, and saturated
at the shortest two IRAC bands; the photometry at $JHK_{s}$[5.8][8] may also
be compromised beyond what our formal errors suggest. It is matched in Simbad
to IRAS 07321-4642. We suspect that this is a background asymptotic giant
branch (AGB) star, or another sort of bright background giant, and not a
legitimate YSO candidate. We do not include it on our YSO candidate list.
We move ahead from here with the 22 IRAC-selected YSO candidates, and now
investigate their multi-band properties.
## 4 Properties of selected YSO candidates
### 4.1 Optical properties
Figure 11: $V$ vs. $(V-I_{c})$ color-magnitude diagram for CG4+Sa101. Small
dots are objects in the catalog, larger dots are objects identified as
contaminants, and large red diamonds highlight our YSO candidates. Isochrones
given are models from Siess et al. (2000) at 1, 10, and 30 Myr, scaled to 500
pc, where we have tuned the color-effective temperature relation such that the
100 Myr isochrone matches that of the Pleiades single-star sequence (Stauffer
et al. 2007, Jeffries et al. 2007). A reddening vector is also indicated. All
of the YSO candidates shown here, except for one, are in the region occupied
by young stars at 300-500 pc.
Optical data can greatly aid in confirming or refuting YSO candidacy because
they provide constraints on the Wien side of the SED. In Guieu et al. (2010),
most of the IRAC-selected candidates in IC 2118 proved to be too faint, most
vividly in the optical, to be likely cluster members. Just ten of our
candidate YSOs have optical data available, and they appear in Figure 11. The
objects with optical data that have already been ruled out as YSOs based on
their IRAC properties are all well below the Siess et al. (2000) 30 Myr
isochrone scaled to 500 pc. One YSO candidate object appears below the 30 Myr
isochrone; 073337.6-464246 is within the distribution of clear non-YSO points.
We do not remove this object from our list, since a variety of reasons (such
as scattered light) could result in a YSO appearing below the 30 Myr isochrone;
for more discussion of this object, please see Appendix A.19. As noted above,
the distance to this association is uncertain; if the isochrones are instead
scaled to 300 pc, then one other object (073121.8-465745, a previously
identified YSO) appears to be just below the 30 Myr isochrone instead of just
above it.
Deeper optical data are desirable in order to obtain magnitude estimates for
the remaining YSO candidates.
### 4.2 Near-IR properties
Figure 12: $J-H$ vs. $H-K_{s}$ diagram for the sample, with the same notation
as earlier figures. The main sequence is indicated by a solid line. Most of
the YSO candidates have an infrared excess starting at H-band with moderate
reddening.
Near-IR data can also aid in confirming or refuting YSO candidacy. Since we do
not have spectral types for most of our sources, it is difficult to estimate
the degree of reddening. Figure 12 shows $J-H$ vs. $H-K_{s}$ for the sample.
This plot suggests that most of our YSO candidates have an infrared excess
with a moderate degree of reddening.
### 4.3 $B$-band properties
Figure 13: $B-V$ vs. $R_{c}-I_{c}$ for the sample, with the same notation as
earlier figures. The main sequence is indicated by a solid line. Stars with a
$B$-band excess and relatively small values of $A_{V}$ would be blue, i.e.,
below the line in this figure. Objects above the line on the upper right of
this figure are pushed into that location by high $A_{V}$. At least 4 and
probably 8 of the YSO candidates have a $B$ excess, most likely from mass
accretion.
Young stars that are actively accreting from their circumstellar disks can show excess (ultraviolet) emission at the $U$ or $B$ bands, or even at longer bands during periods of intense accretion. However, these bands are also the most sensitive to
reddening. Figure 13 shows $B-V$ vs. $R_{c}-I_{c}$ for the sample, with the
main sequence indicated as a solid line. Stars with a $B$-band excess and
relatively small values of $A_{V}$ would be blue, i.e., below the line in this
figure. Objects above the line on the upper right of this figure are pushed
into that location by high $A_{V}$. At least 4 of the YSO candidates have a
$B$ excess, most likely from mass accretion; four more appear to have been
pushed from the region of clear $B$ excess by high $A_{V}$. Similar results
for the same objects are obtained from the $B-V$ vs. $V-I_{c}$ plot. The
individual objects are listed in the Appendix.
### 4.4 Spectral Energy Distributions
Table 2: Multiband measurements of Spitzer-identified YSO candidates in the CG4+Sa101 region
name | $B$ (mag) | $V$ (mag) | $R_{c}$ (mag) | $I_{c}$ (mag) | $J$ (mag) | $H$ (mag) | $K_{s}$ (mag) | [3.6] (mag) | [4.5] (mag) | [5.8] (mag) | [8.0] (mag) | [24] (mag) | [70] (mag)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
073049.1-470209 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 18.15$\pm$ 0.28 | 16.51$\pm$ 0.17 | 14.54$\pm$ 0.11 | 12.57$\pm$ 0.08 | 8.37$\pm$ 0.05 | $>$ 3.40
073049.8-465806 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | $>$ 17.32 | $>$ 15.61 | 15.19$\pm$ 0.17 | 14.26$\pm$ 0.08 | 13.69$\pm$ 0.14 | 12.39$\pm$ 0.08 | 11.05$\pm$ 0.07 | 7.11$\pm$ 0.05 | $>$ 2.87
073053.6-465742 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 16.89$\pm$ 0.18 | 15.70$\pm$ 0.10 | 14.87$\pm$ 0.11 | 14.18$\pm$ 0.08 | 13.87$\pm$ 0.09 | 13.53$\pm$ 0.09 | 13.04$\pm$ 0.09 | 9.53$\pm$ 0.09 | $>$ 5.45
073057.5-465611 | 19.61$\pm$ 0.06 | 18.27$\pm$ 0.06 | 16.72$\pm$ 0.06 | 15.13$\pm$ 0.06 | 12.86$\pm$ 0.02 | 11.93$\pm$ 0.02 | 11.40$\pm$ 0.02 | 10.86$\pm$ 0.07 | 10.43$\pm$ 0.07 | 10.04$\pm$ 0.07 | 9.26$\pm$ 0.07 | 6.36$\pm$ 0.05 | 3.05$\pm$ 0.22
073106.5-465454 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 16.72$\pm$ 0.14 | 14.90$\pm$ 0.05 | 13.65$\pm$ 0.04 | 11.28$\pm$ 0.07 | 10.63$\pm$ 0.07 | 10.00$\pm$ 0.07 | 9.09$\pm$ 0.07 | 6.45$\pm$ 0.04 | $>$ 2.31
073108.4-470130 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 15.21$\pm$ 0.04 | 14.39$\pm$ 0.03 | 13.83$\pm$ 0.05 | 13.21$\pm$ 0.08 | 12.88$\pm$ 0.08 | 12.50$\pm$ 0.08 | 11.71$\pm$ 0.08 | 8.68$\pm$ 0.05 | $>$ 3.21
073109.9-465750 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 16.55$\pm$ 0.12 | 15.71$\pm$ 0.13 | 15.09$\pm$ 0.15 | 14.35$\pm$ 0.09 | 13.91$\pm$ 0.09 | 13.52$\pm$ 0.09 | 12.82$\pm$ 0.08 | 9.70$\pm$ 0.08 | $>$ 3.57
073110.8-470032 | 17.12$\pm$ 0.03 | 15.49$\pm$ 0.03 | 14.41$\pm$ 0.03 | 13.34$\pm$ 0.03 | 11.20$\pm$ 0.02 | 10.22$\pm$ 0.02 | 9.58$\pm$ 0.02 | 8.64$\pm$ 0.05 | 8.22$\pm$ 0.05 | 7.88$\pm$ 0.07 | 7.38$\pm$ 0.05 | 4.46$\pm$ 0.04 | 2.01$\pm$ 0.22
073114.6-465842 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 13.56$\pm$ 0.03 | 11.75$\pm$ 0.03 | 10.88$\pm$ 0.02 | 10.07$\pm$ 0.08 | 9.63$\pm$ 0.07 | 8.93$\pm$ 0.07 | 7.88$\pm$ 0.07 | 4.24$\pm$ 0.04 | 1.21$\pm$ 0.22
073114.9-470055 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 15.53$\pm$ 0.06 | 14.95$\pm$ 0.05 | 14.43$\pm$ 0.09 | 13.84$\pm$ 0.08 | 13.54$\pm$ 0.08 | 13.20$\pm$ 0.08 | 12.55$\pm$ 0.08 | 9.82$\pm$ 0.09 | $>$ 5.41
073121.8-465745 | 17.49$\pm$ 0.05 | 16.18$\pm$ 0.05 | 15.51$\pm$ 0.05 | 14.43$\pm$ 0.05 | 11.42$\pm$ 0.03 | 10.67$\pm$ 0.2$^{a}$ | 10.24$\pm$ 0.2$^{a}$ | 9.26$\pm$ 0.07 | 8.84$\pm$ 0.07 | 8.73$\pm$ 0.08 | 8.18$\pm$ 0.07 | 5.97$\pm$ 0.04 | $>$ 2.53
073136.6-470013 | 17.65$\pm$ 0.04 | 16.07$\pm$ 0.04 | 14.97$\pm$ 0.04 | 13.94$\pm$ 0.04 | 11.96$\pm$ 0.02 | 10.82$\pm$ 0.03 | 10.02$\pm$ 0.02 | 9.07$\pm$ 0.08 | 8.81$\pm$ 0.05 | 8.53$\pm$ 0.05 | 8.19$\pm$ 0.05 | 5.44$\pm$ 0.04 | 2.51$\pm$ 0.22
073137.4-470021 | 15.84$\pm$ 0.04 | 14.19$\pm$ 0.04 | 13.21$\pm$ 0.04 | 12.12$\pm$ 0.04 | 10.45$\pm$ 0.02 | 9.53$\pm$ 0.02 | 9.11$\pm$ 0.02 | 8.50$\pm$ 0.08 | 8.21$\pm$ 0.05 | 7.98$\pm$ 0.05 | 7.19$\pm$ 0.05 | 4.23$\pm$ 0.04 | 1.74$\pm$ 0.22
073143.8-465818 | 19.25$\pm$ 0.05 | 18.11$\pm$ 0.05 | 16.70$\pm$ 0.05 | 15.19$\pm$ 0.05 | 13.40$\pm$ 0.03 | 12.57$\pm$ 0.02 | 12.06$\pm$ 0.02 | 11.33$\pm$ 0.05 | 10.65$\pm$ 0.05 | 9.98$\pm$ 0.05 | 8.77$\pm$ 0.05 | 5.55$\pm$ 0.04 | $>$ 2.53
073144.1-470008 | 18.73$\pm$ 0.05 | 17.67$\pm$ 0.05 | 17.36$\pm$ 0.05 | 15.05$\pm$ 0.05 | 13.39$\pm$ 0.05 | 12.48$\pm$ 0.07 | 11.88$\pm$ 0.03 | 11.02$\pm$ 0.05 | 10.59$\pm$ 0.05 | 10.21$\pm$ 0.05 | 9.50$\pm$ 0.05 | 6.86$\pm$ 0.04 | $>$ 1.68
073145.6-465917 | 18.95$\pm$ 0.05 | 17.28$\pm$ 0.05 | 16.04$\pm$ 0.05 | 14.60$\pm$ 0.05 | 12.96$\pm$ 0.02 | 12.04$\pm$ 0.02 | 11.71$\pm$ 0.02 | 11.44$\pm$ 0.05 | 11.27$\pm$ 0.05 | 11.02$\pm$ 0.05 | 10.21$\pm$ 0.05 | 7.47$\pm$ 0.05 | $>$ 4.40
073326.8-464842 | 15.24$\pm$ 0.05 | 14.11$\pm$ 0.05 | 13.35$\pm$ 0.05 | 12.64$\pm$ 0.05 | 11.49$\pm$ 0.02 | 10.74$\pm$ 0.02 | 10.35$\pm$ 0.02 | 9.91$\pm$ 0.07 | 9.70$\pm$ 0.07 | 9.43$\pm$ 0.07 | 8.56$\pm$ 0.07 | 5.14$\pm$ 0.04 | 1.83$\pm$ 0.22
073337.0-465455 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 15.65$\pm$ 0.09 | 15.15$\pm$ 0.11 | 14.87$\pm$ 0.13 | 14.04$\pm$ 0.08 | 13.74$\pm$ 0.08 | 13.39$\pm$ 0.09 | 12.79$\pm$ 0.08 | 10.64$\pm$ 0.09 | $>$ 3.60
073337.6-464246 | 18.83$\pm$ 0.02 | 17.58$\pm$ 0.02 | 16.85$\pm$ 0.02 | 16.24$\pm$ 0.02 | 15.43$\pm$ 0.07 | 14.85$\pm$ 0.10 | 14.62$\pm$ 0.11 | 14.19$\pm$ 0.08 | 14.14$\pm$ 0.09 | 13.75$\pm$ 0.09 | 13.44$\pm$ 0.11 | 9.99$\pm$ 0.12 | $>$ 6.64
073406.9-465805 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 14.97$\pm$ 0.05 | 14.24$\pm$ 0.05 | 13.73$\pm$ 0.04 | 13.27$\pm$ 0.08 | 12.99$\pm$ 0.08 | 12.67$\pm$ 0.08 | 11.88$\pm$ 0.08 | 8.63$\pm$ 0.04 | $>$ 1.68
073425.3-465409 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 13.44$\pm$ 0.05$^{b}$ | 12.51$\pm$ 0.05$^{b}$ | 12.06$\pm$ 0.06$^{b}$ | 11.73$\pm$ 0.07 | 11.44$\pm$ 0.08 | 10.94$\pm$ 0.07 | 9.82$\pm$ 0.07 | 3.54$\pm$ 0.04 | $\ldots$
073439.9-465548 | $\ldots$ | $\ldots$ | $\ldots$ | $\ldots$ | 15.98$\pm$ 0.12 | 15.03$\pm$ 0.09 | 14.19$\pm$ 0.07 | 12.99$\pm$ 0.08 | 12.33$\pm$ 0.08 | 11.72$\pm$ 0.08 | 10.90$\pm$ 0.07 | $\ldots$ | $\ldots$
$^{a}$Values appear in the 2MASS Point Source Catalog with a photometric quality (ph_qual) flag of ‘E’, denoting that the goodness-of-fit quality was very poor or that the photometry fit did not converge. We took the values as reported, with a large error bar, as additional constraints on the source.
$^{b}$2MASS $JHK_{s}$ photometry comes from the 2MASS extended source catalog, not the point source catalog; see discussion in Sections 2.4 and A.21.
Table 3: Final list of YSO candidates in the CG4+Sa101 region
name | syn. | Sp.Ty. | class | quality$^{a}$ | region | notes
---|---|---|---|---|---|---
073049.1-470209 | | | I | C | Sa101 | reddest and faintest in [3.6] vs. [3.6]$-$[24]
073049.8-465806 | | | I | B | Sa101 |
073053.6-465742 | | | II | A | Sa101 | small excess at 8 $\mu$m, most of the excess at 24 $\mu$m
073057.5-465611 | CG-Ha2 | M2: | II | A+ | Sa101 | apparently lowest mass object in this list
073106.5-465454 | | | flat | C | Sa101 | somewhat discontinuous SED
073108.4-470130 | | | II | A | Sa101 |
073109.9-465750 | | | II | A | Sa101 |
073110.8-470032 | CG-Ha3 | K7 | II | A+ | Sa101 |
073114.6-465842 | | | flat | A | Sa101 | high $A_{V}$ likely
073114.9-470055 | | | II | A | Sa101 |
073121.8-465745 | CG-Ha4 | K7-M0 | II | A+ | Sa101 |
073136.6-470013 | CG-Ha5 | K2-5 | II | A+ | Sa101 |
073137.4-470021 | CG-Ha6 | K7 | II | A+ | Sa101 |
073143.8-465818 | | | flat | A | Sa101 |
073144.1-470008 | | | II | A | Sa101 |
073145.6-465917 | | | II | A | Sa101 |
073326.8-464842 | CG-Ha7 | K5 | II | A+ | CG4 | inner disk hole?
073337.0-465455 | | | II | B | CG4 |
073337.6-464246 | | | II | C | CG4 | very low in optical color-mag diagram, near edges of maps
073406.9-465805 | | | II | A | CG4 |
073425.3-465409 | | | I | A | CG4 | extended in optical, NIR
073439.9-465548 | | | II | C | CG4 | sparse SED
$^{a}$This grade indicates rough confidence in the likelihood that the given YSO candidate is a legitimate YSO. Previously identified YSOs are given a grade of ‘A+’, our highest-quality YSO candidates are grade ‘A’, our mid-grade YSO candidates are grade ‘B’, and our lowest-confidence YSO candidates are grade ‘C’. For discussion of individual objects (and an explanation of why each object has that grade), please see the Appendix.
Figure 14: Spectral Energy Distributions (SEDs) for the Spitzer-selected YSOs
presented here. $+$ symbols are optical data, diamonds are 2MASS (NIR) data,
circles are IRAC data, and squares are MIPS data. If a previous source ID
exists in Reipurth & Pettersson (1993), it is in the upper right of the plot,
along with their spectral type (if it exists). Similarly, if there are
photometric data from Reipurth & Pettersson (1993), they are marked with light
grey dots. The error bars (most frequently far smaller than the size of the
symbol) are indicated at the center of the symbol. Reddened models from the
Kurucz-Lejeune model grid (Lejeune et al. 1997, 1998) are shown for reference;
see the text. Units of $\lambda F_{\lambda}$ as presented are erg s$^{-1}$ cm$^{-2}$,
and $\lambda$ is in microns. Figure 15: SEDs, continued. Notation as in
previous figure. Figure 16: SEDs, continued. Notation as in previous figure.
Coordinates and our measured magnitudes between $B$ and 70 $\mu$m for our 22
YSO candidates appear in Table 2. Six of them (27%) are rediscoveries of the
previously known YSOs in this region from Table 1. Figures 14–16 are the SEDs
for the YSO candidates.
To guide the eye, we wished to add reddened stellar models to the SEDs, but
spectral types are only known for the 6 previously known YSOs. In order to
provide a reference, for the remaining objects, we assumed a spectral type of
M0. For each object, a reddened model is shown, selected from the Kurucz-
Lejeune model grid (Lejeune et al. 1997, 1998) and normalized to $K_{s}$ band
where possible (and to the closest band otherwise). Note that this is not
meant to be a robust fit to the object, but rather a representative stellar
SED to guide the eye such that the infrared excesses are immediately apparent.
In some cases, ultraviolet excesses may also be present. Additional
spectroscopic observations are needed to better constrain these fits.
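The normalization step can be sketched as follows. Reddening the Kurucz-Lejeune model and choosing the spectral type are outside the scope of this illustration, and the function and argument names are assumptions, not the actual fitting code.

```python
import numpy as np

def normalize_model_to_band(model_wave_um, model_flux, obs_wave_um, obs_flux, band_um=2.16):
    """Scale a model SED so that it matches the observed SED at (or nearest to) K_s.

    model_wave_um, model_flux : model wavelengths (micron) and lambda*F_lambda values
    obs_wave_um, obs_flux     : observed wavelengths and lambda*F_lambda values
    band_um                   : preferred normalization wavelength (K_s is ~2.16 micron)
    """
    obs_wave_um = np.asarray(obs_wave_um, dtype=float)
    i = int(np.argmin(np.abs(obs_wave_um - band_um)))   # K_s where possible, else nearest band
    model_at_band = np.interp(obs_wave_um[i], model_wave_um, model_flux)
    return np.asarray(model_flux) * (obs_flux[i] / model_at_band)

# Invented model and photometry: scale the model so it passes through the K_s point.
scaled = normalize_model_to_band([0.5, 1.2, 2.16, 3.6, 8.0], [1.0, 2.0, 1.5, 0.8, 0.2],
                                 [1.25, 2.16, 3.6], [3.1e-11, 2.4e-11, 1.5e-11])
```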
In the spirit of Wilking et al. (2001), we define the near- to mid-IR slope of
the SED, $\alpha=d\log\lambda F_{\lambda}/d\log\lambda$, where $\alpha>0.3$
for a Class I, 0.3 to $-$0.3 for a flat-spectrum source, $-$0.3 to $-$1.6 for
a Class II, and $<-$1.6 for a Class III. For each of the YSO candidate objects
in our sample, we performed a simple ordinary least squares linear fit to all
available photometry (just detections, not including upper or lower limits)
between 2 and 24 $\mu$m, inclusive. Note that errors on the infrared points
are so small as to not affect the fitted SED slope. The precise definition of
$\alpha$ can vary, resulting in different classifications for certain objects.
Classification via this method is provided specifically to enable comparison
within this paper via internally consistent means. Note that the formal
classification puts no lower limit on the colors of Class III objects (thereby
including those with SEDs resembling bare stellar photospheres, and allowing
for other criteria to define youth). By searching for IR excesses, we are
incomplete in our sample of Class III objects. The classes for the YSO
(previously known and candidate) sample appear in Table 3. Out of the 22
stars, 16 (73%) are Class II.
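For concreteness, the slope fit and classification described above can be sketched as follows, using the class boundaries given in the text; the example photometry is invented.

```python
import numpy as np

def sed_class(wave_um, lam_flam):
    """Classify a YSO from the 2-24 micron SED slope, as defined in the text.

    wave_um  : wavelengths in microns (detections only, between 2 and 24 micron)
    lam_flam : corresponding lambda*F_lambda values (e.g., erg s^-1 cm^-2)
    Returns (alpha, class_name), with alpha = d log(lambda F_lambda) / d log(lambda).
    """
    x = np.log10(np.asarray(wave_um, dtype=float))
    y = np.log10(np.asarray(lam_flam, dtype=float))
    alpha = np.polyfit(x, y, 1)[0]          # ordinary least-squares linear fit
    if alpha > 0.3:
        name = "I"
    elif alpha > -0.3:
        name = "flat"
    elif alpha > -1.6:
        name = "II"
    else:
        name = "III"
    return alpha, name

# Illustrative photometry (microns, lambda*F_lambda) for a disk-bearing source.
print(sed_class([2.2, 3.6, 4.5, 5.8, 8.0, 24.0],
                [3e-11, 2e-11, 1.6e-11, 1.3e-11, 1.0e-11, 4e-12]))
```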
Based on the SEDs and location in several color-color and color-magnitude
diagrams, we have ranked the YSO candidates loosely into three bins: high
likelihood of being YSOs (grade A), mid-grade quality (grade B), and
relatively low likelihood of being YSOs (grade C). This grade also appears in
Table 3. Most of them (16) are in the grade A bin, which includes the 6
previously identified ones. Comments on individual objects (including
justifications for the grades that were given) appear in the Appendix.
Figure 17: Reverse greyscale mosaic of 8 $\mu$m data with the locations of the
YSOs (previously known and new candidates) indicated, color-coded by YSO
quality grade. Previously known YSOs are indicated by a green dot with a “Y”,
high-quality (grade ‘A’) YSOs are indicated by a red dot and an “A”, mid-
quality (grade ‘B’) YSOs are indicated by a blue dot and a “B”, and low-
quality (grade ‘C’) YSOs are indicated by a yellow dot and a “C.”
Figure 17 shows the 8 $\mu$m mosaic with the positions of the YSO candidates
overlaid, color-coded by YSO quality. Most of the grade A and B objects are
clustered near the previously-known YSOs. The Sa101 region has a relatively
tight clumping of most (16) of the YSOs, with a median nearest neighbor
distance of $\sim$62$\arcsec$. The CG4 region has 6 YSOs, much less tightly
clumped, with a median nearest neighbor distance nearly 5 times larger,
$\sim$301$\arcsec$. Clustering is very commonly found among young stars, so, especially in the case of the Sa101 association, the fact that these objects are clustered bolsters the case that they are legitimate YSOs.
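The nearest-neighbor statistic quoted here can be computed with a short sketch like the following (the coordinates in the example are invented):

```python
import math

def median_nearest_neighbor_arcsec(coords):
    """Median nearest-neighbor separation (arcsec) for a list of (RA, Dec) in degrees."""
    seps = []
    for i, (ra1, dec1) in enumerate(coords):
        nearest = min(
            3600.0 * math.hypot((ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2))),
                                dec1 - dec2)
            for j, (ra2, dec2) in enumerate(coords) if j != i
        )
        seps.append(nearest)
    seps.sort()
    n = len(seps)
    return seps[n // 2] if n % 2 else 0.5 * (seps[n // 2 - 1] + seps[n // 2])

# Toy example with three positions separated by roughly an arcminute.
pts = [(113.500, -46.950), (113.520, -46.952), (113.540, -46.949)]
print(f"{median_nearest_neighbor_arcsec(pts):.0f} arcsec")
```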
Figure 18: Optical $M_{V}$ vs. $V-I_{C}$ color-magnitude diagram. The Siess et
al. (2000) isochrones are included (1, 10, and 30 Myr), but shifted to
absolute $M_{V}$. The black stars are our YSO candidates, assuming a distance
of 500 pc, and the red stars are our YSO candidates, assuming a distance of
300 pc. The grey $\times$ symbols are Taurus YSOs (from Rebull et al. 2010 and
Güdel et al. 2007 and references therein), taken to be at 140 pc (Torres et al. 2007, 2009). The Taurus distribution is broad and there are many fewer
CG4+Sa101 stars, but this distribution weakly suggests that CG4+Sa101 is
farther rather than closer (see text).
Because the distance to this association is uncertain, we looked at whether
the relative placement of stars in the optical color-magnitude diagram could
be used to constrain the distance to the stars. Figure 18 presents the $M_{V}$
vs. $V-I$ CMD, comparing CG4+Sa101 stars to YSOs in Taurus (with data from
Rebull et al. 2010, Güdel et al. 2007, and references therein). Based on
morphological grounds (e.g., the degree to which the YSOs are still embedded
in their natal gas), we expect that the CG4+Sa101 stars might be slightly
younger than the often more physically dispersed Taurus stars. On the other
hand, based on the ratio of Class I to Class II sources, the CG4+Sa101 objects
might be slightly older than Taurus. In Figure 18, there are not many
CG4+Sa101 objects, and the distribution is broad, but assuming a distance of
500 pc, then CG4+Sa101 appears to be quite comparable in age to Taurus at
$\sim$3 Myr. Assuming a distance of 300 pc, the CG4+Sa101 stars are on the
whole older than the Taurus stars. Figure 18 thus weakly suggests that
CG4+Sa101 is farther rather than closer.
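The shift of the isochrones and data to absolute magnitude in Figure 18 is simply the distance modulus; a one-line illustration for the two assumed distances (the apparent magnitude is invented, and extinction is ignored):

```python
import math

def absolute_mag(apparent_mag, distance_pc):
    """Absolute magnitude from an apparent magnitude, ignoring extinction."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# An illustrative V = 16.2 mag star: its inferred M_V is ~1.1 mag fainter
# if the distance is 300 pc rather than 500 pc.
print(absolute_mag(16.2, 500.0), absolute_mag(16.2, 300.0))
```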
## 5 The Galaxy
The galaxy in our field, ESO 257- G 019, has not previously been the target of any dedicated study. It is classified as type ‘SB(s)cd?’ in NED, the NASA
Extragalactic Database. We measure a major-axis length of 3$\farcm$3, or 34.7 kpc at the assumed distance of 36.1 Mpc, at the 3$\sigma$ level above the noise in our channel 1 (3.6 $\mu$m) image. We also measured a minor-axis length of 0$\farcm$8, or 8.4 kpc, in the same image for this highly inclined galaxy. The surface
brightness distribution is close to exponential within the central 3 kpc. The
galaxy has a clumpy structure at larger distances from its center, with the
most prominent clumps appearing about 26″ and 73″ (along the major axis) to
the northwest and 26″ to the southeast (just outside of the plane of the
galaxy). We also measured the IRAC [3.6]$-$[4.5] and [3.6]$-$[8.0] colors by
using all the pixels above the 10$\sigma$ level. The [3.6]$-$[4.5] color is
0.0 and the [3.6]$-$[8.0] color is 1.9, both close to the typical values for
late-type galaxies, as determined by Pahre et al. (2004), and consistent with
the galaxy classification as given in NED. ESO 257- G 019 appears to be a
fairly isolated galaxy, with no nearby companions within our mapped areas.
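Schematically, such a global color is obtained by summing the pixels above the surface-brightness threshold in each channel and converting the flux ratio to a Vega color. The sketch below is a minimal illustration; the zero-magnitude flux densities are left as inputs to be taken from the IRAC documentation.

```python
import math

def vega_color(sum_f1_mjy_sr, sum_f2_mjy_sr, zp1_jy, zp2_jy, pixel_sr):
    """Global [band1]-[band2] Vega color from pixel sums above a threshold.

    sum_f*_mjy_sr : sum of pixel surface brightnesses (MJy/sr) above the cut
    zp*_jy        : Vega zero-magnitude flux densities (from the IRAC documentation)
    pixel_sr      : solid angle of one pixel in steradians
    """
    f1 = sum_f1_mjy_sr * 1.0e6 * pixel_sr    # total flux density in Jy
    f2 = sum_f2_mjy_sr * 1.0e6 * pixel_sr
    m1 = -2.5 * math.log10(f1 / zp1_jy)
    m2 = -2.5 * math.log10(f2 / zp2_jy)
    return m1 - m2

# Example call (zp1_jy, zp2_jy to be taken from the IRAC Instrument Handbook):
# color_36_80 = vega_color(sum_f36, sum_f80, zp1_jy, zp2_jy, pixel_sr)
```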
## 6 Conclusions
We used Spitzer Space Telescope data from the Spitzer Heritage Archive to
search for new candidate young stars in the CG4+Sa101 region of the Gum
Nebula. This region appears to be actively forming young stars, perhaps as a
result of the passage of an ionization front from the stars powering the Gum
Nebula (Reipurth & Pettersson 1993). We rediscovered all six of the previously
identified young stars in our maps as having excesses at Spitzer bands. We
have also discovered 16 entirely new young star candidates based on their
Spitzer properties. We used optical ground-based data and near-infrared data
from 2MASS to help constrain the SEDs of these new young star candidates. We
have sorted the new young star candidates into grades of confidence that they
are, in fact, legitimate new young stars. We find 16 high confidence (grade
“A”) objects, including the 6 previously identified YSOs, 2 mid-grade
confidence (grade “B”), and 4 low-grade confidence (grade “C”) young star
candidates. For all of the new young star candidates, though, additional data
will be needed, such as optical photometry where it is missing, and optical
spectroscopy to obtain spectral types (and rule out extragalactic
contamination). Most of the new objects are clustered in the Sa101 region, and
most are SED Class II.
This work was performed as part of the NASA/IPAC Teacher Archive Research
Program (NITARP; http://nitarp.ipac.caltech.edu), class of 2010. We
acknowledge here all of the students and other folks who contributed their
time and energy to this work and the related poster papers presented at the
January 2011 American Astronomical Society (AAS) meeting in Seattle, WA. They
include: With V. Hoette: C. Gartner, J. VanDerMolen, L. Gamble, L. Matsche, A.
McCartney, M. Doering, S. Brown, R. Wernis, J. Wirth, M. Berthoud. With C.
Johnson: R. Crump, N. Killingstad, T. McCanna, S. Caruso, A. Laorr, K. Mork,
E. Steinbergs, E. Wigley. With C. Mallory: N. Mahmud. We thank J. R. Stauffer
for helpful comments on the manuscript. This research has made use of NASA’s
Astrophysics Data System (ADS) Abstract Service, and of the SIMBAD database,
operated at CDS, Strasbourg, France. This research has made use of data
products from the Two Micron All-Sky Survey (2MASS), which is a joint project
of the University of Massachusetts and the Infrared Processing and Analysis
Center, funded by the National Aeronautics and Space Administration and the
National Science Foundation. These data were served by the NASA/IPAC Infrared
Science Archive, which is operated by the Jet Propulsion Laboratory,
California Institute of Technology, under contract with the National
Aeronautics and Space Administration. This research has made use of the
Digitized Sky Surveys, which were produced at the Space Telescope Science
Institute under U.S. Government grant NAG W-2166. The images of these surveys
are based on photographic data obtained using the Oschin Schmidt Telescope on
Palomar Mountain and the UK Schmidt Telescope. The plates were processed into
the present compressed digital form with the permission of these institutions.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which
is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration. The research described in this paper was partially carried out
at the Jet Propulsion Laboratory, California Institute of Technology, under
contract with the National Aeronautics and Space Administration.
## References
* Allen et al. (2004) Allen, L. E., Calvet, N., D’Alessio, P., Merin, B., Hartmann, L., Megeath, S. T., Gutermuth, R., Muzerolle, J., Pipher, J., Myers, P., Fazio, G. 2004, ApJS, 154, 363
* Bok & Reilly (1947) Bok, B. J., & Reilly, E. F., 1947, ApJ, 105, 255
* Fazio et al. (2004) Fazio, G., Hora, J. L., Allen, L. E., Ashby, M. L. N., Barmby, P., Deutsch, L. K., Huang, J.-S., Kleiner, S., Marengo, M., Megeath, S. T., Melnick, G. J., Pahre, M. A., Patten, B. M., Polizotti, J., Smith, H. A., Taylor, R. S., Wang, Z., Willner, S. P., Hoffmann, W. F., Pipher, J. L., Forrest, W. J., McMurty, C. W., McCreight, C. R., McKelvey, M. E., McMurray, R. E., Koch, D. G., Moseley, S. H., Arendt, R. G., Mentzell, J. E., Marx, C. T., Losch, P., Mayman, P., Eichhorn, W., Krebs, D., Jhabvala, M., Gezari, D. Y., Fixsen, D. J., Flores, J., Shakoorzadeh, K., Jungo, R., Hakun, C., Workman, L., Karpati, G., Kichak, R., Whitley, R., Mann, S., Tollestrup, E. V., Eisenhardt, P., Stern, D., Gorjian, V., Bhattacharya, B., Carey, S., Nelson, B. O., Glaccum, W. J., Lacy, M., Lowrance, P. J., Laine, S., Reach, W. T., Stauffer, J. A., Surace, J. A., Wilson, G., Wright, E. L., Hoffman, A., Domingo, G., Cohen, M. 2004, ApJS, 154, 10
* Franco (1990) Franco, G. A. P., 1990, A&A, 227, 499
* Gautier et al. (2007) Gautier, T. N., Rieke, G. H., Stansberry, J., Bryden, G., Stapelfeldt, K., Werner, M., Beichman, C., Chen, C., Su, K., Trilling, D., Patten, B., Roellig, T. 2007, ApJ, 667, 527
* Guieu et al. (2009) Guieu, S., Rebull, L. M., Stauffer, J. R., Hillenbrand, L. A., Carpenter, J. M., Noriega-Crespo, A., Padgett, D. L., Cole, D. M., Carey, S. J., Stapelfeldt, K. R., Strom, S. E. 2009, ApJ, 697, 787
* Guieu et al. (2010) Guieu, S., Rebull, L. M., Stauffer, J. R., Vrba, F. J., Noriega-Crespo, A., Spuck, T., Roelofsen Moody, T., Sepulveda, B., Weehler, C., Maranto, A., Cole, D. M., Flagey, N., Laher, R., Penprase, B., Ramirez, S., Stolovy, S., 2010, ApJ, 720, 46
* Güdel et al. (2007) Güdel, M., et al., 2007, A&A, 468, 353
* Gutermuth et al. (2008) Gutermuth, R., Myers, P. C., Megeath, S. T., Allen, L. E., Pipher, J. L., Muzerolle, J., Porras, A., Winston, E., Fazio, G., 2008, ApJ, 674, 336
* Gutermuth et al. (2009) Gutermuth, R., Megeath, S. T., Myers, P. C., Allen, L. E., Pipher, J. L., Fazio, G. G, 2009, ApJS, 184, 18
* Harvey et al. (2007a) Harvey, P., Rebull, L., Brooke, T., Spiesman, W., Chapman, N., Huard, T., Evans, N., Cieza, L., Lai, S.-P., Allen, L., Mundy, L., Padgett, D., Sargent, A., Stapelfeldt, K., Myers, P., van Dishoeck, E., Blake, G., Koerner, D. 2007, ApJ, 663, 1139
* Hawarden & Brand (1976) Hawarden, T. G., & Brand, P. W. J. L., 1976, MNRAS, 175, 19
* Haikala et al. (2010) Haikala, L. K., Mäkelä, M. M., & Väisänen, P., 2010, A&A, 522, 106
* Jeffries et al. (2007) Jeffries, R., Oliveira, J., Naylor, T., Mayne, N., & Littlefair, S., 2007, MNRAS, 376, 580
* Kim et al. (2003) Kim, J. S., Walter, F., & Wolk, S., 2003, in The Future of Cool-Star Astrophysics: 12th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun (2001 July 30 - August 3), eds. A. Brown, G.M. Harper, and T.R. Ayres, (University of Colorado), 2003, p. 799-804.
* Laher et al. (2011a) Laher, R., Gorjian, V., Rebull, L., Masci, F., Kulkarni, S., Law, N., 2011a, PASP, submitted
* Laher et al. (2011b) Laher, R., et al. 2011b, PASP, submitted
* Landolt (1992) Landolt, A., 1992, AJ, 104, 340
* Lejeune et al. (1997) Lejeune, T., Cuisinier, F., & Buser, R., 1997, A&AS, 125, 229
* Lejeune et al. (1998) Lejeune, T., Cuisinier, F., & Buser, R., 1998, A&AS, 130, 65
* Makovoz & Marleau (2005) Makovoz, D., & Marleau, F. 2005, PASP, 117, 1113
* Mink (1997) Mink, D., 1997, in Astronomical Data Analysis Software and Systems VI, A.S.P. Conference Series, Vol. 125, 1997, Gareth Hunt and H. E. Payne, eds., p. 249
* Osterbrock (1957) Osterbrock, D., 1957, ApJ, 125, 622
* Pahre et al. (2004) Pahre, M. A., Ashby, M. L. N., Fazio, G. G., & Willner, S. P., 2004, ApJS, 154, 235
* Pettersson (2008) Pettersson, B., 2008, in Handbook of Star Forming Regions, Volume II: The Southern Sky ASP Monograph Publications, Vol. 5. Edited by Bo Reipurth, p. 43
* Pozzo et al. (2000) Pozzo, M., Jeffries, R., Naylor, R., Totten, E., Harmer, S., Kenyon, M., 2000, MNRAS, 313, L23
* Rebull et al. (2007) Rebull, L., Stapelfeldt, K. R., Evans, N. J., II, Jørgensen, J. K., Harvey, P. M., Brooke, T. Y., Bourke, T. L., Padgett, D. L., Chapman, N. L., Lai, S.-P., Spiesman, W. J., Noriega-Crespo, A., Merín, B., Huard, T., Allen, L. E., Blake, G. A., Jarrett, T., Koerner, D. W., Mundy, L. G., Myers, P. C., Sargent, A. I., van Dishoeck, E. F., Wahhaj, Z., Young, K. E., 2007, ApJS, 171, 447
* Rebull et al. (2010) Rebull, L., Padgett, D. L., McCabe, C.-E., Hillenbrand, L. A., Stapelfeldt, K. R., Noriega-Crespo, A., Carey, S. J., Brooke, T., Huard, T., Terebey, S., Audard, M., Monin, J.-L., Fukagawa, M., Güdel, M., Knapp, G. R., Ménard, F., Allen, L. E., Angione, J. R., Baldovin-Saavedra, C., Bouvier, J., Briggs, K., Dougados, C., Evans, N. J., Flagey, N., Guieu, S., Grosso, N., Glauser, A. M., Harvey, P., Hines, D., Latter, W. B., Skinner, S. L., Strom, S., Tromp, J., Wolf, S., 2010, ApJS, 186, 259
* Rebull et al. (2011) Rebull, L., Guieu, S., Stauffer, J. R., Hillenbrand, L. A., Noriega-Crespo, A., Stapelfeldt, K. R., Carey, S. J., Carpenter, J. M., Cole, D. M., Padgett, D. L., Strom, S. E., Wolff, S. C., 2011, ApJS, 193, 25
* Reipurth (1983) Reipurth, B., 1983, A&A, 117, 183
* Reipurth & Pettersson (1993) Reipurth, B., & Pettersson, B., 1993, A&A, 267, 439
* Rieke et al. (2004) Rieke, G., Young, E. T., Engelbracht, C. W., Kelly, D. M., Low, F. J., Haller, E. E., Beeman, J. W., Gordon, K. D., Stansberry, J. A., Misselt, K. A., Cadien, J., Morrison, J. E., Rivlis, G., Latter, W. B., Noriega-Crespo, A., Padgett, D. L., Stapelfeldt, K. R., Hines, D. C., Egami, E., Muzerolle, J., Alonso-Herrero, A., Blaylock, M., Dole, H., Hinz, J. L., Le Floc’h, E., Papovich, C., Perez-Gonzalez, P. G., Smith, P. S., Su, K. Y. L., Bennett, L., Frayer, D. T., Henderson, D., Lu, N., Masci, F., Pesenson, M., Rebull, L., Rho, J., Keene, J., Stolovy, S., Wachter, S., Wheaton, W., Werner, M. W., Richards, P. L. 2004, ApJS, 154, 25
* Sandqvist (1976) Sandqvist, A., 1976, MNRAS, 177, 69
* Sandqvist (1977) Sandqvist, A., 1977, A&A, 57, 467
* Siess et al. (2000) Siess, L., Dufour, W., Forestini, M., 2000, A&A, 358, 593
* Skrutskie et al. (2006) Skrutskie, M., Cutri, R. M., Stiening, R., Weinberg, M. D., Schneider, S., Carpenter, J. M., Beichman, C., Capps, R., Chester, T., Elias, J., Huchra, J., Liebert, J., Lonsdale, C., Monet, D. G., Price, S., Seitzer, P., Jarrett, T., Kirkpatrick, J. D., Gizis, J. E., Howard, E., Evans, T., Fowler, J., Fullmer, L., Hurt, R., Light, R., Kopan, E. L., Marsh, K. A., McCallon, H. L., Tam, R., Van Dyk, S., Wheelock, S. 2006, AJ, 131, 1163
* Stauffer et al. (2007) Stauffer, J., et al., 2007, ApJS, 172, 663
* Torres et al. (2007) Torres, R. M., Loinard, L., Mioduszewski, A. J., & Rodríguez, L. F. 2007, ApJ, 671, 1813
* Torres et al. (2009) Torres, R. M., Loinard, L., Mioduszewski, A. J., & Rodríguez, L. F. 2009, ApJ, 698, 242
* Werner et al. (2004) Werner, M., Roellig, T. L., Low, F. J., Rieke, G. H., Rieke, M., Hoffmann, W. F., Young, E., Houck, J. R., Brandl, B., Fazio, G. G., Hora, J. L., Gehrz, R. D., Helou, G., Soifer, B. T., Stauffer, J., Keene, J., Eisenhardt, P., Gallagher, D., Gautier, T. N., Irace, W., Lawrence, C. R., Simmons, L., Van Cleve, J. E., Jura, M., Wright, E. L., Cruikshank, D. P., 2004, ApJS, 154, 1
* Wilking et al. (2001) Wilking, B., Bontemps, S., Schuler, R., Greene, T., Andre, P., 2001, ApJ, 551, 357
## Appendix A Comments on individual objects
### A.1 073049.1-470209
This object, 073049.1-470209, the westernmost in the set, is also the reddest
source in the set. It is the YSO candidate farthest in the upper right of the
IRAC color-color diagram (Fig. 8) and is the reddest and faintest YSO in [3.6]
vs. [3.6]$-$[24] (Fig. 9). Based on its SED (Fig. 14), it is not surprising
that there are no optical or even 2MASS counterparts; it very steeply falls
from 24 $\mu$m back to 3.6 $\mu$m, and it seems that very deep integrations
would be needed to obtain measurements of this object at shorter wavelength
bands. It is not detected at 70 $\mu$m, but it is in a high background region,
so the 70 $\mu$m limit is not very constraining. This type of SED can be found
in very young, very embedded, very low mass objects, but also in extragalactic
contaminants. Because of its proximity to the rest of the Sa101 objects, it
remains in the list, but we have given it the lowest-quality grade (C). Its
steep SED means that it is one of three Class I objects in our list.
Additional follow-up data are needed to determine the true nature of this
source.
### A.2 073049.8-465806
This object, 073049.8-465806, is relatively faint at 3.6 $\mu$m, at least in
comparison to the rest of the YSO candidates here. It appears on the top edge
of the clump of likely extragalactic sources in the [3.6] vs. [3.6]$-$[24]
plot (Fig. 9). It is only detected at $K_{s}$ in 2MASS, with limits at $J$ and
$H$. Based on the SED, it looks quite extinguished, i.e., there seems to be
high $A_{V}$ in the direction of this source. It is located in close
(projected) proximity to other YSOs and YSO candidates; YSOs are also
frequently found near other YSOs. It is not detected at 70 $\mu$m, but the
limit is shallow enough that it does not provide a strong constraint on the
SED. It has an SED characteristic of YSOs; that, together with the apparently high $A_{V}$ and its location in the cloud close to other YSOs, has yielded a “B” grade. The steep rise of the SED from 4.5 to 24 $\mu$m influences the
classification slope fitting such that it is one of three Class I objects in
our list. Additional follow-up data are needed.
### A.3 073053.6-465742
Object 073053.6-465742 has 2MASS data, but no available optical data, and just
a limit at 70 $\mu$m. The SED (Fig. 14) suggests that there is high $A_{V}$
towards this source. Given our overly simple modelling, there seems to be a
small but significant excess at 8 $\mu$m (better modelling is required to
confirm this), and there is an apparently large excess at 24 $\mu$m. Whenever
an apparent excess is seen only at 24 $\mu$m, because the MIPS-24 camera has
lower spatial resolution than the shorter bands, there is a risk that the flux
density measured at 24 $\mu$m is contaminated by source confusion, most likely
with a nearby background source, but it could also be with a low mass
companion to the young star candidate. There is only a limit at 70 $\mu$m to
help constrain the SED, though it is located in close (projected) proximity to
other YSOs and YSO candidates. In this case, the fact that the excess appears
to be significant at at least 2 bands, its proximity to other nearby Sa101
sources, and the likelihood that there is high $A_{V}$ towards this source
suggests that it may be a legitimate YSO, so we have placed it in the highest
quality source bin (grade “A”), and it is a Class II. Spectroscopy of this
target is required to determine whether or not it is a YSO.
### A.4 073057.5-465611=CG-Ha2
CG-H$\alpha$2 was identified in Reipurth & Pettersson (1993) as a YSO. It has
counterparts at all bands we considered here, including 70 $\mu$m. In the
optical (Fig. 11), it is in the locus of young stars above the 30 Myr
isochrone, and it appears there to be the lowest-mass object of the YSO
candidates with optical data. The spectral type given in Reipurth & Pettersson
(1993) is M2:, so that is the model we have used in Figure 14. It also is one
of the stars with a clear $B$ excess in Fig. 13, which can also be noted in
the SED itself. Such an excess is a characteristic of active accretion.
Because it is a previously identified YSO, it appears as grade “A+” in Table
3; it is a Class II.
### A.5 073106.5-465454
073106.5-465454 is the northernmost YSO candidate within the Sa101
association. It has a 2MASS counterpart, but the SED (Fig. 14) seems somewhat
“disjoint” between the 2MASS and IRAC portions. This could be from very high
extinction, or intrinsic stellar variability between the epochs of observation
at the NIR and MIR, or source confusion. It is detected at 24 $\mu$m but there
is only a limit at 70 $\mu$m. We categorize it as a “C”-grade YSO candidate,
with a “flat” SED class. Additional data are needed to confirm or refute the
YSO status of this object.
### A.6 073108.4-470130
This object, 073108.4-470130, has an SED (Fig. 14) quite consistent with other
known YSOs, although there are no optical data and just a limit at 70 $\mu$m.
If it is really a young star, it has a significant excess in at least 8 and 24
$\mu$m, and probably 5.8 $\mu$m as well. We categorize it as a very high
quality YSO candidate (“A”), SED Class II.
### A.7 073109.9-465750
Like the prior object, 073109.9-465750 has an SED (Fig. 14) quite consistent
with other known YSOs, again without optical data. There is a limit at 70
$\mu$m, and it is in a high background region, so there is little constraint
to the SED. Like the prior object (073108.4-470130), if it is really a young
star, it has a significant excess in at least 2 bands. We categorize it as a
very high quality YSO candidate (“A”), SED Class II.
### A.8 073110.8-470032=CG-Ha3
CG-H$\alpha$3 is another YSO identified in Reipurth & Pettersson (1993), and
has some photometry reported there as well. It is detected in all bands
considered here, $B$ through 70 $\mu$m. In the optical (Fig. 11), it is well
within the clump of YSO candidates above the 30 Myr isochrone. As seen in its
SED (Fig. 14), the optical and NIR data from Reipurth & Pettersson (1993) are
quite consistent with the optical and NIR data we report here, with some weak
evidence for variability in the optical. Reipurth & Pettersson (1993) report a
type of K7, and that is the model we have used in Fig. 14. Like CG-H$\alpha$2,
the SED suggests that there may be some $B$-band excess (most likely due to
accretion); in Fig. 13, it appears as one of the objects apparently pushed
above the main sequence due to reddening. Because it is a previously
identified YSO, it appears as grade “A+” in Table 3; it is a Class II.
### A.9 073114.6-465842
This object can be seen at all four IRAC bands and both of the MIPS bands
considered here. It also has a 2MASS counterpart, but no optical data. The SED
(Fig. 14) suggests that there is relatively high $A_{V}$ towards this object.
Of the new YSO candidates that appear to have some photometric points on the
photosphere, this object appears to have the highest $A_{V}$. If this object
is actually a young star, it has an IR excess in four bands. We categorize
this as another highest-quality (grade “A”) YSO candidate, with SED Class
“flat.” Additional data are needed, including optical photometry.
### A.10 073114.9-470055
Another new YSO candidate, 073114.9-470055 has a 2MASS counterpart but no
optical data and just a limit at 70 $\mu$m (Fig. 15). If it is a legitimate
young star, it has an IR excess in more than two bands. We categorize it as a
grade “A” YSO candidate, SED Class II. Additional data are needed, including
optical photometry.
### A.11 073121.8-465745=CG-Ha4
CG-H$\alpha$4 is another YSO from Reipurth & Pettersson (1993), and has some
photometry reported there as well as a spectral type of K7–M0. As discussed
above, this object has a high-quality $J$ measurement in the 2MASS point
source catalog, but the $HK_{s}$ measurements were flagged as having low-
quality photometry. We accepted the low-quality measurements with a large
uncertainty and added them to the SED in Fig. 15. Given the normalization
as seen there, there could be a disk excess beginning as early as 3.6 $\mu$m,
but with a better measurement at the near-IR and thus a better constraint on
the location of the photosphere, the disk excess could start at a longer
wavelength, and we suspect that it probably does. The optical data we report,
as compared with the optical data from Reipurth & Pettersson (1993), suggest
substantial intrinsic source variability, a common characteristic of young
stars. In the optical color-magnitude diagram in Figure 11, it is apparently
the oldest YSO of the set of YSO candidates above the 30 Myr isochrone. This
object’s apparent intrinsic variability could move it around in this diagram;
moreover, the uncertain distance to the CG4+Sa101 cloud moves this object
above or below the 30 Myr isochrone (as discussed above). This is the only
previously-known YSO in our set that is not detected at 70 $\mu$m. It appears
in Fig. 13 as having one of the smallest $B$-band excesses, but the photometry
as seen in Fig. 15 suggests that perhaps its placement in Fig. 13 could be
improved. Because it is a previously identified YSO, it appears as grade “A+”
in Table 3; it is a Class II.
### A.12 073136.6-470013=CG-Ha5
CG-H$\alpha$5 is reported as a K2–K5 in Reipurth & Pettersson (1993). It is
detected at all available bands we discuss here. It appears in Figure 13 as
having a $B$-band excess and subject to high $A_{V}$. Because it is a
previously identified YSO, it appears as grade “A+” in Table 3; it is a Class
II.
### A.13 073137.4-470021=CG-Ha6
CG-H$\alpha$6, located very close to CG-H$\alpha$5, is reported as a K7 in
Reipurth & Pettersson (1993). It is detected at all available bands we discuss
here, $B$ through 70 $\mu$m. The infrared excess appears to start around 8
$\mu$m, suggesting a possible inner disk hole; more detailed modelling is
needed to be sure. It appears in Figure 13 as having a $B$-band excess and
subject to high $A_{V}$. It is another “A+”, Class II, in Table 3.
### A.14 073143.8-465818
This object, 073143.8-465818, is detected at optical through 24 $\mu$m. It is
a high-quality YSO candidate (grade “A”), and SED class “flat.” In the optical
(Fig. 11), it is the lowest mass YSO candidate without a prior identification.
It has a clear $B$ excess in Fig. 13. It is not detected at 70 $\mu$m.
Spectroscopy of this object is required to confirm/refute its YSO status and
obtain an initial guess at its mass.
### A.15 073144.1-470008
073144.1-470008 is another high-quality YSO candidate (grade “A”, Class II),
detected in the optical through 24 $\mu$m. It too has a clear $B$ excess in
Fig. 13. It is undetected at 70 $\mu$m.
### A.16 073145.6-465917
073145.6-465917 is the last of the YSO candidates in this list associated with
Sa101. It is a high-quality (grade “A”) YSO candidate, detected at optical
through 24 $\mu$m, but with only a limit at 70 $\mu$m. Based on the
approximate SED fit in Fig. 15, the disk excess starts at 8 $\mu$m, though
additional modelling (and a spectral type) are required to be sure. In Fig.
13, it appears as having a $B$-band excess and subject to $A_{V}$. It has an
SED Class of II.
### A.17 073326.8-464842=CG-Ha7
CG-H$\alpha$7 is the only previously known YSO in the CG4 region covered by
our maps. Reipurth & Pettersson (1993) report a K5 spectral type; their
optical data are quite consistent with the optical data we report. We detect
this object at all available bands reported here, $B$ through 70 $\mu$m. In
Fig. 11, it is the highest apparent mass YSO candidate, though reddening can
strongly influence this placement. The disk excess (Fig. 15) begins at the
longer IRAC bands, suggesting a possible inner disk hole. It does not appear
to have a $B$-band excess in Fig. 13. It is located far from any nebulosity
(Fig. 17), as noted by Reipurth & Pettersson (1993). It is an SED Class II.
### A.18 073337.0-465455
Object 073337.0-465455 has counterparts at $J$ through 24 $\mu$m; there are no
optical data available. Its placement in the [3.6] vs. [3.6]$-$[24] diagram
(Fig. 9) suggests that it is at the edge of our detection limits; it appears
at [3.6]$-$[24]$\sim$3.4, [3.6]$\sim$14. It is undetected at 70 $\mu$m. If it
is a legitimate YSO, it has an excess in at least 3 bands. It is located just
off the northern edge of the globule. We place it in the “Grade B” bin, and it
is SED Class II. Additional follow-up data are needed.
### A.19 073337.6-464246
This object, 073337.6-464246, is detected at optical through 24 $\mu$m, but
not at 70 $\mu$m. It is the candidate seen in Fig. 11 as very low in the
optical color-magnitude diagram, and closest to the IRAC photospheric locus of
points in Fig. 8. There is a marginal excess seen at 5.8 and 8 $\mu$m, and
then a larger excess at 24 $\mu$m; as discussed above, excesses seen only (or
primarily) in a single point at 24 $\mu$m can be a result of source confusion
with adjacent sources. This source is also located very far from any
nebulosity, near the edges of some of our IRAC maps (Fig. 17). We place this
marginal candidate as a grade “C”; it is an SED Class II. Additional data are
needed.
### A.20 073406.9-465805
073406.9-465805 is the only YSO candidate of our set located projected onto
the globule/elephant trunk. It appears to also be projected onto a bright rim
at 8 $\mu$m (Fig. 17), so it is potentially being revealed now by the action of
the ionization front. There are no optical or 70 $\mu$m counterparts, but
there are counterparts at $J$ through 24 $\mu$m. If it is a young star, there
is an IR excess at more than 2 bands. We place it as a Grade “A,” Class II
object, and additional data are needed.
### A.21 073425.3-465409
073425.3-465409 is a very interesting source. In the [3.6] vs. [3.6]$-$[24]
diagram (Fig. 9), it is the brightest, reddest object ([3.6]$\sim$11.7,
[3.6]$-$[24]$\sim$8.2). And, it is located right on the “lip” of the globule
(Fig. 17), in a region where YSOs might be expected to form. None of the other
“fingers” of molecular cloud have apparent associations with infrared objects.
After the automatic bandmerging described in §2 above, this object did not
appear to have a 2MASS counterpart. However, in looking at the overall shape
and brightness of the SED, we suspected that it should have a match in 2MASS.
Examining the images by hand, there is clearly a source at this location
visible in 2MASS and the POSS images. At the POSS bands, it is distinctly
fuzzy. It is also slightly resolved at $J$; it has been identified with 2MASX
J07342550-4654106. The position given for this source in the 2MASS point
source catalog is 2.1$\arcsec$ away from the position in the IRAC catalog, and
the position from the 2MASS extended source catalog is 1.98$\arcsec$ away from
the IRAC position, both of which are very large compared to most of the rest
of the catalog. If it is also slightly resolved at IRAC, this could affect the
positional uncertainty. Certainly, the structure of the molecular cloud around
this source is complex and could also have affected the position in the
catalog. In any case, manual inspection of the images ensures that the object
is really the same in the two catalogs, so the $JHK_{s}$ photometry from the
extended source catalog was attached to this Spitzer source. The SED (Fig. 16)
rises steeply at the long wavelengths, but it is unfortunately just off the
edge of the 70 $\mu$m maps.
Given this object’s SED and location, we have classified this as a high-
quality (grade “A”) YSO candidate, with SED Class I. Additional data are
required to confirm or refute this object’s status.
### A.22 073439.9-465548
073439.9-465548, the last object in our list, has counterparts at $J$ through
only 8 $\mu$m, and as such, it has a sparse SED. (It is off the edge of both
the 24 and 70 $\mu$m maps.) It is located just to the East of the end of the
globule, consistent with it having been relatively recently uncovered. The
shape of the IRAC portion of the SED is a little different than that for other
objects in this set; despite the negative overall slope of the IRAC points,
none of the IRAC points seem to be photospheric (compare to other SEDs in Fig.
14–16), and there is even a very slight negative curvature (the slope between
3.6 and 4.5 $\mu$m is slightly shallower than that between 4.5 and 5.8
$\mu$m). On the basis of experience looking at many hundreds of SEDs for YSOs
and contaminants (Rebull et al. 2010, 2011), we have some reservations about
the shape of this SED; without additional photometric data, it is hard to give
this object a high grade. We give this one a “C” grade; it is a Class II.
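For context, the SED classes quoted throughout this appendix (I, flat, II) are conventionally tied to the infrared spectral index $\alpha=d\log(\lambda F_{\lambda})/d\log\lambda$. The following minimal sketch shows one common way to estimate $\alpha$ from broadband photometry; the flux values are invented for illustration, and the class boundaries assumed are the frequently used ones of Greene et al. (1994), which need not coincide with the scheme adopted in this paper.

```python
import numpy as np

def spectral_index(wavelengths_um, flux_jy):
    """Least-squares slope of log(lambda*F_lambda) vs log(lambda).

    lambda*F_lambda = nu*F_nu, which is proportional to F_nu / lambda,
    so constant factors drop out of the slope."""
    lam = np.asarray(wavelengths_um, dtype=float)
    fnu = np.asarray(flux_jy, dtype=float)
    x = np.log10(lam)
    y = np.log10(fnu / lam)          # proportional to log(lambda*F_lambda)
    alpha, _ = np.polyfit(x, y, 1)   # slope of the linear fit
    return alpha

def sed_class(alpha):
    # Commonly used boundaries (e.g., Greene et al. 1994); assumed here.
    if alpha >= 0.3:
        return "I"
    if alpha >= -0.3:
        return "flat"
    if alpha >= -1.6:
        return "II"
    return "III"

# Hypothetical IRAC+MIPS photometry (microns, Jy), for illustration only.
lam = [3.6, 4.5, 5.8, 8.0, 24.0]
fnu = [0.012, 0.011, 0.010, 0.011, 0.020]
a = spectral_index(lam, fnu)
print(f"alpha = {a:.2f}, SED Class {sed_class(a)}")
```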
|
arxiv-papers
| 2011-05-05T21:36:15 |
2024-09-04T02:49:18.626348
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "L. M. Rebull, C. H. Johnson, V. Hoette, J. S. Kim, S. Laine, M.\n Foster, R. Laher, M. Legassie, C. R. Mallory, K. McCarron, W. H. Sherry",
"submitter": "Luisa Rebull",
"url": "https://arxiv.org/abs/1105.1180"
}
|
1105.1283
|
# Large peak-to-valley ratio of negative-differential-conductance
in graphene p-n junctions
V. Hung Nguyen1,2 (E-mail: viet-hung.nguyen@u-psud.fr), A. Bournel1 and P.
Dollfus1 1Institut d’Electronique Fondamentale, UMR8622, CNRS, Univ. Paris
Sud, 91405 Orsay, France
2Center for Computational Physics, Institute of Physics, VAST, P.O. Box 429 Bo
Ho, Hanoi 10000, Vietnam
###### Abstract
We investigate the transport characteristics of monolayer graphene p-n
junctions by means of the non-equilibrium Green’s function technique. It is
shown that thanks to the high interband tunneling of chiral fermions and to a
finite bandgap opening when the inversion symmetry of the graphene plane is
broken, a strong negative-differential-conductance behavior with peak-to-
valley ratio as large as a few tens can be achieved even at room temperature.
The dependence of this behavior on the device parameters such as the Fermi
energy, the barrier height, and the transition length is then discussed.
## I Introduction
Since the discovery of the Esaki tunnel diode esak58 , the effect of negative-
differential-conductance (NDC) has given rise to an intense research activity
from fundamental aspects of transport to possible applications including
oscillator, frequency multiplier, memory, fast switching, etc. mizu95 .
Motivated by the recent development of graphene nanoelectronics neto09 ;
schw10 ; lin010 , this effect has been also investigated and discussed in some
nanostructures based on monolayer graphene vndo08 ; chau09 , bilayer graphene
hung09 , and graphene nanoribbons ren009 ; lian10 ; vndo10 . However, these
studies have shown that, owing to the zero bandgap, the peak-to-valley ratio
(PVR) of NDC in 2D-graphene structures is relatively small. Although it can be
improved significantly in narrow graphene nanoribbons, the results are
severely degraded when the ribbon width increases and, especially, in the
presence of edge defects lian10 ; vndo10 .
A bandgap opening may be a key-step to improve the operation of graphene
devices. Beyond the technique which consists in patterning a graphene sheet
into nanoribbons mhan07 , some recent works have suggested that a bandgap can
open when the inversion symmetry of the graphene plane is broken, e.g., by
the interaction with the substrate zhou07 ; vita08 ; ende10 , the patterned
hydrogen adsorption balo10 , the adsorption of water molecules yava10 , or by
the controlled structural modification echt08 . In particular, the experiment
reported in zhou07 has shown that graphene epitaxially grown on SiC can
exhibit a bandgap up to $260$ meV. Though relatively small compared to that in
conventional semiconductors, it is about ten times greater than the thermal
energy at room temperature, which has stimulated further investigations of
graphene-based transistors lian07 ; kedz08 ; mich10 . In this work, our aim is
to look for possibilities of achieving a strong NDC behavior in monolayer
graphene p-n junctions. This expectation comes from the fact that the
appearance of a finite bandgap can result in a low valley current, while, the
high interband tunneling of chiral fermions in graphene may lead to a high
current peak. Therefore, a large PVR is expected to be achieved in these
junctions.
## II Model and calculation
Figure 1: (Color online) Schematic of p-n junction based on monolayer graphene
(a) and its potential profile along the transport direction (b). The n-doped
(left) and the p-doped (right) regions are used to form a potential barrier in
the device. The transition between two regions is characterized by the length
$L$.
Graphene has a honeycomb lattice structure with a unit cell consisting of two
carbon atoms - normally referred to as $A$ and $B$ atoms. To describe the
charge states in the system, a simple nearest neighbor tight binding model can
be conveniently used neto09 , with $a_{c}=0.142$ nm as carbon-carbon distance,
$t=2.7$ eV as hopping energy between nearest neighbor sites, and
$\varepsilon_{A}=-\varepsilon_{B}=\Delta$ as the onsite energies in the two
sublattices. When $\Delta=0$, this model results in a band structure with zero
gap, i.e., the conduction and valence bands meet at the K and K’ points in the
Brillouin zone. By making $\Delta\neq 0$, the inversion symmetry is broken and
therefore a finite gap opens in graphene’s band-structure. The energy
dispersion close to the K-point simply reads
$E\left({\vec{k}}\right)=\pm\sqrt{\hbar^{2}v_{F}^{2}\left({k_{x}^{2}+k_{y}^{2}}\right)+\Delta^{2}}$
(1)
where $v_{F}=3a_{c}t/2\hbar\approx 10^{6}$ m/s is the Fermi velocity,
$\vec{k}=\left({k_{x},k_{y}}\right)$ is the 2D-momentum, and the sign +/-
stands for the conduction/valence band, respectively. From eq. (1), the
bandgap is determined as $E_{g}=2\Delta$. To describe the excited states
around such K-point, one can use the following effective (massive Dirac-like)
Hamiltonian:
$H=-i\hbar
v_{F}\left({\sigma_{x}\partial_{x}+\sigma_{y}\partial_{y}}\right)+\Delta\sigma_{z}+U$
(2)
where $U$ is the external potential energy and $\sigma_{x,y,z}$ are the Pauli
matrices. The Hamiltonian (2) is now used to study the transport
characteristics of the p-n junction schematized in Fig. 1. The p-doped and
n-doped graphene regions can be generated by electrostatic doping huar07 ;
zhan08 or by chemical doping farm09 ; bren10 . The back-gate is used to
control the carrier density in the graphene layer by applying a voltage
$V_{sub}$. The junction is characterized by the potential barrier $U_{0}$ and
the transition length $L$. This length is the size of the region across which
the charge density changes monotonically from n-type to p-type. Though
expected to be short farm09 , it is finite and is considered in this work as a
parameter.
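As a quick numerical illustration of Eq. (1), the following sketch evaluates the massive Dirac dispersion with the tight-binding parameters quoted above; the value $\Delta=130$ meV (giving $E_{g}=260$ meV, the gap quoted for graphene on SiC) and the sampled $k$ window are chosen here purely for illustration.

```python
import numpy as np

hbar = 6.582119569e-16              # eV*s
a_c  = 0.142e-9                     # carbon-carbon distance (m)
t    = 2.7                          # nearest-neighbor hopping energy (eV)
v_F  = 3.0 * a_c * t / (2.0 * hbar) # ~1e6 m/s, as quoted in the text

def dispersion(kx, ky, delta):
    """E(k) = +/- sqrt(hbar^2 v_F^2 (kx^2 + ky^2) + delta^2), in eV."""
    return np.sqrt((hbar * v_F) ** 2 * (kx ** 2 + ky ** 2) + delta ** 2)

delta = 0.13                          # eV, so E_g = 2*delta = 260 meV
k = np.linspace(-0.5e9, 0.5e9, 5)     # 1/m, a small window around the K point
print("v_F = %.2e m/s" % v_F)
print("E_g = %.0f meV" % (2e3 * delta))
print("E(k) [eV]:", np.round(dispersion(k, 0.0, delta), 3))
```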
In principle, the presence of defects and impurities is unavoidable and may
result in a substantial amount of disorder in the graphene. However, such
disorder strongly affects the electronic characteristics only when the
graphene sheets are narrow cres08 . Thus, we assume that it is negligible in
this work, where we consider the transport in wide systems vozm05 . Our
study addresses the ballistic transport through the junction and the non-
equilibrium Green’s function (NEGF) technique (see the formulation in ref.
vndo08 ) is then used to solve eq. (2) and to determine the transport
quantities.
Figure 2: (Color online) Local density of states (a) and transmission
coefficient (b) in a graphene p-n junction. The transition length in (a)
$L=20$ nm; in (b): $L=5$ nm (dotted-dashed), 10 nm (solid) and 20 nm (dashed
lines). All results are for $\tilde{\Delta}=65$ meV
($\tilde{\Delta}=\sqrt{E_{y}^{2}+\Delta^{2}}$ and $E_{y}=\hbar v_{F}k_{y}$).
## III Results and discussion
Figure 3: (Color online) (a) $I-V$ characteristics of graphene p-n junction
with different energy bandgaps. Other parameters are: $E_{F}=0.26$ eV,
$U_{0}=0.52$ eV, and $L=10$ nm. (b) and (c) illustrate the band diagrams at
low bias and in the current valley, respectively.
Before investigating the behavior of the electrical current through these
junctions, we plot a map of the local density of states and the transmission
coefficient as a function of energy in Fig. 2(a) and Fig. 2(b), respectively.
The results show clearly three important transport regions: (i) thermionic
when $E<U_{N}-\tilde{\Delta}$ or $E>U_{P}+\tilde{\Delta}$, (ii) interband
tunneling when $U_{N}+\tilde{\Delta}<E<U_{P}-\tilde{\Delta}$, and (iii)
transmission gap when $\left|{E-U_{\alpha}}\right|\leq\tilde{\Delta}$, where
$U_{\alpha}\equiv U_{N,P}$ denote the potential energies in the n-doped and
p-doped regions, respectively. The appearance of transmission gap is
essentially due to the fact that the longitudinal momentum $k_{x}$ defined
from eq. (1) is imaginary in such energy region, i.e., the carrier states are
evanescent and therefore decay rapidly when going to one of the two junction
sides. Note that the same results are obtained for
$\tilde{\Delta}={\rm{const}}$ even with different $E_{y}$ and/or different
$\Delta$. When $\Delta=0$, the transmission gap is just a function of $E_{y}$
and disappears for normally incident particles. Raising $\Delta$ increases this
gap, which takes its minimum value $E_{g}=2\Delta$ when $E_{y}=0$. Moreover,
due to the appearance of evanescent states around the neutral points in the
transition region, Fig. 2(b) shows clearly that the larger the transition
length, the lower the interband tunneling. However, it is worth noting that
the interband tunneling of chiral fermions observed here is still very high in
comparison with typical values of the order of $10^{-7}$ observed in
conventional Esaki tunnel diodes carl97 . Therefore, a high current peak and
then a large PVR are expected to be observed in the considered graphene
junctions.
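The three transport windows read off from Fig. 2 can be stated as simple inequalities in the carrier energy. The sketch below encodes them directly; the numerical values of $U_{N}$, $U_{P}$ and $\tilde{\Delta}$ are illustrative choices (matching the $U_{0}=0.52$ eV and $\tilde{\Delta}=65$ meV used in the figures), not outputs of the NEGF calculation.

```python
def transport_region(E, U_N, U_P, delta_t):
    """Classify a carrier energy E (eV) into the three regions discussed
    in the text (U_N < U_P is assumed, delta_t is the effective gap)."""
    if E < U_N - delta_t or E > U_P + delta_t:
        return "thermionic"
    if U_N + delta_t < E < U_P - delta_t:
        return "interband tunneling"
    if abs(E - U_N) <= delta_t or abs(E - U_P) <= delta_t:
        return "transmission gap"
    return "unclassified"

# Illustrative values only: U_0 = U_P - U_N = 0.52 eV, delta_t = 65 meV.
U_N, U_P, delta_t = 0.0, 0.52, 0.065
for E in (-0.10, 0.03, 0.26, 0.49, 0.62):
    print(f"E = {E:+.2f} eV -> {transport_region(E, U_N, U_P, delta_t)}")
```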
Now, in Fig. 3(a), we display the $I-V$ characteristics obtained for different
energy bandgaps. Note that, throughout the work, the current is computed at
room temperature. The NDC behavior appears more clearly, and its PVR
increases, as the bandgap is increased, although the current peak is reduced.
The form of the $I-V$ curve can be explained by the diagrams in Fig. 3(b,c):
at low bias (3(b)), the contribution of interband tunneling processes makes
the current rise; when the bias is further increased (3(c)), the interband
tunneling is suppressed by the transmission gaps and the current therefore
decreases; when the bias is high enough, the contribution of charge carriers
in the thermionic region makes the current rise very rapidly. Because the
transmission gap widens, the current, especially in the valley region,
decreases when the bandgap is increased. This results in a large PVR of NDC,
for instance, about 46 in the case of $E_{g}=260$ meV shown in Fig. 3(a).
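For reference, the peak-to-valley ratio quoted here is simply the ratio of the local current maximum to the following current minimum of the $I-V$ curve. A minimal extraction sketch is given below on a synthetic NDC-like curve; the data are invented for illustration and are not the computed NEGF currents.

```python
import numpy as np

def peak_to_valley_ratio(V, I):
    """Return (PVR, V_peak, V_valley) from a sampled I-V curve:
    first local maximum of I, then the minimum of I beyond it."""
    V = np.asarray(V, dtype=float)
    I = np.asarray(I, dtype=float)
    peak = next(i for i in range(1, len(I) - 1)
                if I[i] >= I[i - 1] and I[i] > I[i + 1])
    valley = peak + int(np.argmin(I[peak:]))
    return I[peak] / I[valley], V[peak], V[valley]

# Synthetic NDC-like curve, arbitrary units, for illustration only.
V = np.linspace(0.0, 0.8, 81)
I = 4.0 * V * np.exp(-(V / 0.15) ** 2) + 0.05 * np.exp(8.0 * (V - 0.8))
pvr, vp, vv = peak_to_valley_ratio(V, I)
print(f"PVR ~ {pvr:.1f} (peak at {vp:.2f} V, valley at {vv:.2f} V)")
```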
Figure 4: (Color online) $I-V$ characteristics of graphene p-n junction with
different Fermi energies (a) and different potential barriers (b). Other
parameters are: $E_{g}=260$ meV, $E_{F}=U_{N}+U_{0}/2$ in (b), $U_{0}=0.52$ eV
in (a), and $L=10$ nm, where $U_{N}$ is the potential energy in the n-doped
graphene. Figure 5: (Color online) $I-V$ characteristics of graphene p-n
junction with different transition lengths. The inset shows the evolution of
peak-to-valley ratio with respect to $L$. Other structure parameters are:
$E_{g}=260$ meV, $E_{F}=0.26$ eV, and $U_{0}=0.52$ eV.
To evaluate the possibilities of having a strong NDC behavior in the
junctions, we consider the behavior of the $I-V$ characteristics with
different Fermi energies in Fig. 4(a) and different barrier heights in Fig.
4(b). Fig. 4(a) shows that the current valley shifts to the low bias region
while the current peak decreases when $E_{F}$ goes from the value of
$U_{m}=U_{N}+U_{0}/2$ to the top of the valence band in the p-doped region or
to the bottom of the conduction band in the n-doped one. Moreover, the current
is a symmetric function of $E_{F}$ around $U_{m}$. This implies that the
strongest NDC is achieved when $E_{F}=U_{m}$, and that a rectification behavior
around zero bias can be observed when $E_{F}$ moves
away from such value, e.g., see the cases of $E_{F}=0.13$ or 0.39 eV in Fig.
4(a). The latter behavior is essentially due to the role of the transmission
gap (as illustrated in the diagrams in Figs. 3(b,c)), which induces a strong
reduction of current in the low-positive bias region. Besides, when decreasing
the barrier height, we find that the transmission gap around $U_{P}$ moves
downward in energy, and as a consequence, the interband tunneling and the
current peak are reduced. Indeed, this point is clearly illustrated in Fig.
4(b), where $E_{F}$ is chosen to be $U_{N}+U_{0}/2$ to achieve the strongest
NDC. Moreover, it is shown that the current valley shifts to the low bias and
its value is reduced, which finally results in an increase of the PVR when
decreasing $U_{0}$. For instance, the PVR is $\sim 30$, $46$, and $79$ for
$U_{0}=0.6$, $0.52$, and $0.44$ eV, respectively. To obtain the best device
operation (either a high current peak or a large PVR), the study suggests that
a barrier height about two times greater than the energy bandgap should be
used.
Finally, the $I-V$ characteristics in the junctions with different transition
lengths are displayed in Fig. 5. It is shown that because of the suppression
of interband tunneling (as in Fig. 2(b)), the current peak is reduced when
increasing $L$. Therefore, as seen in the inset of Fig. 5, the PVR decays as a
function of the length $L$. However, a PVR as large as about 10 is still
obtained even for $L$ up to 20 nm. This suggests the possibility of achieving
a very large PVR with a short transition length, which can be realized by
controlling the device geometry, e.g., by appropriately reducing the gate
dielectric thickness in the case of the electrostatic doping zhan08 , or by
using the chemical doping to generate the junction as mentioned in ref. farm09
. As anticipated above, the large PVR, e.g., $\sim$123 obtained for $L=5$ nm
in this work, is much higher than that of conventional Esaki tunnel diodes,
whose highest reported value is only about 16 (see ref. oehm10 and references
therein).
## IV Conclusion
Using the NEGF technique, we have investigated the transport characteristics
of monolayer graphene p-n junctions. Even at room temperature a negative-
differential-conductance with peak-to-valley ratio as large as a few tens can
be observed in these junctions thanks to a finite bandgap and to the high
interband tunneling of chiral fermions. The dependence of this behavior on the
device parameters was analyzed. It is shown that the strong negative-
differential-conductance behavior is achieved when the Fermi level is in the
center of the potential barrier, the barrier height is about two times greater
than the bandgap, and the transition length is short. We hope that the obtained
results can be helpful for designing efficient room-temperature negative-
differential-conductance devices based on finite-bandgap graphene.
## Acknowledgments
This work was partially supported by the French ANR through the projects
NANOSIM-GRAPHENE (ANR-09-NANO-016) and MIGRAQUEL (ANR-10-BLAN-0304). One of
the authors (VHN) acknowledges the Vietnam’s National Foundation for Science
and Technology Development (NAFOSTED) for financial support under the Project
No. 103.02.76.09.
## References
* (1) L. Esaki, Phys. Rev. 109, 603 (1958).
* (2) H. Mizuta and T. Tanoue, _The Physics and Applications of Resonant Tunneling Diodes_ (Cambridge University Press, Cambridge, 1995).
* (3) A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
* (4) F. Schwierz, Nat. Nanotech. 5, 487 (2010).
* (5) Y.-M. Lin, C. Dimitrakopoulos, K. A.Jenkins, D. B. Farmer, H.-Y. Chiu, A. Grill, and Ph. Avouris, Science 327, 662 (2010); L. Liao, Y.-C. Lin, M. Bao, R. Cheng, J. Bai, Y. Liu, Y. Qu, K. L. Wang, Y. Huang, and X. Duan, Nature 467, 305 (2010).
* (6) V. Nam Do, V. Hung Nguyen, P. Dollfus, and A. Bournel, J. Appl. Phys. 104, 063708 (2008).
* (7) H. Chau Nguyen and V. Lien Nguyen, J. Phys.: Condens. Matter 21, 045305 (2009).
* (8) V. Hung Nguyen, A. Bournel, V. Lien Nguyen, and P. Dollfus, Appl. Phys. Lett. 95, 232115 (2009).
* (9) H. Ren, Q.-X. Li, Y. Luo, and J. Yang, Appl. Phys. Lett. 94, 173110 (2009).
* (10) G. Liang, S. B. Khalid, and K.-T. Lam, J. Phys. D: Appl. Phys. 43, 215101 (2010).
* (11) V. Nam Do and P. Dollfus, J. Appl. Phys. 107, 063705 (2010).
* (12) M. Y. Han, B. Ozyilmaz, Y. Zhang, and P. Kim, Phys. Rev. Lett. 98, 206805 (2007).
* (13) S. Y. Zhou, G.-H. Gweon, A. V. Fedorov, P. N. First, W. A. de Heer, D.-H. Lee, F. Guinea, A. H. Castro Neto, and A. Lanzara, Nat. Mater. 6, 770 (2007).
* (14) L. Vitali, C. Riedl, R. Ohmann, I. Brihuega, U. Starke, and K. Kern, Surf. Sci. 602, L127 (2008).
* (15) C. Enderlein, Y. S. Kim, A. Bostwick, E. Rotenberg, and K. Horn, New J. Phys. 12, 033014 (2010)
* (16) R. Balog, B. Jørgensen, L. Nilsson, M. Andersen, E. Rienks, M. Bianchi, M. Fanetti, E. Lægsgaard, A. Baraldi, S. Lizzit, Z. Sljivancanin, F. Besenbacher, B. Hammer, T. G. Pedersen, P. Hofmann, and L. Hornekær, Nat. Mater. 9, 315 (2010).
* (17) F. Yavari, C. Kritzinger, C. Gaire, L. Song, H. Gullapalli, T. Borca-Tasciuc, P. M. Ajayan, and N. Koratkar, Small 6, 2535 (2010).
* (18) T. J. Echtermeyer, M. C. Lemme, M. Baus, B. N. Szafranek, A. K. Geim, and H. Kurz, IEEE Electron Device Lett. 29, 952 (2008).
* (19) G. Liang, N. Neophytou, M. S. Lundstrom, and D. E. Nikonov, J. Appl. Phys. 102, 054307 (2007).
* (20) J. Kedzierski, P.-L. Hsu, P. Healey, P. W. Wyatt, C. L. Keast, M. Sprinkle, C. Berger, and W. A. de Heer, IEEE Trans. Electron Devices 55, 2078 (2008).
* (21) P. Michetti, M. Cheli, and G. Iannaccone, Appl. Phys. Lett. 96, 133508 (2010).
* (22) B. Huard, J.A. Sulpizio, N. Stander, K. Todd, B. Yang, and D. Goldhaber-Gordon, Phys. Rev. Lett. 98, 236803 (2007).
* (23) L. M. Zhang and M. M. Fogler, Phys. Rev. Lett. 100, 116804 (2008); T. Low, S. Hong, J. Appenzeller, S. Datta, and M. S. Lundstrom, IEEE Trans. Electron Devices 56, 1292 (2009).
* (24) D. B. Farmer, Y.-M. Lin, A. Afzali-Ardakani, and P. Avouris, Appl. Phys. Lett. 94, 213106 (2009).
* (25) K. Brenner and R. Murali, Appl. Phys. Lett. 96, 063104 (2010).
* (26) A. Cresti, N. Nemec, B. Biel, G. Niebler, F. Triozon, G. Cuniberti, and S. Roche, Nano Res. 1, 361 (2008).
* (27) In wide graphene sheets, the most remarkable change in the electronic structure induced by the disorder appears at the K-point where it can give rise to localized states (see in M. A. H. Vozmediano _et al._ , Phys. Rev. B 72, 155121 (2005)).
* (28) A. D. Carlo, P. Lugli, and P. Vogl, Solid State Commun. 101, 921 (1997).
* (29) M. Oehme, M. Šarlija, D. Hähnel, M. Kaschel, J. Werner, E. Kasper, and J. Schulze, IEEE Trans. Electron Devices 57, 2857 (2010).
|
arxiv-papers
| 2011-05-06T13:53:25 |
2024-09-04T02:49:18.635719
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "V. Hung Nguyen, A. Bournel, and P. Dollfus",
"submitter": "Viet Hung Nguyen",
"url": "https://arxiv.org/abs/1105.1283"
}
|
1105.1334
|
# On the Expansions in Spin Foam Cosmology
Frank Hellmann
Max Planck Institute for Gravitational Physics
(Albert Einstein Institute)
###### Abstract
We discuss the expansions used in spin foam cosmology. We point out that
already at the one vertex level arbitrarily complicated amplitudes contribute,
and discuss the geometric asymptotics of the five simplest ones. We discuss
what type of consistency conditions would be required to control the
expansion. We show that the factorisation of the amplitude originally
considered is best interpreted in topological terms. We then consider the next
higher term in the graph expansion. We demonstrate the tension between the
truncation to small graphs and going to the homogeneous sector, and conclude
that it is necessary to truncate the dynamics as well.
## 1 Introduction
The idea of a spin foam cosmology [1, 2] is to take the spin foam approach to
quantum gravity seriously and attempt to study the homogeneous sector of the
full theory using a series of approximations. The physical meaning of these
expansions is of crucial importance for this project. The original paper [1]
described three approximations made to the full theory: small graph, single
spin foam, and large volume. Then the homogeneous sector within this
truncation of the theory was studied. The amplitude associated to the
transition from a homogeneous state to another homogeneous state was found to
have an intriguing factorisation property. This was interpreted as arising
from the factorisation of the Hamiltonian constraint of the classical theory,
and taken as evidence for the soundness of the approximations.
More concretely the theory considered in [1] is the KKL formulation [3] of the
EPRL model [4, 5, 6]. The small boundary graph considered consists of two
graphs, each with two vertices and four edges. Following [7]
each of these graphs is called a dipole graph, and can be seen as dual to a
degenerate triangulation of $S^{3}$. This means we are considering spacetimes
with boundary $S^{3}\cup S^{3}$. The single spin foam considered was the one
obtained by shrinking the full boundary to a single vertex.
The overall aim of this paper is to begin the investigation of the physical
meaning of the approximations used. We will sketch conditions the amplitudes
have to satisfy in order for low order calculations to be sensible. The idea
is simply to compare the behaviour of the next order of the approximation to
the behaviour of the previous order. This can be seen as an appropriate
implementation of the idea of renormalisation in the current context.
Our main technical point is the derivation of the asymptotic geometry of the
five smallest spin foams with dipole boundary containing one vertex, thus
giving a geometric picture of the simplest terms of the first order in the
vertex expansion. We will also consider what the known asymptotic results for
the amplitudes of the larger graph $\Gamma^{5}$ tell us about the relationship
of the graph expansion and the restriction to the homogeneous sector.
There are numerous ambiguities in the precise definition of the model. We will
keep the discussion mostly independent of these ambiguities but will point out
where our arguments depend on them. A complete assessment of the plausibility
of these expansions will crucially depend on these choices though and
therefore is beyond the reach of the current paper. Nevertheless an immediate
result of our calculations is to reinterpret the factorisation property found
in [1] as a topological factorisation. That is, the classical solution seen
there lives on the topology $B^{4}\cup B^{4}$.
## 2 The vertex expansion
We begin by considering the vertex expansion. There are two possible views on
this expansion, either as the first term in a sum over 2-complexes or as the
coarse approximation of a theory defined by a refinement limit. In [8] it was
argued that these two possibilities can be the same if the amplitudes satisfy
certain conditions. We will assume that the no-vertex term, represented by the
foam $\Gamma^{dipole}\times[0,1]$, corresponds to the identity operator.
### 2.1 The one vertex foams
In this section we will discuss the possible foams with one internal vertex
and two dipole graphs as boundary. Recall that a generic $n$-complex can be
constructed inductively by taking an $n-1$-complex and attaching $n$-balls to
it (see e.g. [9]). In our case we consider 2-complexes with the boundary given
by two dipole graphs and one vertex not on the boundary. That means we start
with a set of five vertices. We will start by constructing a particularly
simple set of five two complexes.
We begin by gluing in four edges, connecting the four vertices of the two
boundary dipole graphs to the internal vertex. Together with two edges of the
boundary graph we have six edges forming a circle with two points identified,
or a figure eight. The two simplest ways of gluing faces to these six edges,
are by gluing a disc to the whole figure eight, or by gluing two discs to the
two circles making up the figure eight, see Figure 1.
Figure 1: The two ways to glue discs to the edges. On the left the boundary of
the disc is identified with the whole figure eight, on the right two discs are
glued to the two triangles individually.
If we restrict the neighbourhood of the boundary graph $\Gamma$ to look like
$[0,1]\times\Gamma$, we can’t glue any other edges to the boundary vertices,
and only one face to every boundary edge. Thus we obtain five types of
vertices, depending on the number of each type of face gluings. These are
shown in the top row of Figure 2 (the existence of these spin foams was first
pointed out by B. Bahr in an unpublished note). We can think of the second
type of gluing as an edge of the boundary shrinking to a point and then
expanding again, the dashed line in Figure 2 indicates a face that does not
shrink to a point, or a gluing with one disc rather than two. In the second
row we give the KKL spin networks (see [3]) that give the vertex amplitudes of
these spin foams. These are simply the intersection of a small sphere around
the vertex with the spin foam.
Figure 2: The one vertex spin foams and their associated KKL spin networks.
The dashed lines indicate that the face bounded by the edges it connects does
not shrink to a point.
Note that while we will assume that the boundary topology is fixed by the
boundary spin network graph to be $S^{3}\cup S^{3}$ we make no such assumption
for the bulk topology. While for the 2-complexes that arise, for example, in
group field theory and which are dual to triangulations, there always is a
natural dual topological space, this is not the case for the more general
2-complexes considered here. Instead we will have to read out the space time
topology from the behaviour of the amplitudes. We will argue for the
identification of the last foam with the space time topology $B^{4}\cup B^{4}$
in section 2.4; the topology for the other foams is considerably more
challenging, though.
In [1] only the operator corresponding to the last 2-complex in Figure 2 was
considered. The factorisation property crucial to the physical interpretation
in [1] can be seen here already in the fact that in the last KKL spin network
there are no edges going from the top to the bottom. This is due to the fact
that the spin foam becomes disconnected upon the removal of the vertex, it has
the same amplitude as a disconnected foam with two vertices instead of one. In
[10] it was suggested to therefore consider these as equivalent foams, which
would place the fifth foam at the second order of the vertex expansion.
Beyond the five simple 2-complexes in Figure 2 we can construct arbitrarily
complicated ones, even subject to the conditions mentioned above. First note
that the attaching map for the faces doesn't need to traverse the vertices
in the same order, in this way one obtains twisted 2-complexes for which the
KKL spin network contain links crossing from the top left corner to the bottom
right and vice versa.
Furthermore we can attach more edges to the central vertex; this provides more
figure eights, or more generally “flowers” with several petals starting at the
central vertex. We can then glue further faces to these additional edges. As
there is no restriction coming from the topology of the boundary on these, we
can glue arbitrarily many faces, creating arbitrarily many links in the KKL
spin network. We can distinguish two ways of doing so. If the additional edges
and faces become disconnected from the faces and edges we started with upon
removal of the central vertex we have a spin foam that factorizes and we speak
of a tadpole foam. Its amplitude is a product of a tadpole contribution and
the original amplitude. Thus we can divide out the (divergent) sum over the
(divergent) amplitudes of all tadpoles.
If, however, we glue the faces touching the boundary to these additional
petals these KKL spin networks will be arbitrarily complicated connected
networks. In this way we can obtain foams with arbitrarily many faces and
arbitrarily complex spin network graphs as amplitudes already at the one
vertex level. We will restrict our analysis to the five basic foams depicted
above. In particular these are the untwisted foams leading to the smallest KKL
spin networks, namely to those with four vertices and eight edges.
### 2.2 Asymptotic geometry
We will now consider the asymptotic geometry of the five basic terms of the
first order of the vertex expansion. For completeness we will discuss the
asymptotic geometry of the amplitudes for all five terms, including the one
discussed in [1]. For simplicity and clarity we will use the standard
Perelomov coherent state basis [11], rather than the heat kernel coherent
states. This will give us a direct interpretation of the geometry in terms of
simplicial manifolds. It should be noted that this might not be the only, nor
the preferred interpretation of the geometry in this case. We nevertheless
expect the salient features of the amplitudes to become clear from these
pictures already. As our main interest is in the geometric correlations induced
by the non-factorising terms, we will limit the discussion to the asymptotic
geometry of these amplitudes and will not give the full leading order of the
large spin expansion of the amplitudes. (It is worth noting that with the
face amplitudes chosen in [1] the first four terms appear to be polynomially
suppressed relative to the fifth one, simply by having more faces. However, as
described in the last section, there are one vertex spin foams with
arbitrarily many faces. These in turn polynomially dominate the fifth term.)
Figure 3: Our notation for the boundary state space.
The boundary state space is the spin network space on two dipole graphs. We
will arbitrarily label one graph as the $in$ graph and the other as $out$, and
one vertex as $l$-eft and the other as $r$-ight. The spins will be denoted
$j^{in/out}_{i}$, with $i=1\dots 4$. We then have four intertwiners
$\iota^{in}_{l}$, $\iota^{in}_{r}$, $\iota^{out}_{l}$, $\iota^{out}_{r}$,
living in the invariant subspaces of $\bigotimes_{i=1}^{4}j^{in}_{i}$ and
$\bigotimes_{i=1}^{4}j^{out}_{i}$ respectively, see Figure 3. They will be
labelled by four unit vectors each ${\bf n}^{in}_{li}$, ${\bf n}^{in}_{ri}$,
${\bf n}^{out}_{li}$, ${\bf n}^{out}_{ri}$, from which we construct the
coherent intertwiners of [11]. This leaves the phase of the intertwiners
unspecified. As we will only discuss the asymptotic geometry it will not enter
our discussion here and can be fixed arbitrarily.
The spin network evaluation is given by using the EPRL map $I_{EPRL}$ to boost
the $\mathrm{SU}(2)$ intertwiners into $\mathrm{SU}(2)\times\mathrm{SU}(2)$
intertwiners and contracting these using the invariant bilinear inner product
$\langle,\rangle$ according to the pattern given by the KKL spin
network (note that the analysis would be essentially the same if we just used
standard $\mathrm{SU}(2)$ spin networks rather than the EPRL model [12, 13];
this corresponds to setting $\gamma=1$). As the inner product is antisymmetric
for half integer representations this only defines the amplitude up to a sign,
which can be fixed by choosing an orientation on the edges (we prefer the
bilinear inner product as the result only depends on the orientation by sign,
not by complex conjugation, and because it implements an orientation
preserving gluing in the asymptotic geometry). We will only consider the case
of Euclidean $\gamma<1$, writing $\gamma^{\pm}=\frac{1\pm\gamma}{2}$, so we
have
$I_{EPRL}(j_{i},{\bf
n}_{i})=\int_{\mathrm{SU}(2)\times\mathrm{SU}(2)}\mathrm{d}g^{+}\mathrm{d}g^{-}\bigotimes_{i=1}^{4}g^{+}|{\bf
n}_{i},\gamma^{+}j_{i}\rangle\otimes g^{-}|{\bf
n}_{i},\gamma^{-}j_{i}\rangle.$ (1)
The asymptotic geometry of these boundary intertwiners is given by considering
the vectors $j_{i}{\bf n}_{i}$ as the outward area normals of a tetrahedron.
This is possible as the intertwiners associated to data that does not satisfy
closure, $\sum_{i}j_{i}{\bf n}_{i}=0,$ are exponentially suppressed for large
$j$. Thus the picture we have is two unrelated tetrahedra making up the two
hemispheres of $S^{3}$ associated to the left and right vertex of the dipole
respectively. The induced geometry is thus discontinuous along the equator of
the $S^{3}$. The two key questions we will investigate are then:
* •
Do the amplitudes correlate the $in$ and $out$ tetrahedra?
* •
Do the amplitudes enforce geometricity between the left and the right
hemisphere? That is, do they reduce the discontinuity at the equator?
We denote the vertex amplitudes $A_{k}(j^{in},{\bf n}^{in},j^{out},{\bf
n}^{out})$, where $k=0\dots 4$ labels the number of edges that contract to a
vertex in the spin foam. We will always assume that it is the lowest labelled
edges that contract, that is for $A_{k}$ the edges labelled by $j^{in}_{i}$
and $j^{out}_{i}$ with $i\leq k$ contract to a vertex. The other possible
amplitudes are obtained by permuting the edges at the boundary graphs.
The first term $A_{0}$ is simply the propagation by chained EPRL maps acting
on the left and right intertwiner separately:
$A_{0}=I_{EPRL}^{*}I_{EPRL}\otimes I_{EPRL}^{*}I_{EPRL},$ (2)
where $I^{*}$ denotes the adjoint map under the bilinear inner products. The
amplitude of the originally considered graph, $A_{4}$ is simply the inner
product of the EPRL intertwiners on the vertices:
$A_{4}=\left\langle I_{EPRL}(j^{in}_{l},{\bf
n}^{in}_{l}),I_{EPRL}(j^{in}_{r},{\bf n}^{in}_{r})\right\rangle\left\langle
I_{EPRL}(j^{out}_{l},{\bf n}^{out}_{l}),I_{EPRL}(j^{out}_{r},{\bf
n}^{out}_{r})\right\rangle$ (3)
This can be written in terms of
$f(j_{i},{\bf n}_{i},{\bf
n}^{\prime}_{i})=\int_{\mathrm{SU}(2)}\mathrm{d}g\prod_{i=1}^{4}\langle-{\bf
n}_{i}|g|{\bf n}^{\prime}_{i}\rangle^{2j_{i}},$ (4)
where $\langle|\rangle$ is the Hermitian inner product on the fundamental
representation. The amplitude then factorises as
$A_{4}=f(\gamma^{+}j^{in}_{i},{\bf n}^{in}_{li},{\bf
n}^{in}_{ri})\;f(\gamma^{-}j^{in}_{i},{\bf n}^{in}_{li},{\bf
n}^{in}_{ri})\;f(\gamma^{+}j^{out}_{i},{\bf n}^{out}_{li},{\bf
n}^{out}_{ri})\;f(\gamma^{-}j^{out}_{i},{\bf n}^{out}_{li},{\bf
n}^{out}_{ri}).$ (5)
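The building block $f$ of Eq. (4) is a single SU(2) group integral of spin-1/2 coherent-state overlaps raised to the powers $2j_{i}$, so it can be estimated by crude Monte Carlo over Haar-random group elements. The sketch below does this for illustrative (integer) spins and normals; the configuration, phase conventions and sample size are arbitrary choices, not boundary data of any amplitude discussed here.

```python
import numpy as np

def coherent(n):
    """Spin-1/2 coherent state |n> for a unit vector n (fixed phase convention)."""
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))
    phi = np.arctan2(n[1], n[0])
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def haar_su2(rng):
    """Haar-random SU(2) element as a 2x2 matrix (from a unit quaternion)."""
    a, b, c, d = rng.normal(size=4) / np.linalg.norm(rng_q := rng.normal(size=0) if False else 1) if False else (lambda q: q / np.linalg.norm(q))(rng.normal(size=4))
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def f_amplitude(j, n, n_prime, samples=20000, seed=0):
    """Monte Carlo estimate of eq. (4):
    f = int_SU(2) dg prod_i <-n_i| g |n'_i>^(2 j_i)."""
    rng = np.random.default_rng(seed)
    bras = [np.conj(coherent(-ni)) for ni in n]   # <-n_i|
    kets = [coherent(npi) for npi in n_prime]     # |n'_i>
    total = 0.0 + 0.0j
    for _ in range(samples):
        g = haar_su2(rng)
        term = 1.0 + 0.0j
        for ji, bra, ket in zip(j, bras, kets):
            term *= (bra @ g @ ket) ** (2 * ji)
        total += term
    return total / samples

# Illustrative closed configuration: regular-tetrahedron normals, n'_i = -n_i.
n = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
print(abs(f_amplitude([2, 2, 2, 2], n, -n)))
```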
When evaluated in the coherent intertwiner basis the map $A_{0}$ can be
similarly expressed by switching the roles of $in$/$out$ and $l$/$r$:
$\displaystyle A_{0}$ $\displaystyle=$ $\displaystyle
f(\gamma^{+}j^{in}_{i},{\bf n}^{in}_{li},{\bf
n}^{out}_{li})\;f(\gamma^{-}j^{in}_{i},{\bf n}^{in}_{li},{\bf
n}^{out}_{li})\times$ (6) $\displaystyle\times f(\gamma^{+}j^{in}_{i},{\bf
n}^{in}_{ri},{\bf n}^{out}_{ri})\;f(\gamma^{-}j^{in}_{i},{\bf
n}^{in}_{ri},{\bf n}^{out}_{ri})\;\prod_{i}\delta(j^{in}_{i},j^{out}_{i})$
The asymptotic geometry of $f$ is simply that $f$ is exponentially small
unless there is an $\mathrm{SO}(3)$ element $G$ such that $-{\bf n}_{i}=G{\bf
n}^{\prime}_{i}$, see for example [11]. If we interpret $j_{i}{\bf n}_{i}$ as
the outward normals of a tetrahedron this simply says that the two tetrahedra
have the same geometry but opposite orientation. From this we can simply read
off the asymptotic geometry of the amplitudes $A_{0}$ and $A_{4}$. For the
former, the geometry on the left and right tetrahedra of the $in$ state
propagates unperturbed to the $out$ state without interacting. The asymptotic
geometry of $A_{4}$ on the other hand has the interpretation that the $in$ and
$out$ state are completely uncorrelated while the geometry induced on $S^{3}$
by the tetrahedra becomes continuous. The left and right tetrahedra can now
really be seen as the images of continuous, isometric, orientation preserving
maps from the left and right hemisphere of a singularly triangulated, metric
$S^{3}$ into $\mathbb{R}^{3}$.
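The stationary-phase condition just used, that $f$ is exponentially small unless $-{\bf n}_{i}=G{\bf n}^{\prime}_{i}$ for some rotation $G$, is easy to test numerically: one can solve the weighted orthogonal Procrustes problem for the normals and inspect the residual, together with the closure defect $\sum_{i}j_{i}{\bf n}_{i}$. A minimal sketch with hypothetical data follows; the Procrustes (Kabsch) solver is a standard construction, not part of the model.

```python
import numpy as np

def closure_defect(j, n):
    """Norm of sum_i j_i n_i; ~0 for outward area normals of a tetrahedron."""
    j = np.asarray(j, dtype=float)
    n = np.asarray(n, dtype=float)
    return float(np.linalg.norm((j[:, None] * n).sum(axis=0)))

def rotation_mismatch(j, n, n_prime):
    """Smallest weighted residual sum_i j_i |(-n_i) - G n'_i|^2 over
    proper rotations G (weighted orthogonal Procrustes / Kabsch)."""
    j = np.asarray(j, dtype=float)
    A = -np.asarray(n, dtype=float)          # targets  -n_i
    B = np.asarray(n_prime, dtype=float)     # sources   n'_i
    H = (j[:, None] * B).T @ A               # weighted covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    G = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation with A_i ~ G B_i
    return float((j * np.sum((A - B @ G.T) ** 2, axis=1)).sum())

# Hypothetical data: regular-tetrahedron normals and n'_i = -R n_i,
# so a rotation G = R^T with -n_i = G n'_i exists by construction.
n = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
j = [2.0, 2.0, 2.0, 2.0]
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
n_prime = -(n @ R.T)
print("closure defect    :", closure_defect(j, n))
print("rotation mismatch :", rotation_mismatch(j, n, n_prime))
```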
We now turn to the amplitude $A_{1}$ associated to the spin foam where one
edge contracts. The KKL spin network for this spin foam is two-connected. It
is worth noting that in the Lorentzian case the two-connected spin networks
are naively divergent. As all three-or-more-connected ones were shown to be
finite in [14], the two-connected ones would dominate the amplitude. In the
Euclidean case which we are studying, we can use Schur’s lemma to turn it into
the network for $A_{0}$ rescaled by the inverse dimension of the edge that
contracts. It thus has the same geometric interpretation as $A_{0}$. Writing
$d_{j}$ for the signed dimension of the $\mathrm{SU}(2)\times\mathrm{SU}(2)$
irrep $(\gamma^{+}j,\gamma^{-}j)$, we thus have
$A_{1}=\frac{1}{d_{j_{1}}}A_{0}.$ (7)
The amplitude for the case where three edges contract is also two-connected
and can be rewritten as the amplitude $A_{4}$, rescaled by the dimension of
the edge that does not shrink. Geometrically it thus has the same
interpretation again except that the $in$ and $out$ $S^{3}$ are no longer
independent but need to have a face with the same area:
$A_{3}=\frac{1}{d_{j^{in}_{4}}}\,\delta(j^{in}_{4},j^{out}_{4})\,A_{4}.$ (8)
This leaves the amplitude $A_{2}$, where two edges contract, as the only case
that can not be reduced to $f$. The amplitude is given by:
$\displaystyle A_{2}$ $\displaystyle=$
$\displaystyle\prod_{\epsilon=\pm}\int_{(\mathrm{SU}(2))^{4}}\prod_{l=1}^{4}\mathrm{d}g^{\epsilon}_{l}$
(9) $\displaystyle\prod_{i=1,2}\langle-{\bf
n}^{in}_{li}|{(g^{\epsilon}_{1})}^{\dagger}g^{\epsilon}_{2}|{\bf
n}^{in}_{ri}\rangle^{2\gamma^{\epsilon}j^{in}_{i}}\langle-{\bf
n}^{out}_{li}|{(g^{\epsilon}_{3})}^{\dagger}g^{\epsilon}_{4}|{\bf
n}^{out}_{ri}\rangle^{2\gamma^{\epsilon}j^{out}_{i}}$
$\displaystyle\prod_{k=3,4}\langle-{\bf
n}^{in}_{lk}|{(g^{\epsilon}_{1})}^{\dagger}g^{\epsilon}_{3}|{\bf
n}^{out}_{lk}\rangle^{2\gamma^{\epsilon}j^{in}_{k}}\langle-{\bf
n}^{in}_{rk}|{(g^{\epsilon}_{2})}^{\dagger}g^{\epsilon}_{4}|{\bf
n}^{out}_{rk}\rangle^{2\gamma^{\epsilon}j^{in}_{k}}\delta(j^{in}_{k},j^{out}_{k})$
We introduce the $\mathrm{SO}(3)$ group elements $G_{in}$, $G_{l}$, $G_{r}$
and $G_{out}$, related to the $SO(3)$ elements $G_{i}$ covered by the
integration variables $g^{+}_{i}$ evaluated at the critical points by
$G_{in}=G_{1}^{\dagger}G_{2}$, $G_{l}=G_{1}^{\dagger}G_{3}$,
$G_{r}=G_{2}^{\dagger}G_{4}$ and $G_{out}=G_{3}^{\dagger}G_{4}$. The critical
point equations then are:
$\displaystyle-{\bf n}^{in/out}_{li}=$ $\displaystyle G_{in/out}{\bf
n}^{in/out}_{ri}$ $\displaystyle\;\;\mbox{for}\,i=1,2$ $\displaystyle-{\bf
n}^{in}_{l/r\,i}=$ $\displaystyle G_{l/r}{\bf n}^{out}_{l/r\,i}$
$\displaystyle\;\;\mbox{for}\,i=3,4.$ (10)
For non-degenerate boundary data the ${\bf n}$ are pairwise linearly
independent. Therefore these equations fix the $\mathrm{SO}(3)$ elements up to
gauge, thus, as in the other cases, the second sector
$\mathrm{SU}(2)\times\mathrm{SU}(2)$, $g^{-}$, has to agree with $g^{+}$ at
the critical point up to gauge.
We now see that the geometric correlations induced by this intermediate
amplitude are indeed in between the other two amplitudes. To see this consider
the shape of each tetrahedron determined by the four areas given by the
$j_{i}$ and the two exterior dihedral angles $\phi_{12}$ and $\phi_{34}$ given
by $\cos(\phi_{ij})={\bf n}_{i}\cdot{\bf n}_{j}$ and $0\leq\phi_{ij}<\pi$. By
definition the areas of the left and right tetrahedra always coincide. In the
case of the $A_{4}$ amplitude the asymptotic geometry is such that furthermore
$\phi_{12}$ and $\phi_{34}$ coincide as well. We obtain the same geometry on
the left and the right, which in turn eliminates the discontinuity at the
equator, but leaves the $in$ and $out$ geometry completely uncorrelated. In
the case of $A_{0}$, $\phi_{12}$ and $\phi_{34}$ in the left and right
tetrahedron remain uncorrelated. Instead they now propagate from the $in$
state to the $out$ state. Furthermore now the four areas between the $in$ and
$out$ state are forced to be the same, thus the entire (discontinuous)
geometry from the $in$ state propagates to the $out$ state. For the $A_{2}$
amplitude we have exactly the intermediate situation. The areas $j^{in}_{3}$
and $j^{in}_{4}$ propagate to the areas $j^{out}_{3}$ and $j^{out}_{4}$, and
the dihedral angles $\phi^{in}_{34}$ to $\phi^{out}_{34}$. They do however
remain uncorrelated between the left and right tetrahedron on the $in$ and
$out$ slice, and the geometry remains discontinuous. On the other hand the
angles $\phi_{12}$ get matched between the left and right tetrahedron but do
not propagate between $in$ and $out$.
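Since the comparison above is phrased entirely in terms of the exterior dihedral angles defined by $\cos(\phi_{ij})={\bf n}_{i}\cdot{\bf n}_{j}$, a one-line helper suffices to extract them from given normals; the regular-tetrahedron normals below are only an example.

```python
import numpy as np

def exterior_dihedral(n_i, n_j):
    """phi_ij with cos(phi_ij) = n_i . n_j, as defined in the text."""
    return float(np.arccos(np.clip(np.dot(n_i, n_j), -1.0, 1.0)))

# Hypothetical normals (regular tetrahedron), for illustration only.
n = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
phi_12 = exterior_dihedral(n[0], n[1])
phi_34 = exterior_dihedral(n[2], n[3])
print(np.degrees([phi_12, phi_34]))   # ~109.47 deg each in the regular case
```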
We thus see that we have a trade-off in the amplitudes between geometry
propagating within the boundary and between boundaries. The only amplitude
that both eliminates the discontinuity and induces correlations between the
different boundaries is $A_{3}$.
In our discussion we have ignored the face amplitudes, as fixing these would
require a separate discussion and separate assumptions. We refer the
interested reader to the literature [15] and references therein. To give the
full asymptotics it would furthermore be necessary to systematically fix the
phase of the intertwiners and understand the spin structure of the critical
points.
### 2.3 The consistency of the expansion
As a sum over 2-complexes, the vertex expansion can be implemented as a
Feynman expansion of an auxiliary field theory, called group field theory
[16]. In the refinement limit it is more akin to a lattice approach. In both
cases we are however lacking the physical expansion parameter that fixes the
meaning of the expansion, in the former case the coupling constant associated
to the vertices, in the latter case the lattice scale. Thus these analogies
should be taken with a grain of salt. As mentioned above it has actually been
suggested in [8] that both of these expansions can be considered to coincide
for spin foam models, precisely due to the absence of a scale. However, we can
still quite generically consider what sort of consistency conditions the
amplitudes would need to satisfy for the low order calculations to be
indicative of the behaviour of the full theory defined by either of these
limiting procedures.
An example of such a consistency condition would be the invariance under
Pachner moves used to construct lattice topological quantum field theories
[17, 18, 19, 20]. In that case the 2-complex is taken to be the dual two
skeleton of the triangulation of a PL-manifold [21]. The condition then is
that the observables on the boundary do not change under changes of the
2-complex that correspond to changing the triangulation of the manifold. A
trivial refinement limit can then be taken that will depend on the PL-manifold
and no longer on its triangulation. In particular observables can simply be
calculated exactly using the simplest triangulations available.
The refinement limit considered in [8] envisions going to an “infinite
complete 2-complex” and thus is of a significantly different nature. We can
see this for example in the fact that as opposed to the Feynman expansion and
the lattice approach the topology of the neighbourhood of a vertex is not
restricted, leading to infinitely many terms at the one vertex level already.
We would also expect that in the case of gravity we will not be able to define
the refinement limit exactly, but only approximately. Conversely that means
that the consistency conditions should only be satisfied approximately, e.g.
up to higher terms in a refinement parameter. In the case of perturbative
renormalisation of QFT the statement is that the divergent parts of the higher
order calculations resemble the lower order ones. In the case of non-
perturbative renormalisation of lattice theories the hope is to take the
refinement limit by approaching the critical point. In concrete lattice
calculations the physical quantities are calculated for several values of the
lattice spacing and then extrapolated to the continuum limit. Thus in this
case, too, studying the behaviour under refinement is crucial for obtaining
physical answers (e.g., [22] lists among the essential ingredients for a full
and controlled study: “V. Controlled extrapolations to the continuum limit,
requiring that the calculations be performed at no less than three values of
the lattice spacing, in order to guarantee that the scaling region is
reached.”).
Dittrich and Bahr have emphasized the importance of studying the approximate
behaviour of the theory and simpler models under such refinements, especially
with respect to (restoring) their symmetries, and carried out a number of
calculations in this direction for the classical [23, 24, 25, 26] and quantum
case [27]. In the case of general spin foam models Bahr has suggested a set of
cylindrical consistency conditions [28]. An analysis along these lines would
be necessary to answer the question of the consistency of the approximation in
the affirmative.
However, even without going to such lengths, and just using the operators
defined above we can already formulate minimal consistency checks. Write
$\tilde{A}_{k}$ for the sum of the $A_{k}$ operators with permuted edges and
face factors taken into account. As the initial and final state on these
operators have the same form, we can consider arbitrary combinations of these
operators. These are the summands in
$(\tilde{A}_{0}+\tilde{A}_{1}+\tilde{A}_{2}+\tilde{A}_{3}+\tilde{A}_{4})^{n}$.
Then, assuming the relative weight of terms depends only on the order of the
vertex expansion, the sum over all spin foams with only these vertices is
given by
$A_{total}=\sum_{n}c_{n}(\tilde{A}_{0}+\tilde{A}_{1}+\tilde{A}_{2}+\tilde{A}_{3}+\tilde{A}_{4})^{n}.$
(11)
For example, if
$||\tilde{A}_{0}+\tilde{A}_{1}+\tilde{A}_{2}+\tilde{A}_{3}+\tilde{A}_{4}||<1$
and $c_{n}=1$ we would obtain a close analogy to mass renormalisation in QFT
$A_{total}=\frac{1}{1-(\tilde{A}_{0}+\tilde{A}_{1}+\tilde{A}_{2}+\tilde{A}_{3}+\tilde{A}_{4})}.$
If this term is structurally very different from the original
$\tilde{A}_{0}+\tilde{A}_{1}+\tilde{A}_{2}+\tilde{A}_{3}+\tilde{A}_{4}$ term,
the results of the first order calculation are meaningless for the behaviour
of the full theory.
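The resummation used here is just the operator geometric series, which converges to $(1-A)^{-1}$ whenever $||A||<1$ and $c_{n}=1$. The toy check below verifies this identity numerically for a random matrix standing in for the (finite-dimensional, truncated) sum of one-vertex operators; it says nothing about the actual norm of the spin-foam operators.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6
A = rng.normal(size=(dim, dim))
A *= 0.5 / np.linalg.norm(A, 2)      # enforce spectral norm 0.5 < 1

# Partial sums of the geometric series sum_n A^n ...
total, power = np.eye(dim), np.eye(dim)
for _ in range(60):
    power = power @ A
    total += power

# ... converge to the closed form (1 - A)^(-1).
closed = np.linalg.inv(np.eye(dim) - A)
print("max deviation:", np.abs(total - closed).max())   # ~machine precision
```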
Of course, the possibility of summing up these operators depends on the
combinatorial factors in front of the sums, as well as possible face
amplitudes that could easily render this sum strongly divergent. Furthermore
it would require us to analyse the sum over spins in the theory for which no
effective methods are available. Such an analysis is therefore unfortunately
beyond the scope of this paper.
### 2.4 The origin of factorisations
We will now consider the factorisation of the amplitude $A_{4}$ discussed in
[1]. Note that while $A_{4}$ factorises the entire amplitude, consisting of
the 5 different terms as well as the identity operator, does not. The argument
of [1] was that the amplitude $A_{4}$ should be understood as a Hawking-Hartle
type no-boundary amplitude truncated to a single state:
$A^{\prime}_{4}=\langle\Psi_{HH}|\phi_{in}\otimes\phi_{out}\rangle,$
for the state $\phi_{in}\otimes\phi_{out}$ in the boundary Hilbert space. It
was then further argued that as the classical constraint for the FRW universe
factorises the state $\Psi_{HH}$ should be of the form
$\Psi^{\prime}_{HH}\otimes\Psi^{\prime}_{HH}$, and the amplitude should
factorise:
$A^{\prime}_{4}=\langle\Psi^{\prime}_{HH}|\phi_{in}\rangle\langle\Psi^{\prime}_{HH}|\phi_{out}\rangle.$
However, this requires that the state encoded in the amplitude $A_{4}$ is a
physical state in the sense that it is annihilated by the constraint. When
viewed from the point of view of group averaging, which heuristically connects
the Hamiltonian and the spin foam picture, first order terms should correspond
to the matrix elements of the Hamiltonian, not eigenstates of the projector on
its $0$ eigenvalue. Thus the factorisation of the constraint does not explain
the factorisation of this contribution to the amplitude at first order.
Note that this factorisation property is very robust. The amplitude of every
2-complex that is connected only through one vertex, that is, that becomes
disconnected upon removing one vertex, is equivalent to that of the
disconnected 2-complex with an additional vertex. The amplitudes for such
disconnected spin foams trivially factorise. Thus we can already see that an
infinite number of terms in the graph and vertex expansion have this
factorisation property.
Furthermore it should be pointed out that the factorisation is not a product
of the asymptotic or semiclassical limit. It is exact for the full amplitude
at this level of the vertex expansion. In particular this means that all
correlation functions between the initial and the final state, including the
graviton propagator, vanish.
From the boundary perspective we do not know the topology of spacetime, only
of its boundary. That means that when we find the amplitude peaked it might be
peaked on a solution to the classical equations on any spacetime with the
boundary considered. A priori it is not clear that the topology of the
2-complexes dominating the peaked amplitude must correspond to the topology of
the manifold on which the classical solution lives; on the other hand, the very
general factorisation property for 2-complexes connected at one vertex
suggests that the amplitudes should correspond to classical solutions living
on disconnected spacetimes. For the spin foam associated to $A_{4}$ the
natural candidate is the union of two balls $B^{4}\cup B^{4}$. In this case
the Hartle-Hawking state will robustly factorise simply for topological
reasons, no matter what approximations are performed on the disjoint
spacetimes.
Another way to understand what type of space time topology the amplitude is
seeing is by noting that the amplitudes we are considering are based on
$\mathrm{SU}(2)$ BF amplitudes [13]. The suitably regularized $\mathrm{SU}(2)$
BF partition function is the integral over the flat connections on the
2-complex. In the case of triangulations this coincides with the integral over
the flat connections on the manifold. Gluing the boundaries of the 2-complex
corresponds to taking its trace. Constructing the operator $A_{4}$ from BF
spin networks instead of EPRL spin networks, we then have
$tr(A^{BF}_{4})=Z^{BF}(S^{4}).$ (12)
That is, by identifying the components of the boundary $S^{3}\cup S^{3}$ we
obtain $S^{4}$ rather than $S^{3}\times S^{1}$, further indicating that the
spacetime topology seen by the amplitude $A_{4}$ is $B^{4}\cup B^{4}$.
We will see this interpretation further corroborated by the higher terms in
the graph expansion we will discuss in the next section.
## 3 The graph expansion
In the graph expansion we are in a considerably better situation, as the
refinement limit Hilbert space has been rigorously constructed [29]. We know
that the unrefined or truncated Hilbert space sits in the refined Hilbert
space as a well defined subspace. In particular if the larger graph only
contains one subgraph isomorphic to the unrefined graph this is the subspace
given by setting the spin quantum numbers on all edges not in that subgraph to
zero.
It had also been a long-standing problem how to find the homogeneous sector of
this Hilbert space, for example to find what the right states corresponding to
the Minkowski vacuum are. One way to attack this question is to attempt to
embed the Hilbert space of loop quantum cosmology into the loop quantum
gravity Hilbert space [30, 31]. This was successfully done by avoiding the no
go theorem of [32] in [33, 34], and by changing the loop quantum cosmology
Hilbert space in [35, 36].
The crucial question for spin foam cosmology is then how the truncation to a
small graph and the restriction to the homogeneous sector interact. In
particular the states considered in the above embeddings generally have
support on arbitrarily large graphs. Truncating to the small graph corresponds
to setting quantum numbers to zero in the full theory. As these quantum
numbers are the quantum numbers of geometry, the small graph truncation
corresponds to having most areas and volumes evaluate to zero. This does not
necessarily correspond to deep Planck scale behaviour, as the degrees of
freedom that are excited can be arbitrarily large, but it is a very
inhomogeneous distribution of quantum numbers throughout the graph.
### 3.1 The 4-simplex graph.
Keeping with the simplicial picture of this paper, we will illustrate this issue
by going to the complete graph on five vertices, $\Gamma^{5}$. This is the
boundary spin network graph of a 4-simplex. In particular we will consider the
analogue of the geometricity inducing amplitude $A_{4}$.
Figure 4: $\Gamma^{5}$, the complete graph on five vertices. The vertices are
labelled by $j,k=1\dots 5$, each edge is labelled by the end vertices and
carries a spin $j_{kl}$, and each vertex carries an intertwiner $\iota_{k}$.
The graph $\Gamma^{5}$ is dual to the smallest proper triangulation of the
3-sphere. The asymptotic geometry of the one vertex decomposable amplitude is
given by the asymptotic analysis in [37, 38, 39, 13]. It consists of two
distinct flat 4-simplices with uncorrelated geometry. This reinforces the
interpretation of the one vertex connected spin foam as having asymptotics
dominated by bulk solutions of the classical theory living on the spacetime
$B^{4}\cup B^{4}$, rather than on the cosmological spacetime
$S^{3}\times[0,1]$.
To approximate the homogeneous sector we would usually choose a triangulation
of the 3-sphere by equilateral tetrahedra. On the other hand we can embed the
dipole graph into $\Gamma^{5}$ by setting the three edges connecting three
vertices to zero, that is
$j_{34}=0,\,j_{45}=0,\,j_{35}=0.$
This implies that $j_{13}=j_{23}$, $j_{14}=j_{24}$, and $j_{15}=j_{25}$ and we
can embed the graphs considered above into $\Gamma^{5}$ by setting
$j_{1i}=j_{i+1}$. We see that the subspace corresponding to the truncated
graph is in fact orthogonal to the homogeneous sector.
In this scenario the geometry of the original graph inherited from the
cylindrically consistent embedding into $\Gamma^{5}$ is that of a very
degenerate and highly curved 4-simplex immersed degenerately into
$\mathbb{R}^{3}\subset\mathbb{R}^{4}$. In particular the integrated extrinsic
curvature of the general $\Gamma^{5}$ geometric asymptotics is given by
$K^{5}=\sum_{i,k\,i<k}\Theta_{ik}\,\gamma j_{ik},$
where $\Theta$ is the exterior dihedral angle of the embedding of the
4-simplex in $\mathbb{R}^{4}$. This extrinsic curvature gives the phase of the
asymptotics if the boundary is chosen to be the Regge state [37]. Conceptually
it corresponds to the integral of the action on the solution on which we are
peaked. As the action in the interior is zero, this is the Regge analogue of
the York-Gibbons-Hawking boundary term, that is, the integrated extrinsic
curvature. Thus the appearance of this phase is considered the clearest
confirmation that these amplitudes code a discrete gravity theory.
For the truncated graph the corresponding term will be
$K_{4}=\pi\,\sum_{i=2}^{5}\gamma j_{1i},$
as every dihedral angle will go to $\pi$ or to $0$. As the sum of spins at
each vertex needs to be integer the phase factor in this case will simply be a
sign. We can see from the form of the integrated extrinsic curvature that the
dynamics of the full theory treat this term as a highly degenerate
configuration with no 4-volume. Remember that we considered only the non-
propagating, geometricity inducing amplitude here. For more complex amplitudes
we expect this problem to get worse as the inhomogeneities created by the
cylindrical embedding will start propagating. Thus it appears clear that
reducing to the homogeneous sector after truncating the boundary, while
keeping the full dynamics in the sense of cylindrically consistent embeddings,
gives physically nonsensical results, as the truncation will interact in
essentially uncontrolled ways with the degrees of freedom we intend to
capture.
Ignoring the simplicial picture we can still see this discrepancy. The
extrinsic curvature of a homogeneous 3-sphere in $\mathbb{R}^{4}$ with equator
of area $J=\sum_{i=2}^{5}\gamma j_{1i}$ is given by
$K_{hom}=2\pi J,$
that is, it is twice as large as the one seen in the phase of the amplitude.
### 3.2 Truncated dynamics
We have seen that the truncated state, when interpreted in the sense of
cylindrical consistency, will look highly inhomogeneous to the dynamics of the
theory. On the other hand there are natural notions of homogeneous states on
the truncated theory e.g. in [7, 40]. To take advantage of these it is
necessary to consistently truncate the dynamics together with the state space.
It is a priori unclear what this should mean for the sum over spin foams. A
naive possibility would be to simply insist that the spin foam has a slicing
that looks like the truncated boundary graph at all times, in which case only
the identity operator and $A_{5}$ would contribute. This would also provide a
restriction on the topology of the neighbourhood of the vertex necessary to
render the one vertex expansion meaningful.
In the context of the Hamiltonian theory we can arrive at such a truncation by
insisting that the matrix elements of the Hamiltonian for all larger graphs
should vanish. It would be interesting to study the expansion of such a
truncated Hamiltonian into spin foams, in particular whether it would be
possible to express it in terms of the five basic amplitudes defined above.
Such Hamiltonians for truncated systems, in particular with an eye to the
homogeneous sector, were discussed recently in [41, 42]. There the family of
dipole graphs with $n$ legs connecting the two vertices is considered. The
homogeneous states at each level are given by invariance under the truncated
area preserving diffeomorphisms. The homogeneous subspaces are isomorphic for
arbitrarily high $n$, but the isomorphism is not achieved by considering the
cylindrically consistent embeddings of the smaller Hilbert spaces into the
larger ones. For every particular truncation $n$ [41, 42] define a Hamiltonian
consistent with the homogeneous sector. Interestingly in this case the
restriction of the Hamiltonian to a subspace corresponding to a truncation
$n^{\prime}<n$ commutes with the action of the part of the symmetry group that
leaves the subspace invariant. In this way this is an example of a system
where symmetry reduction and cylindrical consistency commute, and as a
consequence the large $n$ limit of the symmetry reduced sector is trivial.
Similarly in [40] the discretised Hamiltonian of the truncated classical
theory is considered. There is no reason to expect the dynamics induced by the
Hamiltonian on a small triangulation to coincide with the dynamics induced by
the Hamiltonian of a larger triangulation on the subspace corresponding to the
further truncation to the small triangulation.
To derive the effective theory of the truncated state space from the dynamics
of the full theory is already a difficult problem in the classical theory, see
for example [43]. Thus whether the ad hoc truncations would be indicative of
the behaviour of the full theory would be hard to detect. However, as in the case
of the vertex expansion, one can study how the approximation behaves when
switching on more degrees of freedom. For non-loop quantum cosmology this was
already suggested and studied in [44]. To do so it will be necessary to
understand how to embed the models of various complexity into each other in an
appropriate sense.
## 4 Conclusions
We pointed out that even at the one vertex level arbitrarily complex
amplitudes exist. Thus without further restrictions on the topology of the
neighbourhood of the vertex the restriction to one vertex is too weak to be
studied effectively. We then computed the five simplest terms of the first
order in the vertex expansion, characterised by the minimal number of vertices
and edges in the KKL network. We found an interesting trade off between
geometricity and geometry propagation. Amplitudes that enforced a continuous
geometry on the boundary did not propagate and vice versa. We discussed
different coherence conditions necessary for the proposed expansions to work.
We then showed that the factorisation property of the amplitude $A_{4}$ can be
understood as a topological factorisation.
From our analysis it thus seems premature to claim that the lowest order terms
show that there is a regime in which the theory reproduces cosmological
spacetimes. Instead we find that the original calculation has to be
reinterpreted as a calculation of the first topology changing spacetime. The
topology of the type $B^{4}\cup B^{4}$ is suggested by the general
factorisation properties of one vertex connected spin foams, and corroborated
by the calculation of the corresponding term in the next order of the graph
expansion.
We then considered the meaning of the graph expansion in the sense of
cylindrical consistency, and in particular its interaction with restricting to
the homogeneous sector of the theory. We demonstrated the tension between the
two by considering the next higher term in the graph expansion, the complete
graph on five vertices, $\Gamma^{5}$. We concluded that in order to be able to
consistently go to the homogeneous sector of the truncated theory, the
dynamics of the theory need to be truncated, too. This prevents
propagation to higher terms in the graph expansion, to which the truncated
states look highly inhomogeneous. Such a truncation could help restrict the
topology in the neighbourhood of the vertex, thus rendering the one-vertex
expansion meaningful. The choice of the original paper could be seen as such a
truncation. We then pointed out several proposals for truncated dynamics in
Hamiltonian approaches to the dynamics. We pointed out that similar
consistency conditions as for the vertex expansion have to be considered for
the graph expansion as well. These can be seen as analogous to embeddings
studied in the context of traditional quantum cosmology.
## 5 Acknowledgements
I would like to thank B. Bahr, M. Martín-Benito, E. Bianchi, B. Dittrich, R.
Dowdall, C. Rovelli, S. Steinhaus, S. Speziale and F. Vidotto for discussions
and comments on a draft of this paper.
## References
* [1] E. Bianchi, C. Rovelli and F. Vidotto, Phys. Rev. D82, 084035 (2010), [1003.3483].
* [2] F. Vidotto, 1011.4705.
* [3] W. Kaminski, M. Kisielowski and J. Lewandowski, 0909.0939.
* [4] J. Engle, E. Livine, R. Pereira and C. Rovelli, Nucl. Phys. B799, 136 (2008), [0711.0146].
* [5] J. Engle, R. Pereira and C. Rovelli, Nucl. Phys. B798, 251 (2008), [0708.1236].
* [6] J. Engle, R. Pereira and C. Rovelli, Phys. Rev. Lett. 99, 161301 (2007), [0705.2388].
* [7] C. Rovelli and F. Vidotto, Class. Quant. Grav. 25, 225024 (2008), [0805.4585].
* [8] C. Rovelli and M. Smerlak, 1010.5437.
* [9] A. Hatcher, Algebraic Topology (Cambridge University Press, 2002).
* [10] B. Bahr, Class. Quant. Grav. 28, 045002 (2011), [1006.0700].
* [11] E. R. Livine and S. Speziale, Phys. Rev. D76, 084028 (2007), [0705.0674].
* [12] F. Hellmann, State Sums and Geometry, PhD thesis, 2010, 1102.1688.
* [13] J. W. Barrett, W. J. Fairbairn and F. Hellmann, Int. J. Mod. Phys. A25, 2897 (2010), [0912.4907].
* [14] W. Kaminski, 1010.5384.
* [15] B. Bahr, F. Hellmann, W. Kaminski, M. Kisielowski and J. Lewandowski, 1010.4787.
* [16] H. Ooguri, Mod.Phys.Lett. A7, 2799 (1992), [hep-th/9205090], Dedicated to Huzihiro Araki and Noboru Nakanishi on occasion of their 60th birthdays.
* [17] M. Fukuma, S. Hosono and H. Kawai, Commun. Math. Phys. 161, 157 (1994), [hep-th/9212154].
* [18] J. W. Barrett and B. W. Westbury, Trans. Am. Math. Soc. 348, 3997 (1996), [hep-th/9311155].
* [19] V. G. Turaev, De Gruyter Stud. Math. 18, 1 (1994).
* [20] D. N. Yetter, (1993).
* [21] C. P. Rourke and B. J. Sanderson, Introduction to piecewise-linear topology (Springer-Verlag, Berlin, New York,, 1972).
* [22] S. Durr et al., Science 322, 1224 (2008), [0906.3599].
* [23] B. Bahr and B. Dittrich, Class. Quant. Grav. 26, 225011 (2009), [0905.1670].
* [24] B. Bahr and B. Dittrich, 0909.5688.
* [25] B. Bahr and B. Dittrich, Phys. Rev. D80, 124030 (2009), [0907.4323].
* [26] B. Bahr, B. Dittrich and S. He, 1011.3667.
* [27] B. Bahr, B. Dittrich and S. Steinhaus, 1101.4775.
* [28] B. Bahr, Cylindrical consistency for spin foams, to appear.
* [29] A. Ashtekar and J. Lewandowski, gr-qc/9311010.
* [30] M. Bojowald and H. A. Kastrup, gr-qc/0101061.
* [31] M. Bojowald and H. A. Kastrup, Class. Quant. Grav. 17, 3009 (2000), [hep-th/9907042].
* [32] J. Brunnemann and C. Fleischhack, 0709.1621.
* [33] J. Engle, Class. Quant. Grav. 27, 035003 (2010), [0812.1270].
* [34] T. A. Koslowski, 0711.1098.
* [35] J. Brunnemann and T. A. Koslowski, 1012.0053.
* [36] C. Fleischhack, 1010.0449.
* [37] J. W. Barrett, R. J. Dowdall, W. J. Fairbairn, H. Gomes and F. Hellmann, 0909.1882.
* [38] J. W. Barrett, R. J. Dowdall, W. J. Fairbairn, H. Gomes and F. Hellmann, J. Math. Phys. 50, 112504 (2009), [0902.1170].
* [39] J. W. Barrett, R. J. Dowdall, W. J. Fairbairn, F. Hellmann and R. Pereira, Class. Quant. Grav. 27, 165009 (2010), [0907.2440].
* [40] M. V. Battisti, A. Marciano and C. Rovelli, Phys. Rev. D81, 064019 (2010), [0911.2653].
* [41] E. F. Borja, J. Diaz-Polo, I. Garay and E. R. Livine, Class. Quant. Grav. 27, 235010 (2010), [1006.2451].
* [42] E. F. Borja, J. Diaz-Polo, I. Garay and E. R. Livine, 1012.3832.
* [43] T. Buchert, 1103.2016.
* [44] K. V. Kuchar and M. P. Ryan, Phys. Rev. D40, 3982 (1989).
|
arxiv-papers
| 2011-05-06T17:38:40 |
2024-09-04T02:49:18.641841
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Frank Hellmann",
"submitter": "Frank Hellmann",
"url": "https://arxiv.org/abs/1105.1334"
}
|
1105.1347
|
# Modeling queuing dynamics of TCP: a simple model and its empirical
validation.
D. Genin, T. Nakassis
Email: dgenin@nist.gov, tnakasis@nist.gov
###### Abstract
Understanding queuing dynamics of TCP is important for correct router buffer
sizing as well as for optimizing the performance of the TCP protocol itself.
However, modeling of buffer content dynamics under TCP has received relatively
little attention given its importance. Commonly used queuing models are based
on overly simplistic assumptions about the packet arrival process. As a
consequence, there are no quantitatively accurate closed loop TCP models
capable of predicting performance even for a single link shared by multiple
flows. Our present paper aims to close this gap by proposing a simple TCP
queuing model, which is based on experimental observations and validated by
extensive packet level simulations.
## I Introduction
Queuing dynamics of TCP packet traffic has long been a research topic of great
interest. Understanding queuing dynamics of TCP is important for correct
router buffer sizing as well as for optimizing the performance of the TCP
protocol itself. However, modeling of buffer content dynamics under TCP has
received relatively little attention given its importance. Commonly used
queuing models, on which packet loss models of closed loop TCP models are
based, are often ad hoc and are based on overly simplistic assumptions, such
as Poisson arrivals of buffer overflow moments and Poisson packet arrivals. As
a consequence, there are no quantitatively accurate closed loop models (i.e.
mathematical models parametrized solely by the a priori known network
parameters, such as router capacity, propagation delay and buffer size)
capable of predicting performance even for a single link shared by multiple
flows. This is the gap that our present paper aims to fill by proposing a
simple TCP queuing model, which is based on experimental observations and
validated by extensive packet level simulations.
Before describing the proposed model we give a brief survey of the queuing and
packet loss models that have appeared in TCP literature. Mathematical models
of TCP require some way of modeling packet loss in order to close the control
loop of the TCP congestion avoidance mechanism. Since packet losses occur most
often at buffer overflow (at least in wired networks, which today constitute
the bulk of the Internet), it is natural to take packet loss probability as the
blocking probability in a single server queue. Thus there is a close
connection between packet loss models and queuing models.
There are two distinct approaches to modeling packet loss: stateless and
stateful. The former assumes that in equilibrium queue length sampled at the
times of packet arrivals behaves as a sequence of i.i.d. random variables.
Correctness of this assumption has been experimentally tested for cross-WAN
and -Internet TCP flows but it appears to be wrong for LAN flows [1]. On the
other hand, the stateful approach assumes that queue length has long range
autocorrelation and so the queue state needs to be a part of the whole TCP
model. Packet loss probability is then expressed as a function or a random
process (depending on the specific model) which depends on the length of the
queue.
For the stateless models the queue length distribution is typically assumed to
be one of the standard queuing theory models – M/M/1, M/M/1/B or M/D/1/B. The
advantage of these models is the ready availability of explicit analytical
expressions for most quantities of interest, including the blocking probability.
Models based on M/M/1/B queues have been extensively used in publications on
fluid approximation, which are numerous, e.g. [10],[27],[25],[21],[23],[18],
as well as in some papers on the problem of router buffer sizing[9],[22],[11].
It has long been known, however, that these models fail to adequately capture
the complex statistical structure of TCP traffic [20],[13],[16] and produce
significantly inaccurate results [12]. We discuss the reason for this failure
below. Attempts have been made to modify these models, for example, by adding
flow synchronization effects, in order to improve their fidelity but with
limited success [2],[4].
The stateful queuing models have been especially popular in stochastic TCP
modeling literature [3],[5],[6] but also have appeared in fluid approximation
models of TCP with router implementing Random Early Drop[14],[26],[24]. These
models explicitly include queue length as a variable governed by a coupled
(stochastic) differential equation. Packet loss is then modeled as a
probability function or a random, usually Poisson, process dependent on the
current queue length. While few of the resulting models have been
experimentally verified, partially due to their mathematical complexity, they
are unlikely to produce accurate results because they are based on overly
optimistic assumptions about the uniformity of TCP transmission rate, similar
to the M/M/1/B models.
An interesting exception to this rough classification is the model of Dumas,
Guillemin and Robert in which the packet loss is assumed to be a Bernoulli
process on the RTT (round trip time) time scale but has state given by the
order of transmission on the sub-RTT scale [8]. (We use round trip propagation
delay and round trip time interchangeably because processing and buffering
delays are largely insignificant in the setting we consider.) Their
results, however, give asymptotic formulas for the congestion window size
distribution in the limit of zero packet loss for a fixed packet loss
probability function, i.e. it is an open loop model, which ignores the problem
of relating TCP and network variables to packet loss.
We must also, separately, mention the work of Wischik on router buffer sizing,
which attempts to account for the effects of packet burstiness on queue length
with a model based on an M/M/1/B queue[28]. While burstiness is indeed the
crux of the issue, the results of the study have not been experimentally
tested, to the best of our knowledge, and the degree of their qualitative and
quantitative accuracy remains unclear.
In summary, both the stateless and the stateful queuing models are based on
fundamentally flawed assumptions about the uniformity of the TCP packet
emission process. This observation has been made by a number of authors
[15],[13],[12]. Most explicitly the issue has been described by Baccelli and
Hong in [4]. There the authors describe a large scale fluid approximation
TCP/IP network simulator based on the M/M/1/B queuing model with a
modification accounting for increased packet loss due to flow synchronization.
The simulator performed well when the access link speed was not too high,
however, when it became larger than a certain threshold the simulator
significantly overestimated the throughput because “with high speed local
links, packets are very likely to be concentrated at the beginning of RTTs.
Such a packet concentration creates losses even if the input rate averaged
over one RTT is much smaller than the capacity of the shared resource.”[4]
The origin of the burstiness observed by Baccelli and Hong, and others is
easily traced to the operation of the sliding window algorithm used by TCP.
This algorithm releases packets for transmission only in response to
acknowledgments received for previously sent packets. The result of this
behavior is that packets flow back and forth in formation, creating bursts
followed by silences, during which the algorithm awaits the acknowledgments of
the transmitted packets. Instead of sliding smoothly along the queue of
packets ready to be transmitted, the sliding window instead moves in fits and
starts. We note that the sliding window algorithm is implemented in all
current TCP variants. Therefore, we can be sure that this transmission
burstiness affects not only the long standard TCP-Reno but every
implementation of TCP in existence. The only modification that could change
this would be transmission pacing, but it may have issues of its
own[28],[Agg].
The present paper is meant to fill a certain gap in the program we envision as
leading to high-fidelity TCP networking models. It has long been assumed that
mean value models, such as fluid approximation, can give an accurate estimate
of the steady state, and even dynamics of, TCP performance. Recent research,
however, indicates that statistics of the TCP congestion window size and of
the transmission process contribute significantly even when the number of
flows is very large, and that more sophisticated models are necessary if
quantitatively, as well as qualitatively, accurate results are desired. The
ingredients necessary to achieve this goal, in our view, are
i) an accurate model for the congestion window size distribution as a function
of the network parameters and packet loss;
ii) an accurate queuing model that can be used to deduce the packet loss
probability in terms of the congestion window size distribution and parameters
of the network.
These correspond to the two halves of the TCP congestion avoidance feedback
loop. Significant progress has been made by a number of authors in tackling
the first item. Closed, though, still rather mathematically complex,
expressions have been obtained for the stationary distribution of the
congestion window sizes of TCP flows under the restriction of a constant RTT
[3],[8],[6]. Our present work is aimed at providing a partial answer to the
second problem.
The on-off fluid source model we propose here is not new, although some
adaptation to the specifics of TCP was necessary. We describe the model in
detail in Section II, but we highlight here the observations about the
equilibrium TCP queuing behavior that lead to it:
1) in equilibrium TCP sources can be treated as statistically independent
(synchronization between flows does occur for certain network parameter
combinations in our study but is atypical, see Figure 3);
2) in equilibrium a TCP source can be treated as a stationary random fluid
on-off source.
The first observation is perhaps not very surprising given that the flows are
coupled only by packet loss at the queue, which at equilibrium typically
behaves like a stationary random process. Observation (2), however, is
somewhat less obvious, as it says that the exact congestion window
trajectories of individual flows do not matter to the router queue length
distribution. That is, from the point of view of the buffer, a collection of
ordinary TCP sources and ones whose congestion windows change randomly, but
follow the same distribution as the former, are equivalent (provided the
number of sources is sufficiently large).
These observations permit construction of a simple yet accurate TCP queuing
model. The basic structure of the model follows that of Kosten [17] and Anik,
Mitra and Sondhi [7]. The main difference with the former is that the “on” and
“off” periods are not exponentially distributed, which makes exact solution
for the stationary queue length distribution unlikely. Still, there is hope
that some analytical headway can be made with more general distributions based
on the work of Palmowski and Rolski[19]. The proposed model also has the
advantage of providing a unified framework for treating large as well as small
buffer regimes, which so far have usually been treated as distinct asymptotic
regimes requiring separate calculations.
We validate the Kosten-Anik-Mitra-Sondhi (KAMS) model by comparing the
stationary queue length distribution obtained from _ns2_ simulations with that
obtained from numerical simulations of the model for a large number of network
parameter combinations. Since we measure the distance between whole queue
length probability distributions, which are signatures of the underlying
queuing processes, the near perfect observed match between distributions is an
indication of the identity of the queuing processes that produced them.
This work is a refinement of our previous paper [12], where we first applied
KAMS to TCP queuing and provided experimental evidence in its support.
Although our work is independent, we have recently learned that several other
authors have also arrived at related models, in our view, strengthening the
case for its correctness and utility [15],[13].
The rest of the paper is arranged as follows. In Section II we layout the
details of the KAMS model and its adaptation to TCP. Section III describes
details of the validation approach. Results of validation are presented in
Section IV. Section V summarizes our findings and discusses directions for
future work.
## II KAMS TCP Queuing Model
The model we propose for TCP queuing is derived from the multi-input buffer
queuing model originally studied by Kosten [17] and later by Anik, Mitra and
Sondhi [7] (and many others since). The main advantage of KAMS is that it is
able to incorporate TCP burstiness, essential to accurate queue modeling, with
a minimum of mathematical complexity.
The basic KAMS model is, in the topological sense, a network with a single
server and a fixed number of inputs. This admittedly simple setup is a
fundamental building block in and stepping stone to the study of more
complicated TCP networks. Even this topologically simple model can be
practically useful for modeling bottleneck routers in the broadband edge
network.
The queue input process in KAMS is modeled as a superposition of some number,
$N$, of statistically independent random on-off fluid sources. Each source has
two states — “on”, when it is transmitting, and “off”, when it is not. In the
on state, every source emits data with some constant rate $\nu$.
This two level fluid source model is a natural fit for approximating bursty
TCP transmission behavior. The on periods correspond to packet-burst
transmissions, when the TCP source transmits the packets one after the other
in quick succession, and the off periods to silences as the source awaits
acknowledgment of the transmitted packets.
Data streams from all sources merge in the server queue so that when $k$
sources are active the total data inflow rate is $k\nu$. The server processes
incoming data at a constant rate $C$. When $k\nu>C$ the arrived but
unprocessed data is stored in a buffer of size $B$. Any data arriving after
the buffer becomes full is discarded.
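As a minimal illustration of this dynamics (a sketch of ours, not the code used in this study; the function name and arguments are assumed), the fluid queue level between two consecutive source state changes can be advanced as follows:

```python
def advance_queue(q, k, nu, C, B, dt):
    """Advance the fluid queue level q over a time interval dt during which
    exactly k sources are in the "on" state.

    The net drift is k*nu - C; the level is clipped to [0, B], and any fluid
    arriving while the buffer is full is (implicitly) discarded.
    """
    return min(B, max(0.0, q + (k * nu - C) * dt))
```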
The equivalent computer network looks like a double fan (Fig. 1). The TCP
sources on the left transmit data to their respective receivers on the right
over the bottleneck link in the middle. To make this basic architecture as
simple as possible we will also assume that round trip propagation delays
between the source-receiver pairs are roughly equal. This is the archetypal
bottleneck network.
Figure 1: Basic network topology.
Most of the KAMS parameters match up naturally with this model bottleneck
network’s parameters, so $C$ is the bottleneck router capacity, $B$ is the
router buffer size and $N$ the number of TCP sources. The three remaining
parameters — $\nu$, and the random processes determining the on and off period
durations require specific tuning to match KAMS behavior to the model TCP
network.
In the original KAMS model $\nu$ is a free parameter, permitted to vary from
source to source. For a TCP flow the rate of transmission is determined by the
parameters of the network, provided a source’s rate is not itself the limiting
factor. The timing of packet emissions is determined by the TCP sliding window
algorithm. Since a new packet is sent only when an acknowledgment for the
previous packet is received, the speed of the access link, which is usually
the slowest segment of the network, determines the interval between
consecutive packets emissions. We will assume that the maximum speed of a TCP
source is not less than the speed of its access link. Thus $\nu$ is equal to
the speed of the access link. In practice, this assumption is vacuous, since
the speed of the access link can be assumed to be equal to the speed of the
source if the source is slower.
Duration of a given on period is determined by the size of the packet burst.
Further, the number of packets in a burst, due to idiosyncrasies of the
sliding window algorithm, is equal to the size of the sliding window, which we
will assume is equal to the congestion window. Indeed, analysis of _ns2_
packet traces confirms that most packets are transmitted in bursts equal in
size to the congestion window [12]. Moreover, packets in a burst are
transmitted back to back at the speed of the slowest router in the path, i.e.
$\nu$. Thus a duration of a given on period is simply the congestion window
size at the time of its initiation divided by $\nu$.
It turns out that due to the multiplexing at the router it is not necessary to
have the on periods of a given source follow the additive increase
multiplicative decrease (AIMD) of the TCP congestion avoidance algorithm.
Provided the sources activate independently of each other, the on periods of a
given source can be modeled as a sequence of i.i.d. random variables with the
same distribution as a deterministic AIMD source. From the point of view of
the router these two input processes appear to be indistinguishable. The
problem of determining the on period duration thus shifts to identifying the
stationary congestion window size distribution.
Several papers have been published describing the stationary congestion window
size distribution for a single source as well as for a large number of sources
sharing a bottleneck link. The resulting mathematical expressions are,
unfortunately, highly complex, making it difficult to check their identity or
difference. Their complexity also makes it very difficult to implement
realizations of the corresponding random variables. In practice, a good
approximation is often sufficient for obtaining very accurate results. One
characteristic that all theoretically computed distributions share is the
Gaussian tail. We, therefore, postulate that the congestion window size
distribution is approximated by a truncated normal distribution, restricted to
the range $[0,\infty)$. Lacking a ready formula relating the parameters of
this distribution to the network parameters of the model, we estimated them
instead from experimental data. Details of the estimation procedure are
explained in Section III. We emphasize that this is not a shortcoming of the
proposed model, since it is only meant to model the steady state queuing
statistics and _not_ the TCP congestion avoidance algorithm.
Finally, the off period is the silence between consecutive bursts of packets.
Since at most a congestion window’s worth of packets can be sent in a single
round trip and since this is exactly the burst size observed it follows that
the duration of an off period is equal to RTT minus the duration of the last
burst. However, because we will be concerned with high speed routers the burst
duration will be negligibly small compared to the RTT. Since the same applies
to the buffering delay, the off periods can be assumed to be equal to the
round trip propagation delay, identical and constant for all sources.
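Putting the last few paragraphs together, a single KAMS source can be sampled roughly as in the sketch below. The truncated-normal parametrisation via scipy and the helper name are our choices for illustration, not part of the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def draw_on_off(mu, sigma, nu, rtt):
    """Draw one (on, off) period pair for a single KAMS source.

    mu, sigma: estimated mean and standard deviation of the congestion
               window size (packets); nu: access-link rate (packets/s);
               rtt: round trip propagation delay (s).
    """
    # truncnorm takes standardized bounds (lower - loc)/scale, (upper - loc)/scale;
    # here the window size is restricted to [0, inf).
    w = truncnorm.rvs(-mu / sigma, np.inf, loc=mu, scale=sigma)
    t_on = w / nu      # time to emit a burst of w packets back to back
    t_off = rtt        # silence until the acknowledgments return
    return t_on, t_off
```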
## III Methodology
To validate the constructed model we compared the queue length distributions
obtained from numerical simulations of the KAMS model with the same
distributions computed from _ns2_ packet level simulations for a range of
network parameter combinations. Below we describe the parameter values used in
the validation study and the rationale behind the specific choices.
The number of flows $N$ in the validation study was set to 1000. Firstly, this
is a large number of flows that might realistically be observed at a router at
the periphery of the Internet, where most bottlenecks lie. Secondly, at
$N=1000$ the model is close enough to the asymptotic limit that increasing the
number of flows further does not significantly affect the queue length
distribution. Router capacity was fixed at $C=1$ Gbps. Access link speed was
set at 100 Mbps, which gives $\nu=100$ Mbps.
The remaining scalar parameters — router buffer size and round trip
propagation delay — were varied to determine how accurately the KAMS model
tracks packet level simulations. These parameters are also known to have the
strongest influence on TCP behavior and so we were most interested in the
fidelity of the KAMS model with respect to them.
Router buffer size was varied in steps of 50 pkts from 50 pkts to 300 pkts.
Preliminary experiments with _ns2_ suggested that increasing buffer size
beyond 300 pkts does not significantly affect packet loss and so 300 pkts was
chosen as an upper limit on the buffer size. Conversely, for buffer sizes
below 50 pkts packet loss increases dramatically, which makes it a reasonable
lower bound.
The round trip propagation delay was varied between 50 ms and 300 ms in
increments of 50 ms. This range of propagation delays corresponds
approximately to wired networks varying in diameter from a LAN to the global
Internet.
Finally, to fix the on period duration distribution it was necessary to
determine the parameters of the truncated normal distribution, which we
postulated approximates the congestion window size distribution. Analyzing
_ns2_ experimental data we found that the congestion window size distribution
is, indeed, very well approximated by a truncated normal distribution. The
mean of the distribution was taken to be the mode of the empirical
distribution, computed as the average of the four most likely values. The
variance was then estimated by performing a least squares fit of the truncated
normal distribution with the predetermined mean. The fit was performed on the
logarithm of the data, which gives a greater weight to the accurate fitting of
the tail of the distribution. In this way the mean and variance of the
congestion window size distribution were computed for each of the 36 pairs of
propagation delay and buffer size to be used for running the KAMS model
simulation with corresponding parameters.
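A rough reconstruction of this estimation procedure is sketched below; the binning, the small numerical constants and the bounded scalar optimizer are our assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize_scalar

def estimate_truncnorm(cwnd):
    """Estimate (mu, sigma) of a [0, inf)-truncated normal from congestion
    window samples: mu is fixed at the empirical mode (average of the four
    most likely integer values); sigma is fit by least squares on the log of
    the empirical histogram, which up-weights the tail."""
    cwnd = np.asarray(cwnd, dtype=float)
    values, counts = np.unique(np.round(cwnd).astype(int), return_counts=True)
    mu = float(values[np.argsort(counts)[-4:]].mean())

    hist, edges = np.histogram(cwnd, bins=50, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0

    def sse(sigma):
        pdf = truncnorm.pdf(centers[mask], -mu / sigma, np.inf, loc=mu, scale=sigma)
        return np.sum((np.log(pdf + 1e-300) - np.log(hist[mask])) ** 2)

    sigma = minimize_scalar(sse, bounds=(1e-3, 5 * cwnd.std()), method="bounded").x
    return mu, sigma
```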
To summarize, the KAMS model was tested with $N=1000$ flows, $C=1$Gbps,
$\nu=100$Mbps, router buffer size ranging from 50 pkts to 300 pkts in steps of
50 pkts and RTT ranging from 50 ms to 300 ms in steps of 50ms, giving 36 test
points.
Validation was performed by running simulations of _ns2_ and KAMS for each of
the 36 test points for 600 simulated seconds. In _ns2_ simulations queue
length was sampled once per round trip. The KAMS simulation sampled queue
length at the moments when sources changed state. The cumulative queue length
distributions were computed by binning the queue length data into consecutive
integer bins. Only data from the final 80% of the simulated time interval was
used to erase the effects of the transient phase.
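Concretely, the post-processing described above amounts to something like the following (our sketch; the function name and the integer binning convention are assumed):

```python
import numpy as np

def queue_cdf(samples, buffer_size):
    """Cumulative queue length distribution: drop the first 20% of samples
    (transient), bin the rest into consecutive integer bins 0..buffer_size."""
    samples = np.asarray(samples)
    steady = samples[len(samples) // 5:]
    counts, _ = np.histogram(steady, bins=np.arange(buffer_size + 2))
    return np.cumsum(counts / counts.sum())
```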
## IV Results
Results of the comparison between KAMS and _ns2_ simulations are summarized in
Figure 2. The figure contains the contour plot of normalized root mean square
error (NRMSE) in KAMS cumulative queue length distribution relative to _ns2_
over queue lengths greater than 5 packets. We chose to drop the distribution
values for the queue lengths between 0 and 5 because they are of little
practical importance but contribute disproportionately to the error. Including
the discarded values does increase NRMSE by about 5%. The reason for the
disproportionately high contribution is that near the buffer boundaries, i.e.
near 0 and $B$, the accuracy of the fluid approximation breaks down. This is
because KAMS stationary probability distribution becomes singular at the
buffer boundaries, giving rise to jump discontinuities in the cumulative
distribution. On the other hand, the _ns2_ cumulative queue length
distribution, while rising sharply near the boundaries, remains continuous.
The result is a very large relative error in the boundary regions. This
becomes a problem especially when attempting to estimate the buffer overflow
probability.
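The error metric can be evaluated along the following lines. We normalize by the range of the ns2 reference values; the exact normalization used by the authors is not stated, so this choice is an assumption.

```python
import numpy as np

def nrmse(cdf_kams, cdf_ns2, min_queue=5):
    """RMS error between the two cumulative distributions over queue lengths
    greater than min_queue, normalized by the range of the ns2 reference."""
    k = np.asarray(cdf_kams)[min_queue + 1:]
    n = np.asarray(cdf_ns2)[min_queue + 1:]
    return np.sqrt(np.mean((k - n) ** 2)) / (n.max() - n.min())
```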
Figure 2: Contour plot of NRMSE in KAMS cumulative queue length distribution.
As can be seen from Figure 2 the KAMS model exhibits a remarkably high
accuracy — better than 5% NRMSE — for all but two parameter combinations with
round trip propagation delay $\geq$100 ms. (Experiments with $\nu=C$ produce
similar results which we do not present here for the sake of brevity.)
An interesting observation emerges from the analysis of failure of KAMS in the
region RTT$<100$ ms. _Ns2_ packet loss time series data indicate that for
RTT=50 ms there is significant packet loss synchronization between flows.
Figure 3 attempts to capture the degree of synchronization of aggregate packet
loss with one number. The function, whose contours appear in the figure, is a
simple measure of “spikiness” of the Fourier transform of the aggregate packet
loss time series. We defined it as
$\frac{\max_{i>0}|Re(\omega_{i})|}{\frac{1}{M}\sum_{i>0}|Re(\omega_{i})|}$ (1)
where $\omega_{i}$ is the $i$th Fourier coefficient of the aggregate packet
loss time series and $M$ is the length of the data series. The largest values,
occurring for RTT=50 ms, correspond to spectral densities with pronounced
peaks arising from strong periodicity in the packet loss time series. This
agrees with observations made in [1], where the authors conclude that for
long RTTs packet losses are well approximated by an i.i.d. random process,
whereas for short RTTs there is a significant long-range time correlation
between packet losses.
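Equation (1) translates directly into a few lines of code (a sketch; the function name is ours):

```python
import numpy as np

def spikiness(loss_series):
    """Spikiness measure of Eq. (1): the largest non-DC |Re(omega_i)| relative
    to the sum of |Re(omega_i)| over i > 0 divided by the series length M."""
    x = np.asarray(loss_series, dtype=float)
    re = np.abs(np.fft.fft(x).real[1:])   # discard the i = 0 (DC) coefficient
    return re.max() / (re.sum() / len(x))
```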
Counterintuitively, KAMS errs on the side of higher queue lengths and higher
packet losses, which is the opposite of what we expected to see when buffer
overflow events become synchronized. Synchronization is still a poorly
understood phenomenon and why it occurs for short RTTs, when the lag in the
congestion avoidance control loop is small, rather than for long RTTs is a
topic for future research.
Figure 3: Contour plot of the degree of synchronization of aggregate packet
loss. Higher values correspond to more synchronization.
Ultimately, the point of TCP queuing models is to provide a formula
approximating the buffer overflow probability and, hopefully, observed packet
loss probability. Hence, we consider next the accuracy of KAMS full buffer
probability. As pointed out above, KAMS performs poorly near the buffer
boundaries. The relative error in the full buffer probability is between 600
and 1400%. However, the error is nearly constant across most of the test
parameter range, allowing us to introduce a scaling correction factor. Figure
4 shows the contour plot of the multiplicative error in KAMS full buffer
probability data after rescaling by the correction factor. The correction
factor was chosen equal to the average of the multiplicative error for
RTT$\geq 100$ ms, to avoid skewing the mean by the abnormally high values from
the parameter combinations exhibiting synchronization. As can be seen from the
plot, for most test points the multiplicative error drops below 20%. Since in
the TCP throughput formula packet loss probability appears under the square
root, the error in estimated throughput would be smaller still, on the order
of 10%.
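The rescaling step can be summarized as in the sketch below, where we read "multiplicative error" as the ratio of the KAMS to the ns2 full buffer probability; that reading, and the helper name, are assumptions on our part.

```python
import numpy as np

def corrected_full_buffer_prob(p_kams, p_ns2, rtts, rtt_cut=0.1):
    """Rescale KAMS full-buffer probabilities by a single correction factor:
    the mean KAMS/ns2 ratio over test points with RTT >= rtt_cut seconds."""
    p_kams, p_ns2, rtts = map(np.asarray, (p_kams, p_ns2, rtts))
    keep = rtts >= rtt_cut
    factor = np.mean(p_kams[keep] / p_ns2[keep])
    return p_kams / factor, factor
```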
Figure 4: Multiplicative error in corrected full buffer probability of KAMS.
We note that the _ns2_ observed full buffer probability does not equal the
observed mean packet loss rate, although, in the absence of synchronization,
they are roughly proportional with a coefficient of about 0.6. Understanding
of the relationship between full buffer probability and packet loss requires
further research.
We also note that the KAMS model is highly sensitive to the shapes of the on
and off period distributions. Initially, we tried using exponential
distributions for both since a closed formula already exists for the
stationary queue length distribution in this setting. The results were
considerably worse with NRMSE running as high as 20% for many test points.
Even using a half-normal distribution (i.e. a truncated normal distribution
with mean 0) produced results that were substantially worse, even though the
means of the truncated normals used above were usually in the low teens.
Switching from an exponential to a constant off period distribution had a less
dramatic but still noticeable effect.
## V Conclusion
Overall, when the assumptions of the model are satisfied, KAMS can be said to
approximate the stationary queue length distribution exceptionally well. This,
in our opinion, is a strong indication that KAMS correctly represents the TCP
queuing process. For well understood reasons, in absolute terms KAMS is less
accurate at the upper and lower buffer limits. However, the KAMS full buffer
probability is roughly proportional to the observed one with a universal
(across network parameters) constant coefficient.
The ultimate goal of this work is to facilitate the creation of numerically
accurate closed form mathematical models of TCP networks. In light of the
presented work, two more problems need to be resolved to achieve this final
goal: first, a closed form mathematical expression approximating buffer
overflow probability in the KAMS model, and, second, a closed form
approximation expressing the relationship between mean and variance of the
truncated normal distribution, representing the stationary congestion window
size distribution, and packet loss probability. Headway has already been made
in both of these directions by Palmowski and Rolski in establishing results on
the statistics of the stationary KAMS distribution for general on-off period
distributions [19], and by Baccelli, Dumas, Chaintreau and others in
establishing the relationship between the packet loss process and stationary
congestion window size distribution [3],[8],[6]. The final synthesis of these
elements into a highly accurate model of TCP networking is a topic for future
research.
## References
* [1] Eitan Altman, Konstantin Avrachenkov, and Chadi Barakat. A stochastic model of TCP/IP with stationary random losses. In ACM SIGCOMM, pages 231–242, 2000.
* [2] F. Baccelli and D. Hong. Interaction of TCP flows as billiards. Networking, IEEE/ACM Transactions on, 13(4):841–853, Aug. 2005\.
* [3] François Baccelli, David R. McDonald, and Julien Reynier. A mean-field model for multiple TCP connections through a buffer implementing red. Perform. Eval., 49(1/4):77–97, 2002.
* [4] Francois Baccelli and Dohy Hong. Flow level simulation of large ip networks. In INFOCOM 2003, volume 3, pages 1911– 1921, 2003.
* [5] François Baccelli, David R. Mcdonald, and Julien Reynier. A Mean-Field Model for Multiple TCP Connections through a Buffer Implementing RED. Research Report RR-4449, INRIA, 2002.
* [6] Augustin Chaintreau and Danny DeVleeschauwer. A closed form formula for long-lived tcp connections throughput. Perform. Eval., 49:57–76, September 2002.
* [7] D. Anick, D. Mitra and M. M. Sondhi. Stochastic theory of a data handling system with multiple sources. ICC'80; International Conference on Communications, Seattle, 1, 1980.
* [8] Vincent Dumas, Fabrice Guillemin, and Philippe Robert. A markovian analysis of additive-increase multiplicative-decrease (AIMD) algorithms. In Advances in Applied Probability, pages 85–111, 2002.
* [9] M. Enachescu, Y. Ganjali, A. Goel, N. McKeown, and T. Roughgarden. Routers with very small buffers. pages 1–11, April 2006.
* [10] F. Kelly, A. Maulloo and D. Tan. Rate control for communication networks: shadow prices, proportional fairness and stability. J. Oper. Res. Soc., 49:237–252, 1998.
* [11] G. Raina, D. Towsley and D. Wischik. Part II: Control theory for buffer. ACM SIGCOMM Computer Communication Review, 2005.
* [12] Daniel Genin and Vladimir Marbukh. Bursty fluid approximation of TCP for modeling internet congestion at the flow level. In Forty Seventh Annual Allerton Conference on Communication, Control and Computing, September 2009.
* [13] Piyush Goel and Natarajan Gautam. On using fluid flow models for performance analysis of computer networks. Proceedings of the Eighth INFORMS Telecommunications Conference, March 2006.
* [14] C. V. Hollot, Vishal Misra, Don Towsley, and Wei bo Gong. A control theoretic analysis of red. In In Proceedings of IEEE Infocom, pages 1510–1519, 2001.
* [15] Yong Huang, Yong Liu, Weibo Gong, and D. Towsley. Two-level stochastic fluid tandem queuing model for burst impact analysis. In Decision and Control, 2007 46th IEEE Conference on, pages 3042 –3047, dec. 2007.
* [16] Hao Jiang and Constantinos Dovrolis. The origin of TCP traffic burstiness in some time scales. Technical report, in IEEE INFOCOM, 2004.
* [17] L. Kosten. Stochastic theory of a multi-entry buffer (I). Technical report, 1974.
* [18] L. Ying, G. Dullerud and R. Srikant. Global stability of internet congestion controllers with heterogeneous delays. Transactions on Networking, 14(3):579–591, 2006.
* [19] Z. Palmowski and T. Rolski. The superposition of alternating on-off flows and a fluid model. Ann. Appl. Probab., 8(2):524–540, 1998.
* [20] Vern Paxson and Sally Floyd. Wide-area traffic: The failure of poisson modeling. IEEE/ACM Transactions on Networking, 3:226–244, 1995.
* [21] R. Johari and D. Tan. End-to-end congestion control for the internet: delays and stability. Transactions on Networking, 9(6):818–832, 2001.
* [22] G. Raina and D. Wischik. Buffer sizes for large multiplexers: TCP queuing theory and instability analysis. Next Generation Internet Networks, 2005, 2005.
* [23] S. Deb, S. Shakkotai and R. Srikant. Stability and convergence of TCP-like congestion controllers in a many-flows regime. INFOCOMM, 2003.
* [24] S. Low, F. Paganini, J. Wang and J. Doyle. Linear stability of TCP/red and scalable control. Computer Networks Journal, 43(5):633–647, 2003.
* [25] R. Srikant. Mathematics of Internet congestion control. Willey, 2001.
* [26] V. Misra, W. Gong and D. Towsley. Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED. SIGCOMM, 2000.
* [27] G. Vinnicombe. On the stability of end-to-end congestion control for the internet. Univ. Cambridge Tech. Rep. CUED/F-INFENG/TR.398, 2001.
* [28] D. Wischik. Buffer sizing theory for bursty TCP flows. pages 98–101, 2006.
|
arxiv-papers
| 2011-05-06T18:19:42 |
2024-09-04T02:49:18.648663
|
{
"license": "Public Domain",
"authors": "Daniel Genin and Tassos Nakassis",
"submitter": "Daniel Genin",
"url": "https://arxiv.org/abs/1105.1347"
}
|
1105.1528
|
# Adams type inequalities and related elliptic partial differential equations
in dimension four
Yunyan Yang yunyanyang@ruc.edu.cn Department of Mathematics, Renmin
University of China, Beijing 100872, P. R. China
###### Abstract
Motivated by Ruf-Sani’s recent work, we prove an Adams type inequality and a
singular Adams type inequality in the whole four dimensional Euclidean space.
As applications of those inequalities, a class of elliptic partial
differential equations are considered. Existence of nontrivial weak solutions
and multiplicity results are obtained via the mountain-pass theorem and the
Ekeland’s variational principle. This is a continuation of our previous work
about singular Trudinger-Moser type inequality.
###### keywords:
Trudinger-Moser inequality, Adams inequality, Mountain-pass theorem, Ekeland’s
variational principle
###### MSC:
35J60, 35B33, 35J20
††journal: $\ast\ast\ast$
## 1 Introduction and main results
Let $\Omega\subset\mathbb{R}^{n}$ be a smooth bounded domain. The classical
Trudinger-Moser inequality [26, 29, 36] says
$\sup_{u\in W_{0}^{1,n}(\Omega),\,\|\nabla u\|_{L^{n}(\Omega)}\leq
1}\int_{\Omega}e^{\alpha|u|^{\frac{n}{n-1}}}dx<\infty$ (1.1)
for all $\alpha\leq\alpha_{n}=n\omega_{n-1}^{1/(n-1)}$, where $\omega_{n-1}$
is the area of the unit sphere in $\mathbb{R}^{n}$. (1.1) is sharp in
the sense that for any $\alpha>\alpha_{n}$, the integrals in (1.1) are still
finite, but the supremum of the integrals is infinite. (1.1) plays an
essential role in the study of the following partial differential equations
$\left\\{\begin{array}[]{lll}-{\rm div}(|\nabla u|^{n-2}\nabla
u)=f(x,u)\,\,\,{\rm in}\,\,\,\Omega\\\\[6.45831pt] u\in
W_{0}^{1,n}(\Omega)\setminus\\{0\\},\end{array}\right.$ (1.2)
where, roughly speaking, $f(x,u)$ behaves like $e^{|u|^{n/(n-1)}}$ as
$|u|\rightarrow\infty$. Problem (1.2) and similar problems were studied by
many authors. Here we mention Atkinson-Peletier [10], Carleson-Chang [12],
Adimurthi et al. [3]-[7], de Figueiredo-Miyagaki-Ruf [14], Panda [28], J. M.
do Ó [16], de Figueiredo-do Ó-Ruf [13], Silva-Soares [33], Yang-Zhao [38], do
Ó-Yang [19], Lam-Lu [23] and the references therein.
When $\Omega=\mathbb{R}^{n}$, the integrals in (1.1) are infinite. To get a
Trudinger-Moser type inequality in this case, D. Cao [11] proposed the
following: $\forall\alpha<4\pi$, $\forall M>0$,
$\sup_{\int_{\mathbb{R}^{2}}|\nabla u|^{2}dx\leq
1,\,\int_{\mathbb{R}^{2}}u^{2}dx\leq M}\int_{\mathbb{R}^{2}}\left(e^{\alpha
u^{2}}-1\right)dx<\infty,\,\,$ (1.3)
which is equivalent to saying that for any $\tau>0$ and $\alpha<4\pi$,
$\sup_{\int_{\mathbb{R}^{2}}\left(|\nabla u|^{2}+\tau u^{2}\right)dx\leq
1}\int_{\mathbb{R}^{2}}\left(e^{\alpha u^{2}}-1\right)dx<\infty.$ (1.4)
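For orientation, the threshold $4\pi$ appearing in (1.3) and (1.4) is exactly the critical constant of (1.1) in dimension two: since $\omega_{1}=2\pi$ is the length of the unit circle, one computes $\alpha_{2}=2\,\omega_{1}^{1/(2-1)}=4\pi$.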
(1.3) was independently generalized by Panda [27] and J. M. do Ó [15] to
$n$-dimensional case. Later Adachi and Tanaka [1] gave another type of
generalization. (1.3) and its high dimensional generalizations were
extensively used to study the equation
$-{\rm div}(|\nabla u|^{n-2}\nabla u)+V(x)|u|^{n-2}u=f(x,u)\quad{\rm
in}\quad\mathbb{R}^{n},$
where $f(x,u)$ behaves like $e^{\alpha|u|^{n/(n-1)}}$ as
$|u|\rightarrow\infty$. See for examples [9, 11, 15, 18, 27].
Notice that (1.3) or (1.4) is a subcritical Trudinger-Moser type inequality in
the whole Euclidean space. While the critical inequality was obtained by B.
Ruf [31] in dimension two and Li-Ruf [24] in general dimension. Using a simple
variable substitution, Adimurthi-Sandeep [4] established a singular Trudinger-
Moser inequality, which is generalized to the whole $\mathbb{R}^{n}$ by
Adimurthi-Yang [8], namely
$\sup_{\int_{\mathbb{R}^{n}}\left(|\nabla u|^{n}+\tau|u|^{n}\right)dx\leq
1}\int_{\mathbb{R}^{n}}\frac{e^{\alpha|u|^{n/(n-1)}}-\sum_{k=0}^{n-2}\frac{1}{k!}|u|^{kn/(n-1)}}{|x|^{\beta}}dx<\infty,$
(1.5)
where $0\leq\beta<n$, $\alpha/\alpha_{n}+\beta/n\leq 1$, $\tau$ is any fixed
positive real number. When $\beta=0$ and $\tau=1$, (1.5) is the standard
critical Trudinger-Moser type inequality [24, 31]. In [8] we also employed
(1.5) to obtain existence of weak solutions to the equation
$-{\rm div}(|\nabla u|^{n-2}\nabla
u)+V(x)|u|^{n-2}u=\frac{f(x,u)}{|x|^{\beta}}+\epsilon h\quad{\rm
in}\quad\mathbb{R}^{n},$
where $f(x,u)$ behaves like $e^{\alpha|u|^{n/(n-1)}}$ as
$|u|\rightarrow\infty$, $\epsilon>0$, $h$ belongs to the dual space of
$W^{1,n}(\mathbb{R}^{n})$. Similar problems were also studied by J. M. do Ó
and M. de Souza [17] in the case $n=2$.
Our aim is to derive similar results to [8] for bi-Laplacian operator in
dimension four. The essential tool will be the Adams type inequality in the
whole $\mathbb{R}^{4}$. Let $\Omega\subset\mathbb{R}^{4}$ be a smooth bounded
domain. As a generalization of the Trudinger-Moser inequality, Adams
inequality [2] reads
$\sup_{u\in W_{0}^{2,2}(\Omega),\,\int_{\Omega}|\Delta u|^{2}dx\leq
1}\int_{\Omega}e^{32\pi^{2}u^{2}}dx<\infty.$ (1.6)
This inequality was extended by Tasi [34] (see also Theorem 3.1 in [32]),
namely
$\sup_{u\in W^{2,2}(\Omega)\cap W_{0}^{1,2}(\Omega),\,\int_{\Omega}|\Delta
u|^{2}dx\leq 1}\int_{\Omega}e^{32\pi^{2}u^{2}}dx<\infty.$ (1.7)
Also the integrals in (1.6) will be infinite when $\Omega$ is replaced by the
whole $\mathbb{R}^{4}$. But B. Ruf and F. Sani [32] were able to establish the
corresponding Adams type inequality in $\mathbb{R}^{4}$, say
Theorem A (Ruf-Sani). There holds
$\sup_{u\in W^{2,2}(\mathbb{R}^{4}),\,\int_{\mathbb{R}^{4}}(-\Delta
u+u)^{2}dx\leq 1}\int_{\mathbb{R}^{4}}(e^{32\pi^{2}u^{2}}-1)dx<\infty.$ (1.8)
Furthermore this inequality is sharp, i.e. if $32\pi^{2}$ is replaced by any
$\alpha>32\pi^{2}$, then the supremum is infinite.
In fact they obtained more in [32], but here we focus on the four-dimensional
case. Noticing that for all $u\in W^{2,2}(\mathbb{R}^{4})$
$\int_{\mathbb{R}^{4}}(-\Delta u+u)^{2}dx=\int_{\mathbb{R}^{4}}(|\Delta
u|^{2}+2|\nabla u|^{2}+u^{2})dx,$
one can rewrite (1.8) as
$\sup_{u\in W^{2,2}(\mathbb{R}^{4}),\,\int_{\mathbb{R}^{4}}(|\Delta
u|^{2}+2|\nabla u|^{2}+u^{2})dx\leq
1}\int_{\mathbb{R}^{4}}(e^{32\pi^{2}u^{2}}-1)dx<\infty.$ (1.9)
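For the reader's convenience we record the elementary identity behind this rewriting: for smooth compactly supported $u$, and hence by density for all $u\in W^{2,2}(\mathbb{R}^{4})$, integration by parts gives
$-\int_{\mathbb{R}^{4}}u\Delta u\,dx=\int_{\mathbb{R}^{4}}|\nabla u|^{2}dx,\qquad\text{so that}\qquad\int_{\mathbb{R}^{4}}(-\Delta u+u)^{2}dx=\int_{\mathbb{R}^{4}}\left(|\Delta u|^{2}+2|\nabla u|^{2}+u^{2}\right)dx.$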
One of our goals is the following:
Theorem 1.1. Let $0\leq\beta<4$. Then for all $\alpha>0$ and $u\in
W^{2,2}(\mathbb{R}^{4})$, there holds
$\int_{\mathbb{R}^{4}}\frac{e^{\alpha u^{2}}-1}{|x|^{\beta}}dx<\infty.$ (1.10)
Furthermore, assume that $\tau$ and $\sigma$ are two positive constants. Then we have
for all $\alpha<32\pi^{2}\left(1-\frac{\beta}{4}\right)$,
$\sup_{u\in W^{2,2}(\mathbb{R}^{4}),\,\int_{\mathbb{R}^{4}}(|\Delta
u|^{2}+\tau|\nabla u|^{2}+\sigma u^{2})dx\leq
1}\int_{\mathbb{R}^{4}}\frac{e^{\alpha u^{2}}-1}{|x|^{\beta}}dx<\infty.$
(1.11)
When $\alpha>32\pi^{2}\left(1-\frac{\beta}{4}\right)$, the supremum is
infinite.
We remark that the inequality (1.11) in Theorem 1.1 covers only the subcritical case.
How to establish it in the critical case
$\alpha=32\pi^{2}\left(1-{\beta}/{4}\right)$ is still open. In Section 2, we
will show that Theorem 1.1 can be derived from the following:
Theorem 1.2. For all $\alpha>0$ and $u\in W^{2,2}(\mathbb{R}^{4})$, there
holds
$\int_{\mathbb{R}^{4}}\left(e^{\alpha u^{2}}-1\right)dx<\infty.$ (1.12)
For all constants $\tau>0$ and $\sigma>0$, there holds
$\sup_{u\in W^{2,2}(\mathbb{R}^{4}),\,\int_{\mathbb{R}^{4}}(|\Delta
u|^{2}+\tau|\nabla u|^{2}+\sigma u^{2})dx\leq
1}\int_{\mathbb{R}^{4}}\left(e^{32\pi^{2}u^{2}}-1\right)dx<\infty.$ (1.13)
Furthermore this inequality is sharp, i.e. if $32\pi^{2}$ is replaced by any
$\alpha>32\pi^{2}$, then the supremum is infinite.
Though the second part of Theorem 1.2 is similar to Theorem A, (1.12) and
(1.13) are more suitable to use than (1.8) or (1.9) when considering the
related partial differential equations. This is also our next goal. Precisely
Theorem 1.1 can be applied to study the existence of weak solutions to the
following nonlinear equation
$\Delta^{2}u-{\rm div}(a(x)\nabla u)+b(x)u=\frac{f(x,u)}{|x|^{\beta}}+\epsilon h(x)\quad{\rm in}\quad\mathbb{R}^{4}.$ (1.14)
Here and throughout this paper we assume $0\leq\beta<4$, $a(x)$, $b(x)$ are
two continuous functions satisfying
$(A_{1})$ there exist two positive constants $a_{0}$ and $b_{0}$ such that
$a(x)\geq a_{0}$ and $b(x)\geq b_{0}$ for all $x\in\mathbb{R}^{4}$;
$(A_{2})$ $\frac{1}{b(x)}\in L^{1}(\mathbb{R}^{4})$.
We also assume the following growth condition on the nonlinearity $f(x,s)$:
$(H_{1})$ There exist constants $\alpha_{0}$, $b_{1}$, $b_{2}>0$ and
$\gamma\geq 1$ such that for all $(x,s)\in\mathbb{R}^{4}\times\mathbb{R}$,
$|f(x,s)|\leq b_{1}|s|+b_{2}|s|^{\gamma}\left(e^{\alpha_{0}s^{2}}-1\right).$
$(H_{2})$ There exists $\mu>2$ such that for all $x\in\mathbb{R}^{4}$ and
$s\not=0$,
$0<\mu F(x,s)\equiv\mu\int_{0}^{s}f(x,t)dt\leq sf(x,s).$
$(H_{3})$ There exist constants $R_{0}$, $M_{0}>0$ such that for all
$x\in\mathbb{R}^{4}$ and $|s|\geq R_{0}$,
$0<F(x,s)\leq M_{0}|f(x,s)|.$
Define a function space
$E=\left\\{u\in W^{2,2}(\mathbb{R}^{4}):\int_{\mathbb{R}^{4}}\left(|\Delta
u|^{2}+a(x)|\nabla u|^{2}+b(x)u^{2}\right)dx<\infty\right\\}.$ (1.15)
We say that $u\in E$ is a weak solution of problem (1.14) if for all
$\varphi\in E$ we have
$\int_{\mathbb{R}^{4}}\left(\Delta u\Delta\varphi+a(x)\nabla
u\nabla\varphi+b(x)u\varphi\right)dx=\int_{\mathbb{R}^{4}}\frac{f(x,u)}{|x|^{\beta}}\varphi
dx+\epsilon\int_{\mathbb{R}^{4}}h\varphi dx,$
where $h\in E^{*}$. Here and in the sequel we denote the dual space of $E$ by
$E^{*}$. For all $u\in E$, we denote for simplicity the norm of $u$ by
$\|u\|_{E}=\left(\int_{\mathbb{R}^{4}}\left(|\Delta u|^{2}+a(x)|\nabla
u|^{2}+b(x)u^{2}\right)dx\right)^{1/2}.$ (1.16)
For $\beta:0\leq\beta<4$, we define a singular eigenvalue by
$\lambda_{\beta}=\inf_{u\in E,\,u\not\equiv
0}\frac{\|u\|_{E}^{2}}{\int_{\mathbb{R}^{4}}\frac{u^{2}}{|x|^{\beta}}dx}.$
(1.17)
If $\beta=0$, then by $(A_{1})$, obviously we have $\lambda_{0}\geq b_{0}>0$.
If $0<\beta<4$, the continuous embedding of
$W^{2,2}(\mathbb{R}^{4})\hookrightarrow L^{q}(\mathbb{R}^{4})$ $(\forall q\geq
2)$ together with the Hölder inequality implies
$\int_{\mathbb{R}^{4}}\frac{u^{2}}{|x|^{\beta}}dx\leq\int_{|x|>1}u^{2}dx+\left(\int_{|x|\leq
1}|u|^{2t}dx\right)^{1/t}\left(\int_{|x|\leq 1}\frac{1}{|x|^{\beta
t^{\prime}}}dx\right)^{1/t^{\prime}}\leq
C\|u\|_{W^{2,2}(\mathbb{R}^{4})}^{2},$ (1.18)
where $1/t+1/t^{\prime}=1$, $0<\beta t^{\prime}<4$ and
$\|u\|_{W^{2,2}(\mathbb{R}^{4})}^{2}=\int_{\mathbb{R}^{4}}\left(|\nabla^{2}u|^{2}+|\nabla
u|^{2}+u^{2}\right)dx$. Standard elliptic estimates (see for example [20],
Chapter 9) imply that the above $W^{2,2}(\mathbb{R}^{4})$-norm is equivalent
to
$\|u\|_{W^{2,2}(\mathbb{R}^{4})}=\left(\int_{\mathbb{R}^{4}}\left(|\Delta
u|^{2}+|\nabla u|^{2}+u^{2}\right)dx\right)^{1/2}.$ (1.19)
In the sequel, we use (1.19) as the norm of function in
$W^{2,2}(\mathbb{R}^{4})$. Combining (1.16), (1.18), (1.19) and the assumption
$(A_{1})$, we have $\int_{\mathbb{R}^{4}}\frac{u^{2}}{|x|^{\beta}}dx\leq
C\|u\|_{E}^{2}$. Hence, by (1.17), we conclude $\lambda_{\beta}>0$.
When $\epsilon=0$, (1.14) becomes
$\Delta^{2}u-{\rm div}(a(x)\nabla u)+b(x)u=\frac{f(x,u)}{|x|^{\beta}}.$ (1.20)
Now we state an application of Theorem 1.1 as follows:
Theorem 1.3. Assume that $a(x)$ and $b(x)$ are two continuous functions
satisfying $(A_{1})$ and $(A_{2})$.
$f:\mathbb{R}^{4}\times\mathbb{R}\rightarrow\mathbb{R}$ is a continuous
function and the hypotheses $(H_{1})$, $(H_{2})$ and $(H_{3})$ hold.
Furthermore we assume
$(H_{4})$ $\limsup_{s\rightarrow 0}\frac{2F(x,s)}{s^{2}}<\lambda_{\beta}$ uniformly with respect to $x\in\mathbb{R}^{4}$;
$(H_{5})$ $\liminf_{s\rightarrow+\infty}sf(x,s)e^{-\alpha_{0}s^{2}}\geq\nu_{0}$ uniformly with respect to $x\in\mathbb{R}^{4}$, where $\nu_{0}$ is a sufficiently large constant (see Lemma 3.4 below).
Then the equation (1.20) has a nontrivial mountain-pass type weak solution
$u\in E$.
We remark that the result in Theorem 1.3 is stronger than Theorem 1.2 of [8]
in the case $\epsilon=0$. One reason is that $E$ is compactly embedded in
$L^{q}(\mathbb{R}^{4})$ for all $q\geq 1$ (see Lemma 3.6 below), whereas under
the assumptions in [8] the corresponding function space is compactly embedded
in $L^{q}(\mathbb{R}^{N})$ only for $q\geq N$. The other reason is that here we
have the additional assumption $(H_{5})$.
When $\epsilon\not=0$, we have the following:
Theorem 1.4. Assume that $a(x)$ and $b(x)$ are two continuous functions
satisfying $(A_{1})$ and $(A_{2})$.
$f:\mathbb{R}^{4}\times\mathbb{R}\rightarrow\mathbb{R}$ is a continuous
function and the hypotheses $(H_{1})$, $(H_{2})$ and $(H_{3})$ hold.
Furthermore we assume $(H_{4})$. Then there exists $\epsilon_{1}>0$ such that
if $0<\epsilon<\epsilon_{1}$, then the problem (1.14) has a weak solution of
mountain-pass type.
Theorem 1.5. Assume that $a(x)$ and $b(x)$ are two continuous functions
satisfying $(A_{1})$ and $(A_{2})$.
$f:\mathbb{R}^{4}\times\mathbb{R}\rightarrow\mathbb{R}$ is a continuous
function and the hypotheses $(H_{1})$, $(H_{2})$ and $(H_{4})$ hold.
Furthermore assume $h\not\equiv 0$. Then there exists $\epsilon_{2}>0$ such
that if $0<\epsilon<\epsilon_{2}$, then the problem (1.14) has a weak solution
with negative energy.
The most interesting question is under what conditions the two solutions
obtained in Theorem 1.4 and Theorem 1.5 are distinct. Precisely, we have the
following:
Theorem 1.6. Assume that $a(x)$ and $b(x)$ are two continuous functions
satisfying $(A_{1})$ and $(A_{2})$.
$f:\mathbb{R}^{4}\times\mathbb{R}\rightarrow\mathbb{R}$ is a continuous
function and the hypotheses $(H_{1})$, $(H_{2})$, $(H_{3})$, $(H_{4})$ and
$(H_{5})$ hold. Furthermore assume $h\not\equiv 0$. Then there exists
$\epsilon_{3}>0$ such that if $0<\epsilon<\epsilon_{3}$, then the problem
(1.14) has two distinct weak solutions.
Before ending the introduction, we give an example of $f(x,s)$ satisfying
$(H_{1})$–$(H_{5})$, namely
$f(x,s)=\psi(x)s(e^{\alpha_{0}s^{2}}-1),$ (1.21)
where $\alpha_{0}>0$ and $\psi(x)$ is a continuous function with
$0<c_{1}\leq\psi\leq c_{2}$ for constants $c_{1}$ and $c_{2}$. Obviously
$(H_{1})$ is satisfied. Integrating (1.21), we have
$F(x,s)=\int_{0}^{s}f(x,t)dt=\frac{1}{2\alpha_{0}}\psi(x)\left(e^{\alpha_{0}s^{2}}-1-\alpha_{0}s^{2}\right).$
(1.22)
For $2<\mu\leq 4$, we have for $s\not=0$,
$\displaystyle 0<\mu
F(x,s)=\frac{\mu}{2\alpha_{0}}\psi(x)\sum_{k=2}^{\infty}\frac{\alpha_{0}^{k}s^{2k}}{k!}\leq\frac{\mu}{4}\psi(x)s^{2}\sum_{k=2}^{\infty}\frac{\alpha_{0}^{k-1}s^{2(k-1)}}{(k-1)!}\leq
sf(x,s).$
Hence $(H_{2})$ holds. It follows from (1.21) and (1.22) that
$0<F(x,s)\leq\frac{1}{2\alpha_{0}}|f(x,s)|$ for $|s|\geq 1$. Thus $(H_{3})$ is
satisfied. By (1.22), we have ${F(x,s)}/{s^{2}}\rightarrow 0$ as $s\rightarrow
0$. Hence $(H_{4})$ holds. Finally $(H_{5})$ follows from (1.21) immediately.
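For instance, taking $\psi\equiv 1$ and $\alpha_{0}=1$ yields $f(x,s)=s(e^{s^{2}}-1)$; in this case $sf(x,s)e^{-s^{2}}=s^{2}(1-e^{-s^{2}})\rightarrow+\infty$ as $s\rightarrow+\infty$, so the lower bound required in $(H_{5})$ holds no matter how large $\nu_{0}$ is.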
We organize this paper as follows: In Section 2, we prove an Adams type
inequality and a singular Adams type inequality in the whole $\mathbb{R}^{4}$
(Theorem 1.1 and Theorem 1.2). Applications of singular Adams inequality
(Theorems 1.3-1.6) will be shown in Section 3.
## 2 Adams type inequality in the whole $\mathbb{R}^{4}$
In this section, we will prove Theorem 1.1 and Theorem 1.2. Let us first prove
Theorem 1.2 by using the density of $C_{0}^{\infty}(\mathbb{R}^{4})$ in
$W^{2,2}(\mathbb{R}^{4})$ and an argument of Ruf-Sani [32].
Proof of Theorem 1.2. Firstly we prove (1.13). For any $\tau>0$ and $\sigma>0$,
we denote $c_{0}=\min\{\tau/2,\sqrt{\sigma}\}$. Let $u$ be a function belonging
to $W^{2,2}(\mathbb{R}^{4})$ and satisfying
$\int_{\mathbb{R}^{4}}(-\Delta u+c_{0}u)^{2}dx=1,$
or equivalently
$\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+2c_{0}|\nabla
u|^{2}+c_{0}^{2}u^{2})dx=1.$
By the density of $C_{0}^{\infty}(\mathbb{R}^{4})$ in
$W^{2,2}(\mathbb{R}^{4})$, without loss of generality, we can find a sequence
of functions $u_{k}\in C_{0}^{\infty}(\mathbb{R}^{4})$ such that
$u_{k}\rightarrow u$ in $W^{2,2}(\mathbb{R}^{4})$ as $k\rightarrow\infty$ and
$\int_{\mathbb{R}^{4}}(-\Delta u_{k}+c_{0}u_{k})^{2}dx=1$; otherwise we can use
$\widetilde{u}_{k}=\frac{u_{k}}{\left(\int_{\mathbb{R}^{4}}(-\Delta
u_{k}+c_{0}u_{k})^{2}dx\right)^{1/2}}$
instead of $u_{k}$. Now suppose ${\rm supp}\,u_{k}\subset\mathbb{B}_{R_{k}}$
for any fixed $k$. Let $f_{k}=-\Delta u_{k}+c_{0}u_{k}$. Consider the problem
$\left\{\begin{array}{ll}-\Delta v_{k}+c_{0}v_{k}=f_{k}^{\sharp}&{\rm in}\quad\mathbb{B}_{R_{k}},\\ v_{k}\in W_{0}^{1,2}(\mathbb{B}_{R_{k}}),\end{array}\right.$
where $f_{k}^{\sharp}$ is the Schwarz decreasing rearrangement of $f_{k}$ (see
for example [21]). By the property of rearrangement, we have
$\int_{\mathbb{B}_{R_{k}}}(-\Delta
v_{k}+c_{0}v_{k})^{2}dx=\int_{\mathbb{B}_{R_{k}}}(-\Delta
u_{k}+c_{0}u_{k})^{2}dx=1.$ (2.1)
It follows from Trombetti-Vazquez [35] that $v_{k}$ is radially symmetric and
$\int_{\mathbb{B}_{R_{k}}}(e^{32\pi^{2}{u_{k}}^{2}}-1)dx=\int_{\mathbb{B}_{R_{k}}}(e^{32\pi^{2}{u_{k}^{\sharp}}^{2}}-1)dx\leq\int_{\mathbb{B}_{R_{k}}}(e^{32\pi^{2}{v_{k}}^{2}}-1)dx.$
(2.2)
The radial lemma ([22], Lemma 1.1, Chapter 6) implies
$|v_{k}(x)|\leq\frac{1}{\sqrt{2}\pi}\frac{1}{|x|^{{3}/{2}}}\|v_{k}\|_{W^{1,2}(\mathbb{R}^{4})}.$
(2.3)
The equality (2.1) implies that
$\|v_{k}\|_{W^{1,2}(\mathbb{R}^{4})}=\left(\int_{\mathbb{B}_{R_{k}}}(|\nabla
v_{k}|^{2}+v_{k}^{2})dx\right)^{1/2}\leq\sqrt{\frac{1}{2c_{0}}+\frac{1}{c_{0}^{2}}}.$
(2.4)
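Indeed, by (2.1), $2c_{0}\int_{\mathbb{B}_{R_{k}}}|\nabla v_{k}|^{2}dx\leq 1$ and $c_{0}^{2}\int_{\mathbb{B}_{R_{k}}}v_{k}^{2}dx\leq 1$, which gives the bound in (2.4).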
Choose
$r_{0}=\left(\frac{1}{2\pi^{2}}\left(\frac{1}{2c_{0}}+\frac{1}{c_{0}^{2}}\right)\right)^{1/3}$.
If $R_{k}\leq r_{0}$, then (1.7) and (2.1) imply
$\int_{\mathbb{B}_{R_{k}}}(e^{32\pi^{2}v_{k}^{2}}-1)dx\leq C$ (2.5)
for some constant $C$ depending only on $c_{0}$. If $R_{k}>r_{0}$, (2.3)
implies that $|v_{k}(x)|\leq 1$ when $|x|\geq r_{0}$. Thus we have by (2.1),
$\int_{\mathbb{B}_{R_{k}}\setminus\mathbb{B}_{r_{0}}}(e^{32\pi^{2}v_{k}^{2}}-1)dx\leq\sum_{j=1}^{\infty}\frac{(32\pi^{2})^{j}}{j!}\int_{\mathbb{B}_{R_{k}}}v_{k}^{2}dx\leq\frac{1}{c_{0}^{2}}\sum_{j=1}^{\infty}\frac{(32\pi^{2})^{j}}{j!}.$
(2.6)
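Here we expanded $e^{32\pi^{2}v_{k}^{2}}-1=\sum_{j=1}^{\infty}\frac{(32\pi^{2})^{j}v_{k}^{2j}}{j!}$ and used that $v_{k}^{2j}\leq v_{k}^{2}$ on $\mathbb{B}_{R_{k}}\setminus\mathbb{B}_{r_{0}}$ (since $|v_{k}|\leq 1$ there), together with $c_{0}^{2}\int_{\mathbb{B}_{R_{k}}}v_{k}^{2}dx\leq 1$, which follows from (2.1).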
On $\mathbb{B}_{r_{0}}$, we have for any $\epsilon>0$ by using the Young
inequality
$v_{k}^{2}(x)\leq(1+\epsilon)(v_{k}(x)-v_{k}(r_{0}))^{2}+\left(1+\frac{1}{\epsilon}\right)v_{k}^{2}(r_{0}).$
Take $\epsilon$ such that
$\frac{1}{1+\epsilon}=\int_{\mathbb{B}_{R_{k}}}|\Delta
v_{k}|^{2}dx=1-2c_{0}\int_{\mathbb{B}_{R_{k}}}|\nabla v_{k}|^{2}dx-
c_{0}^{2}\int_{\mathbb{B}_{R_{k}}}v_{k}^{2}dx.$
It follows that
$1+\frac{1}{\epsilon}=\frac{1}{2c_{0}\int_{\mathbb{B}_{R_{k}}}|\nabla
v_{k}|^{2}dx+c_{0}^{2}\int_{\mathbb{B}_{R_{k}}}v_{k}^{2}dx}\leq\frac{1}{\min\\{2c_{0},c_{0}^{2}\\}\|v_{k}\|^{2}_{W^{1,2}}}.$
This together with (2.3) and (2.4) gives
$\left(1+\frac{1}{\epsilon}\right)v_{k}^{2}(r_{0})\leq\frac{1}{\min\\{2c_{0},c_{0}^{2}\\}}\frac{1}{2\pi^{2}r_{0}^{3}}=\frac{1}{\min\\{2c_{0},c_{0}^{2}\\}\left(\frac{1}{2c_{0}}+\frac{1}{c_{0}^{2}}\right)}<1.$
Noticing that $v_{k}(x)-v_{k}(r_{0})\in W^{2,2}(\mathbb{B}_{r_{0}})\cap
W_{0}^{1,2}(\mathbb{B}_{r_{0}})$ and that $\int_{\mathbb{B}_{R_{k}}}|\Delta
v_{k}|^{2}dx\geq\int_{\mathbb{B}_{r_{0}}}|\Delta(v_{k}-v_{k}(r_{0}))|^{2}dx$,
we obtain by (1.7)
$\int_{\mathbb{B}_{r_{0}}}(e^{32\pi^{2}v_{k}^{2}}-1)dx\leq C$
for some constant $C$ depending only on $c_{0}$. This together with (2.2),
(2.5), (2.6) and Fatou’s Lemma implies that there exists a constant $C$
depending only on $c_{0}$ such that
$\int_{\mathbb{R}^{4}}(e^{32\pi^{2}u^{2}}-1)dx\leq\liminf_{k\rightarrow\infty}\int_{\mathbb{B}_{R_{k}}}(e^{32\pi^{2}u_{k}^{2}}-1)dx\leq\liminf_{k\rightarrow\infty}\int_{\mathbb{B}_{R_{k}}}(e^{32\pi^{2}v_{k}^{2}}-1)dx\leq
C.$ (2.7)
Notice that
$\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+\tau|\nabla u|^{2}+\sigma
u^{2})dx\geq\int_{\mathbb{R}^{4}}(-\Delta u+c_{0}u)^{2}dx.$
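Indeed, expanding the square gives $\int_{\mathbb{R}^{4}}(-\Delta u+c_{0}u)^{2}dx=\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+2c_{0}|\nabla u|^{2}+c_{0}^{2}u^{2})dx$, and the choice $c_{0}=\min\{\tau/2,\sqrt{\sigma}\}$ ensures $2c_{0}\leq\tau$ and $c_{0}^{2}\leq\sigma$.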
We obtain
$\displaystyle\sup_{\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+\tau|\nabla
u|^{2}+\sigma u^{2})dx\leq
1}\int_{\mathbb{R}^{4}}\left(e^{32\pi^{2}u^{2}}-1\right)dx$
$\displaystyle\leq$ $\displaystyle\sup_{\int_{\mathbb{R}^{4}}(-\Delta
u+c_{0}u)^{2}dx\leq
1}\int_{\mathbb{R}^{4}}\left(e^{32\pi^{2}u^{2}}-1\right)dx$ $\displaystyle=$
$\displaystyle\sup_{\int_{\mathbb{R}^{4}}(-\Delta
u+c_{0}u)^{2}dx=1}\int_{\mathbb{R}^{4}}\left(e^{32\pi^{2}u^{2}}-1\right)dx.$
This together with (2.7) implies (1.13).
Secondly, for $\alpha>32\pi^{2}$, we employ a sequence of functions
$u_{\epsilon}$ constructed in Section 2 of [25] (see also (33) in [32]). Let
$\widetilde{u_{\epsilon}}=u_{\epsilon}/\left(\int_{\mathbb{R}^{4}}(|\Delta
u_{\epsilon}|^{2}+\tau|\nabla u_{\epsilon}|^{2}+\sigma
u_{\epsilon}^{2})dx\right)^{1/2}$. A straightforward calculation shows that
$\sup_{u\in W^{2,2}(\mathbb{R}^{4}),\,\int_{\mathbb{R}^{4}}(|\Delta
u|^{2}+\tau|\nabla u|^{2}+\sigma u^{2})dx\leq
1}\int_{\mathbb{R}^{4}}\left(e^{\alpha u^{2}}-1\right)dx\geq\int_{\mathbb{R}^{4}}\left(e^{\alpha\widetilde{u}_{\epsilon}^{2}}-1\right)dx\rightarrow+\infty$
as $\epsilon\rightarrow 0$. Hence $32\pi^{2}$ is the best constant for (1.13).
Thirdly we prove (1.12). Let $\alpha>0$ be a real number and $u$ be a function
belonging to $W^{2,2}(\mathbb{R}^{4})$. By the density of
$C_{0}^{\infty}(\mathbb{R}^{4})$ in $W^{2,2}(\mathbb{R}^{4})$, there exists
some $u_{0}\in C_{0}^{\infty}(\mathbb{R}^{4})$ such that
$\|u-u_{0}\|_{W^{2,2}(\mathbb{R}^{4})}<\frac{1}{\sqrt{2\alpha}}.$
Here we use (1.19) as the definition of $W^{2,2}(\mathbb{R}^{4})$-norm. Thus
$\int_{\mathbb{R}^{4}}(|\Delta(u-u_{0})|^{2}+|\nabla(u-u_{0})|^{2}+(u-u_{0})^{2})dx\leq\frac{1}{2\alpha}.$
Assume ${\rm supp}\,u_{0}\subset\mathbb{B}_{R}$ for some $R>0$ and
$|u_{0}|\leq M$ for some $M>0$. Using the inequality $(a+b)^{2}\leq
2a^{2}+2b^{2}$, we have
$\displaystyle{}\int_{\mathbb{R}^{4}}\left(e^{\alpha u^{2}}-1\right)dx$
$\displaystyle\leq$
$\displaystyle\int_{\mathbb{R}^{4}}\left(e^{2\alpha(u-u_{0})^{2}+2\alpha
u_{0}^{2}}-1\right)dx$ (2.8) $\displaystyle\leq$ $\displaystyle e^{2\alpha
M^{2}}\int_{\mathbb{R}^{4}}\left(e^{2\alpha(u-u_{0})^{2}}-1\right)dx+\int_{\mathbb{R}^{4}}\left(e^{2\alpha
u_{0}^{2}}-1\right)dx$ $\displaystyle\leq$ $\displaystyle e^{2\alpha
M^{2}}\int_{\mathbb{R}^{4}}\left(e^{2\alpha(u-u_{0})^{2}}-1\right)dx+(e^{2\alpha
M^{2}}-1)|\mathbb{B}_{R}|,$
where $|\mathbb{B}_{R}|$ denotes the volume of $\mathbb{B}_{R}$. By (1.13)
with $\tau=1$ and $\sigma=1$, we have
$\int_{\mathbb{R}^{4}}\left(e^{2\alpha(u-u_{0})^{2}}-1\right)dx\leq C$
for some universal constant $C$. Thus (1.12) follows from (2.8) immediately.
$\hfill\Box$
Now we use the Hölder inequality and Theorem 1.2 to prove Theorem 1.1. To do
this, we need a technical lemma, namely
Lemma 2.1. For all $p\geq 1$ and $t\geq 1$, we have $(t-1)^{p}\leq t^{p}-1$.
In particular $\left(e^{s^{2}}-1\right)^{p}\leq e^{ps^{2}}-1$ for all
$s\in\mathbb{R}$ and $p\geq 1$.
Proof. For all $p\geq 1$ and $t\geq 1$, we set
$\varphi(t)=t^{p}-1-(t-1)^{p}.$
Since $\varphi(1)=0$ and the derivative of $\varphi$ satisfies
$\frac{d}{dt}\varphi(t)=pt^{p-1}-p(t-1)^{p-1}\geq 0,\,\,\forall t\geq 1,$
we have $\varphi(t)\geq 0$ for all $t\geq 1$. The second assertion follows by
taking $t=e^{s^{2}}\geq 1$. $\hfill\Box$
Proof of Theorem 1.1: For any $\alpha>0$ and $u\in W^{2,2}(\mathbb{R}^{4})$,
we have by using the Hölder inequality and Lemma 2.1
$\displaystyle\int_{\mathbb{R}^{4}}\frac{e^{\alpha u^{2}}-1}{|x|^{\beta}}dx$
$\displaystyle=$ $\displaystyle\int_{|x|>1}\frac{e^{\alpha
u^{2}}-1}{|x|^{\beta}}dx+\int_{|x|\leq 1}\frac{e^{\alpha
u^{2}}-1}{|x|^{\beta}}dx{}$ (2.9) $\displaystyle\leq$
$\displaystyle\int_{\mathbb{R}^{4}}(e^{\alpha u^{2}}-1)dx+\left(\int_{|x|\leq
1}\left(e^{\alpha u^{2}}-1\right)^{p}dx\right)^{1/p}\left(\int_{|x|\leq
1}\frac{1}{|x|^{\beta q}}dx\right)^{1/q}$ $\displaystyle\leq$
$\displaystyle\int_{\mathbb{R}^{4}}(e^{\alpha
u^{2}}-1)dx+C\left(\int_{\mathbb{R}^{4}}\left(e^{\alpha
pu^{2}}-1\right)dx\right)^{1/p}$
for some constant $C$ depending only on $q$ and $\beta$, where $q>1$ is a real
number such that $\beta q<4$ and $1/p+1/q=1$. This together with (1.12)
implies (1.10).
Assume $\alpha<32\pi^{2}(1-\beta/4)$ and $u\in W^{2,2}(\mathbb{R}^{4})$
satisfies
$\int_{\mathbb{R}^{4}}\left(|\Delta u|^{2}+\tau|\nabla u|^{2}+\sigma
u^{2}\right)dx\leq 1.$
Coming back to (2.9), since $\beta q<4$ and $1/p+1/q=1$, one has
$\alpha p<32\pi^{2}\frac{1-\beta/4}{1-1/q}.$
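Indeed, $1/p=1-1/q$, so $\alpha p=\frac{\alpha}{1-1/q}$, which tends to $\frac{\alpha}{1-\beta/4}<32\pi^{2}$ as $q\uparrow 4/\beta$.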
We can further choose $q$ sufficiently close to $4/\beta$ such that $\alpha
p<32\pi^{2}$. Hence (1.11) follows from (2.9) and (1.13) immediately.
$\hfill\Box$
## 3 Partial differential equations related to Adams type inequality in
$\mathbb{R}^{4}$
In this section, we will use the mountain-pass theory to discuss the existence
of solutions to the problem (1.14). Precisely we will prove Theorems 1.3-1.6.
Firstly we construct the functional framework corresponding to (1.14).
Secondly we analyze the geometry of the functional. Thirdly we use the
mountain-pass theory to prove Theorem 1.3 and Theorem 1.4. Finally we use
compactness analysis to prove Theorem 1.5 and Theorem 1.6. Throughout this
section we assume that $f:\mathbb{R}^{4}\times\mathbb{R}\rightarrow\mathbb{R}$
is a continuous function.
### 3.1 The functional
Now we use the notations of Section 1. For $u\in W^{2,2}(\mathbb{R}^{4})$, we
define a functional
$J_{\epsilon}(u)=\frac{1}{2}\int_{\mathbb{R}^{4}}\left(|\Delta
u|^{2}+a(x)|\nabla
u|^{2}+b(x)u^{2}\right)dx-\int_{\mathbb{R}^{4}}\frac{F(x,u)}{|x|^{\beta}}dx-\epsilon\int_{\mathbb{R}^{4}}hudx,$
where $h\in E^{*}$, the dual space of $E$ (see (1.15)). When $\epsilon=0$, we
write
$J(u)=\frac{1}{2}\int_{\mathbb{R}^{4}}\left(|\Delta u|^{2}+a(x)|\nabla
u|^{2}+b(x)u^{2}\right)dx-\int_{\mathbb{R}^{4}}\frac{F(x,u)}{|x|^{\beta}}dx.$
Here $F(x,s)=\int_{0}^{s}f(x,t)dt$. Since we assume $f(x,s)$, $a(x)$, $b(x)$
are all continuous functions and $(A_{1})$, $(A_{2})$, $(H_{1})$ hold, it
follows from Theorem 1.1 that $J_{\epsilon}$ or $J$ is well defined and
$J_{\epsilon},\,J\in\mathcal{C}^{1}(E,\mathbb{R}).$ (3.1)
Let us explain how to show (3.1). It suffices to show that if
$u_{j}\rightarrow u_{\infty}$ in $E$, then $J_{\epsilon}(u_{j})\rightarrow
J_{\epsilon}(u_{\infty})$ and $J_{\epsilon}^{\prime}(u_{j})\rightarrow
J_{\epsilon}^{\prime}(u_{\infty})$ in $E^{*}$ as $j\rightarrow\infty$. We
point out a crucial fact: for all $q\geq 1$, $E$ is compactly embedded in
$L^{q}(\mathbb{R}^{4})$; the proof of this fact is postponed to Lemma 3.6 below.
By $(H_{1})$,
$|F(x,u_{j})|\leq
b_{1}u_{j}^{2}+b_{2}|u_{j}|^{\gamma+1}\left(e^{\alpha_{0}u_{j}^{2}}-1\right).$
(3.2)
Firstly, since $\|u_{j}\|_{E}$ is bounded and $E\hookrightarrow
L^{q}(\mathbb{R}^{4})$ is compact for all $q\geq 1$, we may assume
$u_{j}\rightarrow u_{\infty}$ in $L^{q}(\mathbb{R}^{4})$ for all $q\geq 1$. An
easy computation gives
$\lim_{j\rightarrow\infty}\int_{\mathbb{R}^{4}}\frac{|u_{j}|^{q}}{|x|^{\beta}}dx=\int_{\mathbb{R}^{4}}\frac{|u_{\infty}|^{q}}{|x|^{\beta}}dx\quad{\rm
for\,\,all}\quad q\geq 1.$ (3.3)
Next we claim that
$\lim_{j\rightarrow\infty}\int_{\mathbb{R}^{4}}\frac{|u_{j}|^{\gamma+1}\left(e^{\alpha_{0}u_{j}^{2}}-1\right)}{|x|^{\beta}}dx=\int_{\mathbb{R}^{4}}\frac{|u_{\infty}|^{\gamma+1}\left(e^{\alpha_{0}u_{\infty}^{2}}-1\right)}{|x|^{\beta}}dx.$
(3.4)
For this purpose, we define a function
$\varphi:\mathbb{R}^{4}\times[0,\infty)\rightarrow\mathbb{R}$ by
$\varphi(x,s)=\frac{s^{\gamma+1}\left(e^{\alpha_{0}s^{2}}-1\right)}{|x|^{\beta}}.$
By the mean value theorem
$\displaystyle{}|\varphi(x,|u_{j}|)-\varphi(x,|u_{\infty}|)|$
$\displaystyle\leq$ $\displaystyle|{\partial\varphi}/{\partial
s}(\xi)||u_{j}-u_{\infty}|$ (3.5) $\displaystyle\leq$
$\displaystyle(\eta(|u_{j}|)+\eta(|u_{\infty}|))\frac{|u_{j}-u_{\infty}|}{|x|^{\beta}},$
where $\xi$ lies between $|u_{j}(x)|$ and $|u_{\infty}(x)|$,
$\eta:[0,\infty)\rightarrow\mathbb{R}$ is a function defined by
$\eta(s)=\left((\gamma+1)s^{\gamma}+2\alpha_{0}s^{\gamma+2}\right)\left(e^{\alpha_{0}s^{2}}-1\right)+2\alpha_{0}s^{\gamma+2}.$
Using Lemma 2.1 and the inequalities $(a+b)^{2}\leq 2a^{2}+2b^{2}$,
$ab-1\leq\frac{a^{p}-1}{p}+\frac{b^{r}-1}{r}$, where $a$, $b\geq 0$,
$\frac{1}{p}+\frac{1}{r}=1$, we have
$\displaystyle\int_{\mathbb{R}^{4}}\left(e^{\alpha_{0}u_{j}^{2}}-1\right)^{q}dx$
$\displaystyle\leq$
$\displaystyle\int_{\mathbb{R}^{4}}\left(e^{2\alpha_{0}q(u_{j}-u_{\infty})^{2}+2\alpha_{0}qu_{\infty}^{2}}-1\right)dx$
$\displaystyle\leq$
$\displaystyle\frac{1}{p}\int_{\mathbb{R}^{4}}\left(e^{2\alpha_{0}qp(u_{j}-u_{\infty})^{2}}-1\right)dx+\frac{1}{r}\int_{\mathbb{R}^{4}}\left(e^{2\alpha_{0}qru_{\infty}^{2}}-1\right)dx.$
Recalling that $\|u_{j}-u_{\infty}\|_{E}\rightarrow 0$ as $j\rightarrow\infty$
and applying Theorem 1.2, we can see that
$\sup_{j}\int_{\mathbb{R}^{4}}\left(e^{\alpha_{0}u_{j}^{2}}-1\right)^{q}dx<\infty,\quad\forall
q\geq 1.$
This together with the compact embedding $E\hookrightarrow
L^{q}(\mathbb{R}^{4})$ for all $q\geq 1$ implies that
$\eta(|u_{j}|)+\eta(|u_{\infty}|)$ is bounded in $L^{q}(\mathbb{R}^{4})$ for
all $q\geq 1$. By (3.3) and (3.5), the Hölder inequality leads to
$\lim_{j\rightarrow\infty}\int_{\mathbb{R}^{4}}|\varphi(x,|u_{j}|)-\varphi(x,|u_{\infty}|)|dx=0.$
Hence (3.4) holds. In view of (3.2), we obtain by using (3.3), (3.4)
and the generalized Lebesgue dominated convergence theorem
$\lim_{j\rightarrow\infty}\int_{\mathbb{R}^{4}}\frac{F(x,u_{j})}{|x|^{\beta}}dx=\int_{\mathbb{R}^{4}}\frac{F(x,u_{\infty})}{|x|^{\beta}}dx.$
Therefore $J_{\epsilon}(u_{j})\rightarrow J_{\epsilon}(u_{\infty})$ as
$j\rightarrow\infty$. In a similar way we can prove
$J_{\epsilon}^{\prime}(u_{j})\rightarrow J_{\epsilon}^{\prime}(u_{\infty})$ in
$E^{*}$ as $j\rightarrow\infty$. Hence (3.1) holds.
It is easy to see that the critical point $u_{\epsilon}$ of $J_{\epsilon}$ is
a weak solution to (1.14) and the critical point $u$ of $J$ is a weak solution
to (1.20). Thus, to find weak solutions to (1.14) or (1.20), it suffices to
find critical points of $J_{\epsilon}$ or $J$ in the function space $E$.
### 3.2 The geometry of the functional
In this subsection, we describe the geometry of the functional $J_{\epsilon}$.
Lemma 3.1. Assume that $(H_{2})$ and $(H_{3})$ are satisfied. Then
$J_{\epsilon}(tu)\rightarrow-\infty$ as $t\rightarrow+\infty$, for all
compactly supported $u\in W^{2,2}(\mathbb{R}^{4})\setminus\\{0\\}$.
Proof. Assume $u$ is supported in a bounded domain $\Omega$. Since $f(x,s)$ is
continuous, in view of $(H_{2})$, there exist constants $c_{1}$, $c_{2}>0$
such that $F(x,s)\geq c_{1}|s|^{\mu}-c_{2}$ for all
$(x,s)\in\overline{\Omega}\times\mathbb{R}$. It then follows that
$\displaystyle J_{\epsilon}(tu)$ $\displaystyle=$
$\displaystyle\frac{t^{2}}{2}\int_{\Omega}\left(|\Delta u|^{2}+a(x)|\nabla
u|^{2}+b(x)u^{2}\right)dx-\int_{\Omega}\frac{F(x,tu)}{|x|^{\beta}}dx-\epsilon\,t\int_{\Omega}hudx$
$\displaystyle\leq$ $\displaystyle\frac{t^{2}}{2}\int_{\Omega}\left(|\Delta
u|^{2}+a(x)|\nabla u|^{2}+b(x)u^{2}\right)dx-
c_{1}t^{\mu}\int_{\Omega}\frac{|u|^{\mu}}{|x|^{\beta}}dx+O(t).$
Then the lemma holds since $\mu>2$.$\hfill\Box$
Lemma 3.2. Assume that $(A_{1})$, $(H_{1})$ and $(H_{4})$ hold. Then there
exists $\epsilon_{1}>0$ such that for any $\epsilon:$
$0<\epsilon<\epsilon_{1}$, there exist $r_{\epsilon}>0$,
$\vartheta_{\epsilon}>0$ such that $J_{\epsilon}(u)\geq\vartheta_{\epsilon}$
for all $u$ with $\|u\|_{E}=r_{\epsilon}$. Furthermore $r_{\epsilon}$ can be
chosen such that $r_{\epsilon}\rightarrow 0$ as $\epsilon\rightarrow 0$. When
$\epsilon=0$, there exists $r_{0}>0$ such that if $r\leq r_{0}$, then there
exists $\vartheta>0$ depending only on $r$ such that $J(u)\geq\vartheta$ for
all $u$ with $\|u\|_{E}=r$.
Proof. By $(H_{4})$, there exist $\tau$, $\delta>0$ such that if
$|s|\leq\delta$, then
$|F(x,s)|\leq\frac{\lambda_{\beta}-\tau}{2}|s|^{2}$
for all $x\in\mathbb{R}^{4}$. By $(H_{1})$, there holds for $|s|\geq\delta$
$\displaystyle{}|F(x,s)|$ $\displaystyle\leq$
$\displaystyle\int_{0}^{|s|}\left\\{b_{1}t+b_{2}t^{\gamma}(e^{\alpha_{0}t^{2}}-1)\right\\}dt$
$\displaystyle\leq$
$\displaystyle\frac{b_{1}}{2}s^{2}+{b_{2}}|s|^{\gamma+1}(e^{\alpha_{0}s^{2}}-1)$
$\displaystyle\leq$ $\displaystyle C|s|^{q}(e^{\alpha_{0}s^{2}}-1){}$
for any $q>\gamma+1\geq 2$, where $C$ is a constant depending only on $b_{1}$,
$b_{2}$, $q$ and $\delta$. Combining the above two inequalities, we obtain for
all $s\in\mathbb{R}$
$|F(x,s)|\leq\frac{\lambda_{\beta}-\tau}{2}|s|^{2}+C|s|^{q}(e^{\alpha_{0}s^{2}}-1).$
(3.6)
Recall that $\|u\|_{E}$ and $\lambda_{\beta}$ are defined by (1.16) and (1.17)
respectively. It follows from (3.6) that
$\displaystyle J_{\epsilon}(u)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\|u\|_{E}^{2}-\int_{\mathbb{R}^{4}}\frac{F(x,u)}{|x|^{\beta}}dx-\epsilon\int_{\mathbb{R}^{4}}hudx{}$
(3.7) $\displaystyle\geq$
$\displaystyle\frac{1}{2}\|u\|_{E}^{2}-\frac{\lambda_{\beta}-\tau}{2\lambda_{\beta}}\|u\|_{E}^{2}-C\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}u^{2}}-1}{|x|^{\beta}}|u|^{q}dx-\epsilon\int_{\mathbb{R}^{4}}hudx{}$
$\displaystyle\geq$
$\displaystyle\frac{\tau}{2\lambda_{\beta}}\|u\|_{E}^{2}-C\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}u^{2}}-1}{|x|^{\beta}}|u|^{q}dx-\epsilon\|h\|_{E^{*}}\|u\|_{E},$
where
$\|h\|_{E^{*}}=\sup_{\|\varphi\|_{E}=1}\left|\int_{\mathbb{R}^{4}}h\varphi
dx\right|.$
Using the Hölder inequality, Lemma 2.1 and the continuous embedding
$E\hookrightarrow L^{q}(\mathbb{R}^{4})$, we have
$\displaystyle\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}u^{2}}-1}{|x|^{\beta}}|u|^{q}dx$
$\displaystyle\leq$
$\displaystyle\left(\int_{\mathbb{R}^{4}}\frac{(e^{\alpha_{0}u^{2}}-1)^{r\,^{\prime}}}{|x|^{\beta
r\,^{\prime}}}dx\right)^{1/r\,^{\prime}}\left(\int_{\mathbb{R}^{4}}|u|^{qr}dx\right)^{1/r}{}$
(3.8) $\displaystyle\leq$ $\displaystyle
C\left(\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}r\,^{\prime}u^{2}}-1}{|x|^{\beta
r\,^{\prime}}}dx\right)^{1/r\,^{\prime}}\|u\|_{E}^{q},$
where $1/r+1/r\,^{\prime}=1$, $0\leq\beta r\,^{\prime}<4$ and $C$ is a
constant such that $\|u\|_{L^{qr}(\mathbb{R}^{4})}\leq C^{1/q}\|u\|_{E}$. Here
and in the sequel we often denote various constants by the same $C$. By
$(A_{1})$,
$\int_{\mathbb{R}^{4}}\left(|\Delta u|^{2}+a_{0}|\nabla
u|^{2}+b_{0}u^{2}\right)dx\leq\|u\|_{E}^{2}.$
Theorem 1.1 implies that if
$\|u\|_{E}^{2}<\frac{16\pi^{2}}{\alpha_{0}r\,^{\prime}}\left(1-\frac{\beta
r\,^{\prime}}{4}\right),$ (3.9)
then
$\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}r\,^{\prime}u^{2}}-1}{|x|^{\beta
r\,^{\prime}}}dx\leq C$
for some constant $C$ depending only on $\alpha_{0}$, $\beta$ and
$r\,^{\prime}$. This together with (3.8) gives
$\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}u^{2}}-1}{|x|^{\beta}}|u|^{q}dx\leq
C\|u\|_{E}^{q},$ (3.10)
provided that $u$ satisfies (3.9). Hence, assuming (3.9), we obtain by
combining (3.7) and (3.10)
$J_{\epsilon}(u)\geq\|u\|_{E}\left(\frac{\tau}{2\lambda_{\beta}}\|u\|_{E}-C\|u\|_{E}^{q-1}-\epsilon\|h\|_{E^{*}}\right).$
(3.11)
Since $\tau>0$, there holds for sufficiently small $r>0$,
$\frac{\tau}{2\lambda_{\beta}}r-Cr^{q-1}\geq\frac{\tau}{4\lambda_{\beta}}r.$
If $h\not\equiv 0$, for sufficiently small $\epsilon>0$, we may take
$r_{\epsilon}$ and $\vartheta_{\epsilon}$ such that
$\frac{\tau}{4\lambda_{\beta}}r_{\epsilon}=2\epsilon\|h\|_{E^{*}},\quad\vartheta_{\epsilon}=\epsilon
r_{\epsilon}\|h\|_{E^{*}}.$
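Indeed, for $\|u\|_{E}=r_{\epsilon}$, the inequality (3.11) together with the smallness of $r_{\epsilon}$ gives $J_{\epsilon}(u)\geq r_{\epsilon}\left(\frac{\tau}{4\lambda_{\beta}}r_{\epsilon}-\epsilon\|h\|_{E^{*}}\right)=r_{\epsilon}\left(2\epsilon\|h\|_{E^{*}}-\epsilon\|h\|_{E^{*}}\right)=\vartheta_{\epsilon}$.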
This implies $J_{\epsilon}(u)\geq\vartheta_{\epsilon}$ for all $u$ with
$\|u\|_{E}=r_{\epsilon}$ and $r_{\epsilon}\rightarrow 0$ as
$\epsilon\rightarrow 0$. If $\epsilon=0$, (3.11) implies
$J(u)\geq\|u\|_{E}\left(\frac{\tau}{2\lambda_{\beta}}\|u\|_{E}-C\|u\|_{E}^{q-1}\right).$
Hence there exists some $r_{0}>0$ such that if $r\leq r_{0}$, then
$J(u)\geq\frac{\tau r^{2}}{4\lambda_{\beta}}$ for all $u$ with $\|u\|_{E}=r$.
$\hfill\Box$
Lemma 3.3. Assume $h\not\equiv 0$, $(A_{1})$ and $(H_{1})$ hold. Then there
exist $\tau>0$ and $v\in E$ with $\|v\|_{E}=1$ such that $J_{\epsilon}(tv)<0$
for all $t$: $0<t<\tau$. In particular,
$\inf_{\|u\|_{E}\leq\tau}J_{\epsilon}(u)<0$.
Proof. For any fixed $h\in E^{*}$, one can view $h$ as a linear functional
defined on $E$ by
$\int_{\mathbb{R}^{4}}hudx,\quad\forall u\in E.$
By $(A_{1})$, $E$ is a Hilbert space under the inner product
$\langle u,v\rangle=\int_{\mathbb{R}^{4}}\left(\Delta u\Delta v+a(x)\nabla
u\nabla v+b(x)uv\right)dx.$
By the Riesz representation theorem,
$\Delta^{2}u-{\rm div}(a(x)\nabla u)+b(x)u=\epsilon h\quad{\rm
in}\quad\mathbb{R}^{4}$
has a unique weak solution $u\in E$. If $h\not\equiv 0$, then for any fixed
$\epsilon>0$, we have $u\not\equiv 0$ and
$\epsilon\int_{\mathbb{R}^{4}}hudx=\|u\|_{E}^{2}>0.$
A simple calculation shows
$\frac{d}{dt}J_{\epsilon}(tu)=t\|u\|_{E}^{2}-\int_{\mathbb{R}^{4}}\frac{f(x,tu)}{|x|^{\beta}}udx-\epsilon\int_{\mathbb{R}^{4}}hudx.$
(3.12)
By $(H_{1})$, we have
$\displaystyle\left|\int_{\mathbb{R}^{4}}\frac{f(x,tu)}{|x|^{\beta}}udx\right|$
$\displaystyle\leq$ $\displaystyle
b_{1}|t|\int_{\mathbb{R}^{4}}\frac{u^{2}}{|x|^{\beta}}dx+b_{2}|t|^{\gamma}\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}t^{2}u^{2}}-1}{|x|^{\beta}}|u|^{1+\gamma}dx{}$
$\displaystyle\leq$
$\displaystyle\frac{b_{1}|t|}{\lambda_{\beta}}\|u\|_{E}^{2}+b_{2}|t|^{\gamma}\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}t^{2}u^{2}}-1}{|x|^{\beta}}|u|^{1+\gamma}dx.$
(3.13)
Using the same argument as in the proof of (3.10), one sees that there exists
some $t_{0}>0$ such that if $|t|<t_{0}$, then
$\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}t^{2}u^{2}}-1}{|x|^{\beta}}|u|^{\gamma+1}dx\leq
C$
for some constant $C$ depending only on $t_{0}$, $\alpha_{0}$ and $\beta$. It
then follows from (3.13) that
$\lim_{t\rightarrow 0}\int_{\mathbb{R}^{4}}\frac{f(x,tu)}{|x|^{\beta}}udx=0.$
This together with (3.12) implies that there exists some $\delta>0$ such that
$\frac{d}{dt}J_{\epsilon}(tu)<0,$
provided that $0<t<\delta$. Since $J_{\epsilon}(0)=0$, we have
$J_{\epsilon}(tu)<0$ for all $0<t<\delta$. $\hfill\Box$
### 3.3 Min-max level
In this subsection, we estimate the min-max level of $J_{\epsilon}$ or $J$. To
do this, we define a sequence of functions $\widetilde{\phi}_{n}$ by
$\widetilde{\phi}_{n}(x)=\left\{\begin{array}{ll}\sqrt{\frac{\log n}{8\pi^{2}}}-\frac{n^{2}|x|^{2}}{\sqrt{32\pi^{2}\log n}}+\frac{1}{\sqrt{32\pi^{2}\log n}}&{\rm for}\quad|x|\leq{1}/{n},\\ \frac{1}{\sqrt{8\pi^{2}\log n}}\log\frac{1}{|x|}&{\rm for}\quad{1}/{n}<|x|\leq 1,\\ \zeta_{n}(x)&{\rm for}\quad|x|>1,\end{array}\right.$
where $\zeta_{n}\in C_{0}^{\infty}(\mathbb{B}_{2}(0))$,
$\zeta_{n}\mid_{\partial\mathbb{B}_{1}(0)}=\zeta_{n}\mid_{\partial\mathbb{B}_{2}(0)}=0$,
$\frac{\partial\zeta_{n}}{\partial\nu}\mid_{\partial\mathbb{B}_{1}(0)}=\frac{1}{\sqrt{8\pi^{2}\log
n}}$,
$\frac{\partial\zeta_{n}}{\partial\nu}\mid_{\partial\mathbb{B}_{2}(0)}=0$, and
$\zeta_{n}$, $|\nabla\zeta_{n}|$, $\Delta\zeta_{n}$ are all $O(1/\sqrt{\log
n})$. One can check that $\widetilde{\phi}_{n}\in
W_{0}^{2,2}(\mathbb{B}_{2}(0))\subset W^{2,2}(\mathbb{R}^{4})$.
Straightforward calculations show that
$\|\widetilde{\phi}_{n}\|_{2}^{2}=O(1/{\log
n}),\,\,\|\nabla\widetilde{\phi}_{n}\|_{2}^{2}=O(1/{\log
n}),\,\,\|\Delta\widetilde{\phi}_{n}\|_{2}^{2}=1+O(1/{\log n})$
and thus
$\|\widetilde{\phi}_{n}\|_{E}^{2}=1+O(1/{\log n}).$
Set
$\phi_{n}(x)=\frac{\widetilde{\phi}_{n}(x)}{\|\widetilde{\phi}_{n}\|_{E}}$
so that $\|\phi_{n}\|_{E}=1$. It is not difficult to see that
$\phi_{n}^{2}(x)\geq{\frac{\log n}{8\pi^{2}}}+O(1)\quad{\rm for}\quad|x|\leq
1/n.$ (3.14)
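Indeed, for $|x|\leq 1/n$ one can write $\widetilde{\phi}_{n}(x)=\sqrt{\frac{\log n}{8\pi^{2}}}+\frac{1-n^{2}|x|^{2}}{\sqrt{32\pi^{2}\log n}}\geq\sqrt{\frac{\log n}{8\pi^{2}}}$, and dividing by $\|\widetilde{\phi}_{n}\|_{E}^{2}=1+O(1/\log n)$ gives (3.14).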
Lemma 3.4. Assume $(H_{2})$ and $(H_{3})$. There exists a sufficiently large
$\nu_{0}>0$ such that if
$\liminf_{s\rightarrow+\infty}sf(x,s)e^{-\alpha_{0}s^{2}}>\nu_{0}$
uniformly with respect to $x\in\mathbb{R}^{4}$, then there exists some
$n\in\mathbb{N}$ such that
$\max_{t\geq
0}\left(\frac{t^{2}}{2}-\int_{\mathbb{R}^{4}}\frac{F(x,t\phi_{n})}{|x|^{\beta}}dx\right)<\frac{16\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
Proof. Suppose by contradiction that for any large $\nu>0$
$\liminf_{s\rightarrow+\infty}sf(x,s)e^{-\alpha_{0}s^{2}}>\nu$ (3.15)
uniformly with respect to $x\in\mathbb{R}^{4}$, but for all $n\geq 2$
$\max_{t\geq
0}\left(\frac{t^{2}}{2}-\int_{\mathbb{R}^{4}}\frac{F(x,t\phi_{n})}{|x|^{\beta}}dx\right)\geq\frac{16\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
(3.16)
By $(H_{2})$, we have $F(x,t\phi_{n})>0$ and $f(x,t\phi_{n})>0$ when
$t\phi_{n}(x)>0$. In addition, $F(x,t\phi_{n})\geq 0$ for all
$x\in\mathbb{R}^{4}$. Furthermore, by $(H_{3})$, there exist constants
$C_{1}$, $R_{0}$ and $M_{0}$ such that $F(x,s)\geq C_{1}e^{s/M_{0}}$ for $s\geq
R_{0}$. Hence we have
$\int_{\mathbb{R}^{4}}\frac{F(x,t\phi_{n})}{|x|^{\beta}}dx\geq
C_{1}\int_{t\phi_{n}\geq R_{0}}\frac{e^{t\phi_{n}/M_{0}}}{|x|^{\beta}}dx.$
(3.17)
For each fixed $n$, one can choose sufficiently large $T_{n}$ such that if
$t\geq T_{n}$, then $t\phi_{n}\geq R_{0}$ on $\mathbb{B}_{R_{0}}(0)$. Thus for
$t\geq T_{n}$ we have
$\int_{t\phi_{n}\geq
R_{0}}\frac{e^{t\phi_{n}/M_{0}}}{|x|^{\beta}}dx\geq\frac{t^{3}}{6M_{0}^{3}}\int_{t\phi_{n}\geq
R_{0}}\frac{\phi_{n}^{3}}{|x|^{\beta}}dx\geq\frac{t^{3}}{6M_{0}^{3}}\int_{|x|\leq
r/n}\frac{\phi_{n}^{3}}{|x|^{\beta}}dx.$ (3.18)
Combining (3.17) and (3.18) we have
$\lim_{t\rightarrow+\infty}\left(\frac{t^{2}}{2}-\int_{\mathbb{R}^{4}}\frac{F(x,t\phi_{n})}{|x|^{\beta}}dx\right)=-\infty.$
(3.19)
Since $F(x,0)=0$, it follows from (3.16) and (3.19) that there exists
$t_{n}>0$ such that
$\frac{t_{n}^{2}}{2}-\int_{\mathbb{R}^{4}}\frac{F(x,t_{n}\phi_{n})}{|x|^{\beta}}dx=\max_{t\geq
0}\left(\frac{t^{2}}{2}-\int_{\mathbb{R}^{4}}\frac{F(x,t\phi_{n})}{|x|^{\beta}}dx\right).$
(3.20)
Clearly there holds at $t=t_{n}$
$\frac{d}{dt}\left(\frac{t^{2}}{2}-\int_{\mathbb{R}^{4}}\frac{F(x,t\phi_{n})}{|x|^{\beta}}dx\right)=0.$
Hence
$t_{n}^{2}=\int_{\mathbb{R}^{4}}\frac{t_{n}\phi_{n}f(x,t_{n}\phi_{n})}{|x|^{\beta}}dx.$
Now we prove that $\\{t_{n}\\}$ is bounded. Suppose not. By (3.14), (3.15),
(3.16) and (3.20), there holds for sufficiently large $n$
$t_{n}^{2}\geq\frac{\nu}{2}\int_{|x|\leq
1/n}\frac{e^{\alpha_{0}(t_{n}\phi_{n})^{2}}}{|x|^{\beta}}dx\geq\frac{\nu}{2}\frac{\omega_{3}}{4-\beta}\frac{1}{n^{4-\beta}}e^{\alpha_{0}t_{n}^{2}\left(\frac{\log
n}{8\pi^{2}}+O(1)\right)},$ (3.21)
where $\omega_{3}$ is the area of the unit sphere in $\mathbb{R}^{4}$. It
follows that
$1\geq\frac{\nu}{2}\frac{\omega_{3}}{4-\beta}n^{\alpha_{0}t_{n}^{2}\left(\frac{1}{8\pi^{2}}+o(1)\right)+\beta-4-\frac{2}{\log
n}\log t_{n}}.$
Letting $n\rightarrow\infty$, we get a contradiction because the right-hand
side tends to $+\infty$.
Next we prove that
$t_{n}^{2}\rightarrow\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right)$
as $n\rightarrow\infty$. By $(H_{2})$, $F(x,t_{n}\phi_{n})\geq 0$. It then
follows from (3.16) and (3.20) that
${t_{n}^{2}}\geq\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
(3.22)
Suppose that
$\lim_{n\rightarrow\infty}t_{n}^{2}>\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right)$.
Then $\frac{\alpha_{0}t_{n}^{2}}{8\pi^{2}}\geq 4-\beta+\delta$ for some
$\delta>0$ and all sufficiently large $n$, so the right-hand side of (3.21)
grows at least like a positive power of $n$, while the left-hand side
$t_{n}^{2}$ stays bounded; this contradicts the boundedness of $\{t_{n}\}$.
Thus we have
$t_{n}^{2}\rightarrow\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right)$
as $n\rightarrow\infty$.
By (3.15), there exists some $s_{0}>0$ such that
$sf(x,s)e^{-\alpha_{0}s^{2}}\geq{\nu}/{2}$ for all $s\geq s_{0}$ and all
$x\in\mathbb{R}^{4}$. Since (3.21) holds for sufficiently large $n$, we have by
combining (3.21) and (3.22) that
$\nu\leq\frac{8}{\omega_{3}}t_{n}^{2}e^{-\alpha_{0}t_{n}^{2}O(1)}.$
Letting $n\rightarrow\infty$, we get $\nu\leq C$ for some constant $C$
depending only on $\alpha_{0}$. This contradicts the arbitrariness of $\nu$ and
completes the proof of the lemma. $\hfill\Box$
### 3.4 Palais-Smale sequence
In this subsection, we want to show that the weak limit of a Palais-Smale
sequence for $J_{\epsilon}$ is a weak solution of (1.14). To this
end, we need the following convergence result, which is a generalization of
Lemma 2.1 in [14] and Lemma 3.7 in [17].
Lemma 3.5. Let
$f:\mathbb{R}^{N}\times\mathbb{R}\rightarrow\mathbb{R}\,\,(N\geq 1)$ be a
measurable function. Assume for any $\eta>0$, there exists some constant $c$
depending only on $\eta$ such that $|f(x,s)|\leq c|s|$ for all
$(x,s)\in\mathbb{R}^{N}\times[-\eta,\eta]$. Let
$\phi:\mathbb{R}^{N}\rightarrow\mathbb{R}$ be a nonnegative measurable
function, $(u_{n})$ be a sequence of functions with $u_{n}\rightarrow u$ in
$\mathbb{R}^{N}$ almost everywhere, $\phi u_{n}\rightarrow\phi u$ strongly in
$L^{1}(\mathbb{R}^{N})$, $\phi f(x,u)\in L^{1}(\mathbb{R}^{N})$ and
$\int_{\mathbb{R}^{N}}\phi|f(x,u_{n})u_{n}|dx\leq C$ (3.23)
for all $n$. Then, up to a subsequence, we have $\phi
f(x,u_{n})\rightarrow\phi f(x,u)$ strongly in $L^{1}(\mathbb{R}^{N})$.
Proof. Since $u,\,\phi f(x,u)\in L^{1}(\mathbb{R}^{N})$, we have
$\lim_{\eta\rightarrow+\infty}\int_{|u|\geq\eta}\phi|f(x,u)|dx=0.$
Let $C$ be the constant in (3.23). Given any $\epsilon>0$, one can select some
$M>{C}/{\epsilon}$ such that
$\int_{|u|\geq M}\phi|f(x,u)|dx<\epsilon.$ (3.24)
It follows from (3.23) that
$\int_{|u_{n}|\geq M}\phi|f(x,u_{n})|dx\leq\frac{1}{M}\int_{|u_{n}|\geq
M}\phi|f(x,u_{n})u_{n}|dx<\epsilon.$ (3.25)
For all $x\in\\{x\in\mathbb{R}^{N}:|u_{n}|<M\\}$, by our assumption, there
exists a constant $C_{1}$ depending only on $M$ such that $|f(x,u_{n}(x))|\leq
C_{1}|u_{n}(x)|$. Notice that $\phi u_{n}\rightarrow\phi u$ strongly in
$L^{1}(\mathbb{R}^{N})$ and $u_{n}\rightarrow u$ almost everywhere in
$\mathbb{R}^{N}$. By a generalized Lebesgue’s dominated convergence theorem,
up to a subsequence we obtain
$\lim_{n\rightarrow\infty}\int_{|u_{n}|<M}\phi|f(x,u_{n})|dx=\int_{|u|<M}\phi|f(x,u)|dx.$
(3.26)
Combining (3.24), (3.25) and (3.26), we can find some $K>0$ such that when
$n>K$,
$\left|\int_{\mathbb{R}^{N}}\phi|f(x,u_{n})|dx-\int_{\mathbb{R}^{N}}\phi|f(x,u)|dx\right|<3\epsilon.$
Hence $\|\phi f(x,u_{n})\|_{L^{1}(\mathbb{R}^{N})}\rightarrow\|\phi
f(x,u)\|_{L^{1}(\mathbb{R}^{N})}$ as $n\rightarrow\infty$. Since $\phi
u_{n}\rightarrow\phi u$ in $\mathbb{R}^{N}$ almost everywhere, we get the
desired result. $\hfill\Box$
Lemma 3.6. Assume $(A_{1})$ and $(A_{2})$. Then we have the compact embedding
$E\hookrightarrow L^{q}(\mathbb{R}^{4})\quad{\rm for\,\,all}\quad q\geq 1.$
Proof. By $(A_{1})$, the Sobolev embedding theorem implies the following
continuous embedding
$E\hookrightarrow W^{2,2}(\mathbb{R}^{4})\hookrightarrow
L^{q}(\mathbb{R}^{4})\quad{\rm for\,\,all}\quad 2\leq q<\infty.$
It follows from the Hölder inequality and $(A_{2})$ that
$\int_{\mathbb{R}^{4}}|u|dx\leq\left(\int_{\mathbb{R}^{4}}\frac{1}{b(x)}dx\right)^{1/2}\left(\int_{\mathbb{R}^{4}}b(x)u^{2}dx\right)^{1/2}\leq\left(\int_{\mathbb{R}^{4}}\frac{1}{b(x)}dx\right)^{1/2}\|u\|_{E}.$
For any $\gamma:1<\gamma<2$, there holds
$\int_{\mathbb{R}^{4}}|u|^{\gamma}dx\leq\int_{\mathbb{R}^{4}}(|u|+u^{2})dx\leq\left(\int_{\mathbb{R}^{4}}\frac{1}{b(x)}dx\right)^{1/2}\|u\|_{E}+\frac{1}{b_{0}}\|u\|_{E}^{2},$
where $b_{0}$ is given by $(A_{1})$. Thus we get continuous embedding
$E\hookrightarrow L^{q}(\mathbb{R}^{4})$ for all $q\geq 1$.
To prove that the above embedding is also compact, take a sequence
$(u_{k})\subset E$ such that $\|u_{k}\|_{E}\leq C$ for all $k$; we must show
that, up to a subsequence, $u_{k}$ converges strongly in $L^{q}(\mathbb{R}^{4})$
to some $u\in E$ for all $q\geq 1$. Without loss of generality, up to a
subsequence, we may assume
$\left\{\begin{array}{ll}u_{k}\rightharpoonup u&{\rm weakly\,\,in}\quad E,\\ u_{k}\rightarrow u&{\rm strongly\,\,in}\quad L^{q}_{\rm loc}(\mathbb{R}^{4}),\,\,\forall q\geq 1,\\ u_{k}\rightarrow u&{\rm almost\,\,everywhere\,\,in}\quad\mathbb{R}^{4}.\end{array}\right.$
(3.27)
In view of $(A_{2})$, for any $\epsilon>0$, there exists $R>0$ such that
$\left(\int_{|x|>R}\frac{1}{b}dx\right)^{1/2}<\epsilon.$
Hence
$\int_{|x|>R}|u_{k}-u|dx\leq\left(\int_{|x|>R}\frac{1}{b}dx\right)^{1/2}\left(\int_{|x|>R}b|u_{k}-u|^{2}dx\right)^{1/2}\leq\epsilon\|u_{k}-u\|_{E}\leq
C\epsilon.$ (3.28)
On the other hand, it follows from (3.27) that $u_{k}\rightarrow u$ strongly
in $L^{1}(\mathbb{B}_{R})$. This together with (3.28) gives
$\limsup_{k\rightarrow\infty}\int_{\mathbb{R}^{4}}|u_{k}-u|dx\leq C\epsilon.$
Since $\epsilon$ is arbitrary, we obtain
$\lim_{k\rightarrow\infty}\int_{\mathbb{R}^{4}}|u_{k}-u|dx=0.$
For $q>1$, it follows from the continuous embedding $E\hookrightarrow
L^{s}(\mathbb{R}^{4})$ ($s\geq 1$) that
$\displaystyle\int_{\mathbb{R}^{4}}|u_{k}-u|^{q}dx$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{4}}|u_{k}-u|^{\frac{1}{2}}|u_{k}-u|^{(q-\frac{1}{2})}dx$
$\displaystyle\leq$
$\displaystyle\left(\int_{\mathbb{R}^{4}}|u_{k}-u|dx\right)^{1/2}\left(\int_{\mathbb{R}^{4}}|u_{k}-u|^{2q-1}dx\right)^{1/2}$
$\displaystyle\leq$ $\displaystyle
C\left(\int_{\mathbb{R}^{4}}|u_{k}-u|dx\right)^{1/2}\rightarrow 0$
as $k\rightarrow\infty$. This concludes the lemma. $\hfill\Box$
Lemma 3.7. Assume that $(A_{1})$, $(A_{2})$, $(H_{1})$, $(H_{2})$ and
$(H_{3})$ are satisfied. Let $(u_{n})\subset E$ be an arbitrary Palais-Smale
sequence of $J_{\epsilon}$, i.e.,
$J_{\epsilon}(u_{n})\rightarrow c,\,\,J^{\prime}_{\epsilon}(u_{n})\rightarrow
0\,\,{\rm in}\,\,E^{*}\,\,{\rm as}\,\,n\rightarrow\infty.$ (3.29)
Then there exist a subsequence of $(u_{n})$ (still denoted by $(u_{n})$) and
$u\in E$ such that $u_{n}\rightharpoonup u$ weakly in $E$, $u_{n}\rightarrow
u$ strongly in $L^{q}(\mathbb{R}^{4})$ for all $q\geq 1$, and
$\left\{\begin{array}{ll}\frac{f(x,\,u_{n})}{|x|^{\beta}}\rightarrow\frac{f(x,\,u)}{|x|^{\beta}}&{\rm strongly\,\,in}\,\,L^{1}(\mathbb{R}^{4}),\\ \frac{F(x,\,u_{n})}{|x|^{\beta}}\rightarrow\frac{F(x,\,u)}{|x|^{\beta}}&{\rm strongly\,\,in}\,\,L^{1}(\mathbb{R}^{4}).\end{array}\right.$
Furthermore $u$ is a weak solution of (1.14).
Proof. Assume $(u_{n})$ is a Palais-Smale sequence of $J_{\epsilon}$. By
(3.29), we have
$\displaystyle\frac{1}{2}\int_{\mathbb{R}^{4}}\left(|\Delta
u_{n}|^{2}+a|\nabla
u_{n}|^{2}+bu_{n}^{2}\right)dx-\int_{\mathbb{R}^{4}}\frac{F(x,u_{n})}{|x|^{\beta}}dx-\epsilon\int_{\mathbb{R}^{4}}hu_{n}dx\rightarrow
c\,\,{\rm as}\,\,n\rightarrow\infty,$ (3.31)
$\displaystyle\left|\int_{\mathbb{R}^{4}}(\Delta u_{n}\Delta\psi+a\nabla
u_{n}\nabla\psi+bu_{n}\psi)dx-\int_{\mathbb{R}^{4}}\frac{f(x,u_{n})}{|x|^{\beta}}\psi
dx-\epsilon\int_{\mathbb{R}^{4}}h\psi dx\right|\leq\tau_{n}\|\psi\|_{E}$
(3.32)
for all $\psi\in E$, where $\tau_{n}\rightarrow 0$ as $n\rightarrow\infty$. By
$(H_{2})$, $0\leq\mu F(x,u_{n})\leq u_{n}f(x,u_{n})$ for some $\mu>2$. Taking
$\psi=u_{n}$ in (3.32) and multiplying (3.31) by $\mu$, we have
$\displaystyle\left(\frac{\mu}{2}-1\right)\|u_{n}\|_{E}^{2}$
$\displaystyle\leq$
$\displaystyle\left(\frac{\mu}{2}-1\right)\|u_{n}\|_{E}^{2}-\int_{\mathbb{R}^{4}}\frac{\mu
F(x,u_{n})-f(x,u_{n})u_{n}}{|x|^{\beta}}dx$ $\displaystyle\leq$
$\displaystyle\mu|c|+\tau_{n}\|u_{n}\|_{E}+(\mu+1)\epsilon\|h\|_{E^{*}}\|u_{n}\|_{E}$
Therefore $\|u_{n}\|_{E}$ is bounded. It then follows from (3.31), (3.32) that
$\int_{\mathbb{R}^{4}}\frac{f(x,u_{n})u_{n}}{|x|^{\beta}}dx\leq
C,\quad\int_{\mathbb{R}^{4}}\frac{F(x,u_{n})}{|x|^{\beta}}dx\leq C.$
Notice that $f(x,u_{n})u_{n}\geq 0$ and $F(x,u_{n})\geq 0$. By Lemma 3.6, up
to a subsequence, $u_{n}\rightarrow u$ strongly in $L^{q}(\mathbb{R}^{4})$ for
some $u\in E$, $\forall q\geq 1$. Then we obtain by applying Lemma 3.5 (here
$N=4$ and $\phi=|x|^{-\beta}$),
$\lim_{n\rightarrow\infty}\int_{\mathbb{R}^{4}}\frac{|f(x,\,u_{n})-f(x,\,u)|}{|x|^{\beta}}dx=0.$
(3.33)
By $(H_{1})$ and $(H_{3})$, there exist constants $c_{1}$, $c_{2}>0$ such that
$F(x,u_{n})\leq c_{1}u_{n}^{2}+c_{2}|f(x,u_{n})|.$
In view of (3.33) and Lemma 3.6, it follows from the generalized Lebesgue’s
dominated convergence theorem
$\lim_{n\rightarrow\infty}\int_{\mathbb{R}^{4}}\frac{|F(x,\,u_{n})-F(x,\,u)|}{|x|^{\beta}}dx=0.$
Finally passing to the limit $n\rightarrow\infty$ in (3.32), we have
$\int_{\mathbb{R}^{4}}\left(\Delta u\Delta\psi+a\nabla
u\nabla\psi+bu\psi\right)dx-\int_{\mathbb{R}^{4}}\frac{f(x,u)}{|x|^{\beta}}\psi
dx-\epsilon\int_{\mathbb{R}^{4}}h\psi dx=0$
for all $\psi\in C_{0}^{\infty}(\mathbb{R}^{4})$, which is dense in $E$. Hence
$u$ is a weak solution of (1.14). $\hfill\Box$
### 3.5 Nontrivial solution
In this subsection, we will prove Theorem 1.3. It suffices to prove that the
functional $J$ has a nontrivial critical point in the function space $E$.
Proof of Theorem 1.3. Notice that $J_{\epsilon}$ becomes $J$ when
$\epsilon=0$. By Lemma 3.1 and Lemma 3.2, $J$ satisfies all the hypotheses of
the mountain-pass theorem except for the Palais-Smale condition:
$J\in\mathcal{C}^{1}(E,\mathbb{R})$; $J(0)=0$; $J(u)\geq\delta>0$ when
$\|u\|_{E}=r$; $J(e)<0$ for some $e\in E$ with $\|e\|_{E}>r$. Then using the
mountain-pass theorem without the Palais-Smale condition [30], we can find a
sequence $(u_{n})$ of $E$ such that
$J(u_{n})\rightarrow c>0,\quad J^{\prime}(u_{n})\rightarrow 0\,\,{\rm
in}\,\,E^{*},$
where
$c=\min_{\gamma\in\Gamma}\max_{u\in\gamma}J(u)\geq\delta$
is the mountain-pass level of $J$, where
$\Gamma=\\{\gamma\in\mathcal{C}([0,1],E):\gamma(0)=0,\gamma(1)=e\\}$. This is
equivalent to saying
$\displaystyle\frac{1}{2}\int_{\mathbb{R}^{4}}\left(|\Delta
u_{n}|^{2}+a|\nabla
u_{n}|^{2}+bu_{n}^{2}\right)dx-\int_{\mathbb{R}^{4}}\frac{F(x,u_{n})}{|x|^{\beta}}dx\rightarrow
c\,\,{\rm as}\,\,n\rightarrow\infty,$ (3.34)
$\displaystyle\left|\int_{\mathbb{R}^{4}}(\Delta u_{n}\Delta\psi+a\nabla
u_{n}\nabla\psi+bu_{n}\psi)dx-\int_{\mathbb{R}^{4}}\frac{f(x,u_{n})}{|x|^{\beta}}\psi
dx\right|\leq\tau_{n}\|\psi\|_{E}$ (3.35)
for $\psi\in E$, where $\tau_{n}\rightarrow 0$ as $n\rightarrow\infty$. By
Lemma 3.7 with $\epsilon=0$, up to a subsequence, there holds
$\left\{\begin{array}{l}u_{n}\rightharpoonup u\,\,{\rm weakly\,\,in}\,\,E,\\ u_{n}\rightarrow u\,\,{\rm strongly\,\,in}\,\,L^{q}(\mathbb{R}^{4}),\,\,\forall q\geq 1,\\ \lim\limits_{n\rightarrow\infty}\int_{\mathbb{R}^{4}}\frac{F(x,u_{n})}{|x|^{\beta}}dx=\int_{\mathbb{R}^{4}}\frac{F(x,u)}{|x|^{\beta}}dx,\\ u\,\,{\rm is\,\,a\,\,weak\,\,solution\,\,of}\,\,(1.20).\end{array}\right.$
(3.36)
Now suppose $u\equiv 0$. Since $F(x,0)=0$ for all $x\in\mathbb{R}^{4}$, it
follows from (3.34) and (3.36) that
$\lim_{n\rightarrow\infty}\|u_{n}\|_{E}^{2}=2c>0.$ (3.37)
Thanks to the hypothesis $(H_{5})$, we have
$0<c<\frac{16\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right)$ by applying
Lemma 3.4. Thus there exist some $\epsilon_{0}>0$ and $N>0$ such that
$\|u_{n}\|_{E}^{2}\leq\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right)-\epsilon_{0}$
for all $n>N$. Choose $q>1$ sufficiently close to $1$ such that
$q\alpha_{0}\|u_{n}\|_{E}^{2}\leq 32\pi^{2}\left(1-\frac{\beta}{4}\right)-\alpha_{0}\epsilon_{0}/2$
for all $n>N$. By $(H_{1})$,
$|f(x,u_{n})u_{n}|\leq
b_{1}u_{n}^{2}+b_{2}|u_{n}|^{1+\gamma}\left(e^{\alpha_{0}u_{n}^{2}}-1\right).$
It follows from the Hölder inequality, Lemma 2.1 and Theorem 1.1 that
$\displaystyle\int_{\mathbb{R}^{4}}\frac{|f(x,u_{n})u_{n}|}{|x|^{\beta}}dx$
$\displaystyle\leq$ $\displaystyle
b_{1}\int_{\mathbb{R}^{4}}\frac{u_{n}^{2}}{|x|^{\beta}}dx+b_{2}\int_{\mathbb{R}^{4}}\frac{|u_{n}|^{1+\gamma}\left(e^{\alpha_{0}u_{n}^{2}}-1\right)}{|x|^{\beta}}dx$
$\displaystyle\leq$ $\displaystyle
b_{1}\int_{\mathbb{R}^{4}}\frac{u_{n}^{2}}{|x|^{\beta}}dx+b_{2}\left(\int_{\mathbb{R}^{4}}\frac{|u_{n}|^{(1+\gamma)q^{\prime}}}{|x|^{\beta}}dx\right)^{1/{q^{\prime}}}\left(\int_{\mathbb{R}^{4}}\frac{\left(e^{\alpha_{0}u_{n}^{2}}-1\right)^{q}}{|x|^{\beta}}dx\right)^{1/{q}}$
$\displaystyle\leq$ $\displaystyle
b_{1}\int_{\mathbb{R}^{4}}\frac{u_{n}^{2}}{|x|^{\beta}}dx+C\left(\int_{\mathbb{R}^{4}}\frac{|u_{n}|^{(1+\gamma)q^{\prime}}}{|x|^{\beta}}dx\right)^{1/{q^{\prime}}}\rightarrow
0\quad{\rm as}\quad n\rightarrow\infty.$
Here we also used (3.36) (precisely $u_{n}\rightarrow u$ in
$L^{s}(\mathbb{R}^{4})$ for all $s\geq 1$) in the last step of the above
estimates. Inserting this into (3.35) with $\psi=u_{n}$, we have
$\|u_{n}\|_{E}\rightarrow 0\quad{\rm as}\quad n\rightarrow\infty,$
which contradicts (3.37). Therefore $u\not\equiv 0$ and we obtain a nontrivial
weak solution of (1.20). $\hfill\Box$
### 3.6 Proof of Theorem 1.4
The proof of Theorem 1.4 is similar to the first part of that of Theorem 1.3.
By Theorem 1.1, Lemma 3.1 and Lemma 3.2, there exists $\epsilon_{1}>0$ such
that when $0<\epsilon<\epsilon_{1}$, $J_{\epsilon}$ satisfies all the
hypotheses of the mountain-pass theorem except for the Palais-Smale condition:
$J_{\epsilon}\in\mathcal{C}^{1}(E,\mathbb{R})$; $J_{\epsilon}(0)=0$;
$J_{\epsilon}(u)\geq\vartheta_{\epsilon}>0$ when $\|u\|_{E}=r_{\epsilon}$;
$J_{\epsilon}(e)<0$ for some $e\in E$ with $\|e\|_{E}>r_{\epsilon}$. Then
using the mountain-pass theorem without the Palais-Smale condition, we can
find a sequence $(u_{n})$ of $E$ such that
$J_{\epsilon}(u_{n})\rightarrow c>0,\quad
J^{\prime}_{\epsilon}(u_{n})\rightarrow 0\,\,{\rm in}\,\,E^{*},$
where
$c=\min_{\gamma\in\Gamma}\max_{u\in\gamma}J_{\epsilon}(u)\geq\vartheta_{\epsilon}$
is the mountain-pass level of $J_{\epsilon}$, where
$\Gamma=\{\gamma\in\mathcal{C}([0,1],E):\gamma(0)=0,\gamma(1)=e\}$. By Lemma
3.7, there exists a subsequence of $(u_{n})$ that converges weakly in $E$ to a
solution of (1.14). $\hfill\Box$
### 3.7 A weak solution with negative energy
In this subsection, we will prove Theorem 1.5 by using Ekeland's
variational principle [37]. Let us first give two technical lemmas.
Lemma 3.8. Assume $(A_{1})$ holds. If $(u_{n})$ is a sequence in $E$ such that
$\lim_{n\rightarrow\infty}\|u_{n}\|_{E}^{2}<\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right),$
(3.38)
then ${(e^{\alpha_{0}u_{n}^{2}}-1)}/{|x|^{\beta}}$ is bounded in
$L^{q}(\mathbb{R}^{4})$ for some $q>1$.
Proof. By $(A_{1})$, we have
$\|u_{n}\|_{E}^{2}\geq\int_{\mathbb{R}^{4}}\left(|\Delta
u_{n}|^{2}+a_{0}|\nabla u_{n}|^{2}+b_{0}u_{n}^{2}\right)dx.$
Denote $v_{n}=u_{n}/\|u_{n}\|_{E}$. Then
$\int_{\mathbb{R}^{4}}\left(|\Delta v_{n}|^{2}+a_{0}|\nabla
v_{n}|^{2}+b_{0}v_{n}^{2}\right)dx\leq 1.$
By Theorem 1.1, for any $\alpha<32\pi^{2}(1-\beta/4)$, $0\leq\beta<4$, there
holds
$\int_{\mathbb{R}^{4}}\frac{e^{\alpha v_{n}^{2}}-1}{|x|^{\beta}}dx\leq
C(\alpha,\beta).$ (3.39)
By (3.38), one can choose $q>1$ sufficiently close to $1$ such that
$\lim_{n\rightarrow\infty}\alpha_{0}q\|u_{n}\|_{E}^{2}<32\pi^{2}(1-\beta
q/4).$ (3.40)
Combining Lemma 2.1, (3.39) and (3.40), we obtain
$\displaystyle\int_{\mathbb{R}^{4}}\frac{(e^{\alpha_{0}u_{n}^{2}}-1)^{q}}{|x|^{\beta
q}}dx\leq\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}q\|u_{n}\|_{E}^{2}v_{n}^{2}}-1}{|x|^{\beta
q}}dx\leq C$
for some constant $C$. Thus ${(e^{\alpha_{0}u_{n}^{2}}-1)}/{|x|^{\beta}}$ is
bounded in $L^{q}(\mathbb{R}^{4})$. $\hfill\Box$
Lemma 3.9. Assume $(A_{1})$, $(A_{2})$, $(H_{1})$ are satisfied and $(u_{n})$
is a Palais-Smale sequence for $J_{\epsilon}$ at any level with
$\liminf_{n\rightarrow\infty}\|u_{n}\|_{E}^{2}<\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
(3.41)
Then $(u_{n})$ has a subsequence converging strongly to a solution of (1.14).
Proof. By (3.41), up to a subsequence, $(u_{n})$ is bounded in $E$. In view of
Lemma 3.6, without loss of generality we can assume
$\left\{\begin{array}{l}u_{n}\rightharpoonup u_{0}\quad{\rm weakly\,\,in}\quad E,\\ u_{n}\rightarrow u_{0}\quad{\rm strongly\,\,in}\quad L^{q}(\mathbb{R}^{4}),\,\,\forall q\geq 1,\\ u_{n}\rightarrow u_{0}\quad{\rm almost\,\,everywhere\,\,in}\quad\mathbb{R}^{4}.\end{array}\right.$ (3.42)
Since $(u_{n})$ is a Palais-Smale sequence for $J_{\epsilon}$, we have
$J^{\prime}_{\epsilon}(u_{n})\rightarrow 0$ in $E^{*}$, particularly
$\displaystyle{}\int_{\mathbb{R}^{4}}\left(\Delta
u_{n}\Delta(u_{n}-u_{0})+a\nabla
u_{n}\nabla(u_{n}-u_{0})+bu_{n}(u_{n}-u_{0})\right)dx$
$\displaystyle-\int_{\mathbb{R}^{4}}\frac{f(x,u_{n})(u_{n}-u_{0})}{|x|^{\beta}}dx-\epsilon\int_{\mathbb{R}^{4}}h(u_{n}-u_{0})dx\rightarrow
0$ (3.43)
as $n\rightarrow\infty$. In view of (3.42), we have
$\int_{\mathbb{R}^{4}}\left(\Delta u_{0}\Delta(u_{n}-u_{0})+a\nabla
u_{0}\nabla(u_{n}-u_{0})+bu_{0}(u_{n}-u_{0})\right)dx\rightarrow 0\quad{\rm
as}\quad n\rightarrow\infty.$ (3.44)
Subtracting (3.44) from (3.43), we obtain
$\|u_{n}-u_{0}\|_{E}^{2}=\int_{\mathbb{R}^{4}}\frac{f(x,u_{n})(u_{n}-u_{0})}{|x|^{\beta}}dx+\epsilon\int_{\mathbb{R}^{4}}h(u_{n}-u_{0})dx+o(1).$
(3.45)
In view of $(H_{1})$ and (3.41), one can see from Lemma 3.8 that
$f(x,u_{n})/|x|^{\beta}$ is bounded in $L^{q}(\mathbb{R}^{4})$ for some $q>1$
sufficiently close to $1$. It then follows from (3.42) and the Hölder inequality that
$\displaystyle\int_{\mathbb{R}^{4}}\frac{f(x,u_{n})(u_{n}-u_{0})}{|x|^{\beta}}dx+\epsilon\int_{\mathbb{R}^{4}}h(u_{n}-u_{0})dx\rightarrow
0$
as $n\rightarrow\infty$. Inserting this into (3.45), we conclude
$u_{n}\rightarrow u_{0}$ strongly in $E$. Since
$J_{\epsilon}\in\mathcal{C}^{1}(E,\mathbb{R})$, $u_{0}$ is a weak solution of
(1.14). $\hfill\Box$
Proof of Theorem 1.5. Let $r_{\epsilon}$ be as in Lemma 3.2, namely
$J_{\epsilon}(u)>0$ for all $u:\|u\|_{E}=r_{\epsilon}$ with
$r_{\epsilon}\rightarrow 0$ as $\epsilon\rightarrow 0$. One can choose
$\epsilon_{2}:0<\epsilon_{2}<\epsilon_{1}$ such that when
$0<\epsilon<\epsilon_{2}$,
$r_{\epsilon}^{2}<\frac{32\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
(3.46)
Lemma 3.8 together with $(H_{1})$ and $(H_{2})$ implies that
$J_{\epsilon}(u)\geq-C$ for all $u\in\overline{B}_{r_{\epsilon}}=\\{u\in
E:\|u\|_{E}\leq r_{\epsilon}\\}$, where $r_{\epsilon}$ is given by (3.46). On
the other hand, thanks to Lemma 3.3, there holds $\inf_{\|u\|_{E}\leq
r_{\epsilon}}J_{\epsilon}(u)<0$. Since $\overline{B}_{r_{\epsilon}}$ is a
convex, complete metric space with the metric induced by the norm of $E$, and
the functional $J_{\epsilon}$ is of class $\mathcal{C}^{1}$ and bounded from
below on $\overline{B}_{r_{\epsilon}}$, Ekeland's variational principle yields
a sequence $(u_{n})\subset\overline{B}_{r_{\epsilon}}$ such that
$J_{\epsilon}(u_{n})\rightarrow c_{0}=\inf_{\|u\|_{E}\leq
r_{\epsilon}}J_{\epsilon}(u),$
and
$J_{\epsilon}^{\prime}(u_{n})\rightarrow 0\quad{\rm in}\quad E^{*}$
as $n\rightarrow\infty$. Observing that $\|u_{n}\|_{E}\leq r_{\epsilon}$, in
view of (3.46) and Lemma 3.9, we conclude that there exists a subsequence of
$(u_{n})$ which converges to a solution $u_{0}$ of (1.14) strongly in $E$.
Therefore $J_{\epsilon}(u_{0})=c_{0}<0$. $\hfill\Box$
### 3.8 Multiplicity results
In this subsection, we will show that two solutions obtained in Theorem 1.4
and Theorem 1.5 are distinct under some assumptions, i.e., Theorem 1.6 holds.
We need the following technical lemma:
Lemma 3.10. Let $(w_{n})$ be a sequence in $E$. Suppose $\|w_{n}\|_{E}=1$ and
$w_{n}\rightharpoonup w_{0}$ weakly in $E$. Then for any $p$ with
$0<p<\frac{1}{1-\|w_{0}\|_{E}^{2}}$, there holds
$\sup_{n}\int_{\mathbb{R}^{4}}\frac{e^{32\pi^{2}(1-\beta/4)pw_{n}^{2}}-1}{|x|^{\beta}}dx<\infty.$
(3.47)
Proof. Since $w_{n}\rightharpoonup w_{0}$ weakly in $E$ and $\|w_{n}\|_{E}=1$,
we have
$\displaystyle{}\|w_{n}-w_{0}\|_{E}^{2}$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{4}}\left(|\Delta(w_{n}-w_{0})|^{2}+a(x)|\nabla(w_{n}-w_{0})|^{2}+b(x)(w_{n}-w_{0})^{2}\right)dx$
(3.48) $\displaystyle=$ $\displaystyle
1+\|w_{0}\|_{E}^{2}-2\int_{\mathbb{R}^{4}}\left(\Delta w_{n}\Delta
w_{0}+a(x)\nabla w_{n}\nabla w_{0}+b(x)w_{n}w_{0}\right)dx$
$\displaystyle\rightarrow$ $\displaystyle 1-\|w_{0}\|_{E}^{2}\quad{\rm
as}\quad n\rightarrow\infty.$
If $w_{0}\equiv 0$, then (3.47) is a consequence of Theorem 1.1. If
$w_{0}\not\equiv 0$, using the Hölder inequality, Lemma 2.1, Theorem 1.1 and
the inequality
$\displaystyle
rs-1\leq\frac{1}{\mu}\left(r^{\mu}-1\right)+\frac{1}{\nu}\left(s^{\nu}-1\right),$
where $r\geq 0,s\geq 0,\mu>1,\nu>1,{1}/{\mu}+{1}/{\nu}=1$, we estimate
$\displaystyle{}\int_{\mathbb{R}^{4}}\frac{e^{32\pi^{2}(1-\beta/4)pw_{n}^{2}}-1}{|x|^{\beta}}dx$
$\displaystyle\leq$
$\displaystyle\int_{\mathbb{R}^{4}}\frac{e^{32\pi^{2}(1-\beta/4)p\left((1+\epsilon)(w_{n}-w_{0})^{2}+(1+\epsilon^{-1})w_{0}^{2}\right)}-1}{|x|^{\beta}}dx$
(3.49) $\displaystyle\leq$
$\displaystyle\frac{1}{q}\int_{\mathbb{R}^{4}}\frac{e^{32\pi^{2}(1-\beta/4)qp(1+\epsilon)(w_{n}-w_{0})^{2}}-1}{|x|^{\beta}}dx$
$\displaystyle+\frac{1}{q^{\prime}}\int_{\mathbb{R}^{4}}\frac{e^{32\pi^{2}(1-\beta/4)q^{\prime}p(1+\epsilon^{-1})w_{0}^{2}}-1}{|x|^{\beta}}dx,$
where $1/q+1/q^{\prime}=1$. Assume $0<p<{1}/{(1-\|w_{0}\|_{E}^{2})}$. By
(3.48), we can choose $q$ sufficiently close to $1$ and $\epsilon>0$
sufficiently small such that
$qp(1+\epsilon)\|w_{n}-w_{0}\|_{E}^{2}<1$
for large $n$. Recall that $\|u\|_{E}^{2}\geq\int_{\mathbb{R}^{4}}(|\Delta
u|^{2}+a_{0}|\nabla u|^{2}+b_{0}u^{2})dx$. Applying Theorem 1.1, we conclude
the lemma from (3.49). $\hfill\Box$
We remark that similar results were obtained in [25] for bi-Laplacian on
bounded smooth domain $\Omega\subset\mathbb{R}^{4}$ and in [17] for Laplacian
on the whole $\mathbb{R}^{2}$.
Proof of Theorem 1.6. According to Theorem 1.4 and Theorem 1.5, under the
assumptions of Theorem 1.6, there exist sequences $(v_{n})$ and $(u_{n})$ in
$E$ such that as $n\rightarrow\infty$,
$\displaystyle v_{n}\rightharpoonup u_{M}\,\,{\rm weakly\,\,in}\,\,E,\quad
J_{\epsilon}(v_{n})\rightarrow c_{M}>0,\quad|\langle
J_{\epsilon}^{\prime}(v_{n}),\phi\rangle|\leq\gamma_{n}\|\phi\|_{E}$ (3.50)
$\displaystyle{}u_{n}\rightarrow u_{0}\,\,{\rm strongly\,\,in}\,\,E,\quad
J_{\epsilon}(u_{n})\rightarrow c_{0}<0,\quad|\langle
J_{\epsilon}^{\prime}(u_{n}),\phi\rangle|\leq\tau_{n}\|\phi\|_{E}$ (3.51)
where $\gamma_{n}\rightarrow 0$ and $\tau_{n}\rightarrow 0$. Both $u_{M}$ and
$u_{0}$ are nonzero weak solutions to (1.14), since $h\not\equiv 0$ and
$\epsilon>0$. Suppose $u_{M}=u_{0}$. Then $v_{n}\rightharpoonup u_{0}$ weakly
in $E$ and thus
$\int_{\mathbb{R}^{4}}\left(\Delta v_{n}\Delta u_{0}+a\nabla v_{n}\nabla
u_{0}+bv_{n}u_{0}\right)dx\rightarrow\|u_{0}\|_{E}^{2}$
as $n\rightarrow\infty$. Using the Hölder inequality, we obtain
$\limsup_{n\rightarrow\infty}\|v_{n}\|_{E}\geq\|u_{0}\|_{E}>0.$
On one hand, by Lemma 3.7, we have
$\int_{\mathbb{R}^{4}}\frac{F(x,v_{n})}{|x|^{\beta}}dx\rightarrow\int_{\mathbb{R}^{4}}\frac{F(x,u_{0})}{|x|^{\beta}}dx\quad{\rm
as}\quad n\rightarrow\infty.$ (3.52)
Here and in the sequel, we do not distinguish sequence and subsequence. On the
other hand, it follows from Theorem 1.4 that $\|v_{n}\|_{E}$ is bounded. In
view of Lemma 3.6, it holds
$\int_{\mathbb{R}^{4}}hv_{n}dx\rightarrow\int_{\mathbb{R}^{4}}hu_{0}dx\quad{\rm
as}\quad n\rightarrow\infty.$ (3.53)
Inserting (3.52) and (3.53) into (LABEL:42), we obtain
$\frac{1}{2}\|v_{n}\|_{E}^{2}=c_{M}+\int_{\mathbb{R}^{4}}\frac{F(x,u_{0})}{|x|^{\beta}}dx+\epsilon\int_{\mathbb{R}^{4}}hu_{0}dx+o(1),$
(3.54)
where $o(1)\rightarrow 0$ as $n\rightarrow\infty$. In the same way, one can
derive
$\frac{1}{2}\|u_{n}\|_{E}^{2}=c_{0}+\int_{\mathbb{R}^{4}}\frac{F(x,u_{0})}{|x|^{\beta}}dx+\epsilon\int_{\mathbb{R}^{4}}hu_{0}dx+o(1).$
(3.55)
Combining (3.54) and (3.55), we have
$\|v_{n}\|_{E}^{2}-\|u_{0}\|_{E}^{2}=2\left(c_{M}-c_{0}+o(1)\right).$ (3.56)
Now we need to estimate $c_{M}-c_{0}$. By Lemma 3.4, there holds for
sufficiently small $\epsilon>0$,
$\max_{t\geq
0}J_{\epsilon}(t\phi_{n})<\frac{16\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
Since $c_{M}$ is the mountain-pass level of $J_{\epsilon}$, we have
$c_{M}<\frac{16\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
From the proof of Lemma 3.3, we know that $c_{0}\rightarrow 0$ as
$\epsilon\rightarrow 0$ ($c_{0}$ depends on $\epsilon$). Noting that $c_{M}>0$
and $c_{0}<0$, we obtain for sufficiently small $\epsilon>0$,
$0<c_{M}-c_{0}<\frac{16\pi^{2}}{\alpha_{0}}\left(1-\frac{\beta}{4}\right).$
(3.57)
Write
$w_{n}=\frac{v_{n}}{\|v_{n}\|_{E}},\quad
w_{0}=\frac{u_{0}}{\left(\|u_{0}\|_{E}^{2}+2(c_{M}-c_{0})\right)^{1/2}}.$
It follows from (3.56) and $v_{n}\rightharpoonup u_{0}$ weakly in $E$ that
$w_{n}\rightharpoonup w_{0}$ weakly in $E$. Notice that
$\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}v_{n}^{2}}-1}{|x|^{\beta}}dx=\int_{\mathbb{R}^{4}}\frac{e^{\alpha_{0}\|v_{n}\|_{E}^{2}w_{n}^{2}}-1}{|x|^{\beta}}dx.$
By (3.56) and (3.57), a straightforward calculation shows
$\lim_{n\rightarrow\infty}\alpha_{0}\|v_{n}\|_{E}^{2}(1-\|w_{0}\|_{E}^{2})<32\pi^{2}\left(1-\frac{\beta}{4}\right).$
Whence Lemma 3.10 implies that $e^{\alpha_{0}v_{n}^{2}}$ is bounded in
$L^{q}(\mathbb{R}^{4})$ for some $q>1$. By $(H_{1})$,
$|f(x,v_{n})|\leq
b_{1}|v_{n}|+b_{2}|v_{n}|^{\gamma}\left(e^{\alpha_{0}v_{n}^{2}}-1\right).$
Then the Hölder inequality and the continuous embedding $E\hookrightarrow
L^{p}(\mathbb{R}^{4})$ for all $p\geq 1$ imply that $f(x,v_{n})/|x|^{\beta}$
is bounded in $L^{q_{1}}(\mathbb{R}^{4})$ for some $q_{1}$: $1<q_{1}<q$. This
together with Lemma 3.6 and the Hölder inequality gives
$\left|\int_{\mathbb{R}^{4}}\frac{f(x,v_{n})(v_{n}-u_{0})}{|x|^{\beta}}dx\right|\leq\left\|\frac{f(x,v_{n})}{|x|^{\beta}}\right\|_{L^{q_{1}}(\mathbb{R}^{4})}\left\|v_{n}-u_{0}\right\|_{L^{{q_{1}^{\prime}}}(\mathbb{R}^{4})}\rightarrow
0,$ (3.58)
where $1/q_{1}+1/q_{1}^{\prime}=1$.
Taking $\phi=v_{n}-u_{0}$ in (LABEL:42), we have by using (3.58) and Lemma 3.6
that
$\int_{\mathbb{R}^{4}}\left(\Delta v_{n}\Delta(v_{n}-u_{0})+a\nabla
v_{n}\nabla(v_{n}-u_{0})+bv_{n}(v_{n}-u_{0})\right)dx\rightarrow 0.$ (3.59)
However the fact $v_{n}\rightharpoonup u_{0}$ weakly in $E$ implies
$\int_{\mathbb{R}^{4}}\left(\Delta u_{0}\Delta(v_{n}-u_{0})+a\nabla
u_{0}\nabla(v_{n}-u_{0})+bu_{0}(v_{n}-u_{0})\right)dx\rightarrow 0.$ (3.60)
Subtracting (3.60) from (3.59), we have $\|v_{n}-u_{0}\|_{E}^{2}\rightarrow
0$. Since $J_{\epsilon}\in\mathcal{C}^{1}(E,\mathbb{R})$, we have
$J_{\epsilon}(v_{n})\rightarrow J_{\epsilon}(u_{0})=c_{0},$
which contradicts $J_{\epsilon}(v_{n})\rightarrow c_{M}>c_{0}$. This completes
the proof of Theorem 1.6. $\hfill\Box$
## References
* [1] S. Adachi, K. Tanaka, Trudinger type inequalities in $\mathbb{R}^{N}$ and their best exponents, Proc. Amer. Math. Soc. 128 (2000) 2051-2057.
* [2] D. Adams, A sharp inequality of J. Moser for higher order derivatives, Ann. Math. 128 (1988) 385-398.
* [3] Adimurthi, Existence of positive solutions of the semilinear Dirichlet Problem with critical growth for the $N$-Laplacian, Ann. Sc. Norm. Sup. Pisa XVII (1990) 393-413.
* [4] Adimurthi, K. Sandeep, A singular Moser-Trudinger embedding and its applications, Nonlinear Differ. Equ. Appl. 13 (2007) 585-603.
* [5] Adimurthi, S. L. Yadava, Critical exponent problem in $\mathbb{R}^{2}$ with Neumann boundary condition, Commun. Partial Differential Equations 15 (1990) 461-501.
* [6] Adimurthi, S. L. Yadava, Multiplicity results for semilinear elliptic equations in a bounded domain of $\mathbb{R}^{2}$ involving critical exponent, Ann. Sc. Norm. Sup. Pisa XVII (1990) 481-504.
* [7] Adimurthi, P. Srikanth, S. L. Yadava, Phenomena of critical exponent in $\mathbb{R}^{2}$, Proc. Royal Soc. Edinb. 119A (1991) 19-25.
* [8] Adimurthi, Y. Yang, An interpolation of Hardy inequality and Trudinger-Moser inequality in $\mathbb{R}^{N}$ and its applications, International Mathematics Research Notices 13 (2010) 2394-2426.
* [9] C. O. Alves, G. M. Figueiredo, On multiplicity and concentration of positive solutions for a class of quasilinear problems with critical exponential growth in $\mathbb{R}^{N}$, J. Differential Equations 246 (2009) 1288-1311.
* [10] F. Atkinson, L. Peletier, Ground states and Dirichlet problems for $-\Delta u=f(u)$ in $\mathbb{R}^{2}$, Archive for Rational Mechanics and Analysis 96 (1986) 147-165.
* [11] D. Cao, Nontrivial solution of semilinear elliptic equations with critical exponent in $\mathbb{R}^{2}$, Commun. Partial Differential Equations 17 (1992) 407-435.
* [12] L. Carleson, A. Chang, On the existence of an extremal function for an inequality of J. Moser, Bull. Sc. Math 110 (1986) 113-127.
* [13] D. G. de Figueiredo, J. M. do Ó, B. Ruf, On an inequality by N. Trudinger and J. Moser and related elliptic equations, Comm. Pure Appl. Math. LV (2002) 135-152.
* [14] D. G. de Figueiredo, O. H. Miyagaki, B. Ruf, Elliptic equations in $\mathbb{R}^{2}$ with nonlinearities in the critical growth range, Calc. Var. 3 (1995) 139-153.
* [15] J. M. do Ó, $N$-Laplacian equations in $\mathbb{R}^{N}$ with critical growth, Abstr. Appl. Anal. 2 (1997) 301-315.
* [16] J. M. do Ó, Semilinear Dirichlet problems for the $N$-Laplacian in $\mathbb{R}^{N}$ with nonlinearities in the critical growth range, Differential and Integral Equations 9 (1996) 967-979.
* [17] J. M. do Ó, M. de Souza, On a class of singular Trudinger-Moser type inequalities and its applications, To appear in Mathematische Nachrichten.
* [18] J. M. do Ó, E. Medeiros, U. Severo, On a quasilinear nonhomogeneous elliptic equation with critical growth in $\mathbb{R}^{N}$, J. Differential Equations 246 (2009) 1363-1386.
* [19] J. M. do Ó, Y. Yang, A quasi-linear elliptic equation with critical growth on compact Riemannian manifold without boundary, Ann. Glob. Anal. Geom. 38 (2010) 317-334.
* [20] D. Gilbarg, N. Trudinger, Elliptic partial differential equations of second order, Springer, 2001.
* [21] G. Hardy, J. Littlewood, G. Polya, Inequalities. Cambridge University Press, 1952.
* [22] O. Kavian, Introduction à la Théorie des Points Critiques et Applications aux Problèmes elliptiques, Mathématiques et Applications, Springer, 1993.
* [23] N. Lam, G. Lu, Existence of nontrivial solutions to polyharmonic equations with subcritical and critical exponential growth, Preprint.
* [24] Y. Li, B. Ruf, A sharp Trudinger-Moser type inequality for unbounded domains in $\mathbb{R}^{N}$, Indiana Univ. Math. J. 57 (2008) 451-480.
* [25] G. Lu, Y. Yang, Adams’ inequalities for bi-Laplacian and extremal functions in dimension four, Advances in Math. 220 (2009) 1135-1170.
* [26] J. Moser, A sharp form of an Inequality by N.Trudinger, Ind. Univ. Math. J. 20 (1971) 1077-1091.
* [27] R. Panda, Nontrivial solution of a quasilinear elliptic equation with critical growth in $\mathbb{R}^{n}$, Proc. Indian Acad. Sci. (Math. Sci.) 105 (1995) 425-444.
* [28] R. Panda, On semilinear Neumann problems with critical growth for the $N$-Laplacian, Nonlinear Anal. 26 (1996) 1347-1366.
* [29] S. Pohozaev, The Sobolev embedding in the special case $pl=n$. Proceedings of the technical scientific conference on advances of scientific research 1964-1965, Mathematics sections, 158-170, Moscov. Energet. Inst., Moscow, 1965.
* [30] P. H. Rabinowitz: On a class of nonlinear Schrödinger equations, Z. Angew. Math. Phys. 43 (1992) 270-291.
* [31] B. Ruf, A sharp Trudinger-Moser type inequality for unbounded domains in $\mathbb{R}^{2}$, J. Funct. Anal. 219 (2005) 340-367.
* [32] B. Ruf, F. Sani, Sharp Adams-type inequalities in $\mathbb{R}^{n}$. To appear in Trans. Amer. Math. Soc..
* [33] E. Silva, S. Soares, Liouville-Gelfand type problems for the $N$-Laplacian on bounded domains of $\mathbb{R}^{N}$, Ann. Scuola Norm. Sup. Pisa Cl. Sci. XXVIII (1999) 1-30.
* [34] C. Tarsi, Adams’ Inequality and Limiting Sobolev Embeddings into Zygmund Spaces, Preprint.
* [35] G. Trombetti, J. L. Vázquez, A Symmetrization Result for Elliptic Equations with Lower Order Terms, Ann. Fac. Sci. Toulouse Math. 7 (1985) 137-150.
* [36] N. S. Trudinger, On embeddings into Orlicz spaces and some applications, J. Math. Mech. 17 (1967) 473-484.
* [37] M. Willem: Minimax theorems. Birkhäuser, 1996.
* [38] Y. Yang and L. Zhao: A class of Adams-Fontana type inequalities and related functionals on manifolds, Nonlinear Differential Equations and Applications 17 (2010) 119-135.
|
arxiv-papers
| 2011-05-08T15:35:14 |
2024-09-04T02:49:18.658983
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yunyan Yang",
"submitter": "Yunyan Yang",
"url": "https://arxiv.org/abs/1105.1528"
}
|
1105.1534
|
Taking the redpill: Artificial Evolution in native x86 systems
by Sperl Thomas
sperl.thomas@gmail.com
October 2010
Abstract:
First, three successful environments for artificial evolution in computer
systems are analysed briefly. The organisms in these environments live in
virtual machines with special chemistries. Two key features are found to be
very robust under mutation: non-direct addressing and the separation of
instruction and argument.
In contrast, the x86 instruction set is very brittle under mutation and thus
not able to support evolution directly. However, by making use of a special
meta-language, these two key features can be realized in an x86 system. This
meta-language and its implementation are presented in chapter 2.
First experiments show very promising behaviour of the population. A
statistical analysis of these populations is given in chapter 3. One key
result comes from comparing the robustness of the x86 instruction set with
that of the meta-language: a statistical analysis of mutation densities shows
that the meta-language is much more robust under mutation than the x86
instruction set.
In the end, some open questions are stated which should be addressed in
further research. A detailed explanation of how to run the experiment is
given in the Appendix.
###### Contents
0 Overview
  1 History
    1 CoreWorld
    2 Tierra
    3 Avida
  2 Evolutionary Properties of different Chemistries
  3 Biological Information Storage
1 Artificial Evolution in x86
  1 Chemistry for x86
  2 The instruction set
    1 An example: Linear congruential generator
  3 Translation of meta-language
2 Experiments
  1 Overview
  2 First attempts
  3 Statistical analysis of experiment
  4 Comparing Robustness with x86 instruction set
3 Outlook
  1 Open questions
  2 Computer malware
  3 Conclusion
4 Appendix
  4.A The package
  4.B Running the experiment
## Chapter 0 Overview
### 1 History
#### 1 CoreWorld
Artificial evolution of self-replicating computer codes was introduced for
the first time in 1990, when Steen Rasmussen created CoreWorld.[1] CoreWorld
is a virtual machine which can be controlled by a language called RedCode.
This assembler-like language has a pool of ten different instructions that
take two addresses as arguments. Rasmussen’s idea was to introduce a random
flaw in the MOV command, resulting in random mutations of the self-replicating
codes within the environment. The big disadvantage of RedCode was that nearly
all flaws led to a lethal mutation, hence evolution did not occur as hoped.
#### 2 Tierra
In 1992, Tom Ray found that the problem with RedCode was due to its argumented
instruction set: independent mutations in an instruction and its arguments
are unlikely to lead to a meaningful combination.[2] Instead of a direct
connection between an instruction and its argument, Ray developed a pattern-
based addressing mechanism: he introduced two NOP instructions (NOP0 and
NOP1). These instructions perform no operation themselves, but can be used as
markers within the code. A pattern-matching algorithm finds the first
appearance of the complementary marker string given after the search command
and returns its address.
PUSH_AX | ; push ax
---|---
JMP
NOP0
NOP1
NOP0 | ; jmp marker101
INC_A | ; inc ax
NOP1
NOP0
NOP1 | ; marker101:
POP_CX | ; pop cx
There are 32 instructions available in the virtual Tierra world, roughly based
on assembler (JMP, PUSH_AX, INC_B and so on). With these inventions, Ray was
able to obtain remarkable results in artificial evolution (such as parasitism
and multi-cellularity[3][4], …).
#### 3 Avida
In 1994, Christoph Adami developed another artificial evolution simulation,
called Avida. Besides some structural differences in the simulation, an
important change was made in the artificial chemistry: instead of hardcoded
arguments within the instructions (as in Tierra, for example PUSH_AX),
instructions and arguments are completely separated. The arguments are
defined by NOPs (in Avida there are three NOPs: nop-A, nop-B, nop-C) following
the operation (for example, a nop-A following a PUSH pushes the AX register to
the stack). There are 24 instructions available in Avida, again roughly based
on assembler (call, return, add, sub, allocate and so on).
push
---
nop-A | ; push ax
jump-f |
nop-A |
nop-B
nop-B | ; jmp markerBCC
inc
nop-A | ; inc ax
nop-B
nop-C
nop-C | ; markerBCC:
pop
nop-B | ; pop bx
With these improvements to the virtual simulation, researchers using Avida
obtained remarkable results, among other things about the origin of complex
features in organisms[5].
### 2 Evolutionary Properties of different Chemistries
In 2002, a detailed analysis of different artificial chemistries was
published[6]. The authors compare several instruction sets with respect to
evolutionary properties such as fitness and robustness ($R=\frac{N}{M}$, where
$N$ is the number of non-lethal mutations and $M$ is the number of all
possible mutations).
Chemistry I consists of 28 operations and has completely separated operations
and arguments (as in Avida). Chemistry II has 84 unique instructions and
separated operations and arguments. Chemistry III has 27 instructions, but the
argument is encoded within the instruction (e.g. push-AX, pop-CX, ...). It was
found that Chemistry I is considerably more robust and reaches a much higher
fitness than Chemistry II; Chemistry III is the worst language for evolution.
### 3 Biological Information Storage
The information in natural organisms is stored in DNA. DNA is, roughly
speaking, a string of nucleotides. There are four nucleotides - adenine,
cytosine, guanine and thymine. Three nucleotides form a codon, which is
translated via tRNA into an amino acid. Amino acids are the building blocks of
proteins, which are the main modules of cells.
One can calculate that there are $N=4^{3}=64$ possible codons, so 64 different
amino acids could in principle be encoded. But nature provides only 20
different amino acids, hence there is large redundancy in the translation
process.
Amino acid | % in human | Codons
---|---|---
ALA | 6.99 | GCU, GCC, GCA, GCG
ARG | 5.28 | CGU, CGC, CGA, CGG, AGA, AGG
ASN | 3.92 | AAU, AAC
ASP | 5.07 | GAU, GAC
CYS | 2.44 | UGU, UGC
GLU | 6.82 | GAA, GAG
GLN | 4.47 | CAA, CAG
GLY | 7.10 | GGU, GGC, GGA, GGG
HIS | 2.26 | CAU, CAC
ILE | 4.50 | AUU, AUC, AUA
LEU | 9.56 | UUA, UUG, CUU, CUC, CUA
LYS | 5.71 | AAA, AAG
MET | 2.23 | AUG
PHE | 3.84 | UUU, UUC
PRO | 5.67 | CCU, CCC, CCA, CCG
SER | 7.25 | UCU, UCC, UCA, UCG, AGU, AGC
THR | 5.68 | ACU, ACA, ACC, ACG
TRP | 1.38 | UGG
TYR | 3.13 | UAC, UAU
VAL | 6.35 | GUU, GUC, GUA, GUG
STOP | 0.24 | UAA, UAG, UGA
There is a connection between the frequency of an amino acid in the genome and
the redundancy of its translation. This mechanism protects the organism from
the consequences of mutation. If, for example, an alanine (ALA) codon GCU is
mutated to GCC, the mutated codon is still translated to alanine, so the
mutation has no effect.
## Chapter 1 Artificial Evolution in x86
### 1 Chemistry for x86
The aim is to create an evolvable chemistry for a native x86 system. So far,
all notable attempts have been made on virtual, simulated platforms, where the
creator can define the structure and the embedded instruction set.
The x86 chemistry, on the other hand, was defined a long time ago and appears
to be not very evolution-friendly. The instruction set is very big, arguments
and operations are directly connected, and there is no instruction-end marker
or constant instruction size. Hence, self-replicators are very brittle in that
environment, and almost all mutations are lethal.
A way to avoid the bad behaviour of the x86 instruction set under mutation is
to create an (ideally Turing-complete) meta-language. At execution, the
meta-language is translated to x86 assembler instructions.
Here, a meta-language is presented with an eight-bit code for each
instruction, which is translated to x86 code at execution. Obviously, this is
the same procedure as in protein biosynthesis. The eight bits coding a single
instruction in the meta-language are the analogue of the three nucleotides of
a codon representing one amino acid. At execution they are translated to an
x86 instruction - just as tRNA transforms a codon into an amino acid. A group
of translated x86 instructions forms a specific functionality, just as in
biology a number of amino acids form a protein (which is responsible for a
certain functionality in the organism).
### 2 The instruction set
One intention was to create an instruction set which can be translated to x86
instructions in a very trivial way. This was a notable restriction, as key
instructions used in Tierra and Avida (like search-f, jump-b, divide,
allocate, ...) cannot be written in a simple way in x86 assembler.
The main idea of the meta-language is to have a number of buffers which are
used as the arguments of all operations. Registers are not used directly as
arguments for instructions, but have to be copied from/to buffers, leading to
a separation of operation and argument. The instructions have a very similar
form to those in Avida; compare nopsA & push vs. push & nop-A, or pop & nopdA
vs. pop & nop-A, or nopsA & inc & nopdA vs. inc & nop-A for the meta-language
and Avida, respectively.
It turned out that it is enough to use three registers (RegA, RegB, RegD), two
buffers for calculations and operations (BC1, BC2) and two buffers for
addressing (BA1, BA2).
| Buffer instructions (16)
---|---
nopsA | | BC1=RegA | | mov ebx, eax
nopsB | | BC1=RegB | | mov ebx, ebp
nopsD | | BC1=RegD | | mov ebx, edx
nopdA | | RegA=BC1 | | mov eax, ebx
nopdB | | RegB=BC1 | | mov ebp, ebx
nopdD | | RegD=BC1 | | mov edx, ebx
saveWrtOff | | BA1=BC1 | | mov edi, ebx
saveJmpOff | | BA2=BC1 | | mov esi, ebx
writeByte | | byte[BA1]=(BC1&&0xFF) | | mov byte[edi], bl
writeDWord | | dword[BA1]=BC1 | | mov dword[edi], ebx
save | | BC2=BC1 | | mov ecx, ebx
addsaved | | BC1+=BC2 | | add ebx, ecx
subsaved | | BC1-=BC2 | | sub ebx, ecx
getDO | | BC1=DataOffset | | mov ebx, DataOffset
getdata | | BC1=dword[BC1] | | mov ebx, dword[ebx]
getEIP | | BC1=instruction pointer | | call gEIP; gEIP: pop ebx
| Operations (10+8)
zer0 | | BC1=0 | | mov ebx, 0x0
push | | push BC1 | | push ebx
pop | | pop BC1 | | pop ebx
mul | | RegA*=BC1 | | mul ebx
div | | RegA/=BC1 | | div ebx
shl | | BC1 << (BC2&&0xFF) | | shl ebx, cl
shr | | BC1 >> (BC2&&0xFF) | | shr ebx, cl
and | | BC1=BC1&&BC2 | | and ebx, ecx
xor | | BC1=BC1 xor BC2 | | xor ebx, ecx
add0001 | | BC1+=0x1 | | add ebx, 0x0001
add0004 | | BC1+=0x4 | | add ebx, 0x0004
add0010 | | BC1+=0x10 | | add ebx, 0x0010
add0040 | | BC1+=0x40 | | add ebx, 0x0040
add0100 | | BC1+=0x100 | | add ebx, 0x0100
add0400 | | BC1+=0x400 | | add ebx, 0x0400
add1000 | | BC1+=0x1000 | | add ebx, 0x1000
add4000 | | BC1+=0x4000 | | add ebx, 0x4000
sub0001 | | BC1-=1 | | sub ebx, 0x0001
| Jumps (4)
JnzUp | | jz over && jmp esi && over:
JnzDown | | jnz down (&& times 32: nop) && down:
JzDown | | jz down (&& times 32: nop) && down:
ret | | ret
| API calls (11) - Windows based
CallAPIGetTickCounter | | | stdcall [GetTickCount]
CallAPIGetCommandLine | | | stdcall [GetCommandLine]
CallAPICopyFile | | | stdcall [CopyFile]
CallAPICreateFile | | | stdcall [CreateFile]
CallAPIGetFileSize | | | stdcall [GetFileSize]
CallAPICreateFileMapping | | | stdcall [CreateFileMapping]
CallAPIMapViewOfFile | | | stdcall [MapViewOfFile]
CallAPICreateProcess | | | stdcall [CreateProcess]
CallAPIUnMapViewOfFile | | | stdcall [UnMapViewOfFile]
CallAPICloseHandle | | | stdcall [CloseHandle]
CallAPISleep | | | stdcall [Sleep]
There are 30+8 unique commands (the eight addNNNN commands and sub0001 could
be reduced to a single command, but this would make the code very large) and
11 API calls - giving 49 instructions. For translation, a command is
identified by 8 bits. Therefore there are $N=2^{8}=256$ possible combinations,
so there is a large redundancy in the translation of commands to x86 code -
just as in natural organisms. This gives the code great freedom in protecting
itself against harmful effects of mutations.
#### 1 An example: Linear congruential generator
The following code creates a new random number (Linear congruential generator)
via
$x_{n+1}=(ax_{n}+c)\textnormal{ mod }m$
with $a=1103515245$, $c=12345$ and $m=2^{32}$ (these are the numbers used by
GCC).
.data
---
DataOffset:
| SomeData dd 0x0
| RandomNumber dd 0x0
.code
macro addnumber arg { ... }
| | ; Creates the correct addNNNN combination
| getDO
| add0004
| getdata | ; mov ebx, dword[RandomNumber]
| nopdA | ; eax=dword[RandomNumber]
| zer0
| addnumber 1103515245 | | ; mov ebx, 1103515245
| mul | ; mul ebx
| zer0
| addnumber 12345
| save | ; mov ecx, ebx
| nopsA
| addsaved
| nopdB | ; mov ebp, (1103515245*[RandomNumber]+12345)
| | ; ebp=new random number
| getDO
| add0004
| saveWrtOff | ; mov edi, RandomNumber
| nopsB
| writeDWord | ; mov dword[RandomNumber], ebp
| | ; Save new random number
.end code
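For reference, the same LCG update as a minimal C++ sketch (a hypothetical
illustration, not part of the organism's code; the modulo $2^{32}$ is implicit
in the 32-bit unsigned arithmetic, exactly as in the meta-language code above):

    #include <cstdint>
    #include <cstdio>

    // Linear congruential generator: x_{n+1} = (a*x_n + c) mod 2^32,
    // with the GCC constants a = 1103515245 and c = 12345.
    static uint32_t random_number = 0;   // corresponds to dword[RandomNumber]

    uint32_t next_random() {
        random_number = 1103515245u * random_number + 12345u;
        return random_number;
    }

    int main() {
        for (int i = 0; i < 5; ++i)
            std::printf("%u\n", next_random());
        return 0;
    }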
### 3 Translation of meta-language
As the instruction set was designed to allow a trivial translator, the
translator can be written as a single loop. A meta-language instruction is one
byte, and the corresponding x86 instruction occupies 8 bytes (for 256
instructions, this gives an $8\cdot 256=2048$ byte long translation table).
The translator picks one 8-bit codon, looks up the corresponding x86
instruction and writes that x86 instruction to memory. After the last codon,
it executes the memory.
| invoke | VirtualAlloc, 0x0, 0x10000, 0x1000, 0x4
---|---|---
| mov | [Place4Life], eax | | ; 64 KB RAM
| mov | edx, 0x0 | | ; EDX will be used as the
| | | | ; counter of this loop
| WriteMoreToMemory:
| | mov | ebx, 0x0 | ; EBX=0
| | mov | bl, byte[edx+StAmino] | ; BL=NUMBER OF AMINO ACID
| | shl | ebx, 3 | ; EBX*=8;
| | mov | esi, StartAlphabeth | ; Alphabeth offset
| | add | esi, ebx | ; offset of the current amino acid
| | mov | ebx, edx | ; current number of amino acid
| | shl | ebx, 3 | ; lenght of amino acids
| | mov | edi, [Place4Life] | ; Memory address
| | add | edi, ebx | ; Offset of current memory
| | mov | ecx, 8 | ; ECX=8
| | rep | movsb | ; Write ECX bytes from ESI to EDI
| | | | ; Write 8 bytes from Alphabeth
| | | | ; to Memory
| | inc | edx | ; Increase EDX
| cmp | edx, (EndAmino-StAmino)
| jne | WriteMoreToMemory
| call | [Place4Life] | | ; Run organism!
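For illustration, the same translation loop as a minimal C++ sketch (the
executable-memory handling is omitted, and all names are assumptions made for
this sketch rather than the actual implementation):

    #include <cstdint>
    #include <vector>

    // One alphabet entry: 8 bytes of x86 machine code, padded with NOPs (0x90).
    struct AlphabetEntry { uint8_t x86[8]; };

    // Translate a genome of 8-bit codons into x86 code: for each codon, copy
    // the corresponding 8-byte stub into the output buffer (cf. "shl ebx, 3"
    // and "rep movsb" above).
    std::vector<uint8_t> translate(const std::vector<uint8_t>& codons,
                                   const AlphabetEntry alphabet[256]) {
        std::vector<uint8_t> code;
        code.reserve(codons.size() * 8);
        for (uint8_t codon : codons) {
            const AlphabetEntry& entry = alphabet[codon];
            code.insert(code.end(), entry.x86, entry.x86 + 8);
        }
        // In the real system the buffer is allocated as executable memory
        // (VirtualAlloc) and then called; that step is not shown here.
        return code;
    }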
The Translation Table/Alphabeth has the following form:
; 0001 1000 - 24:
---
_getEIP EQU 24
ACommand24:
| call gEIP
| gEIP:
| pop ebx
ECommand24:
times (8-ECommand24+ACommand24): nop
; 0001 1001 - 25:
_JnzUp EQU 25
ACommand25:
| jz over
| jmp esi
| over:
ECommand25:
times (8-ECommand25+ACommand25): nop
## Chapter 2 Experiments
To achieve evolution it is necessary to have replication, mutation and
selection.
### 1 Overview
An ancestor organism has been written which is able to replicate itself. It
copies itself to a randomly named file in the current directory and executes
its offspring.
The mutation algorithm is written within the code (it is not provided by the
platform, as is possible in Tierra or Avida). With a certain probability, a
random bit within a special interval of the new file flips. Each organism can
create five offspring, each with a different interval and probability of
mutation.
To find an adequate mutation probability, one can calculate the probability
$P$ that at least one bit flip occurs, given an $N$ byte interval and a
probability $p_{bit}$ for a single bit to flip:
$P(N,p_{bit})=\sum_{n=0}^{N-1}p_{bit}(1-p_{bit})^{n}=1-(1-p_{bit})^{N}$
| Interval | N | P | $p_{bit}$
---|---|---|---|---
1 | Code | 2100 | 0.9 | $\frac{1}{900}$
2 | Code+Alphabeth | 4200 | 0.9 | $\frac{1}{1800}$
3 | whole file | 6150 | 0.9 | $\frac{1}{2666}$
4 | Code | 2100 | 0.75 | $\frac{1}{1500}$
5 | Code | 2100 | 0.68 | $\frac{1}{1820}$
The second offspring also has the opportunity to change the alphabet. This
could lead to redundancy in the alphabet that avoids negative effects of
mutations (as used in nature - described in section 3). The mutations of the
third offspring can affect the whole file.
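For reference, a small C++ sketch (illustrative names only) that inverts the
formula above, $p_{bit}=1-(1-P)^{1/N}$, and roughly reproduces the values in
the table:

    #include <cmath>
    #include <cstdio>

    // Invert P = 1 - (1 - p_bit)^N to get the per-unit flip probability.
    double p_bit_for(double P, double N) {
        return 1.0 - std::pow(1.0 - P, 1.0 / N);
    }

    int main() {
        // Interval 1: N = 2100, P = 0.9 -> p_bit is about 1/912, roughly 1/900.
        std::printf("1/p_bit = %.0f\n", 1.0 / p_bit_for(0.9, 2100));
        // Interval 3: N = 6150, P = 0.9 -> p_bit is about 1/2671, roughly 1/2666.
        std::printf("1/p_bit = %.0f\n", 1.0 / p_bit_for(0.9, 6150));
        return 0;
    }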
Natural selection is not very strong in this experiment; only CPU speed and
hard-disk space are limited. Thus, most non-lethal mutations are neutral and
distribute randomly within the population. This can easily be used to
understand the relationships within the population: the smaller the Hamming
distance between two individuals, the closer their relationship.
The Hamming distance $\Delta(x,y)$ is defined as
$\Delta(x,y):=\sum_{x_{i}\not=y_{i}}1,\quad i=1,...,n$
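A C++ sketch of this distance, counted per byte over two genomes of equal
length (a hypothetical helper, not the author's analysis tool):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hamming distance: the number of positions at which the byte codes differ.
    std::size_t hamming_distance(const std::vector<uint8_t>& x,
                                 const std::vector<uint8_t>& y) {
        std::size_t d = 0;
        for (std::size_t i = 0; i < x.size() && i < y.size(); ++i)
            if (x[i] != y[i]) ++d;
        return d;
    }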
Besides natural selection, there can be artificial selection. Some artificial
selection has been used to prevent certain weird behaviour of the populations.
The experiments have been run on a native Windows XP system. For
stabilization, several small C++ guard programs have been developed which
search for and close endless loops, error messages and dead processes
(processes that live longer than a certain time).
### 2 First attempts
The first attempt showed some unexpected behaviour.
Multiple instances of the same file: Already after a few dozen generations,
the process list started to fill with multiple instances of the same file. An
analysis of the file shows that this happens due to a mutation corrupting the
random name engine: the mutated engine always generates the same filename (for
instance aaaaaaaa.exe). After the mutation process (which has no effect, as
the file is write-protected while it is executing) the new/old file is
executed again.
To prevent this unnatural behaviour, it became necessary to add an artificial
selection to the system. A new C++ guard file scans the process list for
multiple instances of the same file and closes them.
It is interesting to see that this is a real selection, not a restriction of
the system. Mutations can still create such effects, but with the additional
guard file they have negative consequences for the file (it will be deleted
immediately) and therefore will not spread within the population.
Avoiding mutations: The first long-term experiment appeared to be very
promising. The guard files closed error messages, endless loops, dead
processes and multiple instances of the same file. After some hundred
generations the experiment was stopped and the files were analysed.
Surprisingly, all files had exactly the same bit code - they were all clones.
There had been a mutation in the alphabet, changing the xor instruction. This
instruction is responsible for flipping a bit during the mutation process. If
the mutation mechanism no longer works, no file will change anymore.
For the organism, this is a big advantage. All offspring will survive, as no
more mutations happen. Other organisms often create corrupt offspring and
hence spread more slowly. After a while, the whole system is dominated by
unmutable organisms.
In nature, organisms have also developed very complex systems to prevent
mutations or their effects. DNA repair and amino acid redundancy are just two
examples.
Even though this discovery is very interesting and has a close analogue in
nature, it prevents further discoveries in this artificial system. Therefore
another guard file has been developed, which scans the running files for
clones and deletes them.
It is not natural to prohibit clones entirely, so an adequate probability has
to be found. If there are 42 clones in the process list, they should be
detected with a probability of 51% in one guard-file cycle; since
$1-(1-\frac{1}{59})^{42}\approx 0.51$, this gives a probability of
$P=\frac{1}{59}$ that a running file will be checked for clones. A checked
file is compared to all other running files, and all clones are deleted.
### 3 Statistical analysis of experiment
After installing the new guard file, a further experiment was run. This first
”long-term” experiment can be analysed statistically by comparing the density
and type of mutations between the ancestor file and the latest population.
Unfortunately it is very hard to determine the number of generations in the
population; by comparing mutations in the oldest population with the primary
ancestor and using the mutation probability, one can speculate that there have
been 400-600 generations.
Number of mutations - ancestor vs. successors: 100 successors have been picked
at random and compared with the ancestor. One can calculate the average number
of mutations over the lifetime of the experiment, and its standard deviation:
$\bar{X}=\frac{1}{n}\sum_{i=1}^{n}{X_{i}}=192.02\textnormal{ Mutations}$
$\sigma=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}{(X_{i}-\bar{X})^{2}}}=4.59$
The standard deviation is unexpectedly small, which means that the number of
mutations is quite constant across the population.
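The same statistics as a short C++ sketch (sample mean and the unbiased
standard deviation defined above; the input values are placeholders, not the
measured data):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        // Mutation counts of the sampled successors (placeholder values).
        std::vector<double> x = {190, 195, 188, 193, 194};

        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.size();

        double var = 0.0;                 // unbiased sample variance (n - 1)
        for (double v : x) var += (v - mean) * (v - mean);
        var /= (x.size() - 1);

        std::printf("mean = %.2f, sigma = %.2f\n", mean, std::sqrt(var));
        return 0;
    }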
Relations between individuals: One can analyse the relation between
individuals by calculating their Hamming distance (the number of differences
in their byte code). Six files have been selected randomly and analysed.
ancestor.exe - a.exe: 195
---
ancestor.exe - b.exe: 195
ancestor.exe - c.exe: 184
ancestor.exe - d.exe: 192
ancestor.exe - e.exe: 194
ancestor.exe - f.exe: 200
a.exe - b.exe: 2 | c.exe - a.exe: 75 | e.exe - a.exe: 9
---|---|---
a.exe - c.exe: 75 | c.exe - b.exe: 75 | e.exe - b.exe: 9
a.exe - d.exe: 20 | c.exe - d.exe: 73 | e.exe - c.exe: 74
a.exe - e.exe: 9 | c.exe - e.exe: 74 | e.exe - d.exe: 19
a.exe - f.exe: 28 | c.exe - f.exe: 81 | e.exe - f.exe: 27
b.exe - a.exe: 2 | d.exe - a.exe: 20 | f.exe - a.exe: 28
b.exe - c.exe: 75 | d.exe - b.exe: 20 | f.exe - b.exe: 28
b.exe - d.exe: 20 | d.exe - c.exe: 73 | f.exe - c.exe: 81
b.exe - e.exe: 9 | d.exe - e.exe: 19 | f.exe - d.exe: 16
b.exe - f.exe: 28 | d.exe - f.exe: 16 | f.exe - e.exe: 27
While a.exe, b.exe and e.exe are closely related, c.exe is far away from all
other files; d.exe and f.exe are moderately related. Interestingly, while
c.exe has the largest distance to all other successors, it has the smallest
distance to the ancestor.
Distribution of mutations: It is interesting to see which mutations are rare
and which are widely spread within the population. There are 153 mutations
which appear in every single file, 32 mutations appearing in 84 files, and so
on. Many mutations are located in unused areas of the file, for instance in
the Win32 .EXE padding bytes or in the unused part of the alphabet. A list of
mutations in the active code (either the used part of the alphabet or the
meta-language code) and their occurrence counts in the population is given
here.
527: 100 | e52: 100 | 12af: 100 | 147f: 100 | e13: 32
---|---|---|---|---
551: 100 | e9a: 100 | 12b1: 100 | 1498: 100 | 1298: 23
56c: 100 | eac: 100 | 12d9: 100 | 14a5: 100 | f0a: 17
5af: 100 | f04: 100 | 130e: 100 | 14b3: 100 | 4c3: 16
5ed: 100 | f34: 100 | 131f: 100 | 14b9: 100 | 558: 16
61a: 100 | fbb: 100 | 1327: 100 | 14c3: 100 | 58a: 16
625: 100 | fed: 100 | 1328: 100 | 5b9: 84 | 60d: 16
c74: 100 | 1090: 100 | 1333: 100 | d86: 84 | ca9: 16
c7b: 100 | 10b8: 100 | 1343: 100 | e43: 84 | cac: 16
c98: 100 | 10c4: 100 | 135c: 100 | 1037: 84 | d83: 16
c9b: 100 | 1119: 100 | 1373: 100 | 106b: 84 | df4: 16
ca1: 100 | 1121: 100 | 138d: 100 | 109c: 84 | f5a: 16
ca4: 100 | 1126: 100 | 139c: 100 | 1148: 84 | 105c: 16
d02: 100 | 118f: 100 | 13b3: 100 | 127d: 84 | 1085: 16
d4f: 100 | 1194: 100 | 13d5: 100 | 12a8: 84 | 10ef: 16
d5d: 100 | 11a2: 100 | 13eb: 100 | 130f: 84 | 12d1: 16
d7a: 100 | 11a9: 100 | 13fd: 100 | 1388: 84 | 12ee: 16
d7d: 100 | 11b1: 100 | 1430: 100 | 1392: 84 | 1323: 16
e3b: 100 | 124d: 100 | 144c: 100 | 13e4: 84 | 1353: 16
e49: 100 | 1265: 100 | 1459: 100 | c9d: 32 | 139a: 16
A full analysis of these mutations would be worthwhile, but has not been done
in this first analysis because of the effort involved. However, to understand
this system and its prospects better, detailed code analysis will be
unavoidable. Nevertheless, two example mutations can be given.
The first one is byte 0x527: this is within the alphabet, defining the
behaviour of the JnzUp instruction. A bit flip caused the following variation:
jz over | | jz over
---|---|---
jmp esi | $\rightarrow$ | jmp esi
over: | | nop
| | over:
This has no effect on the behaviour, only on the byte code - a neutral
mutation.
The second example is the mutation in byte 0xC7B, which is within the meta-
language code. The unmutated version is the instruction add0001; the mutated
one represents add0004. This is part of the addnumber 26 construct, which
provides the modulus for the random name generator.
Due to this mutation, the genome does not only pick letters from $a-z$ for its
offspring’s filename, but also the next three characters in the ASCII table,
{, $|$ and }. Thus, filenames can also contain these three characters. This
mutation has an effect on the behaviour, but still seems to be a neutral
mutation.
### 4 Comparing Robustness with x86 instruction set
In 2005 a program called Gloeobacter violaceus was developed that uses
artificial mutations in the x86 instruction set, without making use of a
meta-language.[7] That program also replicates in the current directory and is
subject to point mutations, and rarely to insertion, deletion and duplication.
Due to the brittleness of the x86 instruction set, that attempt was not very
fruitful. Still, it gives a good basis for comparison.
Both systems were set to the same initial situation: point mutations occur in
the whole file with the same probability. After several hundred generations,
all non-minor mutations (those occurring in more than 50 different files) of
2,500 files have been analysed. The mutations have been classified by their
position: x86-code mutations, mutations in some padding region, or mutations
in the meta-language code.
Through this classification we can find out whether the new meta-language
concept is more robust than the x86 instruction set.
We define the mutation density of a specific region in the code by
$\rho_{mut}(\textnormal{Region})=\frac{\textnormal{mutations in region}}{\textnormal{size of region}}$
Meta-language concept:
$\displaystyle\rho_{mut}(\textnormal{whole code})=\frac{291}{6144}=0.047$
$\displaystyle\rho_{mut}(\textnormal{padding})=\frac{151}{2427}=0.062$
$\displaystyle\rho_{mut}(\textnormal{meta-code})=\frac{81}{2084}=0.039$
$\displaystyle\rho^{*}_{mut}(\textnormal{x86})=\frac{14}{576}=0.024$
The $\rho^{*}_{mut}(\textnormal{x86})$ value combines the very small
translator code and the alphabet, but as the alphabet is not real x86 code,
this comparison is not really advisable. If that problem is neglected, one
sees that the meta-language is more robust under mutations than the x86 code.
For a fair comparison, Gloeobacter violaceus can be used.
Gloeobacter violaceus:
$\displaystyle\rho_{mut}(\textnormal{whole code})=\frac{351}{3584}=0.098$
$\displaystyle\rho_{mut}(\textnormal{padding})=\frac{284}{2229}=0.127$
$\displaystyle\rho_{mut}(\textnormal{x86})=\frac{10}{683}=0.015$
Mutations in the padding bytes do not corrupt the organism, so the padding
density reflects the raw mutation density. Comparing
$\rho_{mut}(\textnormal{Region})$ with $\rho_{mut}(\textnormal{padding})$
gives the fraction of non-lethal mutations in that region, and therefore the
robustness $R$ of that region:
$\textnormal{Robustness}(\textnormal{Region}):=\frac{\rho_{mut}(\textnormal{Region})}{\rho_{mut}(\textnormal{padding})}$
The interesting comparison is between the x86 region and the meta-language
region:
$\textnormal{Robustness}(\textnormal{x86})=\frac{\rho_{mut}(\textnormal{x86})}{\rho_{mut}(\textnormal{padding})}=\frac{0.015}{0.127}=0.115$
$\textnormal{Robustness}(\textnormal{meta-code})=\frac{\rho_{mut}(\textnormal{meta-code})}{\rho_{mut}(\textnormal{padding})}=\frac{0.039}{0.092}=0.424$
Even though this analysis is based on low statistics, it already indicates a
notable result:
the new meta-language concept for x86 systems is much more robust than the
original x86 instruction set.
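A short C++ sketch of these quantities, using the counts and region sizes
quoted in this section (the denominator 0.092 for the meta-code robustness is
taken from the text as given):

    #include <cstdio>

    // rho = mutations observed in a region / size of the region (in bytes).
    double mutation_density(double mutations, double region_size) {
        return mutations / region_size;
    }

    int main() {
        // Gloeobacter violaceus (plain x86); its padding density serves as the
        // reference, since mutations there are never lethal.
        double rho_pad = mutation_density(284, 2229);   // about 0.127
        double rho_x86 = mutation_density(10, 683);     // about 0.015
        std::printf("Robustness(x86)       = %.3f\n", rho_x86 / rho_pad);  // ~0.115

        // Meta-language concept: density of the meta-code region, divided by
        // the padding density used in the text for this comparison.
        double rho_meta = mutation_density(81, 2084);   // about 0.039
        std::printf("Robustness(meta-code) = %.3f\n", rho_meta / 0.092);   // ~0.424
        return 0;
    }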
## Chapter 3 Outlook
### 1 Open questions
Development of new functionality: The most important question is whether an
artificial organism with this meta-language in an x86 system can develop new
functionalities.
In a long-term evolution experiment by Richard Lenski, it was discovered that
simple E. coli was able to make a major evolutionary step and suddenly
acquired the ability to metabolise citrate.[8] This happened after 31,500
generations, approximately after 20 years.
The generation times of artificial organisms are many orders of magnitude
smaller, so beneficial mutations such as the development of new functionality
might occur within days or a few weeks. The question that remains is whether
this meta-language concept is the right environment or not.
Other types of mutations: Point mutation is one important type of mutation,
but not the only one. In DNA, there are also deletion, duplication, insertion,
translocation and inversion. Insertion and deletion of code in particular have
proved to be important in artificial evolution too.[9] The question is how one
can create such types of mutation without file structure errors occurring
after every single mutation.
One possibility would be to move the last n bytes of the meta-language code
forwards (deletion) or backwards (insertion), filling the gap with NOPs.
However, how could one find out where the end of the meta-language code is
without some complex (and thus brittle) functions?
Behaviour of the Hamming distance: What is the time evolution of the average
Hamming distance between a population and the primary ancestor? Does it grow
with a constant slope, or rather logarithmically? How does the Hamming
distance behave when other types of mutations (as described above) are taken
into account? Large-scale experiments are needed to answer these questions
properly.
APIs: This is an operating-system-specific problem, and cannot be solved for
every OS at once. For Windows, the current system of calling APIs is not very
natural. It is a call to a specific address in a library, requiring the right
number of arguments on the stack and the API and library being defined in the
file structure. Hence, API calls are not (very) evolvable in this
meta-language, restricting the ability to use new APIs through mutations.
One possible improvement could be the use of LoadLibraryA and GetProcAddress,
which resolve the APIs autonomously at run time. This technique would not need
the APIs and libraries stored within the file structure, and could make it
possible to discover new functionalities. Unfortunately, it requires complex
functions, which may be very brittle and inflexible.
Still, more thought is needed to find an adequate solution to this problem.
### 2 Computer malware
This technique could be used in autonomously spreading computer programs such
as computer viruses or worms. This has been discussed in a very interesting
paper by Iliopoulos, Adami and Ször in 2008.[10]
Their main results are: the x86 instruction set does not allow enough neutral
mutations, thus it is impossible to develop new functionalities; an
’evolvable’ language or a meta-language would be needed. Further, together
with smaller generation times, the selective pressure and the mutation rate
would be higher, speeding up evolution. The conclusion is that it is currently
unclear what a defence against such viruses would be.
In contrast to the experiment explained in chapter 2 - where natural selection
was nearly absent - computer malware is continuously under selective pressure
due to antivirus scanners. This is the same situation as for biological
organisms, where parasites are constantly attacked by the immune system and by
antibiotics.
Theoretically, computer malware could also find new ways to exploit software
or different OS APIs for spreading. This is not as unlikely as it seems at
first. Experiments with artificial and natural evolution have shown that
complex features can evolve in acceptable time.[5][9]
### 3 Conclusion
An artificial ’evolvable’ meta-language for x86 systems has been created using
the main ideas of Tierra and Avida: separation of operations and arguments,
and avoidance of direct addressing. The experiments have been very promising,
showing that the robustness of the new meta-language is approximately four
times higher than that of the usual x86 instructions. Several open questions
are given at the end, which should motivate further research.
In any case, the most important step has been taken:
the artificial organisms are not trapped in virtual systems anymore, they can
finally move freely - they took the redpill…
## References
* [1] Christoph Adami, _Introduction to Artificial Life_ , Springer, 1998.
* [2] Tom S. Ray, _An approach to the synthesis of life_ , Physica D, 1992.
* [3] Kurt Thearling and Tom S. Ray, _Evolving Parallel Computation_ , Complex Systems, 1997.
* [4] Tom S. Ray and Joseph Hart, _Evolution of Differentiated Multi-threaded Digital Organisms_ , Artificial Life VI proceedings, 1998.
* [5] Richard E. Lenski, Charles Ofria, Robert T. Pennock and Christoph Adami, _The evolutionary origin of complex features_ , Nature, 2003.
* [6] Charles Ofria, Christoph Adami and Travis C. Collier, _Design of Evolvable Computer Languages_ , IEEE Transactions on Evolutionary Computation, 2002.
* [7] SPTH, _Code Evolution: Follow nature’s example_ , 2005.
* [8] Zachary D. Blount, Christina Z. Borland, and Richard E. Lenski _Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli_ , National Academy of Sciences, 2008.
* [9] Richard E. Lenski, Charles Ofria, Robert T. Pennock and Christoph Adami, _Phenotypic and genomic evolution along the line of descent in the case-study population through the origin of the EQU function at step 111_ , 2003.
* [10] Dimitris Iliopoulos, Christoph Adami and Peter Ször, _Darwin inside the machines: malware evolution and the consequences for computer security_ , Virus Bulletin Conference, 2008.
## Chapter 4 Appendix
This artificial evolution system can be started on any common Windows
operating system. Even though it is a chaotic process, thanks to the guard
files it can run for hours without a breakdown of the system.
## Appendix 4.A The package
The package:
binary\run0ndgens.bat: This script starts all guard files, then starts the 0th
generation. Adjust the hardcoded path in the file to the directory of the
guard files. This file has to be at H:; you can use subst for that.
binary\NewArt.exe: This is the 0th generation. It has to be started from the
shell (or a .bat file) - not via double-click. It is highly recommended not to
run the file without the guard files. This file has to be at H: as well.
ProcessWatcher\\*.*: This directory contains the binary and source of all
guard files.
ProcessWatcher\CopyPopulation.cpp: This file copies every 3 minutes 10% of the
population to a specific path given in the source. This path has to be
adjusted before usage.
ProcessWatcher\Dead.cpp: This program can be used to manually stop all
organisms. You can enter a percentage of how many organisms should survive.
For instance, if you enter 10, 90% of the population will be terminated - 10%
survive.
ProcessWatcher\DoubleProcess.cpp: This program searches for and destroys
multiple instances of the same file. See chapter 2 for more information.
ProcessWatcher\EndLessLoops.cpp: This guard file searches for endless loops in
memory and terminates them.
ProcessWatcher\JustMutation.cpp: Also described in chapter 2, this program
searches for and terminates clones in the process list.
ProcessWatcher\Kill2MuchProcess.cpp: This guard is very important for the
stability of the operating system while running the experiment. If there are
more than 350 processes running, it terminates 75% of them.
ProcessWatcher\RemoveCorpus.cpp: As disk space is restricted, this guard
deletes files that are older than 30 seconds.
ProcessWatcher\SearchAndDestroy.cpp: This program removes error messages (by
clicking ”OK”), terminates error processes (such as dwwin.exe or drwtsn32.exe),
and terminates dead processes (processes that are older than 100 seconds).
ProcessWatcher\malformed_PEn.exe: These are two malformed .EXE files, which
will be called by SearchAndDestroy.exe at the start to find the ”OK” button.
They have to be in the directory of the guard files.
Analyse\SingleFileAnalyse: This directory contains an analysis program that
compares the byte code of two genotypes. Copy the file to the directory,
change the name in the source and execute it.
Analyse\Relation: This file gives you the Hamming distance of all .exe files.
Analyse\MutationDistribution: With this file you can get a distribution of all
mutations compared with NewArt.exe.
## Appendix 4.B Running the experiment
Copy run0ndgens.bat and NewArt.exe to H:\. Adjust the path in
CopyPopulation.cpp to the backup directory (and compile it) and the path in
run0ndgens.bat to the directory of the guard files.
Now you can start run0ndgens.bat and move over the two error messages (do not
click them; this will be done by SearchAndDestroy.exe). Then you are ready and
can press a key in run0ndgens.bat, which will start 10 instances of
NewArt.exe.
---
Running experiment: This is how the experiment should look.
|
arxiv-papers
| 2011-05-08T16:47:35 |
2024-09-04T02:49:18.666465
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Thomas Sperl",
"submitter": "Thomas Sperl",
"url": "https://arxiv.org/abs/1105.1534"
}
|
1105.1811
|
Niall Douglas
Draft 4
Mr. Niall Douglas MBS MA BSc ned Productions IT Consulting
http://www.nedproductions.biz/
# User Mode Memory Page Allocation
A Silver Bullet For Memory Allocation?
(2010-2011)
###### Abstract
The literature of the past decade has discussed a number of contrasting ways
in which to improve general allocation performance. The Lea allocator,
dlmalloc Lea and Gloger [2000], aims for a reusable simplicity of
implementation in order to improve third-party customisability and
performance, whereas other allocators have a much more complex implementation
which makes use of per-processor heaps, lock-free, cache-line locality and
transactional techniques Berger et al. [2000]; Michael [2004]; Hudson et al.
[2006]. Many still believe strongly in the use of custom application-specific
allocators despite research having shown many of these implementations to be
sub-par Berger et al. [2002], whereas others believe that the enhanced type
information and metadata available to source compilers allow superior
allocators to be implemented at the compilation stage Berger et al. [2001];
Udayakumaran and Barua [2003]; boo .
More recently there appears to be an increasingly more holistic view of how
the allocator’s implementation strategy non-linearly affects whole application
performance via cache set associativity effects and cache line localisation
effects, with a very wide range of approaches and techniques Hudson et al.
[2006]; Dice et al. [2010]; Afek, U., Dice, D. and Morrison, A. [2010]. This
paper extends this more holistic view by examining how the design of the paged
virtual memory system interacts with overall application performance from the
point of view of how the growth in memory capacity will continue to
exponentially outstrip the growth in memory speed, and the consequent memory
allocation overheads that will be thus increasingly introduced into future
application memory allocation behaviour.
The paper then proposes a novel solution: the elimination of paged virtual
memory and partial outsourcing of memory page allocation and manipulation from
the operating system kernel into the individual process’ user space – _a user
mode page allocator_ – which allows an application to have direct, bare metal
access to the page mappings used by the hardware Memory Management Unit (MMU)
for its part of the overall address space. A user mode page allocator based
emulation of the mmap() abstraction layer of dlmalloc is then benchmarked
against the traditional kernel mode implemented mmap() in a series of
synthetic Monte-Carlo and real world application settings.
Despite the highly inefficient implementation, scale invariant performance up
to 1Mb block sizes was found in the user mode page allocator, which translates
into up to a 10x performance increase for manipulation of blocks sized 0-8Mb
and up to a 4.5x performance increase for block resizing. A surprising 2x
performance increase for very small allocations when running under non-paged
memory was also found, and it is speculated that this could be due to cache
pollution effects introduced by page fault based lazy page allocation.
In real world application testing where dlmalloc was binary patched in to
replace the system allocator in otherwise unmodified application binaries
(i.e. a worst case scenario), every test scenario, bar one, saw a performance
improvement with the improvement ranging between -4.05% and +5.68% with a mean
of +1.88% and a median of +1.20%.
Given the superb synthetic and positive real world results from the profiling
conducted, this paper proposes that with proper operating system and API
support one could gain a further order higher performance again while keeping
allocator performance invariant to the amount of memory being allocated or
freed i.e. a _100x_ performance improvement or more in some common use cases.
It is rare that through a simple and easy to implement API and operating
system structure change one can gain a Silver Bullet with the potential for a
second one.
###### category:
D.4.2 Operating Systems Allocation/Deallocation Strategies
††conference: International Symposium on Memory Management 2011, June 4th-5th
2011, San Jose, U.S.A.
††terms: Memory Allocation, Memory Management Unit, Memory Paging, Page
Tables, Virtualization, faster array extension, Bare Metal, Intel VT-x, AMD-V,
Nested Page Tables, Extended Page Tables
## 1 Introduction
### 1.1 Historical trends in growths of capacities and speeds
As described by J.C. Fisher in his seminal 1972 paper Fisher and Pry [1972]
and reconfirmed in much research since Meyer [1994]; Meyer and Ausubel [1999],
technological growth is never better than logistic in nature. Logistic growth,
often called an “S-curve”, occurs when growth is introduced by a feedback loop
whereby the thing grown is used to grow more of that thing – which is
implicitly a given when new computer technology is used to make available the
next generation of computer technology. As with any feedback loop driven
growth, hard “carrying capacity” limits such as the fixed physical size of the
silicon atom eventually emerge which decrease the initial exponential rate of
growth down to linear, then to exponentially declining growth i.e. the rate of
growth of growth – the second derivative – goes negative long before the rate
of growth itself does. Because feedback driven growth tends to follow a
logistic curve, as Fisher showed one cannot predict when exponential growth
will end until the second derivative goes negative for an extended period of
time – and only then can one predict with increasing probability when overall
growth will cease. This pattern of growth is summarised in Figure 1, where 0.0
along the x-axis marks the point of inflection at which the growth of growth
turns negative.
Figure 1: A plot of the standard logistic growth curve $\frac{1}{1+e^{-t}}$.
The history of growth in computer technology has been nothing short of
remarkable in the last five decades, but one must always remember that nothing
can grow exponentially forever Meyer et al. [1999]. In the growth of storage
capacity and transistor density we are still in the part of the logistic
growth curve before the mid-turning-point where growth appears exponential and
no one can yet say when the point of inflection (when the rate of growth of
growth i.e. the second derivative turns negative) will come. In comparison,
magnetic hard drive storage is not just past its point of inflection (which
was in 1997), but it is rapidly approaching zero growth which Figure 2 makes
all too obvious. All other things being equal and if present growth trends
continue, flash based non-volatile storage ought to overtake magnetic based
non-volatile storage some time in 2013 plus or minus eighteen months.
Figure 2: A log plot of bytes available per inflation adjusted US$ from 1980
to 2010 for conventional magnetic hard drives and flash based solid state disk
drives. Magnetic hard drives are clearly coming to the end of their logistic
growth curve trajectory, whereas flash drives are still to date undergoing the
exponential growth part of their logistic growth curve. Sources: win wik [a]
sto .
The implications of this change are profound for all users of computing
technology, but especially for those responsible for the implementations of
system memory management. As Denning (1996) Denning recounts most vividly,
from the 1950s onward the present system of memory management was designed on
the basis that non-volatile storage was large but with a high access latency,
whereas volatile storage (RAM) was small but with a low access latency. The
problem was always one of how to make best use of the limited low-latency
volatile storage in making use of as much of the high-latency non-volatile
storage as possible – thus the modern automatic paged virtual storage
allocation system (hereafter called “paged virtual memory”) with the concept
of a “working set” was born, and which was described by Denning himself in his
seminal 1970 paper Denning [1970].
As a much simplified explanation of the essentials of Denning’s paper, what
happens is that requests for memory from the operating system kernel allocate
that memory on the non-volatile storage (typically in the paging, or ‘swap’
file used to temporarily store presently least used volatile memory pages) and
the kernel returns a region of virtual address space of the requested size.
Note that no physical RAM is allocated at this stage. When the process first
accesses a RAM page (the granularity of the MMU’s page mapping, typically no
smaller than 4Kb on modern systems), the CPU triggers a _page fault_ as the
memory does not exist at that address. The operating system kernel then takes
a free physical RAM page and places it at the site of the fault before
restoring normal operation. If a previously accessed RAM page is not used for
a long time and if new free RAM pages are needed, then the contents of that
page are written off into the paging file and its physical RAM page used for
something more frequently used instead. This allows only those pages which are
most frequently used to consume physical RAM, thus reducing the true RAM
consumption of the process as far as possible while still allowing the process
to conceptually use much more memory than is physically present in the
computer. Almost all computer systems with any complexity use such a system of
paged virtual memory, or a simplification thereof.
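As a minimal illustration of this demand-paged behaviour (a POSIX sketch using
an anonymous mmap(); the Windows analogue would be VirtualAlloc with
MEM_RESERVE and MEM_COMMIT; this is not part of the paper's benchmarks):

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    int main() {
        const std::size_t len = 64 * 1024 * 1024;   // 64 MiB of address space

        // Reserve anonymous, demand-paged memory: the kernel records the
        // mapping but assigns no physical RAM yet.
        void* mem = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }
        char* p = static_cast<char*>(mem);

        // The first write to each 4Kb page triggers a page fault; the kernel
        // then backs that page with a free physical frame and resumes the
        // process.
        for (std::size_t off = 0; off < len; off += 4096)
            p[off] = 1;

        munmap(mem, len);
        return 0;
    }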
### 1.2 The first problem with page file backed memory
Figure 3: A log plot of how much time is required to randomly access selected
computer data storage media. Sources: tom ; Desnoyers [2010].
The first problem with paged virtual memory, given the trends outlined in
Figure 2, has already been shown: high latency magnetic non-volatile storage is
due to be replaced with flash based storage. Unlike
magnetic storage whose average random access latency may vary between 8-30ms
tom and which has a variance strongly dependent on the distance between the
most recently accessed location and the next location, flash based storage has
a flat and uniform random access latency just like RAM. Where DDR3 RAM may
have a 10-35ns read/write latency, current flash storage has a read latency of
10-20µs, a write latency of 200-250µs (and an erase latency of 1.5-2ms, but
this is usually hidden by the management controller) Desnoyers [2010], which is
roughly 667x slower than reading RAM and 10,000x slower than writing it. What
this means is that, measured against overall performance, the relative overhead
of the page file backed memory system implementation becomes _much larger_ –
several thousand fold larger – on flash based storage than on magnetic storage
based swap files.
Furthermore, flash based storage has a limited write cycle lifetime which makes
it inherently unsuitable for use as temporary volatile storage. Thankfully, due
to the exponential growth in RAM capacity per currency unit, RAM is abundant on
modern systems – the typical new home computer in 2010
comes with far more RAM than the typical home user will ever use, and RAM
capacity constraints are only really felt in server virtualisation, web
service, supercomputing and other “big iron” applications.
### 1.3 The second problem with page file backed memory
This introduces the second problem with paged virtual memory: it was designed
under the assumption that low latency volatile storage (RAM) capacity is scarce
and that its usage must be absolutely maximised, and that is increasingly not
true. As Figure 4 shows, growth in capacity for the “value sweetspot” in RAM
(i.e. those RAM sticks with the most memory for the least price at that time)
has outstripped growth in the access speed of the same RAM by a factor of ten
in the period 1997-2009, so while we have witnessed an impressive 25x growth
in RAM speed we have also witnessed a 250x growth in RAM capacity during the
same time period. Moreover, growth in capacity is exponential versus a linear
growth in access speed, so the differential is only going to dramatically
increase still further in the next decade. Put another way, assuming that
trends continue and all other things being equal, if it takes a 2009 computer
160ms to access all of its memory at least once, it will take a 2021 computer
5,070 years to do the same thing. (No one is claiming that this will actually
be the case as the exponential phase of logistic growth would surely have
ended. However, even if capacity growth goes linear sometime soon, one would
still see the growth in capacity outpace the growth in access speed by
anywhere between tenfold and one trillion ($10^{12}$) times (for reference,
the latter figure was calculated as follows: (i) double period of exponential
growth since 1997 $log_{44.8}(250)=1.131$, $(2\times 1.131)^{44.8}=7.657\times
10^{15}$ (ii) remove exponential growth 1997-2009 $\frac{7.657\times
10^{15}}{250}$ = $3\times 10^{13}$ (iii) divide exponential capacity growth by
linear access speed growth 2009-2021 $\frac{3\times 10^{13}}{25}$ = $1.2\times
10^{12}$).)
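As a sanity check on the headline figure: under the extreme trillion-fold divergence computed above, $160\,\mathrm{ms}\times 10^{12}=1.6\times 10^{11}\,\mathrm{s}$, and $1.6\times 10^{11}\,\mathrm{s}\div(3.15\times 10^{7}\,\mathrm{s/year})\approx 5{,}070$ years, which appears to be where the figure quoted in the main text comes from.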
Figure 4: A plot of the relative growths since 1997 of random access memory
(RAM) speeds and sizes for the best value (in terms of Mb/US$) memory sticks
then available on the US consumer market from 1997 - 2009, with speed depicted
by squares on the left hand axis and with size depicted by diamonds on the
right hand axis. The dotted line shows the best fit regression for speed which
is linear, and the dashed line shows the best fit for size which is a power
regression. Note that during this period memory capacity outgrew memory speed
by a factor of ten. Sources: McCallum ; wik [b].
From the growth trends evident in these time series data, one can make the
following observations:
1.
What is now much scarcer than capacity is _access speed_ – as anyone who has
recently tried a full format of a large hard drive can tell you, the time to
write once to the entire capacity is also growing exponentially because the
speed of access is only growing linearly.
2.
If the growth in RAM capacity relative to its access speed continues, we are
increasingly going to see applications constrained not by insufficient
storage, but by insufficiently fast access _to_ storage.
3.
It is reasonable to expect that programmers will continue to trade RAM
capacity for improvements in other factors such as application performance,
development time and user experience. In other words, the “working set” of
applications is likely to grow as exponentially as it can (though one would
struggle to see how word processing and other common tasks could ever make
good use of huge capacities). This implies that the overheads spent managing
capacity will also grow exponentially.
Considering all of the above, one ought to therefore conclude that,
algorithmically speaking, capacity ought to be expended wherever latency can
be reduced.
### 1.4 Paged virtual memory overheads
Figure 5: A log plot of how much overhead paged virtual memory allocation
introduces over non-paged memory allocation according to block size.
Figure 5 shows how much overhead is introduced in a best case scenario by
fault driven page allocation versus non-paged allocation for Microsoft Windows
7 x64 and Linux 2.6.32 x64 running on a 2.67Ghz Intel Core 2 Quad
processor. (For reference, this system has 32Kb of L1 data cache with 16 entry
L1 TLB, 4Mb of L2 cache with 256 entry L2 TLB int and 6Gb of DDR2 RAM with
12.8Gb/sec maximum bandwidth. The L1 TLB can hold page mappings for 64Kb of
memory, whereas the L2 TLB can hold mappings for 1Mb of memory int .) The test
allocates a block of a given size and writes a single byte in each page
constituting that block, then frees the block. On Windows the Address
Windowing Extensions (AWE) functions AllocateUserPhysicalPages() et al. were
used to allocate non-paged memory versus the paged memory returned by
VirtualAlloc(), whereas on Linux the special flag MAP_POPULATE was used to ask
the kernel to prefault all pages before returning the newly allocated memory
from mmap(). As the API used to perform the allocation and free is completely
different on Windows, one would expect partially incommensurate results.
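The Linux side of this comparison can be sketched roughly as follows. This is not the actual test harness, merely an illustration of the difference being measured between fault driven allocation and allocation prefaulted via MAP_POPULATE; the 16Mb block size is arbitrary.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Map 'len' bytes, write one byte into each page, unmap, return elapsed ns. */
static uint64_t run(size_t len, int extra_flags)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    uint64_t t0 = now_ns();
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | extra_flags, -1, 0);
    if (p == MAP_FAILED) return 0;
    for (size_t off = 0; off < len; off += (size_t)pagesize)
        p[off] = 1;
    munmap(p, len);
    return now_ns() - t0;
}

int main(void)
{
    size_t len = 16u * 1024 * 1024;
    printf("fault driven: %llu ns\n", (unsigned long long)run(len, 0));
    printf("prefaulted  : %llu ns\n", (unsigned long long)run(len, MAP_POPULATE));
    return 0;
}
```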
As one can see, the overhead introduced by fault driven page allocation is
substantial with the overhead reaching 100% on Windows and 36% on Linux. The
overhead rises linearly with the number of pages involved up until
approximately 1-2Mb after which it drops dramatically. This makes sense: for
each page fault the kernel must perform a series of lookups in order to figure
out what page should be placed where, so the overhead from the kernel per page
fault, as shown in Figure 6, ought to be approximately constant at 2800 cycles
per page for Microsoft Windows 7. Something appears to be wrong with the Linux
kernel page fault handler here: it costs 3100 cycles per page up to 2Mb, which
seems reasonable; however, after that it rapidly rises to 6500 cycles per page,
which suggests a TLB entry size dependency (for reference, when the CPU
accesses a page not in its Translation Lookaside Buffer (TLB) it costs a page
table traversal and TLB entry load which on this test machine is approximately
15 cycles when the page is on the same page table sheet, and no more than 230
cycles if it is in a distant sheet) or a lack of easily available free pages,
as is made clear in Table 1. However, that isn’t the whole story –
obviously enough, each time the kernel executes the page fault handler it must
traverse several thousand cycles worth of code and data, thus kicking whatever
the application is currently doing out of the CPU’s instruction and data
caches and therefore necessitating a reload of those caches when execution
returns.
In other words, fault driven page allocation introduces a certain amount of
ongoing CPU cache pollution and therefore _raises the latency of first access
to a memory page_.
Table 1: Selected page fault allocation latencies for a run of pages on Microsoft Windows and Linux.

| Size | Microsoft Windows, paged (cycles/page) | Microsoft Windows, non-paged (cycles/page) | Linux, paged (cycles/page) | Linux, non-paged (cycles/page) |
|---|---|---|---|---|
| 16Kb | 2367 | 14.51 | 2847 | 15.83 |
| 1Mb | 2286 | 81.37 | 3275 | 14.53 |
| 16Mb | 2994 | 216.2 | 6353 | 113.4 |
| 512Mb | 2841 | 229.9 | 6597 | 115.9 |
Figure 6: A log plot of how many CPU cycles are consumed per page by the kernel
page fault handler when allocating a run of pages according to block size.
The point being made here is that paged virtual memory introduces a great deal
of additional and often hard to predict execution latency across the
application as a whole when that application makes frequent changes to its
virtual address space layout. The overheads shown in Figures 5 and 6
represent a best-case scenario where there isn’t a lot of code and data being
traversed by the application – in real world code, the additional overhead
introduced by pollution of CPU caches by the page fault handler can become
sufficiently pathological that some applications deliberately prefault newly
allocated memory before usage.
The typical rejoinder at this point is that MMUs do allow much larger page
sizes (2Mb on x64) to be concurrently used with smaller ones, so an improved
fault driven page allocator could map 2Mb sized pages if its heuristics
suggest that to be appropriate. However as anyone who has tried to implement
this knows, this is very tricky to get right Navarro et al. [2002]. Moreover,
one can only detect when an individual page has been modified (becomes
‘dirty’), and therefore the use of large pages can generate very significant
extra i/o when writing out dirty pages that are backed by a non-volatile
store.
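For orientation only (this is not the heuristic large-page allocator discussed above), the following sketch shows how a 2Mb large page mapping can already be requested explicitly on Linux; it assumes the administrator has reserved huge pages beforehand (vm.nr_hugepages), otherwise the call fails.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 2u * 1024 * 1024;                /* one 2Mb large page */

    /* MAP_HUGETLB asks for a mapping backed by large pages; one TLB entry
       then covers the whole 2Mb instead of 512 separate 4Kb entries.     */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }

    p[0] = 1;                                     /* one fault commits all 2Mb */
    munmap(p, len);
    return 0;
}
```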
### 1.5 Code complexity
Since the 1970s, application and kernel programmers have assumed that the
larger the amount of memory one is working with, the longer it will take
simply because it has always been that way historically – and after all,
logically speaking, it must take ten times as many opcodes executed in order
to access ten times as much memory by definition. Therefore, programmers tend
instinctively to dedicate significant additional code complexity to reducing
memory usage as the amount of memory used grows, due to
the standard space/time trade-off assumptions made by all computer programmers
since the very earliest days (e.g. Bloom [1970]). In other words, if memory
usage looks “too big” relative to the task being performed, the programmer
will tend to increase the complexity of code in order to reduce the usage of
memory – a ‘balancing’ of resource usage as it were.
However, the standard assumptions break down under certain circumstances. In
the past, the speed of the CPU was not much different from that of memory, so
memory access costs were strongly linked to the number of opcodes required to
perform the access. As CPUs have become ever faster than main memory,
such is today’s disjunct between CPU speed and memory speed that (as indeed we
shall shortly see) scale invariant memory allocation has become possible. In
other words, today’s CPUs run so much faster than memory that they can
allocate gigabytes of memory in the same time as a few dozen kilobytes _if the
implementation is written to avoid CPU instruction dependencies upon memory_.
The problem with the assumption that larger memory usage takes longer is that
it introduces significant extra complexity, which most programmers are so used
to that they don’t even think about it, into both the kernel and medium to
large sized applications. Garbage collection, reference counting, least
frequently used caching – all are well known as object lifetime management
techniques in the application space, while in the kernel space much effort is
dedicated to trying to extract a “one size fits all” set of statistical
assumptions about typical memory usage patterns of the average application and
writing a paging virtual memory system to attempt an optimal behaviour based
on predicted working set.
Just to show the ludicrousness of the situation, consider the following: most
applications use a C malloc API (whose design is almost unchanged since the
1970s) which takes no account of virtual memory _at all_ – each allocates
memory as though it is the sole application in the system. The kernel then
_overrides_ the requested memory usage of each process on the basis of a set
of statistical assumptions about which pages are actually used by _faking_ an
environment for the process where it is alone in the system and can have as
much memory as it likes. Meanwhile you have programmers finding pathological
performance in some corner cases in this environment, and writing code which
is specifically designed to fool the kernel into believing that memory is
being used when it is not, even though such prefaulting hammers the CPU
caches and is fundamentally a waste of CPU time and resources.
## 2 User mode page allocation as a potential solution
The solution proposed by this paper to the problems outlined above is simple:
1.
Except for memory mapped files, do away with kernel paged virtual memory
altogether! (Ideally, even memory mapped files ought to be a user mode only
implementation. Strictly speaking, even shared memory mapped regions used for
inter-process communications only need the kernel for construction and
destruction of the shared memory region only – thereafter a user mode filing
system implementation using the kernel page fault handler hooks could do the
rest.) In other words, when you map a RAM page to some virtual address, you
get an actual, real physical RAM page at that address. This should allow a
significant – possibly a very significant one on some operating systems –
reduction in the complexity of virtual memory management implementation in the
kernel. Less kernel code complexity means fewer bugs, better security and more
scalability Curtis et al. [1979]; Kearney et al. [1986]; Khoshgoftaar and
Munson [1990]; Banker et al. [1998].
2.
Provide a kernel syscall which returns an arbitrary number of physical memory
pages with an arbitrary set of sizes as their raw physical page frame
identifiers as used directly by the hardware MMU. The same call can both free
and request physical pages simultaneously, so a set of small pages could be
exchanged for large pages and so on.
3.
If the machine has sufficiently able MMU hardware, e.g. NPT/EPT on x64, expose
to user mode code direct write access to the page tables for each process such
that the process may arbitrarily remap pages within its own address space to
page frames it is permitted to access (in other words, one effectively
virtualises the MMU for all applications as a hardware-based form of page
mapping security enforcement). Without a sufficiently able MMU, a kernel call
may be necessary to validate the page frames before permitting the operation.
4.
Allow user mode code to issue TLB flushes if necessary on that CPU
architecture.
5.
Provide two asynchronous kernel upcalls such that (i) the kernel can ask the
process to release unneeded pages according to a specified “severity” and (ii)
that the kernel can notify a handler in the process of CPU access to a watched
set of pages.
6.
Rewrite the family of APIs VirtualAlloc() on Windows and mmap(MAP_ANONYMOUS)
on POSIX to be simple user mode wrappers around the above functionality i.e.
get requisite number of physical pages and map them at some free virtual
address. In other words, virtual address space management for a process
becomes exclusively the process’ affair. Add new APIs permitting asynchronous
batch modifications (more about this shortly).
7.
For backwards compatibility, and for the use of applications which may still
use much more memory than is physically present in the machine, the process’
memory allocator ought to provide a page file backed memory store in user mode
implemented using the facilities above. This would be easy to implement as one
can still memory map files and/or use the installable page fault kernel
handlers mentioned earlier, and whose implementation could be made much more
flexible and tunable than at present especially as it opens the possibility of
competing third-party user mode implementations.
In other words, direct access to the page mapping tables and hardware is
handed over to the application. Because the application can now keep a
lookaside cache of unused memory pages and it can arbitrarily relocate pages
from anywhere to anywhere within its virtual address space, it can make the
following primary optimisations in addition to removing the CPU cache
pollution and latencies introduced by page faulted allocation:
1.
An allocated memory block can be very quickly extended or shrunk without
having to copy memory – a feature which is _very_ useful for the common
operation of extending large arrays and which is also provided by the
Linux-specific mremap() function (a brief sketch follows this list). Kimpe et al. [2006] researched the
performance benefits of a vector class based upon this feature and found a
50-200% memory usage overhead when using a traditional vector class over a
MMU-aware vector class as well as extension time complexity becoming dependent
on the elements being added rather than the size of the existing vector. While
the test employed was synthetic, a 50% improvement in execution time was also
observed thanks to being able to avoid memory copying.
2.
Two existing memory blocks can have their contents swapped without having to
copy memory. Generally programmers work around the lack of this capability by
swapping pointers to memory rather than the memory itself, but one can see
potential in having such a facility.
3.
The application has _far_ better knowledge than any kernel could ever have
about what regions of memory ought to be mapped within itself using larger
pages. In combination with the lack of page file backed storage, user mode
page allocation could enable significant TLB optimisations in lots of ways
that traditional paged virtual memory never could.
4.
Lastly, and most importantly of all, _one no longer needs to clear contents
when allocating new memory pages_ if there are free pages in the lookaside
cache. To explain why this is important, when a process returns memory pages
to the kernel it could be that these pages may contain information which may
compromise the security of that process and so the kernel must zero (clear)
the contents of those pages before giving them to a new process. Most
operating system kernels try to delay page clearing until the CPU is idle,
which is fine when there is plenty of free RAM and the CPU has regular idle
periods; however, it does introduce significant physical memory page
fragmentation, which is unhelpful when trying to implement large page support,
and in the end the CPU and memory system are tied up in a lot of unnecessary
work whose costs are as non-obvious as increased electrical
power consumption. User mode page allocation lets the memory allocator of the
process behave far more intelligently and only have pages cleared when they
need to be cleared, thus increasing whole application performance and
performance-per-watt efficiency especially when the CPU is constantly maxed
out and page clears must be done on the spot with all the obvious consequences
for cache pollution.
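A minimal Linux sketch of the remapping idea behind the first point above, using the existing mremap() call; the block sizes are illustrative.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t old_len = 128u * 1024, new_len = 256u * 1024;
    char *p = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Grow the block by remapping its pages rather than copying them. The
       mapping may move to a new virtual address, but no page contents are
       copied, so the cost does not depend on the amount of existing data. */
    char *q = mremap(p, old_len, new_len, MREMAP_MAYMOVE);
    if (q == MAP_FAILED) { perror("mremap"); return 1; }

    printf("grew %zu -> %zu bytes, %s\n", old_len, new_len,
           q == p ? "in place" : "relocated without copying");
    munmap(q, new_len);
    return 0;
}
```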
## 3 Raw kernel and user page allocator performance
In the end though, theory is all well and good, but will it actually work and
will it behave the way in which theory suggests? To discover the answer to
this, a prototype user mode page allocator was developed which abuses the
Address Windowing Extensions (AWE) API of Microsoft Windows mentioned earlier
in order to effectively provide direct access to the hardware MMU. The use of
the verb ‘abuses’ is the proper one: the AWE functions are intended for 32-bit
applications to make use of more than 4Gb of RAM, and they were never intended
to be used by 64 bit applications in arbitrarily remapping pages around the
virtual address space. Hence, due to API workarounds, the prototype user mode page
allocator runs at least 10x slower than it ought to were the API more
suitable (the reference to API workarounds making the prototype user mode
page allocator 10x slower refers specifically to how due to API restrictions,
page table modifications must be separated into two buffers one of which
clears existing entries, and the second maps new entries. In addition to these
unnecessary memory copies two separate calls must be made to
MapUserPhysicalPages() to clear and MapUserPhysicalPagesScatter() to set
even though this ought to be a single call. In testing, it was found that
this buffer separation and making of two API calls is approximately 8x-12x
slower than a single direct MapUserPhysicalPagesScatter() call), and probably
more like 40x slower when compared to a memory copy directly into the MMU page
tables – however, its dependency or lack thereof on allocation size in real
world applications should remain clear.
As suggested above, the user mode page allocator provides five APIs for
external use with very similar functionality to existing POSIX and Windows
APIs: userpage_malloc(), userpage_free(), userpage_realloc(), userpage_commit()
and userpage_release(). This was done to ease the writing of kernel API
emulations for the binary patcher, described later in this paper.
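The paper does not spell out the signatures of these five calls. Purely to give an idea of their likely shape, and by analogy with the POSIX and Windows calls they mirror, one might imagine declarations along the following lines; every parameter list here is a guess for illustration and not the prototype's actual interface.

```c
#include <stddef.h>

/* Analogue of malloc()/VirtualAlloc(): map pages from the process's lookaside
   cache of physical pages at some free virtual address and return it.       */
void *userpage_malloc(size_t bytes, unsigned flags);

/* Analogue of free()/VirtualFree(): unmap the block and return its pages to
   the lookaside cache rather than to the kernel.                            */
int userpage_free(void *addr, size_t bytes);

/* Analogue of realloc()/mremap(): resize a block by remapping pages, so the
   existing contents are never copied.                                       */
void *userpage_realloc(void *addr, size_t old_bytes, size_t new_bytes, unsigned flags);

/* Analogues of committing and decommitting a range: back an address range
   with physical pages, or release those pages while keeping the reservation. */
void *userpage_commit(void *addr, size_t bytes, unsigned flags);
int userpage_release(void *addr, size_t bytes);
```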
Figure 7: A log-log plot of how the system and user mode page allocators scale
to block size.
Figure 7 shows on a log-log plot how the Windows 7 x64 kernel page allocator,
Linux 2.6.32 x64 kernel page allocator and user mode page allocators scale to
allocation block size. This test chooses a random number between 4Kb and 8Mb
and then proceeds to request an allocation for that size and write a single
byte into each page of it. Each allocated block is stored in a 512 member ring
buffer and when the buffer is full the oldest item is deallocated. The times
for allocation, traversal and freeing are then added to the power-of-two bin for
that block size and the overall average value is output at the end of the
test. Before being graphed for presentation here, the results are adjusted for
a null loop in order to remove all overheads not associated with allocator
usage, and where an item says ‘notraverse’ it means that it has been adjusted
for the time it takes for the CPU to write a single byte into each page in the
allocation i.e. notraverse results try to remove as much of the effects of
memory traversal as possible. Note that all following results for the user
mode page allocator are for a best case scenario where its lookaside page
cache can supply all requests i.e. the test preloads the free page cache
before testing.
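The harness itself is not reproduced in the paper, but as described its shape is roughly the following. This sketch substitutes plain malloc()/free() for the page allocators actually under test, uses arbitrary iteration counts, and omits the null loop and ‘notraverse’ adjustments; it assumes a platform where RAND_MAX is large, as on glibc.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define RING 512

int main(void)
{
    void  *ring[RING] = {0};
    double bin_ns[32] = {0};
    long   bin_n[32]  = {0};
    long   pagesize   = sysconf(_SC_PAGESIZE);
    srand(42);

    for (long i = 0; i < 200000; i++) {
        size_t len  = 4096 + (size_t)(rand() % (8 * 1024 * 1024 - 4096));
        int    slot = (int)(i % RING);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        free(ring[slot]);                          /* drop the oldest item     */
        char *p = malloc(len);
        if (!p) return 1;
        for (size_t off = 0; off < len; off += (size_t)pagesize)
            p[off] = 1;                            /* write one byte per page  */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        ring[slot] = p;

        int bin = 0;                               /* power-of-two size bin    */
        while (((size_t)1 << bin) < len) bin++;
        bin_ns[bin] += (double)(t1.tv_sec - t0.tv_sec) * 1e9
                     + (double)(t1.tv_nsec - t0.tv_nsec);
        bin_n[bin]++;
    }

    for (int b = 12; b <= 23; b++)
        if (bin_n[b])
            printf("<= %9zu bytes: %.0f ns average\n",
                   (size_t)1 << b, bin_ns[b] / bin_n[b]);

    for (int s = 0; s < RING; s++) free(ring[s]);
    return 0;
}
```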
As would be expected, Figure 7 shows that page allocation performance for the
kernel page allocators is very strongly linearly correlated with the amount of
memory being allocated and freed with apparently no use of pre-zeroed pages on
either Windows or Linux (according to a year 2000 email thread Eggert on the
FreeBSD mailing list, idle loop pre-zeroing of pages was benchmarked in Linux
x86 and found to provide no overall performance benefit, so it was removed;
apparently the PowerPC kernel build does perform idle loop pre-zeroing
however). Meanwhile, the user mode page allocator displays an apparent scale
invariance up to 1Mb before also becoming linearly correlated with block size,
with performance being lower up to 128Kb block sizes due to inefficiency of
implementation.
Figure 8: An adjusted for traversal costs log-linear plot of how the system
and user mode page allocators scale to block sizes between 4Kb and 1Mb.
Breaking out the range 4Kb–1Mb on a log-linear plot in Figure 8, one can see
very clear O(N) complexity for the Windows and Linux kernel page allocators
and an approximate O(1) complexity for the user mode page allocator. There is
no good reason why O(1) complexity should break down after 1Mb except for the
highly inefficient method the test uses to access the MMU hardware (i.e. how
the AWE functions are used).
### 3.1 Effects of user mode page allocation on general purpose application
allocators
Figure 9: An adjusted for traversal costs log-log plot of how general purpose
allocators scale to block sizes between 4Kb and 8Mb when running either under
the system page allocator or the user mode page allocator (umpa).
Figure 9 shows how dlmalloc performs when running under the Windows kernel
page allocator and the user mode page allocator with the performance of the
Windows system allocator added for informational purposes. dlmalloc, under
default build options, uses mmap()/VirtualAlloc() when the requested block
size exceeds 256Kb and the curve for dlmalloc under the kernel page allocator
clearly joins the same curve for the system allocator at 256Kb. Meanwhile, the
user mode page allocator based dlmalloc does somewhat better.
Figure 10: A summary of the performance improvements in allocation and frees
provided by the user mode page allocator (umpa) where 1.0 equals the
performance of the system page allocator.
Breaking out this performance onto a linear scale where 1.0 equals the
performance of the Windows kernel page allocator, Figure 10 illuminates a
surprising 2x performance gain for very small allocations when running under
the user mode page allocator with a slow decline in improvement as one
approaches the 128Kb–256Kb range. This is particularly surprising given that
the test randomly mixes up very small allocations with very big ones, so why
there should be a LOG(allocation size) related speed-up is strange. If one
were to speculate, the most likely cause is the absence of cache pollution,
since no page fault handler is called for every previously unaccessed page.
### 3.2 Effects of user mode page allocation on block resizing
Figure 11: A summary of the performance improvements in block resizing
provided by the page remapping facilities of the user mode page allocator
where 1.0 equals the performance of (allocate new block + copy in old block +
free old block).
Figure 11 shows no surprises considering the earlier results. The ability to
remap pages and therefore avoid memory copies allows scale invariant realloc()
performance up to 128Kb block sizes after which the prototype user mode page
allocator joins the same scaling curve as traditional memory copy based
realloc(). The implementation of userpage_realloc() used in the prototype is
extremely inefficient – considerably worse than the implementations of
userpage_malloc() and userpage_free() – having to perform some four AWE
function calls per item remapped. There is absolutely no reason why, if given
proper operating system support, this operation would not continue to be scale
invariant up to one quarter the level of that for allocations and frees rather
than the $\tfrac{1}{20}$th seen here.
### 3.3 Effects of user mode page allocation on real world application
performance
Using the equation from Gene Amdahl’s 1967 paper Amdahl [1967] where the
overall speedup from an improvement is $\frac{1}{(1-P)+\frac{P}{S}}$, where $P$
is the proportion of time spent in the allocator and $S$ is the speedup of the
allocator, if an application spends 5% of its time in the memory allocator
then doubling allocator performance will only yield at most a 2.56%
improvement. In order to determine the effects of user mode page allocation on
present and future real world application usage profiles, a binary patcher was
used to replace calls to the C allocator API as well as VirtualAlloc() et al.
with calls to dlmalloc and the user mode page allocator respectively in a set
of otherwise unmodified applications.
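To make the substitution explicit: with $P=0.05$ and $S=2$, Amdahl’s equation gives $\frac{1}{(1-0.05)+\frac{0.05}{2}}=\frac{1}{0.975}\approx 1.0256$, i.e. at best a 2.56% overall improvement.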
One must therefore bear in mind that the following results are necessarily a
worst case scenario for the user mode page allocator as it is
used identically to the kernel page allocator. One should also consider that
the Microsoft Windows kernel page allocator has a granularity of 64Kb, and
that therefore the user mode page allocator is about the same speed at that
granularity as shown in Figure 7.
The following tests were conducted with the results shown in Table 2:
*
Tests 1 and 2: The x86 MinGW build of the GNU C++ compiler v4.5 was used to
compile two projects which use the Boost Spirit Classic C++ recursive descent
parser generator library v1.43. These tests were chosen as good examples of
modern C++ code using extensive template meta-programming, and were conducted
by compiling a file which uses the C preprocessor to include all the source
files of the project into a single compiland. The first test compiles an ISO C
grammar parser kai which is a relatively easy test typical of current
workloads, whereas the second test compiles most of an ISO C++98 grammar
parser sca written in very heavy C++0x metaprogramming. This second test is
likely indicative of compiler workloads in a few years’ time.
Both tests were conducted with variation in flags, with (a) ‘-O0’ (no
optimisation), (b) ‘-g -O0’ (debug info, no optimisation), (c) ‘-O3’ (full
optimisation) and (d) ‘-g -O3’ (debug info, full optimisation) chosen. As one
can see from the results in Table 2, the improvement given by the user mode
page allocator appears to be much more dependent on compiler flags rather than
memory usage with the least improvement given by turning on optimisation and
the most improvement given by turning on the inclusion of debug information.
*
Test 3: A VBA automation script was written for Microsoft Word 2007 x86 which
times how long it takes to load a document, paginate it ready for printing and
publish the document as a print-ready PDF before closing the document.
Document (a) is a 200,000 word, five hundred page book with a large number of
embedded graphs and high resolution pictures, whereas document (b) is a set of
twenty-one extremely high resolution forty megapixel (or better) photographs.
Surprisingly, the user mode page allocator gave a net performance decrease for
document (a) and barely affected the result for document (b), despite the fact
that in the case of document (b) Word is definitely making twenty-one large
allocations and frees.
*
Test 4: The standard x86 MSVC build of Python v2.6.5 was used to run two
benchmarks taken from the Computer Language Benchmarks Game com . Two programs
were benchmarked: (a) binary-trees which is a simplistic adaptation of Hans
Boehm’s GCBench where $N$ binary trees are allocated, manipulated and
deallocated, and (b) k-nucleotide, which reads $N$ nucleotides from a FASTA
format text file into a hash table and then counts and writes out the
frequency of certain DNA sequences. For test (a) iterations were performed
for $13\le N\le 19$ and for test (b) for
$1{,}000{,}000\le N\le 10{,}000{,}000$.
*
Test 5: For the final test it was thought worthwhile to try modifying the
source code of a large application to specifically make use of the O(1)
features of the user mode page allocator. Having access to the source code of
a commercial military modelling and simulation application Allen and Black
[2005] which is known to make a series of expansions of large arrays, the core
Array container class was replaced with an implementation using realloc() and
three of its demonstration benchmarks performed: (a) Penetration Grid (b) Tank
and (c) Urban Town.
Table 2: Effects of the user mode page allocator on the performance of selected real world applications.

| Test | Peak Memory Usage | Improvement |
|---|---|---|
| Test 1a (G++) | 198Mb | +2.99% |
| Test 1b (G++) | 217Mb | +1.19% |
| Test 1c (G++) | 250Mb | +5.68% |
| Test 1d (G++) | 320Mb | +4.44% |
| Test 2a (G++) | 410Mb | +3.04% |
| Test 2b (G++) | 405Mb | +1.20% |
| Test 2c (G++) | 590Mb | +5.25% |
| Test 2d (G++) | 623Mb | +3.98% |
| Test 3a (Word) | 119Mb | -4.05% |
| Test 3b (Word) | 108Mb | +0.67% |
| Test 4a (Python) | 6 - 114Mb | [-0.25% - +1.27%], avrg. +0.47% |
| Test 4b (Python) | 92 - 870Mb | [-0.41% - +1.73%], avrg. +0.35% |
| Test 5a (Solver) | – | +1.41% |
| Test 5b (Solver) | – | +1.07% |
| Test 5c (Solver) | – | +0.58% |

_Mean = +1.88%_, _Median = +1.20%_, _Chi-squared probability of independence $p$ = 1.0_
Figure 12: An illustration of structure present in the results shown by Table
2.
Figure 12 sorts these results and shows linear regression lines for the
clusters subsequently identified. The high line – entirely that of g++ –
displays a clear relationship, whereas the lower line is much more of a mix of
applications. This structure, in addition to the total lack of correlation
between the improvements and memory usage, strongly suggests that user mode
page allocation has highly task-specific effects which are not general in
nature. In other words, user mode page allocation affects a minority of
operations a lot while having little to no effect on anything else.
As mentioned earlier, even half a percent improvement in a real world
application can be significant depending on how that application uses its
memory allocator. Apart from one of the tests involving Microsoft Word 2007,
all of the tests showed a net positive gain when that application – otherwise
unmodified except for Test 5 – is running under the prototype user mode page
allocator. Given a properly implemented user mode page allocator with full
operating system and batch API support, one could only expect these results to
be significantly improved still further, though still by no more than a few
percent overall in the examples above.
## 4 Conclusion
To quote P.J. Denning (1996) as he reflected on the history of
computer memory management:
> _“From time to time over the past forty years, various people have argued
> that virtual memory is not really necessary because advancing memory
> technology would soon permit us to have all the random-access main memory we
> could possibly want. Such predictions assume implicitly that the primary
> reason for virtual memory is automatic storage allocation of a memory
> hierarchy. The historical record reveals, to the contrary, that the driving
> force behind virtual memory has always been simplifying programs (and
> programming) by insulating algorithms from the parameters of the memory
> configuration and by allowing separately constructed objects to be shared,
> reused, and protected. The predictions that memory capacities would
> eventually be large enough to hold everything have never come true and there
> is little reason to believe they ever will. And even if they did, each new
> generation of users has discovered that its ambitions for sharing objects
> led it to virtual memory. Virtual memory accommodates essential patterns in
> the way people use computers. It will still be used when we are all gone.”_
> (pp. 13–14, emphasis added).
It would be churlish to disagree with the great Prof. Denning, and it is hoped
that no one has taken this paper to mean this. Rather, the time series data
presented in Figures 2–4 were intended to show that the rate of growth of
growth (the second derivative) in RAM capacity has been outstripping the rate
of growth of growth in magnetic storage since approximately 1997 – this being
a fact unavailable to Prof. Denning at the time he wrote the quote above. Such
a fundamental change must both introduce new inconveniences for programmers
and remove old ones as the effort-to-reward and cost-to-benefit curves shift
dramatically. This paper has proposed the technique of user mode page
allocation as one part of a solution to what will no doubt be a large puzzle.
Nothing proposed by this paper is a proposal against virtually _addressed_
memory: this is extremely useful because it lets programmers easily program
today for tomorrow’s hardware, and it presents an effectively uniform address
space which is easy to conceptualise mentally and work with via pointer
variables which are inherently one dimensional in nature. Back in the bad old
days of segment based addressing, one had a series of small ranges of address
space much the same as if one disables the MMU hardware and works directly
with physical RAM today. Because the introduction of an MMU adds considerable
latency to random memory access – especially so when that MMU is hardware
virtualised as this paper proposes – one must always ask the question: _does
the new latency added here more than offset the latency removed overall?_
The central argument of this paper is that if a user mode page allocator is
already no slower on otherwise unmodified existing systems today, then the
considerable simplifications in implementation it allows across the entire
system ought to considerably improve both performance and memory consumption
at a geometrically increasing rate if technology keeps improving along
historical trends. Not only this, but millions of programmer man hours ought
to be saved going on into the future by avoiding the writing and debugging of
code designed to reuse memory blocks rather than throwing them away and
getting new ones. One gets to make use of the considerably superior metadata
about memory utilisation available at the application layer. And finally, one
opens the door wide to easy and transparent addition of support for
trans-cluster NUMA application deployments, or other such hardware driven
enhancements of the future, without needing custom support in the local
kernel.
This paper agrees entirely with the emphasised sentiment in Prof. Denning’s
quote: generally speaking, _the simpler the better_. This must be contrasted
against the reality that greater software implementation parallelization will
be needed going on into the future, and parallelizability tends to contradict
sharing except in read-only situations. Therefore, one would also add that
generally speaking, _the simpler and fewer the interdependencies the better_
too because the more parallelizable the system, the better it will perform as
technology moves forward. To that end, operating system kernels will surely
transfer ever increasing amounts of their implementation into the process
space to the extent that entering the kernel only happens when one process
wishes to establish or disconnect communications with another process and
security arbitration is required. Given the trends in hardware virtualisation,
it is likely that simple hardware devices such as network cards will soon be
driven directly by processes from user mode code and the sharing of the device
will be implemented in hardware rather than in the kernel. One can envisage a
time not too far away now where for many common application loads, the kernel
is entered once only every few seconds – an eternity in a multi-gigahertz
world.
Effectively of course this is a slow transition into a ‘machine of machines’
where even the humble desktop computer looks more like a distributed
multiprocessing cluster rather than the enhanced single purpose calculator of
old. We already can perceive the likely future of a thin hypervisor presiding
over clusters of a few dozen specialised stream and serial processing cores,
with each cluster having a few dozen gigabytes of local memory with non-
unified access to the other clusters’ memory taking a few hundred cycles
longer. How can it be wise to consider the paradigm espoused by the 1970s C
malloc API as even remotely suitable for such a world? Yet when you examine
how memory is allocated today, it very rarely breaks the C memory allocation
paradigm first codified by the Seventh Edition of Unix all the way back in
1979 Laboratories .
Surely it has become time to replace this memory allocation paradigm with one
which can handle non-local, hardware assisted, memory allocation as easily as
any other? Let the application have direct but virtualised access to the bare
metal hardware. Let user mode page allocation replace kernel mode page
allocation!
### 4.1 Future and subsequent work
Without doubt, this paper raises far more questions than answers. While this
paper has tried to investigate how user mode page allocation would work in
practice, there is no substitute for proper implementation. Suitable syscall
support could be easily added to the Linux or Windows kernels for MMU
virtualisation capable hardware along with a kernel configuration flag
enabling the allocation of physical memory pages to non-superusers. One would
then have a perfect platform for future allocator research, including those
barely imaginable today.
The real world application testing showed up to a 5% performance improvement
when running under the user mode page allocator. How much of this is due to
lack of cache pollution introduced by the page fault handler, or the scale
invariant behaviour of the user mode page allocator up to 1Mb block sizes, or
simply due to differences in free space fragmentation or changes in cache
locality is unknown. Further work ought to be performed to discover the
sources of the observed performance increases, as well as determining whether
the aggregate improvements are introduced smoothly across whole program
operation or whether in fact some parts of the application run slower and are
in aggregate outweighed by other parts running quicker.
The present 1979 C malloc API has become a significant and mostly hidden
bottleneck to application performance, particularly in object oriented
languages, because (i) object containers have no way of batching a known
sequence of allocations or deallocations, e.g. std::list<T>(10000) or even
std::list<T>::~list(), and (ii) realloc() is incapable of
inhibiting address relocation which forces a copy or move construction of
every element in an array when resized Kimpe et al. [2006]; Buhr [2010].
Considering how much more important this overhead would become under a user
mode page allocator, after writing this paper during the summer of 2010 I
coordinated the collaborative development of a “latency reducing v2 malloc
API” for C1X. After its successful conclusion, a proposal for a change to the
ISO C1X library specification standard was constructed and submitted to WG14
as N1527 n [15], and two C99 reference implementations of N1527 were
written and made available to the public c1x . This proposal adds batch memory
allocation and deallocation along with much finer hinting and control to the
allocator and VM implementations. As an example of what N1527 can bring to
C++, a simple std::list<int>(4000000) runs 100,000 times faster when
batch_alloc1() is used for construction instead of multiple calls to operator
new! I would therefore urge readers to take the time to examine N1527 for
deficiencies and to add support for its proposed API to their allocators.
The author would like to thank Craig Black, Kim J. Allen and Eric Clark from
Applied Research Associates Inc. of Niceville, Florida, USA for their
assistance during this research, and to Applied Research Associates Inc. for
sponsoring the development of the user mode page allocator used in the
research performed for this paper. I would also like to thank Doug Lea of the
State University of New York at Oswego, USA; David Dice from Oracle Inc. of
California, USA; and Peter Buhr of the University of Waterloo, Canada for
their most helpful advice, detailed comments and patience.
## References
* Lea and Gloger [2000] D. Lea and W. Gloger. A memory allocator, 2000.
* Berger et al. [2000] E.D. Berger, K.S. McKinley, R.D. Blumofe, and P.R. Wilson. Hoard: A scalable memory allocator for multithreaded applications. _ACM SIGPLAN Notices_ , 35(11):117–128, 2000\.
* Michael [2004] M.M. Michael. Scalable lock-free dynamic memory allocation. In _Proceedings of the ACM SIGPLAN 2004 conference on Programming language design and implementation_ , pages 35–46. ACM, 2004.
* Hudson et al. [2006] R.L. Hudson, B. Saha, A.R. Adl-Tabatabai, and B.C. Hertzberg. McRT-Malloc: A scalable transactional memory allocator. In _Proceedings of the 5th international symposium on Memory management_ , page 83. ACM, 2006.
* Berger et al. [2002] E.D. Berger, B.G. Zorn, and K.S. McKinley. Reconsidering custom memory allocation. _ACM SIGPLAN Notices_ , 37(11):12, 2002.
* Berger et al. [2001] E.D. Berger, B.G. Zorn, and K.S. McKinley. Composing high-performance memory allocators. _ACM SIGPLAN Notices_ , 36(5):114–124, 2001.
* Udayakumaran and Barua [2003] S. Udayakumaran and R. Barua. Compiler-decided dynamic memory allocation for scratch-pad based embedded systems. In _Proceedings of the 2003 international conference on Compilers, architecture and synthesis for embedded systems_ , page 286. ACM, 2003\.
* [8] The Boost C++ “Pool” library which can be found at http://www.boost.org/doc/libs/release/libs/pool/doc/.
* Dice et al. [2010] D. Dice, Y. Lev, V.J. Marathe, M. Moir, D. Nussbaum, and M. Olszewski. Simplifying Concurrent Algorithms by Exploiting Hardware Transactional Memory. In _Proceedings of the 22nd ACM symposium on Parallelism in algorithms and architectures_ , pages 325–334. ACM, 2010.
* Afek, U., Dice, D. and Morrison, A. [2010] Afek, U., Dice, D. and Morrison, A. Cache index-Aware Memory Allocation. In _<UNKNOWN AS YET>_, 2010.
* Fisher and Pry [1972] J.C. Fisher and R.H. Pry. A simple substitution model of technological change. _Technological Forecasting and Social Change_ , 3:75–88, 1972.
* Meyer [1994] P. Meyer. Bi-logistic growth. _Technological Forecasting and Social Change_ , 47(1):89–102, 1994.
* Meyer and Ausubel [1999] P.S. Meyer and J.H. Ausubel. Carrying Capacity: A Model with Logistically Varying Limits. _Technological Forecasting and Social Change_ , 61(3):209–214, 1999.
* Meyer et al. [1999] P.S. Meyer, J.W. Yung, and J.H. Ausubel. A Primer on Logistic Growth and Substitution: The Mathematics of the Loglet Lab Software. _Technological Forecasting and Social Change_ , 61(3):247–271, 1999.
* [15] A list of historical magnetic hard drive sizes and prices can be found at http://ns1758.ca/winch/winchest.html.
* wik [a] More recent data on historical magnetic hard drive sizes and prices can be found at http://www.mattscomputertrends.com/harddiskdata.html, a.
* [17] A detailed chronological history of solid state drive development can be found at http://www.storagesearch.com/.
* [18] P.J. Denning. Before Memory Was Virtual. http://cs.gmu.edu/cne/pjd/PUBS/bvm.pdf.
* Denning [1970] P.J. Denning. Virtual memory. _ACM Computing Surveys (CSUR)_ , 2(3):153–189, 1970.
* [20] A list of random access latency times for recent hard drives can be found at http://www.tomshardware.com/charts/raid-matrix-charts/Random-Access-Time,227.html.
* Desnoyers [2010] P. Desnoyers. Empirical evaluation of NAND flash memory performance. _ACM SIGOPS Operating Systems Review_ , 44(1):50–54, 2010.
* [22] J.C. McCallum. A list of historical memory sizes and prices can be found at http://www.jcmit.com/memoryprice.htm.
* wik [b] A list of historical memory interconnect speeds can be found at http://en.wikipedia.org/wiki/List_of_device_bit_rates#Memory_Interconnect.2FRAM_buses, b.
* [24] The datasheet for the Intel Q6700 microprocessor can be found at http://download.intel.com/design/processor/datashts/31559205.pdf.
* Navarro et al. [2002] J. Navarro, S. Iyer, P. Druschel, and A. Cox. Practical, transparent operating system support for superpages. _ACM SIGOPS Operating Systems Review_ , 36(SI):104, 2002.
* Bloom [1970] B.H. Bloom. Space/time trade-offs in hash coding with allowable errors. _Communications of the ACM_ , 13(7):422–426, 1970\.
* Curtis et al. [1979] B. Curtis, S.B. Sheppard, and P. Milliman. Third time charm: Stronger prediction of programmer performance by software complexity metrics. In _Proceedings of the 4th international conference on Software engineering_ , page 360. IEEE Press, 1979.
* Kearney et al. [1986] J.P. Kearney, R.L. Sedlmeyer, W.B. Thompson, M.A. Gray, and M.A. Adler. Software complexity measurement. _Communications of the ACM_ , 29(11):1050, 1986\.
* Khoshgoftaar and Munson [1990] T.M. Khoshgoftaar and J.C. Munson. Predicting software development errors using software complexity metrics. _IEEE Journal on Selected Areas in Communications_ , 8(2):253–261, 1990.
* Banker et al. [1998] R.D. Banker, G.B. Davis, and S.A. Slaughter. Software development practices, software complexity, and software maintenance performance: A field study. _Management Science_ , 44(4):433–450, 1998.
* Kimpe et al. [2006] Dries Kimpe, Stefan Vandewalle, and Stefaan Poedts. Evector: An efficient vector implementation - using virtual memory for improving memory. _Sci. Program._ , 14(2):45–59, 2006. ISSN 1058-9244.
* [32] L. Eggert. An email thread about pre-zeroing pages in the FreeBSD and Linux kernels can be found at http://www.mail-archive.com/freebsd-hackers@freebsd.org/msg13993.html.
* Amdahl [1967] G.M. Amdahl. Validity of the single processor approach to achieving large scale computing capabilities. In _Proceedings of the April 18-20, 1967, spring joint computer conference_ , pages 483–485. ACM, 1967.
* [34] The C grammar parser was written 2001-2004 by Hartmut Kaiser and can be found at http://boost-spirit.com/repository/applications/c.zip.
* [35] The C++ grammar parser, called ‘Scalpel’, was written by Florian Goujeon and can be found at http://42ndart.org/scalpel/.
* [36] The Computer Language Benchmarks Game can be found at http://shootout.alioth.debian.org/.
* Allen and Black [2005] K.J. Allen and C. Black. Implementation of a framework for vulnerability/lethality modeling and simulation. In _Proceedings of the 37th conference on Winter simulation_ , page 1153. Winter Simulation Conference, 2005.
* [38] Bell Laboratories. The Manual for the Seventh Edition of Unix can be found at http://cm.bell-labs.com/7thEdMan/bswv7.html.
* Buhr [2010] P.A. Buhr. Making realloc work with memalign. In _unpublished_ , 2010.
* n [15] The N1527 proposal lying before ISO WG14 C1X can be found at http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1527.pdf.
* [41] Two C99 reference implementations of the N1527 proposal can be found at http://github.com/ned14/C1X_N1527.
|
arxiv-papers
| 2011-05-09T22:26:15 |
2024-09-04T02:49:18.678246
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Niall Douglas",
"submitter": "Niall Douglas",
"url": "https://arxiv.org/abs/1105.1811"
}
|
1105.1815
|
Niall Douglas
Draft 1
Mr. Niall Douglas BSc MA MBS MCollT ned Productions IT Consulting
http://www.nedproductions.biz/
# User Mode Memory Page Management
An old idea applied anew to the memory wall problem
(2010-2011)
###### Abstract
It is often said that one of the biggest limitations on computer performance
is memory bandwidth (i.e. “the memory wall problem”). In this position paper, I
argue that if historical trends in computing evolution (where growth in
available capacity is exponential and reduction in its access latencies is
linear) continue as they have, then this view is wrong – in fact we ought to
be concentrating on _reducing whole system memory access latencies_ wherever
possible, and by “whole system” I mean that we ought to look at how software
can be unnecessarily wasteful with memory bandwidth due to legacy design
decisions.
To this end I conduct a feasibility study to determine whether we ought to
virtualise the MMU for each application process such that it has direct access
to its own MMU page tables and the memory allocated to a process is managed
exclusively by the process and not the kernel. I find under typical conditions
that nearly scale invariant performance with respect to memory allocation size is possible
such that hundreds of megabytes of memory can be allocated, relocated, swapped
and deallocated in almost the same time as kilobytes (e.g. allocating 8Mb is
10x quicker under this experimental allocator than a conventional allocator,
and resizing a 128Kb block to 256Kb block is 4.5x faster). I find that first
time page access latencies are improved tenfold; moreover, because the kernel
page fault handler is never called, the lack of cache pollution improves whole
application memory access latencies increasing performance by up to 2x.
Finally, I try binary patching existing applications to use the experimental
allocation technique, finding almost universal performance improvements
without having to recompile these applications to make better use of the new
facilities.
As memory capacities continue to grow exponentially, applications will make
ever larger allocations and deallocations which under present page management
techniques will incur ever rising bandwidth overheads. The proposed technique
removes most of the bandwidth penalties associated with allocating and
deallocating large quantities of memory, especially in a multi-core
environment. The proposed technique is easy to implement, retrofits gracefully
onto existing OS and application designs and I think is worthy of serious
consideration by system vendors, especially if combined with a parallelised
batch allocator API such as N1527 currently before the ISO C1X programming
language committee.
###### category:
C.4 Performance of Systems Performance Attributes
###### category:
D.4.2 Operating Systems Allocation/Deallocation Strategies
††conference: Memory Systems Performance and Correctness 2011 June 5th 2011,
San Jose, U.S.A.††terms: Scale Invariant, Memory Allocation, Memory Management Unit, Memory Paging, Page
Tables, Virtualization, faster array extension, Bare Metal, Intel VT-x, AMD-V,
Nested Page Tables, Extended Page Tables, Memory Wall
## 1 Introduction
There is a lot in the literature about the memory wall problem, and most of it
comes from either the hardware perspective Wulf and McKee [1995]; Saulsbury et
al. [1996]; McKee [2004]; Cristal et al. [2005]; Boncz et al. [2008] or from
the software and quite tangential perspective of the effects upon performance
of memory allocation techniques. As a quick illustration of the software
approaches, the Lea allocator, dlmalloc Lea and Gloger [2000], aims for a
reusable simplicity of implementation, whereas other allocators have a much
more complex implementation which makes use of per-processor heaps, lock-free,
cache-line locality and transactional techniques Berger et al. [2000]; Michael
[2004]; Hudson et al. [2006]. Many still believe strongly in the use of custom
application-specific allocators even though research has shown many of these
implementations to be sub-par Berger et al. [2002], whereas others believe
that the enhanced type information and metadata available to source compilers
allow superior allocators to be implemented at the compilation stage Berger et
al. [2001]; Udayakumaran and Barua [2003]; boo .
Yet there appears to me to be a lack of joined up thinking going on here. The
problem is not one of either hardware or software alone, but of _whole
application performance_ which looks at the entire system including the humans
and other systems which use it – a fundamentally non-linear problem. There is
some very recent evidence in the literature that this holistic approach is
becoming realised: for example, Hudson et al. [2006] and Dice et al. [2010];
Afek, U., Dice, D. and Morrison, A. [2010] looked
into how a memory allocator’s implementation strategy non-linearly affects
whole application performance via cache set associativity effects and cache
line localisation effects, with a very wide range of approaches and techniques
suggested.
To quote the well known note on hitting the memory wall by Wulf and McKee
[1995]:
> _“Our prediction of the memory wall is probably wrong too – but it suggests
> that we have to start thinking “out of the box”. All the techniques that the
> authors are aware of, including ones we have proposed, provide one-time
> boosts to either bandwidth or latency. While these delay the date of impact,
> they don’t change the fundamentals.”_ (p. 22).
Personally I don’t think that the fundamentals _are_ changeable – the well
established holographic principle (it even has a Wikipedia page which is not
bad at http://en.wikipedia.org/wiki/Holographic_principle) clearly shows that
maximal entropy in any region scales with its radius _squared_ and not cubed
as might be expected Bekenstein [2003] i.e. it is the _boundary_ to a region
which determines its maximum information content, not its volume. Therefore,
growth in storage capacity and the speed in accessing that storage will always
diverge exponentially for exactly the same reason as why mass – or
organisations, or anything which conserves information entropy – face
decreasing marginal returns for additional investment into extra order.
This is not to suggest that we are inevitably doomed in the long run – rather
that, as Wulf and McKee also suggested, we need to start revisiting previously
untouchable assumptions. In particular, we need to identify the “low hanging
fruit” i.e. those parts of system design which consume or cause the
consumption of a lot of memory bandwidth due to legacy design and algorithmic
decisions and replace them with functionally equivalent solutions which are
more intelligent. This paper suggests one such low hanging fruit – the use of
page faulted virtual memory.
## 2 How computing systems currently manage memory
Virtual memory, as originally proposed by Denning in 1970 Denning [1970], has
become so ingrained into how we think of memory that it has become taken for
granted. However, back when it was first introduced, virtual memory was
_controversial_ and for good reason, and I think it worthwhile that we recap
exactly why.
Virtual memory has two typical meanings which are often confused: (i) the MMU
hardware which maps physical memory pages to appear at arbitrary (virtual)
addresses and (ii) the operating system support for using a paging device to
store less frequently used memory pages, thus allowing the maximal use of the
available physical RAM in storing the most used memory pages. As Denning
pointed out in 1970 Denning [1970], again with much more empirical detail in
1980 Denning [1980] and indeed in a reflection upon his career in 1996 Denning
[19], virtual memory had the particular convenience of letting programmers write
_as if_ the computer had lots of low latency memory. The owner of a particular
computer could then trade off between cost of additional physical RAM and
execution speed, with the kernel trying its best to configure an optimal
_working set_ of most frequently used memory pages for a given application.
We take this design for granted nowadays. But think about it: there are
several assumptions in this choice of design, and these are:
1. 1.
That lower latency memory capacity (physical RAM) is expensive and therefore
scarce compared to other forms of memory.
2. 2.
That higher latency memory capacity (magnetic hard drives) is cheaper and
therefore more plentiful.
3. 3.
That growth in each will keep pace with the other over time (explicitly stated
for example in Wulf and McKee [1995]).
4. 4.
That magnetic storage has a practically infinite write cycle lifetime (so
thrashing the page file due to insufficient RAM won’t cause self-destruction).
5. 5.
And therefore we ought to maximise the utilisation of as much scarce and
expensive RAM as possible by expending CPU cycles on copying memory around in
order to allow programmers to write software for tomorrow’s computers by using
magnetic storage to “fake” tomorrow’s memory capacities.
As a result, we have the standard C malloc API (whose design is almost
unchanged since the 1970s [20] and which is used by most applications
and languages) which takes no account of virtual memory _at all_ – each
application allocates memory as though it is the sole application in the
system. The kernel then _overrides_ the requested memory usage of each process
on the basis of a set of statistical assumptions about which pages might
actually be used, by _faking_ an environment for the process where it is alone
in the system and can have as much memory as it likes – using the page fault
handler to load and save memory pages to the page file, polluting the
processor caches in doing so. Meanwhile, programmers who find pathological
performance in some corner cases of this environment write code which is
specifically designed to fool the kernel into believing that memory is being
used when it is not, even though such prefaulting hammers the CPU caches and
is fundamentally a waste of CPU time and resources.
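As a generic illustration of the kind of workaround being referred to (a sketch only, not taken from any particular codebase), prefaulting usually amounts to touching one byte per page immediately after allocation:

```c
#include <stdlib.h>
#include <unistd.h>

/* Touch one byte in every page of a freshly allocated block so that the
   kernel backs each page right away, trading wasted CPU time and cache
   pollution now for predictable access latency later. */
static void *malloc_prefaulted(size_t bytes)
{
    long page = sysconf(_SC_PAGESIZE);
    char *p = malloc(bytes);
    if (p != NULL && page > 0)
        for (size_t off = 0; off < bytes; off += (size_t)page)
            p[off] = 0;               /* the write forces a hard fault now */
    return p;
}
```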
This design is stupid, but choosing it was not. There is good reason why it
has become so ubiquitous, and I personally cannot think of a better least
worst solution given the assumptions above.
## 3 Historical trends in growths of capacities and speeds
### 3.1 The increasing non-substitutability of RAM
Figure 1: A log plot of bytes available per inflation adjusted US$ from 1980
to 2010 for conventional magnetic hard drives and flash based solid state disk
drives. Magnetic hard drives are clearly coming to the end of their logistic
growth curve trajectory, whereas flash drives are still to date undergoing the
exponential growth part of their logistic growth curve. Sources: win ; wik
[a]; sto .
Unknowable until the early 2000s222It is unknowable because one cannot know when
logistic growth will end until well past its point of inflection i.e. when the
second derivative goes negative., Figure 1 shows that magnetic non-
volatile storage is due to be replaced by flash based storage. Unlike
magnetic storage, whose average random access latency may vary between 8-30ms
[24] and whose variance depends strongly on the distance between the
most recently accessed location and the next location, flash based storage has
a flat and uniform random access latency just like RAM. Where DDR3 RAM may
have a 10-35ns read/write latency, current flash storage has read/write
latencies of 10-20µs and 200-250µs respectively Desnoyers [2010], which is only
three orders of magnitude slower than reading and four orders slower than
writing RAM, versus the six orders of difference against magnetic storage.
What this means is that the relative overhead of page faulted virtual memory
on flash based storage becomes _much larger_ – several thousand fold larger –
relative to overall performance than when it is based on magnetic storage
swap files.
Furthermore flash based storage has a limited write cycle lifetime which makes
it inherently unsuitable for use as temporary volatile storage. This has a
particular consequence, and let me make this clear: there is no longer any
valid substitute for RAM.
### 3.2 How growth in RAM capacity is going to far outstrip our ability to
access it
Figure 2: A plot of the relative growths since 1997 of random access memory
(RAM) speeds and sizes for the best value (in terms of Mb/US$) memory sticks
then available on the US consumer market from 1997 - 2009, with speed depicted
by squares on the left hand axis and with size depicted by diamonds on the
right hand axis. The dotted line shows the best fit regression for speed which
is linear, and the dashed line shows the best fit for size which is a power
regression. Note how that during this period memory capacity outgrew memory
speed by a factor of ten. Sources: McCallum wik [b].
As Figure 2 shows, growth in capacity for the “value sweetspot” in RAM (i.e.
those RAM sticks with the most memory for the least price at that time) has
outstripped growth in the access speed of the same RAM by a factor of ten in
the period 1997-2009, so while we have witnessed an impressive 25x growth in
RAM speed we have also witnessed a 250x growth in RAM capacity during the same
time period. Moreover, growth in capacity is exponential versus a linear
growth in access speed, so the differential is only going to dramatically
increase still further in the next decade. Put another way, assuming that
trends continue and all other things being equal, if it takes a 2009 computer
160ms to access all of its memory at least once, it will take a 2021 computer
5,070 years to do the same thing333No one is claiming that this will actually
be the case as the exponential phase of logistic growth will surely have ended
before 2021..
### 3.3 Conclusions
So let’s be really clear:
1. 1.
Non-volatile storage will soon be unsubstitutable for RAM.
2. 2.
RAM capacity is no longer scarce.
3. 3.
What is scarce, and going to become a LOT more scarce, is memory access speed.
4. 4.
Therefore, if the growth in RAM capacity over its access speed continues, we
are increasingly going to see applications constrained not by insufficient
storage, but by insufficiently fast access _to_ storage.
The implications of this change are profound for all users of computing
technology, but especially for those responsible for the implementations of
system memory management. What these trends mean is that historical
performance bottlenecks are metamorphosing into something new. And that
implies, especially given today’s relative abundance and lack of
substitutability of memory capacity, that page faulted virtual memory needs to
be eliminated and replaced with something that introduces less latency.
## 4 Replacing page faulted virtual memory
### 4.1 How much latency does page faulted virtual memory actually introduce
into application execution?
Sadly, I don’t have the resources available to me to find out a full and
proper answer to this, but I was able to run some feasibility testing. Proper
research funding would be most welcome.
Figure 3: A log-log plot of how much overhead paged virtual memory allocation
introduces over non-paged memory allocation according to block size.
Figure 3 shows how much overhead is introduced in a best case scenario by
fault driven page allocation versus non-paged allocation for Microsoft Windows
7 x64 and Linux 2.6.32 x64 running on a 2.67GHz Intel Core 2 Quad processor.
The test allocates a block of a given size and writes a single byte in each
page constituting that block, then frees the block. On Windows the Address
Windowing Extensions (AWE) functions AllocateUserPhysicalPages() et al. were
used to allocate non-paged memory versus the paged memory returned by
VirtualAlloc(), whereas on Linux the special flag MAP_POPULATE was used to ask
the kernel to prefault all pages before returning the newly allocated memory
from mmap(). As the API used to perform the allocation and free is completely
different on Windows, one would expect partially incommensurate results.
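For reference, a minimal sketch of the Linux side of such a test might look like the following; this is indicative only (the actual harness, averaging and the Windows AWE path are omitted), with rdtsc used for cycle counting:

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <x86intrin.h>                     /* __rdtsc() */

/* Map 'bytes' of anonymous memory, time a single-byte write to each page,
   then unmap.  With populate != 0, MAP_POPULATE asks the kernel to prefault
   every page before mmap() returns, so the timed loop sees no page faults;
   without it, every touch takes a fault through the kernel handler. */
static uint64_t touch_pages(size_t bytes, int populate)
{
    int flags = MAP_PRIVATE | MAP_ANONYMOUS | (populate ? MAP_POPULATE : 0);
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, bytes, PROT_READ | PROT_WRITE, flags, -1, 0);
    if (p == MAP_FAILED)
        return 0;
    uint64_t start = __rdtsc();            /* time only the traversal */
    for (size_t off = 0; off < bytes; off += page)
        p[off] = 1;
    uint64_t cycles = __rdtsc() - start;
    munmap(p, bytes);
    return cycles;
}

int main(void)
{
    for (size_t size = 16 * 1024; size <= 512u * 1024 * 1024; size *= 2)
        printf("%zu bytes: paged %llu cycles, prefaulted %llu cycles\n", size,
               (unsigned long long)touch_pages(size, 0),
               (unsigned long long)touch_pages(size, 1));
    return 0;
}
```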
As one can see, the overhead introduced by fault driven page allocation is
substantial with the overhead reaching 125% on Windows and 36% on Linux. The
overhead rises linearly with the number of pages involved up until
approximately 1-2Mb after which it drops dramatically. This makes sense: for
each page fault the kernel must perform a series of lookups in order to figure
what page should be placed where, so the overhead from the kernel per page
fault as shown in Figure 4 ought to be approximately constant at 2800 cycles
per page for Microsoft Windows 7. Something appears to be wrong with the Linux
kernel page fault handler here: it costs 3100 cycles per page up to 2Mb, which
seems reasonable, but after that it rapidly becomes 6500 cycles per page, which
suggests a TLB entry size dependency or a lack of easily available free
pages; this is made clear in Table 1. However that isn’t the whole story –
obviously enough, each time the kernel executes the page fault handler it must
traverse several thousand cycles worth of code and data, thus kicking whatever
the application is currently doing out of the CPU’s instruction and data
caches and therefore necessitating a reload of those caches (i.e. a memory
stall) when execution returns.
In other words, fault driven page allocation introduces a certain amount of
ongoing CPU cache pollution and therefore raises by several orders of magnitude
not just the latency of first access to a memory page not currently in RAM,
but also memory latency in general.
Table 1: Selected page fault allocation latencies for a run of pages on Microsoft Windows and Linux. | Microsoft Windows | Linux
---|---|---
| Paged | Non-paged | Paged | Non-paged
Size | cycles/page | cycles/page | cycles/page | cycles/page
16Kb | 2367 | 14.51 | 2847 | 15.83
1Mb | 2286 | 81.37 | 3275 | 14.53
16Mb | 2994 | 216.2 | 6353 | 113.4
512Mb | 2841 | 229.9 | 6597 | 115.9
Figure 4: A log-linear plot of how many CPU cycles are consumed per page by the
kernel page fault handler when allocating a run of pages according to block
size.
The point being made here is that when an application makes frequent changes
to its virtual address space layout (i.e. the more memory allocation and
deallocation it does), page faulted virtual memory introduces a lot of
additional and often hard to predict application execution latency through the
entire application as a whole. The overheads shown in Figures 3 and 4
represent a best-case scenario where there isn’t a lot of code and data being
traversed by the application – in real world code, the additional overhead
introduced by pollution of CPU caches by the page fault handler can become
sufficiently pathological that some applications deliberately prefault newly
allocated memory before usage.
### 4.2 What to use instead of page faulted virtual memory
I propose a very simple replacement of page faulted virtual memory: _a user
mode page allocator_. This idea is hardly new: Mach has an external pager
mechanism (University of Washington Dept. of Computer Science et al. [1990]),
V++ employed an external page cache management system Harty and Cheriton
[1992], Nemesis had self-paging Hand [1998] and Azul implements a pauseless GC
algorithm Click et al. [2005] which uses a special Linux kernel module to
implement user mode page management [32]. However something major has changed
just recently – the fact that we can now use nested page table support in
commodity PCs to hardware-assist user mode page management. As a result, the
cost of this proposal is perhaps just a 33-50% increase in page table walk
costs.
The design is simple: one virtualises the MMU tables for each process which
requests it (i.e. has a sufficiently new version of its C library). When you
call malloc, mmap et al. and new memory is needed, a kernel upcall is used to
asynchronously release and request physical memory page frame numbers of the
various sizes supported by the hardware and those page frames are mapped by
the C library to wherever needed via direct manipulation of its virtualised
MMU page tables. When you call free, munmap et al. and free space coalescing
determines that a set of pages is no longer needed, these are placed into a
_free page cache_. The kernel may occasionally send a signal to the process
asking for pages from this cache to be asynchronously or synchronously freed
according to a certain severity.
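As a rough, purely illustrative sketch of the free page cache part of this design (the names are invented, and an ordinary mmap stands in for both the asynchronous kernel upcall and the direct page table manipulation), it might be pictured as follows:

```c
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define CACHE_SLOTS 1024

/* Toy free page cache: pages released on free()/munmap() are parked here and
   handed straight back to the next allocation without kernel involvement or
   clearing.  In the real design the slow path would be an asynchronous kernel
   upcall returning page frame numbers to map into the process's virtualised
   MMU tables; here plain mmap stands in for that refill. */
static void  *cache[CACHE_SLOTS];
static size_t cached;

static void *page_acquire(void)
{
    if (cached > 0)
        return cache[--cached];             /* fast path: no fault, no clear */
    return mmap(NULL, (size_t)sysconf(_SC_PAGESIZE), PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);   /* slow path refill */
}

static void page_release(void *p)
{
    if (cached < CACHE_SLOTS)
        cache[cached++] = p;                /* keep the dirty page for reuse */
    else
        munmap(p, (size_t)sysconf(_SC_PAGESIZE));
}
```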
There are three key benefits to this design. The first is that under paged
virtual memory, when applications request new memory from the kernel it is not
actually allocated right there and then: instead it is allocated _and its
contents zeroed_ on first access, which introduces cache pollution as well as
using up lots of memory bandwidth. The user mode page allocator avoids
unnecessary page clears, or when they are necessary avoids them being performed
while the CPU is busy, by keeping a cache of free pages around which can be
almost instantly mapped to requirements without having to wait for the kernel
or for the page to be cleared. Because dirtied pages are still cleaned when
they move between processes, data security is retained and no security holes
are introduced, but pages are not cleared when they are relocated within the
same process, thus avoiding unnecessary page clearing or copying.
If this sounds very wasteful of memory pages, remember that _capacities_ are
increasing exponentially. Even for “big-iron” applications, access latencies
will soon be far more important than capacities. One can therefore afford to
‘throw’ memory pages at problems.
The second benefit is that a whole load of improved memory management
algorithms become available. For example, right now when extending a
std::vector<> in C++ one must allocate new storage and move construct each
object from the old storage into the new storage, then destruct the old
storage. Under this design one simply keeps some extra address space around
after the vector’s storage and maps in new pages as necessary – thus avoiding
the over-allocation typical in existing std::vector<> implementations. Another
example is that memory can be relocated from or swapped between A and B at a
speed invariant to the amount of data by simply remapping the data in
question. The list of potential algorithmic improvements goes on for some
time, but the final and most important one that I will mention here is that
memory management operations can be much more effectively _batched_ across
multiple cores, thus easily allowing large numbers of sequential allocations
and deallocations to be performed concurrently – something not easily possible
with current kernel based designs. As an example of the benefits of batching,
consider the creation of a four million item list in C++ which right now
requires four million separate malloc calls each of identical size. With a
batch malloc API such as N1527 proposed by myself and currently before the ISO
C1X standards committee n [15], one maps as many free pages as are available
for all four million members in one go (asynchronously requesting the
shortfall/reloading the free page cache from the kernel) and simply demarcates
any headers and footers required which is much more memory bandwidth and cache
friendly. While the demarcation is taking place, the kernel can be busy
asynchronously returning extra free pages to the process, thus greatly
parallelising the whole operation and therefore substantially reducing total
memory allocation latency by removing waits on memory.
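To make the earlier array extension idea concrete, it can already be approximated on a POSIX system today (without a user mode page allocator) by reserving address space up front and committing pages only as the array grows; a minimal sketch using only standard mmap/mprotect calls:

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Reserve a large run of address space with no backing pages, then commit
   pages in place as the array grows.  Elements never move, so no copying or
   move-construction is needed on growth, which is the property a user mode
   page allocator would provide to ordinary allocators as well. */
int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t reserve = (size_t)1 << 30;                /* 1 GiB of addresses */
    char *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    size_t committed = 0;
    for (size_t needed = page; needed <= 16 * page; needed *= 2) {
        if (needed > committed) {                    /* grow in place */
            if (mprotect(base + committed, needed - committed,
                         PROT_READ | PROT_WRITE) != 0)
                return 1;
            committed = needed;
        }
        base[needed - 1] = 42;                       /* use the new space */
    }
    printf("grew to %zu bytes without moving any data\n", committed);
    munmap(base, reserve);
    return 0;
}
```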
In case you think a batch malloc API unnecessary, consider that a batched four
million item list construction is around 100,000 times faster than at present.
And consider that compilers can easily aggregate allocations and frees to
single points, thus giving a free speed-up such that C++’s allocation speeds
might start approaching those of Java. And finally consider that most third
party memory allocators have provided batch APIs for some time, and that the
Perl language found an 18% reduction in start-up time by adopting a batch
malloc API [34].
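Purely to illustrate the shape of a batched allocation (the batch_alloc name and signature below are invented for this sketch and are not the N1527 interface), constructing the list nodes might look like this; a naive malloc loop stands in for the real batch path so the example is self-contained:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical batch allocator: fill 'out' with 'count' blocks of 'size'
   bytes each and return how many were produced.  A real implementation would
   map whole runs of free pages in one go and demarcate node headers inside
   them; this stand-in simply loops over malloc. */
static size_t batch_alloc(void **out, size_t count, size_t size)
{
    size_t i;
    for (i = 0; i < count; i++)
        if ((out[i] = malloc(size)) == NULL)
            break;
    return i;
}

struct node { struct node *next; int value; };

/* Build an n-item singly linked list from one batched allocation rather than
   n separate malloc calls. */
static struct node *build_list(void **blocks, size_t n)
{
    struct node *head = NULL;
    if (batch_alloc(blocks, n, sizeof(struct node)) != n)
        return NULL;                        /* sketch: no partial cleanup */
    for (size_t i = n; i-- > 0; ) {
        struct node *nd = blocks[i];
        nd->value = (int)i;
        nd->next  = head;
        head = nd;
    }
    return head;
}
```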
The third benefit is implied by the first two, but may be non-obvious:
memory allocation becomes invariant to the amount allocated. As mentioned
earlier, as capacity usage continues to be exchanged for lower latencies, the
amounts of memory allocated and deallocated are going to rise – a lot. A user
mode page allocator removes the overhead associated with larger capacity
allocation and deallocation as well as with its first time access – witness the
massive drop in latencies shown by Table 1.
#### 4.2.1 Testing the feasibility of a user mode page allocator
Figure 5: A log-linear plot of how the system and user mode page allocators
scale to block sizes between 4Kb and 1Mb. Adjusted no traverse means that
single byte write traversal costs were removed to make the results
commensurate.
To test the effects of a user mode page allocator, a prototype user mode page
allocator444It is open source and can be found at
http://github.com/ned14/nedmalloc. was developed which abuses the Address
Windowing Extensions (AWE) API of Microsoft Windows mentioned earlier in order
to effectively provide direct user mode access to the hardware MMU. The use of
the verb ‘abuses’ is the proper one: the AWE functions are intended for 32-bit
applications to make use of more than 4Gb of RAM, and they were never intended
to be used by 64 bit applications in arbitrarily remapping pages around the
virtual address space. Hence, due to API workarounds, the prototype user mode
page allocator runs (according to my testing) at least 10x slower than it ought
to were the API more suitable, and probably more like 40x slower when compared
to a memory copy directly into the MMU page tables – however, its dependency,
or lack thereof, on allocation size in real world applications should remain
clear. The results shown by Figure 5 speak for themselves.
Figure 6: A summary of the performance improvements in the Lea allocator
provided by the user mode page allocator (umpa) where 1.0 equals the
performance of the system page allocator.
Looking at the performance of malloc et al. as provided by the Lea allocator
Lea and Gloger [2000] under the user mode page allocator (where 1.0 equals the
performance of the Lea allocator under the Windows kernel page allocator),
Figure 6 illuminates an interesting 2x performance gain for very small
allocations when running under the user mode page allocator with a slow
decline in improvement as one approaches the 128Kb–256Kb range. This is
particularly interesting given that the test randomly mixes up very small
allocations with very big ones, so it is surprising that there should be a
log(allocation size) related speed-up at all. I would suggest that the most
likely cause is the absence of the cache pollution that a page fault handler
call for every previously unaccessed page would otherwise introduce.
Table 2: Effects of the user mode page allocator on the performance of selected real world applications. | Peak Memory |
---|---|---
Test | Usage | Improvement
Test 1a (G++): | 198Mb | +2.99%
Test 1b (G++): | 217Mb | +1.19%
Test 1c (G++): | 250Mb | +5.68%
Test 1d (G++): | 320Mb | +4.44%
Test 2a (G++): | 410Mb | +3.04%
Test 2b (G++): | 405Mb | +1.20%
Test 2c (G++): | 590Mb | +5.25%
Test 2d (G++): | 623Mb | +3.98%
Test 3a (MS Word): | 119Mb | -4.05%
Test 3b (MS Word): | 108Mb | +0.67%
Test 4a (Python): | 6 - 114Mb | [-0.25% - +1.27%], avrg. +0.47%
Test 4b (Python): | 92 - 870Mb | [-0.41% - +1.73%], avrg. +0.35%
Test 5a (Solver): | – | +1.41%
Test 5b (Solver): | – | +1.07%
Test 5c (Solver): | – | +0.58%
_Mean = +1.88%_
_Median = +1.20%_
_Chi-squared probability of independence $p$ = 1.0_
Table 2 shows the effect of the user mode page allocator on various real world
applications binary patched to use the Lea allocator. Clearly, the more an
application allocates memory during its execution (G++) rather than working on
existing memory (Python), the greater the benefit.
## 5 Conclusion and further work
Even a highly inefficient user mode page allocator implementation shows
impressive scalability and mostly positive effects on existing applications –
even without them being recompiled to take advantage of the much more
efficient algorithms made possible by virtualising the MMU for each process.
I would suggest that additional research be performed in this area – put
another way, easy to implement efficiency gains in software could save
billions of dollars by delaying development of additional hardware complexity
to manage the memory wall. And besides, page file backed memory is not just
unnecessary but performance sapping in modern systems.
The author would like to thank Craig Black, Kim J. Allen and Eric Clark from
Applied Research Associates Inc. of Niceville, Florida, USA for their
assistance during this research, and Applied Research Associates Inc. for
sponsoring the development of the user mode page allocator used in the
research performed for this paper. I would also like to thank Doug Lea of the
State University of New York at Oswego, USA; David Dice from Oracle Inc. of
California, USA; and Peter Buhr of the University of Waterloo, Canada for
their most helpful advice, detailed comments and patience.
## References
* Wulf and McKee [1995] W.A. Wulf and S.A. McKee. Hitting the memory wall: Implications of the obvious. _Computer Architecture News_ , 23:20–20, 1995. ISSN 0163-5964.
* Saulsbury et al. [1996] A. Saulsbury, F. Pong, and A. Nowatzyk. Missing the memory wall: The case for processor/memory integration. In _ACM SIGARCH Computer Architecture News_ , volume 24, pages 90–101. ACM, 1996. ISBN 0897917863.
* McKee [2004] S.A. McKee. Reflections on the memory wall. In _Proceedings of the 1st conference on Computing frontiers_ , page 162. ACM, 2004. ISBN 1581137419.
* Cristal et al. [2005] A. Cristal, O.J. Santana, F. Cazorla, M. Galluzzi, T. Ramirez, M. Pericas, and M. Valero. Kilo-instruction processors: Overcoming the memory wall. _Micro, IEEE_ , 25(3):48–57, 2005. ISSN 0272-1732.
* Boncz et al. [2008] P.A. Boncz, M.L. Kersten, and S. Manegold. Breaking the memory wall in MonetDB. _Communications of the ACM_ , 51(12):77–85, 2008. ISSN 0001-0782.
* Lea and Gloger [2000] D. Lea and W. Gloger. A memory allocator, 2000.
* Berger et al. [2000] E.D. Berger, K.S. McKinley, R.D. Blumofe, and P.R. Wilson. Hoard: A scalable memory allocator for multithreaded applications. _ACM SIGPLAN Notices_ , 35(11):117–128, 2000.
* Michael [2004] M.M. Michael. Scalable lock-free dynamic memory allocation. In _Proceedings of the ACM SIGPLAN 2004 conference on Programming language design and implementation_ , pages 35–46. ACM, 2004.
* Hudson et al. [2006] R.L. Hudson, B. Saha, A.R. Adl-Tabatabai, and B.C. Hertzberg. McRT-Malloc: A scalable transactional memory allocator. In _Proceedings of the 5th international symposium on Memory management_ , page 83. ACM, 2006.
* Berger et al. [2002] E.D. Berger, B.G. Zorn, and K.S. McKinley. Reconsidering custom memory allocation. _ACM SIGPLAN Notices_ , 37(11):12, 2002.
* Berger et al. [2001] E.D. Berger, B.G. Zorn, and K.S. McKinley. Composing high-performance memory allocators. _ACM SIGPLAN Notices_ , 36(5):114–124, 2001.
* Udayakumaran and Barua [2003] S. Udayakumaran and R. Barua. Compiler-decided dynamic memory allocation for scratch-pad based embedded systems. In _Proceedings of the 2003 international conference on Compilers, architecture and synthesis for embedded systems_ , page 286. ACM, 2003.
* [13] The Boost C++ ”Pool” library which can be found at http://www.boost.org/doc/libs/release/libs/pool/doc/.
* Dice et al. [2010] D. Dice, Y. Lev, V.J. Marathe, M. Moir, D. Nussbaum, and M. Olszewski. Simplifying Concurrent Algorithms by Exploiting Hardware Transactional Memory. In _Proceedings of the 22nd ACM symposium on Parallelism in algorithms and architectures_ , pages 325–334. ACM, 2010.
* Afek, U., Dice, D. and Morrison, A. [2010] Afek, U., Dice, D. and Morrison, A. Cache index-Aware Memory Allocation. In _¡UNKNOWN AS YET¿_ , 2010.
* Bekenstein [2003] J.D. Bekenstein. Information in the holographic universe. _SCIENTIFIC AMERICAN-AMERICAN EDITION-_ , 289(2):58–65, 2003. ISSN 0036-8733.
* Denning [1970] P.J. Denning. Virtual memory. _ACM Computing Surveys (CSUR)_ , 2(3):153–189, 1970.
* Denning [1980] P.J. Denning. Working sets past and present. _IEEE Transactions on Software Engineering_ , pages 64–84, 1980. ISSN 0098-5589.
* [19] P.J. Denning. Before Memory Was Virtual. http://cs.gmu.edu/cne/pjd/PUBS/bvm.pdf.
* [20] Bell Laboratories. The Manual for the Seventh Edition of Unix can be found at http://cm.bell-labs.com/7thEdMan/bswv7.html.
* [21] A list of historical magnetic hard drive sizes and prices can be found at http://ns1758.ca/winch/winchest.html.
* wik [a] More recent data on historical magnetic hard drive sizes and prices can be found at http://www.mattscomputertrends.com/harddiskdata.html, a.
* [23] A detailed chronological history of solid state drive development can be found at http://www.storagesearch.com/.
* [24] A list of random access latency times for recent hard drives can be found at http://www.tomshardware.com/charts/raid-matrix-charts/Random-Access-Tim%e,227.html.
* Desnoyers [2010] P. Desnoyers. Empirical evaluation of NAND flash memory performance. _ACM SIGOPS Operating Systems Review_ , 44(1):50–54, 2010.
* [26] J.C. McCallum. A list of historical memory sizes and prices can be found at http://www.jcmit.com/memoryprice.htm.
* wik [b] A list of historical memory interconnect speeds can be found at http://en.wikipedia.org/wiki/List_of_device_bit_rates#Memory_Inte%rconnect.2FRAM_buses, b.
* of Washington. Dept. of Computer Science et al. [1990] University of Washington. Dept. of Computer Science, D. McNamee, and K. Armstrong. _Extending the mach external pager interface to allow user-level page replacement policies_. 1990.
* Harty and Cheriton [1992] K. Harty and D.R. Cheriton. Application-controlled physical memory using external page-cache management. In _Proceedings of the fifth international conference on Architectural support for programming languages and operating systems_ , pages 187–197. ACM, 1992. ISBN 0897915348.
* Hand [1998] S.M. Hand. Self-paging in the Nemesis operating system. _Operating systems review_ , 33:73–86, 1998. ISSN 0163-5980.
* Click et al. [2005] C. Click, G. Tene, and M. Wolf. The pauseless GC algorithm. In _Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments_ , pages 46–56. ACM, 2005. ISBN 1595930477.
* [32] A paper on the Azul memory module can be found at http://www.managedruntime.org/files/downloads/AzulVmemMetricsMRI.pdf.
* n [15] The N1527 proposal lying before ISO WG14 C1X can be found at http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1527.pdf.
* [34] The perl-compiler discussion can be found at http://groups.google.com/group/perl-compiler/msg/dbb6d04c2665d265.
|
arxiv-papers
| 2011-05-09T22:39:46 |
2024-09-04T02:49:18.685750
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Niall Douglas",
"submitter": "Niall Douglas",
"url": "https://arxiv.org/abs/1105.1815"
}
|
1105.1834
|
# Design, Fabrication, and Experimental Demonstration of Junction Surface Ion
Traps
D. L. Moehring, C. Highstrete, D. Stick, K. M. Fortier, R. Haltli, C. Tigges,
and M. G. Blain Sandia National Laboratories, Albuquerque, New Mexico 87185,
USA dlmoehr@sandia.gov
###### Abstract
We present the design, fabrication, and experimental implementation of surface
ion traps with Y-shaped junctions. The traps are designed to minimize the
pseudopotential variations in the junction region at the symmetric
intersection of three linear segments. We experimentally demonstrate robust
linear and junction shuttling with greater than $10^{6}$ round-trip shuttles
without ion loss. By minimizing the direct line of sight between trapped ions
and dielectric surfaces, negligible day-to-day and trap-to-trap variations are
observed. In addition to high-fidelity single-ion shuttling, multiple-ion
chains survive splitting, ion-position swapping, and recombining routines. The
development of two-dimensional trapping structures is an important milestone
for ion-trap quantum computing and quantum simulations.
## 1 Introduction
The first requirement for quantum information processing is the ability to
build a “scalable physical system with well characterized qubits” [1]. In
recent years, research in trapped ion quantum information has concentrated
much effort toward creating scalable architectures for trapping and shuttling
large numbers of ions [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. Within this
effort, surface-electrode ion traps are generally regarded as the most
promising long-term approach due to the ability to fabricate complex trap
arrays leveraging advanced semiconductor and microfabrication techniques [14,
15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Two-dimensional ion-trap geometries,
such as those shown here, and ion trap arrays are key developments for
scalable implementations of ion-based quantum computation and simulation [25,
26, 27].
In this paper, we report the design, fabrication, and successful testing of
Y-junction surface ion traps. We demonstrate high-fidelity shuttling protocols
in three different traps of two different trap designs. This includes linear
and junction shuttling, splitting and recombining of ion chains, and ion
reordering. The surface microtraps reported here are both reproducible and
invariable. This is demonstrated by successful ion-shuttling solutions that
are identical for day-to-day operation as well as for multiple congeneric
traps.
## 2 Design and Fabrication
The design principles for the traps discussed here are based in part on
fabrication constraints and techniques described in our previous work [23].
These principles include minimized line of sight exposure between the ion and
dielectric surfaces to ensure shielding of any trapped dielectric charge [28],
and recessed bond pads to limit the projection of the wire-bond ribbons above
the top surface of the trap. In addition, the top metal layout in the vicinity
of the junction and the loading hole is optimized with respect to RF potential
and other characteristics by utilizing lateral shape-modulation of electrode
edges [Fig. 1].
As with all known junction traps, it is not possible to stabilize a trapped
charge with a vanishing RF field and zero ponderomotive potential everywhere
[29]. However, a suitable trade in performance characteristics can be achieved
by optimizing the predicted performance using a figure of merit as a function
of the electrode geometry. The trap geometries chosen here placed particular
emphasis on minimizing the magnitude and slope of the equilibrium
pseudopotential particularly in the junction region while maintaining a
specified ion equilibrium height range. Resulting junction designs are shown
in Figures 1 and 2. The spatial features of the RF electrodes in the junction
region decrease the pseudopotential barrier from greater than 1 eV for
straight RF electrodes to less than 2 meV so that the ion can be transported
through the junction with relatively reduced control voltages and motional
heating [12] [Fig. 3].
Figure 1: Scanning electron microscope (SEM) image of a
Y${}_{\textrm{H}}$-junction trap. In each trap, 47 independent DC electrodes
are routed for wire bonding to the CPGA. Inset: Image of 7 ions trapped above
the loading hole. The average ion-ion spacing in this image is $\approx
3.5~{}\mu$m.
Figure 2: SEM image of the central region of the Y${}_{\textrm{H}}$-junction
trap (left) and the Y${}_{\textrm{L}}$-junction trap (right). Note especially
the sharp edges in Y${}_{\textrm{H}}$ that are rounded in Y${}_{\textrm{L}}$.
Figure 3: Two different BE simulations of the trap pseudopotential. Both show
similar results from the junction center ($x=0~{}\mu$m) outward along a linear
segment of the Y${}_{\textrm{H}}$ trap. The loading hole is centered at
$x=853~{}\mu$m.
Another feature of these traps is the (70 $\mu$m $\times$ 86 $\mu$m) loading
hole [Fig. 4]. The presence of the loading hole, in contrast to a solid center
DC electrode, also gives rise to pseudopotential barriers that can be reduced
by modulating the edges of nearby RF and DC electrodes. The distance between
the center of the loading hole and the center of the junction is $853~{}\mu$m.
Electrostatic solutions were calculated using a boundary element (BE) method
to predict device performance. The BE models were created by describing
electrode geometries in the form of planar polygons in three spatial
dimensions in conjunction with additional meta information (i.e. ion
equilibrium position, BE mesh length scale) to control model fidelity. In
these models, the dielectric contribution has been ignored since these
materials are always well shielded by metal. The geometries were determined by
a parametric description in order to facilitate computer optimization.
Optimization of the electrode layout was accomplished by minimizing a design
cost function described by parametric values for a specific geometry. This
cost function was a sum of several positive semi-definite sub-cost terms that
were traded by weighting coefficients. The cost functions included figures of
merit for the ion height and the pseudopotential values and derivatives along
the equilibrium trap axis of one arm of the Y-junction. The sub-cost values at
points along the trap axis were not always treated uniformly – some points in
particular were weighted differently to effect a desired outcome not achieved
by solely varying the trade coefficients. For example, the ion height near the
junction was reduced in cost so as to not trade so heavily against the
pseudopotential minimization.
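To make the structure of such a cost function concrete, the following schematic sketch shows a weighted sum of per-point sub-costs; the names, weights and sub-cost terms are purely illustrative and do not reproduce the actual figure of merit used in this work:

```c
#include <math.h>
#include <stddef.h>

/* Schematic design cost: a weighted sum of positive semi-definite sub-costs
   evaluated at sample points along one arm's trap axis.  Per-point weights
   allow, e.g., points near the junction to trade ion height against
   pseudopotential differently from points in the linear sections. */
double design_cost(const double *height, const double *target_height,
                   const double *pseudo, const double *dpseudo,
                   const double *w_height, const double *w_pseudo,
                   double w_slope, size_t npoints)
{
    double cost = 0.0;
    for (size_t i = 0; i < npoints; i++) {
        double dh = height[i] - target_height[i];
        cost += w_height[i] * dh * dh        /* keep ion height in range     */
              + w_pseudo[i] * pseudo[i]      /* minimise the pseudopotential */
              + w_slope * fabs(dpseudo[i]);  /* and its slope along the axis */
    }
    return cost;
}
```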
Two different trap models were fabricated and tested. Trap
“Y${}_{\textrm{L}}$” has lower spatial frequencies on the electrode edge
shapes in the junction region, whereas the second trap “Y${}_{\textrm{H}}$”
has higher spatial frequencies [Fig. 2]. The higher spatial frequencies in
Y${}_{\textrm{H}}$ are predicted to further reduce the pseudopotential by
$\approx 50\%$ compared to Y${}_{\textrm{L}}$. Two trap versions were
fabricated due to a concern over the structural integrity of the cantilevered
electrode segments, caused by dielectric setbacks in the high spatial-
frequency regions of Y${}_{\textrm{H}}$ [23]. This concern was also addressed
by using a dielectric setback of only 2 $\mu$m for Y${}_{\textrm{H}}$, as
opposed to 5 $\mu$m for Y${}_{\textrm{L}}$. In the end, the structural
integrity of the aluminum was not an issue.
Figure 4: SEM image showing the detail of the loading hole and the modulated
edges of nearby electrodes.
## 3 Operation
Each fabricated ion-trap chip is packaged in a 100-pin ceramic pin grid array
(CPGA) providing electrical connections to the RF electrode and 47 independent
DC electrodes. Each trap is installed in a vacuum chamber with a base pressure
$\approx 5\times 10^{-11}$ Torr. The applied DC voltages are varied between
-10 V and +10 V using National Instruments PXI-6733 DAC cards. The applied RF
voltage is varied between 25 - 165 V for trapping in the loading hole and 85 -
120 V for junction shuttling. Trapping lifetimes are several hours when the
ions are Doppler cooled, and approximately one minute without laser cooling.
Calcium ions are loaded by first generating a stream of neutral calcium atoms
through the loading hole [Fig. 4], which are then photoionized using a
resonant 423 nm laser on the 4s${}^{1}$S${}_{0}\leftrightarrow$ 4p${}^{1}$P${}_{1}$ transition and
an ionizing 375 nm laser [30]. By placing the source of the atomic beam
beneath the chip, we minimize neutral atom plating on the top surface of the
chip, virtually eliminating any chance of shorting adjacent trap electrodes.
This is essential for consistent day-to-day trap operations [31].
The height of the ions above the top trap surface can be directly measured
after shuttling the ions from the loading hole. This is accomplished by
imaging the ion directly and imaging the reflected photons off the aluminum
center trap electrode [Figure 5]. The translation of the imaging lens between
the two images confirms the expected $\approx 70~{}\mu$m height of the ion
above the trap surface [20, 32].
Figure 5: Left: Direct image of the ion (upper) and indirect image of the ion
reflected by the aluminum trap surface (lower). Right: Elapsed-time image of
an ion shuttling $\approx 40$ microns up each arm in a Y${}_{\textrm{H}}$
trap. This 10 second image captures $\approx 40,000$ round-trip junction
shuttles. Rather than shuttling into the exact center of the junction, the
ions are intentionally steered along a smooth path between the linear
sections.
In total, one Y${}_{\textrm{L}}$ trap and two Y${}_{\textrm{H}}$ traps [Fig.
2] were tested, with initial work performed on the Y${}_{\textrm{L}}$ trap.
Here, trapping and linear shuttling tests were completed as in [23] with
greater than $10^{5}$ linear shuttles without ion loss. Junction shuttling was
also tested in the Y${}_{\textrm{L}}$ trap, and despite the lower spatial
frequency electrode variations, we performed $10^{6}$ round-trip shuttles
without ion loss. In a given round-trip, the ion traveled up each leg of the
Y-junction by $\approx 30~{}\mu$m, resulting in a total of $3\times 10^{6}$
passes through the junction.
Following these initial successes, testing began in two congeneric
Y${}_{\textrm{H}}$ traps in two independent test systems. As the
Y${}_{\textrm{H}}$ trap design differs from Y${}_{\textrm{L}}$ only in the
junction region, successful loading and linear shuttling voltage solutions
used for Y${}_{\textrm{L}}$ were demonstrated to work also in each
Y${}_{\textrm{H}}$ trap. In contrast, voltage solutions for ion transfer
through the modified junction electrodes required a new shuttling routine.
Interestingly, successful junction shuttling was observed in each
Y${}_{\textrm{H}}$ trap with a voltage-solution modification consisting of a
-0.5 volt adjustment on only the center electrode [33, 34].
Multiple ions were also shuttled with these solutions from the loading hole,
thrice through the junction, and back to the loading hole. One round trip
reverses the order of ions within the linear chain; however, this is difficult
to prove unequivocally as all ions were the same isotope (${}^{40}$Ca${}^{+}$). All of the
above mentioned tests utilized high-degree-of-freedom voltage solutions which
used up to 25 DC electrodes at a given time for linear shuttling and 35 DC
electrodes for junction shuttling.
After success with the high-degree-of-freedom solutions, we tested voltage
solutions with a reduced number of DC electrodes. This is convenient for
parallel shuttling of ions in multiple harmonic wells with minimal crosstalk.
It is also convenient to use voltage shuttling solutions with a constant
center DC electrode [33]. With solutions utilizing only the nearest 7 DC
electrodes at any one time in the linear regions and the central 13 DC
electrodes in the junction region (including the center electrode), ions were
shuttled to all functional regions of each trap. Ions survive these routines
with or without Doppler cooling during shuttling.
As in the Y${}_{\textrm{L}}$ trap, a single ion in a Y${}_{\textrm{H}}$ trap
successfully completed $10^{6}$ round-trip shuttles around the junction
without ion loss. With the ion moving $40$ microns up each arm, $10^{6}$
shuttles took about 4 minutes, whereas moving $250$ microns up each arm took
about 24 minutes. In the latter case, the ion traveled a total distance of
$1.5$ km at an average speed of $1~{}$m/s. Utilizing Doppler cooling during
the junction shuttling routine, the emission of photons from the ion resulted
in the trace of the ion path shown in Figure 5. Importantly, voltage solutions
successful in one Y${}_{\textrm{H}}$ trap were also successful in a second
identically constructed Y${}_{\textrm{H}}$ trap, even though the two traps
were tested in different vacuum chambers with independent RF and DC voltage
sources – only lasers were shared between the two setups. These solutions have
been successfully used without modification for over six months.
Finally, complex shuttling routines were implemented for multiple ions in
several locations on a given trap. For instance, three ions were consecutively
loaded, independently shuttled into each arm of the junction, and Doppler
cooled in a triangular configuration for over an hour without loss. Linear ion
chains were also split and recombined as seen in Fig. 6. By performing
independent junction shuttling between splitting and recombining, two ions
were unequivocally observed to exchange position in a linear chain.
Figure 6: Left: Ions within the same harmonic well separated by approximately
6 microns. Right: Ions separated by 4 electrodes (approximately 370 microns).
The splitting and recombining of two ions was explicitly observed hundreds of
times without error.
## 4 Conclusion
Reliable and repeatable micro-fabrication of complex ion-trapping structures
has been demonstrated, indicating that a scalable system of trapped ions for
quantum computation and quantum simulation is conceivable. By further
utilizing the device design and integration capabilities of multi-layer
fabrication techniques, the robust fabrication of more complex 2D and 3D trap
arrays is imminent [25]. For example, multi-level metalization (up to four
levels of metal) is being employed to accommodate nested trap electrodes and
to minimize electrode cross-talk, thereby enabling fundamentally new trap
array concepts.
## 5 Acknowledgments
The authors thank the members of Sandia’s Microsystems and Engineering
Sciences Application (MESA) facility for their fabrication expertise and Mike
Descour for helpful comments on the manuscript. This work was supported by the
Intelligence Advanced Research Projects Activity (IARPA). Sandia National
Laboratories is a multi-program laboratory managed and operated by Sandia
Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the
U.S. Department of Energy’s National Nuclear Security Administration under
contract DE-AC04-94AL85000.
## References
* [1] D. P. DiVincenzo. Fortschr. Phys. 48, 771 (2000).
* [2] D. J. Wineland, et al. Journal of Research of the National Institute of Standards and Technology 103, 259 (1998).
* [3] D. Kielpinski, C. Monroe, and D. Wineland. Nature 417, 709 (2002).
* [4] M. A. Rowe, et al. Quant. Inf. Comp. 2, 257 (2002).
* [5] M. J. Madsen, W. K. Hensinger, D. Stick, J. A. Rabchuk, and C. Monroe. Appl. Phys. B 78, 639 (2004).
* [6] D. Stick, et al. Nature Phys. 2, 36 (2006).
* [7] W. K. Hensinger, et al. Appl. Phys. Lett. 88, 034101 (2006).
* [8] R. Reichle, et al. Fortschr. Phys. 54, 666 (2006).
* [9] M. Brownnutt, G. Wilpers, P. Gill, R. C. Thompson, and A. G. Sinclair. New J. Phys. 8, 232 (2006).
* [10] D. Leibfried, et al. Hyperfine Interactions 174, 1 (2007).
* [11] J. M. Amini, J. Britton, D. Leibfried, and D. J. Wineland. arXiv:0812.3907v1 [quant-ph] (2008).
* [12] R. B. Blakestad, et al. Phys. Rev. Lett. 102, 153002 (2009).
* [13] M. D. Hughes, B. Lekitsch, J. A. Broersma, and W. K. Hensinger. arXiv:1101.3207v1 [quant-ph] (2011).
* [14] J. Chiaverini, et al. Quant. Inf. Comp. 5, 419 (2005).
* [15] J. Britton, et al. arXiv:quant-ph/0605170v1 (2006).
* [16] S. Seidelin, et al. Phys. Rev. Lett. 96, 253003 (2006).
* [17] K. R. Brown, et al. Phys. Rev. A 75, 015401 (2007).
* [18] J. Britton, et al. Appl. Phys. Lett. 95, 173102 (2009).
* [19] D. R. Leibrandt, et al. Quant. Inf. Comp. 9, 901 (2009).
* [20] D. T. C. Allcock, et al. New J. Phys. 12, 053026 (2010).
* [21] M. Hellwig, A. Bautista-Salvador, K. Singer, G. Werth, and F. Schmidt-Kaler. New J. Phys. 12, 065019 (2010).
* [22] J. M. Amini, et al. New J. Phys. 12, 033031 (2010).
* [23] D. Stick, et al. arXiv:1008.0990v2 [physics.ins-det] (2010).
* [24] C. Ospelkaus, et al. arXiv:1104.3573v1 [quant-ph] (2011).
* [25] R. Schmied, J. H. Wesenberg, and D. Leibfried. Phys. Rev. Lett. 102, 233002 (2009).
* [26] J. T. Barreiro, et al. Nature 470, 486 (2011).
* [27] M. Kumph, M. Brownnutt, and R. Blatt. arXiv:1103.5428v2 [quant-ph] (2011).
* [28] M. Harlander, M. Brownnutt, W. Hänsel, and R. Blatt. New J. Phys. 12, 093035 (2010).
* [29] J. H. Wesenberg. Phys. Rev. A 79, 013416 (2009).
* [30] S. Gulde, et al. Appl. Phys. B: Lasers and Optics 73, 861 (2001).
* [31] N. Daniilidis, et al. New J. Phys. 13, 013032 (2011).
* [32] P. F. Herskind, et al. arXiv:1011.5259v1 [quant-ph] (2011).
* [33] The center DC electrode is directly underneath the ion and extends the entire trapping region. A voltage applied to this electrode is applied to all regions of the trap.
* [34] Reverse compatibility of the solutions tested in the Y${}_{\textrm{H}}$ trap were not tested in the Y${}_{\textrm{L}}$ trap as it was no longer installed in a test setup.
|
arxiv-papers
| 2011-05-10T00:32:17 |
2024-09-04T02:49:18.691719
|
{
"license": "Public Domain",
"authors": "D. L. Moehring, C. Highstrete, D. Stick, K. M. Fortier, R. Haltli, C.\n Tigges, M. G. Blain",
"submitter": "David Moehring",
"url": "https://arxiv.org/abs/1105.1834"
}
|
1105.2070
|
# Poisson Hail on a Hot Ground
Francois Baccelli111 _INRIA and ENS, Paris, France_. The research of this
author was partially supported by EURONF. and Sergey Foss222 _Heriot-Watt
University, Edinburgh, UK and Institute of Mathematics, Novosibirsk, Russia_.
The research of this author was partially supported by INRIA and EURONF. The
authors thank the SCS Programme of the Isaac Newton Institute for Mathematical
Sciences where this work was completed.
(13 March 2011)
###### Abstract
We consider a queue where the server is the Euclidean space, and the customers
are random closed sets (RACS) of the Euclidean space. These RACS arrive
according to a Poisson rain and each of them has a random service time (in the
case of hail falling on the Euclidean plane, this is the height of the
hailstone, whereas the RACS is its footprint). The Euclidean space serves
customers at speed 1. The service discipline is a hard exclusion rule: no two
intersecting RACS can be served simultaneously and service is in the First In
First Out order: only the hailstones in contact with the ground melt at speed
1, whereas the other ones are queued; a tagged RACS waits until all RACS
arrived before it and intersecting it have fully melted before starting its
own melting. We give the evolution equations for this queue. We prove that it
is stable for a sufficiently small arrival intensity, provided the typical
diameter of the RACS and the typical service time have finite exponential
moments. We also discuss the percolation properties of the stationary regime
of the RACS in the queue.
> Keywords:
Poisson point process, Poisson rain, random closed sets, Euclidean space,
service, stability, backward scheme, monotonicity, branching process,
percolation, hard core exclusion processes, queueing theory, stochastic
geometry.
## 1 Introduction
Consider a Poisson rain on the $d$ dimensional Euclidean space
$\mathbb{R}^{d}$ with intensity $\lambda$; by Poisson rain, we mean a Poisson
point process of intensity $\lambda$ in $\mathbb{R}^{d+1}$ which gives the
(random) number of arrivals in all time-space Borel sets. Each Poisson
arrival, say at location $x$ and time $t$, brings a customer with two main
characteristics:
* •
A grain $C$, which is a RACS of $\mathbb{R}^{d}$ [9] centered at the origin.
If the RACS is a ball with random radius, its center is that of the ball. For
more general cases, the center of a RACS could be defined as e.g. its gravity
center.
* •
A random service time $\sigma$.
In the most general setting, these two characteristics will be assumed to be
marks of the point process. In this paper, we will concentrate on the simplest
case, which is that of an independent marking and independent and identically
distributed (i.i.d.) marks: the mark $(C,\sigma)$ of point $(x,t)$ has some
given distribution and is independent of everything else.
The customer arriving at time $t$ and location $x$ with mark $(C,\sigma)$
creates a hailstone, with footprint $x+C$ in $\mathbb{R}^{d}$ and with height
$\sigma$.
These hailstones do not move: they are to be melted/served by the Euclidean
plane at the location where they arrive in the FCFS order, respecting some
hard exclusion rules: if the footprints of two hailstones have a non empty
intersection, then the one arriving second has to wait for the end of the
melting/service of the first to start its melting/service. Once the service of
a customer is started, it proceeds uninterrupted at speed 1. Once a customer
is served/hailstone fully melted, it leaves the Euclidean space.
Notice that the customers being served at any given time form a hard exclusion
process as no two customers having intersecting footprints are ever served at
the same time. For instance, if the grains are balls, the footprint balls
concurrently served form a hard ball exclusion process. Here are a few basic
questions on this model:
* •
Does there exist any positive $\lambda$ for which this model is (globally)
stable? By stability, we mean that, for all $k$ and for all bounded Borel set
$B_{1},\ldots,B_{k}$, the vector $N_{1}(t),\ldots,N_{k}(t)$, where $N_{j}(t)$
denotes the number of RACS which are queued or in service at time $t$ and
intersect the Borel set $B_{j}$, converges in distribution to a finite random
vector when $t$ tends to infinity.
* •
If so, does the stationary regime percolate? By this, we mean that the union
of the RACS which are queued or in service in a snapshot of the stationary
regime has an infinite connected component.
The paper is structured as follows. In section 3, we study pure growth models
(the ground is cold and hailstones do not melt) and show that the heap formed
by the customers grows with (at most) linear rate with time and that the
growth rate tends to zero if the input rate tends to zero. We consider models
with service (hot ground) in section 4. Discrete versions of the problems are
studied in section 5.
## 2 Main Result
Our main result bears on the construction of the stationary regime of this
system.
As we shall see below (see in particular Equations (1) and (16)), the Poisson
Hail model falls in the category of infinite dimensional max plus linear
systems. This model has nice monotonicity properties (see sections 3 and 4).
However it does not satisfy the separability property of [2], which prevents
the use of general sub-additive ergodic theory tools to assess stability, and
makes the problem interesting.
Denote by $\xi$ the (random) diameter of the typical RACS (i.e. the maximal
distance between its points) and by $\sigma$ the service time of that RACS.
Assume that the system starts at time $t=0$ from the empty state and denote by
$W^{x}_{t}$ the time to empty the system of all RACS that contain point $x$
and that arrive by time $t$.
###### Theorem 1
Assume that the Poisson hail starts at time $t=0$ and that the system is empty
at that time. Assume further that the distributions of the random variables
$\xi^{d}$ and $\sigma$ are light-tailed, i.e. there is a positive constant $c$
such that ${\mathbf{E}}e^{c\xi^{d}}$ and ${\mathbf{E}}e^{c\sigma}$ are finite.
Then there exists a positive constant $\lambda_{0}$ (which depends on $d$ and
on the joint distribution of $\xi$ and $\sigma$) such that, for any
$\lambda<\lambda_{0}$, the model is globally stable. This means that, for any
finite set $A$ in $\mathbb{R}^{d}$, as $t\to\infty$, the distribution of the
random field $(W^{x}_{t},\ x\in A)$ converges weakly to the stationary one.
## 3 Growth Models
Let $\Phi$ be a marked Poisson point process in $\mathbb{R}^{d+1}$: for all
Borel sets $B$ of $\mathbb{R}^{d}$ and $a\leq b$, a r.v. $\Phi(B,[a,b])$
denotes the number of RACS with center located in $B$ that arrive in the time
interval $[a,b]$. The marks of this point process are i.i.d. pairs
$(C_{n},\sigma_{n})$, where $C_{n}$ is a RACS of $\mathbb{R}^{d}$ and
$\sigma_{n}$ is a height (in $\mathbb{R}_{+}$, the positive real line).
The growth model is best defined by the following equations satisfied by
$H_{t}^{x}$, the height at location $x\in\mathbb{R}^{d}$ of the heap made of
all RACS arrived before time $t$ (i.e. in the $(0,t)$ interval): for all
$t>u\geq 0$,
$H_{t}^{x}=H_{u}^{x}+\int_{[u,t)}\left(\sigma_{v}^{x}+\sup_{y\in
C_{v}^{x}}H^{y}_{v}-H_{v}^{x}\right)N^{x}(dv),$ (1)
where $N^{x}$ denotes the Poisson point process on $\mathbb{R}^{+}$ of RACS
arrivals intersecting location $x$:
$N^{x}([a,b])=\int_{\mathbb{R}^{d}\times[a,b]}1_{C_{v}\cap\\{x\\}\neq\emptyset}\Phi(dv),$
and $\sigma_{u}^{x}$ (resp. $C^{x}_{u}$) the canonical height (resp. RACS) mark
process of $N^{x}$. That is, if the point process $N^{x}$ has points
$T^{x}_{i}$, and if one denotes by $(\sigma_{i}^{x},C^{x}_{i})$ the mark of
point $T^{x}_{i}$, then $\sigma_{u}^{x}$ (resp. $C^{x}_{u}$) is equal to
$\sigma_{i}^{x}$ (resp. $C^{x}_{i}$) on $[T^{x}_{i},T^{x}_{i+1})$.
These equations lead to some measurability questions. Below, we will assume
that the RACS are such that the last supremum actually bears on a subset of
$\mathbb{Q}^{d}$, where $\mathbb{Q}$ denotes the set of rational numbers, so
that these questions do not occur.
Of course, in order to specify the dynamics, one also needs some initial
condition, namely some initial field $H^{x}_{0}$, with
$H^{x}_{0}\in\mathbb{R}$ for all $x\in\mathbb{R}^{d}$.
If one denotes by $\tau^{x}(t)$ the last epoch of $N^{x}$ in $(-\infty,t)$,
then this equation can be rewritten as the following recursion:
$H_{t}^{x}=H_{0}^{x}+\int_{[0,\tau^{x}(t))}\left(\sigma_{v}^{x}+\sup_{y\in C_{v}^{x}}H^{y}_{v}-H_{v}^{x}\right)N^{x}(dv)+\sigma_{\tau^{x}(t)}^{x}+\sup_{y\in C^{x}_{\tau^{x}(t)}}H_{\tau^{x}(t)}^{y}-H_{\tau^{x}(t)}^{x}~{},$
that is
$H_{t}^{x}=\left(\sigma_{\tau^{x}(t)}^{x}+\sup_{y\in
C^{x}_{\tau^{x}(t)}}H_{\tau^{x}(t)}^{y}\right)1_{\tau^{x}(t)\geq
0}+H_{0}^{x}1_{\tau^{x}(t)<0}.$ (2)
These are the forward equations. We will also use the backward equations,
which give the heights at time $0$ for an arrival point process which is the
restriction of the Poisson hail to the interval $[-t,0]$ for $t>0$. Let
$\mathbb{H}^{x}_{t}$ denote the height at location $x$ and time $0$ for this
point process. Assuming that the initial condition is 0, we have
$\mathbb{H}_{t}^{x}=\left(\sigma_{\tau^{x}_{-}(t)}^{x}+\sup_{y\in
C^{x}_{\tau^{x}_{-}(t)}}\mathbb{H}_{t+\tau^{x}_{-}(t)}^{y}\circ\theta_{\tau^{x}_{-}(t)}\right)1_{\tau^{x}_{-}(t)\geq-t},$
(3)
with $\tau^{x}_{-}(t)$ the last arrival of the point process $N^{x}$ in the
interval $[-t,0]$, $t>0$, and with $\\{\theta_{u}\\}$ the time shift on the
point processes [1].
###### Remark 1
Here are a few important remarks on these Poisson hail equations:
* •
The last pathwise equations hold for all point processes and all RACS/heights
(although one has to specify how to handle ties when RACS with non-empty
intersection arrive at the same time; we postpone the discussion of this
matter to Section 5).
* •
These equations can be extended to the case where customers have a more
general structure than the product of a RACS of $\mathbb{R}^{d}$ and an
interval of the form $[0,\sigma]$. We will call a profile any function
$s(y,x):\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\cup\\{-\infty\\}$,
where $s(y,x)$ gives the height at $x$ relative to a point $y$; we will say
that point $x$ is constrained by point $y$ in the profile if
$s(y,x)\neq-\infty$. The equations for the case where random profiles (rather
than product form RACS) arrive are
$H_{t}^{x}=\left(\sup_{y\in\mathbb{R}^{d}}\left(H_{\tau^{x}(t)}^{y}+s_{\tau^{x}(t)}(y,x)\right)\right)1_{\tau^{x}(t)\geq
0}+H_{0}^{x}1_{\tau^{x}(t)<0},$ (4)
where $\tau^{x}(t)$ is the last date of arrival of $N^{x}$ before time $t$,
with $N^{x}$ the point process of arrivals of profiles having a point which
constrains $x$. We assume here that this point process has a finite intensity.
The case of product form RACS considered above is a special case with
$s_{\tau^{x}(t)}(y,x)=\begin{cases}\sigma_{\tau^{x}(t)}&\mbox{if}\ y\in
C^{x}_{\tau^{x}(t)}\\\ -\infty&\mbox{otherwise}\end{cases},$
with $N^{x}$ the point process of arrivals with RACS intersecting $x$.
Here are now some monotonicity properties of these equations:
1. 1.
The representation (2) shows that if we have two marked point processes
$\\{N^{x}\\}_{x}$ and $\\{\widetilde{N}^{x}\\}_{x}$ such that for all $x$,
$N^{x}\subset\widetilde{N}^{x}$ (in the sense that each point of $N^{x}$ is
also a point of $\widetilde{N}^{x}$), and if the marks of the common points
are unchanged, then $H_{t}^{x}\leq\widetilde{H}_{t}^{x}$ for all $t$ and $x$
whenever $H_{0}^{x}\leq\widetilde{H}_{0}^{x}$ for all $x$.
2. 2.
Similarly, if we have two marked point processes $\\{N^{x}\\}_{x}$ and
$\\{\widetilde{N}^{x}\\}_{x}$ such that for all $x$,
$N^{x}\leq\widetilde{N}^{x}$ (in the sense that for all $n$, the $n$-th point
of $N^{x}$ is later than the $n$-th point of $\widetilde{N}^{x}$), and the
marks are unchanged, then $H_{t}^{x}\leq\widetilde{H}_{t}^{x}$ for all $t$ and
$x$ whenever $H_{0}^{x}\leq\widetilde{H}_{0}^{x}$ for all $x$.
3. 3.
Finally, if the marks of a point process are changed in such a way that
$C\subset\widetilde{C}$ and $\sigma\leq\widetilde{\sigma}$, then
$H_{t}^{x}\leq\widetilde{H}_{t}^{x}$ for all $t$ and $x$ whenever
$H_{0}^{x}\leq\widetilde{H}_{0}^{x}$ for all $x$.
These monotonicity properties hold for the backward construction as well.
They are also easily extended to profiles. For instance, for the last
monotonicity property, if profiles are changed in such a way that
$s(y,x)\leq\widetilde{s}(y,x),\quad\forall x,y,$
then $H_{t}^{x}\leq\widetilde{H}_{t}^{x}$ for all $t$ and $x$ whenever
$H_{0}^{x}\leq\widetilde{H}_{0}^{x}$ for all $x$.
Below, we use these monotonicity properties to get upper-bounds on the
$H^{x}_{t}$ and $\mathbb{H}^{x}_{t}$ variables.
### 3.1 Discretization of Space
Consider the lattice $\mathbb{Z}^{d}$, where $\mathbb{Z}$ denotes the set of
integers. To each point in $x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}$, we
associate the point $z(x)=(z_{1}(x),\ldots,z_{d}(x))\in\mathbb{Z}^{d}$ with
coordinates $z_{i}(x)=\lfloor x_{i}\rfloor$ where $\lfloor\cdot\rfloor$
denotes the integer-part. Then, with the RACS $A$ centered at point
$x\in\mathbb{R}^{d}$ and having diameter $\xi$, we associate an auxiliary RACS
$\breve{A}$ centered at point $z(x)$ and being the $d$-dimensional cube of
side $2\lfloor\xi\rfloor+2$. Since $A\subseteq\breve{A}$, when replacing the
RACS $A$ by the RACS $\breve{A}$ at each arrival, and keeping all other
features unchanged, we get from the monotonicity property 3 that for all
$t\in\mathbb{R}$ and $x\in\mathbb{R}^{d}$,
$H_{t}^{x}\leq\breve{H}_{t}^{z(x)},$
with $\breve{H}_{t}^{z}$ the solution of the discrete state space recursion
$\breve{H}_{t}^{z}=\left(\sigma_{\breve{\tau}^{z}(t)}^{z}+\max_{y\in\mathbb{Z}^{d}\cap\breve{C}_{\breve{\tau}^{z}(t)}^{z}}\breve{H}_{\breve{\tau}^{z}(t)}^{y}\right)1_{\breve{\tau}^{z}(t)\geq
0}+\breve{H}_{0}^{z}1_{\breve{\tau}^{z}(t)<0},\quad z\in\mathbb{Z}^{d}~{},$
(5)
with $\breve{\tau}^{z}(t)$ the last epoch of the point process
$\breve{N}^{z}([a,b])=\int_{\mathbb{R}^{d}\times[a,b]}1_{\breve{C}_{v}\cap\\{z\\}\neq\emptyset}\Phi(dv)$
in $(-\infty,t)$. The last model will be referred to as Model 2. We will
denote by $R$ the typical half-side of the cubic RACS in this model. These
sides are i.i.d. (w.r.t. RACS), and if $\xi^{d}$ has a light-tailed
distribution, then so does $R^{d}$.
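As a quick illustration of this mapping, here is a small Python sketch (with helper names of our own, not from the paper) that computes $z(x)$ and the lattice points of the covering cube $\breve{A}$ for a RACS of diameter $\xi$ centred at $x$.

```python
import math
from itertools import product

def covering_cube(x, xi):
    """Sketch of the space discretization: a RACS of diameter xi centred at
    x in R^d is replaced by the cube breve(A) of side 2*floor(xi) + 2 centred
    at z(x) = (floor(x_1), ..., floor(x_d)); we return z(x) and the lattice
    points of breve(A), i.e. those within L_inf distance floor(xi) + 1 of z(x)."""
    z = tuple(math.floor(c) for c in x)
    r = math.floor(xi) + 1
    lattice_points = [tuple(zi + d for zi, d in zip(z, delta))
                      for delta in product(range(-r, r + 1), repeat=len(x))]
    return z, lattice_points

z, cube = covering_cube((1.7, -0.3), xi=1.2)
print(z, len(cube))  # (1, -1) and (2*2 + 1)**2 = 25 lattice points
```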
### 3.2 Discretization of Time
The discretization of time is in three steps.
Step 1. Model 3 is defined as follows: all RACS centered on $z$ that arrive to
Model 2 within the time interval $[n-1,n)$ arrive to Model 3 at the time
instant $n-1$. Ties are then resolved according to the initial continuous-time
ordering. In view of monotonicity property 2, Model 3 is an upper bound for
Model 2.
Notice that for each $n$, the arrival process at time $n$ forms a discrete
Poisson field of parameter $\lambda$, i.e. the random number of RACS
$M^{z}_{n}$ arriving at point $z\in\mathbb{Z}^{d}$ at time $n$ has a Poisson
distribution with parameter $\lambda$, and these random variables are i.i.d.
in $z$ and $n$.
Let $(R_{n,i}^{z},\sigma_{n,i}^{z})$, $i=1,2,\ldots,M_{n}^{z}$, be the i.i.d.
radii and heights of the cubic RACS arriving at point $z$ and time $n$. Let
further $M=M_{0}^{0}$, $R_{i}=R_{0,i}^{0}$, and $\sigma_{i}=\sigma_{0,i}^{0}$.
Step 2. Let $R^{z,max}_{n}$ be the maximal half-side of all RACS that arrive
at point $z$ and time $n$ in Model 2, and let $R^{max}=R^{0,max}_{0}$. The random
variables $R^{z,max}_{n}$ are i.i.d. in $z$ and in $n$. We adopt the
convention that $R^{z,max}_{n}=0$ if there is no arrival at this point and
this time. If the random variable $\xi^{d}$ is light-tailed, the distribution
of $R^{d}$ is also light-tailed, and so is that of $\left(R^{max}\right)^{d}$.
Indeed,
$\left(R^{max}\right)^{d}=\left(\max_{i=1}^{M}R_{i}\right)^{d}\leq\sum_{1}^{M}R_{i}^{d},$
so, for $c>0$,
${\mathbf{E}}e^{c\left(R^{max}\right)^{d}}\leq{\mathbf{E}}e^{c\sum_{1}^{M}R_{i}^{d}}=\exp\left(\lambda\left({\mathbf{E}}e^{cR^{d}}-1\right)\right)<\infty,$
given that ${\mathbf{E}}e^{cR^{d}}$ is finite. Let
$\sigma_{n}^{z,sum}=\sum_{i=1}^{M_{n}^{z}}\sigma_{n,i}^{z}\quad\mbox{and}\quad\sigma^{sum}=\sigma_{0}^{0,sum}.$
Then, by similar arguments, $\sigma^{sum}$ has a light-tailed distribution if
the $\sigma_{i}$ do. By monotonicity property 3 (applied to the profile case),
when replacing the heap of RACS arriving at $(z,n)$ in Model 3 by the cube of
half-side $R^{z,max}_{n}$ and of height $\sigma^{z,sum}_{n}$, for all $z$ and
$n$, one again gets an upper bound system which will be referred to as Model
4.
Step 3. The main new feature of the last discrete time Models (3 and 4) is
that the RACS that arrive at some discrete time on different sites may
overlap. Below, we consider the clump made by overlapping RACS as a profile
and use monotonicity property 3 to get a new upper bound model, which will be
referred to as Boolean Model 5.
Consider the following discrete Boolean model, associated with time $n$. We
say that there is a ``ball'' at $z$ at time $n$ if $M_{n}^{z}\geq 1$ and that
there is no ball at $z$ at this time otherwise. By a ball, we mean an
$L_{\infty}$ ball with center $z$ and radius $R^{z,max}_{n}$. By decreasing
$\lambda$, we can make the probability $p={\mathbf{P}}(M_{n}^{z}\geq 1)$ as
small as we wish.
Let $\widehat{C}_{n}^{z}$ be the clump containing point $z$ at time $n$, which
is formally defined as follows: if there is a ball at $(z,n)$, or another ball
of time $n$ covering $z$, this clump is the largest union of connected balls
(these balls are considered as subsets of $\mathbb{Z}^{d}$ here) which
contains this ball at time $n$; otherwise, the clump is empty. For all sets
$A$ of the Euclidean space, let $L(A)$ denote the number of points of the
lattice $\mathbb{Z}^{d}$ contained in $A$. It is known from percolation theory
that, for $p$ sufficiently small, this clump is a.s. finite [6] and, moreover,
$L(\widehat{C}_{n}^{z})$ has a light-tailed distribution (since
$\left(R^{max}\right)^{d}$ is light-tailed) [5]. Recall that the latter means
that ${\mathbf{E}}\exp(cL(\widehat{C}_{0}^{z}))<\infty$, for some $c>0$.
Below, we will denote by $\lambda_{c}$ the critical value of $\lambda$ below
which this clump is a.s. finite and light-tailed.
For each clump $\widehat{C}_{n}^{z}$, let $\widehat{\sigma}_{n}^{z}$ be the
total height of all RACS in this clump:
$\widehat{\sigma}_{n}^{z}=\sum_{x\in{\widehat{C}_{n}^{z}}}\sum_{j=1}^{M_{n}^{x}}\sigma_{n,j}^{x}=\sum_{x\in{\widehat{C}_{n}^{z}}}\sigma_{n}^{x,sum}.$
The convention is again that the last quantity is 0 if
$\widehat{C}_{n}^{z}=\emptyset$. We conclude also that
$\widehat{\sigma}_{n}^{z}$ has a light-tailed distribution.
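For readers who want to experiment with this construction, the following Python sketch builds, for one time slot and on a finite window of $\mathbb{Z}^{2}$ (an assumed truncation), the clump containing a given site together with its total height $\widehat{\sigma}$; the intensity, radius law and height law are illustrative choices of ours, picked small enough to stay in a subcritical regime.

```python
import math
import random
from collections import deque
from itertools import product

def poisson(lam, rng):
    """Poisson sample via Knuth's product method (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def clump_at(z0=(0, 0), n=15, lam=0.01, seed=2):
    """One time slot of the discrete Boolean model on the window [-n, n]^2:
    a site z carries an L_inf ball of radius R^{z,max} if M^z >= 1; the clump
    of z0 is the union of all balls connected (through overlaps) to a ball
    covering z0, and sigma_hat is the total height carried by that clump."""
    rng = random.Random(seed)
    radius, height = {}, {}
    for z in product(range(-n, n + 1), repeat=2):
        m = poisson(lam, rng)                                    # M^z arrivals at z
        if m:
            radius[z] = max(rng.randint(1, 2) for _ in range(m))       # R^{z,max}
            height[z] = sum(rng.uniform(0.5, 1.5) for _ in range(m))   # sigma^{z,sum}
    covering = [z for z in radius
                if max(abs(z[0] - z0[0]), abs(z[1] - z0[1])) <= radius[z]]
    if not covering:
        return set(), 0.0                                        # empty clump, height 0
    comp, queue = {covering[0]}, deque([covering[0]])
    while queue:                                                 # BFS over overlapping balls
        u = queue.popleft()
        for v in radius:
            if v not in comp and max(abs(u[0] - v[0]), abs(u[1] - v[1])) <= radius[u] + radius[v]:
                comp.add(v)
                queue.append(v)
    clump = {x for c in comp
             for x in product(range(c[0] - radius[c], c[0] + radius[c] + 1),
                              range(c[1] - radius[c], c[1] + radius[c] + 1))}
    return clump, sum(height[c] for c in comp)

# With these (deliberately small) parameters the clump at the origin is often
# empty, i.e. of height 0, as expected in the subcritical regime.
print(clump_at())
```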
By using monotonicity property 3 (applied to the profile case), one gets that
Boolean Model 5, which satisfies the equation
$\widehat{H}^{z}_{n}=\widehat{\sigma}_{n}^{z}+\max_{y\in\widehat{C}_{n}^{z}\bigcup\\{z\\}}\widehat{H}^{y}_{n-1},$
(6)
with the initial condition $\widehat{H}_{0}^{z}=0$ a.s., forms an upper bound
to Model 4. Similarly,
$\widehat{\mathbb{H}}^{z}_{n}=\widehat{\sigma}_{-1}^{z}+\max_{y\in\widehat{C}_{-1}^{z}\bigcup\\{z\\}}\widehat{\mathbb{H}}^{y}_{n-1}\circ\theta^{-1},$
(7)
where $\theta$ is the discrete shift on the sequences
$\\{\widehat{\sigma}_{k}^{z},\widehat{C}_{k}^{z}\\}$. By combining all the
bounds constructed so far, we get:
$H^{x}_{t}\leq\widehat{H}^{z(x)}_{\lceil t\rceil}\quad{\rm
and}\quad\mathbb{H}^{x}_{t}\leq\widehat{\mathbb{H}}^{z(x)}_{\lceil
t\rceil}\quad a.s.$ (8)
for all $x$ and $t$.
The drawbacks of (6) are twofold:
* (i)
for all fixed $n$, the random variables $\\{\widehat{C}_{n}^{z}\\}_{z}$ are
dependent. This is a major difficulty which will be taken care of by building
a branching upper-bound in subsections 3.3.1 and 3.3.2 below.
* (ii)
for all given $n$ and $z$, the random variables $\widehat{C}_{n}^{z}$ and
$\widehat{\sigma}_{n}^{z}$ are dependent. We will take care of this by
building a second upper bound model in subsection 3.3.3 below.
Each model will bound (6) from above and will hence provide an upper bound to
the initial continuous time, continuous space Poisson hail model.
### 3.3 The Branching Upper-bounds
#### 3.3.1 The Independent Set Version
Assume that the Boolean Model 5 (considered above) has no infinite clump. Let
again $\widehat{C}_{n}^{x}$ be the clump containing $x\in\mathbb{Z}^{d}$ at
time $n$. For $x\neq y\in\mathbb{Z}^{d}$, either
$\widehat{C}_{n}^{x}=\widehat{C}_{n}^{y}$ or these two (random) sets are
disjoint, which shows that these two sets are not independent. (Here
``independence of sets'' has the probabilistic meaning: two random sets $V_{1}$
and $V_{2}$ are independent if
${\mathbf{P}}(V_{1}=A_{1},V_{2}=A_{2})={\mathbf{P}}(V_{1}=A_{1}){\mathbf{P}}(V_{2}=A_{2})$,
for all $A_{1},A_{2}\subseteq\mathbb{Z}^{d}$.) The aim of the following
construction is to show that a certain independent version of these two sets
is ``larger'' (in a sense to be made precise below) than their dependent
version.
Below, we call $(\Omega,{\cal F},{\mathbf{P}})$ the probability space that
carries the i.i.d. variables
$\\{(\sigma^{z,sum}_{0},R^{z,max}_{0})\\}_{z\in\mathbb{Z}^{d}},$
from which the random variables
$\\{(\widehat{C}_{0}^{z},\widehat{\sigma}_{0}^{z})\\}_{z\in\mathbb{Z}^{d}}$
are built.
###### Lemma 1
Assume that $\lambda<\lambda_{c}$. Let $x\neq y$ be two points in
$\mathbb{Z}^{d}$. There exists an extension of the probability space
$(\Omega,{\cal F},{\mathbf{P}})$, denoted by
$(\underline{\Omega},\underline{\cal F},\underline{\mathbf{P}})$, which
carries another i.i.d. family
$\\{({\underline{\sigma}}^{z,sum}_{0},{\underline{R}}^{z,max}_{0})\\}_{z\in\mathbb{Z}^{d}}$
and a random pair
$(\widehat{\underline{C}}_{0}^{y},\widehat{\underline{\sigma}}_{0}^{y})$ built
from the latter in the same way as the random variables
$\\{(\widehat{C}_{0}^{z},\widehat{\sigma}_{0}^{z})\\}_{z\in\mathbb{Z}^{d}}$
are built from
$\\{(\sigma^{z,sum}_{0},R^{z,max}_{0})\\}_{z\in\mathbb{Z}^{d}}$, and such that
the following properties hold:
1. 1.
The inclusion
$\widehat{C}_{0}^{x}\cup\widehat{C}_{0}^{y}\subseteq\widehat{C}_{0}^{x}\cup{\widehat{\underline{C}}}_{0}^{y},$
holds a.s.
2. 2.
The random pairs $({\widehat{C}}_{0}^{x},{\widehat{\sigma}}_{0}^{x})$ and
$({\widehat{\underline{C}}}_{0}^{y},{\widehat{\underline{\sigma}}}_{0}^{y})$
are independent, i.e.
$\displaystyle\underline{\mathbf{P}}({\widehat{C}}_{0}^{x}=A_{1},{\widehat{\sigma}}_{0}^{x}\in
B_{1},{\widehat{\underline{C}}}_{0}^{y}=A_{2},{\widehat{\underline{\sigma}}}_{0}^{y}\in
B_{2})$ $\displaystyle=$
$\displaystyle\underline{\mathbf{P}}(\widehat{C}_{0}^{x}=A_{1},{\widehat{\sigma}}_{0}^{x}\in
B_{1})\underline{\mathbf{P}}({\widehat{\underline{C}}}_{0}^{y}=A_{2},{\widehat{\underline{\sigma}}}_{0}^{y}\in
B_{2})$ $\displaystyle=$
$\displaystyle{\mathbf{P}}(\widehat{C}_{0}^{x}=A_{1},{\widehat{\sigma}}_{0}^{x}\in
B_{1})\underline{\mathbf{P}}({\widehat{\underline{C}}}_{0}^{y}=A_{2},{\widehat{\underline{\sigma}}}_{0}^{y}\in
B_{2}),$
for all sets $A_{1},B_{1}$ and $A_{2},B_{2}$.
3. 3.
The pairs
$({\widehat{\underline{C}}}_{0}^{y},{\widehat{\underline{\sigma}}}_{0}^{y})$
and $({\widehat{C}}_{0}^{y},{\widehat{\sigma}}_{0}^{y})$ have the same law,
i.e.
$\underline{\mathbf{P}}({\widehat{\underline{C}}}_{0}^{y}=A,{\widehat{\underline{\sigma}}}_{0}^{y}\in
B)={\mathbf{P}}({\widehat{C}}_{0}^{y}=A,{\widehat{\sigma}}_{0}^{y}\in B),$
for all sets $(A,B)$.
##### Proof.
We write for short $\widehat{C}^{x}=\widehat{C}^{x}_{0}$ and
$\widehat{\sigma}^{x}=\widehat{\sigma}^{x}_{0}$. Consider first the case of
balls with a constant integer radius $R=R^{max}$ (the case with random radii
is considered afterwards). Recall that we consider $L_{\infty}$-norm balls in
${\mathbb{R}}^{d}$, i.e. $d$-dimensional cubes with side $2R$, so a ``ball
$B^{x}$ centered at point $x=(x_{1},\ldots,x_{d})$'' is the closed cube
$x+[-R,+R]^{d}$.
We assume that the ball $B^{x}$ exists at time 0 with probability
$p={\mathbf{P}}(M\geq 1)\in(0,1)$ independently of all the others. Let
$E^{x}=B^{x}$ if $B^{x}$ exists at time 0 and $E^{x}=\emptyset$, otherwise,
and let $\alpha^{x}={\bf I}(E^{x}=B^{x})$ be the indicator of the event that
$B^{x}$ exists (we drop the time index to have lighter notation). Then the
family of r.v.’s $\\{\alpha^{x}\\}_{x\in\mathbb{Z}^{d}}$ is i.i.d.
Recall that the clump $\widehat{C}^{x}$, for the input $\\{\alpha^{x}\\}$, is
the maximal connected set of balls that contains $x$. This clump is empty if
and only if $\alpha^{y}=0$, for all $y$ with $d_{\infty}(x,y)\leq R$. Let
$L(\widehat{C}^{x})$ denote the number of lattice points in the clump
$\widehat{C}^{x}$, $0\leq L(\widehat{C}^{x})\leq\infty$. Clearly,
$\\{L(\widehat{C}^{x})\\}_{x\in\mathbb{Z}^{d}}$ forms a stationary (translation-invariant) random field.
For all sets $A\subset{\mathbb{Z}}^{d}$, let
$Int(A)=\\{x\in A:\ B^{x}\subseteq A\\},\quad\mbox{and}\quad
Hit(A)=\\{x\in\mathbb{Z}^{d}:\ B^{x}\cap A\neq\emptyset\\}.$
For $A$ and $x,y\in A$, we say that the event
$\Bigl{\\{}x\ {\Longleftrightarrow\atop{Int(A),\\{\alpha^{u}\\}}}\
y\Bigr{\\}}$
occurs if, for the input $\\{\alpha^{u}\\}$, the random set
$E^{A}=\bigcup_{z\in Int(A)}E^{z}$ is connected and both $x$ and $y$ belong to
$E^{A}$.
Then the following events are equal:
$\Bigl{\\{}\widehat{C}^{x}=A\Bigr{\\}}=\bigcap_{z\in A}\Bigl{\\{}x\
{\Longleftrightarrow\atop{Int(A),\\{\alpha^{u}\\}}}\
z\Bigr{\\}}\bigcap\bigcap_{z\in Hit(A)\setminus Int(A)}\\{\alpha^{z}=0\\}.$
Therefore, the event $\\{\widehat{C}^{x}=A\\}$ belongs to the sigma-algebra
${\cal F}^{\alpha}_{Hit(A)}$ generated by the random variables
$\\{\alpha^{x},x\in Hit(A)\\}$. Let also ${\cal F}^{\alpha,\sigma}_{Hit(A)}$
be the sigma-algebra generated by the random variables
$\\{\alpha^{x},\sigma^{x},x\in Hit(A)\\}$.
Recall the notation
$\sigma^{z,sum}_{0}=\sum_{j=1}^{M_{0}^{z}}\sigma_{0,j}^{z}$. We will write for
short $\sigma^{z}=\sigma_{0}^{z,sum}$. Clearly $\sigma^{z}=0$ if
$\alpha^{z}=0$, and the family of pairs $\\{(\alpha^{z},\sigma^{z})\\}$ is
i.i.d. in $z\in\mathbb{Z}^{d}$.
Let $\\{(\alpha^{z}_{*},\sigma^{z}_{*})\\}$ be another i.i.d. family in
$z\in\mathbb{Z}^{d}$ which is independent of all random variables introduced
earlier and whose elements have the same distribution as
$(\alpha^{0},\sigma^{0})$. Let $(\underline{\Omega},\underline{\cal
F},\underline{\mathbf{P}})$ be the product probability space that carries both
$\\{(\alpha^{z},\sigma^{z})\\}$ and $\\{(\alpha^{z}_{*},\sigma^{z}_{*})\\}$.
Introduce then a third family
$\\{({\underline{\alpha}}^{z},{\underline{\sigma}}^{z})\\}$ defined as
follows: for any set $A$ containing $x$, on the event
$\\{\widehat{C}^{x}=A\\}$ we let
$\displaystyle({\underline{\alpha}}^{z}(A),{\underline{\sigma}}^{z}(A))=\begin{cases}(\alpha^{z}_{*},\sigma^{z}_{*})&\mbox{if}\quad
z\in Hit(A)\\\ (\alpha^{z},\sigma^{z}),&\mbox{otherwise}.\end{cases}$
When there is no ambiguity, we will use the notation
$({\underline{\alpha}}^{z},{\underline{\sigma}}^{z})$ in place of
$({\underline{\alpha}}^{z}(A),{\underline{\sigma}}^{z}(A))$. First, we show
that $\\{({\underline{\alpha}}^{z},{\underline{\sigma}}^{z})\\}$ is an i.i.d.
family. Indeed, for any finite set of distinct points $y_{1},\ldots,y_{k}$,
for any $0-1$-valued sequence $i_{1},\ldots,i_{k}$, and for all measurable
sets $B_{1},\ldots,B_{k}$,
$\displaystyle\underline{\mathbf{P}}(\underline{\alpha}^{y_{j}}=i_{j},\underline{\sigma}^{y_{j}}\in
B_{j},j=1,\ldots,k)$ $\displaystyle=$
$\displaystyle\sum_{A}\underline{\mathbf{P}}(\widehat{C}^{x}=A,\underline{\alpha}^{y_{j}}=i_{j},\underline{\sigma}^{y_{j}}\in
B_{j},j=1,\ldots,k)$
$\displaystyle=\sum_{A}\underline{\mathbf{P}}(\widehat{C}^{x}=A,{\alpha}^{y_{j}}_{*}=i_{j},\sigma^{y_{j}}_{*}\in
B_{j},y_{j}\in Hit(A)\ \mbox{and}\ \alpha^{y_{j}}=i_{j},\sigma^{y_{j}}\in
B_{j},y_{j}\in(Hit(A))^{c})$
$\displaystyle=\sum_{A}\underline{\mathbf{P}}(\widehat{C}^{x}=A)\underline{\mathbf{P}}({\alpha}^{y_{j}}_{*}=i_{j},\sigma^{y_{j}}_{*}\in
B_{j},y_{j}\in
Hit(A))\underline{\mathbf{P}}(\alpha^{y_{j}}=i_{j},\sigma^{y_{j}}\in
B_{j},y_{j}\in(Hit(A))^{c})$
$\displaystyle=\sum_{A}{\mathbf{P}}(\widehat{C}^{x}=A)\prod_{j=1}^{k}{\mathbf{P}}(\alpha^{0}=i_{j},\sigma^{0}\in
B_{j})$
$\displaystyle=\prod_{j=1}^{k}{\mathbf{P}}(\alpha^{0}=i_{j},\sigma^{0}\in
B_{j}).$
Notice that the sum over $A$ runs over finite sets $A$ only, which keeps the
number of terms countable; this is licit thanks to the assumption that the
clumps are a.s. finite.
Let $\widehat{\underline{C}}^{y}$ be the clump of $y$ for
$\\{\underline{\alpha}^{z}\\}$ and let
$\widehat{\underline{\sigma}}^{y}=\sum_{z\in\widehat{\underline{C}}^{y}}{\underline{\sigma}}^{z}$.
We now show that the pairs $({\widehat{C}}^{x},{\widehat{\sigma}}^{x})$ and
$({\widehat{\underline{C}}}^{y},{\widehat{\underline{\sigma}}}^{y})$ are
independent. For all sets $A$, let ${\cal F}^{A}$ be the sigma-algebra
generated by the random variables
$(\alpha^{(A)},\sigma^{(A)})=\\{(\alpha^{u}_{*},\sigma^{u}_{*},u\in
Hit(A);\alpha^{v},\sigma^{v},v\in(Hit(A))^{c}\\},$
and let $\widehat{\underline{C}}^{y}(A)$ be the clump containing $y$ in the
environment $\alpha^{A}$. Let also
$\widehat{\underline{\sigma}}^{y}(A)=\sum_{z\in\widehat{\underline{C}}^{y}}{\underline{\sigma}}^{z}(A)$.
Clearly, $(\alpha^{(A)},\sigma^{(A)})$ is also an i.i.d. family. Then, for all
sets $A_{1},B_{1}$ and $A_{2},B_{2}$,
$\displaystyle\underline{\mathbf{P}}(\widehat{C}^{x}=A_{1},\widehat{\sigma}^{x}\in
B_{1},\widehat{\underline{C}}^{y}=A_{2},\widehat{\underline{\sigma}}^{y}\in
B_{2})$ (9) $\displaystyle=$
$\displaystyle\underline{\mathbf{P}}(\widehat{C}^{x}=A_{1},\widehat{\sigma}^{x}\in
B_{1},\widehat{\underline{C}}^{y}(A_{1})=A_{2},\widehat{\underline{\sigma}}^{y}(A_{1})\in
B_{2})$ $\displaystyle=$
$\displaystyle\underline{\mathbf{P}}(\widehat{C}^{x}=A_{1},\widehat{\sigma}^{x}\in
B_{1})\underline{\mathbf{P}}(\widehat{C}^{y}(A_{1})=A_{2},\widehat{\sigma}^{y}(A_{1})\in
B_{2})$ $\displaystyle=$
$\displaystyle{\mathbf{P}}(\widehat{C}^{x}=A_{1},\widehat{\sigma}^{x}\in
B_{1}){\mathbf{P}}(\widehat{C}^{y}=A_{2},\widehat{\sigma}^{y}\in B_{2}).$
The second equality follows from the fact that the event
$\\{\widehat{C}^{x}=A_{1},\widehat{\sigma}^{x}\in B_{1}\\}$ belongs to the
sigma-algebra ${\cal F}^{\alpha,\sigma}_{Hit(A_{1})}$ whereas the event
$\\{\widehat{\underline{C}}^{y}(A_{1})=A_{2},\widehat{\underline{\sigma}}^{y}(A_{1})\in
B_{2}\\}$ belongs to the sigma-algebra ${\cal F}^{A_{1}}$, which is
independent of the former. The last equality follows from the fact that
$\\{\alpha^{(A_{1})},\sigma^{(A_{1})}\\}$ is an i.i.d. family with the same
law as $\\{\alpha^{x},\sigma^{x}\\}$.
We now prove the first assertion of the lemma. If
$\widehat{C}^{x}=\widehat{C}^{y}$, then the inclusion is obvious. Otherwise,
$\widehat{C}^{x}\bigcap\widehat{C}^{y}=\emptyset$ and if $\widehat{C}^{x}=A$,
the size and the shape of $\widehat{C}^{y}$ depend only on
$\\{\alpha^{u},u\in(Hit(A))^{c}\\}$. Indeed, on these events,
$v\in\widehat{C}^{y}\quad{\rm iff}\quad y\
{{\Longleftrightarrow}\atop{Int(A^{c}),\\{\alpha^{x}\\}}}\ v\ .$
Then the first assertion follows since, first, the latter relation is
determined by $\\{\alpha^{u},u\in Int(A^{c})\\}$ and, second,
$Int(A^{c})=(Hit(A))^{c}$. We may conclude that
$\widehat{\underline{C}}^{y}(A)\supseteq\widehat{C}^{y}$ because some
$\alpha_{*}^{z},z\in Hit(A)\setminus Int(A)$ may take value 1.
Finally, the second assertion of the lemma follows from the construction.
The proof of the deterministic radius case is complete.
Now we turn to the proof in the case of random radii. Recall that we assume
that the radius $R$ of a Model 2 RACS is a positive integer-valued r.v. and
this is a radius in the $L_{\infty}$ norm. For $x\in{\mathbb{Z}}^{d}$ and
$k=1,2,\ldots$, let $B^{x,k}$ be the $L_{\infty}$-norm ball with center $x$
and radius $k$. Recall that $M_{0}^{x,k}$ is the number of RACS that arrive at
time $0$, are centered at $x$ and have radius $k$. Then, in particular,
$R^{x,max}_{0}=\max\\{k:\ M_{0}^{x,k}\geq 1\\}.$
Let $\alpha^{x,k}$ be the indicator of event $\\{M_{0}^{x,k}\geq 1\\}$ and
$E^{x,k}$ a random set,
$E^{x,k}=B^{x,k}\quad\mbox{if}\quad\alpha^{x,k}=1\quad\mbox{and}\quad
E^{x,k}=\emptyset,\quad\mbox{otherwise}.$
Again, the r.v.’s $\alpha^{x,k}$ are mutually independent (now both in $x$ and
in $k$) and also i.i.d. (in $x$).
For each $A\subseteq{\mathbb{Z}}^{d}$, we let $Int_{2}(A)=\\{(x,k):\ x\in
A,k\in{\mathbb{N}},B^{x,k}\subseteq A\\}$ and $Hit_{2}(A)=\\{(x,k):\ x\in
A,k\in{\mathbb{N}},B^{x,k}\bigcap A\neq\emptyset\\}$.
For $x,y\in A$, we say that the event
$\Bigl{\\{}x\ {\Longleftrightarrow\atop{Int_{2}(A),\\{\alpha^{u,l}\\}}}\
y\Bigr{\\}}$
occurs if, for the input $\\{\alpha^{u,l}\\}$, the random set
$E^{A}=\bigcup_{(z,k)\in Int_{2}(A)}E^{z,k}$ is connected and both $x$ and $y$
belong to $E^{A}$.
Then the following events are equal:
$\Bigl{\\{}\widehat{C}^{x}=A\Bigr{\\}}=\bigcap_{z\in A}\Bigl{\\{}x\
{\Longleftrightarrow\atop{Int_{2}(A),\\{\alpha^{u,l}\\}}}\
z\Bigr{\\}}\bigcap\bigcap_{(z,k)\in Hit_{2}(A)\setminus
Int_{2}(A)}\\{\alpha^{z,k}=0\\}.$
Therefore, the event $\\{\widehat{C}^{x}=A\\}$ belongs to the sigma-algebra
${\cal F}^{\alpha}_{Hit_{2}(A)}$ generated by the random variables
$\\{\alpha^{x,k},(x,k)\in Hit_{2}(A)\\}$. For $x\in\mathbb{Z}^{d}$ and for
$k=1,2,\ldots$, we let $\sigma^{x,k}=\sum_{j=1}^{M_{0}^{x,k}}\sigma^{x}_{0,j}$
where the sum of the heights is taken over all RACS that arrive at time $0$,
are centered at $x$ and have radius $k$. Clearly, the random vectors
$(\alpha^{x,k},\sigma^{x,k})$ are independent in all $x$ and $k$ and
identically distributed in $x$, for each fixed $k$.
Let $\\{(\alpha^{x,k}_{*},\sigma^{x,k}_{*})\\}$ be another independent family
of pairs that is independent of all random variables introduced earlier and
is such that, for each $k$ and $x$, the pairs
$(\alpha^{x,k}_{*},\sigma^{x,k}_{*})$ and $(\alpha^{0,k},\sigma^{0,k})$ have a
common distribution. Let $(\underline{\Omega},\underline{\cal
F},\underline{\mathbf{P}})$ be the product probability space that carries both
$\\{(\alpha^{x,k},\sigma^{x,k})\\}$ and
$\\{(\alpha^{x,k}_{*},\sigma^{x,k}_{*})\\}$. Introduce then a third family
$\\{(\underline{\alpha}^{x,k},\underline{\sigma}^{x,k})\\}$ defined as
follows: for any set $A$ containing $x$, on the event
$\\{\widehat{C}^{x}=A\\}$ we let
$\displaystyle(\underline{\alpha}^{z,l},\underline{\sigma}^{z,l})=\begin{cases}(\alpha^{z,l}_{*},\sigma^{z,l}_{*})&\mbox{if}\quad(z,l)\in
Hit_{2}(A)\\\ (\alpha^{z,l},\sigma^{z,l}),&\mbox{otherwise}.\end{cases}$
The rest of the proof is then quite similar to that of the constant radius
case: we introduce again $\widehat{\underline{C}}^{y}$, which is now the clump
of $y$ for $\\{\underline{\alpha}^{z,l}\\}$ with the height
$\widehat{\underline{\sigma}}^{y}=\sum_{k}\sum_{z\in\widehat{\underline{C}}^{y}}{\underline{\sigma}}^{z,k}$;
we then show that the random pairs $(\widehat{C}^{x},\widehat{\sigma}^{x})$
and $(\widehat{\underline{C}}^{y},\widehat{\underline{\sigma}}^{y})$ are
independent and finally establish the first and the second assertions of the
lemma.
$\Box$
We will need the following two remarks on Lemma 1.
###### Remark 2
In the proof of Lemma 1, the roles of the points $x$ and $y$ and of the sets
$\widehat{C}^{x}$ and $\widehat{C}^{y}$ are not symmetrical. It is important
that $\widehat{C}^{x}$ is a clump while from $V=\widehat{C}^{y}$, we only need
the following monotonicity property: the set $V\setminus\widehat{C}^{x}$ is
a.s. bigger in the environment $\\{\underline{\alpha}^{z}\\}$ than in the
environment $\\{\alpha^{z}\\}$. One can note that any finite union of clumps
also satisfies this last property.
###### Remark 3
From the proof of Lemma 1, the following properties hold.
1. 1.
On the event where $\widehat{C}_{0}^{x}$ and $\widehat{C}_{0}^{y}$ are
disjoint, we have
$\widehat{C}_{0}^{y}\subseteq\widehat{\underline{C}}_{0}^{y}$ and
$\sigma^{z,sum}_{0}={\underline{\sigma}}^{z,sum}_{0}$ a.s., for all
$z\in\widehat{C}_{0}^{y}$, so that
$\widehat{\sigma}_{0}^{y}\leq\widehat{\underline{\sigma}}_{0}^{y}$.
2. 2.
On the event where $\widehat{C}_{0}^{x}=\widehat{C}_{0}^{y}$, we have
$\widehat{\sigma}_{0}^{x}=\widehat{\sigma}_{0}^{y}$.
Let us deduce from this that, for all constants $a^{x}\geq a^{y}$, for all
$z\in\widehat{C}_{0}^{x}\cup\widehat{C}_{0}^{y}$, there exists a random
variable $r(z)\in\\{x,y\\}$ such that $z\in\widehat{\underline{C}}_{0}^{r(z)}$
(with the convention $\widehat{\underline{C}}_{0}^{x}=\widehat{C}_{0}^{x}$ and
$\widehat{\underline{\sigma}}_{0}^{x}=\widehat{\sigma}_{0}^{x}$) a.s. and
$\max_{u\in\\{x,y\\}\ :\
z\in\widehat{C}^{u}_{0}}\left(a^{u}+\widehat{\sigma}_{0}^{u}\right)\leq
a^{r(z)}+\widehat{\underline{\sigma}}_{0}^{r(z)}\quad\mbox{a.s.}$
In case 2 and case 1 with $z\in\widehat{C}_{0}^{x}$, we take $r(z)=x$ and use
the fact that $a^{x}\geq a^{y}$. In case 1 with $z\in\widehat{C}_{0}^{y}$, we
take $r(z)=y$ and use the fact that
$\widehat{\sigma}_{0}^{y}\leq\widehat{\underline{\sigma}}_{0}^{y}$.
As a direct corollary of the last property, the inequality
$\max(a^{x}+{\widehat{\sigma}}_{0}^{x},a^{y}+{\widehat{\sigma}}_{0}^{y})\leq\max(a^{x}+{\widehat{\sigma}}_{0}^{x},a^{y}+{\widehat{\underline{\sigma}}}_{0}^{y})$
holds a.s. Here
$\widehat{\underline{\sigma}}_{0}^{y}=\sum_{z\in\widehat{\underline{C}}_{0}^{y}}\underline{\sigma}_{0}^{z,sum}$.
We are now in a position to formulate a more general result:
###### Lemma 2
Assume again that $\lambda<\lambda_{c}$. Let $S$ be a set of $\mathbb{Z}^{d}$
of cardinality $p\geq 2$, say $S=\\{x_{1},\ldots,x_{p}\\}$. There exists an
extension of the initial probability space and random pairs
$(\widehat{\underline{C}}_{0}^{x_{i}},\widehat{\underline{\sigma}}_{0}^{x_{i}})$,
$i=2,\ldots,p$ defined on this extension which are such that:
1. 1.
The inclusion
$\bigcup_{j=1}^{p}\widehat{C}_{0}^{x_{j}}\subseteq\bigcup_{j=1}^{p}\widehat{\underline{C}}^{x_{j}}_{0}\quad
a.s.$ (10)
holds with $\widehat{C}_{0}^{x_{1}}=\widehat{\underline{C}}^{x_{1}}_{0}$.
2. 2.
For all real valued constants $a^{x_{1}},a^{x_{2}},\ldots,a^{x_{p}}$ such that
$a^{x_{1}}=\max_{1\leq i\leq p}a^{x_{i}}$, for all
$z\in\bigcup_{j=1}^{p}\widehat{C}_{0}^{x_{j}}$, there exists a random variable
$r(z)\in\\{x_{1},\ldots,x_{p}\\}$ such that
$z\in\widehat{\underline{C}}_{0}^{r(z)}$ a.s. and
$\max_{j\in\\{1,\ldots,p\\}\ :\
z\in\widehat{C}^{x_{j}}_{0}}\left(a^{x_{j}}+\widehat{\sigma}_{0}^{x_{j}}\right)\leq
a^{r(z)}+\widehat{\underline{\sigma}}_{0}^{r(z)}\quad\mbox{a.s.}$ (11)
In particular, the inequality
$\max_{1\leq j\leq
p}\left(a^{x_{j}}+\widehat{\sigma}^{x_{j}}_{0}\right)\leq\max_{1\leq j\leq
p}\left(a^{x_{j}}+\widehat{\underline{\sigma}}^{x_{j}}_{0}\right)$ (12)
holds a.s. with
$\widehat{\sigma}_{0}^{x_{1}}=\widehat{\underline{\sigma}}^{x_{1}}_{0}$.
3. 3.
The pairs
$(\widehat{C}_{0}^{x_{1}},\widehat{\sigma}^{x_{1}}_{0}),(\widehat{\underline{C}}^{x_{2}}_{0},\widehat{\underline{\sigma}}_{0}^{x_{2}}),\ldots,(\widehat{\underline{C}}^{x_{p}}_{0},\widehat{\underline{\sigma}}_{0}^{x_{p}})$
are mutually independent.
4. 4.
The pairs $(\widehat{C}_{0}^{x_{i}},\widehat{\sigma}^{x_{i}}_{0})$ and
$(\widehat{\underline{C}}_{0}^{x_{i}},\widehat{\underline{\sigma}}^{x_{i}}_{0})$,
have the same law, for each fixed $i=2,\ldots,p$.
##### Proof.
We proceed by induction on $p$. Assume the result holds for any set with $p$
points (the case $p=2$ being covered by Lemma 1 and Remark 3). Then consider a
set $S$ of cardinality $p+1$ and number its points
arbitrarily, $S=\\{x_{1},\ldots,x_{p+1}\\}$. For $A$ fixed, consider the event
$\\{\widehat{C}^{x_{1}}_{0}=A\\}$. On this event, define the same family
$(\underline{\alpha}^{z,l},\underline{\sigma}^{z,l})$ as in the previous proof
and consider the $p$ clumps
$\underline{D}^{x_{2}},\ldots,\underline{D}^{x_{p+1}}$ with their heights, say
$\underline{s}^{x_{2}},\ldots,\underline{s}^{x_{p+1}}$ for this family. By the
same reasons as in the proof of Lemma 1,
$(\widehat{C}^{x_{1}}_{0},\sigma^{x_{1}}_{0})$ is independent of
$(\underline{D}^{x_{2}},\underline{s}^{x_{2}}),\ldots,(\underline{D}^{x_{p+1}},\underline{s}^{x_{p+1}})$.
By Remark 2,
$\bigcup_{j=1}^{p+1}\widehat{C}_{0}^{x_{j}}\subseteq\widehat{C}_{0}^{x_{1}}\bigcup\bigcup_{j=2}^{p+1}\underline{D}^{x_{j}}\quad{a.s.}$
By the induction step,
$\underline{D}^{x_{2}}\cup\ldots\cup\underline{D}^{x_{p+1}}\subseteq_{a.s.}\widehat{\underline{C}}^{x_{2}}_{0}\cup\ldots\cup\widehat{\underline{C}}^{x_{p+1}}_{0},$
with
$\widehat{\underline{C}}^{x_{2}}_{0},\ldots,\widehat{\underline{C}}^{x_{p+1}}_{0}$
defined as in the lemma’s statement and then the first, third and fourth
assertions follow.
We now prove the second assertion, again by induction on $p$. If $p=2$, this
is Remark 3. For $p>2$, we define $L_{1}=\\{p+1\geq j\geq 1\
:\widehat{C}_{0}^{x_{j}}=\widehat{C}_{0}^{x_{1}}\\}$ and we consider two
cases:
1. 1.
$z\in\widehat{C}^{x_{1}}_{0}$. In this case let
$\overline{L}_{1}=\\{1,\ldots,p+1\\}\setminus L_{1}$. Since
$z\notin\widehat{C}_{0}^{x_{j}}$ for $j\in\overline{L}_{1}$ and since
$\widehat{\sigma}_{0}^{x_{j}}=\widehat{\sigma}_{0}^{x_{1}}$ for all $j\in
L_{1}$, we get that (11) holds with $r(z)=x_{1}$ when using the fact that
$a^{x_{1}}=\max_{1\leq i\leq p+1}a^{x_{i}}$.
2. 2.
$z\notin\widehat{C}^{x_{1}}_{0}$. In this case let
$\overline{L}_{1}^{z}=\\{1\leq j\leq p+1\ :\ j\notin
L_{1},z\in\widehat{C}^{x_{j}}_{0}\\}$. We can assume w.l.o.g. that this set is
non-empty. Then for all $j\in\overline{L}_{1}^{z}$, we have
$\underline{s}^{x_{j}}\geq\widehat{\sigma}^{x_{j}}$, by Lemma 1 and Remark 2.
So
$\max_{j\in\overline{L}_{1}^{z}}\left(a^{x_{j}}+\widehat{\sigma}_{0}^{x_{j}}\right)\leq\max_{j\in\overline{L}_{1}^{z}}\left(a^{x_{j}}+\underline{s}^{x_{j}}\right)\quad\mbox{a.s.}$
Now, since the cardinality of $\overline{L}_{1}^{z}$ is less than or equal to
$p$, we can use the induction assumption, which shows that when choosing
$i_{1}\in\overline{L}_{1}^{z}$ such that
$a^{x_{i_{1}}}=\max_{i\in\overline{L}_{1}^{z}}a^{x_{i}}$, we have
$\max_{j\in\overline{L}_{1}^{z}}\left(a^{x_{j}}+\underline{s}^{x_{j}}\right)\leq
a^{x_{r(z)}}+\widehat{\underline{\sigma}}_{0}^{x_{r(z)}},$
with $r(z)\in\overline{L}_{1}^{z}$ and with the random variables
$\\{\widehat{\underline{\sigma}}_{0}^{x_{j}}\\}$ defined as in the lemma’s
statement, but for $\widehat{\underline{\sigma}}_{0}^{x_{i_{1}}}$ which we
take equal to $\underline{s}^{x_{i_{1}}}$. The proof is concluded in this case
too when using the fact that the random variable $\underline{s}^{x_{i_{1}}}$
is mutually independent of the random variables
$(\\{\widehat{\underline{\sigma}}_{0}^{x_{j}}\\},\widehat{\sigma}_{0}^{x_{1}})$
and it has the same law as $\widehat{\underline{\sigma}}^{x_{i_{1}}}$.
$\Box$
#### 3.3.2 Comparison with a Branching Process
##### Paths and Heights in Boolean Model 5
Below, we focus on the backward construction associated with Boolean Model 5,
for which we will need more notation.
Let $\mathbb{D}_{n}^{x}$ denote the set of descendants of level $n$ of
$x\in\mathbb{Z}^{d}$ in this backward process, defined as follows:
$\displaystyle\mathbb{D}_{1}^{x}$ $\displaystyle=$
$\displaystyle\widehat{C}_{0}^{x}\cup\\{x\\}$
$\displaystyle\mathbb{D}_{n+1}^{x}$ $\displaystyle=$
$\displaystyle\bigcup\limits_{y\in\mathbb{D}_{n}^{x}}\widehat{C}_{-n}^{y}\cup\\{y\\},\quad
n\geq 1.$
By construction, $\mathbb{D}_{n}^{x}$ is a non-empty set for all $x$ and $n$.
Let $d_{n}^{x}$ denote the cardinality of $\mathbb{D}_{n}^{x}$.
Let $\Pi_{n}^{x}$ denote the set of paths starting from
$x=x_{0}\in\mathbb{Z}^{d}$ and of length $n$ in this backward process:
$x_{0},x_{1},\ldots,x_{n}$ is such a path if $x_{0},x_{1},\ldots,x_{n-1}$ is a
path of length $n-1$ and
$x_{n}\in\widehat{C}_{-n+1}^{x_{n-1}}\cup\\{x_{n-1}\\}$. Let $\pi_{n}^{x}$
denote the cardinality of $\Pi_{n}^{x}$. Clearly, $d_{n}^{x}\leq\pi_{n}^{x}$
a.s., for all $n$ and $x$.
Further, the height of a path $l_{n}=(x_{0},\ldots,x_{n})$ is the sum of the
heights of all clumps along the path:
$\sum_{i=0}^{n-1}{\widehat{\sigma}}_{-i}^{x_{i}}.$
In particular, if the paths $l_{n}$ and $l_{n}^{\prime}$ differ only by their
last points $x_{n}\in\widehat{C}_{-n+1}^{x_{n-1}}$ and
$x^{\prime}_{n}\in\widehat{C}_{-n+1}^{x_{n-1}}$, then their heights
coincide.
For $z\in\mathbb{Z}^{d}$, let $\widehat{h}_{n}^{x,z}$ be the maximal height of
all paths of length $n$ that start from $x$ and end at $z$, where the maximum
over the empty set is zero.
Let $\widehat{\mathbb{H}}(n)=\widehat{\mathbb{H}}_{n}^{x}$, $n\geq 0$, denote the maximal height of all
paths of length $n$ that start from $x$. Then
$\widehat{\mathbb{H}}(n)=\max_{z}\widehat{h}_{n}^{x,z}$.
##### Paths and Heights in a Branching Process
Now we introduce a branching process (also in the backward time) that starts
from point $x=x_{0}$ at generation 0. Let $(V_{n,i}^{z},s_{n,i}^{z})$,
$z\in\mathbb{Z}^{d}$, $n\geq 0$, $i\geq 1$ be a family of mutually independent
random pairs such that, for each $z$, the pair $(V_{n,i}^{z},s_{n,i}^{z})$ has
the same distribution as the pair
$(\widehat{C}_{0}^{z}\cup\\{z\\},\widehat{\sigma}_{0}^{z})$, for all $n$ and
$i$.
In the branching process defined below, we do not distinguish between points
and paths.
In generation 0, the branching process has one point:
$\widetilde{\Pi}_{0}^{x_{0}}=\\{(x_{0})\\}$. In generation 1, the points of the
branching process are $\widetilde{\Pi}_{1}^{x_{0}}=\\{(x_{0},x_{1}),\ x_{1}\in
V_{0,1}^{x_{0}}\\}$. Here the cardinality of this set is the number of points
in $V_{0,1}^{x_{0}}$ and all end coordinates $x_{1}$ differ (but this is not
the case for $n\geq 2$, in general).
In generation 2, the points of the branching process are
$\widetilde{\Pi}_{2}^{x_{0}}=\\{(x_{0},x_{1},x_{2}),\
(x_{0},x_{1})\in\widetilde{\Pi}_{1}^{x_{0}},x_{2}\in V_{1,1}^{x_{1}}\\}.$
Here a last coordinate $x_{2}$ may appear several times, so we introduce a
multiplicity function $k_{2}$: for $z\in\mathbb{Z}^{d}$, $k_{2}^{z}$ is the
number of $(x_{0},x_{1},x_{2})\in\widetilde{\Pi}_{2}^{x_{0}}$ such that
$x_{2}=z$.
Assume the set of all points in generation $n$ is
$\widetilde{\Pi}_{n}^{x_{0}}=\\{(x_{0},x_{1},\ldots,x_{n})\\}$ and $k_{n}^{z}$
is the multiplicity function (for the last coordinate). For each $z$ with
$k_{n}^{z}>0$, number arbitrarily all points with last coordinate $z$ from 1
to $k_{n}^{z}$ and let $q(x_{0},x_{1},\ldots,x_{n})$ denote the number given
to point $(x_{0},\ldots,x_{n})$ with $x_{n}=z$. Then the set of points in
generation $n+1$ is
$\widetilde{\Pi}_{n+1}^{x_{0}}=\\{(x_{0},\ldots,x_{n},x_{n+1})\ :\
(x_{0},\ldots,x_{n})\in\widetilde{\Pi}_{n}^{x_{0}},\ x_{n+1}\in
V_{n,q(x_{0},\ldots,x_{n})}^{x_{n}}\\}.$
Finally the height of point
$(x_{0},\ldots,x_{n})\in\widetilde{\Pi}_{n}^{x_{0}}$ is defined as
$\widetilde{h}(x_{0},\ldots,x_{n})=\sum_{i=0}^{n-1}s_{i,q_{i}}^{x_{i}}~{},$
where $q_{i}=q(x_{0},\ldots,x_{i})$.
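To visualize this construction, here is a small Python sketch that grows the generations $\widetilde{\Pi}_{n}^{x_{0}}$ and returns the maximal path height; the offspring law used below is a toy, assumed stand-in for the true clump distribution, so the sketch only illustrates the bookkeeping of paths and heights, not the exact law of Boolean Model 5.

```python
import random
from collections import namedtuple

Node = namedtuple("Node", "path height")

def sample_offspring(z, rng):
    """Placeholder for an i.i.d. copy of (C^z_0 ∪ {z}, sigma^z_0): with small
    probability the clump is the 3x3 L_inf ball around z, otherwise it is
    empty (an assumed toy distribution)."""
    if rng.random() < 0.2:
        clump = {(z[0] + dx, z[1] + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
        sigma = rng.expovariate(1.0)
    else:
        clump, sigma = set(), 0.0
    return clump | {z}, sigma

def max_path_height(x0=(0, 0), generations=6, seed=4):
    """Sketch of the backward branching process of Section 3.3.2: each node of
    generation n with last coordinate z receives a fresh pair (V, s); its
    children are the paths extended by the points of V, each inheriting the
    parent's height plus s."""
    rng = random.Random(seed)
    gen = [Node((x0,), 0.0)]
    for _ in range(generations):
        nxt = []
        for node in gen:
            V, s = sample_offspring(node.path[-1], rng)
            nxt.extend(Node(node.path + (y,), node.height + s) for y in V)
        gen = nxt
    return max(node.height for node in gen), len(gen)

print(max_path_height())   # (maximal path height, number of generation-6 paths)
```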
##### Coupling of the two Processes
###### Lemma 3
Let $x_{0}$ be fixed. Assume that $\lambda<\lambda_{c}$. There exists a
coupling of Boolean Model 5 and of the branching process defined above such
that, for all $n$, for all points $z$ in the set $\mathbb{D}_{n}^{x_{0}}$,
there exists a point $(x_{0},\ldots,x_{n})\in\widetilde{\Pi}_{n}^{x_{0}}$ such
that $x_{n}=z$ and
$\widehat{h}_{n}^{x_{0},z}\leq\widetilde{h}(x_{0},\ldots,x_{n})$ a.s.
##### Proof
We construct the coupling and prove the properties by induction. For $n=0,1$,
the process of Boolean Model 5 and the branching process coincide. Assume that
the statement of the lemma holds up to generation $n$. For
$z\in\mathbb{D}_{n}^{x_{0}}$, let $a^{z}=\widehat{h}_{n}^{x_{0},z}$.
Now, conditionally on the values of both processes up to level $n$ inclusive,
we perform the following coupling at level $n+1$: we choose $z_{*}$ with the
maximal $a^{z}$ and we apply Lemma 2 with $S=\mathbb{D}_{n}^{x_{0}}$, with
$z_{*}$ in place of $x_{1}$, and with $\\{\widehat{C}_{-n}^{z}\\}_{z}$ (resp.
$\\{\widehat{\underline{C}}_{-n}^{z}\\}_{z}$) in place of
$\\{\widehat{C}_{0}^{z}\\}_{z}$ (resp.
$\\{\widehat{\underline{C}}_{0}^{z}\\}_{z}$); we then take
* •
$V_{n,1}^{z_{*}}=\widehat{C}_{-n}^{z_{*}}\cup\\{{z_{*}}\\}$;
* •
$V_{n,1}^{z}=\widehat{\underline{C}}_{-n}^{z}\cup\\{z\\}$ for all
$z\in\mathbb{D}_{n}^{x_{0}}$, $z\neq z_{*}$;
* •
$s_{n,1}^{z_{*}}=\widehat{\sigma}_{-n}^{z_{*}}$;
* •
$s_{n,1}^{z}=\widehat{\underline{\sigma}}_{-n}^{z}$ for all
$z\in\mathbb{D}_{n}^{x_{0}}$, $z\neq z_{*}$.
By induction assumption, for all $z\in\mathbb{D}_{n}^{x_{0}}$, there exists a
$(x_{0},\ldots,x_{n})\in\widetilde{\Pi}_{n}^{x_{0}}$ such that $x_{n}=z$. This
and Assertion 1 in Lemma 2 show that if $u\in\mathbb{D}_{n+1}^{x_{0}}$, then
$(x_{0},\ldots,x_{n},u)\in\widetilde{\Pi}_{n+1}^{x_{0}}$, which proves the
first property.
By a direct dynamic programming argument, for all $u\in\mathbb{D}_{n+1}^{x_{0}}$,
$\widehat{h}_{n+1}^{x_{0},u}=\max_{z\in\mathbb{D}_{n}^{x_{0}},u\in\widehat{C}_{-n}^{z}}\left(\widehat{h}_{n}^{x_{0},z}+\widehat{\sigma}_{-n}^{z}\right).$
We get from Assertion 2 in Lemma 2 applied to the set
$\\{x_{1},\ldots,x_{p}\\}=\\{z\in\mathbb{D}_{n}^{x_{0}},u\in\widehat{C}_{-n}^{z}\\}$
that
$\widehat{h}_{n+1}^{x_{0},u}\leq\max_{z\in\mathbb{D}_{n}^{x_{0}},u\in\widehat{C}_{-n}^{z}}\left(\widehat{h}_{n}^{x_{0},z}+\widehat{\underline{\sigma}}_{-n}^{z}\right).$
By the induction assumption, for all $z$ as above,
$\widehat{h}_{n}^{x_{0},z}\leq\widetilde{h}(x_{0},\ldots,x_{n})$ a.s. for some
$(x_{0},\ldots,x_{n})\in\widetilde{\Pi}_{n}^{x_{0}}$ with $x_{n}=z$. Hence for
all $u$ as above, there exists a path
$(x_{0},\ldots,x_{n},x_{n+1})\in\widetilde{\Pi}_{n+1}^{x_{0}}$ with
$x_{n+1}=u$ and such that
$\widehat{h}_{n+1}^{x_{0},u}\leq\widetilde{h}(x_{0},\ldots,x_{n},x_{n+1})$. $\Box$
#### 3.3.3 Independent Heights
Below, we assume that the light tail assumptions on $\xi^{d}$ and $\sigma$ are
satisfied (see Section 3.1).
In the last branching process, the pairs $(V_{n,i}^{z},s_{n,i}^{z})$ are
mutually independent in $n,i$ and $z$. However, for all given $n,i$ and $z$,
the random variables $V_{n,i}^{z}$ and $s_{n,i}^{z}$ are dependent. It follows
from Proposition 1 in the appendix that one can find random variables
$(W_{n,i}^{z},t_{n,i}^{z})$ such that
* •
For all $n,i$ and $z$, $V_{n,i}^{z}\subset W_{n,i}^{z}$ a.s.
* •
The random sets $W_{n,i}^{z}$ are of the form $z+w_{n,i}^{z}$, where the
sequence $\\{w_{n,i}^{z}\\}$ is i.i.d. in $n,i$ and $z$.
* •
The random variable ${\rm card}(W_{0,1}^{0})$ has exponential moments.
* •
For all $n,i$ and $z$, $s_{n,i}^{z}\leq t_{n,i}^{z}$ a.s.
* •
The random variable $t_{0,1}^{0}$ has exponential moments.
* •
The pairs $(W_{n,i}^{z},t_{n,i}^{z})$ are mutually independent in $n,i$ and
$z$.
So the branching process built from the $\\{(W_{n,i}^{z},t_{n,i}^{z})\\}$
variables is an upper bound to the one built from the
$\\{(V_{n,i}^{z},s_{n,i}^{z})\\}$ variables.
### 3.4 Upper Bound on the Growth Rate
The next theorem, which pertains to branching process theory, is not new (see
e.g. [4]). We nevertheless give a proof for self-containedness. It features a
branching process with heights (in the literature, one also speaks of ages),
starting from a single individual, as the one defined in Section 3.3.3. Let
$v$ be the typical progeny size, which we assume to be light-tailed. Let $s$
be the typical height of a node, which we also assume to be light-tailed.
###### Theorem 2
Assume that $\lambda<\lambda_{c}$. For $n\geq 0$, let $h(n)$ be the maximal
height of all descendants of generation $n$ in the branching process defined
above. There exists a finite and positive constant $c$ such that
$\limsup_{n\to\infty}\frac{\widehat{\mathbb{H}}(n)}{n}\leq c\quad a.s.$ (13)
Proof. By the couplings of Sections 3.3.2 and 3.3.3, $\widehat{\mathbb{H}}(n)\leq h(n)$ a.s., so it is enough to bound $h(n)/n$. Let $(v_{i},s_{i})$ be i.i.d. copies of $(v,s)$. Take any positive
$a$. Let $D(a)$ be the event
$D(a)=\bigcup\limits_{n\geq 1}\\{{d}_{n}>a^{n}\\},$
with $d_{n}$ the number of individuals of generation $n$ in the branching
process. For all $c>0$ and all positive integers $k$, let $W_{c,k}$ be the
event $\bigl{\\{}\frac{h(k)}{k}\leq c\bigr{\\}}$. Then
$W_{c,k}\subseteq\bigl{(}W_{c,k}\cap\overline{D}(a)\bigr{)}\bigcup D(a),$
where $\overline{D}(a)$ is the complement of $D(a)$. From Chernoff’s
inequality, we have, for $\gamma\geq 0$
$\displaystyle{\mathbf{P}}(D(a))$ $\displaystyle=$
$\displaystyle{\mathbf{P}}\left(\bigcup_{n\geq
0}\\{{d}_{n+1}>a^{n+1},{d}_{i}\leq a^{i},\forall i\leq n\\}\right)$
$\displaystyle\leq$ $\displaystyle\sum_{n\geq
1}{\mathbf{P}}\left(\sum_{j=1}^{a^{n}}v_{j}>a^{n+1}\right)$
$\displaystyle\leq$ $\displaystyle\sum_{n\geq 1}\left({\mathbf{E}}\exp(\gamma
v)\right)^{a^{n}}\cdot e^{-\gamma a^{n+1}}$ $\displaystyle\leq$
$\displaystyle\sum_{n\geq 1}\left(\varphi(\gamma)e^{-\gamma
a}\right)^{a^{n}},$
where $\varphi(\gamma)={\mathbf{E}}\exp(\gamma v)$. First, choose $\gamma>0$
such that $\varphi(\gamma)<\infty$. Then, for any integer $m=1,2,\ldots$,
choose $a_{m}\geq\max({\mathbf{E}}v,2)$ such that
$q_{m}=\varphi(\gamma)e^{-\gamma a_{m}}<\frac{1}{2^{m}}.$
So ${\mathbf{P}}(D(a_{m}))\leq 2^{-m}\to 0$ as $m\to\infty$.
For any $m$ and any $c$,
$\Bigl{\\{}\limsup_{n\to\infty}\frac{{h}(n)}{n}>c\Bigr{\\}}\subseteq
D(a_{m})\bigcup\Bigl{(}\bigl{\\{}\limsup_{n\to\infty}\frac{{h}(n)}{n}>c\bigr{\\}}\cap\overline{D}(a_{m})\Bigr{)}$
and
${\mathbf{P}}\left(\bigl{\\{}\limsup_{n\to\infty}\frac{{h}(n)}{n}>c\bigr{\\}}\cap\overline{D}(a_{m})\right)\leq\sum_{n}P(n,c,m)~{},$
where
$P(n,c,m)={\mathbf{P}}\left(\bigl{\\{}\frac{{h}(n)}{n}>c\bigr{\\}}\cap\overline{D}(a_{m})\right)$.
We deduce from the union bound that, for all $m$,
$\displaystyle P(n,c,m)\leq
a_{m}^{n}{\mathbf{P}}\left(\sum_{i=1}^{n}s_{i}>cn\right).$
The inequality follows from the assumption that the $v$-family and the
$s$-family of random variables are independent. Hence, by Chernoff’s inequality,
$\displaystyle P(n,c,m)\leq a_{m}^{n}(\psi(\delta))^{n}e^{-\delta cn},$
where $\psi(\delta)={\mathbf{E}}e^{\delta s}$. Take $\delta>0$ such that
$\psi(\delta)$ is finite and then $c_{m}>0$ such that
$h_{m}=a_{m}\psi(\delta)e^{-\delta c_{m}}<1.$
Then
$\sum_{k\in\mathbb{N}}h_{m}^{k}<\infty.$
Hence for all $m$,
$\limsup_{n}\frac{{h}(n)}{n}1_{\overline{D}(a_{m})}\leq
c_{m}1_{\overline{D}(a_{m})},\quad\mbox{a.s.}$
Let $\mu$ be a random variable taking the value $c_{m}$ on the event
$\overline{D}(a_{m})\setminus\overline{D}(a_{m-1})$. Then $\mu$ is finite a.s.
and
$\limsup_{n}\frac{{h}(n)}{n}\leq\mu,\quad\mbox{a.s.}$
But $\limsup_{n}\frac{{h}(n)}{n}$ must be a constant (by ergodicity) and then
this constant is necessarily finite. Indeed, since
$\limsup_{n}\frac{{h}(n)}{n}\geq\limsup_{n}\frac{{h}(n)\circ\theta^{-1}}{n}\quad\mbox{a.s.,}$
and since the shift $\theta$ is ergodic, for each $c$, the event
$\\{\limsup_{n}\frac{{h}(n)}{n}\leq c\\}$ has either probability 1 or 0.
$\Box$
Recall that $\lambda_{c}$ is the critical intensity introduced above: for all
$\lambda<\lambda_{c}$, Boolean Model 5 has a.s. finite clumps.
###### Corollary 1
Let ${\mathbb{H}}(t)=\mathbb{H}_{t}^{0}$ be the height at $0\in\mathbb{Z}^{d}$
in the backward Poisson hail growth model defined in (3). Under the
assumptions of Theorem 2, for all $\lambda<\lambda_{c}$, with $\lambda_{c}>0$
the critical intensity defined above, there exists a finite constant
$\kappa(\lambda)$ such that
$\limsup_{t\to\infty}\frac{\mathbb{H}(t)}{t}=\kappa(\lambda)\quad a.s.$ (14)
with $\lambda$ the intensity of the Poisson rain.
Proof. The proof of the fact that the limit is finite is immediate from bound
(8) and Theorem 2. The proof that the limit is constant follows from the
ergodicity of the underlying model. $\Box$
###### Lemma 4
Let $a<\lambda_{c}$, where $\lambda_{c}$ is the critical value defined above.
For all $\lambda<\lambda_{c}$,
$\kappa(\lambda)=\frac{\lambda}{a}\kappa(a)~{},$ (15)
Proof. A Poisson rain of intensity $\lambda$ on the interval $[0,t]$ can be
seen as a Poisson rain of intensity $a$ on the time interval $[0,\lambda
t/a]$. Hence, with obvious notation
${\mathbb{H}}(t,\lambda)={\mathbb{H}}\left(\frac{t\lambda}{a},a\right),$
which immediately leads to (15). $\Box$
## 4 Service and Arrivals
Below, we focus on the equations for the dynamical system with service and
arrivals, namely on Poisson hail on a hot ground.
Let $W_{t}^{x}$ denote the residual workload at $x$ and $t$, namely the time
elapsing between $t$ and the first epoch when the system is free of all
workload arrived before time $t$ and intersecting location
$x\in\mathbb{R}^{d}$. We assume that $H_{0}^{x}\equiv 0$. Then, with the
notation of Section 3,
$W_{t}^{x}=\left(\sigma_{\tau^{x}(t)}^{x}-t+\tau^{x}(t)+\sup_{y\in
C^{x}_{\tau^{x}(t)}}W_{\tau^{x}(t)}^{y}\right)^{+}1_{\tau^{x}(t)\geq 0}.$ (16)
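As an illustration of recursion (16), here is a toy one-dimensional, discrete-space Python sketch (the same assumed simplifications and illustrative parameters as in the earlier height sketch): between arrivals the residual workload decreases at unit rate, and at an arrival the workload over the footprint jumps to the arriving height plus the supremum of the residual workloads under that footprint.

```python
import random

def residual_workload(T=50.0, L=30, lam=0.25, seed=5):
    """Toy 1-D sketch of recursion (16): between arrivals the residual workload
    W_t^x decays at unit rate (the '-t + tau^x(t)' term, with positive part);
    at an arrival whose footprint covers x it jumps to sigma plus the sup of W
    over the footprint. (Assumed discrete-space illustration only.)"""
    rng = random.Random(seed)
    W = [0.0] * L
    t = 0.0
    while True:
        dt = rng.expovariate(lam * L)
        if t + dt > T:
            break
        t += dt
        W = [max(w - dt, 0.0) for w in W]           # unit-rate service since last event
        c, r = rng.randrange(L), rng.randint(1, 3)  # footprint [c - r, c + r]
        sigma = rng.uniform(0.5, 1.5)
        footprint = range(max(0, c - r), min(L, c + r + 1))
        top = max(W[x] for x in footprint)
        for x in footprint:
            W[x] = top + sigma                      # FIFO: the new RACS waits for all work below
    return max(W)

print(round(residual_workload(), 2))
```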
We will also consider the Loynes’ scheme associated with (16), namely the
random variables
$\mathbb{W}_{t}^{x}=W_{t}^{x}\circ\theta_{t},$
for all $x\in\mathbb{R}^{d}$ and $t>0$. We have
$\mathbb{W}_{t}^{x}=\left(\sigma_{\tau^{x}_{-}(t)}^{x}+\tau^{x}_{-}(t)+\sup_{y\in
C^{x}_{\tau^{x}_{-}(t)}}\mathbb{W}_{t+\tau^{x}_{-}(t)}^{y}\circ\theta_{\tau^{x}_{-}(t)}\right)^{+}1_{\tau^{x}_{-}(t)\geq-t}.$
(17)
Assume that $W_{0}^{x}=\mathbb{W}_{0}^{x}=0$ for all $x$. Using Loynes-type
arguments (see, e.g., [7] or [8]), it is easy to show that, for all $x$,
$\mathbb{W}_{t}^{x}$ is non-decreasing in $t$. Let
$\mathbb{W}_{\infty}^{x}=\lim_{t\to\infty}\mathbb{W}_{t}^{x}.$
By a classical ergodic theory argument, the limit $\mathbb{W}_{\infty}^{x}$ is
either finite a.s. or infinite a.s. Therefore, for all integers $n$ and all
$(x_{1},\ldots,x_{n})\in\mathbb{R}^{dn}$, either
$\mathbb{W}_{\infty}^{x_{i}}=\infty$ for all $i=1,\ldots,n$ a.s. or
$\mathbb{W}_{\infty}^{x_{i}}<\infty$ for all $i=1,\ldots,n$ a.s. In the latter
case,
* •
$\\{\mathbb{W}_{\infty}^{x}\\}$ is the smallest stationary solution of (17);
* •
$(\mathbb{W}_{t}^{x_{1}},\ldots,\mathbb{W}_{t}^{x_{n}})$ converges a.s. to
$(\mathbb{W}_{\infty}^{x_{1}},\ldots,\mathbb{W}_{\infty}^{x_{n}})$ as $t$
tends to $\infty$.
Our main result is (with the notation of Corollary 1):
###### Theorem 3
If $\lambda<\min(\lambda_{c},a\kappa(a)^{-1})$, then for all
$x\in\mathbb{R}^{d}$, $\mathbb{W}_{\infty}^{x}<\infty$ a.s.
Proof. For all $t>0$, we say that $x_{0}$ is a critical path of length $0$ and
span $t$ starting from $x_{0}$ in the backward growth model
$\\{\mathbb{H}_{t}^{x}\\}$ defined in (3) if $\tau^{x_{0}}_{-}(t)<-t$. The
height of this path is $\mathbb{H}_{t}^{x_{0}}=0.$ For all $t>0$, $q\geq 1$,
we say that $x_{0},x_{1},\ldots,x_{q}$ is a critical path of length $q$ and
span $t$ starting from $x_{0}$ in the backward growth model
$\\{\mathbb{H}_{t}^{x}\\}$ defined in (3) if
$\displaystyle\mathbb{H}_{t}^{x_{0}}$ $\displaystyle=$
$\displaystyle\sigma_{\tau^{x_{0}}_{-}(t)}^{x_{0}}+\mathbb{H}_{t+\tau^{x_{0}}_{-}(t)}^{x_{1}}\circ\theta_{\tau^{x_{0}}_{-}(t)}~{},$
with $x_{1}\in C^{x_{0}}_{\tau^{x_{0}}_{-}(t)}$ and $\tau^{x_{0}}_{-}(t)>-t$,
and if $x_{1},\ldots,x_{q}$ is a critical path of length $q-1$ and span
$t+\tau^{x_{0}}_{-}(t)$ starting from $x_{1}$ in the backward growth model
$\\{\mathbb{H}_{t+\tau^{x_{0}}_{-}(t)}^{x}\circ\theta_{\tau^{x_{0}}_{-}(t)}\\}$.
The height of this path is $\mathbb{H}_{t}^{x_{0}}$.
Assume that $\mathbb{W}_{\infty}^{x_{0}}=\infty$. Since $\mathbb{W}_{t}^{x}$
is a.s. finite for all finite $t$ and all $x$, there must exist an increasing
sequence $\\{t_{k}\\}$, with $t_{k}\to\infty$, such that
$\mathbb{W}_{t_{k+1}}^{x_{0}}>\mathbb{W}_{t_{k}}^{x_{0}}>0$ for all $k$. This
in turn implies the existence, for all $k$, of a critical path of length
$q_{k}$ and span $t_{k}$, say $x_{0},x^{k}_{1},\ldots,x_{q_{k}}^{k}$ of height
$\mathbb{H}^{x_{0}}_{t_{k}}$ such that
$\mathbb{W}^{x_{0}}_{t_{k+1}}=\mathbb{H}^{x_{0}}_{t_{k}}-t_{k}>0.$
Then
$\frac{\mathbb{H}^{x_{0}}_{t_{k}}}{t_{k}}\geq 1,$
for all $k$ and therefore
$\kappa(\lambda)\geq\liminf_{k\to\infty}\frac{\mathbb{H}^{x_{0}}_{t_{k}}}{t_{k}}\geq
1.$
Using (15), we get
$\kappa(\lambda)=\frac{\lambda}{a}\kappa(a)\geq 1\quad\mbox{a.s.}$
But this contradicts the theorem assumptions. $\Box$
###### Remark 4
Theorem 1 follows from the last theorem and the remarks that precede it.
###### Remark 5
We will say that the dynamical system with arrivals and service percolates if
there is a time for which the directed graph of RACS present in the system at
that time (where directed edges between two RACS represent the precedence
constraints between them) has an infinite directed component. The finiteness
of the Loynes variable is equivalent to the non-percolation of this dynamical
system.
## 5 Bernoulli Hail on a Hot Grid
The aim of this section is to discuss discrete versions of the Poisson hail
model, namely versions where the server is the grid $\mathbb{Z}^{d}$ rather
than the Euclidean space $\mathbb{R}^{d}$. Some specific discrete models were
already considered in the analysis of the Poisson hail model (see e.g.
Sections 3.1 and 3.2). Below, we concentrate on the simplest model, emphasize
the main differences with the continuous case and give a few examples of
explicit bounds and evolution equations.
### 5.1 Models with Bernoulli Arrivals and Constant Services
The state space is $\mathbb{Z}$. All RACS are pairs of neighbouring
points/nodes $\\{i,i+1\\}$, $i\in\mathbb{Z}$, with service time 1. In other
words, such a RACS requires $1$ unit of time of simultaneous service from
nodes/servers $i$ and $i+1$. For short, a RACS $\\{i,i+1\\}$ will be called a
``RACS of type $i$''.
Within each time slot (of size 1), the number of RACS of type $i$ arriving is
a Bernoulli-($p$) random variable. All these variables are mutually
independent. If a RACS of type $i$ and a RACS of type $i+1$ arrive in the same
time slot, the FIFO tie is resolved at random (each order with probability
$1/2$). The system is empty at time 0, and RACS start to arrive from time slot
$(0,1)$ on.
#### 5.1.1 The Growth Model
(1) The Graph ${\cal G}(1)$.
We define a precedence graph ${\cal G}(1)$ associated with $p=1$. Its nodes
are all $(i,n)$ pairs, where $i\in\mathbb{Z}$ is a type and
$n\in\mathbb{N}=\\{1,2,\ldots\\}$ is a time. There are directed edges between
certain nodes, some of which are deterministic and some random. These edges
represent precedence constraints: an edge from $(i,n)$ to
$(i^{\prime},n^{\prime})$ means that $(i,n)$ ought to be served after
$(i^{\prime},n^{\prime})$. Here is the complete list of directed edges:
1. 1.
There is either an edge $(i,n)\to(i+1,n)$ w.p. 1/2 (exclusive) or an edge
$(i+1,n)\to(i,n)$ w.p. 1/2; we call these random edges spatial;
2. 2.
The edges $(i,n)\to(i-1,n-1)$, $(i,n)\to(i,n-1)$, and $(i,n)\to(i+1,n-1)$
exist for all $i$ and $n\geq 2$; we call these (deterministic) edges time edges.
Notice that there are at most five directed edges from each node. These edges
define directed paths: for $x_{j}=(i_{j},n_{j})$, $j=1,\ldots,m$, the path
$x_{1}\to x_{2}\to\ldots\to x_{m}$ exists if (and only if) all edges along
this path exist. All paths in this graph are acyclic. If a path exists, its
length is the number of nodes along the path, i.e. $m$.
(2) The Graph ${\cal G}(p)$.
We obtain ${\cal G}(p)$ from ${\cal G}(1)$ by the following thinning:
1. 1.
Each node of ${\cal G}(1)$ is colored ”white” with probability $1-p$ and
”black” with probability $p$, independently of everything else;
2. 2.
If a node is coloured white, then each directed spatial edge from this node is
deleted (recall that there are at most two such edges);
3. 3.
For $n\geq 2$, if a node $(i,n)$ is coloured white, then two time edges
$(i,n)\to(i-1,n-1)$ and $(i,n)\to(i+1,n-1)$ are deleted, and only the
”vertical” one, $(i,n)\to(i,n-1)$, is kept.
The sets of nodes are hence the same in ${\cal G}(1)$ and ${\cal G}(p)$,
whereas the set of edges in ${\cal G}(p)$ is a subset of that in ${\cal
G}(1)$. Paths in ${\cal G}(p)$ are defined as above (a path is made of a
sequence of directed edges present in ${\cal G}(p)$). The graph ${\cal G}(p)$
describes the precedence relationship between RACS in our basic growth model.
##### The Monotone Property.
We have the following monotonicity in $p$: the smaller $p$, the thinner the
graph. In particular, by using the natural coupling, one can make ${\cal
G}(p)\subset{\cal G}(q)$ for all $p\leq q$; here inclusion means that the sets
of nodes in both graphs are the same and the set of edges of ${\cal G}(p)$ is
included in that of ${\cal G}(q)$.
#### 5.1.2 The Heights and The Maximal Height Function
We now associate heights to the nodes: the height of a white node is 0 and
that of a black one is 1. The height of a path is the sum of the heights of
the nodes along the path. Clearly, the height of a path cannot be bigger than
its length.
For all $(i,n)$, let $H_{n}^{i}=H_{n}^{i}(p)$ denote the height of the maximal
height path among all paths of ${\cal G}(p)$ which start from node $(i,n)$. By
using the natural coupling alluded to above, we get that $H_{n}^{i}(p)$ can be
made a.s. increasing in $p$.
Notice that, for all $p\leq 1$, for all $n$ and $i$, the random variable
$H_{n}^{i}$ is finite a.s. To show this, it is enough to consider the case
$p=1$ (thanks to monotonicity) and $i=0$ (thanks to translation invariance).
Let
$t^{+}_{n,n}=\min\\{i\geq 1\ :\ (i,n)\to(i-1,n)\\}$
and, for $m=n-1,n-2,\ldots,1$, let
$t^{+}_{m,n}=\min\\{i>t^{+}_{m+1,n}+1\ :\ (i,m)\to(i-1,m)\\}.$
Similarly, let
$t^{-}_{n,n}=\max\\{i\leq-1\ :\ (i,n)\to(i+1,n)\\}$
and, for $m=n-1,n-2,\ldots,1$, let
$t^{-}_{m,n}=\max\\{i<t^{-}_{m+1,n}-1\ :\ (i,m)\to(i+1,m)\\}.$
Then all these random variables are finite a.s. (moreover, have finite
exponential moments) and the following rough estimate holds:
$H_{n}^{0}\leq\sum_{m=1}^{n}\left(t^{+}_{m,n}-t^{-}_{m,n}\right)+n.$
#### 5.1.3 Time and Space Stationarity
The driving sequence of RACS is i.i.d. and independent of the random ordering
of neighbours, which is again i.i.d., so the model is homogeneous both in time
$n=1,2,\ldots$ and in space $i\in\mathbb{Z}$. We may then extend the driving
sequences to non-positive time indices $n$ and introduce the measure-preserving
time shift $\theta$ and its iterates
$\theta^{m},-\infty<m<\infty$. So $H_{n}^{i}\circ\theta^{m}$ is now
representing the height of the node $(i,n+m)$ in the model which starts from
the empty state at time $m$. Again, due to the space homogeneity, for any
fixed $n$, the distribution of the random variable $H_{n}^{i}$ does not depend
on $i$. So, in what follows, we will write for short
$H_{n}\equiv H_{n}(p)=H_{n}^{i},$
when this does not lead to confusion.
Definition of function $h$. We will also consider paths from $(0,n)$ to
$(0,1)$ and we will denote by $h_{n}=h_{n}(p)$ the maximal height of all such
paths. Clearly, $h_{n}\leq H_{n}$ a.s.
#### 5.1.4 Finiteness of the Growth Rate and Its Continuity at 0
###### Lemma 5
There exists a positive probability $p_{0}\geq 2/5$ such that, for any
$p<p_{0}$,
$\limsup_{n\to\infty}H_{n}/n\leq C(p)<\infty,\quad\mbox{a.s.}$ (18)
and
$h_{n}(p)/n\to\gamma(p)\quad\mbox{a.s. and in $L_{1}$},$ (19)
with $\gamma(p)$ and $C(p)$ positive and finite constants, $\gamma(p)\leq
C(p)$.
##### Remark.
The sequence $\\{H_{n}\\}$ is neither sub- nor super-additive.
###### Lemma 6
For all $p$,
$\limsup_{n\to\infty}H_{n}(p)/n\leq 2\gamma(p)\quad\mbox{a.s.}$ (20)
###### Lemma 7
Under the foregoing assumptions,
$\lim_{p\downarrow 0}\limsup_{n\to\infty}H_{n}/n=0\quad\mbox{a.s.}$
Proofs of Lemmas 5–7 are in a similar spirit to those of the main results
(Borel–Cantelli lemma, branching upper bounds, and superadditivity), and
therefore are omitted.
#### 5.1.5 Exact Evolution Equations for the Growth Model
We now describe the exact evolution of the process defined in §5.1.1. We adopt
here the continuous-space interpretation where a RACS of type $i$ is a segment
of length 2 centered in $i\in\mathbb{Z}$.
The variable $H_{n}^{i}$ is the height of the last RACS (segment) of type $i$
that arrived among the set with time index less than or equal to $n$ (namely
with index $1\leq k\leq n$), in the growth model under consideration. If
$(i,n)$ is black, then $H_{n}^{i}$ is at the same time the height of the
maximal height path starting from node $(i,n)$ in ${\cal G}(p)$ and the height
of the RACS $(i,n)$ in the growth model. If $(i,n)$ is white and the last
arrival of type $i$ before time $n$ is $k$, then $H_{n}^{i}=H_{k}^{i}$. This
is depicted in Figure 1.
If there are no arrivals of type $i$ in this time interval, then
$H_{n}^{i}=0$. In general, if $\beta_{n}^{i}$ is the number of segments of
type $i$ that arrive in $[1,n]$, then $H_{n}^{i}\geq\beta_{n}^{i}$. Let
$v_{n}^{i}$ be the indicator of the event that $(i,n)$ is an arrival
($v_{n}^{i}=1$ if it is black and $v_{n}^{i}=0$ otherwise).
Let $e_{n}^{i,i+1}$ indicate the direction of the edge between $(i,n)$ and
$(i+1,n)$: we write $e_{n}^{i,i+1}=r$ if the right node has priority,
$e_{n}^{i,i+1}=l$ if the left node has priority.
The following evolution equations hold: if $v_{n+1}^{i}=1$, then
$H_{n+1}^{i}=\left(H_{n+1}^{i+1}+1\right){\bf I}(e_{n}^{i,i+1}=r,\ v_{n+1}^{i+1}=1)\ \vee\ \left(H_{n+1}^{i-1}+1\right){\bf I}(e_{n}^{i-1,i}=l,\ v_{n+1}^{i-1}=1)\ \vee\ \left(H_{n}^{i}\vee H_{n}^{i-1}\vee H_{n}^{i+1}+1\right),$
and if $v_{n+1}^{i}=0$ then $H_{n+1}^{i}=H_{n}^{i}$. Here, for any event $A$,
${\bf I}(A)$ is its indicator function: it equals $1$ if the event occurs and
$0$ otherwise.
The evolution equations above may be rewritten as
$H_{n+1}^{i}=\left(H_{n+1}^{i+1}+1\right){\bf I}(e_{n}^{i,i+1}=r,\ v_{n+1}^{i+1}=1,\ v_{n+1}^{i}=1)\ \vee\ \left(H_{n+1}^{i-1}+1\right){\bf I}(e_{n}^{i-1,i}=l,\ v_{n+1}^{i-1}=1,\ v_{n+1}^{i}=1)\ \vee\ \left(H_{n}^{i}\vee H_{n}^{i-1}{\bf I}(v_{n+1}^{i}=1)\vee H_{n}^{i+1}{\bf I}(v_{n+1}^{i}=1)+{\bf I}(v_{n+1}^{i}=1)\right).$
Figure 1: Top: A realization of the random graph ${\cal G}(p)$. Only the first
6 time-layers are represented. A black node at $(i,n)$ represents the arrival
of a RACS of type $i$ at time $n$. Bottom: the associated heap of RACS,
with a visualization of the height $H_{n}^{i}$ of each RACS.
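The recursion can be made concrete with a minimal simulation sketch in Python (illustrative only: a finite segment of sites, crude boundary handling, and a fixed-point sweep to resolve the same-step priority terms are choices made here, not prescribed by the text; the sweep terminates because the priority relation within a time step is acyclic).

    import random

    def simulate_heights(p, width=200, steps=100, seed=0):
        """Illustrative simulation of the growth model on a finite segment of
        `width` sites (sites outside the segment are treated as height 0)."""
        rng = random.Random(seed)
        H = [0] * width                                   # empty system at time 0
        for _ in range(steps):
            v = [1 if rng.random() < p else 0 for _ in range(width)]   # arrivals
            # e[i] encodes the priority between sites i and i+1:
            # 'r' means the right node arrives earlier, 'l' the left one.
            e = ['r' if rng.random() < 0.5 else 'l' for _ in range(width - 1)]
            new_H = H[:]                                  # white nodes keep their height
            for i in range(width):                        # last term of the recursion
                if v[i]:
                    left = H[i - 1] if i > 0 else 0
                    right = H[i + 1] if i < width - 1 else 0
                    new_H[i] = max(H[i], left, right) + 1
            changed = True                                # same-step neighbour terms
            while changed:
                changed = False
                for i in range(width):
                    if not v[i]:
                        continue
                    cand = new_H[i]
                    if i < width - 1 and v[i + 1] and e[i] == 'r':
                        cand = max(cand, new_H[i + 1] + 1)
                    if i > 0 and v[i - 1] and e[i - 1] == 'l':
                        cand = max(cand, new_H[i - 1] + 1)
                    if cand > new_H[i]:
                        new_H[i] = cand
                        changed = True
            H = new_H
        return H

    H = simulate_heights(p=0.3)
    print(max(H), sum(H) / len(H))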
#### 5.1.6 Exact Evolution Equations for the Model with Service
The system with service can be described as follows: there is an infinite
number of servers, each of which serves with a unit rate. The servers are
located at points $1/2+i$, $-\infty<i<\infty$. For each $i$, RACS $(i,n)$ (or
customer $(i,n)$) is a customer of “type” $i$ that arrives with probability
$p$ at time $n$ and needs one unit of time for simultaneous service from two
servers located at points $i-1/2$ and $i+1/2$. So, at most one customer of
each type arrives at each integer time instant. If customers of types $i$ and
$i+1$ arrive at time $n$, then a decision is made at random, with equal
probabilities, as to whether $i$ or $i+1$ arrives earlier:
${\mathbf{P}}(\mbox{customer}\ i\ \mbox{arrives earlier than customer}\ i+1)={\mathbf{P}}(e_{n}^{i,i+1}=l)=1/2.$
Each server serves customers in the order of arrival. A customer leaves the
system after the completion of its service. As before, we may assume that, for
each $(i,n)$, customer $(i,n)$ arrives with probability $1$, but is
“real”(“black”) with probability $p$ and “virtual”(“white”) with probability
$1-p$.
Assume that the system is empty at time $0$ and that the first customers
arrive at time $1$. Then, for any $n=1,2,\ldots$, the quantity
$W_{n}^{i}:=\max(T_{n}^{i}-(n-1),0)$ is the residual amount of time (starting
from time $n$) which is needed for the last real customer of type $i$ (among
customers $(i,1),\ldots,(i,n)$) to receive the service (or equals zero if
there are no real customers there).
Then these random variables satisfy the equations, for $n\geq
1,-\infty<i<\infty$,
$W_{n+1}^{i}=\left(W_{n+1}^{i+1}+1\right){\bf I}(e_{n}^{i,i+1}=r,\ v_{n+1}^{i+1}=1,\ v_{n+1}^{i}=1)\ \vee\ \left(W_{n+1}^{i-1}+1\right){\bf I}(e_{n}^{i-1,i}=l,\ v_{n+1}^{i-1}=1,\ v_{n+1}^{i}=1)\ \vee\ \left((W_{n}^{i}-1)^{+}+{\bf I}(v_{n+1}^{i}=1)\right)\ \vee\ \left((W_{n}^{i-1}-1)^{+}+1\right){\bf I}(v_{n+1}^{i}=1)\ \vee\ \left((W_{n}^{i+1}-1)^{+}+1\right){\bf I}(v_{n+1}^{i}=1).$
Since the heights are equal to 1 (and time intervals have length 1), the last
two terms in the equation may be simplified, for instance,
$((W_{n}^{i-1}-1)^{+}+1){\bf I}(v_{n+1}^{i}=1)$ may be replaced by
$W_{n}^{i-1}{\bf I}(v_{n+1}^{i}=1).$
In the case of random heights $\\{\sigma_{n}^{i}\\}$, the random variables
$\\{W_{n}^{i}\\}$ satisfy the recursions
$W_{n+1}^{i}=\left(W_{n+1}^{i+1}+\sigma_{n+1}^{i}\right){\bf I}(e_{n}^{i,i+1}=r,\ v_{n+1}^{i+1}=1,\ v_{n+1}^{i}=1)\ \vee\ \left(W_{n+1}^{i-1}+\sigma_{n+1}^{i}\right){\bf I}(e_{n}^{i-1,i}=l,\ v_{n+1}^{i-1}=1,\ v_{n+1}^{i}=1)\ \vee\ \left((W_{n}^{i}-1)^{+}+\sigma_{n+1}^{i}{\bf I}(v_{n+1}^{i}=1)\right)\ \vee\ \left((W_{n}^{i-1}-1)^{+}+\sigma_{n+1}^{i}\right){\bf I}(v_{n+1}^{i}=1)\ \vee\ \left((W_{n}^{i+1}-1)^{+}+\sigma_{n+1}^{i}\right){\bf I}(v_{n+1}^{i}=1).$
The following monotonicity property holds: for any $n$ and $i$,
$W_{n+1}^{i}\circ\theta^{-n-1}\leq W_{n}^{i}\circ\theta^{-n}\quad\mbox{a.s.}$
Let
$p_{0}=\sup\\{p\ :\ \Gamma(p)\leq 1\\}.$
###### Theorem 4
If $p<p_{0}$, then, for any $i$, random variables $W_{n}^{i}$ converge weakly
to a proper limit. Moreover, there exists a stationary random vector
$\\{W^{i},-\infty<i<\infty\\}$ such that, for any finite integers $i_{0}\leq
0\leq i_{1}$, the finite-dimensional random vectors
$(W_{n}^{i_{0}},W_{n}^{i_{0}+1},\ldots,W_{n}^{i_{1}-1},W_{n}^{i_{1}})$
converge weakly to the vector
$(W^{i_{0}},W^{i_{0}+1},\ldots,W^{i_{1}-1},W^{i_{1}}).$
###### Theorem 5
If $p<p_{0}$, then the random variables
$\min\\{i\geq 0\ :\ W^{i}=0\\}\quad\mbox{and}\quad\max\\{i\leq 0\ :\
W^{i}=0\\}$
are finite a.s.
## 6 Conclusion
We conclude with a few open questions. The first class of questions pertains to
stochastic geometry [9]:
* •
How does the RACS exclusion process (the process of the RACS in service at
time $t$ in steady state) compare to other exclusion processes (e.g. Matérn,
Gibbs)?
* •
Assuming that the system is stable, can the undirected graph of RACS present
in the steady state regime percolate?
The second class of questions is classical in queueing theory and pertains to
existence and properties of the stationary regime:
* •
In the stable case, does the stationary solution $\mathbb{W}_{\infty}^{0}$
always have a light tail? At the moment, we can show this under extra
assumptions only. Notice that, although the Poisson hail model falls in the
category of infinite dimensional max-plus linear systems, the techniques
developed for analyzing the tails of the stationary regimes of finite
dimensional max-plus linear systems [3] cannot be applied here.
* •
In the stable case, does the Poisson hail equation (17) admit other stationary
regimes than obtained from $\\{\mathbb{W}_{\infty}^{x}\\}_{x}$, the minimal
stationary regime?
* •
For which other service disciplines that still respect the hard exclusion rule
(e.g. priorities or first/best fit) can one also construct a steady state?
## 7 Appendix
###### Proposition 1
For any pair $(X,Y)$ of random variables with light-tailed marginal
distributions, there exists a coupling with another pair $(\xi,\eta)$ of
i.i.d. random variables with a common light-tailed marginal distribution and
such that
$\max(X,Y)\leq\min(\xi,\eta)\quad\mbox{a.s}.$
Proof. Let $F_{X}$ be the distribution function of $X$ and $F_{Y}$ the
distribution function of $Y$. Let $C>0$ be such that ${\mathbf{E}}e^{CX}$ and
${\mathbf{E}}e^{CY}$ are finite. Let $\zeta=\max(0,X,Y)$. Since
$e^{C\zeta}\leq 1+e^{CX}+e^{CY}$, $\zeta$ also has a light-tailed
distribution, say $F$.
Let $\overline{F}(x)=1-F(x)$, $\overline{G}(x)=\overline{F}^{1/2}(x)$, and
$G(x)=1-\overline{G}(x)$. Let $\xi$ and $\eta$ be i.i.d. with common
distribution $G$. Then ${\mathbf{E}}e^{c\xi}$ is finite for any $c<C/2$.
Finally, a coupling of $X,Y,\xi$, and $\eta$ may be built as follows. Let
$U_{1}$, $U_{2}$ be two i.i.d. random variables having uniform $(0,1)$
distribution. Then let $\xi=G^{-1}(U_{1})$, $\eta=G^{-1}(U_{2})$ and
$\zeta=\min(\xi,\eta)$. Finally, define $X$ and $Y$ given $\max(X,Y)=\zeta$
and conditionally independent of $(\xi,\eta)$.
$\Box$
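As a quick numerical illustration of the construction (a sketch added for concreteness, not part of the proof), take $X,Y$ i.i.d. exponential with rate 1, so that $F(x)=(1-e^{-x})^{2}$ for $x>0$ is explicit; drawing $\xi,\eta$ i.i.d. from $G$ with $\overline{G}=\overline{F}^{1/2}$, the minimum $\min(\xi,\eta)$ should then reproduce the distribution $F$ of $\zeta$:

    import math, random

    rng = random.Random(1)
    N = 100_000

    def F(x):        # c.d.f. of zeta = max(0, X, Y) for X, Y i.i.d. Exp(1)
        return (1 - math.exp(-x)) ** 2 if x > 0 else 0.0

    def G_inv(u):    # quantile function of G, where Gbar = (1 - F) ** 0.5
        # G(x) = u  <=>  F(x) = 1 - (1 - u) ** 2
        u = min(u, 1 - 1e-12)          # guard against u == 1 in floating point
        t = 1 - (1 - u) ** 2
        return -math.log(1 - math.sqrt(t))

    # min(xi, eta), with xi, eta i.i.d. ~ G, has tail Gbar ** 2 = Fbar,
    # i.e. it is distributed exactly as zeta = max(0, X, Y).
    samples = [min(G_inv(rng.random()), G_inv(rng.random())) for _ in range(N)]
    for x in (0.5, 1.0, 2.0):
        empirical = sum(s <= x for s in samples) / N
        print(x, round(empirical, 3), round(F(x), 3))    # empirical vs. exact F(x)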
## References
* [1] F. Baccelli and P. Brémaud, Elements of Queueing Theory, Springer Verlag, Applications of Mathematics, 2003.
* [2] F. Baccelli and S. Foss, “On the Saturation Rule for the Stability of Queues.” Journal of Applied Probability, 32(1995), 494–507.
* [3] F. Baccelli and S. Foss, “Moments and Tails in Monotone-Separable Stochastic Networks”, Annals of Applied Probability, 14 (2004), 621–650.
* [4] J.D. Biggins, “The first and last birth problems for a multitype age-dependent branching process”. Advances in Applied Probability, 8 (1976), 446–459.
* [5] B. Blaszczyszyn, C. Rau and V. Schmidt “Bounds for clump size characteristics in the Boolean model”. Adv. in Appl. Probab. 31, (1999), 910–928.
* [6] P. Hall, “On continuum percolation” Ann. Probab. 13, (1985), 1250–1266.
* [7] M. Loynes, “The stability of a queue with non-independent inter-arrival and service times”, Proceedings of the Cambridge Phil. Society, 58 (1962), 497–520.
* [8] D. Stoyan, Comparison Methods for Queues and other Stochastic Models, 1983, Wiley.
* [9] D. Stoyan, W.S. Kendall and J. Mecke, Stochastic Geometry and its Applications, second edition, 1995, Wiley.
# Hierarchical Complexity: Measures of High Level Modularity
Alex Fernández (alejandrofer@gmail.com)
###### Abstract
Software is among the most complex endeavors of the human mind; large scale
systems can have tens of millions of lines of source code. However, complexity
is seldom measured above the lowest level of code or, at most, at the level of
source code files or low-level modules. In this paper a hierarchical approach is explored
in order to find a set of metrics that can measure higher levels of
organization. These metrics are then used on a few popular free software
packages (totaling more than 25 million lines of code) to check their
efficiency and coherency.
## 1 Introduction
There is a high volume of literature devoted to measuring complexity. Several
metrics have been proposed to measure complexity in software packages. At the
code level there is for example McCabe’s cyclomatic complexity [1]. Complexity
in modules (sometimes equated to source code files, in other occasions low
level aggregations of code) has also been studied, and its relationship to
quality has been measured [2].
But modern software packages can have millions or tens of millions of lines of
source code; just one level of modularity is not enough to make complex or large
software systems manageable. And indeed an exploration of large system
development (made possible thanks to open development of free software)
systematically reveals a hierarchy of levels that contains subsystems,
libraries, modules and other components. But to date no theoretical approach
has been developed to measure and understand this hierarchy. This paper
presents a novel way to study this problem; the particular set of metrics
derived can be seen as a first order approach, to be refined in subsequent
research when the problem reaches a wider audience.
### 1.1 Contents
This first section contains this short introduction; the second is devoted to
the study of hierarchical complexity and modularity from a theoretical point
of view. Section 3 explains the experimental setup that will be used for
measurements. In section 4 the source code of several software packages is
measured to verify the theoretical assumptions, and the results are analyzed.
Finally, the last section contains some conclusions that can be extracted from
this approach, in light of the theoretical analysis and the experimental
results.
### 1.2 Conventions
Portions of source code are shown in a non-proportional font: org.eclipse.
File names are also shown in non-proportional type, and also in italics:
_org/eclipse_. Commands are shown indented in their own line, like this:
> find . ! -name "*.c" -delete
Terms being defined appear within single quotes: ‘exponential tree’, while
terms already defined in the text or assumed are in double quotes:
“combinatorial complexity”.
## 2 Theoretical Approach
Complexity is the bane of software development. Large scale systems routinely
defy the understanding and comprehension powers of the brightest individuals;
small scale systems suffer from poor maintainability, and it is often cheaper
to rewrite old programs than to try to understand them.
Against this complexity there is basically one weapon: modularity. No other
technique (cataloging, automation, graphical environments) works for large
system development, but our tools to measure it are scarce. In this section a
new metric for complexity measurements is proposed.
### 2.1 Hierarchical Complexity
In 1991 Capers Jones proposed several measures of complexity for software [3].
One of them was “combinatorial complexity”, dealing with the ways to integrate
different components; another was “organizational complexity”, this time
dealing with the organization that produces the software. But no attention was
given to the fact that large-scale software is organized in multiple levels.
Several notions related to high level organization have since been advanced in
the literature [4, 5]. However, when the time comes to measure complexity in a
system it is still the norm to resort to measuring interfaces [6], code-level
complexity [7] or just lines of code per module [8]. Given that the real
challenge in software engineering does not currently lie in algorithmic
complexity or cyclomatic complexity, but in the complexity of high level
organization, some metrics to explore complexity in the component hierarchy
should be a useful addition to the software engineering toolbox. The term
“hierarchical complexity” seems a good fit for the magnitude being measured.
### 2.2 System and Components
A ‘system’ can be defined as a global entity, which is seen from the outside.
A ‘component’ (often called “module”) is an internal part of the system which
is relatively independent, and which hides its inner workings from the rest of
the system. It communicates with the rest of the system via ‘interfaces’:
predefined aggregations of behavior.
We can then redefine a ‘system’ as a collection of components and the
interfaces that connect them. Underlying this definition is the presumption
that a system has well defined components; as we will see, at some level it
holds true for any non-trivial software system.
A system can also be divided into subsystems. A ‘subsystem’ is a high-level
component inside a system. In turn, a component can be divided into
subcomponents; for each of them, the container component is their “system”. In
other words, any component can be seen as a whole system, which interfaces
with other systems at the same level.
When the system is viewed in this light, a hierarchy of levels emerges: the
main system is made of subsystems, which can often be decomposed into
components, in turn divided into subcomponents. The number of levels varies
roughly with the size of the system; for small software projects one or two
levels can be enough, while large scale developments often have more than five
levels.
### 2.3 Articulation
The way that these levels are organized is a crucial aspect of modularity. It
is not enough to keep a neat division between modules; for robust and
maintainable systems to emerge, connections between levels must follow a
strict hierarchy. The alternative is what is commonly called “spaghetti code”.
There are many different names for components at various levels. Depending on
the domain and the depth we can find subsystems, modules, plugins, components,
directories, libraries, packages and a few more. It is always a good idea to
choose a standard set of levels for a given system and use them consistently.
The selection of names should be done according to whatever is usual in the
system’s domain, e.g.: in operating systems “subsystems” are commonly used as
first level, as in “networking subsystem”; while in financial packages
“modules” is the common division (as in “supply chain module”). Finally,
ambiguities should be avoided: e.g. “software package” is used in this paper
in the sense of “complete software systems”, so when adding a
component level of “packages” the difference must be noted.
In the lowest levels it is often the programming language itself that drives
(and sometimes enforces) modularity. In object-oriented languages, classes are
containers of behavior that hide their internal workings (like attributes or
variables), and are in fact a perfect example of components. Procedural
languages usually keep some degree of modularity within source code files:
some of the behavior can be hidden from other files.
Functions are at the bottom of the hierarchy. A ‘function’ (also called
“subroutine” or “method”) is a collection of language statements with a common
interface that are executed in sequence. A function certainly hides its
behavior from the outside, so it can be seen as the lowest level component, or
it can be seen as the highest level of non-components. The first point of view
will be chosen here.
Statements (or lines of code) are the low level elements, but they can hardly
be considered components themselves from the point of view of the programming
language: they do not hide their internal workings, nor have an interface.
There is however one sense in which they can actually be considered
components: as entry points to processor instructions. A line of code will
normally be translated into several machine-code instructions, which will in
turn be run on the processor. The line of code hides the internal workings of
the processor and presents a common interface in the chosen language; if a
function call is performed, the language statement hides the complexity of
converting the parameters if necessary, storing them and jumping to the
desired location, and finally returning to the calling routine. Since in
software development machine-code instructions are seldom considered except
for debugging or optimization purposes, lines of code remain as “borderline
components” and will be considered as units for the purposes of this study.
Lines of source code can be unreliable as economic indicators [9], but they
are good candidates for the simplest bricks out of which software is made once
comments and blank lines are removed. Source code statements are more reliable
but the added effort is not always worth the extra precision. In what follows
the unit for non-blank, non-comment source code line will be written like
this: “$\mathsf{LOC}$”, with international system prefixes such as
“$\mathsf{kLOC}$” for a thousand lines and “$\mathsf{MLOC}$” for a million
lines.
To recapitulate, a function is made of lines of code, and a class is made of a
number of functions (plus other things). Classes are kept in source code
files, and files are combined into low level components. Components aggregate
into higher level components, until at the last two steps we find subsystems
and the complete system.
### 2.4 A Few Numbers
We will now check these theoretical grounds with some numerical estimations.
The “magic number” $7$ will be used as an approximation of a manageable
quantity; in particular the “magic range” $7\pm 2$. It is common to use the
interval $7\pm 2$ in that fashion [10, 11], and yet Miller’s original study
[12] does not warrant it except as the number of things that can be counted at
a glance. Even so, it seems to be a reasonable cognitive limit for the number
of things that can be considered at the same time.
The main assumption for this subsection is thus that a system is more
maintainable if it is composed of $7\pm 2$ components. In this range its
internal organization can be apprehended with ease. Each of these components
will in turn contain $7\pm 2$ subcomponents, and so on for each level. In the
lowest level, functions with $7\pm 2\mathsf{LOC}$ will be more manageable. In
this fashion systems with $7\pm 2$ subsystems, packages with $7\pm 2$ classes,
classes with $7\pm 2$ functions and functions with $7\pm 2\mathsf{LOC}$ will
be preferred. At each level a new type of component will be chosen; arbitrary
names are given so as to resemble a typical software organization.
The problem can thus be formulated as follows: if each component contains
$7\pm 2$ subcomponents, what would be the total system size (measured in
$\mathsf{LOC}$) for a given depth of levels? Put another way, how many levels
of components will be needed for a given $\mathsf{LOC}$ count? (Again, the
lowest level containing $\mathsf{LOC}$ themselves does not count as a level of
components.)
Let us first study the nominal value of the range, $7$. For only $3$ levels of
components, a reasonable amount for a small system, the total $\mathsf{LOC}$
count will be
$7\mathsf{packages}\times 7\mathsf{\frac{classes}{package}}\times
7\mathsf{\frac{functions}{class}}\times
7\mathsf{\frac{LOC}{function}}=2401\mathsf{LOC}.$
When there are $4$ levels of components, including in this case “subsystems”,
the result is
$7\mathsf{subsystems}\times
7\mathsf{\frac{\mathsf{packages}}{subsystem}}\times
7\mathsf{\frac{classes}{package}}\times
7\mathsf{\frac{functions}{class}}\times
7\mathsf{\frac{LOC}{function}}=16807\mathsf{LOC}.$
For bigger systems a new level of “modules” is added:
$7\mathsf{subsystems}\times 7\mathsf{\frac{\mathsf{modules}}{subsystem}}\times
7\mathsf{\frac{\mathsf{packages}}{module}}\times
7\mathsf{\frac{classes}{package}}\times
7\mathsf{\frac{functions}{class}}\times 7\mathsf{\frac{LOC}{function}}=$
$=117649\mathsf{LOC}\approx 117\mathsf{kLOC}.$
A respectable system of $100\mathsf{kLOC}$ is already reached with $5$ levels.
At depth $6$ (inserting a level of “libraries”) the total $\mathsf{LOC}$ count
will now be:
$7\mathsf{subsystems}\times
7\mathsf{\frac{\mathsf{libraries}}{subsystem}}\times
7\mathsf{\frac{\mathsf{modules}}{library}}\times
7\mathsf{\frac{\mathsf{packages}}{module}}\times\ldots\times
7\mathsf{\frac{LOC}{function}}=$
$=7^{7}\mathsf{LOC}=823543\mathsf{LOC}\approx 824\mathsf{kLOC}.$
The limit of reasonableness can again be estimated using the “magic number” as
$7$ levels, adding a last level of “subprojects”:
$7\mathsf{subsystems}\times
7\mathsf{\frac{\mathsf{subproject}}{subsystem}}\times
7\mathsf{\frac{\mathsf{libraries}}{subproject}}\times
7\mathsf{\frac{\mathsf{modules}}{library}}\times\ldots\times
7\mathsf{\frac{LOC}{function}}=$
$=7^{8}\mathsf{LOC}=5764801\mathsf{LOC}\approx 5765\mathsf{kLOC}\approx
5.8\mathsf{MLOC}.$
There are of course systems bigger than 6 million lines of code. A few ways to
extend $\mathsf{LOC}$ count can be noted:
* •
Some particular level might contain quite more than $7$ subcomponents. For
example, when classes are counted some of them (those in auxiliary or
“scaffolding” code) can be left out, since they do not belong to the core of
the model and do not necessarily add complexity.
* •
We have seen that $\mathsf{LOC}$s are not components, so the $7\pm 2$ rule of
thumb would not apply at the lowest level. Having e.g. 21
$\mathsf{\frac{LOC}{function}}$ would yield a total three times higher for
every depth.
* •
Using the high value in the “magic range” $7\pm 2$, $9$, total size goes up
considerably. At the highest depth considered above, with $7$ levels, size
would be $9^{8}=43046721$, or about $43$ million $\mathsf{LOC}$.
* •
The “magic range” can also apply to the number of levels, yielding a total of
$9$ levels of components. In this case, $7^{10}=282475249$, yielding about
$282$ million $\mathsf{LOC}$. In the most extreme high range we would have
$9^{10}\approx 3.5\cdot 10^{9}$, yielding about $3486\,\mathsf{MLOC}$, as the most complex
system that could be built and maintained.
* •
Lastly, it is quite likely that systems bigger than five million
$\mathsf{LOC}$ are indeed too complex to grasp by any single person (or even a
team). A system with five million $\mathsf{LOC}$ is already a challenge for
anyone.
It must be noted that the “magic range” does not apply to depth as such for
two different reasons. First, a limit should be sought rather than a range.
Small systems cannot be expected to fall within the range if they don’t have
enough components to warrant a minimum of 5 levels. Therefore values below
the range are acceptable too; it is the upper end of the range where it
becomes relevant.
A more important objection is that the “magic range” was introduced above as a
cognitive limit for things which have to be considered at the same time. But
component levels are introduced precisely to avoid considering too many
components at the same time, let alone levels of components. There is no real
reasoning where all levels of components have to be considered at the same
time, other than to catalog them; therefore, systems with more than $7\pm 2$
levels of components do not have to be intrinsically harder to manage than
systems with fewer, other than that they become really big at that depth.
### 2.5 Generalization
Let $d$ be the depth of the component hierarchy (i.e. the number of levels of
components), and $c(i)$ the number of elements at level $i$; for $i>0$, $c(i)$
is also the number of subcomponents. Then the total size $S$ (in
$\mathsf{LOC}$) will be
$S=\prod_{i=0}^{d}c(i),$ (1)
where $c(0)$ is the number of $\mathsf{LOC}$ per function and $c(1)$ the
number of functions per file. Assuming that $c(i)$ is always in the magic
range:
$c(i)\approx 7\pm 2,$
from where
$S\approx(7\pm 2)^{d+1}.$
Table 1 summarizes the estimated number of $\mathsf{LOC}$ that result from
three values in the range $7\pm 2$: both endpoints and the mean value. They
can be considered to be representative of trivial ($5$), nominal ($7$), and
complex systems ($9$). Several depths of levels are shown.
depth | trivial ($5$) | nominal ($7$) | complex ($9$)
---|---|---|---
$3$ | $0.6\mathsf{kLOC}$ | $2.4\mathsf{kLOC}$ | $6.6\mathsf{kLOC}$
$4$ | $3.1\mathsf{kLOC}$ | $16.8\mathsf{kLOC}$ | $59\mathsf{kLOC}$
$5$ | $15.6\mathsf{kLOC}$ | $118\mathsf{kLOC}$ | $531\mathsf{kLOC}$
$6$ | $78\mathsf{kLOC}$ | $824\mathsf{kLOC}$ | $4783\mathsf{kLOC}$
$7$ | $391\mathsf{kLOC}$ | $5765\mathsf{kLOC}$ | $43\mathsf{MLOC}$
$8$ | $1953\mathsf{kLOC}$ | $40\mathsf{MLOC}$ | $387\mathsf{MLOC}$
$9$ | $9766\mathsf{kLOC}$ | $282\mathsf{MLOC}$ | $3486\mathsf{MLOC}$
Table 1: Number of LOC that corresponds to several depths of levels. For each
depth the low, mean and high values of the “magic range” $7\pm 2$ are shown;
representative of trivial, nominal or complex systems.
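The values in Table 1 are straightforward to recompute; the short Python snippet below (added here as a convenience, not part of the original study) simply evaluates $c^{d+1}$ for the three representative values of $c$:

    def size_loc(c, depth):
        """Estimated size in LOC when each of `depth` component levels has c
        subcomponents and functions have c LOC, i.e. S = c ** (depth + 1)."""
        return c ** (depth + 1)

    for depth in range(3, 10):
        row = [size_loc(c, depth) for c in (5, 7, 9)]    # trivial, nominal, complex
        print(depth, ["%.1f kLOC" % (n / 1000.0) for n in row])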
A system whose code is nearer the trivial end will be easy to understand and
master; not only at the lowest code level, but its internal organization. On
the other hand, a complex system will take more effort to understand, and
consequently to maintain and extend. This effort is not only metaphorical: it
translates directly into maintenance costs. An effort to first simplify such a
system before attempting to extend it might be worth it from an economic point
of view.
### 2.6 Measurement
Lines of source code (however imperfect they may be) can be counted; function
definitions can be located using regular expressions. But it is not easy to
get an idea of the number of components in a system. The basic assumption for
the whole study is that source code will be structured on disk according to
its logical organization; _each directory will represent a logical component_.
An exact measure of modularity would require a thorough study of the design
documents for the software package: first finding out the number of component
levels, then gathering the names of components at each level and finally
counting the number of components. Unfortunately, such a study cannot be
performed automatically; it would only be feasible on small packages or after
a staff-intensive procedure.
A metric based on directories has the big advantage of being easy to compute
and manipulate. It has the disadvantage of imprecision: to some degree
developers may want to arrange files in a manageable fashion without regard
for the logical structure. Developers may also have an on-disk organization
that does not encapsulate internal behavior and just keeps files well sorted
according to some other formal criteria. Alternatively a software package
might be organized logically into a hierarchy of components, but lack a
similar on-disk organization.
Finally, it is of course perfectly possible to rearrange the source files in
new directories to make the code look modular, without actually changing its
internal structure; organization in directories would allow trivial cheating
if used e.g. as an acceptance validation. Since this measure is not in use
today this last concern does not apply, but it should be taken into account
for real-life uses. In software developed commercially component hierarchies
should always be documented, and automatic checking should only be used as an
aid to visual inspections.
## 3 Methodology
The study is performed on several free software packages. Ubuntu Linux is used
for the analysis, but almost any modern Unix (complemented with common GNU
packages) should suffice. For each step, the Bash shell commands are shown.
### 3.1 Package Selection and Downloading
Several free software packages are selected for study. The requirement to
choose only free software is intended to make this study replicable by any
interested party; source code can be made freely available. The selection of
individual packages was made according to the following criteria:
* •
Every package should be relevant: well known in a wide circle of developers
and users.
* •
Every package should be successful within its niche; this ensures that only
code which is more or less in a good shape is chosen.
* •
Every package should be big enough for a study on complexity; only software
packages bigger than half a million lines of source code were selected. On the
other hand measures should cover about an order of magnitude. Therefore the
selected packages will be in the range $500\mathsf{kLOC}$ to
$5000\mathsf{kLOC}$.
* •
At least two packages for each language should be selected.
* •
Only packages with several authors should be chosen, so that individual coding
styles are not prevalent.
Overall, packages should be written in different languages, so as to have a
varied sample of programming environments.
Once selected, a fresh copy of each package was downloaded between October 2006
and February 2007 from the public distribution. Those packages that are offered as
a single compressed file are downloaded and uncompressed in their own
directory. Some packages have their own download mechanisms. For the GNOME
desktop, GARNOME was used [13]: it downloads the required tools and builds the
desktop.
### 3.2 Basic Metrics
The first step is identifying what the target language is, and keeping just
files in that language. It is difficult to find a software project this size
written in only one language; for this study the main language of each package
(in terms of size) was selected, the rest were discarded.
The file name is used to identify source code files. For C we can choose to
keep just those files ending in _.c_, which is usually called the “extension”.
C header files (with the extension _.h_) are ignored, since they are usually
stored along with the equivalent C source code files. Table 2 shows the
extensions for all languages used in the present study. Then the rest of the
files can be deleted:
> find . ! -name "*.c" -delete
This command removes all files not in the target language; it also removes all
directories that do not contain any target language files. This step is
repeated as needed in case deletion was not strictly sequential (i.e. a
directory was processed before the items it contains), until no more
directories are removed. Then directories are counted:
> find . -type d | wc -l
All files in the target language are counted too:
> find . -name "*.c" | wc -l
Lines of code are counted removing blanks and comments. C++ and Java are
derivatives of C, and therefore share a common structure for comments. Note
that lines consisting of opening and closing braces are excluded too, in order
to remove a certain degree of “programming style” from them.
> find . -name "*.c" -print0 | xargs -0 cat
>
> | grep -v "^[[:space:]]*$" | grep -v "^[[:space:]]*\\*"
>
> | grep -v "^[[:space:]]*{[[:space:]]*$"
>
> | grep -v "^[[:space:]]*}[[:space:]]*$"
>
> | grep -v "^[[:space:]]*//" | wc -l
The results obtained with this method are not perfect, but they have been
found to be a reasonable fit. Similar expressions are used for Lisp and Perl
taking into account their respective syntax for comments. For the actual
results presented in this study several equivalent regular expressions are
used to remove comments, with slightly more accurate results.
The number of functions is also counted using full-blown regular expressions.
Table 2 shows the regular expressions used for each language to locate
function definitions in the code.
language | extension | function definition
---|---|---
C | “.c” | (\w+)\s*\\([\w\s\,\\[\\]\&\\*]*\\)\s*\;
C++ | “.cpp”,“.cxx”,“.cc” | (\w+)\s*\\([\w\s\,\\[\\]\&\\*]*\\)\s*\\{
Java | “.java” | (\w+)\s*\\([\w\\.\s\,\\[\\]]*\\)\s*…
| | (?:throws\s*\w+(?:\s*\,\s*\w+)*)\s*\\{
Lisp | “.el” | \,\s*\w+)*)*\s*\\{
Perl | “.pl”,“.plx”,“.pm” | (\w+)\s*\\(; (?<=sub\s)\s*(\w+)\s*\\{
Table 2: File extensions for different languages. The right-most column shows
the regular expression used to find function definitions for each language.
A Python script iterates over every directory and file with the correct
extension, and counts the number of matches for the appropriate regular
expression. Again, the results have been inspected for correctness. They are
not perfect; for example, some C++ function definitions may call parent
functions, and when they contain typecasts they will not be registered under
this definition. C++ template code is not considered either. They are however
a good enough compromise between speed and precision.
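A simplified version of such a script might look as follows; the extension and the regular expression below only approximate the spirit of Table 2 (the expressions actually used in the study are more elaborate):

    import os, re

    # Hypothetical, simplified pattern for C function definitions; the study's
    # actual expressions (Table 2) are more sophisticated.
    C_FUNCTION = re.compile(r"^\w[\w\s\*]*?\b\w+\s*\([^;{)]*\)\s*\{", re.MULTILINE)

    def count_functions(root, extension=".c", pattern=C_FUNCTION):
        """Walk `root` and count regex matches in every file with `extension`."""
        total = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.endswith(extension):
                    path = os.path.join(dirpath, name)
                    with open(path, errors="ignore") as f:
                        total += len(pattern.findall(f.read()))
        return total

    print(count_functions("."))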
Note that C and C++ headers are not used to find function definitions. There
are several reasons for this decision. First, header files usually contain
only function declarations, while function definitions are in source files
(except for inlined functions). And second and most important, it is hard to
distinguish between C and C++ headers since they follow the same pattern (both
share the extension “.h”), so they would get mixed in the process for projects
that contain code in both languages. In field experiments the results are not
affected by this omission.
### 3.3 Average Directory Depth
A first approximation to complexity is to find out the depth of each source
code file, and compute an average. This operation can be performed using a
sequence of commands like this (for C code):
> tree | grep "\\.c" | grep ".--" | wc -l
>
> tree | grep "\\.c" | grep ". .--" | wc -l
>
> tree | grep "\\.c" | grep ". . .--" | wc -l
>
> …
thus getting the source file counts for level 0, 1, 2… and computing the
differences to obtain the number of files at each level of directories. An
average depth can then be easily computed, where the depth for each file is
the number of directories that contain it. Table 3 shows the number of source
files at each level for a few packages.
package | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---
Linux | 297 | 2474 | 4451 | 1050 | 142 | 0 | 0 | 0 | 0 | 0
OpenSolaris | 0 | 2 | 1529 | 3686 | 5082 | 1850 | 340 | 104 | 10 | 3
Eclipse | 2 | 0 | 89 | 228 | 2045 | 4136 | 4674 | 2973 | 407 | 0
Table 3: Depth of source code files for selected software packages.
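As a small check (added here for illustration), the average directory depth of Linux can be recomputed directly from its row in Table 3:

    # File counts per directory depth for Linux 2.6.18, taken from Table 3.
    counts = {1: 297, 2: 2474, 3: 4451, 4: 1050, 5: 142}
    files = sum(counts.values())                          # 8414 source files
    avg_depth = sum(d * n for d, n in counts.items()) / files
    print(files, round(avg_depth, 1))                     # 8414, ~2.8 (cf. Table 6)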
Figure 1 shows the previous distribution graphically. Each set of depths can
be approximated by a normal distribution.
Figure 1: Depth of source code files organized by levels. A normal
approximation is shown for each package.
This approach has several problems. Many packages contain directories with
only one subdirectory, e.g. a directory called _source_ or _src_ which
contains all source code; this is an artifact of the source distribution and
should not be taken into account. In other occasions folders are repeated
across the board, as in Java packages: the convention here is to start all
packages with a prefix that depends on the organization that develops the
code, and continue with the name of the organization and possibly some other
fixed directories [14]. E.g. for Eclipse all packages should start with
org.eclipse, which results in folders _org/eclipse_ ; this structure is
repeated inside each individual project within the Eclipse source code. Visual
inspection of the result is therefore necessary to disregard those meaningless
directories.
The results are less than satisfactory. In what follows an alternative
approach will be explored.
### 3.4 Average Number of Items per Directory
The target of complexity measurements is to find the number of subcomponents
per component. From the number of archives and the number of directories a
rough approximation to modularity can be computed, as the ratio between files
and directories. But directories can also be contained in other directories,
counting as subcomponents.
Let $T$ be the total number of source files, and $D$ be the number of
directories. Then the average number of items per directory $a$ can be
computed as
$a=\frac{T+D}{D+1},$ (2)
counting one more directory in the denominator for the root directory. It is a
measure of the average number of subcomponents per component.
### 3.5 Average Exponential Depth
Once the average number of items per directory is known, another approach to
compute the average depth can be used.
An ‘exponential tree’ is a tree of directories that has the same number of
items at every level. The last level contains only files; all the rest contain
only directories. Figure 2 shows two examples of exponential trees: one with 3
items per directory and 2 levels, for a total of $3^{2}=9$ source code files;
and one with 2 items per directory and 3 levels, yielding $2^{3}=8$ source
files. The root directory is not counted.
Figure 2: Two examples of exponential trees. The number of items (directories
and files) per directory at each level is constant throughout the tree.
Symbolically:
$T=a^{l},$ (3)
where $T$ is the total number of files, and $a$ the number of items per
directory. The number of files in the tree grows exponentially with its depth
$l$.
If the tree of source code files and directories is approximated by an
exponential tree, then at each level $i$ the complexity would be constant:
$c(i)=a$. If such a tree has $l$ levels, then equation 3 can be written as:
$T=\prod_{i=2}^{l+1}c(i)=a^{l},$ (4)
where levels are counted from $2$ to $l+1$ because level $1$ is for functions
inside files. The component hierarchy will actually have $l+1$ levels. With
some arithmetic we get:
$l=\frac{ln(T)}{ln(a)},$
which considering equation 2 can be restated as
$l=\frac{ln(T)}{ln(T+D)-ln(D+1)}$ (5)
and which can be computed just knowing the number of source files and the
number of directories.
This value represents the depth that an exponential tree would need to have in
order to produce the observed number of source code files; the approximation
is evidenced by the presence of fractional depths. If the overall number of
levels follows the $7\pm 2$ rule, the value for exponential depth should
cluster around $6$, since it does not take into consideration the lowest
function level. Symbolically:
$l=d-1\approx 6\pm 2.$
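Both quantities are easy to compute once $T$ and $D$ are known; the sketch below (a convenience added here, not part of the original scripts) uses the Linux counts from Table 4 as an example:

    import math

    def items_per_directory(T, D):
        """Equation (2): average number of items (files plus directories)."""
        return (T + D) / (D + 1.0)

    def exponential_depth(T, D):
        """Equation (5): depth of an exponential tree with T files, D directories."""
        return math.log(T) / (math.log(T + D) - math.log(D + 1.0))

    # Linux 2.6.18 (Table 4): 8414 source files and 902 directories.
    T, D = 8414, 902
    print(round(items_per_directory(T, D), 1))   # ~10.3 items/directory (Table 5: 10.33)
    print(round(exponential_depth(T, D), 1))     # ~3.9 (Table 6, exponential depth)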
### 3.6 Pruning
Consolidation can play an important role in some source code layouts. Each
subsystem may have its own hierarchy of directories like _src_ ; they add to
the count of directories but do not really add either complexity or
modularity. This effect can be exacerbated in some Java projects (particularly
those organized around Eclipse projects), where the common hierarchy of
packages (like _org.eclipse…_) is repeated for every subproject, inside the
_src_ folder. Directories should be consolidated removing repeated occurrences
prior to measuring, but this operation requires intimate knowledge of the
structure of the software and will not be done here.
A more practical approximation to consolidation can be done by pruning the
source code tree. A certain amount of pruning has already been done in
subsection 3.2: non-source files and empty directories have already been
pruned, but a more aggressive approach is required. The number of reported
directories is in many instances not consistent with the logical component structure.
‘Trivial’ directories are those containing just one meaningful item: either
one code file or one meaningful directory. (A ‘meaningful’ directory is one
which contains source code at any level.) These trivial directories cannot add
to the component hierarchy, since a component with one subcomponent is
meaningless; therefore they are not counted as directories. Figure 3 shows an
example of pruning a source code tree.
Figure 3: Pruning of the source code tree. Non-source files, empty directories
and directories containing only one meaningful item are pruned.
In practice, a small Python script (available upon request from the author) is
used to count source files and directories, discount trivial directories,
recalculate the values for $T$ and $D$, and recompute the average number of
items per directory.
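A sketch of such a script is shown below; it is only illustrative, and the author's actual script may differ in details such as the treatment of the root directory or of symbolic links:

    import os

    def pruned_counts(root, extension=".c"):
        """Count source files and non-trivial directories under `root`.
        A directory is 'trivial' if it holds at most one meaningful item:
        one source file or one subdirectory that contains source code."""
        files_total = 0
        nontrivial_dirs = 0

        def walk(path):
            nonlocal files_total, nontrivial_dirs
            meaningful = 0
            for entry in sorted(os.listdir(path)):
                full = os.path.join(path, entry)
                if os.path.isdir(full):
                    if walk(full):               # subdirectory contains source code
                        meaningful += 1
                elif entry.endswith(extension):
                    files_total += 1
                    meaningful += 1
            if meaningful > 1:
                nontrivial_dirs += 1
            return meaningful > 0

        walk(root)
        return files_total, nontrivial_dirs

    T, D = pruned_counts(".")
    print(T, D)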
Again, the results are not perfect since pruning does not yield the same
results as consolidation: some internal structure can be lost in the process,
so this time it tends to underestimate the number of directories, albeit in a
smaller percentage. Figure 4 shows an example of pruning compared to
consolidation.
Figure 4: Example of the results of pruning compared to consolidation. The
original structure shows that directories $a1-a3$ had an internal linear
structure with repeated directories, yielding 10 directories. Consolidation
leads to removing the leading directory and sharing the common directory $b1$
for $a1$ and $a2$, for a total of 6 directories. Pruning removes every trivial
directory, leaving only 4 directories.
Nevertheless, pruned values seem to follow code organization more closely than
in the original model, so they will be used in the final results.
## 4 Results
The following tables summarize the results for a selected range of software
projects.
### 4.1 Basic Metrics
Table 4 shows the results of computing some basic metrics against the selected
software packages.
package | language | $\mathsf{kLOC}$ | functions | files | directories
---|---|---|---|---|---
Linux 2.6.18 | C | 3388 | 134682 | 8414 | 902
OpenSolaris 20061009 | C | 4299 | 120925 | 12606 | 2525
GNOME 2.16.1 | C | 3955 | 163975 | 10025 | 1965
KDE 20061023 | C++ | 2233 | 172761 | 11016 | 1602
KOffice 20061023 | C++ | 511 | 31880 | 2204 | 375
OpenOffice.org C680_m7 | C++ | 3446 | 114250 | 11021 | 1513
SeaMonkey 1.0.6 | C++ | 1180 | 69049 | 4556 | 884
Eclipse 3.2.1 | Java | 1560 | 163838 | 14645 | 2334
NetBeans 5.5 | Java | 1615 | 187212 | 16563 | 6605
JBoss 5.0.0.Beta1 | Java | 471 | 54645 | 9504 | 2478
Emacs 21.4 | Lisp | 473 | 18931 | 759 | 18
XEmacs-Packages 21.4.19 | Lisp | 926 | 37415 | 2133 | 285
Table 4: Lines of code, function definitions, source files and directories for
selected free software packages.
It must be noted how non-modular the code of Emacs really is, evident even in
these direct measurements.
### 4.2 Average Items per Directory
These basic metrics allow us to compute some basic complexity metrics. Table 5
shows $\mathsf{LOC}$ per function (an indication of the lowest level of
articulation), functions per file (the first level of components) and items
per directory (all the remaining levels of components).
package | $\mathsf{LOC}$/function | functions/file | items/directory
---|---|---|---
Linux 2.6.18 | $25.2$ | $16.0$ | $10.33$
OpenSolaris 20061009 | $35.6$ | $9.6$ | $5.99$
GNOME 2.16.1 | $24.1$ | $16.4$ | $6.10$
KDE 20061023 | $12.9$ | $15.7$ | $7.88$
KOffice 20061023 | $16.0$ | $14.5$ | $6.88$
OpenOffice.org C680_m7 | $30.2$ | $10.4$ | $8.28$
SeaMonkey 1.0.6 | $17.1$ | $15.2$ | $6.15$
Eclipse 3.2.1 | $9.5$ | $11.2$ | $7.27$
NetBeans 5.5 | $8.6$ | $11.3$ | $3.51$
JBoss 5.0.0.Beta1 | $8.6$ | $5.7$ | $4.84$
Emacs 21.4 | $24.6$ | $25.3$ | $43.17$
XEmacs-Packages 21.4.19 | $24.7$ | $17.6$ | $8.46$
Table 5: Basic complexity metrics.
Both $\mathsf{LOC}$ per function and functions per file are shown separately
from the remaining levels, items per directory. And indeed we find a bigger
disparity in $\mathsf{LOC}$ per function than in the remaining metrics; from
the $8.6$ found in JBoss or NetBeans to the $35.6$ in OpenSolaris. In
functions per file both procedural and functional languages (C and Lisp)
appear to be similar to C++; only Java shows lower values.
In the remaining metric, items per directory, only two packages are above the
“magic range” $7\pm 2$: Emacs and Linux. Emacs stands out as by far the least
modular of all packages, which is consistent with its long heritage:
it is an ancient package which has seen little in the way of modern practices,
and which is not really modularized. The modernized version, XEmacs, shows a
profile more coherent with modern practices, although only the separate
packages are measured. Linux, on the other hand, is a monolithic kernel design
with a very practical approach. This seems to result in source code which is
not always as well modularized as it might be. JBoss is also out of range by a
very small amount, but this time it is the low end of the range ($4.84$ items
per directory). Its code is very modular, so much so that with only half a
million $\mathsf{LOC}$ it has the second highest directory count. The first
position goes to NetBeans, with $3.51$ items per directory; it has by far the
highest directory count.
These first impressions are refined and expanded in subsection 4.6.
### 4.3 Average depth
Average depth is computed using two algorithms: average directory depth (see
subsection 3.3) and average exponential depth (see equation 5 in subsection
3.5). Table 6 shows the average depth for the usual set of packages,
calculated using both methods.
package | directory | exponential
---|---|---
Linux 2.6.18 | $2.8$ | $3.9$
OpenSolaris 20061009 | $4.7$ | $5.3$
GNOME 2.16.1 | $3.8$ | $5.1$
KDE 20061023 | $3.2$ | $4.5$
KOffice 20061023 | $2.5$ | $3.1$
OpenOffice.org C680_m7 | $3.7$ | $4.4$
SeaMonkey 1.0.6 | $4.8$ | $4.6$
Eclipse 3.2.1 | $4.6$ | $4.8$
NetBeans 5.5 | $7.5$ | $7.7$
JBoss 5.0.0.Beta1 | $4.9$ | $5.8$
Emacs 21.4 | $1.7$ | $1.8$
XEmacs-Packages 21.4.19 | $3.0$ | $3.6$
Table 6: Average depth computed using directory depth and exponential depth.
The biggest difference between both methods happens in KDE ($3.2$ vs $4.5$),
GNOME ($3.8$ vs $5.1$) and Linux ($2.8$ vs $3.9$). The values of directory
depth for the first two packages involve heavy corrections due to directories
with just one subdirectory in the tree; for Linux however there are no such
corrections, but there is a big disparity between some directories which can
contain up to $116$ items, like “/fs”, and others with just one item. The rest
of the packages are within a level in both measures. Exponential depth does
not involve error-prone corrections and will therefore be used in what
follows.
This average depth of components does not take into account the lowest level
of components that was considered in subsection 2.3, that of functions. To
find out the real depth of the component hierarchy a level must therefore be
added to the exponential depth. In fact, for C++ packages an additional level
for classes might also be added, since they are allowed a further level of
modularity inside source code files. This possibility will be discussed in its
own subsection, 4.7.
### 4.4 Pruning
After pruning the source code tree, disregarding all trivial directories
(those containing only one file or directory), the results are those in table
7.
package | directories | items/directory | depth
---|---|---|---
Linux 2.6.18 | 774 | $11.9$ | $3.65$
OpenSolaris 20061009 | 1276 | $10.9$ | $3.96$
GNOME 2.16.1 | 1075 | $10.3$ | $3.95$
KDE 20061023 | 1444 | $10.2$ | $4.09$
KOffice 20061023 | 286 | $8.7$ | $3.56$
OpenOffice.org C680_m7 | 1134 | $10.7$ | $3.92$
SeaMonkey 1.0.6 | 532 | $9.6$ | $3.73$
Eclipse 3.2.1 | 1116 | $14.1$ | $3.62$
NetBeans 5.5 | 2492 | $7.6$ | $4.78$
JBoss 5.0.0.Beta1 | 1471 | $7.5$ | $4.56$
Emacs 21.4 | 16 | $48.4$ | $1.71$
XEmacs-Packages 21.4.19 | 120 | $18.8$ | $2.61$
Table 7: After pruning trivial directories: pruned directory count, average
items per directory and exponential depth.
Comparing these results with those in table 5 it can be seen that items per
directory have increased (while depth has correspondingly decreased), as could
be expected by the smaller number of directories considered. Values have
mostly gone out of the “magic range”, to the point that only three software
packages remain inside: two in Java, NetBeans and JBoss, and one in C++:
KOffice.
### 4.5 Complexity per Level
Table 8 shows the depth once the correction for functions (adding one level to
every depth) is taken into account. It also summarizes the number of elements
per component at levels 0 ($\mathsf{LOC}$ per function), 1 (functions per
file) and higher than 1 (items per directory); according to the assumptions in
subsection 2.5, this number would represent complexity at each level.
package | level 0 | level 1 | level > 1 | depth
---|---|---|---|---
Linux 2.6.18 | $25.2$ | $16.0$ | $11.9$ | $4.65$
OpenSolaris 20061009 | $35.6$ | $9.6$ | $10.9$ | $4.96$
GNOME 2.16.1 | $24.1$ | $16.4$ | $10.3$ | $4.95$
KDE 20061023 | $12.9$ | $15.7$ | $10.2$ | $5.09$
KOffice 20061023 | $16.0$ | $14.5$ | $8.7$ | $3.56$
OpenOffice.org C680_m7 | $30.2$ | $10.4$ | $10.7$ | $4.92$
SeaMonkey 1.0.6 | $17.1$ | $15.2$ | $9.6$ | $4.73$
Eclipse 3.2.1 | $9.5$ | $11.2$ | $14.1$ | $4.62$
NetBeans 5.5 | $8.6$ | $11.3$ | $7.6$ | $5.78$
JBoss 5.0.0.Beta1 | $8.6$ | $5.7$ | $7.5$ | $5.56$
Emacs 21.4 | $24.6$ | $25.3$ | $48.4$ | $1.71$
XEmacs-Packages 21.4.19 | $24.7$ | $17.6$ | $18.8$ | $3.61$
Table 8: Complexity at various levels and depth (number of levels) of the
component hierarchy. Complexity is computed as number of subcomponents per
component; corrected depth is found adding one to exponential depth. Pruning
is used.
$\mathsf{LOC}$ count can be approximately recovered based on these measures.
To recapitulate: $S$ is the size in $\mathsf{LOC}$, and $c(i)$ is the number
of elements at level $i$; $d$ is the total hierarchy depth (number of levels),
$a$ the average number of items per directory, and $T$ is the total number of
source files. Now $l$ is the exponential depth, so that $d=l+1$. (Table 8
shows, from left to right: $c(0)$, $c(1)$, $a$ and $d$.) Then starting from
equation 1:
$S=\prod_{i=0}^{d}c(i)=c(0)\times c(1)\times T,$
and recalling equation 4:
$S=c(0)\times c(1)\times a^{d-1}.$
The count is not exact due to rounding.
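As an illustration (added here as a sanity check), plugging the Linux values of Table 8 into this formula approximately recovers the size reported in Table 4:

    # Linux 2.6.18, Table 8: c(0) = 25.2 LOC/function, c(1) = 16.0 functions/file,
    # a = 11.9 items/directory, corrected depth d = 4.65.
    c0, c1, a, d = 25.2, 16.0, 11.9, 4.65
    S = c0 * c1 * a ** (d - 1)
    print(round(S / 1000))        # ~3400 kLOC; Table 4 reports 3388 kLOC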
### 4.6 Discussion
In this subsection we will discuss the experimental results, referring to
table 8 unless explicitly stated.
Looking just at corrected depth, almost all measured values are well below the
“magic range” $7\pm 2$, as corresponds their medium size. In fact there are
only three values in the range: NetBeans at $5.78$, JBoss at $5.56$ and KDE
with $5.09$, with several others grazing the $5$. Interestingly, the second
deepest hierarchy is found in JBoss, even though it is the smallest package
considered.
Taking now a closer look at complexity for level > 1 (i.e. average items per
directory): the first thing to note is that before pruning (table 5) most
values were inside the “magic range”. After pruning there are only three
values inside the range; this time all the rest are above. JBoss holds now the
lowest value with $7.5$, very close to NetBeans with $7.6$; both are near the
“nominal” value of the range in table 1, while the rest would appear at or
above the “complex” end. The remaining Java project, Eclipse, has the highest
measure of all but both versions of Emacs, with $14.1$. This value might be an
artifact of pruning (as explained in subsection 3.6); however, all attempts at
consolidation yielded the same results, or worse. It appears that Eclipse is
not as modular in high levels of componentization as other packages; or that
its logical structure is not reflected in its on-disk layout.
Emacs and XEmacs-Packages have far higher measures of complexity for level > 1
than the rest. Even the more modern refactorization has $18.8$, while the
venerable branch has a staggering value of $48.4$. Even if at low levels they
behave much better, it appears that they just are not modularized in a modern
sense, i.e. in a hierarchical fashion. As to the rest, they are just above the
“magic range”, from Linux with $11.9$ to KOffice with $8.7$. At least for the
C/C++ packages there appears to be some correlation between application area
and complexity: systems software packages like Linux and OpenSolaris (with
$11.9$ and $10.9$ respectively) rank higher than the desktop environments
GNOME and KDE ($10.3$ and $10.2$), in turn higher than the graphical
applications SeaMonkey and KOffice ($9.6$ and $8.7$). OpenOffice provides the
exception for a graphical application with $10.7$, but it is also the only
package in the set with roots in the 1980s [15]. Figure 5 shows a
summary of the effect. Incidentally, this correlation has only been made
apparent after pruning.
Figure 5: Complexity per application area. As should be expected, systems
software is more complex than graphical environments, which in turn are more
complex than graphical applications. The exception is OpenOffice, a software
package with roots in the eighties.
On the other end, level 0 complexity, or $\mathsf{LOC}$ per function, obviously
does not behave like the number of subcomponents in a component: the value is out of the “magic
range” for almost all software packages. Java packages show the lowest count
for $\mathsf{LOC}$ per function with $8.6$, $8.6$ and $9.5$; but the highest
count is for OpenSolaris with $35.6$, which otherwise shows good modularity.
It appears that having far more than $7\pm 2\mathsf{LOC}$ per function is not
a problem for package modularity.
More surprising is to find that level 1 complexity (functions per file) is out
of range too in almost all instances, except for JBoss. For C software results
are $9.6$, $16.0$ and $16.4$, very similar to C++ where we find $10.4$,
$14.5$, $15.2$ and $15.7$. These results are typical of the values that have
been observed in other software packages: a range of $[2,20]$ would capture
most C and C++ software. Sometimes an effort is done to modularize code at
this level, as can be seen in OpenSolaris and OpenOffice. Java software seems
to be closer to this goal: results for methods per file are now $5.7$, $11.2$
and $11.3$, mostly still out of range but close enough. In wider field tests a
range of $[2,12]$ would capture most Java software. The remaining software in
Lisp meanwhile does a little worse with $17.6$ and $25.3$ functions per file;
some effort seems to have been made on this front, but the reduction from
Emacs to XEmacs-Packages is not very significant (and is even less evident in
XEmacs proper, not included in this study).
A possible explanation for this discrepancy at level 1 is that only exported
functionality should be counted; when considering a component from the outside
a developer only needs to be concerned with visible functions (i.e. public
functions in C++, or public methods in Java). The problem remains for internal
consideration of components, which is precisely where the “magic range” can
have the biggest effect. Another explanation is that only in object-oriented
languages is there a sensitivity to modularity within source files, and this
appears to be the case at least for Java. In the next subsection some
explanations are explored for C++.
It appears that many popular free software packages are organized as one would
expect on purely theoretical grounds: after measuring more than 25 million
lines of code, several of them have been found to comply with the theoretical
framework of a hierarchy of components, at least as far as file layout is
concerned. Other software packages that were conceived using less modern
practices, like Emacs, do not follow this organization. Whether this structure
indeed corresponds to modularization is a different question, difficult to
answer without intimate knowledge of their internal structure; more research is
required to give a definitive answer.
### 4.7 Classes and Files in C++
C++ offers the possibility to arrange several classes in a file; according to
the theoretical study presented in subsection 2.3, this organization should
result in another level of articulation within files. It is however good
coding advice to put only one module unit per file [16]; in C++ this
translates into placing each class in a separate file. If this convention is
followed, the number of functions per file might be expected to follow the
magic range. In Java, where the limitation of declaring one public class per
file is enforced by the language, the results are indeed near the expectation with
$5.7$, $11.2$ and $11.3$. However, C++ results ($10.4$, $14.5$, $15.2$ and
$15.7$) are similar to those for C; again, a typical range seems to be
$[7,20]$. On the other hand, if each file contained two levels of articulation
(functions and classes) the number of functions per file that could be
expected would rather be $(7\pm 2)^{2}=[25,81]$. It is clear that in C++ there
is nothing near this interval either. This deviation deserves a deeper
exploration. In this subsection four possible explanations will be advanced.
The C++ convention is to have one _public_ class per file; there can be some
private classes within the same source file. One private class per public
class (or two classes per file) would yield an interval twice the “magic
range”, or $[10,18]$, consistent with observations. Alternatively, the range
might apply only to public functions and not to those private to the class,
but this explanation was discarded in the previous subsection since external
consideration is not where the number of elements would have the most effect.
Some software packages may disregard the “one public class per file”
convention altogether or in part; sometimes it is possible to find multi-
class files in otherwise compliant software packages, yielding more than one
level of articulation between files and functions. For example, an average of
$1.3$ levels would yield an interval of $(7\pm 2)^{1.3}\approx[8,17]$, also
congruent with observations. It can be noted that mean depth for C packages is
$4.85$, and $5.32$ for Java; but for C++ it is lower with an average of only
$4.57$. The difference may suggest that C++ has a partial “hidden articulation
level” in classes per file, and it even roughly coincides with the conjectured
$1.3$ levels, but it is not statistically significant. Note also that KOffice
is considerably smaller than most packages.
C++ also tends to require a large amount of supporting code for each class
implemented. It is not uncommon to find an explicit constructor, a destructor,
an assignment operator and a copy constructor declared for each class [17].
With four additional functions per file, the interval would become $[9,13]$;
if half the functions were supporting code, we would again get a range of
$[10,18]$.
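For reference, the intervals quoted in these three hypotheses follow from elementary arithmetic on the “magic range”; a purely illustrative sketch (the hypothesis parameters are the ones assumed above, not measured values):

```python
# Candidate intervals for functions per file under the three hypotheses above.
low, high = 7 - 2, 7 + 2                      # Miller's "magic range" [5, 9]

two_classes = (2 * low, 2 * high)             # one private class per public class -> [10, 18]
partial_level = (low ** 1.3, high ** 1.3)     # ~1.3 articulation levels -> ~[8, 17]
supporting = (low + 4, high + 4)              # four supporting members per class -> [9, 13]

print(two_classes)                                   # (10, 18)
print(tuple(round(x, 1) for x in partial_level))     # (8.1, 17.4)
print(supporting)                                    # (9, 13)
```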
The last explanation is psychological, and may be the underlying cause of all
the previous hypotheses. It seems that C++ developers don’t see a problem with
having as many as 20 functions per file; in fact going below 10 may look very
difficult when writing C++ code. For example, Kalev writes [18]:
> As a rule, a class that exceeds a 20-30 member function count is suspicious.
Such a number would make the hair on the back of an experienced Java
developer's neck stand up. Whether this concern is valid or just reflects an
idiomatic difference shall remain an open question until further research can
validate the alternatives.
## 5 Conclusion
The theoretical framework presented in this study should allow developers to
view really large systems in a new light; the experimental results show that
complexity in a hierarchy of components can be managed and measured. Until now
this has mostly been done intuitively; hopefully the metrics developed in this
paper can help developers think about how to structure their software, and they
can be used for complexity analysis by third parties.
As a first proof of their usefulness, several object-oriented software
packages have been shown to have a greater sensitivity to modularization than
those written in procedural languages, which in turn rank higher than
functional language software. These concerns may explain their relative
popularity better than language-specific features. Application area also seems
to play a role, as does the age of the project. Apparently the set of metrics
reflects, if not the exact hierarchy of logical components, at least general
modularity concerns.
### 5.1 Metrics in the Real World
Metrics can be highly enlightening, but if they cannot be computed
automatically they are less likely to be used in practice. Function points are
very interesting but hard to compute, and therefore they have not yet reached
their potential as a universal size measure [19].
The metrics presented here are very easy to compute; a few regular expressions
and some commands should be enough to get full results. A software package
which analyzes source code automatically and produces a report should be very
easy to develop; extending existing packages should be even easier.
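As an illustration of how simple such a computation can be (this is not the tooling actually used in this study), a crude sketch for a tree of C sources might look as follows; the function-counting regex is a heuristic assumption (a brace opening a definition in column 0, as in K&R-style layout) and will miscount other styles:

```python
# Crude per-package metrics: LOC per function, functions per file, files per directory.
import os
import re
from collections import defaultdict

FUNC_OPEN = re.compile(r"^\{", re.MULTILINE)   # heuristic: '{' in column 0 starts a function body

def walk_metrics(root):
    files_per_dir = defaultdict(int)
    loc = functions = nfiles = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".c", ".h")):
                continue
            files_per_dir[dirpath] += 1
            nfiles += 1
            text = open(os.path.join(dirpath, name), errors="ignore").read()
            loc += sum(1 for line in text.splitlines() if line.strip())   # non-blank lines
            functions += len(FUNC_OPEN.findall(text))
    return loc, functions, nfiles, files_per_dir

loc, functions, nfiles, per_dir = walk_metrics(".")
if functions and per_dir:
    print("LOC per function:    %.1f" % (loc / functions))
    print("functions per file:  %.1f" % (functions / nfiles))
    print("files per directory: %.1f" % (nfiles / len(per_dir)))
```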
Despite the automatic nature of the metrics, a certain amount of manual
inspection of the intermediate results must always be performed in order to
assess whether the directory structure reflects software modularity. This is
usually the case with source code, but inspections are a good idea in any case.
### 5.2 Further Work
This study has focused on well-known free software packages with sizes within
an order of magnitude ($500\mathsf{kLOC}$ to $5000\mathsf{kLOC}$). Extending
the study to a wider range of projects (with hard data on maintainability)
would add reliability to the proposed metrics. Future research will try to
identify complex areas in otherwise modular programs. An interesting line of
research would be to model the software not as an exponential tree, but as a
more organic structure with outgrowths, stubs and tufts; this approach might
help locate problematic areas.
The set of metrics presented is only a first approximation to measures of
complexity. Until these metrics are validated by wider studies they should not
be applied to mission-critical projects or used to make business decisions. On
the other hand, making hierarchical complexity measurable is an important step
that should lead to more comprehensive but hopefully just as simple metrics.
A goal that can be applied immediately to current development is to make the
hierarchical component organization explicit and to document it. The process
of sorting out and naming levels of components following a conscious and
rational scheme will probably lead to better modularization; in the end, more
maintainable software should be attained.
### 5.3 Acknowledgments
The author would like to thank Rodrigo Lumbreras for his insights on
professional C++ development, and Carlos Santisteban for his sharp
corrections.
## References
* [1] L M Laird, M C Brennan: “Software Measurement and Estimation”, ed John Wiley & Sons, 2006, p 58.
* [2] Y K Malaiya, J Denton: “Module Size Distribution and Defect Density”, in _Proceedings of ISSRE’00_ , 2000.
* [3] C Jones: “Applied Software Measurement”, ed Mcgraw-Hill, 1991, pp 237-241.
* [4] M Blume, A W Appel: “Hierarchical Modularity”, in _ACM Transactions on Programming Languages and Systems_ , vol 21, n 4, July 1999.
* [5] S McConnell: “Code Complete”, ed Microsoft Press, 1993, p 775.
* [6] H Washizaki _et al_ : “A Metrics Suite for Measuring Reusability of Software Components”, in _Proceedings of METRICS’03_ , 2003.
* [7] A T Lee _et al_ : “Software Analysis Handbook: Software Complexity Analysis…”, ed NASA Johnson Space Center, August 1994.
* [8] M Agrawal, K Chari: “Software Effort, Quality, and Cycle Time: A Study of CMM Level 5 Projects”, in IEEE Transactions on Software Engineering, vol 33, n 3, March 2007.
* [9] C Jones: “Applied Software Measurement”, ed Mcgraw-Hill, 1991, p 9.
* [10] S W Ambler: “The Elements of UML 2.0 Style”, ed Cambridge University Press, 2005, p 10.
* [11] C Jones: “Applied Software Measurement”, ed Mcgraw-Hill, 1991, p 240.
* [12] G A Miller: “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information”, in _The Psychological Review_ , 1956, vol 63, pp 81-97.
* [13] GNOME Project: “GARNOME Documentation”, accessed February 2007. http://www.gnome.org/projects/garnome/docs.html
* [14] Sun Microsystems: “Code Conventions for the JavaTM Programming Language”, revised April 20, 1999, ch 9.
* [15] C Euler: “File transformation with OpenOffice and its use for E-Learning”, in _Future Trends in E-Learning Technologies_ , 2005.
* [16] S McConnell: “Code Complete”, ed Microsoft Press, 1993, p 445.
* [17] D Kalev: “ANSI/ISO C++ Professional Programmer’s Handbook”, ed Que, 1999, chapter 4.
* [18] D Kalev: “ANSI/ISO C++ Professional Programmer’s Handbook”, ed Que, 1999, chapter 5.
* [19] L M Laird, M C Brennan: “Software Measurement and Estimation”, ed John Wiley & Sons, 2006, p 50.
|
arxiv-papers
| 2011-05-11T23:03:38 |
2024-09-04T02:49:18.711881
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Alejandro Fern\\'andez",
"submitter": "Alejandro Fern\\'andez",
"url": "https://arxiv.org/abs/1105.2335"
}
|
1105.2410
|
# An Envelope Disrupted by a Quadrupolar Outflow in the Pre-Planetary Nebula
IRAS19475+3119
Ming-Chien Hsu11affiliation: Academia Sinica Institute of Astronomy and
Astrophysics, P.O. Box 23-141, Taipei 106, Taiwan; mchsu@asiaa.sinica.edu.tw,
cflee@asiaa.sinica.edu.tw 22affiliation: Department of Physics, National
Taiwan University, Taipei 10617, Taiwan and Chin-Fei Lee11affiliation:
Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141,
Taipei 106, Taiwan; mchsu@asiaa.sinica.edu.tw, cflee@asiaa.sinica.edu.tw
###### Abstract
IRAS 19475+3119 is a quadrupolar pre-planetary nebula (PPN), with two bipolar
lobes, one in the east-west (E-W) direction and one in the southeast-northwest
(SE-NW) direction. We have observed it in CO J=2-1 with the Submillimeter
Array at $\sim$ $1^{\prime\prime}$ resolution. The E-W bipolar lobe is known
to trace a bipolar outflow and it is detected at high velocity. The SE-NW
bipolar lobe appears at low velocity, and could trace a bipolar outflow moving
in the plane of the sky. Two compact clumps are seen at low velocity around
the common waist of the two bipolar lobes, spatially coincident with the two
emission peaks in the NIR, tracing dense envelope material. They are found to
trace the two limb-brightened edges of a slowly expanding torus-like
circumstellar envelope produced in the late AGB phase. This torus-like
envelope could originally have been either a torus or a spherical shell, and it
has its present appearance because of the two pairs of cavities along the two
bipolar lobes. Thus, the envelope appears to have been disrupted by the two bipolar
outflows in the PPN phase.
circumstellar matter – planetary nebulae: general – stars: AGB and post-AGB –
stars: individual (IRAS 19475+3119): mass loss – stars: winds, outflows
## 1 Introduction
Pre-planetary nebulae (PPNe) are objects in a transient phase between the
asymptotic giant branch (AGB) phase and the planetary nebula (PN) phase in the
stellar evolution of a sun-like star. The PPN phase is short with a lifetime
of up to thousands of years; yet it closes the gap between the two
morphologically very different phases: the AGB phase and the PN phase. The
former exhibits roughly spherical AGB circumstellar envelopes (CSEs) expanding
radially at 10–20 km s-1, comparable to the escape velocity of the AGB stars.
Yet the latter exhibits diverse morphologies, with a significant fraction
having highly collimated fast-moving bipolar or multipolar (including
quadrupolar) outflow lobes (Balick & Frank, 2002; Sahai, 2004). In the past
decades, high spatial resolution imaging of PPNe has revealed an even richer
array of shapes, including multiaxial symmetries (Balick & Frank, 2002), suggesting
that the asphericity seen in the PN phase must have started in the PPN
phase (Sahai, 2004). It seems that collimated fast winds or jets are required
to shape the PPNe and young PNe (Sahai & Trauger, 1998; Soker & Rappaport,
2000; Lee & Sahai, 2003; Soker, 2004; Sahai, 2004). In particular, the
departure from the spherical symmetry could result from an interaction of the
jets with the preexisting AGB CSEs. Also, multipolar and/or point-symmetric
morphologies could result from multiple ejections and/or change in ejection
directions of the jets, respectively (Sahai et al. 2007, hereafter SSMC (07)).
PPNe and PNe are often seen with a dense equatorial waist structure that is
often called a torus (e.g., Kwok et al., 1998; Su et al., 1998; Volk et al.,
2007; Sahai et al., 2008). The origin of the torus is unknown, and it may be
due to either an equatorially enhanced mass loss in the late AGB phase or an
interaction between an underlying jet and the spherical AGB wind (Soker &
Rappaport, 2000). Tori are common in bipolar PPNe, but less common in
multipolar PPNe. It is thus important to study how a torus can be formed in
multipolar PPNe and its relation with the jet.
The PPN IRAS 19475+3119 is a quadrupolar nebula with two collimated bipolar
lobes, one in the east-west (E-W) direction and one in the southeast-northwest
(SE-NW) direction (SSMC, 07). In the optical image of the Hubble Space
Telescope (HST), the bipolar lobes appear limb-brightened, suggesting that
they are dense-walled structures with tenuous interiors or cavities (SSMC,
07), and that they trace two bipolar outflows produced by two bipolar post-AGB
winds (Sánchez-Contreras et al. 2006, hereafter SC (06); SSMC (07)). Its
central star, HD 331319, is an F3 Ib (T${}_{eff}\sim$ 7200K) supergiant star
classified as a post-AGB object based on the elemental abundance analysis
(Klochkova et al., 2002). The distance is $\sim$ 4.9 kpc by assuming a total
luminosity of 8300 L⊙, appropriate for a post-AGB star with a mass of 0.63 M⊙
(Hrivnak & Bieging, 2005). Using this luminosity and the effective
temperature, the central star is estimated to have a radius of 58 R⊙. This PPN
has a detached circumstellar envelope based on its double-peaked spectral
energy distribution (Hrivnak et al., 1999). Near-infrared (NIR) images of this
PPN show a nebula extending in the SE-NW direction and a puzzling two-armed
spiral-like structure (Gledhill et al., 2001). The NIR nebula is the
counterpart of the SE-NW bipolar lobe in the optical. The spiral-like
structure is point-symmetric and could result from an interaction of the mass-
losing star with a binary companion (Gledhill et al., 2001).
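For reference, the quoted stellar radius follows from the Stefan-Boltzmann relation $R=\sqrt{L/(4\pi\sigma T_{\mathrm{eff}}^{4})}$; a minimal numerical check (the solar constants are standard values, not taken from the text):

```python
# R = sqrt(L / (4 pi sigma Teff^4)) for L = 8300 Lsun and Teff = 7200 K.
import math

Lsun = 3.828e33      # erg/s
Rsun = 6.957e10      # cm
sigma = 5.6704e-5    # erg cm^-2 s^-1 K^-4

L = 8300 * Lsun
Teff = 7200.0
R = math.sqrt(L / (4 * math.pi * sigma * Teff ** 4))
print("R = %.0f Rsun" % (R / Rsun))   # ~58 Rsun
```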
Single-dish CO spectra toward this PPN show two components, a strong line core
from the circumstellar envelope and weak wings from fast moving gas (Likkel et
al., 1991; Hrivnak & Bieging, 2005). At $\sim$ $2^{\prime\prime}$ resolution,
the two components are seen in CO J=2-1 as a slowly expanding shell and a fast
bipolar outflow that is aligned with the E-W bipolar lobe in the optical (SC,
06). At $\sim$ $0^{\prime\prime}_{{}^{\textrm{.}}}6$ resolution, a ring-like
envelope is seen mainly around the E-W bipolar lobe (Castro-Carrizo et al.
2010, hereafter CC (10)). Here we present an observation at $\sim$
$1^{\prime\prime}$ resolution in CO J=2-1 for this PPN. Our observation shows
an expanding torus-like envelope around the common waist of the two bipolar
lobes and that the structure and kinematics of the envelope are consistent
with the envelope being disrupted by the quadrupolar nebula, or the two
bipolar outflows.
## 2 Observations
Interferometric observation of IRAS 19475+3119 in CO J=2-1 was carried out
using the Submillimeter Array (SMA) (Ho et al., 2004) in the extended
configuration on 2009 August 29. Seven antennas were used with baselines
ranging from 44 to 226 m. The SMA correlators were set up to have a channel
spacing of 0.2 MHz ($\sim$ 0.26 km s-1) for CO J=2-1. The total duration of
the observation (including the integration time on the source and calibrators)
was $\sim$ 11.5 hr with a useful duration of $\sim$ 5.5 hr on the source.
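For reference, the quoted velocity resolution follows from $\Delta v=c\,\Delta\nu/\nu$; a minimal check (the CO J=2-1 rest frequency is a standard value, not stated explicitly in the text):

```python
# Channel spacing in velocity units for a 0.2 MHz channel at the CO J=2-1 frequency.
c = 299792.458      # km/s
nu = 230.538e9      # Hz, CO J=2-1 rest frequency
dnu = 0.2e6         # Hz
print("%.2f km/s" % (c * dnu / nu))   # ~0.26 km/s
```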
The calibration of the data was performed using the MIR software package.
Passband and flux calibrators were 3C84 and Uranus, respectively. The gain
calibration was done with the quasar 2015+371, whose flux was estimated to be
3.05 Jy at 230 GHz, about 20% higher than that reported from the light curve
measurements of the SMA around the same period of time. Each integration cycle
includes 4 minutes on 2015+371 and 20 minutes on the source. The calibrated
visibility data were imaged with the MIRIAD package. The dirty maps that were
produced from the calibrated visibility data were CLEANed, producing the CLEAN
component maps. The final maps were obtained by restoring the CLEAN component
maps with a synthesized (Gaussian) beam fitted to the main lobe of the dirty
beam. With natural weighting, the synthesized beam has a size of
$1^{\prime\prime}_{{}^{\textrm{.}}}16$$\times$$1^{\prime\prime}_{{}^{\textrm{.}}}01$
at a position angle (P.A.) of $-$63∘. The rms noise levels are $\sim$ 0.1 Jy
beam-1 in the CO channel maps with a channel width of 1 km s-1. The velocities
of the channel maps are given in the LSR frame.
## 3 Results
In the following, our results are presented in comparison to the HST image of
SSMC (07) and the NIR image of Gledhill et al. (2001). Note that the HST image
adopted in SC (06) was incorrectly rotated by $\sim$ 10∘ clockwise. The
systemic velocity in this PPN is found to be 18.25$\pm$1 km s-1 from our
modeling in Section 4 (see also SC, 06). The integrated line profile of CO
shows that most emission comes from within $\pm 15$ km s-1 from the systemic
velocity. Compared with previous single-dish and interferometric observations
in the same emission line (SC, 06), our interferometric observation at high
resolution filters out most of the extended emission at low velocity, allowing
us to study compact structures, such as a torus and collimated outflows,
around the source.
### 3.1 A torus-like circumstellar envelope
At low velocity (within $\pm$ 10 km s-1 from the systemic velocity), the CO
map (Fig. 1) shows two compact clumps on the opposite sides of the source, one
to the northeast (NE) at P.A. $\sim$ 15∘ and one to the southwest (SW) at P.A.
$\sim$ 195∘, at a distance of $\sim$ $1^{\prime\prime}$ (4900 AU) from the
source. These two clumps can be seen in the 7 central velocity channels from 8
to 29 km s-1 (Fig. 2) and have also been seen at higher resolution in CC (10).
They are seen around the common waist of the two bipolar lobes,
spatially coincident with the two emission peaks in the NIR, tracing the dense
envelope around the source. The center of the two clumps, however, is shifted
by $0^{\prime\prime}_{{}^{\textrm{.}}}15$ to the east of the reported source
position, which is
$\alpha_{(2000)}=19^{\mathrm{h}}49^{\mathrm{m}}29^{\mathrm{s}}_{{}^{\textrm{.}}}56$,
$\delta_{(2000)}=+31\arcdeg 27\arcmin 16^{\prime\prime}_{{}^{\textrm{.}}}22$
(Hog et al., 1998). This position shift is only $\sim$ 15% of the synthesized
beam and thus can be due to the position uncertainty in our map. Also, there
could be uncertainty in the reported source position. Therefore, in this
paper, the center of the two clumps is considered as the source position.
Faint protrusions are also seen along the two bipolar lobes at low velocity,
extending to the east and west, as well as to the southeast and northwest.
These faint protrusions are most likely to be from the quadrupolar lobes, and
thus unlikely to be from the dense envelope itself.
The structure and kinematics of the two clumps can be studied with position-
velocity (PV) diagrams cut along four different P.A., from the one aligned
with their peaks (cut A with P.A.=15∘) to the one perpendicular (cut D with
P.A. = $-75$∘ ), through the two in between (cut B with P.A.=$-10$∘ and cut C
with P.A.=$-30$∘ ). The PV diagrams along these cuts show a similar ring-like
PV structure, with less and less emission around the systemic velocity as we go
from cut A to cut D (Figures 3a-d), indicating that the two clumps trace the
two limb-brightened edges of a torus-like envelope at a small inclination
angle (i.e., close to edge-on), either radially expanding or collapsing, with
the equator aligned with cut A. In addition, the torus-like envelope has a
finite thickness, with a half opening angle (i.e., subtended angle; see the next
section for the definition) of 20∘–45∘, measured from its equator. The two high-
velocity ends of the ring-like PV structure are from the farside and nearside
of the torus-like envelope projected to the source position. The emission
there is bright, indicating that the material in the farside and nearside must
be projected to the source position, and this requires the torus-like envelope
to have an inclination angle smaller than its half opening angle. Figure 4
shows the spectrum integrated over a region with a diameter of $\sim$
$2^{\prime\prime}_{{}^{\textrm{.}}}5$ around the source. It includes most of
the torus-like envelope but excludes the protrusions along the two bipolar
lobes. It shows a double-peaked line profile with the redshifted peak at
$\sim$ 28 km s-1 slightly brighter than the blueshifted peak at $\sim$ 10 km
s-1. Since the temperature of the torus-like envelope is expected to drop with
distance from the central star (Kwan & Linke, 1982), the emission from the farside is
expected to be brighter than that from the nearside because of the absorption
in the nearside. Therefore, the redshifted peak in the line profile is from
the farside and the blueshifted peak is from the nearside, implying that the
torus-like envelope is expanding at $\sim$ 10 km s-1, in agreement with it
being the circumstellar envelope produced in the AGB phase. The PV diagram cut
perpendicular to the equator, i.e., cut along P.A.=-75∘, shows that the
redshifted emission shifts to the west and the blueshifted emission shifts to
the east at low velocity (Fig. 3d). For an expanding torus, this indicates
that the torus is inclined, with the nearside tilted to the east.
### 3.2 Outflows
At high velocity (beyond $\pm$13 km s-1 from the systemic velocity), CO
emission is seen around the E-W bipolar lobe seen in the optical, emerging
from the two opposite poles (or holes) of the torus-like envelope (Fig. 1).
This E-W bipolar lobe is already known to trace a fast-moving bipolar outflow
(SC, 06). The PV diagram (Fig. 3e) along the outflow axis (P.A.=88∘; cut E in
Fig. 1a) shows that the outflow velocity increases from the base to the tip at
$\sim$ $4^{\prime\prime}$ ($\sim$ 20000 AU), as found in SC (06). This is why
the CO emission at low velocity is seen near the source, forming the faint
protrusions along the E-W bipolar lobe, as mentioned in the previous section.
The radial velocities are $\sim$ 30 km s-1 at the tip of the red-shifted lobe
and $\sim$ 24 km s-1 at the tip of the blue-shifted lobe, with a mean velocity
of $\sim$ 27 km s-1.
The PV diagram also shows a hint of bifurcation in velocity along the outflow
axis, especially for the red-shifted lobe in the east [see the dark lines in
Figure 3e and also Figure 20 in CC (10)]. This suggests that the outflow lobe
is a hollow, shell-like cavity wall, as suggested in SSMC (07). For example,
in the case of the redshifted lobe, the emission with less red-shifted
velocity is from the front wall, while the emission with the more red-shifted
velocity is from the back wall. This shell-like structure likely results from
an interaction of a post-AGB wind with the pre-existing circumstellar medium,
as proposed in SC (06) and SSMC (07).
On the other hand, no CO emission is seen here at high velocity around the SE-
NW optical bipolar lobe. It has been argued that this bipolar lobe, like the
E-W bipolar lobe, is also a dense-walled structure with tenuous interiors (or
cavities) (SSMC, 07) and could also be produced by a bipolar post-AGB wind
(SC, 06; SSMC, 07). Thus, this bipolar lobe could trace a bipolar outflow
moving along the plane of the sky, with the CO emission seen only at low
velocity (within $\pm$ 10 km s-1 from the systemic as seen in the channel
maps) forming the faint protrusions along the SE-NW bipolar lobe, as mentioned
in the previous section. The low inclination is also consistent with the fact
that the SE and NW components of the SE-NW bipolar lobe have almost the same
brightness in the optical.
## 4 Modeling the torus-like envelope
As discussed, the two CO clumps seen at low velocity arise from a torus-like
envelope at a small inclination angle, with the nearside tilted to the east.
In this section, we derive the physical properties, including the kinematics,
structure, density and temperature distributions, of this torus-like envelope
through modeling the two CO clumps with a radiative transfer code. The
envelope appears torus-like around the common waist of the two bipolar lobes,
and it could arise from a torus with or without the cavities (or holes)
cleared by the bipolar lobes, or a spherical shell with the cavities. For a
torus and a spherical shell, the number density of the molecular hydrogen in
spherical coordinates $(r,\theta,\phi)$ can be assumed to be
$n=n_{0}\Bigl{(}\frac{r}{r_{0}}\Bigr{)}^{-2}\cos^{p}\theta,$ (1)
where $\theta$ is measured from the equatorial plane of the envelope. Here
$p=0$ for a spherical shell and $p>0$ for a torus with a half opening angle
$\theta_{0}$ defined as
$\cos^{p}\theta_{0}=0.5$ (2)
Also $r_{0}=$$1^{\prime\prime}$, which is the representative radius of the
torus-like envelope. The envelope has an inner radius $R_{\textrm{in}}$ and an
outer radius $R_{\textrm{out}}$. It is expanding radially at a velocity of
$v_{e}$, and thus the mass-loss rate (including helium) can be given by
$\dot{M}=1.4m_{\scriptsize\textrm{H}_{2}}\int nv_{e}r^{2}\,d\Omega=2.8\pi
m_{\scriptsize\textrm{H}_{2}}n_{0}v_{e}r_{0}^{2}\int\cos^{p+1}\theta\,d\theta$ (3)
where $d\Omega=\cos\theta\,d\theta\,d\phi$, since $\theta$ is measured from the
equatorial plane.
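A short numerical sketch of equations (2) and (3), using the best-fit parameters listed in Table 1; the H$_{2}$ mass and the conversion $r_{0}=1^{\prime\prime}\approx 4900$ AU (for the adopted distance of 4.9 kpc) are assumptions taken from the text, and the resulting values reproduce the mass-loss rates quoted in Table 1:

```python
# Half opening angle (eq. 2) and mass-loss rate (eq. 3) for the Table 1 parameters.
import math
from scipy.integrate import quad

mH2 = 3.35e-24                      # g, mass of an H2 molecule
AU, YR, MSUN = 1.496e13, 3.156e7, 1.989e33
r0 = 4900 * AU                      # cm, 1 arcsec at 4.9 kpc
ve = 12.5e5                         # cm/s

def half_opening_angle(p):
    # cos^p(theta0) = 0.5
    return math.degrees(math.acos(0.5 ** (1.0 / p)))

def mdot(n0, p):
    # integrate cos^(p+1) over theta (the extra cos theta comes from dOmega)
    integral = quad(lambda t: math.cos(t) ** (p + 1), -math.pi / 2, math.pi / 2)[0]
    return 2.8 * math.pi * mH2 * n0 * ve * r0 ** 2 * integral * YR / MSUN

print("theta0(p=8) = %.1f deg" % half_opening_angle(8))       # ~24 deg
print("torus:            %.1e Msun/yr" % mdot(7.5e3, 8))      # ~1.9e-5
print("torus + cavities: %.1e Msun/yr" % mdot(9.0e3, 8))      # ~2.3e-5
print("shell + cavities: %.1e Msun/yr" % mdot(4.3e3, 0))      # ~2.7e-5
```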
The cavities, if needed, are assumed to have a half-ellipsoidal (like
paraboloidal) opening for simplicity, with the semi-major axis and semi-minor
axis determined from the bipolar lobes in the optical. Inside the cavities,
the number density of the envelope is set to zero.
In the envelope, the temperature profile is assumed to be
$T=T_{0}\Bigl{(}\frac{r}{r_{0}}\Bigr{)}^{-1}$ (4)
similar to that derived by Kwan & Linke (1982) for AGB envelopes, including
gas-dust collisional heating, adiabatic cooling, and molecular cooling.
In our models, radiative transfer is used to calculate the CO emission, with
an assumption of local thermal equilibrium. The relative abundance of CO to H2
is assumed to be 4$\times 10^{-4}$, as in SC (06). For simplicity, the line
width is assumed to be given by the thermal line width only. Also, the
systemic velocity $v_{\textrm{\scriptsize sys}}$ is assumed to be a free
parameter. The channel maps of the emission derived from the models are used
to obtain the integrated intensity map, spectrum, and PV diagrams to be
compared with the observation. Note that the observed emission in the east,
west, southeast, and northwest protrusions at low velocity (Fig. 1) is not
from the envelope and thus will not be modeled here. In our models, the free
parameters are $v_{\textrm{\scriptsize sys}}$, $v_{e}$,
$R_{\textrm{\scriptsize in}}$, $R_{\textrm{\scriptsize out}}$, $p$ (torus or
spherical shell), cavities (holes), inclination angle $i$, equator PA,
$n_{0}$, and $T_{0}$. Table 1 shows the best-fit parameters for our models
with different structures. Our models all require $v_{\textrm{\scriptsize
sys}}\sim$ 18.25$\pm 1$ km s-1 and $v_{e}\sim$ 12.5$\pm 1.5$ km s-1 to fit the
two velocity ends in the PV diagrams and the spectrum. They require
$R_{\textrm{in}}\sim$ $0^{\prime\prime}_{{}^{\textrm{.}}}7$ (3430 AU) and
$R_{\textrm{out}}\sim$ $1^{\prime\prime}_{{}^{\textrm{.}}}6$ (7840 AU) to
match the emission peak positions. They also require $T_{0}$ to be $\sim$ 28 K
to match the emission intensity, resulting in an overall characteristic (or
mean) temperature of $\sim$ 23 K for the envelope. This characteristic
temperature is slightly higher than the values found in the more extended
envelope by SC (06) and SSMC (07) at lower resolutions. This is reasonable,
because the envelope temperature is expected to be higher near the source. In
the following, we discuss our different models in detail.
### 4.1 A Torus without cavities
First we check if a simple torus model can reproduce the two clumps (Figure
5). In this model, the equator of the torus is set to be at P.A.=15∘, to be
aligned with the two clumps. As expected, the torus is required to have a
small inclination angle of $\sim$ 15∘ and a small half opening angle of
$\theta_{0}\sim$ 24∘ with $p=8$. This model can reproduce reasonably well the
compactness of the two clumps. It can also reproduce the ring-like PV
structures, and the less and less emission around the systemic velocity in the
PV diagrams from cut A to cut D. Note that the PV diagram cut along the
equatorial plane shows a ring-like structure with two C-shaped peaks at the
two high-velocity ends facing each other, because the emission is symmetric
about the equatorial plane. For the PV diagrams with a cut away from the
equatorial plane, the blueshifted emission shifts to the southeast and the
redshifted emission shifts to the northwest, because the torus is inclined
with the nearside tilted to the southeast. For the spectrum, the model can
reproduce the double-peaked line profile; however, it produces a higher
intensity than observed around the systemic velocity (Fig. 5b).
### 4.2 A Torus with an E-W pair of cavities
Here we add an E-W pair of cavities to the above simple torus model and see if
we can reproduce the spectrum better by reducing the intensity around the
systemic velocity (Figure 6). The E-W pair of cavities are assumed to have a
half-ellipsoidal opening with a semi-major axis of $2^{\prime\prime}$ and a
semi-minor axis of $1^{\prime\prime}$, as measured from the HST image (Fig.
1). The E-W pair of cavities have a P.A.=88∘ and can have an inclination angle
from 10∘ to 30∘, with the western cavity tilted toward us, as suggested in SC
(06). These moderate inclinations are consistent with the optical image, which
shows that the eastern lobe is slightly fainter than the western lobe, and they all
give similar results in our model. The model results here are similar to those
of the simple torus model above, with some slight differences. This is because
the E-W pair of cavities are almost aligned with the poles of the torus, and
removing the tenuous material in the poles does not change the model results
much. The cavities slightly remove the blueshifted emission in the west and the
redshifted emission in the east, mainly at low velocity, as seen in the
spectrum. In addition, unlike in the torus model, the PV diagram cut
along the equatorial plane shows a ring-like structure with two sickle-shaped
peaks, instead of two C-shaped peaks, at the two high-velocity ends facing
each other (compare Figure 5c and Figure 6c). This is because, along
that cut direction, the E-W pair of cavities remove the redshifted material in
the north and the blueshifted emission in the south.
### 4.3 A Torus with 2 pairs of cavities
Here we add one more pair of cavities, the SE-NW pair, into our model (Figure
7). Again, this pair of cavities are assumed to have a half-ellipsoidal
opening but with a semi-major axis of $2^{\prime\prime}$ and a semi-minor axis
of $0^{\prime\prime}_{{}^{\textrm{.}}}8$, as measured from the HST image (Fig.
1). This pair of cavities have a P.A.=$-40$∘ and are assumed to have an
inclination of 0∘ (i.e., in the plane of the sky). Adding this pair of
cavities will rotate the two clumps counterclockwise about the source.
Therefore, the torus in this model is required to have its equator at a
smaller P.A. of 10∘ in order for the two clumps to appear at P.A.=15∘. As a
result, the underlying torus could be initially more perpendicular to the E-W
bipolar lobe. Also, a higher density is required to reproduce the same amount
of flux in the clumps (see Table 1).
By removing the material along the SE-NW bipolar lobe, this model can now
reproduce the required dip around the systemic velocity in the double-peaked
spectrum. This is because the SE-NW pair of cavities, being in the plane
of the sky, preferentially remove the low-velocity emission. This model can
also reproduce the ring-like PV structures, and the less and less emission
around the systemic velocity in the PV diagrams from cut A to cut D. Note
that, in the PV diagram cut along the equatorial plane, the model shows two
sickle-shaped peaks, which could be somewhat different from the observation in
detail. This detailed difference could be due to a localized excitation effect
near the base of the E-W cavities and should not affect our main conclusions
on the envelope properties.
### 4.4 A Spherical shell with 2 pairs of cavities
A spherical shell with only the E-W pair of cavities can be ruled out because
it would produce two clumps that are exactly perpendicular to the E-W bipolar
lobe, inconsistent with the observation. Therefore, here we check if a
spherical shell with the same 2 pairs of cavities as above can produce the two
clumps in the observation. We find that this model is quite similar to the
torus model with the 2 pairs of cavities, except that the two clumps are
slightly rotated and the emission is more extended perpendicular to the torus-
like structure, in between the cavities (compare Figure 8a to Figure 7a).
This slight rotation is acceptable because our model is simple and the two
clumps are not resolved in our observation. On the other hand, further
observation that can separate the envelope from the two bipolar lobes is
needed to check if the emission extending perpendicular to the torus-like
structure in this model is inconsistent with the observation.
### 4.5 Model summary
Our model results show that the two CO clumps could arise from either a torus
or a spherical shell, with two pairs of cavities, one along the E-W bipolar
lobe and one along the SE-NW bipolar lobe. A pure torus model and a torus with
only a E-W pair of cavities both produce more than observed emission around
the systemic velocity in the spectrum. In addition, the SE-NW pair of
cavities, being in the plane of the sky, are essential to remove the low-
velocity emission in our models. All the models are optically thin. With the
mean envelope temperature of 23 K and the model brightness temperature of
$\sim$ 4 K, the mean optical depth is $\sim$ 0.2. Therefore, in the model
spectra, the redshifted peak is only slightly brighter than the blueshifted
peak, as observed. The mass-loss rate of the AGB wind derived from our two
best models is $\sim 2-3\times 10^{-5}$ M⊙ yr-1. In the shell model, the mass-
loss rate is about three times smaller than that derived from a similar shell
model by SC (06), which was found to be $\sim 10^{-4}$ M⊙ yr-1. This is probably
because of the following reasons: (1) our inner radius is smaller than theirs,
and the emission is more efficient in the inner region where the density and
temperature are both higher; (2) a large amount of missing flux in our
observation; and (3) their envelope emission seen at low resolution could be
significantly contaminated by the outflows. Therefore, the mass loss rate
derived here could be a lower limit. Alternatively, the mass-loss rate might
have decreased with time.
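The quoted mean optical depth follows from the relation $T_{\mathrm{B}}=T\,(1-e^{-\tau})$ (neglecting the background term); a minimal check:

```python
# tau from the mean envelope temperature (~23 K) and the model brightness temperature (~4 K).
import math
T, Tb = 23.0, 4.0
tau = -math.log(1.0 - Tb / T)
print("tau = %.2f" % tau)   # ~0.2
```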
## 5 Discussion
### 5.1 The expanding torus-like envelope
In our observation, two clumps are seen in CO at the common waist of the two
optical bipolar lobes, spatially coincident with the two peaks in the NIR
polarization image (Figure 1), tracing the dense envelope material. These two
clumps are also seen at higher resolution by CC (10), but not spatially
resolved at lower resolution in SC (06). From our models, they are found to
trace the two limb-brightened edges of an expanding torus-like envelope around
the source, with an inclination angle of $\sim$ 15∘. This torus-like envelope
corresponds to the ring-like envelope found at higher resolution at a similar
inclination angle in CC (10). At a lower resolution of $\sim$
$2^{\prime\prime}$, two different clumps were seen in CO but aligned with the
SE-NW optical bipolar lobe (SC, 06) as well as the SE-NW NIR nebula. Those
clumps are also seen here but appear as the two faint protrusions along the
SE-NW bipolar lobe (Fig.1a). As mentioned before, the two optical bipolar
lobes, both E-W and SE-NW, have been found to be dense-walled structures with
tenuous interiors (i.e., cavities) (SSMC, 07) and these cavities are indeed
required in our models. Thus, our protrusions or their clumps along the SE-NW
bipolar lobe likely trace the unresolved dense wall materials (i.e., swept-up
material) of that bipolar lobe as suggested in SC (06) and SSMC (07). They
only appear at low velocity, likely because the SE-NW bipolar lobe lies in
the plane of the sky, as supported by the similar brightness of the SE and NW
components of the bipolar lobe in both the HST optical and the NIR images.
They are faint here in our CO observation probably because their CO emission
merges with that of the extended circumstellar envelope and halo, and is thus
mostly resolved out. Two other protrusions are also seen at low velocity along
the E-W bipolar lobe (Fig. 1a), tracing the unresolved dense wall materials of
the bipolar lobe.
### 5.2 The shaping mechanism and its consequences
The PPN IRAS 19475+3119 is not spherically symmetric, with a torus-like envelope at
the common waist of the two bipolar lobes. The torus-like envelope is
expanding at a speed of $\sim$ 12.5 km s-1, and thus has a dynamical age of
$\sim$ 1900 yr, with a radius of $\sim$ 5000 AU ($\sim$ $1^{\prime\prime}$).
The E-W bipolar lobe has a mean inclination of $\sim$ 20∘, so the mean
deprojected velocity is $\sim$ $27/\sin(20)\sim$ 80 km s-1 at the tips and the
mean deprojected distance of the tips is $\sim$ 21000 AU ($\sim$
$4^{\prime\prime}_{{}^{\textrm{.}}}3$). Thus, the dynamical age of the E-W
bipolar lobe is estimated to be $\sim$ 1200 years ($\pm$400 years), more than
half of that of the torus-like envelope. The SE-NW bipolar lobe is about half
the length of the E-W bipolar lobe and may have a similar dynamical age to the
E-W bipolar lobe within a factor of two. The bipolar lobes and torus-like
envelope are believed to be physically related. The envelope is believed to be
the AGB envelope formed by the late AGB wind. The two bipolar lobes could
trace two bipolar outflows produced by two distinct bipolar post-AGB winds
ejected in two different directions, as proposed in SC (06) and SSMC (07). The
dense-walled structures with tenuous interiors (cavities) of the bipolar lobes
could result from the interactions between the post-AGB winds and the envelope,
as shown in hydrodynamical simulations (Lee & Sahai, 2003; Lee, Hsu, & Sahai,
2009). Also, according to our models, the envelope, originally either a torus
or a spherical shell, has been disrupted by the two bipolar lobes or outflows,
and it has its present appearance because of these disruptive interactions.
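The dynamical ages quoted above follow from size divided by expansion velocity, with the E-W lobe deprojected through its $\sim$ 20∘ inclination; a minimal check:

```python
# Dynamical ages: t = size / velocity.
import math
AU, KM, YR = 1.496e13, 1.0e5, 3.156e7   # cm per AU, cm per km, s per yr

def age_yr(size_au, v_kms):
    return size_au * AU / (v_kms * KM) / YR

# torus-like envelope: r ~ 5000 AU, expanding at ~12.5 km/s
print("envelope: %.0f yr" % age_yr(5000.0, 12.5))         # ~1900 yr

# E-W lobe: deprojected tip distance ~21000 AU, radial velocity ~27 km/s at the tips
v_dep = 27.0 / math.sin(math.radians(20.0))               # ~80 km/s
print("E-W lobe: %.0f yr" % age_yr(21000.0, v_dep))       # ~1200-1300 yr
```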
Could the torus-like envelope originally be a torus, like the dense equatorial
waist structure often seen in other PPNe (Volk et al., 2007; Sahai et al.,
2008)? If so, it could be due to an equatorially enhanced mass loss in the
late AGB phase (see the review by Balick & Frank, 2002). Such a dense waist,
where large grains can grow, seems to be needed to account for the
millimeter/submillimeter excess toward this source, as argued by SSMC (07).
This toroidal envelope may help collimate the E-W bipolar outflow in its pole
direction. However, it is hard for the toroidal envelope to collimate two
bipolar outflows that are almost perpendicular to each other, such as those in
this PPN. Instead, the envelope appears to be disrupted by the two outflows,
as discussed above.
On the other hand, could the torus-like envelope be a remnant of a spherical
AGB shell in the inner part? The torus-like envelope has an expansion velocity
similar to that of the surrounding halo (SSMC, 07), and thus may have the same
origin as the halo, which is the spherical circumstellar shell formed by an
isotropic AGB wind in the past. As discussed before, a spherical shell model
with 2 pairs of cavities along the two bipolar lobes can produce a torus-like
envelope as observed. Thus, the torus-like envelope could indeed be the
remnant of the AGB shell in the inner part. Note that SC (06) have also tried
a similar spherical shell model but only with a pair of cavities along the E-W
bipolar lobe. As mentioned in Section 4.4, however, this model would produce
two clumps that are perpendicular to the E-W bipolar lobe, but not at the
common waist of the two bipolar lobes as observed. A similar model with 2
pairs of cavities has been adopted to explain the clumpy hollow shell
morphology of the Egg Nebula by Dinh-V-Trung & Lim (2009). In that PPN, the
spherical AGB envelope is also disrupted by the outflows. In this PPN, the
spherical shell is disrupted more dramatically, leaving behind a torus-like
envelope.
The envelope, originally either a torus or a spherical shell, will not be able
to collimate the two bipolar outflows, as discussed. Therefore, the two
collimated outflows in this PPN must be produced by two post-AGB winds that
are intrinsically collimated or jets, as proposed in other PPNe and young PNe
(Sahai & Trauger, 1998; Sahai, 2001; Lee & Sahai, 2003; SC, 06). One popular
model to produce a collimated jet requires a binary companion to accrete the
material from the AGB star and then launch the jet (see the review by Balick &
Frank, 2002). Gledhill et al. (2001) also suggested that a binary interaction
is needed to produce the two-armed spiral structure seen in the NIR
polarization image of this PPN. The binary interaction can also transform the
isotropic mass loss of the AGB wind into an equatorially enhanced mass loss
(Mastrodemos & Morris, 1998), producing a toroidal envelope as needed in our
model. However, it is unclear how two jets can be launched in this PPN to
produce the two bipolar outflows (see SSMC, 07, for various possibilities).
## 6 Conclusions
IRAS 19475+3119 is a quadrupolar PPN, with two bipolar lobes, one in the east-
west (E-W) direction and one in the southeast-northwest (SE-NW) direction. The
E-W bipolar lobe is known to trace a bipolar outflow and it is detected at
high velocity. The SE-NW bipolar lobe appears at low velocity, and could trace
a bipolar outflow moving in the plane of the sky. Two compact clumps are seen
at low velocity around the common waist of the two bipolar lobes, spatially
coincident with the two emission peaks in the NIR, tracing dense envelope
material. They are found to trace the two limb-brightened edges of a torus-
like circumstellar envelope, expanding at $\sim$ 12.5 km s-1. This torus-
like envelope can be reproduced reasonably well with either a torus or a
spherical shell, both with two pairs of cavities along the two bipolar lobes.
Here, the torus could come from an equatorially enhanced mass loss and the
spherical shell could come from an isotropic mass loss, both in the late AGB
phase. In either case, the circumstellar envelope appears to have been disrupted by
the two bipolar outflows.
We thank the referee for the valuable comments. M.-C. Hsu appreciates valuable
discussion with Chun-Hui Yang, especially in the development of the radiative
transfer code. C.-F. Lee and M.-C. Hsu are financially supported by the NSC
grant NSC96-2112-M-001-014-MY3.
## References
* Balick & Frank (2002) Balick, B., & Frank, A. 2002, ARA&A, 40, 439
* CC (10) Castro-Carrizo, A., et al. 2010, A&A, 523, A59 (CC10)
* Dinh-V-Trung & Lim (2009) Dinh-V-Trung, & Lim, J. 2009, ApJ, 698, 439
* Gledhill et al. (2001) Gledhill, T. M., Chrysostomou, A., Hough, J. H., & Yates, J. A. 2001, MNRAS, 322, 321
* Ho et al. (2004) Ho, P. T. P., Moran, J. M., & Lo, K. Y. 2004, ApJ, 616, L1
* Hog et al. (1998) Hog, E., Kuzmin, A., Bastian, U., Fabricius, C., Kuimov, K., Lindegren, L., Makarov, V. V., & Roeser, S. 1998, A&A, 335, L65
* Hrivnak et al. (1999) Hrivnak, B. J., Langill, P. P., Su, K. Y. L., & Kwok, S. 1999, ApJ, 513, 421
* Hrivnak & Bieging (2005) Hrivnak, B. J., & Bieging, J. H. 2005, ApJ, 624, 331
* Klochkova et al. (2002) Klochkova, V. G., Panchuk, V. E., & Tavolzhanskaya, N. S. 2002, Astronomy Letters, 28, 49
* Kwan & Linke (1982) Kwan, J., & Linke, R. A. 1982, ApJ, 254, 587
* Kwok et al. (1998) Kwok, S., Su, K. Y. L., & Hrivnak, B. J. 1998, ApJ, 501, L117
* Lee & Sahai (2003) Lee, C.-F., & Sahai, R. 2003, ApJ, 586, 319
* Lee, Hsu, & Sahai (2009) Lee, C.-F., Hsu, M.-C., & Sahai, R., 2009, ApJ, 696, 1630
* Likkel et al. (1991) Likkel, L., Forveille, T., Omont, A., & Morris, M. 1991, A&A, 246, 153
* Mastrodemos & Morris (1998) Mastrodemos, N., & Morris, M. 1998, ApJ, 497, 303
* Sahai (2001) Sahai, R., 2001, in Post-AGB Objects as a Phase of Stellar Evolution, ed. R. Szczerba & S. K. Górny (Dordrecht: Kluwer), 53
* Sahai (2004) Sahai, R. 2004, Asymmetrical Planetary Nebulae III: Winds, Structure and the Thunderbird, 313, 141
* SSMC (07) Sahai, R., Sánchez Contreras, C., Morris, M., & Claussen, M. 2007, ApJ, 658, 410 (SSMC07)
* Sahai & Trauger (1998) Sahai, R., & Trauger, J. T. 1998, AJ, 116, 1357
* Sahai et al. (2008) Sahai, R., Young, K., Patel, N., Sánchez Contreras, C., & Morris, M. 2008, Ap&SS, 313, 241
* SC (06) Sánchez Contreras, C., Bujarrabal, V., Castro-Carrizo, A., Alcolea, J., & Sargent, A. 2006, ApJ, 643, 945 (SC06)
* Soker (2004) Soker, N. 2004, Asymmetrical Planetary Nebulae III: Winds, Structure and the Thunderbird, 313, 562
* Soker & Rappaport (2000) Soker, N., & Rappaport, S. 2000, ApJ, 538, 241
* Su et al. (1998) Su, K. Y. L., Volk, K., Kwok, S., & Hrivnak, B. J. 1998, ApJ, 508, 744
* Volk et al. (2007) Volk, K., Kwok, S., & Hrivnak, B. J. 2007, ApJ, 670, 1137
Table 1: The best fit parameters for our models
Parameters | Torus | Torus | Torus | Shell Model
---|---|---|---|---
| no cavities | \+ EW cavities | \+ 2 pairs of cavities | \+ 2 pairs of cavities
$p$ | 8 | 8 | 8 | 0
Inclination $i$ | 15 | 15 | 15 | –
equator P.A. | 15 | 15 | 10 | –
$n_{0}$ (cm-3) | $7.5\times 10^{3}$ | $7.5\times 10^{3}$ | $9.0\times 10^{3}$ | $4.3\times 10^{3}$
$T_{0}$ (K) | 28 | 28 | 28 | 28
$v_{e}$ (km s-1) | 12.5 | 12.5 | 12.5 | 12.5
$\dot{M}$ (M⊙ yr-1) | $1.9\times 10^{-5}$ | $1.9\times 10^{-5}$ | $2.3\times 10^{-5}$ | $2.7\times 10^{-5}$
Note. — The torus model with p=8 has a half opening angle of $\theta_{0}\sim
24$∘. Here the 2 pairs of cavities include the E-W pair and SE-NW pair of
cavities. The E-W pair of cavities have a P.A.=88∘ and a mean inclination
angle of $\sim$ 20∘, with the western cavity tilted toward us. The SE-NW pair
of cavities have a P.A.=$-40$∘ and are assumed to have an inclination of 0∘.
Figure 1: CO J=2-1 contours of IRAS 19475+3119 superposed on top of (a) the
HST image obtained with F606W filter (SSMC, 07) and (b) the NIR image (with
yellow contours, Gledhill et al., 2001). The central source position is marked
by a “+”. The synthesized beam of the CO observation is shown in the left
corner, with a size of
$1^{\prime\prime}_{{}^{\textrm{.}}}16$$\times$$1^{\prime\prime}_{{}^{\textrm{.}}}01$.
The CO intensity maps integrated over three selected LSR velocity intervals
show the torus-like structure (black contours), redshifted (red contours) and
blueshifted (blue contours) outflow lobes. Contour levels all start from
2$\sigma$ with a spacing of 1$\sigma$, where $\sigma$ is 0.45 Jy beam-1 km
s-1. Labels A–E show the cuts for the position-velocity (PV) diagrams in
Figure 3.
Figure 2: Channel maps of CO J=2-1 emission in IRAS 19475+3119. The velocity
width is 3.0 km s-1. The systemic velocity is 18.25$\pm$1 km s-1, as found in
our model. The contour levels start from $2\sigma$ with a spacing of
1$\sigma$, where $\sigma=$0.06 Jy beam-1. The stellar position is marked by a
cross. The synthesized beam is shown in the first channel map at the lower
right corner. The velocity in each channel is shown in the top left corner.
Figure 3: PV diagrams of the CO emission, with the cut directions shown in
Figure 1a. The contour levels start from 2$\sigma$ with a spacing of
2$\sigma$, where $\sigma=0.06$ Jy beam-1 in (a)-(d) and 0.03 Jy beam-1 in (e).
The spatial and velocity resolutions are shown as a rectangle in the bottom
right corner. (a) Cut A (along P.A.=15∘) goes through the emission peaks of
the two low-velocity clumps, showing a ring-like PV structure. (b) Cut B
(along P.A.=$-$10∘) goes along the axis half way to the edges of the clumps.
It also shows a ring-like structure but with a weaker emission around the
systemic velocity. (c) Cut C (along P.A.=$-$30∘) goes through the edges of the
clumps, showing a breakup of the ring-like PV structure. (d) Cut D (along
P.A.=$-$75∘) goes perpendicular to the axis connecting the two clumps, showing
the emission shifted to the east on the blueshifted side and to the west on
the redshifted side. (e) Cut E (along P.A.=88∘) goes along the high-velocity
component in the east-west direction, showing a velocity increasing with the
distance from the source for the outflow, and a hint of bifurcation of the PV
structure (solid lines).
Figure 4: CO spectrum toward the center position, averaged over a region with
a diameter of $2^{\prime\prime}_{{}^{\textrm{.}}}5$. Note that a bigger region
will include emission from the two bipolar lobes. The vertical dashed line
shows the systemic velocity. The spectrum shows more emission on the
redshifted side than the blueshifted side, as expected for an expanding
envelope.
Figure 5: Comparison of our torus model with the observed torus-like envelope
in (a) integrated intensity map as shown in Figure 1, (b) spectrum as shown in
Figure 4, and (c)-(f) PV diagrams as shown in Figure 3a-d. Black contours and
spectrum are from the model. The best fit model requires $p=8$ and an
inclination of $\sim$15∘.
Figure 6: Same as Figure 5 but for the torus + EW pair of cavities model.
Figure 7: Same as Figure 5 but for the torus + 2 pairs of cavities model.
Figure 8: Same as Figure 5 but for the spherical shell + 2 pairs of cavities
model.
|
arxiv-papers
| 2011-05-12T09:25:28 |
2024-09-04T02:49:18.719973
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ming-Chien Hsu and Chin-Fei Lee",
"submitter": "Ming-Chien Hsu",
"url": "https://arxiv.org/abs/1105.2410"
}
|
1105.2460
|
# Dynamic transition in an atomic glass former: a molecular dynamics evidence
Estelle Pitard Laboratoire des Colloïdes, Verres et NanoMatériaux (CNRS UMR
5587), Université de Montpellier II, place Eugène Bataillon, 34095 Montpellier
cedex 5, France Vivien Lecomte Laboratoire de Probabilités et Modèles
Aléatoires (CNRS UMR 7599), Université Paris Diderot - Paris 7 et Université
Pierre et Marie Curie - Paris 6, Site Chevaleret, CC 7012, 75205 Paris cedex
13, France Frédéric van Wijland Laboratoire Matière et Systèmes Complexes
(CNRS UMR 7057), Université Paris Diderot - Paris 7, 10 rue Alice Domon et
Léonie Duquet, 75205 Paris cedex 13, France
###### Abstract
We find that a Lennard-Jones mixture displays a dynamic phase transition
between an active regime and an inactive one. By means of molecular dynamics
simulations and of a finite-size study, we show that the space-time dynamics
in the supercooled regime coincides with a dynamic first-order transition
point.
In order to realize that a physical system has fallen into a glassy state one
must either drive the system out of equilibrium, or investigate its relaxation
properties. It is indeed a well-known fact that since no static signature is
available, one has to resort to nonequilibrium protocols or measurements to
identify a glassy state. The supercooled regime, which sets in before the
material actually freezes into a solid glass, is also characterized by
anomalous temporal properties, such as the increase in the viscosity, and
ageing phenomena during relaxation processes. The density-density
autocorrelation function at a microscopic scale, instead of exhibiting a
straightforward exponential relaxation, develops a plateau (over a duration
conventionally called $\tau_{\alpha}$) before relaxing to zero. The present
work is concerned with the dynamical properties in the supercooled regime. The
idea that dynamical heterogeneities, long-lived large scale spatial
structures, are the trademark of the slow and intermittent dynamics in the
supercooled regime dates back almost thirty years FA and many developments
have occurred since hetero ; garrahanchandler ; merollegarrahanchandler .
Here, instead of resorting to local probes such as the van Hove function, the
self-intermediate scattering function, or the nonlinear dynamical
susceptibility (i.e. multi-point correlation functions in space and time), we
prefer to rely on a global characterization of the system’s dynamics.
More recently, it was advocated by Garrahan et
al.garrahanjacklecomtepitardvanduijvendijkvanwijland that, at the level of
Kinetically Constrained Models (KCM), dynamical heterogeneities appeared as
the consequence of a universal phase transition mechanism, largely independent
of the specific model under consideration. The phase transitions at work are
not of a conventional type: they occur in the space of trajectories, or
realizations, that the system has followed over a large time duration, instead
of occurring in the space of available configurations, as the liquid-gas
transition, or a putative thermodynamic glass transition, for instance, do.
There exists a well-established body of mathematical physics literature to
analyze these phase transitions, which is based, in spirit, on the
thermodynamic formalism of Ruelle ruelle ; gaspard , and which was adapted
appertlecomtevanwijland-2 and then exploited
garrahanjacklecomtepitardvanduijvendijkvanwijland in numerical and analytical
studies of the KCM’s. The idea behind these studies is to follow the
statistics, upon sampling the physical realizations of the system over a large
time duration, of a space and time extensive observable. This observable
characterizes the level of dynamical activity in the course of the system’s
time evolution. At this stage, the term activity refers to the intuitive
picture that an active trajectory is characterized by many collective
rearrangements needed to escape from energy basins, whereas an inactive
trajectory is dominated by the rapid in-cage rattling motion of the particles
without collective rearrangements. This activity observable is used to
partition trajectories into two groups (the active and the inactive ones)
depending on whether their activity is above or below the average activity.
An important step forward in probing the relevance of the dynamic phase
transition scenario to realistic glasses is the work of Hedges et al.
hedgesjackgarrahanchandler in which an atomistic model was considered. The
system studied by these authors is a mixture of two species of particles
interacting via a Lennard-Jones potential, otherwise known as the Kob-Andersen
(KA) model kobandersen , which has been shown to fall easily into a glassy
state without crystallizing. Endowed with a Monte-Carlo or a Newtonian
dynamics, Hedges et al. implemented the Transition Path Sampling (TPS) method
to produce the probability distribution function (pdf) of the activity, at
finite times. These authors succeeded in showing that, for a finite system, as
the observation time is increased, the pdf of the activity develops a bimodal
structure. In the light of previous works on lattice models for glass formers,
they interpreted their result as evidence for a bona fide dynamic phase
transition, expected to occur once the infinite size and infinite observation
time limits are reached. While the work of Hedges et al. is a definite
breakthrough towards a better understanding of atomistic glass formers, it
leaves a number of questions unanswered. These are the subject of the present
work. Dynamic phase transitions occur in the large time limit, as exemplified
by the pioneering works of Ott et al. ottwhithersyorke , Cvitanović
artusocvitanovickenny or Szépfalusy and Tél szepfalusytel (see Beck and
Schlögl beckschlogl for further references). The dynamic transitions at work
in glass formers are of a subtler form, since they only emerge when, in
addition, the infinite system size limit is taken.
Our first goal in this letter is to show the existence of a phase transition.
Our second goal is to be more precise about the location of the phase
transition itself. This is indeed a central issue: if the critical point is
away from the typical measurable activity then the transition scenario is at
best a crossover. If, on the contrary, it is shown that the phase transition
actually takes place for typical trajectories then experimentally observable
consequences are expected. Our third goal is to provide a rigorous definition
of the activity. It is our feeling that deeper insight will be gained if the
notion of activity can be characterized by standard physical concepts (like
forces between particles) rather than by phenomenological considerations.
In the present work, we consider a KA mixture, which we study by extending the
methods of Molecular Dynamics (MD) to the sampling of temporal large deviations.
By implementing a MD version of the cloning algorithm of Giardinà, Kurchan and
Peliti giardinakurchanpeliti we are able to follow the statistics of the
large deviations of the activity. Our activity observable is a physically
transparent quantity which can be related to the rate at which the system
escapes from a given location in phase space. As we shall soon see, the
typical trajectories of the system in phase space are characterized by strong
space time heterogeneities –the so-called dynamical heterogeneities– which
appear to be the by-product of an underlying first order dynamical transition
in which the activity plays the role of an order parameter. A typical
realization lies at a first-order transition point, which is thus
characterized by the coexistence of competing families of trajectories. Active
trajectories, which display a space- and time-extensive number of events in
which a particle escapes from its local cage, coexist with inactive
trajectories in which localization of the particles within their local
neighborhood dominates the dynamics. Without going further into the mathematical details of what the
activity is, we present in figure 1 snapshots of the activity map in a
three-dimensional KA mixture in two distinct situations.
Figure 1: In these snapshots, the diameter of the particles quantifies the
local activity: small points refer to mobile active particles, whereas large
blobs refer to blocked inactive particles. Red (blue) indicates that the
activity is larger (smaller) than the median one. Left: Activity map in a
typical configuration from a trajectory with excess activity with respect to
the average activity; there is a large number of mobile particles though
blocked particles also exist. Right: Activity map in a typical configuration
from a trajectory with less activity than the average activity; large blobs
dominate.
We now carry out our program and we begin by defining an activity observable:
given a set of particles interacting via a two-body potential $V(\text{\bf
r}_{i}-\text{\bf r}_{j})$, in which each particle $i$ is subjected to a force
${\bf F}_{i}=-\sum_{j\neq i}\bm{\nabla}V(\text{\bf r}_{i}-\text{\bf r}_{j})$,
we introduce the observable:
$V_{\text{eff}}=\sum_{i}\left[\frac{\beta}{4}{\bf
F}_{i}^{2}+\frac{1}{2}\bm{\nabla}_{\text{\bf r}_{i}}\cdot{\bf F}_{i}\right]$
(1)
The combination $\frac{\beta}{4}{\bf
F}_{i}^{2}+\frac{1}{2}\bm{\nabla}_{\text{\bf r}_{i}}\cdot{\bf F}_{i}$ can be
interpreted as the activity of a single particle. The quantity
$V_{\text{eff}}$ in (1) appears in the study of Brownian interacting particles
and measures the tendency for dynamical trajectories to evolve away from a
given configuration. Indeed, it can be argued
autierifacciolisegapederivaorland that the quantity $\exp(-\beta
V_{\text{eff}}\text{d}t)$ is proportional to the probability $P(x,t+dt|x,t)$
that the system has stayed in its configuration between $t$ and $t+\text{d}t$.
This means that $\beta|V_{\text{eff}}|$ is the rate at which the system
escapes its configuration. To understand how (1) is related to this escape
rate one writes $P(x,t+dt|x,t)=\langle x|e^{-\hat{H}\text{d}t}|x\rangle$ where
$\hat{H}$ is the Fokker-Planck operator of evolution of the system. Using the
detailed balance property of the dynamics, $\hat{H}$ can be symmetrized and
this leads to $P(x,t+dt|x,t)\sim\exp(-\beta V_{\text{eff}}\text{d}t)$, using
standard path-integral representation of the propagator $\langle
x|e^{-\hat{H}\text{d}t}|x\rangle$ (see autierifacciolisegapederivaorland for
details). Besides, if one regularizes the Brownian dynamics in the form of
hops on a lattice, our activity can be viewed as the continuum analog of the
activity introduced in Lecomte et al. lecomteappertvanwijland-1 which counted
the number of configuration changes undergone by a system over a given time
interval. Perhaps more interestingly still, $V_{\text{eff}}$ also appears as
the continuum limit analog of the dynamical complexity (a trajectory-dependent
Kolmogorov-Sinai entropy appertlecomtevanwijland-2 ) which further clarifies
its conceptual value. However, hand-waving arguments lead to a more concrete
understanding of the value of the effective potential $V_{\text{eff}}$.
Indeed, in the expression (1) minimizing the first term in the right hand side
drives the system away from regions of phase space where forces are nonzero.
In other words, it tends to favor mechanical equilibrium configurations
(whether stable or metastable). The second term in the right hand side of (1)
can be positive or negative: if negative, it will favor local minima, all the
more so as they are deep and steep; if positive, it will select local maxima.
As a result, there will be two classes of trajectories, according to the
values of $V_{\text{eff}}$. It can be shown that the average equilibrium value
of $V_{\text{eff}}$ is $\langle
V_{\text{eff}}\rangle=-\frac{\beta}{4}\sum_{i}\langle{\bf F}_{i}^{2}\rangle$, which is always negative.
For minimal (negative) values of $V_{\text{eff}}$, trajectories will explore
deep energy basins, for maximal values of $V_{\text{eff}}$, trajectories will
explore local maxima of the energy landscape.
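As a side check of the equilibrium average quoted above (not spelled out in the text; we assume Boltzmann sampling $\rho_{\text{eq}}\propto e^{-\beta U}$ with ${\bf F}_{i}=-\bm{\nabla}_{\text{\bf r}_{i}}U$ and vanishing boundary terms), an integration by parts gives $\langle\bm{\nabla}_{\text{\bf r}_{i}}\cdot{\bf F}_{i}\rangle=Z^{-1}\int e^{-\beta U}\,\bm{\nabla}_{\text{\bf r}_{i}}\cdot{\bf F}_{i}=-Z^{-1}\int\bm{\nabla}_{\text{\bf r}_{i}}\big(e^{-\beta U}\big)\cdot{\bf F}_{i}=-\beta\langle{\bf F}_{i}^{2}\rangle$, so that $\langle V_{\text{eff}}\rangle=\sum_{i}\big[\tfrac{\beta}{4}\langle{\bf F}_{i}^{2}\rangle-\tfrac{\beta}{2}\langle{\bf F}_{i}^{2}\rangle\big]=-\tfrac{\beta}{4}\sum_{i}\langle{\bf F}_{i}^{2}\rangle$.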
The total activity is defined as
$K(t)=\int_{0}^{t}\text{d}t^{\prime}V_{\text{eff}}(t^{\prime})$. In terms of
the local density $\rho(\text{\bf x},t)$, our activity reads:
$K=\int_{0}^{t}\text{d}t^{\prime}\Big{[}\frac{1}{4T}\int_{\text{\bf x},\text{\bf y},\text{\bf z}}\rho(\text{\bf x},t^{\prime})\rho(\text{\bf y},t^{\prime})\rho(\text{\bf z},t^{\prime})\,\bm{\nabla}V(\text{\bf x}-\text{\bf y})\cdot\bm{\nabla}V(\text{\bf x}-\text{\bf z})-\frac{1}{2}\int_{\text{\bf x},\text{\bf y}}\rho(\text{\bf x},t^{\prime})\rho(\text{\bf y},t^{\prime})\,\Delta V(\text{\bf x}-\text{\bf y})\Big{]}$ (2)
It is now apparent that $V_{\text{eff}}$ involves three-body effective
interactions, a fact that we a posteriori interpret by realizing that in order
to escape from an energy basin via a collective rearrangement, multi-body
interaction terms are needed.
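To make the definition concrete, the following Python sketch evaluates the per-particle activity of Eq. (1) for a small set of particles interacting through a radial pair potential. The Lennard-Jones form, the parameter values, and the absence of a cutoff or periodic boundaries are illustrative assumptions and not the setup used by the authors.

```python
import numpy as np

# Minimal sketch (not the authors' code): per-particle activity
#   v_i = (beta/4) |F_i|^2 + (1/2) div_{r_i} F_i
# for a radial pair potential V(r), using F_i = -sum_j grad V(r_i - r_j)
# and div_{r_i} F_i = -sum_j Laplacian V(r_i - r_j),
# with Laplacian V(r) = V''(r) + (d-1)/r * V'(r) in d dimensions.

def lj_derivatives(r, eps=1.0, sigma=1.0):
    """First and second radial derivatives of V(r) = 4 eps [(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    sr12 = sr6 ** 2
    v1 = 4.0 * eps * (-12.0 * sr12 + 6.0 * sr6) / r        # V'(r)
    v2 = 4.0 * eps * (156.0 * sr12 - 42.0 * sr6) / r ** 2   # V''(r)
    return v1, v2

def per_particle_activity(pos, beta=1.25, eps=1.0, sigma=1.0):
    """Return v_i for an (N, d) array of positions (free boundaries, no cutoff)."""
    n, d = pos.shape
    forces = np.zeros_like(pos)
    div_f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            v1, v2 = lj_derivatives(r, eps, sigma)
            forces[i] -= v1 * rij / r             # F_i = -grad_i V
            div_f[i] -= v2 + (d - 1) * v1 / r     # div_i F_i = -Laplacian V
    return 0.25 * beta * np.sum(forces ** 2, axis=1) + 0.5 * div_f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    positions = rng.uniform(0.0, 3.0, size=(10, 3))
    v = per_particle_activity(positions, beta=1.0 / 0.8)  # T = 0.8 as in the text
    print("V_eff =", v.sum())
```

Summing the returned array over particles gives $V_{\text{eff}}$ of Eq. (1) for the chosen configuration; any other pair potential can be substituted by replacing the two radial derivatives.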
Figure 2: The dynamical free energy per particle $\frac{\psi(s,N)}{N}$ is
shown as a function of $s$ for increasing sizes $N=45$, $82$, $155$, $250$ and
$393$. While the curves collapse onto a single master curve at $s<0$, a strong
size-dependent behavior is observed at $s>0$. On the $s>0$ side, the curves
display a rapid variation at a typical value $s_{c}(N)$ which decreases as $N$
increases, leading to the putative building up of a singularity of
$\psi(s,N)/N$ at $s=0$ as $N\to\infty$. The rescaled
$[\psi(s,N)-\psi(s_{c},N)]/N^{1+\alpha}$ with $\alpha\simeq 0.43$ is shown in
the inset: on the inactive side $s>0$ the curves collapse onto a single curve,
exemplifying the large $N$ finite size scaling. Applying the cloning algorithm
requires the product $|s\,V_{\text{eff}}\,\text{d}t|$ to be infinitesimally
small (in principle), a condition that is hardly verified at the left end of
the graph for the largest two values of $N$ (dashed lines). For these points,
this product reaches $0.5$, which accounts for the increased error.
Given the activity we have just defined, we set out to determine the
distribution of this quantity over a large number of realizations of the
process, that is, since the dynamics is deterministic, by sampling a
large number of initial states drawn from the equilibrium Boltzmann
distribution. Sampling the full distribution of the activity requires
sufficient statistics beyond its typical values. For that reason we have
preferred to work in an ensemble of trajectories in which the average
activity, rather than the activity, is fixed. In this canonical version, we
consider a large number of systems evolving in parallel, and between $t$ and
$t+\text{d}t$ we remove or add a system in a given configuration with a
cloning factor equal to $\exp(-sV_{\text{eff}}(t)\text{d}t)$. How to perform a
numerical simulation in such a canonical ensemble of time realizations is
precisely what the cloning algorithm of Giardinà et al. giardinakurchanpeliti prescribes. We
combined the cloning algorithm with the molecular dynamics simulation of the
interacting particles. Choosing a positive value of $s$ allows one to focus on
time realizations with a lower-than-average activity, while a negative $s$
selects over-active trajectories, and $s$ close to zero samples typical
activity trajectories. To this day, no one has been able to endow the
parameter conjugate to the activity $s$ with a concrete physical meaning, so
that the only physically realizable value of an $s$-ensemble is achieved for
$s=0$. As will shortly become clear, analyzing the $s=0$ properties gains
enormously from being able to vary $s$ away from 0, a theorist’s privilege.
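The following short Python sketch illustrates the reweighting-and-resampling step just described. The clone states are left abstract and the fixed-population resampling rule is one standard choice among several, so this is an assumed skeleton rather than the authors' implementation; the logarithm of the average cloning factor returned here is what accumulates into the population growth rate used below.

```python
import numpy as np

# Minimal sketch of one cloning (population-dynamics) step in the s-ensemble,
# in the spirit of Giardina, Kurchan and Peliti: each clone is reweighted by
# exp(-s * Veff * dt) and the population is resampled at fixed size N_c.
# Clone states are kept abstract (any object carrying positions/velocities).

def cloning_step(clones, veff, s, dt, rng):
    """clones: list of states; veff: array of V_eff values, one per clone."""
    weights = np.exp(-s * np.asarray(veff) * dt)
    mean_w = weights.mean()                        # average cloning factor
    probs = weights / weights.sum()
    n_c = len(clones)
    picks = rng.choice(n_c, size=n_c, p=probs)     # resample at fixed population size
    new_clones = [clones[k] for k in picks]
    return new_clones, np.log(mean_w)              # log factor feeds the free-energy estimate
```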
The distribution of the activity $K$ is fully encoded in its generating
function $Z(s,t,N)=\langle\text{e}^{-sK}\rangle$, or, alternatively, in the
corresponding cumulant generating function $\ln Z$. We define the intensive
dynamical free energy $\psi(s,N)=\lim_{t\to\infty}\frac{\ln Z}{t}$ for any
given finite size. From the knowledge of the function $\psi(s,N)$ we can
reconstruct the properties of the activity $K$. In numerical terms, in view of
the cloning algorithm used, $\psi$ stands for the growth rate of the
population of systems evolving in parallel. The details of the simulations are
as follows. We used the A-B mixture of kobandersen for samples of different
sizes. The samples all had the approximate ratio 80/20 for A/B particles. They
were first prepared at equilibrium at temperature $T=0.8$ by coupling to a
stochastic heat bath; the time step was $\text{d}t=2\times 10^{-2}$. During a very
long simulation at equilibrium, the $N_{c}$ clones were prepared. Then the
cloning algorithm of Giardinà et al.giardinakurchanpeliti was performed with
a small coupling to the heat bath, necessary to provide some stochasticity to
the exploration of trajectory space, as explained in giardinakurchanpeliti ;
tailleurkurchan for deterministic systems. In our simulations, the
convergence was checked by varying the number of clones and the duration of
the simulations. For $N=45,82$ the best results were found for $N_{c}=1000$
and $\tau=10\tau_{\alpha}$ ($\tau_{\alpha}$ is the relaxation time); for
$N=155,250,393$ the best results were found for $N_{c}=500$ and
$\tau=20\tau_{\alpha}$. For the largest system sizes, controlling fluctuations
in the space of clones would have required a substantial increase of $N_{c}$
when sampling the tails of the distribution (extreme values of $s$).
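As a companion to the step above, here is a hypothetical driver loop showing how the dynamical free energy is read off as a population growth rate; `propagate` and `compute_veff` stand for the MD integrator and the evaluation of $V_{\text{eff}}$, and the loop reuses the `cloning_step` sketch given earlier. All names and the structure of the loop are illustrative assumptions, not a description of the actual code.

```python
import numpy as np

# Minimal sketch: psi(s, N) estimated as a population growth rate by
# accumulating the logs of the average cloning factors over the run and
# dividing by the total time. The MD propagation with a weak heat bath
# is not reproduced here; propagate/compute_veff are user-supplied callables.

def estimate_psi(init_clones, propagate, compute_veff, s, dt, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    clones = list(init_clones)
    log_growth = 0.0
    for _ in range(n_steps):
        clones = [propagate(c, dt, rng) for c in clones]   # one MD step per clone
        veff = np.array([compute_veff(c) for c in clones])
        clones, log_factor = cloning_step(clones, veff, s, dt, rng)
        log_growth += log_factor
    return log_growth / (n_steps * dt)                     # estimate of psi(s, N)
```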
Figure 3: The average activity per particle $\langle V_{\text{eff}}\rangle/N$
is measured in an ensemble of trajectories biased by a weight $\text{e}^{-sK}$
and is plotted as a function of $s$. While the $s<0$ regime shows convergence
to a size independent limit of an order comparable to its equilibrium value,
the $s>0$ side confirms a strongly size-dependent behavior as $V_{\text{eff}}$
abruptly decreases to negative values. This sudden drop in $V_{\text{eff}}$
indicates that the system freezes into inactive states.
We now present the central result of the work, namely the dynamical free
energy $\psi(s,N)$ as a function of $s$ for increasing system sizes $N$.
Figure 2 shows that the dynamical free energy per particle
$\frac{\psi(s,N)}{N}$ settles to a well-defined limiting value for $s<0$ as
$N$ increases. This means that for $s<0$ there exists a thermodynamic limit.
On the $s<0$ side, where the activity is above its average value at $s=0$,
which we interpret in terms of active trajectories, the physical properties
are not far from those at $s=0$ as predicted from the Gibbs equilibrium
distribution. However, a dramatic change in behavior is observed for positive
values of $s$. Not only do the dynamical free energies exhibit a strong size
dependence, but a severe change in behavior is also observed on that side of
the parameter space. For $s>0$, the free energy rapidly increases, and the
activity rapidly decreases. The location $s_{c}(N)$ at which this rapid
increase sets in shows a strong $N$-dependence: $s_{c}(N)$ seems to decrease to
zero as $N$ increases. The precise location of $s_{c}(N)$ has been determined
with a spline fit, by locating the maximum of $\psi^{\prime\prime}(s)$; this is
illustrated in figure 4. One notes that given the existence of error bars,
there is an overall decrease of $s_{c}(N)$ as $N$ increases, but one cannot
exclude that $s_{c}(N)$ converges to a very small positive value. As in other
examples of first order dynamical transitions
garrahanjacklecomtepitardvanduijvendijkvanwijland , we remark that $s_{c}(N)$
remains positive for all finite $N$. Indeed, $\psi^{\prime}(0)=-\langle
V_{\text{eff}}\rangle$ is proportional to $N$, which means that $\psi(s)$ also
scales as $N$ in the vicinity of $s=0$, and since $s_{c}(N)$ marks the
transition to a regime where $\psi(s)$ scales differently with $N$, one has
$s_{c}(N)>0$.
Figure 4: $s_{c}(N)$ as a function of $N$.
The interpretation in terms of active vs. inactive trajectories is easier when
plotting the average activity as a function of $s$. This is done in figure 3,
where the average activity is measured independently: we see that in the
vicinity of $s_{c}(N)$, the behavior of $\langle V_{\text{eff}}\rangle/N$
changes abruptly from a smooth range of values at $s<0$ to a strongly negative
range for $s>0$. In this inactive regime, through rescaling (see the inset of
figure 2), we see that the free energy scales as $N^{1+\alpha}$ with
$\alpha\simeq 0.43$, in contrast to the purely extensive behavior of the
active regime. A similar difference of scaling exponents in $N$ is observed in
KCMs in two dimensions garrahanjacklecomtepitardvanduijvendijkvanwijland at
$s>0$: $\psi(s)$ is of order $L$ for the Fredrickson-Andersen model (FA) while
of order $1$ for the triangular lattice gas (TLG). This behavior is related to
the geometry of the remaining active sites in the system: those are distributed
along the border of fully inactive domains in the FA model, while they are
isolated in the TLG.
the KA mixture adopts configurations with non-trivial geometrical features for
the inactive particles in the inactive phase due to the effective long-range
interactions that develop in the $s>s_{c}$ states. Since large values of
$|V_{\text{eff}}|$ correspond to inactive histories, it is expected that
$\alpha>0$, as observed. We note that such finite-size behavior of the
activity with non-trivial exponents is known to occur in lattice KCMs
bodineautoninelli, where the values of those exponents are directly related to
the nature of the configurations appearing in the inactive state (which differ
from those of the active state).
This allows us to claim that there exists a phase transition from inactive to
active states as $s$ is varied. Our closely related goal was to identify the
location of the transition. We have shown that $s_{c}(N)$ displays a strong
size-dependence with an overall decrease for the range of $N$ values explored.
The two different well-defined scalings $\sim N^{1+\alpha}$ (resp. $\sim N$)
for $s>0$ (resp. $s<0$) indicate that we have reached the large $N$
asymptotics on our range of system sizes (see figure 2). We therefore conclude
that the most likely value at which the transition takes place in an infinite
system is either $s_{c}(\infty)=0$ or a very small positive value close to
$0$. We were not able to perform simulations for larger values of $N$ due to
the very large computation times needed.
The first order dynamic transition scenario observed in KCM’s is thus
confirmed in the atomistic model we have studied. As we mentioned earlier, the
location of the phase transition along the $s$-variable axis is an extremely
relevant issue since only the value $s=0$ is experimentally accessible. Note
however that identifying a transition point at $s=0$ is possible only on the
condition that $s$ is varied across 0. Therefore, finding evidence for the
transition occurring at $s=0$ renders an experimental observation of the
transition a credible achievement. The lesson to be drawn from the present
extensive simulation series is that, as expected, it is desirable to work with
systems as small as possible, yet large enough to allow collective effects
in trajectory space to develop.
This work was supported by a French Ministry of Foreign Affairs Alliance
grant. VL was supported in part by the Swiss NSF under MaNEP and Division II.
We greatly benefited from discussions with R.L. Jack. We thank W. Kob for
providing the software for the preparation of the configurations, and L.
Berthier for help with figure 1.
## References
* (1) G.H. Fredrickson, H.C. Andersen, Phys. Rev. Lett. 53 1244 (1984).
* (2) M.M. Hurley, P. Harrowell, J. Chem. Phys. 105 10521 (1996). M.D. Ediger, Annu. Rev. Phys. Chem. 51 99 (2000).
* (3) J.P. Garrahan and D. Chandler, Phys. Rev. Lett. 89, 035704 (2002).
* (4) M. Merolle, J.P. Garrahan and D. Chandler, Proc. Natl. Acad. Sci. USA 102, 10837 (2005); R.L. Jack, J.P. Garrahan and D. Chandler, J. Chem. Phys. 125, 184509 (2006).
* (5) J.P. Garrahan, R.L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk and F. van Wijland, Phys. Rev. Lett. 98, 195702 (2007); J. Phys. A 42, 075007 (2009).
* (6) D. Ruelle, Thermodynamic Formalism (Addison-Wesley, Reading, 1978).
* (7) P. Gaspard, Chaos, scattering and statistical mechanics (CUP, Cambridge, 1998).
* (8) V. Lecomte, C. Appert-Rolland, and F. van Wijland, Phys. Rev. Lett. 95, 010601 (2005).
* (9) L.O. Hedges, R.L. Jack, J.P. Garrahan, and D. Chandler, Science 323, 1309 (2009).
* (10) W. Kob and H.C. Andersen, Phys. Rev. E 48, 4364 (1993).
* (11) E. Ott, W. Withers, and J.A. Yorke, J. Stat. Phys. 36, 697 (1984).
* (12) R. Artuso, P. Cvitanović, and B.G. Kenny, Phys. Rev. A 39, 268 (1989).
* (13) P. Szépfalusy and T. Tél, Phys. Rev. A 35, 477 (1987).
* (14) C. Beck and F. Schlögl, _Thermodynamics of Chaotic Systems: An Introduction_, Cambridge University Press, 1993.
* (15) C. Giardinà, J. Kurchan, and L. Peliti, Phys. Rev. Lett. 96, 120603 (2006).
* (16) E. Autieri, P. Faccioli, M. Sega, F. Pederiva, and H. Orland, J. Chem. Phys. 130, 064106 (2009).
* (17) V. Lecomte, C. Appert-Rolland, and F. van Wijland, J. Stat. Phys. 127, 51 (2007).
* (18) J. Tailleur and J. Kurchan, Nature Physics 3, 203 (2007).
* (19) T. Bodineau and C. Toninelli, _Activity phase transition for constrained dynamics_, arXiv:1101.1760, accepted for publication in Comm. Math. Phys. (2011).
|
arxiv-papers
| 2011-05-12T13:06:19 |
2024-09-04T02:49:18.726905
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Estelle Pitard, Vivien Lecomte, Fr\\'ed\\'eric Van Wijland",
"submitter": "Vivien Lecomte",
"url": "https://arxiv.org/abs/1105.2460"
}
|
1105.2722
|
# Well-posedness of the Viscous Boussinesq System in Besov Spaces of Negative
Order Near Index $s=-1$
Chao Deng, Shangbin Cui (E-mail: deng315@yahoo.com.cn, cuisb3@yahoo.com.cn)
Department of Mathematics, Sun Yat-Sen University, Guangzhou,
Guangdong 510275, P. R. of China
###### Abstract
This paper is concerned with well-posedness of the Boussinesq system. We prove
that the $n$ ($n\geq 2$) dimensional Boussinesq system is well-posed for small
initial data $(\vec{u}_{0},\theta_{0})$ ($\nabla\cdot\vec{u}_{0}=0$) either in
$({B}^{-1}_{\infty,1}\cap{B^{-1,1}_{\infty,\infty}})\times{B}^{-1}_{p,r}$ or
in ${B^{-1,1}_{\infty,\infty}}\times{B}^{-1,\varepsilon}_{p,\infty}$ if
$r\in[1,\infty]$, $\varepsilon>0$ and $p\in(\frac{n}{2},\infty)$, where
$B^{s,\varepsilon}_{p,q}$ ($s\in\mathbb{R}$, $1\leq p,q\leq\infty$,
$\varepsilon>0$) is the logarithmic modification of the standard
Besov space $B^{s}_{p,q}$. We also prove that this system is well-posed for
small initial data in
$({B}^{-1}_{\infty,1}\cap{B^{-1,1}_{\infty,\infty}})\times({B}^{-1}_{\frac{n}{2},1}\cap{B^{-1,1}_{\frac{n}{2},\infty}})$.
Keywords: Boussinesq system; Navier-Stokes equations; well-posedness; Besov
spaces.
Mathematics Subject Classification: 76D05, 35Q30, 35B40.
## 1 Introduction
In this paper we will discuss the Cauchy problem for the normalized
$n$-dimensional viscous Boussinesq system which describes the natural
convection in a viscous incompressible fluid as follows:
$\displaystyle{\vec{u}}_{t}+(\vec{u}\cdot\nabla)\vec{u}+\nabla P=\Delta\vec{u}+\theta{\vec{a}}\quad\text{ in }\mathbb{R}^{n}\times(0,\infty),$ (1.1)
$\displaystyle{\rm div}\,\vec{u}=0\quad\text{ in }\mathbb{R}^{n}\times(0,\infty),$ (1.2)
$\displaystyle\theta_{t}+\vec{u}\cdot\nabla\theta=\Delta\theta\quad\text{ in }\mathbb{R}^{n}\times(0,\infty),$ (1.3)
$\displaystyle(\vec{u}(\cdot,t),\theta(\cdot,t))|_{t=0}=(\vec{u}_{0}(\cdot),\theta_{0}(\cdot))\quad\text{ in }\mathbb{R}^{n},$ (1.4)
where $\vec{u}=(u_{1}(x,t),u_{2}(x,t),\cdots,u_{n}(x,t))\in\mathbb{R}^{n}$ and
$P=P(x,t)\in\mathbb{R}$ denote the unknown vector velocity and the unknown
scalar pressure of the fluid, respectively. $\theta=\theta(x,t)\in\mathbb{R}$
denotes the density or the temperature. $\theta{\vec{a}}$ in (1.1) takes into
account the influence of the gravity and the stratification on the motion of
the fluid. The whole system is considered under initial condition
$(\vec{u}_{0},\theta_{0})=(\vec{u}_{0}(x),\theta_{0}(x))\in\mathbb{R}^{n+1}$.
The Boussinesq system is extensively used in the atmospheric sciences and
oceanographic turbulence (cf. [15] and references cited therein). Due to its
close relation to fluid dynamics, there are a lot of works on various aspects
of this system. Among the fruitful results we only cite papers on well-
posedness. In 1980, Cannon and DiBenedetto in [3] established well-posedness
of the full viscous Boussinesq system in Lebesgue space within the framework
of Kato semigroup. Around 1990, Morimoto, Hishida and Kagei investigated
weak solutions of this system in [16], [11] and [13]. Well-posedness results
in pseudomeasure-type spaces and weak $L^{p}$ spaces, among others, can be found in [10]
and references cited therein. Recently, the two dimensional Boussinesq system
with partial viscous terms has drawn a lot of attention, see [1, 5, 9, 12] and
references cited therein.
In this paper, we aim at achieving the lowest regularity results of the full
viscous Boussinesq system with dimension $n\geq 2$. Though it is hard to deal
with the coupled term $\vec{u}\cdot\nabla\theta$, we succeed in finding a suitable
product space with regularity index almost $-1$ in which the Boussinesq
system is well-posed. More precisely, we prove that if
$(\vec{u}_{0},\theta_{0})\in({B}^{-1}_{\infty,1}\cap{B^{-1,1}_{\infty,\infty}})\times({B}^{-1}_{\frac{n}{2},1}\cap{B^{-1,1}_{\frac{n}{2},\infty}})$
satisfies ${\rm div}\vec{u}_{0}=0$, where $B^{-1,1}_{p,\infty}$ ($1\leq
p\leq\infty$) is the logarithmic modification of the standard
Besov space $B^{-1}_{p,\infty}$ (see Definition 1.1 below), then there exists
a local solution to Eqs. (1.1)$\sim$(1.4). We also prove that if $\theta_{0}$
belongs to $B^{-1}_{p,r}$ with $p\in(\frac{n}{2},\infty)$ and $r\in[1,\infty]$
and $\vec{u}_{0}$ belongs to
${B}^{-1}_{\infty,1}\cap{B^{-1,1}_{\infty,\infty}}$ satisfying the divergence
free condition, then there exists a local solution to Eqs. (1.1)$\sim$(1.4).
The method we use here is essentially frequency localization.
As usual, we use the well-known fixed point argument and hence recast Eqs.
(1.1)$\sim$(1.4) as the corresponding integral equations:
$\displaystyle\vec{u}$
$\displaystyle=e^{t\Delta}\vec{u}_{0}-\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}(\vec{u}\cdot\nabla)\vec{u}ds+\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}(\theta{\vec{a}}){ds},$
(1.5) $\displaystyle\theta$
$\displaystyle=e^{t\Delta}\theta_{0}-\int_{0}^{t}e^{(t-s)\Delta}(\vec{u}\cdot\nabla{\theta})ds,$
(1.6)
where $\mathbb{P}$ is the Helmholtz projection operator given by
$\mathbb{P}=I+\nabla(-\Delta)^{-1}{\rm div}$ with $I$ representing the unit
operator. In what follows, we shall regard Eqs. (1.5) and (1.6) as a fixed
point system for the map
$\mathfrak{J}:\
(\vec{u},\theta)\mapsto\mathfrak{J}(\vec{u},\theta)=(\mathfrak{J}_{1}(\vec{u},\theta),\mathfrak{J}_{2}(\vec{u},\theta)),$
where $\mathfrak{J}_{1}(\vec{u},\theta)$ and
$\mathfrak{J}_{2}(\vec{u},\theta)$ denote the right-hand sides of (1.5) and
(1.6), respectively.
Before stating the main results of this paper, let us first recall the
nonhomogeneous Littlewood-Paley decomposition by means of a sequence of
operators $(\triangle_{j})_{j\in\mathbb{Z}}$ and then we define the Besov type
space $B^{s,\alpha}_{p,r}$ and the corresponding Chemin-Lerner type space
$\tilde{L}^{\rho}(B^{s,\alpha}_{p,r})$.
To this end, let $\gamma>1$ and $(\varphi,\chi)$ be a couple of smooth
functions valued in $[0,1]$, such that $\varphi$ is supported in the shell
$\\{\xi\in\mathbb{R}^{n};\gamma^{-1}\leq|\xi|\leq 2\gamma\\}$, $\chi$ is
supported in the ball $\\{\xi\in\mathbb{R}^{n};|\xi|\leq\gamma\\}$ and
$\displaystyle\chi(\xi)+\sum_{q\in\mathbb{N}}\varphi(2^{-q}\xi)=1,\quad\forall\
\xi\in\mathbb{R}^{n}.$
For $u\in\mathcal{S}^{\prime}(\mathbb{R}^{n})$, we define nonhomogeneous
dyadic blocks as follows:
$\displaystyle\triangle_{q}u:=0\quad\text{ if }q\leq-2,$
$\displaystyle\triangle_{-1}u:=\chi(D)u=\tilde{h}\ast{u}\quad\text{ with
}\tilde{h}:=\mathcal{F}^{-1}\chi,$
$\displaystyle\triangle_{q}u:=\varphi(2^{-q}D)u=2^{qn}\int{h}(2^{q}y)u(x-y)dy\quad\text{
with }h:=\mathcal{F}^{-1}\varphi\text{ if }q\geq 0.$
One can prove that
$\displaystyle u=\sum_{q\geq-1}\triangle_{q}u\quad\text{ in
}\mathcal{S}^{\prime}(\mathbb{R}^{n})$
for all tempered distribution $u$. The right-hand side is called
nonhomogeneous Littlewood-Paley decomposition of $u$. It is also convenient to
introduce the following partial sum operator:
$\displaystyle S_{q}u:=\sum_{p\leq{q-1}}\triangle_{p}u.$
Obviously we have $S_{0}u=\triangle_{-1}u$. Since
$\varphi(\xi)=\chi(\xi/2)-\chi(\xi)$ for all $\xi\in\mathbb{R}^{n}$, one can
prove that
$\displaystyle S_{q}u=\chi(2^{-q}D)u=2^{qn}\int\tilde{h}(2^{q}y)u(x-y)dy\quad\text{
for all }q\in\mathbb{N}.$
Let $\gamma=4/3$. Then we have the following support properties: for any
$u\in\mathcal{S}^{\prime}(\mathbb{R}^{n})$ and
$v\in\mathcal{S}^{\prime}(\mathbb{R}^{n})$, there holds
$\displaystyle\triangle_{k}\triangle_{q}u$ $\displaystyle\equiv 0\quad\text{
for }|k-q|\geq 2,$ (1.7) $\displaystyle\triangle_{k}(S_{q-1}u\triangle_{q}v)$
$\displaystyle\equiv 0\quad\text{ for }|k-q|\geq 5,$ (1.8)
$\displaystyle\triangle_{k}(\triangle_{q}u\triangle_{q+l}v)$
$\displaystyle\equiv 0\quad\text{ for }|l|\leq 1,\;\;k\geq q+4.$ (1.9)
###### Definition 1.1.
Let $T>0$, $-\infty<s<\infty$ and $1\leq p$, $r$, $\rho\leq\infty$.
(1) We say that a tempered distribution $f\in B^{s,\alpha}_{p,r}$ if and only
if
$\displaystyle\Big{(}\sum_{q\geq-1}2^{qrs}(3+q)^{\alpha{r}}\|\triangle_{q}f\|_{p}^{r}\Big{)}^{\frac{1}{r}}<\infty$
(1.10)
(with the usual convention for $r=\infty$).
(2) We say that a tempered distribution
$u\in\tilde{L}^{\rho}_{T}(B^{s,\alpha}_{p,r})$ if and only if
$\displaystyle\|u\|_{\tilde{L}^{\rho}_{T}(B^{s,\alpha}_{p,r})}$
$\displaystyle:=\Big{(}\sum_{q}2^{qrs}(3+q)^{\alpha
r}\|\triangle_{q}u\|^{r}_{L^{\rho}(0,T;L^{p}_{x})}\Big{)}^{\frac{1}{r}}<\infty.$
(1.11)
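As an illustration of how the dyadic blocks and the weighted norm of Definition 1.1 can be realized numerically, here is a small Python sketch on a one-dimensional periodic grid. The sharp (non-overlapping) frequency cutoffs used in place of the smooth pair $(\chi,\varphi)$, the grid size, the truncation in $q$ and the restriction to finite $p$, $r$ are all simplifying assumptions made only for illustration.

```python
import numpy as np

# Minimal numerical sketch (one space dimension, periodic grid) of the
# nonhomogeneous dyadic blocks Delta_q and of the weighted norm (1.10).
# Sharp cutoffs replace the smooth functions chi, phi.

def dyadic_blocks(u, q_max=8):
    """Return [Delta_{-1} u, Delta_0 u, ..., Delta_{q_max} u]."""
    n = u.size
    u_hat = np.fft.fft(u)
    xi = np.abs(np.fft.fftfreq(n, d=1.0 / n))                 # integer frequencies |xi|
    blocks = [np.real(np.fft.ifft(u_hat * (xi <= 1.0)))]      # Delta_{-1}: |xi| <= 1
    for q in range(q_max + 1):                                 # Delta_q: 2^q < |xi| <= 2^(q+1)
        shell = (xi > 2.0 ** q) & (xi <= 2.0 ** (q + 1))
        blocks.append(np.real(np.fft.ifft(u_hat * shell)))
    return blocks

def besov_norm(u, s, alpha, p, r, q_max=8):
    """Approximate the B^{s,alpha}_{p,r} norm of Definition 1.1 (1), finite p and r."""
    n = u.size
    terms = []
    for q, block in enumerate(dyadic_blocks(u, q_max), start=-1):
        lp = (np.sum(np.abs(block) ** p) / n) ** (1.0 / p)     # normalized L^p norm
        terms.append((2.0 ** (q * s) * (3 + q) ** alpha * lp) ** r)
    return sum(terms) ** (1.0 / r)

if __name__ == "__main__":
    x = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
    u = np.sign(np.sin(x))                                     # a square wave as test function
    print(besov_norm(u, s=-1.0, alpha=1.0, p=2.0, r=2.0))
```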
Remarks. (i) The definition (1) is essentially due to Yoneda [19] where he
considered the homogeneous version of the space $B^{s,\alpha}_{p,r}$ (see also
remarks there). Note that by using the heat semigroup characterization of
these spaces (see Lemma 4.1 in Section 4), we see that
$B^{-1,1}_{\infty,\infty}$ coincides with the space
$B^{-1(\ln)}_{\infty,\infty}$ considered by the second author in his recent
work [7]. The definition (2) in the case $\alpha=0$ (note that
$B^{s,0}_{q,r}=B^{s}_{q,r}$) is due to Chemin et al. (cf. [6, 8]).
(ii) Similar to the case $\alpha=0$ (see [8] and references cited therein), by
using the Minkowski inequality we see that for $0\leq\alpha\leq\beta<\infty$,
$\displaystyle\|f\|_{\tilde{L}^{\rho}_{T}(B^{s,\alpha}_{p,r})}\leq\|f\|_{{L}^{\rho}_{T}B^{s,\beta}_{p,r}}\
\ \text{if
}r\geq\rho,\quad\|f\|_{{L}^{\rho}_{T}B^{s,\alpha}_{p,r}}\\!\leq\\!\|f\|_{\tilde{L}^{\rho}_{T}(B^{s,\beta}_{p,r})}\
\ \text{if }r\leq\rho.$
We now state the main results. In the first two results we consider the case
where the first component $\vec{u}_{0}$ of the initial data lies in the space
$B^{-1}_{\infty,1}\cap B^{-1,1}_{\infty,\infty}$. In the third result we
consider the case where $\vec{u}_{0}$ lies in the less regular space
$B^{-1,1}_{\infty,\infty}$. As we shall see, in this case we need the second
component $\theta_{0}$ of the initial data to lie in a more regular space.
###### Theorem 1.2.
Let $n\geq 2$. Given $T>0$, there exist $\mu_{1}$, $\mu_{2}>0$ such that for
any $(\vec{u}_{0},\theta_{0})\in(B^{-1}_{\infty,1}\cap
B^{-1,1}_{\infty,\infty})\times(B^{-1}_{\frac{n}{2},1}\cap
B^{-1,1}_{\frac{n}{2},\infty})$ satisfying
$\left\\{\begin{aligned}
&\|\vec{u}_{0}\|_{B^{-1}_{\infty,1}}+\|\vec{u}_{0}\|_{{B^{-1,1}_{\infty,\infty}}}\leq\mu_{1}\
\ \hbox{ and }\ \ {\rm div}\vec{u}_{0}=0,\\\
&\|\theta_{0}\|_{B^{-1}_{{\frac{n}{2}},1}}+\|\theta_{0}\|_{{B^{-1,1}_{{\frac{n}{2}},\infty}}}\leq\mu_{2},\end{aligned}\right.$
the Boussinesq system has a unique solution $(\vec{u},\theta)$ in
$\Big{(}\tilde{L}^{2}_{T}(B^{0}_{\infty,1})\cap\tilde{L}^{2}_{T}(B^{0,1}_{\infty,\infty})\Big{)}\times\Big{(}\tilde{L}^{2}_{T}(B^{0}_{{\frac{n}{2}},1})\cap\tilde{L}^{2}_{T}(B^{0,1}_{{\frac{n}{2}},\infty})\Big{)}$
and $C_{w}\Big{(}[0,T];B^{-1}_{\infty,1}\cap
B^{-1,1}_{\infty,\infty}\Big{)}\times C\Big{(}[0,T];B^{-1}_{\frac{n}{2},1}\cap
B^{-1,1}_{\frac{n}{2},\infty}\Big{)}$ satisfying
$\|\vec{u}\|_{\tilde{L}^{2}_{T}(B^{0}_{\infty,1})}+\|\vec{u}\|_{\tilde{L}^{2}_{T}(B^{0,1}_{\infty,\infty})}\leq
2\mu_{1}\quad\text{ and
}\quad\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{{\frac{n}{2}},1})}+\|\theta\|_{\tilde{L}^{2}_{T}(B^{0,1}_{{\frac{n}{2}},\infty})}\leq
2\mu_{2}.$
###### Theorem 1.3.
Let $n\geq 2$, $1\leq r\leq\infty$ and $p\in(\frac{n}{2},\infty)$. Given
$T>0$, there exist $\mu_{1}$, $\mu_{2}>0$ such that for any
$(\vec{u}_{0},\theta_{0})\in(B^{-1}_{\infty,1}\cap
B^{-1,1}_{\infty,\infty})\times B^{-1}_{p,r}$ satisfying
$\left\\{\begin{aligned}
\|\vec{u}_{0}\|_{B^{-1}_{\infty,1}}+\|\vec{u}_{0}\|_{{B^{-1,1}_{\infty,\infty}}}&\leq\mu_{1}\
\ \hbox{ and }\ \ {\rm div}\vec{u}_{0}=0,\\\
\|\theta_{0}\|_{{B^{-1}_{p,r}}}&\leq\mu_{2},\end{aligned}\right.$
the Boussinesq system has a unique solution $(\vec{u},\theta)$ in
$\Big{(}\tilde{L}^{2}_{T}(B^{0}_{\infty,1})\cap\tilde{L}^{2}_{T}(B^{0,1}_{\infty,\infty})\Big{)}\times\tilde{L}^{2}_{T}(B^{0}_{p,r})$
and $C_{w}([0,T];B^{-1}_{\infty,1}\cap B^{-1,1}_{\infty,\infty})\times
C([0,T];B^{-1}_{p,r\neq\infty})$ or $C_{w}([0,T];B^{-1}_{\infty,1}\cap
B^{-1,1}_{\infty,\infty})\times C_{w}([0,T];B^{-1}_{p,\infty})$ satisfying
$\displaystyle\|\vec{u}\|_{\tilde{L}^{2}_{T}(B^{0}_{\infty,1})}+\|\vec{u}\|_{\tilde{L}^{2}_{T}(B^{0,1}_{\infty,\infty})}\leq
2\mu_{1}\quad\hbox{ and
}\quad\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{p,r})}\leq 2\mu_{2}.$
###### Theorem 1.4.
Let $n\geq 2$, $\varepsilon>0$ and $p\in(\frac{n}{2},\infty)$. Given $0<T\leq
1$, there exist $\mu_{1}=\mu_{1}(\varepsilon)$,
$\mu_{2}=\mu_{2}(\varepsilon)>0$ such that for any
$(\vec{u}_{0},\theta_{0})\in B^{-1,1}_{\infty,\infty}\times
B^{-1,\varepsilon}_{p,\infty}$ satisfying
$\left\\{\begin{aligned}
\|\vec{u}_{0}\|_{{B^{-1,1}_{\infty,\infty}}}&\leq\mu_{1}\ \ \hbox{ and }\ \
{\rm div}\vec{u}_{0}=0,\\\
\|\theta_{0}\|_{{B^{-1,\varepsilon}_{p,\infty}}}&\leq\mu_{2},\end{aligned}\right.$
the Boussinesq system has a unique solution $(\vec{u},\theta)$ in
$C_{w}([0,T];B^{-1,1}_{\infty,\infty})\times
C_{w}([0,T];B^{-1,\varepsilon}_{p,\infty})$ satisfying
$\sup_{0<t<T}t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\|\vec{u}\|_{\infty}\leq
2\mu_{1}\quad\hbox{ and
}\quad\sup_{0<t<T}t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\|\theta\|_{p}\leq
2\mu_{2}.$
Later on, we shall use $C$ and $c$ to denote positive constants which depend
on the dimension $n$ and on $|\vec{a}|$, might depend on $p$, and may change from line
to line. $\mathcal{F}f$ and $\hat{f}$ stand for the Fourier transform of $f$ with
respect to space variable and $\mathcal{F}^{-1}$ stands for the inverse
Fourier transform. We denote $A\leq{CB}$ by $A\lesssim B$ and
$A\lesssim{B}\lesssim{A}$ by $A\sim{B}$. For any $1\leq{\rho,q}\leq\infty$, we
denote $L^{q}(\mathbb{R}^{n})$, $L^{\rho}(0,T)$ and
$L^{\rho}(0,T;L^{q}(\mathbb{R}^{n}))$ by $L^{q}_{x}$, $L^{\rho}_{T}$ and
$L^{\rho}_{T}L^{q}_{x}$, respectively. We denote $\|f\|_{L^{p}_{x}}$ by
$\|f\|_{p}$ for short. In what follows we will not distinguish vector valued
function space and scalar function space if there is no confusion.
We use two different methods, in the spirit of Chemin et al. and of Kato et al.
respectively, to prove Theorems 1.2$\sim$1.3 and Theorem 1.4. Therefore we
write their proofs in separate sections. In Sect. 2 we introduce the
paradifferential calculus results, while in Sect. 3 we prove Theorems
1.2$\sim$1.3. Finally, in Sect. 4 we prove Theorem 1.4.
## 2 Paradifferential calculus
In this section, we prove several preliminary results concerning the
paradifferential calculus. We first recall some fundamental results.
###### Lemma 2.1.
(Bernstein) Let $k$ be in $\mathbb{N}\cup\\{0\\}$ and $0<R_{1}<R_{2}$. There
exists a constant $C$ depending only on $R_{1},R_{2}$ and dimension $n$, such
that for all $1\leq a\leq b\leq\infty$ and $u\in L^{a}_{x}$, we have
$\displaystyle{\rm supp}\hat{u}\subset
B(0,R_{1}\lambda),\quad\sup_{|\alpha|=k}\|\partial^{\alpha}{u}\|_{b}$
$\displaystyle\leq C^{k+1}\lambda^{k+n(1/a-1/b)}\|u\|_{a},$ (2.1)
$\displaystyle{\rm supp}\hat{u}\subset
C(0,R_{1}\lambda,R_{2}\lambda),\quad\sup_{|\alpha|=k}\|\partial^{\alpha}{u}\|_{a}$
$\displaystyle\sim C^{k+1}\lambda^{k}\|u\|_{a}.$ (2.2)
###### Lemma 2.2.
[8, 18] Let $1\leq p<\infty$. Then we have the following assertions:
(1)
$B^{0}_{\infty,1}\hookrightarrow\mathcal{C}\cap{L}^{\infty}_{x}\hookrightarrow
L^{\infty}_{x}\hookrightarrow B^{0}_{\infty,\infty}$.
(2) $B^{0}_{p,1}\hookrightarrow L^{p}_{x}\hookrightarrow B^{0}_{p,\infty}$.
###### Lemma 2.3.
(1) Let $1\leq p\leq\infty$, $0\leq\beta\leq\alpha<\infty$, $-\infty<s<\infty$
and $1\leq r_{1}<r_{2}\leq\infty$. Then
$\displaystyle B^{s,\alpha}_{p,r_{1}}\hookrightarrow
B^{s,\alpha}_{p,r_{2}}\hookrightarrow B^{s,\beta}_{p,r_{2}}.$
(2) Let $1<\tilde{r}\leq\infty$, $1\leq p\leq\infty$ and $-\infty<s<\infty$.
For any $\epsilon>0$, we have
$\displaystyle B^{s+\epsilon}_{p,\infty}\hookrightarrow
B^{s,1}_{p,\infty}\hookrightarrow{B^{s}_{p,\tilde{r}}},\quad
B^{s+\epsilon}_{p,\infty}\hookrightarrow{B^{s}_{p,1}}\hookrightarrow{B^{s}_{p,\tilde{r}}}.$
(2.3)
(3) There is no inclusion relation between the spaces $B^{0}_{\infty,1}$ and
$B^{0,1}_{\infty,\infty}$.
###### Proof.
It suffices to prove $(3)$. Similar to [19], we set
$\displaystyle{f}=\sum_{j=-1}^{\infty}a_{j}\delta_{2^{j}}\text{\ for \
}\\{a_{j}\\}_{j=-1}^{\infty}\subset\mathbb{R},$
where $\delta_{z}$ is the Dirac delta function massed at $z\in\mathbb{R}^{n}$.
Then we have
$\displaystyle\|f\|_{B^{0}_{\infty,1}}\simeq\sum_{j\geq-1}|a_{j}|,\quad\|f\|_{B^{0,1}_{\infty,\infty}}\simeq\sup_{j\geq-1}(j+3)|a_{j}|.$
So if we take $a_{j}=(j+3)^{-1}$ for $j\geq 0$ and $a_{j}=0$ for $j<0$, then
$\|f\|_{B^{0,1}_{\infty,\infty}}\simeq 1$, $\|f\|_{B^{0}_{\infty,1}}=\infty$.
Therefore, $B^{0,1}_{\infty,\infty}$ is not included in $B^{0}_{\infty,1}$.
Next, let $\delta_{jk}$ be Kronecker’s delta. For fixed $k\in\mathbb{N}$, if
we take $a_{j}=\frac{\delta_{kj}}{3+j}$ for $j\geq 0$ and $a_{j}=0$ for $j<0$,
then we have $\|f\|_{B^{0,1}_{\infty,\infty}}\simeq 1$ and
$\|f\|_{B^{0}_{\infty,1}}=\frac{1}{k}$. Since $k$ is arbitrary,
$B^{0}_{\infty,1}$ is not included in $B^{0,1}_{\infty,\infty}$. ∎
We now begin our discussion on paradifferential calculus.
###### Lemma 2.4.
For any $p,p_{1},p_{2},r\in[1,\infty]$ satisfying
$\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}$, the bilinear map $(u,v)\mapsto
uv$ is bounded from $(B^{0}_{p_{1},1}\cap B^{0,1}_{p_{1},\infty})\times
B^{0}_{p_{2},r}$ to $B^{0}_{p,r}$, i.e.
$\displaystyle\|uv\|_{B^{0}_{p,r}}\lesssim\|u\|_{B^{0}_{p_{1},1}\cap
B^{0,1}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},r}}.$ (2.4)
###### Proof.
Following Bony [2] we write
$\displaystyle uv=\mathcal{T}(u,v)+\mathcal{T}(v,u)+\mathcal{R}(u,v),$
where
$\displaystyle\mathcal{T}(u,v)=\sum_{q\geq-1}S_{q-1}u\triangle_{q}v,\quad\mathcal{R}(u,v)=\sum_{l=-1}^{1}\sum_{q\geq-1}\triangle_{q}u\triangle_{q+l}v.$
The estimate of $\mathcal{T}(u,v)$ is simple. Indeed, by Proposition 1.4.1 (i)
of [8] we know that for any $p,r\in[1,\infty]$, $\mathcal{T}$ is bounded from
$L^{\infty}_{x}\times{B^{0}_{p,r}}$ to $B^{0}_{p,r}$. By slightly modifying
the proof of that proposition, we see that for any
$p,p_{1},p_{2},r\in[1,\infty]$ satisfying
$\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}$, $\mathcal{T}$ is also bounded
from $L^{p_{1}}_{x}\times{B^{0}_{p_{2},r}}$ to $B^{0}_{p,r}$. Thus for any
$p,p_{1},p_{2},r\in[1,\infty]$ we have
$\displaystyle\|\mathcal{T}(u,v)\|_{B^{0}_{p,r}}\lesssim\|u\|_{L^{p_{1}}}\|v\|_{B^{0}_{p_{2},r}}\lesssim\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0}_{p_{2},r}}.$
(2.5)
In what follows we estimate $\mathcal{T}(v,u)$ and $\mathcal{R}(u,v)$. By
interpolation, it suffices to consider the two end point cases $r=1$ and
$r=\infty$.
To estimate $\|\mathcal{T}(v,u)\|_{B^{0}_{p,1}}$, we use (1.8) to deduce
$\displaystyle\|\mathcal{T}(v,u)\|_{B^{0}_{p,1}}=$
$\displaystyle\sum_{k\geq-1}\|\triangle_{k}(\sum_{q\geq-1}S_{q-1}v\triangle_{q}u)\|_{p}\lesssim\sum_{k\geq-1}\sum_{|q-k|\leq
4,q\geq-1}\|S_{q-1}v\|_{p_{2}}\|\triangle_{q}u\|_{p_{1}}$
$\displaystyle\lesssim$
$\displaystyle\|v\|_{p_{2}}\sum_{k\geq-1}\|\triangle_{k}u\|_{p_{1}}\lesssim\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0}_{p_{2},1}}.$
(2.6)
To estimate $\|\mathcal{T}(v,u)\|_{B^{0}_{p,\infty}}$ we first note that
$\displaystyle\|S_{q-1}v\|_{p_{2}}\lesssim$
$\displaystyle\sum_{j=-1}^{q-2}\|\triangle_{j}v\|_{p_{2}}\lesssim(q+3)\|v\|_{B^{0}_{p_{2},\infty}}.$
Using this inequality and (1.8) we see that
$\displaystyle\|\mathcal{T}(v,u)\|_{B^{0}_{p,\infty}}=$
$\displaystyle\sup_{k\geq-1}\|\triangle_{k}(\sum_{q\geq-1}S_{q-1}v\triangle_{q}u)\|_{p}\lesssim\sup_{k\geq-1}\sum_{|q-k|\leq
4,q\geq-1}\|S_{q-1}v\|_{p_{2}}\|\triangle_{q}u\|_{p_{1}}$
$\displaystyle\lesssim$
$\displaystyle\|v\|_{B^{0}_{p_{2},\infty}}\sup_{k\geq-1}\sum_{|q-k|\leq
4,q\geq-1}(q+3)\|\triangle_{q}u\|_{p_{1}}\lesssim\|u\|_{B^{0,1}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},\infty}}.$
(2.7)
To estimate $\|\mathcal{R}(u,v)\|_{B^{0}_{p,1}}$, we write
$\displaystyle\|\mathcal{R}(u,v)\|_{B^{0}_{p,1}}\lesssim$
$\displaystyle\sum_{l=-1}^{1}\sum_{k\geq-1}\|\triangle_{k}(\sum_{q\geq-1}\triangle_{q}u\triangle_{q+l}v)\|_{p}$
$\displaystyle\lesssim$
$\displaystyle\sum_{l=-1}^{1}\sum_{k=-1}^{3}\|\triangle_{k}(\sum_{q\geq-1}\\!\\!\triangle_{q}u\triangle_{q+l}v)\|_{p}+\sum_{l=-1}^{1}\sum_{k\geq
4}\|\triangle_{k}(\sum_{q\geq-1}\triangle_{q}u\triangle_{q+l}v)\|_{p}$
$\displaystyle:=$ $\displaystyle I_{1}+I_{2}.$
For $I_{1}$ we have
$\displaystyle I_{1}$
$\displaystyle\lesssim\sum_{l=-1}^{1}\|\sum_{q\geq-1}\triangle_{q}u\triangle_{q+l}v\|_{p}\lesssim\sum_{l=-1}^{1}\sum_{q\geq-1}\|\triangle_{q}u\|_{p_{1}}\|\triangle_{q+l}v\|_{p_{2}}\lesssim\|u\|_{B^{0}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},1}}.$
For $I_{2}$, by using (1.9) we deduce
$\displaystyle I_{2}\lesssim$ $\displaystyle\sum_{l=-1}^{1}\sum_{k\geq
4}\sum_{q\geq{k-3}}\|\triangle_{q}u\|_{p_{1}}\|\triangle_{q+l}v\|_{p_{2}}\lesssim\sum_{l=-1}^{1}\sum_{q\geq
1}\sum_{4\leq k\leq
3+q}\|\triangle_{q}u\|_{p_{1}}\|\triangle_{q+l}v\|_{p_{2}}$
$\displaystyle\lesssim$ $\displaystyle\sum_{l=-1}^{1}\sum_{q\geq
1}(q+3)\|\triangle_{q}u\|_{p_{1}}\|\triangle_{q+l}v\|_{p_{2}}\lesssim\|u\|_{B^{0,1}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},1}}.$
Hence
$\displaystyle\|\mathcal{R}(u,v)\|_{B^{0}_{p,1}}\lesssim\|u\|_{B^{0,1}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},1}}.$
(2.8)
Similarly we have
$\displaystyle\|\mathcal{R}(u,v)\|_{B^{0}_{p,\infty}}$
$\displaystyle\lesssim\sum_{l=-1}^{1}\sup_{k\geq-1}\|\triangle_{k}(\sum_{q\geq-1}\triangle_{q}u\triangle_{q+l}v)\|_{p}$
$\displaystyle\lesssim\sum_{l=-1}^{1}\Big{(}\sum_{q\geq-1}\|\triangle_{q}u\|_{p_{1}}\|\triangle_{q+l}v\|_{p_{2}}\Big{)}$
$\displaystyle\lesssim\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0}_{p_{2},\infty}}.$
(2.9)
From (2.5)$\sim$(2.9) and interpolation, we obtain the desired estimate. This
completes the proof of Lemma 2.4. ∎
###### Lemma 2.5.
For any $p,p_{1},p_{2}\in[1,\infty]$ satisfying
$\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}$, the bilinear map $(u,v)\mapsto
uv$ is bounded from $(B^{0}_{p_{1},1}\cap
B^{0,1}_{p_{1},\infty})\times(B^{0}_{p_{2},1}\cap B^{0,1}_{p_{2},\infty})$ to
$B^{0}_{p,1}\cap B^{0,1}_{p,\infty}$, i.e.
$\displaystyle\|uv\|_{B^{0}_{p,1}\cap
B^{0,1}_{p,\infty}}\lesssim\|u\|_{B^{0}_{p_{1},1}\cap
B^{0,1}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},1}\cap B^{0,1}_{p_{2},\infty}}.$
(2.10)
In particular, $B^{0}_{\infty,1}\cap B^{0,1}_{\infty,\infty}$ is a Banach
algebra.
###### Proof.
By Lemma 2.4, we only need to prove that
$\displaystyle\|uv\|_{B^{0,1}_{p,\infty}}\lesssim\|u\|_{B^{0}_{p_{1},1}\cap
B^{0,1}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},1}\cap{B}^{0,1}_{p_{2},\infty}}.$
(2.11)
As before we decompose $uv$ into the sum of $\mathcal{T}(u,v)$,
$\mathcal{T}(v,u)$ and $\mathcal{R}(u,v)$. To estimate
$\|\mathcal{T}(u,v)\|_{B^{0,1}_{p,\infty}}$, we use (1.8) to deduce
$\displaystyle\|\mathcal{T}(u,v)\|_{B^{0,1}_{p,\infty}}=$
$\displaystyle\sup_{k\geq-1}(k+3)\|\triangle_{k}(\sum_{q\geq-1}S_{q-1}u\triangle_{q}v)\|_{p}$
$\displaystyle\lesssim$
$\displaystyle\sup_{k\geq-1}(k+3)\|\triangle_{k}(\sum_{|q-k|\leq
4,q\geq-1}S_{q-1}u\triangle_{q}v)\|_{p}$ $\displaystyle\lesssim$
$\displaystyle\sup_{k\geq-1}(k+3)\sum_{|q-k|\leq
4,q\geq-1}\|S_{q-1}u\|_{p_{1}}\|\triangle_{q}v\|_{p_{2}}$
$\displaystyle\lesssim$
$\displaystyle\|u\|_{p_{1}}\sup_{q\geq-1}(q+3)\|\triangle_{q}v\|_{p_{2}}$
$\displaystyle\lesssim$
$\displaystyle\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0,1}_{p_{2},\infty}}.$ (2.12)
The estimate of $\|\mathcal{T}(v,u)\|_{B^{0,1}_{p,\infty}}$ is similar, with
minor modifications. Indeed,
$\displaystyle\|\mathcal{T}(v,u)\|_{B^{0,1}_{p,\infty}}=$
$\displaystyle\sup_{k\geq-1}(k+3)\|\triangle_{k}(\sum_{q\geq-1}S_{q-1}v\triangle_{q}u)\|_{p}$
$\displaystyle\lesssim$ $\displaystyle\sup_{k\geq-1}(k+3)\sum_{|q-k|\leq
4,q\geq-1}\|S_{q-1}v\|_{p_{2}}\|\triangle_{q}u\|_{p_{1}}$
$\displaystyle\lesssim$
$\displaystyle\|v\|_{p_{2}}\sup_{q\geq-1}(q+3)\|\triangle_{q}u\|_{p_{1}}$
$\displaystyle\lesssim$
$\displaystyle\|u\|_{B^{0,1}_{p_{1},\infty}}\|v\|_{B^{0}_{p_{2},1}}.$ (2.13)
To estimate $\|\mathcal{R}(u,v)\|_{B^{0,1}_{p,\infty}}$ we write
$\displaystyle\|\mathcal{R}(u,v)\|_{B^{0,1}_{p,\infty}}\\!$
$\displaystyle\\!\leq\\!\\!\sum_{l=-1}^{1}\sup_{k\geq-1}(3+k)\|\triangle_{k}(\sum_{q\geq-1}\triangle_{q}u\triangle_{q+l}v)\|_{p}$
$\displaystyle\leq\sum_{l=-1}^{1}\sup_{7\geq
k\geq-1}(3+k)\|\triangle_{k}(\sum_{q\geq-1}\triangle_{q}u\triangle_{q+l}v)\|_{p}$
$\displaystyle\quad+\sum_{l=-1}^{1}\sup_{q+3\geq k\geq
8}(3+k)\|\triangle_{k}(\sum_{q\geq 5}\triangle_{q}u\triangle_{q+l}v)\|_{p}$
$\displaystyle:=$ $\displaystyle I_{3}+I_{4}.$
For $I_{3}$ we have
$\displaystyle I_{3}$
$\displaystyle\lesssim\sum_{l=-1}^{1}\sum_{q\geq-1}\|\triangle_{q}u\|_{p_{1}}\|\triangle_{q+l}v\|_{p_{2}}\lesssim\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0}_{p_{2},\infty}}\lesssim\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0,1}_{p_{2},\infty}}.$
For $I_{4}$ we have
$\displaystyle I_{4}$ $\displaystyle=\sum_{l=-1}^{1}\sup_{q+3\geq k\geq
8}(3+k)\|\triangle_{k}(\sum_{q\geq 5}\triangle_{q}u\triangle_{q+l}v)\|_{p}$
$\displaystyle\lesssim\sum_{l=-1}^{1}\sum_{q\geq
4}(3+q)\|\triangle_{q}u\|_{p_{1}}\|\triangle_{q+l}v\|_{p_{2}}\lesssim\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0,1}_{p_{2},\infty}}.$
Hence
$\displaystyle\|\mathcal{R}(u,v)\|_{B^{0,1}_{p,\infty}}\lesssim\|u\|_{B^{0}_{p_{1},1}}\|v\|_{B^{0,1}_{p_{2},\infty}}.$
(2.14)
Combining (2.12)$\sim$(2.14), we see that (2.11) follows. This proves Lemma 2.5. ∎
###### Lemma 2.6.
Let $p$, $p_{i}$, $r$, $\rho$, $\rho_{i}\in[1,\infty]$ $(i=1,2)$ be such that
$\frac{1}{\rho}=\frac{1}{\rho_{1}}+\frac{1}{\rho_{2}}$ and
$\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}$. Then we have
$\displaystyle\|uv\|_{\tilde{L}^{\rho}_{T}(B^{0}_{p,r})}\lesssim\|u\|_{\tilde{L}^{\rho_{1}}_{T}(B^{0}_{p_{1},1})\cap\tilde{L}^{\rho_{1}}_{T}(B^{0,1}_{p_{1},\infty})}\|v\|_{\tilde{L}^{\rho_{2}}_{T}(B^{0}_{p_{2},r})}.$
(2.15)
###### Proof.
The proof is similar to that of Lemma 2.4. Indeed, as in the proof of Lemma
2.4 we decompose $uv$ into the sum of $\mathcal{T}(u,v)$, $\mathcal{T}(v,u)$
and $\mathcal{R}(u,v)$. To estimate
$\|\mathcal{T}(u,v)\|_{\tilde{L}^{\rho}_{T}(B^{0}_{p,\infty})}$, we use (1.8)
to write
$\displaystyle\|\mathcal{T}(u,v)\|_{\tilde{L}^{\rho}_{T}(B^{0}_{p,\infty})}=$
$\displaystyle\sup_{k\geq-1}\|\triangle_{k}(\sum_{q\geq-1}S_{q-1}u\triangle_{q}v)\|_{L^{\rho}_{T}L^{p}_{x}}$
$\displaystyle\lesssim$
$\displaystyle\sup_{k\geq-1}\|\triangle_{k}(\sum_{|q-k|\leq
4,q\geq-1}S_{q-1}u\triangle_{q}v)\|_{L^{\rho}_{T}L^{p}_{x}}$
$\displaystyle\lesssim$ $\displaystyle\sup_{k\geq-1}\sum_{|q-k|\leq
4,q\geq-1}\|S_{q-1}u\|_{L^{\rho_{1}}_{T}L^{p_{1}}_{x}}\|\triangle_{q}v\|_{L^{\rho_{2}}_{T}L^{p_{2}}_{x}}$
$\displaystyle\lesssim$
$\displaystyle\|u\|_{\tilde{L}^{\rho_{1}}_{T}(B^{0}_{p_{1},1})}\|v\|_{\tilde{L}^{\rho_{2}}_{T}(B^{0}_{p_{2},\infty})}.$
The estimates of
$\|\mathcal{T}(v,u)\|_{\tilde{L}^{\rho}_{T}(B^{0}_{p,\infty})}$ and
$\|\mathcal{R}(u,v)\|_{\tilde{L}^{\rho}_{T}(B^{0}_{p,\infty})}$ are similar
and we omit the details here. ∎
###### Lemma 2.7.
Let $p$, $p_{i}$, $\rho$, $\rho_{i}\in[1,\infty]$ $(i=1,2)$ be such that
$\frac{1}{\rho}=\frac{1}{\rho_{1}}+\frac{1}{\rho_{2}}$ and
$\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}$. Then we have
$\displaystyle\|uv\|_{\tilde{L}^{\rho}_{T}(B^{0}_{p,1})\cap\tilde{L}^{\rho}_{T}(B^{0,1}_{p,\infty})}\lesssim\|u\|_{\tilde{L}^{\rho_{1}}_{T}(B^{0}_{p_{1},1})\cap\tilde{L}^{\rho_{1}}_{T}(B^{0,1}_{p_{1},\infty})}\|v\|_{\tilde{L}^{\rho_{2}}_{T}(B^{0}_{p_{2},1})\cap\tilde{L}^{\rho_{2}}_{T}(B^{0,1}_{p_{2},\infty})}.$
(2.16)
###### Proof.
The proof is similar to that of Lemma 2.5; we thus omit it.∎
We note that results obtained in Lemmas 2.4$\sim$2.7 still hold for vector
valued functions.
## 3 Proofs of Theorems 1.2 and 1.3
In this section, we give the proofs of Theorems 1.2 and 1.3. We need the
following preliminary result:
###### Lemma 3.1.
([4], p.189, Lemma 5) Let $(\mathcal{X}\times\mathcal{Y},\
\|\cdot\|_{\mathcal{X}}+\|\cdot\|_{\mathcal{Y}})$ be an abstract Banach
product space. $B_{1}:\mathcal{X}\times\mathcal{X}\rightarrow\mathcal{X}$,
$B_{2}:\mathcal{X}\times\mathcal{Y}\rightarrow\mathcal{Y}$ and
$L:\mathcal{Y}\rightarrow\mathcal{X}$ are respectively two bilinear operators
and one linear operator such that for any
$(x_{i},y_{i})\in\mathcal{X}\times\mathcal{Y}$ ($i=1,2$), we have
$\displaystyle\|B_{1}(x_{1},x_{2})\|_{\mathcal{X}}\leq{\lambda}\|x_{1}\|_{\mathcal{X}}\|x_{2}\|_{\mathcal{X}},\quad\|L(y_{i})\|_{\mathcal{X}}\leq{\eta}\|y_{i}\|_{\mathcal{Y}},\quad\|B_{2}(x_{i},y_{i})\|_{\mathcal{Y}}\leq{\lambda}\|x_{i}\|_{\mathcal{X}}\|y_{i}\|_{\mathcal{Y}},$
where $\lambda,\eta>0$. For any $(x_{0},y_{0})\in\mathcal{X}\times\mathcal{Y}$
with
$\|(x_{0},c_{\ast}{y_{0}})\|_{\mathcal{X}\times\mathcal{Y}}<1/(16\lambda)$
$(c_{\ast}=\max\\{2\eta,1\\})$, the following system
$(x,y)=(x_{0},y_{0})+\Big{(}B_{1}(x,x),\ B_{2}(x,y)\Big{)}+\Big{(}L(y),\
0\Big{)}$
has a solution $(x,y)$ in $\mathcal{X}\times\mathcal{Y}$. In particular, the
solution is such that
$\displaystyle\|(x,c_{\ast}{y})\|_{\mathcal{X}\times\mathcal{Y}}\leq{4\|(x_{0},c_{\ast}{y_{0}})\|_{\mathcal{X}\times\mathcal{Y}}}$
and it is the only one such that
$\|(x,c_{\ast}{y})\|_{\mathcal{X}\times\mathcal{Y}}<{1}/{(4\lambda)}.$
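The following toy scalar illustration (an assumption, neither from [4] nor from this paper) shows the successive-approximation scheme that such a lemma encodes; the operators $B_{1}$, $B_{2}$, $L$ and the numerical values below are chosen only so that the smallness hypotheses hold.

```python
# Iterate (x, y) <- (x0, y0) + (B1(x, x) + L(y), B2(x, y)) with
# |B1(x, x')| <= lam |x||x'|, |B2(x, y)| <= lam |x||y|, |L(y)| <= eta |y|,
# starting from small data, and watch the iterates converge.

lam, eta = 1.0, 0.5
B1 = lambda x, xp: lam * x * xp
B2 = lambda x, y: lam * x * y
L = lambda y: eta * y

x0, y0 = 0.02, 0.02          # small data, well inside the 1/(16*lam) ball
x, y = 0.0, 0.0
for _ in range(50):
    x, y = x0 + B1(x, x) + L(y), y0 + B2(x, y)
print(x, y)                   # fixed point of the coupled bilinear system
```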
For $n\geq 2$, $p\in(\frac{n}{2},\infty)$ and $r\in[1,\infty]$, let
$\mathcal{X}_{T}$ and $\mathcal{Z}_{T}$ respectively be the spaces
$\mathcal{X}_{T}=\tilde{L}^{2}_{T}(B^{0}_{\infty,1})\cap\tilde{L}^{2}_{T}(B^{0,1}_{\infty,\infty})\quad\hbox{
and }\quad\mathcal{Z}_{T}=\tilde{L}^{2}_{T}(B^{0}_{p,r})$
with norms
$\|\vec{u}\|_{\mathcal{X}_{T}}:=\|\vec{u}\|_{\tilde{L}^{2}_{T}(B^{0}_{\infty,1})}+\|\vec{u}\|_{\tilde{L}^{2}_{T}(B^{0,1}_{\infty,\infty})}\quad\hbox{
and
}\quad\|\theta\|_{\mathcal{Z}_{T}}:=\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{p,r})}.$
Let $\mathcal{Y}_{T}$ be the space
$\mathcal{Y}_{T}=\tilde{L}^{2}_{T}(B^{0}_{\frac{n}{2},1})\cap\tilde{L}^{2}_{T}(B^{0,1}_{\frac{n}{2},\infty})$
with norm
$\|\theta\|_{\mathcal{Y}_{T}}:=\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{\frac{n}{2},1})}+\|\theta\|_{\tilde{L}^{2}_{T}(B^{0,1}_{\frac{n}{2},\infty})}.$
Recall that
$\displaystyle\left\\{\begin{aligned}
\mathfrak{J}_{1}(\vec{u},\theta)&=e^{t\Delta}\vec{u}_{0}-\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}(\vec{u}\cdot\nabla)\vec{u}ds+\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}(\theta{\vec{a}}){ds},\\\
\mathfrak{J}_{2}(\vec{u},\theta)&=e^{t\Delta}\theta_{0}-\int_{0}^{t}e^{(t-s)\Delta}(\vec{u}\cdot\nabla{\theta})ds.\end{aligned}\right.$
(3.1)
In what follows, we prove several bilinear estimates.
###### Lemma 3.2.
Let $T>0$, $n\geq 2$ and
$\vec{u}_{0}\in{B^{-1}_{\infty,1}}\cap{B}^{-1,1}_{\infty,\infty}$. We have the
following two assertions:
(1) For $\theta\in\mathcal{Y}_{T}$ we have
$\displaystyle\|\mathfrak{J}_{1}(\vec{u},\theta)\|_{\mathcal{X}_{T}}\lesssim$
$\displaystyle\
(1+T)\Big{(}\|\vec{u}_{0}\|_{B^{-1}_{\infty,1}{\cap}B^{-1,1}_{\infty,\infty}}+\|\vec{u}\|^{2}_{\mathcal{X}_{T}}+\|\theta\|_{\mathcal{Y}_{T}}\Big{)}.$
(3.2)
(2) For $\theta\in\mathcal{Z}_{T}$, $r\in[1,\infty]$ and
$p\in(\frac{n}{2},\infty)$ we have
$\displaystyle\|\mathfrak{J}_{1}(\vec{u},\theta)\|_{\mathcal{X}_{T}}\lesssim$
$\displaystyle\
(1+T)\Big{(}\|\vec{u}_{0}\|_{B^{-1}_{\infty,1}{\cap}B^{-1,1}_{\infty,\infty}}+\|\vec{u}\|^{2}_{\mathcal{X}_{T}}+\|\theta\|_{\mathcal{Z}_{T}}\Big{)}.$
(3.3)
###### Proof.
We divide the estimate of $\mathfrak{J}_{1}(\vec{u},\theta)$ into the two
subcases $q\geq 0$ and $q=-1$. When $q\geq 0$, since the symbol of
${\triangle}_{q}$ is supported in dyadic shells and the symbol of $\mathbb{P}$
is smooth in the corresponding dyadic shells, we have
$\displaystyle\triangle_{q}\mathfrak{J}_{1}(\vec{u},\theta)=e^{t\Delta}\triangle_{q}\vec{u}_{0}-\int_{0}^{t}e^{(t-\tau)\Delta}{\triangle}_{q}\left(\mathbb{P}\nabla\cdot(\vec{u}\otimes{\vec{u}})(\tau)-\mathbb{P}(\theta{\vec{a}})\right)d\tau.$
(3.4)
In (3.4) we have $n$ scalar equations and each of the $n$ components shares
the same estimate. By making use of (2.2) twice we obtain
$\displaystyle\|\triangle_{q}\mathfrak{J}_{1}(\vec{u},\theta)\|_{\infty}$
$\displaystyle\lesssim{e^{-\kappa
2^{2q}t}}{\|\triangle_{q}{\vec{u}_{0}}\|_{\infty}}+\int_{0}^{t}e^{-\kappa
2^{2q}(t-\tau)}\Big{(}2^{q}\|\triangle_{q}(\vec{u}\otimes\vec{u})\|_{\infty}+\|\triangle_{q}\theta\|_{\infty}\Big{)}d\tau$
$\displaystyle\lesssim{e^{-\kappa
2^{2q}t}}{\|\triangle_{q}{\vec{u}_{0}}\|_{\infty}}+\int_{0}^{t}e^{-\kappa
2^{2q}(t-\tau)}2^{q}\|\triangle_{q}(\vec{u}\otimes\vec{u})\|_{\infty}d\tau$
$\displaystyle\ \ \ \ +\int_{0}^{t}e^{-\kappa
2^{2q}(t-\tau)}\min\\{2^{2q}\|\triangle_{q}\theta\|_{\frac{n}{2}},\
2^{\frac{qn}{p}}\|\triangle_{q}\theta\|_{p}\\}d\tau.$
Applying convolution inequalities to the above estimate with respect to time
variable we get
$\displaystyle\|\triangle_{q}\mathfrak{J}_{1}(\vec{u},\theta)\|_{L^{2}_{T}L^{\infty}_{x}}\lesssim$
$\displaystyle\left(\frac{1-e^{-2\kappa{T}2^{2q}}}{2\kappa}\right)^{\frac{1}{2}}\left(2^{-q}\|\triangle_{q}\vec{u}_{0}\|_{\infty}+\|\triangle_{q}(\vec{u}\otimes\vec{u})\|_{L^{1}_{T}L^{\infty}_{x}}\right)$
$\displaystyle+\min\Big{\\{}\frac{1-e^{-2\kappa{T}2^{2q}}}{2\kappa}\|\triangle_{q}\theta\|_{L^{2}_{T}L^{\frac{n}{2}}_{x}},\
\ \frac{1-e^{-2\kappa{T}2^{2q}}}{2\kappa
2^{q(2-\frac{n}{p})}}\|\triangle_{q}\theta\|_{L^{2}_{T}L^{p}_{x}})\Big{\\}}.$
(3.5)
Considering
$\sum_{q\geq 0}2^{-q(2-\frac{n}{p})}<\infty\ \ \hbox{ for
}p\in(\frac{n}{2},\infty)\cap[1,\infty),$
from (3.5) and Definition 1.1 we see that
$\displaystyle\sum_{q\geq
0}\|\triangle_{q}\mathfrak{J}_{1}(\vec{u},\theta)\|_{L^{2}_{T}L^{\infty}_{x}}$
$\displaystyle\lesssim\sum_{q\geq-1}\Big{(}{2^{-q}}{\|\triangle_{q}\vec{u}_{0}\|_{\infty}}+\|\triangle_{q}(\vec{u}\otimes\vec{u})\|_{L^{1}_{T}L^{\infty}_{x}}$
$\displaystyle\hskip
28.45274pt+\min\\{\|\triangle_{q}\theta\|_{L^{2}_{T}L^{\frac{n}{2}}_{x}},\ \
{2^{-q(\frac{n}{p}-2)}}\|\triangle_{q}\theta\|_{L^{2}_{T}L^{p}_{x}}\\}\Big{)}$
$\displaystyle\lesssim\|\vec{u}_{0}\|_{B^{-1}_{\infty,1}}\\!+\\!\|\vec{u}\otimes\vec{u}\|_{\tilde{L}^{1}_{T}(B^{0}_{\infty,1})}\\!+\\!\min\\{\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{\frac{n}{2},2})},\
\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{p,\infty})}\\}.$ (3.6)
Considering
$\sup_{q\geq-1}\frac{q+3}{2^{q(2-\frac{n}{p})}}\leq C(p,n)<\infty\quad\hbox{
for }p>\frac{n}{2},$
from (3.5), Definition 1.1 and a similar argument as before we see that
$\displaystyle\sup_{q\geq
0}(q+3)\|\triangle_{q}\mathfrak{J}_{1}(\vec{u},\theta)\|_{L^{2}_{T}L^{\infty}_{x}}\lesssim\|\vec{u}_{0}\|_{B^{-1,1}_{\infty,\infty}}$
$\displaystyle+\|\vec{u}\otimes\vec{u}\|_{\tilde{L}^{1}_{T}(B^{0,1}_{\infty,\infty})}$
$\displaystyle+\min\\{\|\theta\|_{\tilde{L}^{2}_{T}(B^{0,1}_{\frac{n}{2},\infty})},\
\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{p,\infty})}\\}.$ (3.7)
Next we consider the case $q=-1$. We recall the decay estimates of the Oseen
kernel (cf. Chapter 11 of [14]); by interpolation we observe that
$e^{\Delta}\mathbb{P}(-\Delta)^{-\frac{1}{2}+\delta}\nabla$ (for any
$\delta\in(0,\frac{1}{2})$) is $L^{1}_{x}$ bounded. Similarly to (3.4) we get
$\displaystyle S_{0}\mathfrak{J}_{1}(\vec{u},\theta)=$ $\displaystyle
e^{t\Delta}S_{0}\vec{u}_{0}-\int_{0}^{t}e^{(t-\tau)\Delta}S_{0}[\mathbb{P}\nabla\cdot(\vec{u}\otimes\vec{u})(\tau)]d\tau+\int_{0}^{t}e^{(t-\tau)\Delta}S_{0}\mathbb{P}(\theta\vec{a})(\tau)d\tau.$
Applying decay estimates of the heat kernel and Lemma 2.1, and noting that
${\rm supp}\,\mathcal{F}{S_{0}}u\subset B(0,\frac{4}{3})$, we see that
$\displaystyle\|S_{0}\mathfrak{J}_{1}(\vec{u},\theta)\|_{\infty}$
$\displaystyle\lesssim\|e^{t\Delta}S_{0}\vec{u}_{0}\|_{\infty}+\int_{0}^{t}(t-\tau)^{-\frac{n}{4p}}\|\mathbb{P}S_{0}\theta\|_{2p}d\tau$
$\displaystyle\ \ \
+\int_{0}^{t}\|e^{(t-\tau)\Delta}{S}_{0}\mathbb{P}\nabla\cdot(\vec{u}\otimes\vec{u})(\tau)\|_{\infty}d\tau$
$\displaystyle\lesssim\|S_{0}\vec{u}_{0}\|_{\infty}+\int_{0}^{t}(t-\tau)^{-\frac{n}{4p}}\min\\{\|S_{0}\theta\|_{p},\
\ \|S_{0}\theta\|_{\frac{n}{2}}\\}d\tau$ $\displaystyle\ \ \
+\int_{0}^{t}\|S_{0}(\vec{u}\otimes\vec{u})\|_{\infty}d\tau.$
In the above estimate we have used the following fact (see (5.29) of [17]):
$\displaystyle\|S_{0}\mathbb{P}\nabla\cdot(\vec{u}\otimes\vec{u})\|_{\infty}\lesssim\|\mathbb{P}\nabla\tilde{h}\|_{1}\|S_{0}(\vec{u}\otimes\vec{u})\|_{\infty}\lesssim\|\nabla\tilde{h}\|_{\dot{B}^{0}_{1,1}}\|S_{0}(\vec{u}\otimes\vec{u})\|_{\infty}\lesssim\|S_{0}(\vec{u}\otimes\vec{u})\|_{\infty}.$
Applying convolution inequalities to time variable we obtain that
$\displaystyle\|S_{0}\mathfrak{J}_{1}(\vec{u},\theta)\|_{L^{2}_{T}L^{\infty}_{x}}$
$\displaystyle\\!\lesssim\\!T^{\frac{1}{2}}\|S_{0}\vec{u}_{0}\|_{\infty}\\!+T^{\frac{1}{2}}\|S_{0}(\vec{u}\otimes\vec{u})\|_{L^{1}_{T}L^{\infty}_{x}}+\\!T^{\frac{4p-n}{4p}}\min\\{\|S_{0}\theta\|_{L^{2}_{T}L^{\\!\frac{n}{2}}_{x}},\|S_{0}\theta\|_{L^{2}_{T}L^{p}_{x}}\\}$
$\displaystyle\lesssim\\!C_{T}\Big{[}\|\vec{u}_{0}\|_{B^{-1}_{\infty,\infty}}\\!\\!\\!\\!+\\!\\!\|\vec{u}\otimes\vec{u}\|_{\tilde{L}^{1}_{T}(B^{0}_{\infty,\infty}\\!)}\\!+\\!\min\\{\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{\frac{n}{2},\infty}\\!)},\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{p,\infty}\\!)}\\}\\!\Big{]}$
(3.8)
$\displaystyle\lesssim\\!C_{T}\Big{[}\|\vec{u}_{0}\|_{B^{-1,1}_{\infty,\infty}}\\!\\!\\!\\!+\\!\\!\|\vec{u}\otimes\vec{u}\|_{\tilde{L}^{1}_{T}(B^{0,1}_{\infty,\infty}\\!)}\\!+\\!\min\\{\|\theta\|_{\tilde{L}^{2}_{T}(B^{0,1}_{\frac{n}{2},\infty}\\!)},\|\theta\|_{\tilde{L}^{2}_{T}(B^{0}_{p,\infty}\\!)}\\}\\!\Big{]},$
(3.9)
where $C_{T}=(T^{\frac{4p-n}{4p}}+T^{\frac{1}{2}})$.
By applying (3.5)$\sim$(3.9) and Definition 1.1, as well as Lemmas 2.6$\sim$2.7, we prove (3.2) and (3.3), which completes the proof of Lemma 3.2. ∎
###### Lemma 3.3.
Let $T>0$, $n\geq 2$ and $\vec{u}\in\mathcal{X}_{T}$. We have the following
assertions:
(1) For
$\theta_{0}\in{B^{-1}_{\frac{n}{2},1}}\cap{B}^{-1,1}_{\frac{n}{2},\infty}$ we
have
$\displaystyle\|\mathfrak{J}_{2}(\vec{u},\theta)\|_{\mathcal{Y}_{T}}\lesssim$
$\displaystyle\
(1+T)\Big{(}\|\theta_{0}\|_{B^{-1}_{\frac{n}{2},1}{\cap}B^{-1,1}_{\frac{n}{2},\infty}}+\|\vec{u}\|_{\mathcal{X}_{T}}\|\theta\|_{\mathcal{Y}_{T}}\Big{)}.$
(3.10)
(2) For $\theta_{0}\in{B^{-1}_{p,r}}$, $r\in[1,\infty]$ and
$p\in(\frac{n}{2},\infty)$ we have
$\displaystyle\|\mathfrak{J}_{2}(\vec{u},\theta)\|_{\mathcal{Z}_{T}}\lesssim$
$\displaystyle\
(1+T)\Big{(}\|\theta_{0}\|_{B^{-1}_{p,r}}+\|\vec{u}\|_{\mathcal{X}_{T}}\|\theta\|_{\mathcal{Z}_{T}}\Big{)}.$
(3.11)
###### Proof.
As before, we divide the estimate of $\mathfrak{J}_{2}(\vec{u},\theta)$ into the two subcases $q\geq 0$ and $q=-1$. In the case $q\geq 0$ we have
$\displaystyle\triangle_{q}\mathfrak{J}_{2}(\vec{u},\theta)=e^{t\Delta}\triangle_{q}\theta_{0}-\int_{0}^{t}e^{(t-\tau)\Delta}{\triangle}_{q}\nabla\cdot(\vec{u}\theta)(\tau)d\tau.$
Applying Lemma 2.1, using convolution inequalities to time variable and
following a similar argument as before we see that
$\displaystyle\|\triangle_{q}\mathfrak{J}_{2}(\vec{u},\theta)\|_{L^{2}_{T}L^{\frac{n}{2}}_{x}}\lesssim$
$\displaystyle
2^{-q}\|\triangle_{q}\theta_{0}\|_{\frac{n}{2}}+\|\triangle_{q}(\vec{u}\theta)\|_{L^{1}_{T}L^{\frac{n}{2}}_{x}},$
(3.12)
which yields
$\displaystyle\sum_{q\geq
0}\|\triangle_{q}\mathfrak{J}_{2}(\vec{u},\theta)\|_{L^{2}_{T}L^{\frac{n}{2}}_{x}}$
$\displaystyle\lesssim\sum_{q\geq-1}(2^{-q}\|\triangle_{q}\theta_{0}\|_{\frac{n}{2}}+\|\triangle_{q}(\vec{u}\theta)\|_{L^{1}_{T}L^{\frac{n}{2}}_{x}})$
$\displaystyle\lesssim\|\theta_{0}\|_{B^{-1}_{\frac{n}{2},1}}+\|\vec{u}\theta\|_{\tilde{L}^{1}_{T}(B^{0}_{\frac{n}{2},1})}$
(3.13)
and
$\displaystyle\sup_{q\geq
0}(q+3)\|\triangle_{q}\mathfrak{J}_{2}(\vec{u},\theta)\|_{L^{2}_{T}L^{\frac{n}{2}}_{x}}\lesssim\|\theta_{0}\|_{B^{-1,1}_{\frac{n}{2},\infty}}+\|\vec{u}\theta\|_{\tilde{L}^{1}_{T}(B^{0,1}_{\frac{n}{2},\infty})},$
(3.14)
where we have used Definition 1.1. Now we consider the case $q=-1$. Similarly,
we have
$\displaystyle S_{0}\mathfrak{J}_{2}(\vec{u},\theta)=$ $\displaystyle
e^{t\Delta}S_{0}\theta_{0}-\int_{0}^{t}e^{(t-\tau)\Delta}S_{0}\nabla\cdot(\vec{u}\theta)(\tau)d\tau.$
Applying Lemma 2.1 and convolution inequality to time variable we obtain
$\displaystyle\|S_{0}\mathfrak{J}_{2}(\vec{u},\theta)\|_{L^{2}_{T}L^{\frac{n}{2}}_{x}}\lesssim
T^{\frac{1}{2}}\|S_{0}\theta_{0}\|_{\frac{n}{2}}+T^{\frac{1}{2}}\|S_{0}(\vec{u}\theta)\|_{L^{1}_{T}L^{\frac{n}{2}}_{x}}$
which yields
$\displaystyle\|S_{0}\mathfrak{J}_{2}(\vec{u},\theta)\|_{L^{2}_{T}L^{\frac{n}{2}}_{x}}$
$\displaystyle\lesssim
T^{\frac{1}{2}}\big{(}\|\theta_{0}\|_{B^{-1}_{\frac{n}{2},\infty}}+\|\vec{u}\theta\|_{\tilde{L}^{1}_{T}(B^{0}_{\frac{n}{2},\infty})}\big{)}$
(3.15) $\displaystyle\lesssim
T^{\frac{1}{2}}\big{(}\|\theta_{0}\|_{B^{-1,1}_{\frac{n}{2},\infty}}+\|\vec{u}\theta\|_{\tilde{L}^{1}_{T}(B^{0,1}_{\frac{n}{2},\infty})}\big{)}.$
(3.16)
The desired results (3.10) and (3.11) follow from (3.13)$\sim$(3.16) and Definition 1.1, as well as Lemmas 2.6 and 2.7. This proves Lemma 3.3. ∎
Proofs of Theorems 1.2 and 1.3: From Lemmas 3.2, 3.3 and 3.1 as well as a
standard argument, we see that Theorems 1.2 and 1.3 follow.
## 4 Proof of Theorem 1.4
In this section, we give the proof of Theorem 1.4. We first prove the
following heat semigroup characterization of the space $B^{s,\sigma}_{p,r}$:
###### Lemma 4.1.
Let $p,r\in[1,\infty]$, $s<0$ and $\sigma\geq 0$. The following assertions are
equivalent:
(1) $f\in B^{s,\sigma}_{p,r}$.
(2) For all $t\in(0,1)$, $e^{t\Delta}f\in L^{p}_{x}$ and $t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}f\|_{p}\in L^{r}((0,1),\frac{dt}{t})$.
###### Proof.
The idea of the proof mainly comes from [14], and the argument is quite similar; for the reader's convenience we give the details. Throughout the proof of this lemma, $C$ denotes a constant that depends on $n$ and may also depend on $s$, $\sigma$ and $r$.
$\textsc{(1)}\Rightarrow\textsc{(2)}$. We write $f=S_{0}f+\sum_{j\geq
0}\triangle_{j}f$ with
$\|{S}_{0}f\|_{p}=\varepsilon_{-1},\quad\|\triangle_{j}f\|_{p}=2^{j|s|}(3+j)^{-\sigma}\varepsilon_{j}\hbox{
and }(\varepsilon_{j})_{j\geq-1}\in\ell^{r}.$
We estimate the norm
$t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}f\|_{p}$ by
$\displaystyle
t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}S_{0}f\|_{p}\leq
t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}\tilde{S}_{0}S_{0}f\|_{p}\leq
Ct^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|S_{0}f\|_{p}.$
Similarly, for $j\geq 0$, we have
$\displaystyle
t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}\triangle_{j}f\|_{p}\leq
Ct^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|\triangle_{j}f\|_{p}.$
Moreover, when $j\geq 0$ and $N\geq 0$, from the $L^{1}_{x}$ integrability of
heat kernel and Lemma 2.1, we have
$\displaystyle
t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}\triangle_{j}f\|_{p}$
$\displaystyle=t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}(-t\Delta)^{\frac{N}{2}}(-t\Delta)^{-\frac{N}{2}}\triangle_{j}f\|_{p}$
$\displaystyle\leq
Ct^{\frac{|s|-N}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}2^{-jN}\|\triangle_{j}f\|_{p}.$
Combining the above estimates, for some $N\geq 2$ and any $0<t<1$ we have
$\displaystyle\|e^{t\Delta}f\|_{p}$ $\displaystyle\leq
C\Big{(}\varepsilon_{-1}+\sum_{j\geq
0}\min\Big{\\{}{2^{j|s|}}{(j+3)^{-\sigma}}\varepsilon_{j},\
t^{-\frac{N}{2}}2^{-jN+j}(j+3)^{-\sigma}\varepsilon_{j}\Big{\\}}\Big{)}$
$\displaystyle\leq
C\|(\varepsilon_{j})_{j\geq-1}\|_{\ell^{r}}t^{-\frac{N}{2}}<\infty.$
Let $I_{k}=(2^{-2-2k},2^{-2k}]$. For any $t\in(0,1]=\cup_{k\geq 0}I_{k}$, there exists an integer $j_{0}$ such that $t\in I_{j_{0}}$, and for $t\in I_{j_{0}}$ we have
$\displaystyle
t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}f\|_{p}\leq$
$\displaystyle
C\Big{(}\varepsilon_{-1}+\sum_{j=0}^{j_{0}}\frac{(3+j_{0})^{\sigma}}{(3+j)^{\sigma}}2^{-(j_{0}-j)|s|}\varepsilon_{j}+\sum_{j=j_{0}+1}^{\infty}\\!\\!2^{(j_{0}-j)(|s|-N)}\varepsilon_{j}\Big{)}:=C\
\\!\eta_{j_{0}}.$
From the above estimate we see that
$\|\|t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}e^{t\Delta}f\|_{p}\|_{L^{r}((0,1),\frac{dt}{t})}\leq
C\ \\!\|(\eta_{j})_{j\geq-1}\|_{\ell^{r}}\leq C\
\\!\|(\varepsilon_{j})_{j\geq-1}\|_{\ell^{r}}.$
Indeed, by using Young’s inequality we have
$\displaystyle\sum_{j_{0}\geq-1}\Big{(}\sum_{j=0}^{j_{0}}\frac{(3+j_{0})^{\sigma}}{(3+j)^{\sigma}}2^{(j-j_{0})|s|}\varepsilon_{j}\Big{)}^{r}$
$\displaystyle\leq
C\\!\\!\sum_{j_{0}\geq-1}\Big{(}\sum_{j=0}^{[\frac{j_{0}}{2}]}2^{(j-[\frac{j_{0}}{2}])|s|}\varepsilon_{j}\Big{)}^{r}+C\sum_{j_{0}\geq-1}\Big{(}\sum_{j=[\frac{j_{0}}{2}]+1}^{j_{0}}\\!\\!2^{(j-j_{0})|s|}\varepsilon_{j}\Big{)}^{r}$
$\displaystyle\leq C\ \\!\|(\varepsilon_{j})_{j\geq-1}\|_{\ell^{r}}^{r}$
and
$\displaystyle\sum_{j_{0}\geq-1}\Big{(}\sum_{j=j_{0}+1}^{\infty}2^{(j_{0}-j)(|s|-N)}\varepsilon_{j}\Big{)}^{r}\leq
C\ \\!\|(\varepsilon_{j})_{j\geq-1}\|_{\ell^{r}}^{r}.$
$\textsc{(2)}\Rightarrow\textsc{(1)}$. We get that
$S_{0}f=e^{-\frac{1}{2}\Delta}S_{0}e^{\frac{1}{2}\Delta}f\in L^{p}_{x}$ since
the kernel of $e^{-\frac{1}{2}\Delta}S_{0}$ is $L^{1}_{x}$ bounded. Similarly,
when $j\geq 0$, we write
$\triangle_{j}f=e^{-t\Delta}\triangle_{j}e^{t\Delta}f$. For any $j\geq 0$, we
choose $t$ such that $2^{-2-2j}<t<2^{-2j}$. Then we have
$\displaystyle 2^{js}(3+j)^{\sigma}\|\triangle_{j}f\|_{p}\leq
Ct^{\frac{|s|}{2}}(2-\ln
t)^{\sigma}\|e^{-t\Delta}\triangle_{j}e^{t\Delta}f\|_{p}\leq
Ct^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}f\|_{p}.$
Consequently, we have
$\displaystyle\|f\|_{B^{s,\sigma}_{p,r}}^{r}$
$\displaystyle=\sum_{j\geq-1}2^{jsr}(3+j)^{\sigma{r}}\|\triangle_{j}f\|^{r}_{p}\leq
C\Big{(}\sup_{0<t<1}t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}f\|_{p}\Big{)}^{r}$
$\displaystyle\leq
C\|t^{\frac{|s|}{2}}|\ln(\frac{t}{e^{2}})|^{\sigma}\|e^{t\Delta}f\|_{p}\|^{r}_{L^{r}((0,1),\frac{dt}{t})},$
where the last inequality follows from a similar argument as in Lemma 16.1 of
[14]. ∎
Next we prove a bilinear estimate.
###### Lemma 4.2.
Let $n\geq 2$, $\varepsilon>0$, $p\in(\frac{n}{2},\infty)$ and
$\mathfrak{J}_{1}$,$\mathfrak{J}_{2}$ be as in (3.1). For $0<T\leq 1$, there
exists $0<\nu<\nu(p)=\frac{2p-n}{2p}$ such that
$\displaystyle\left\\{\begin{aligned}
\sup_{t\in(0,T)}\\!\\!t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\|\mathfrak{J}_{1}(\vec{u},\theta)\|_{\infty}&\lesssim\|\vec{u}_{0}\|_{B^{-1,1}_{\infty,\infty}}+(\sup_{t\in(0,T)}\\!\\!\\!t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\|\vec{u}\|_{\infty})^{2}+T^{\nu}\\!\\!\\!\\!\sup_{t\in(0,T)}\\!\\!\\!t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\|\theta\|_{p}\\\
\sup_{t\in(0,T)}\\!\\!t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\|\mathfrak{J}_{2}(\vec{u},\theta)\|_{p}&\lesssim\|\theta_{0}\|_{B^{-1,\varepsilon}_{p,\infty}}+\frac{1}{\varepsilon}\sup_{t\in(0,T)}t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\|\vec{u}\|_{\infty}\sup_{t\in(0,T)}t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\|\theta\|_{p}\
\ .\end{aligned}\right.$
###### Proof.
The term
$\int_{0}^{t}e^{(t-\tau)\Delta}\mathbb{P}\nabla\cdot(\vec{u}\otimes\vec{u})d\tau$
is already treated in Lemma 2.5 of [7]. From Lemma 4.1 we see that
$\displaystyle\sup_{t\in(0,1)}\\!\\!t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\|e^{t\Delta}\vec{u}_{0}\|_{\infty}\sim\|\vec{u}_{0}\|_{B^{-1,1}_{\infty,\infty}},\quad\sup_{t\in(0,1)}\\!\\!t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\|e^{t\Delta}\theta_{0}\|_{p}\sim\|\theta_{0}\|_{B^{-1,\varepsilon}_{p,\infty}}.$
Therefore, it remains to estimate
$\displaystyle\int_{0}^{t}e^{(t-\tau)\Delta}\mathbb{P}(\theta\vec{a})d\tau,\quad\int_{0}^{t}e^{(t-\tau)\Delta}\nabla\cdot(\vec{u}\theta)d\tau.$
By using the decay estimates of the Oseen kernel (cf. [14], Proposition 11.1)
we see that
$\displaystyle t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\|\int_{0}^{t}$
$\displaystyle
e^{(t-\tau)\Delta}\mathbb{P}(\theta\vec{a})d\tau\|_{\infty}\lesssim
t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\int^{t}_{0}(t-\tau)^{-\frac{n}{2p}}\|\theta(\tau)\|_{p}d\tau$
$\displaystyle\lesssim
t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|\int^{t}_{0}(t-\tau)^{-\frac{n}{2p}}\tau^{-\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|^{-\varepsilon}d\tau\sup_{\tau\in(0,T)}\\!\tau^{\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|^{\varepsilon}\|\theta(\tau)\|_{p}$
$\displaystyle\lesssim
t^{\nu(p)}|\ln(\frac{t}{e^{2}})|^{1-\varepsilon}\sup_{\tau\in(0,T)}\\!\tau^{\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|^{\varepsilon}\|\theta(\tau)\|_{p}$
$\displaystyle\lesssim
T^{\nu}\sup_{\tau\in(0,T)}\\!\tau^{\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|^{\varepsilon}\|\theta(\tau)\|_{p},$
where $\nu\in(0,\nu({p}))$ and we have used that $\lim_{t\downarrow 0}t^{\nu(p)-\nu}|\ln(\frac{t}{e^{2}})|^{1-\varepsilon}=0$. By applying the decay estimates of the heat kernel (which is itself an Oseen kernel; cf. [14], Proposition 11.1) we see that
$\displaystyle
t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\|\int_{0}^{t}e^{(t-\tau)\Delta}\nabla\cdot(\theta\vec{u})d\tau\|_{p}\lesssim
t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\int^{t}_{0}(t-\tau)^{-\frac{1}{2}}\|\theta(\tau)\|_{p}\|\vec{u}(\tau)\|_{\infty}d\tau$
$\displaystyle\lesssim\\!\frac{\sqrt{t}}{\varepsilon}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\\!\\!\int^{t}_{0}\\!(t-\tau)^{-\frac{1}{2}}\tau^{-1}|\ln(\frac{\tau}{e^{2}})|^{-1-\varepsilon}d\tau\\!\\!\sup_{\tau\in(0,T)}\\!\\!\tau^{\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|^{\varepsilon}\|\theta(\tau)\|_{p}\sup_{\tau\in(0,T)}\\!\\!\\!\tau^{\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|\|\vec{u}(\tau)\|_{\infty}$
$\displaystyle\lesssim\frac{1}{\varepsilon}\sup_{\tau\in(0,T)}\\!\tau^{\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|^{\varepsilon}\|\theta(\tau)\|_{p}\sup_{\tau\in(0,T)}\\!\tau^{\frac{1}{2}}|\ln(\frac{\tau}{e^{2}})|\|\vec{u}(\tau)\|_{\infty},$
where
$t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\int^{t}_{0}(t-\tau)^{-\frac{1}{2}}\tau^{-1}|\ln(\frac{\tau}{e^{2}})|^{-1-\varepsilon}d\tau\lesssim\frac{1}{\varepsilon}$.
Indeed, it suffices to show
$t^{\frac{1}{2}}|\ln(\frac{t}{e^{2}})|^{\varepsilon}\int^{t/2}_{0}(t-\tau)^{-\frac{1}{2}}\tau^{-1}|\ln(\frac{\tau}{e^{2}})|^{-1-\varepsilon}d\tau\lesssim\frac{1}{\varepsilon},$
which is equivalent to
$|\ln(\frac{t}{e^{2}})|^{\varepsilon}\int^{t/2}_{0}\tau^{-1}|\ln(\frac{\tau}{e^{2}})|^{-1-\varepsilon}d\tau\lesssim\frac{1}{\varepsilon}\Leftrightarrow\int_{|\ln(\frac{t}{e^{2}})|}^{\infty}x^{-1-\varepsilon}dx\lesssim\frac{1}{\varepsilon}|\ln(\frac{t}{e^{2}})|^{-\varepsilon}.$
∎
Proof of Theorem 1.4: Applying Lemmas 3.1 and 4.2 and following similar
arguments as in [7], we prove Theorem 1.4. We omit the details here.
## References
* [1] H. Abidi, T. Hmidi, On the global well-posedness for Boussinesq system, J. Differential Equations, 233 (2007), 199-220.
* [2] J.M. Bony, Calcul symbolique et propagation des singularit$\acute{e}$s pour les $\acute{e}$quations aux d$\acute{e}$riv$\acute{e}$es partielles non lin$\acute{e}$aires, Ann. Sci. $\acute{E}$cole Norm. Sup., 14 (1981), 209–246.
* [3] J.R. Cannon, E. DiBenedetto, The initial value problem for Boussinesq equations with data in $L^{p}$, Approximation Methods for Navier-Stokes Problems, Lect. Notes in Math., 771(1980), 129–144.
* [4] M. Cannone, Harmonic analysis tools for solving the incompressible Navier-Stokes equations, in: Handbook of Mathematical Fluid Dynamics., Vol. 3, S. Friedlander and D. Serre, eds, Elsevier, 2003\.
* [5] D. Chae, Global regularity for the 2-D Boussinesq equations with partial viscous terms, Adv. Math., 203 (2006) 497–513.
* [6] J.Y. Chemin, Th$\acute{e}$or$\grave{e}$mes d’unicit$\acute{e}$ pour le syst$\grave{e}$me de Navier-Stokes tridimensionnel, J. Anal. Math., 77 (1999), 27–50.
* [7] S. Cui, Global weak solutions for the Navier-Stokes Equations for $\dot{B}^{-1,1}_{\infty,\infty}+\dot{B}^{-1+r,\frac{2}{1-r}}_{\dot{X}_{r}}+L^{2}$ Initial Data, arXiv:submit/0187995, 2011.
* [8] R. Danchin, Fourier analysis method for PDEs, Preprint, November 2005.
* [9] R. Danchin, M. Paicu, Le th$\acute{e}$or$\grave{e}$me de Leray et le th$\acute{e}$or$\grave{e}$me de Fujita-Kato pour le syst$\grave{e}$me de Boussinesq partiellement visqueux, Bull. Soc. Math. France, 136 (2008), 261–309.
* [10] L.C.F. Ferreira, E.J. Vilamiza-Roa, Existence of solutions to the convection problem in a pseudomeasure-type space, Proc. R. Soc. A, 464 (2008), 1983–1999.
* [11] T. Hishida, Existence and regularizing properties of solutions for the nonstationary convection problem, Funkcial. Ekvac., 34 (1991), 449–474.
* [12] T. Hmidi, F. Rousset, Global well-posedness for the Navier-Stokes-Boussinesq system with axisymmetric data, Ann. I. H. Poincar$\acute{e}$-AN, 27 (2010), 1227-1246.
* [13] Y. Kagei, On weak solutions of nonstationary Boussinesq equations, Differential Integral Equations, 6 (1993), 587–611.
* [14] P.G. Lemarié-Rieusset, Recent developments in the Navier-Stokes problem, Research Notes in Mathematics, Chapman & Hall/CRC, 2002.
* [15] A. Majda, Introduction to PDEs and waves for the atmosphere and ocean, Courant Lecture Notes in Mathematics, AMS/CIMS, 9, 2003.
* [16] H. Morimoto, Non-Stationary Boussinesq equations, J. Fac. Sci. Univ. Tokyo, Sect. IA, 39 (1992), 61–75.
* [17] O. Sawada, Y. Taniuchi, On the Boussinesq flow with nondecaying initial data, Funkcialaj Ekvacioj, 47 (2004), 225–250.
* [18] H. Triebel, Interpolation theory, function spaces, differential operators, North Holland Publishing Company, Amsterdam, New York, 1978.
* [19] T. Yoneda, Ill-posedness of the 3D-Navier-Stokes equations in a generalized Besov space near $BMO^{-1}$, J. Funct. Anal., 258 (2010), 3376–3387
|
arxiv-papers
| 2011-05-13T13:20:04 |
2024-09-04T02:49:18.735045
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Chao Deng and Shangbin Cui",
"submitter": "Chao Deng",
"url": "https://arxiv.org/abs/1105.2722"
}
|
1105.2735
|
# On a generalization of the generating function for Gegenbauer polynomials
Howard S. Cohla∗
aApplied and Computational Mathematics Division, National Institute of
Standards and Technology, Gaithersburg, Maryland, U.S.A.
∗Corresponding author. Email: howard.cohl@nist.gov
###### Abstract
A generalization of the generating function for Gegenbauer polynomials is
introduced whose coefficients are given in terms of associated Legendre
functions of the second kind. We discuss how our expansion represents a
generalization of several previously derived formulae such as Heine’s formula
and Heine’s reciprocal square-root identity. We also show how this expansion
can be used to compute hyperspherical harmonic expansions for power-law
fundamental solutions of the polyharmonic equation.
35A08; 35J05; 32Q45; 31C12; 33C05; 42A16
###### keywords:
Euclidean space; Polyharmonic equation; Fundamental solution; Gegenbauer
polynomials; associated Legendre functions
## 1 Introduction
Gegenbauer polynomials $C_{n}^{\nu}(x)$ are given as the coefficients of
$\rho^{n}$ for the generating function $(1+\rho^{2}-2\rho x)^{-\nu}$. The
study of these polynomials was pioneered in a series of papers by Leopold
Gegenbauer (Gegenbauer (1874,1877,1884,1888,1893) [13, 14, 15, 16, 17]). The
main result which this paper relies upon is Theorem 2.1 below. This theorem
gives a generalized expansion over Gegenbauer polynomials $C_{n}^{\mu}(x)$ of
the algebraic function $z\mapsto(z-x)^{-\nu}$. Our proof is combinatorial in
nature and has great potential for proving new expansion formulae which
generalize generating functions. Our methodology can in principle be applied
to any generating function for hypergeometric orthogonal polynomials, of which
there are many (see for instance Srivastava & Manocha (1984) [25]; Erdélyi et
al. (1981) [11]). The concept of the proof is to start with a generating
function and use a connection formula to express the orthogonal polynomial as
a finite series in polynomials of the same type with different parameters. The
resulting formulae will then produce new expansions for the polynomials which
result from a limiting process, e.g., Legendre polynomials and Chebyshev
polynomials of the first and second kind. Connection formulae for classical
orthogonal polynomials and their $q$-extensions are well-known (see Ismail
(2005) [20]). In this paper we apply this method of proof to the generating
function for Gegenbauer polynomials.
This paper is organized as follows. In Section 2 we derive a complex
generalization of the generating function for Gegenbauer polynomials. In
Section 3 we discuss how our complex generalization reduces to previously
derived expressions and leads to extensions in appropriate limits. In Section
4 we use our complex expansion to generalize a formula originally developed by
Sack (1964) [24] on ${\mathbf{R}}^{3}$, to compute an expansion in terms of
Gegenbauer polynomials for complex powers of the distance between two points
on a $d$-dimensional Euclidean space for $d\geq 2$.
Throughout this paper we rely on the following definitions. For
$a_{1},a_{2},a_{3},\ldots\in{\mathbf{C}}$, if $i,j\in{\mathbf{Z}}$ and $j<i$
then $\sum_{n=i}^{j}a_{n}=0$ and $\prod_{n=i}^{j}a_{n}=1$, where
${\mathbf{C}}$ represents the complex numbers. The set of natural numbers is
given by ${\mathbf{N}}:=\\{1,2,3,\ldots\\}$, the set
${\mathbf{N}}_{0}:=\\{0,1,2,\ldots\\}={\mathbf{N}}\cup\\{0\\}$, and the set
${\mathbf{Z}}:=\\{0,\pm 1,\pm 2,\ldots\\}.$ The sets ${\mathbf{Q}}$ and
${\mathbf{R}}$ represent the rational and real numbers respectively.
## 2 Generalization of the generating function for Gegenbauer polynomials
We present the following generalization of the generating function for
Gegenbauer polynomials whose coefficients are given in terms of associated
Legendre functions of the second kind.
###### Theorem 2.1.
Let $\nu\in{\mathbf{C}}\setminus-{\mathbf{N}}_{0},$
$\mu\in(-1/2,\infty)\setminus\\{0\\}$ and
$z\in{\mathbf{C}}\setminus(-\infty,1]$ lie on any ellipse with foci at $\pm 1$, with $x$ in the interior of that ellipse. Then
$\frac{1}{(z-x)^{\nu}}=\frac{2^{\mu+1/2}\Gamma(\mu)e^{i\pi(\mu-\nu+1/2)}}{\sqrt{\pi}\,\Gamma(\nu)(z^{2}-1)^{(\nu-\mu)/2-1/4}}\sum_{{n}=0}^{\infty}({n}+\mu)Q_{{n}+\mu-1/2}^{\nu-\mu-1/2}(z)C_{n}^{\mu}(x).$
(1)
If one substitutes $z=(1+\rho^{2})/(2\rho)$ in (1) with $0<|\rho|<1$, then one
obtains an alternate expression with $x\in[-1,1],$
$\displaystyle\frac{1}{(1+\rho^{2}-2\rho
x)^{\nu}}=\frac{\Gamma(\mu)e^{i\pi(\mu-\nu+1/2)}}{\sqrt{\pi}\,\Gamma(\nu)\rho^{\mu+1/2}(1-\rho^{2})^{\nu-\mu-1/2}}$
$\displaystyle\hskip
108.12054pt\times\sum_{{n}=0}^{\infty}({n}+\mu)Q_{{n}+\mu-1/2}^{\nu-\mu-1/2}\left(\frac{1+\rho^{2}}{2\rho}\right)C_{n}^{\mu}(x).$
(2)
One can see, by setting $\nu=\mu$ in (2) and using (8.6.11) in Abramowitz & Stegun (1972) [1], that these formulae are generalizations of the generating function for Gegenbauer polynomials (first occurrence in Gegenbauer (1874) [13])
$\frac{1}{\left(1+\rho^{2}-2\rho
x\right)^{\nu}}=\sum_{{n}=0}^{\infty}C_{n}^{\nu}(x)\rho^{n},$ (3)
where $\rho\in{\mathbf{C}}$ with $|\rho|<1$ and
$\nu\in(-1/2,\infty)\setminus\\{0\\}$ (see for instance (18.12.4) in Olver et
al. (2010) [23]). The Gegenbauer polynomials
$C_{n}^{\nu}:{\mathbf{C}}\to{\mathbf{C}}$ can be defined by
$C_{n}^{\nu}(x):=\frac{(2\nu)_{n}}{n!}\,{}_{2}F_{1}\left(-n,n+2\nu;\nu+\frac{1}{2};\frac{1-x}{2}\right),$
(4)
where $n\in{\mathbf{N}}_{0}$, $\nu\in(-1/2,\infty)\setminus\\{0\\},$ and
${}_{2}F_{1}:{\mathbf{C}}^{2}\times({\mathbf{C}}\setminus-{\mathbf{N}}_{0})\times\\{z\in{\mathbf{C}}:|z|<1\\}\to{\mathbf{C}}$,
the Gauss hypergeometric function, can be defined in terms of the following
infinite series
${}_{2}F_{1}(a,b;c;z):=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{z^{n}}{n!}$
(5)
(see (2.1.5) in Andrews, Askey & Roy 1999), and elsewhere by analytic
continuation. The Pochhammer symbol (rising factorial)
$(\cdot)_{n}:{\mathbf{C}}\to{\mathbf{C}}$ is defined by
$(z)_{n}:=\prod_{i=1}^{n}(z+i-1),$
where $n\in{\mathbf{N}}_{0}$. For the Gegenbauer polynomials $C_{n}^{\nu}(x)$,
we refer to ${n}$ and $\nu$ as the degree and order respectively.
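As an illustrative numerical check of (3)–(5) (not part of the paper's argument), the following Python sketch evaluates $C_{n}^{\nu}(x)$ directly from the terminating series in (4), compares it with SciPy's built-in Gegenbauer evaluation, and verifies the generating function (3) by summing a truncated series in $\rho$; the sample parameters and truncation lengths are arbitrary choices.

```python
from math import factorial
from scipy.special import eval_gegenbauer

def poch(z, n):
    # Pochhammer symbol (z)_n = z (z + 1) ... (z + n - 1), with (z)_0 = 1
    out = 1.0
    for i in range(n):
        out *= z + i
    return out

def gegenbauer_series(n, nu, x):
    # C_n^nu(x) = (2 nu)_n / n! * 2F1(-n, n + 2 nu; nu + 1/2; (1 - x)/2), eq. (4); the 2F1 terminates
    s = sum(poch(-n, k) * poch(n + 2 * nu, k) / poch(nu + 0.5, k)
            * ((1 - x) / 2.0) ** k / factorial(k) for k in range(n + 1))
    return poch(2 * nu, n) / factorial(n) * s

nu, x, rho = 0.75, 0.3, 0.4
for n in range(6):
    print(n, gegenbauer_series(n, nu, x), eval_gegenbauer(n, nu, x))

# Generating function (3): compare (1 + rho^2 - 2 rho x)^(-nu) with a truncated Gegenbauer series
lhs = (1 + rho**2 - 2 * rho * x) ** (-nu)
rhs = sum(gegenbauer_series(n, nu, x) * rho**n for n in range(60))
print(lhs, rhs)   # the two values should agree to high accuracy for |rho| < 1
```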
Proof Consider the generating function for Gegenbauer polynomials (3). The
connection relation which expresses a Gegenbauer polynomial with order $\nu$
as a sum over Gegenbauer polynomials with order $\mu$ is given by
$\displaystyle C_{n}^{\nu}(x)$ $\displaystyle=$
$\displaystyle\frac{(2\nu)_{n}}{(\nu+\tfrac{1}{2})_{n}}\sum_{k=0}^{n}\frac{(\nu+k+\tfrac{1}{2})_{n-k}\,(2\nu+n)_{k}\,(\mu+\tfrac{1}{2})_{k}\,\Gamma(2\mu+k)}{(n-k)!\,(2\mu)_{k}\,\Gamma(2\mu+2k)}$
(8) $\displaystyle{}\hskip
71.13188pt\times{}_{3}F_{2}\left(\begin{array}[]{c}-n+k,n+k+2\nu,\mu+k+\tfrac{1}{2}\\\
\nu+k+\tfrac{1}{2},2\mu+2k+1\end{array};1\right)C_{k}^{\mu}(x).$
This connection relation can be derived by starting with Theorem 9.1.1 in
Ismail (2005) [20] combined with (see for instance (18.7.1) in Olver et al.
(2010) [23])
$C_{n}^{\nu}(x)=\frac{(2\nu)_{n}}{(\nu+\tfrac{1}{2})_{n}}P_{n}^{(\nu-1/2,\nu-1/2)}(x),$
(9)
i.e., the Gegenbauer polynomials are given as symmetric Jacobi polynomials.
The Jacobi polynomials $P_{n}^{(\alpha,\beta)}:{\mathbf{C}}\to{\mathbf{C}}$
can be defined as (see for instance (18.5.7) in Olver et al. (2010) [23])
$P_{n}^{(\alpha,\beta)}(x):=\frac{(\alpha+1)_{n}}{n!}\,{}_{2}F_{1}\left(-n,n+\alpha+\beta+1;\alpha+1;\frac{1-x}{2}\right),$
(10)
where $n\in{\mathbf{N}}_{0},$ and $\alpha,\beta>-1$ (see Table 18.3.1 in Olver
et al. (2010) [23]). The generalized hypergeometric function
${}_{3}F_{2}:{\mathbf{C}}^{3}\times({\mathbf{C}}\setminus-{\mathbf{N}}_{0})^{2}\times\\{z\in{\mathbf{C}}:|z|<1\\}\to{\mathbf{C}}$
can be defined in terms of the following infinite series
${}_{3}F_{2}\left(\begin{array}[]{c}a_{1},a_{2},a_{3}\\\
b_{1},b_{2}\end{array};z\right):=\sum_{n=0}^{\infty}\frac{(a_{1})_{n}(a_{2})_{n}(a_{3})_{n}}{(b_{1})_{n}(b_{2})_{n}}\frac{z^{n}}{n!}.$
If we replace the Gegenbauer polynomial in the generating function (3) using
the connection relation (8), we obtain a double summation expression over $k$
and $n$. By reversing the order of the summations (justification by Tannery’s
theorem) and shifting the $n$-index by $k,$ we obtain after making some
reductions and simplifications, the following double-summation representation
$\displaystyle\frac{1}{(1+\rho^{2}-2\rho x)^{\nu}}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{\pi}\,\Gamma(\mu)}{2^{2\nu-1}\Gamma(\nu)}\sum_{k=0}^{\infty}C_{k}^{\mu}(x)\frac{\rho^{k}}{2^{2k}}\frac{\mu+k}{\Gamma(\nu+k+\tfrac{1}{2})\Gamma(\mu+k+1)}$
(13)
$\displaystyle\times\sum_{n=0}^{\infty}\frac{\Gamma(2\nu+2k)}{n!}\,{}_{3}F_{2}\left(\begin{array}[]{c}-n,n+2k+2\nu,\mu+k+\tfrac{1}{2}\\\
\nu+k+\tfrac{1}{2},2\mu+2k+1\end{array};1\right).$
The ${}_{3}F_{2}$ generalized hypergeometric function appearing in the above formula may be simplified using Watson's sum
$\displaystyle{}_{3}F_{2}\left(\begin{array}[]{c}a,b,c\\\
\frac{1}{2}(a+b+1),2c\end{array};1\right)$ $\displaystyle\hskip
36.98866pt=\frac{\sqrt{\pi}\,\Gamma\left(c+\frac{1}{2}\right)\Gamma\left(\frac{1}{2}(a+b+1)\right)\Gamma\left(c+\frac{1}{2}(1-a-b)\right)}{\Gamma\left(\frac{1}{2}(a+1)\right)\Gamma\left(\frac{1}{2}(b+1)\right)\Gamma\left(c+\frac{1}{2}(1-a)\right)\Gamma\left(c+\frac{1}{2}(1-b)\right)},$
where $\mbox{Re}(2c-a-b)>-1$ (see for instance (16.4.6) in Olver et al. [23]),
therefore
$\displaystyle\frac{1}{\Gamma(\nu+k+\tfrac{1}{2})\Gamma(\mu+k+1)}\,{}_{3}F_{2}\left(\begin{array}[]{c}-n,n+2k+2\nu,\mu+k+\tfrac{1}{2}\\\
\nu+k+\tfrac{1}{2},2\mu+2k+1\end{array};1\right)$ (18)
$\displaystyle=\frac{\sqrt{\pi}\,\Gamma(\mu-\nu+1)}{\Gamma\left(\tfrac{1-n}{2}\right)\Gamma\left(\nu+k+\tfrac{n+1}{2}\right)\Gamma\left(\mu+k+1+\tfrac{n}{2}\right)\Gamma\left(\mu-\nu+1-\tfrac{n}{2}\right)},$
for $\mbox{Re}(\mu-\nu)>-1$. By inserting (18) in (13), it follows that
$\displaystyle\frac{1}{(1+\rho^{2}-2\rho x)^{\nu}}$ $\displaystyle=$
$\displaystyle\frac{\pi\Gamma(\mu)\Gamma(\mu-\nu+1)}{2^{2\nu-1}\Gamma(\nu)}\sum_{k=0}^{\infty}(\mu+k)C_{k}^{\mu}(x)\frac{\rho^{k}}{2^{2k}}$
$\displaystyle\times\sum_{n=0}^{\infty}\frac{\rho^{n}\Gamma(2\nu+2k+n)}{n!\Gamma\left(\tfrac{1-n}{2}\right)\Gamma\left(\nu+k+\tfrac{n+1}{2}\right)\Gamma\left(\mu+k+1+\tfrac{n}{2}\right)\Gamma\left(\mu-\nu+1-\tfrac{n}{2}\right)}.$
It is straightforward to show using (5) and
$\Gamma(z-n)=(-1)^{n}\frac{\Gamma(z)}{(-z+1)_{n}},$
for $n\in{\mathbf{N}}_{0}$ and $z\in{\mathbf{C}}\setminus-{\mathbf{N}}_{0}$,
and the duplication formula (i.e., (5.5.5) in Olver et al. (2010) [23])
$\Gamma(2z)=\frac{2^{2z-1}}{\sqrt{\pi}}\,\Gamma(z)\Gamma\left(z+\frac{1}{2}\right),$
provided $2z\not\in-{\mathbf{N}}_{0}$, that
$\displaystyle\sum_{n=0}^{\infty}\frac{\rho^{n}\Gamma(2\nu+2k+n)}{n!\Gamma\left(\tfrac{1-n}{2}\right)\Gamma\left(\nu+k+\tfrac{n+1}{2}\right)\Gamma\left(\mu+k+1+\tfrac{n}{2}\right)\Gamma\left(\mu-\nu+1-\tfrac{n}{2}\right)}$
$\displaystyle=\frac{2^{2\nu+2k-1}\Gamma(\nu+k)}{\pi\Gamma(\mu+k+1)\Gamma(\mu-\nu+1)}\,{}_{2}F_{1}\left(\nu+k,\nu-\mu;\mu+k+1;\rho^{2}\right),$
so therefore
$\displaystyle\frac{1}{(1+\rho^{2}-2\rho x)^{\nu}}$ $\displaystyle=$
$\displaystyle\frac{\Gamma(\mu)}{\Gamma(\nu)}\sum_{k=0}^{\infty}(\mu+k)C_{k}^{\mu}(x)\rho^{k}$
(19) $\displaystyle\hskip
5.69046pt\times\frac{\Gamma(\nu+k)}{\Gamma(\mu+k+1)}\,{}_{2}F_{1}\left(\nu+k,\nu-\mu;\mu+k+1;\rho^{2}\right).$
Finally utilizing the quadratic transformation of the hypergeometric function
${}_{2}F_{1}(a,b;a-b+1;z)=(1+z)^{-a}{}_{2}F_{1}\left(\frac{a}{2},\frac{a+1}{2};a-b+1;\frac{4z}{(z+1)^{2}}\right),$
for $|z|<1$ (see (3.1.9) in Andrews, Askey & Roy (1999) [2]), combined with
the definition of the associated Legendre function of the second kind
$Q_{\nu}^{\mu}:{\mathbf{C}}\setminus(-\infty,1]\to{\mathbf{C}}$ in terms of
the Gauss hypergeometric function
$\displaystyle
Q_{\nu}^{\mu}(z):=\frac{\sqrt{\pi}\,e^{i\pi\mu}\Gamma(\nu+\mu+1)(z^{2}-1)^{\mu/2}}{2^{\nu+1}\Gamma(\nu+\frac{3}{2})z^{\nu+\mu+1}}$
$\displaystyle\hskip
108.12054pt\times{}_{2}F_{1}\left(\frac{\nu+\mu+2}{2},\frac{\nu+\mu+1}{2};\nu+\frac{3}{2};\frac{1}{z^{2}}\right),$
(20)
for $|z|>1$ and $\nu+\mu+1\notin-{\mathbf{N}}_{0}$ (cf. Section 14.21 and
(14.3.7) in Olver et al. (2010) [23]), one can show that
$\displaystyle{}_{2}F_{1}\left(\nu+k,\nu-\mu;\mu+k+1;\rho^{2}\right)$
$\displaystyle\hskip
56.9055pt=\frac{\Gamma\left(\mu+k+1\right)e^{i\pi(\mu-\nu+1/2)}}{\sqrt{\pi}\,\Gamma(\nu+k)\rho^{\mu+k+1/2}(1-\rho^{2})^{\nu-\mu-1/2}}Q_{k+\mu-1/2}^{\nu-\mu-1/2}\left(\frac{1+\rho^{2}}{2\rho}\right),$
which, when used in (19), produces (2). Since the Gegenbauer polynomial is just a symmetric Jacobi polynomial (9), Theorem 9.1.1 in Szegő (1959) [26] (expansion of an analytic function in a Jacobi series) applies: the function $f_{z}:{\mathbf{C}}\to{\mathbf{C}}$ defined by $f_{z}(x):=(z-x)^{-\nu}$ is analytic in $[-1,1]$, so the above expansion in Gegenbauer polynomials converges whenever the point $z\in{\mathbf{C}}$ lies on any ellipse with foci at $\pm 1$ and $x$ lies at any point interior to that ellipse.
$\hfill\blacksquare$
## 3 Generalizations, Extensions and Applications
By considering in (2) the substitution $\nu=d/2-1$ and the map
$\nu\mapsto-\nu/2$, one obtains the formula
$\displaystyle\frac{\rho^{(d-1)/2}(1-\rho^{2})^{\nu-(d-1)/2}}{(1+\rho^{2}-2\rho
x)^{\nu}}$ $\displaystyle=$
$\displaystyle\frac{e^{-i\pi(\nu-(d-1)/2)}\Gamma(\frac{d-2}{2})}{2\sqrt{\pi}\,\Gamma(\nu)}$
$\displaystyle\times\sum_{{n}=0}^{\infty}(2{n}+d-2)Q_{{n}+(d-3)/2}^{\nu-(d-1)/2}\left(\frac{1+\rho^{2}}{2\rho}\right)C_{n}^{d/2-1}(x).$
This formula generalizes (9.9.2) in Andrews, Askey & Roy (1999) [2].
By taking the limit as $\mu\to 1/2$ in (1), one obtains a general result which is an expansion over Legendre polynomials, namely
$\frac{1}{(z-x)^{\nu}}=\frac{e^{i\pi(1-\nu)}(z^{2}-1)^{(1-\nu)/2}}{\Gamma(\nu)}\sum_{n=0}^{\infty}(2n+1)Q_{n}^{\nu-1}(z)P_{n}(x),$
(21)
using
$P_{n}(x)=C_{n}^{1/2}(x),$ (22)
which is clear by comparing (cf. (18.7.9) of Olver et al. (2010) and (4) or
(10))
$P_{n}(x):={}_{2}F_{1}\left(-{n},{n}+1;1;\frac{1-x}{2}\right),$ (23)
and (4). If one takes $\nu=1$ in (21) then one has an expansion of the Cauchy
denominator which generalizes Heine’s formula (see for instance Olver (1997)
[22, Ex. 13.1]; Heine (1878) [18, p. 78])
$\frac{1}{z-x}=\sum_{n=0}^{\infty}(2n+1)Q_{n}(z)P_{n}(x).$
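Heine's formula is straightforward to check numerically. The following sketch (an illustration only) computes $Q_{n}(z)$ for $z>1$ via Neumann's integral representation $Q_{n}(z)=\frac{1}{2}\int_{-1}^{1}P_{n}(t)/(z-t)\,dt$ and sums a truncated series; the choices of $z$, $x$ and the number of terms are arbitrary.

```python
from scipy.integrate import quad
from scipy.special import eval_legendre

def legendre_Q(n, z):
    # Legendre function of the second kind via Neumann's integral, valid for z outside [-1, 1]
    val, _ = quad(lambda t: eval_legendre(n, t) / (z - t), -1.0, 1.0)
    return 0.5 * val

z, x = 1.7, 0.4    # z outside [-1, 1], x inside
series = sum((2 * n + 1) * legendre_Q(n, z) * eval_legendre(n, x) for n in range(30))
print(series, 1.0 / (z - x))   # the truncated series should match 1/(z - x) closely
```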
By taking the limit as $\mu\to 1$ in (1), one obtains a general result which
is an expansion over Chebyshev polynomials of the second kind, namely
$\frac{1}{(z-x)^{\nu}}=\frac{2^{3/2}e^{i\pi(3/2-\nu)}}{\sqrt{\pi}\,\Gamma(\nu)(z^{2}-1)^{\nu/2-3/4}}\sum_{n=0}^{\infty}(n+1)Q_{n+1/2}^{\nu-3/2}(z)U_{n}(x),$
(24)
using (18.7.4) in Olver et al. (2010) [23], $U_{n}(x)=C_{n}^{1}(x).$ If one
considers the case $\nu=1$ in (24) then the associated Legendre function of
the second kind reduces to an elementary function through (8.6.11) in
Abramowitz & Stegun (1972) [1], namely
$\frac{1}{z-x}=2\sum_{n=0}^{\infty}\frac{U_{n}(x)}{(z+\sqrt{z^{2}-1})^{n+1}}.$
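The Chebyshev case admits an equally simple check; in the sketch below (an illustration only, with arbitrarily chosen $z$, $x$ and truncation length) the right-hand side is summed directly.

```python
import numpy as np
from scipy.special import eval_chebyu

z, x = 1.6, 0.25
w = z + np.sqrt(z**2 - 1.0)                      # z + sqrt(z^2 - 1) > 1, so the series converges
series = 2.0 * sum(eval_chebyu(n, x) / w ** (n + 1) for n in range(80))
print(series, 1.0 / (z - x))                     # should agree closely
```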
By taking the limit as $\nu\to 1$ in (1), one produces the Gegenbauer
expansion of the Cauchy denominator given in Durand, Fishbane & Simmons (1976)
[10, (7.2)], namely
$\frac{1}{z-x}=\frac{2^{\mu+1/2}}{\sqrt{\pi}}\Gamma(\mu)e^{i\pi(\mu-1/2)}(z^{2}-1)^{\mu/2-1/4}\sum_{n=0}^{\infty}(n+\mu)Q_{n+\mu-1/2}^{-\mu+1/2}(z)C_{n}^{\mu}(x).$
Using (2.4) therein, the associated Legendre function of the second kind is
converted to the Gegenbauer function of the second kind.
If one considers the limit as $\mu\to 0$ in (1) using
$\lim_{\mu\to 0}\frac{{n}+\mu}{\mu}C_{n}^{\mu}(x)=\epsilon_{n}T_{n}(x)$
(see for instance (6.4.13) in Andrews, Askey & Roy (1999) [2]), where
$T_{n}:{\mathbf{C}}\to{\mathbf{C}}$ is the Chebyshev polynomial of the first
kind defined as (see Section 5.7.2 in Magnus, Oberhettinger & Soni (1966)
[21])
$T_{n}(x):={}_{2}F_{1}\left(-n,n;\frac{1}{2};\frac{1-x}{2}\right),$
and $\epsilon_{n}=2-\delta_{{n},0}$ is the Neumann factor, commonly appearing
in Fourier cosine series, then one obtains
$\frac{1}{(z-x)^{\nu}}=\sqrt{\frac{2}{\pi}}\frac{e^{-i\pi(\nu-1/2)}(z^{2}-1)^{-\nu/2+1/4}}{\Gamma(\nu)}\sum_{n=0}^{\infty}\epsilon_{n}T_{n}(x)Q_{n-1/2}^{\nu-1/2}(z).$
(25)
The result (25) is a generalization of Heine's reciprocal square-root identity (see Heine (1881) [19, p. 286]; Cohl & Tohline (1999) [9, (A5)]).
Polynomials in $(z-x)$ also naturally arise by considering the limit $\nu\to
n\in-{\mathbf{N}}_{0}.$ This limit is given in (4.4) of Cohl & Dominici (2010)
[7], namely
$(z-x)^{q}=i(-1)^{q+1}\sqrt{\frac{2}{\pi}}(z^{2}-1)^{q/2+1/4}\sum_{n=0}^{q}\epsilon_{n}T_{n}(x)\frac{(-q)_{n}}{(q+n)!}Q_{n-1/2}^{q+1/2}(z),$
(26)
for $q\in{\mathbf{N}}_{0}$.
Note that all of the above formulae are restricted by the convergence criterion given by Theorem 9.1.1 in Szegő (1959) [26] (expansion of an analytic function in a Jacobi series): since the functions on the left-hand side are analytic in $[-1,1]$, the expansion formulae converge whenever the point $z\in{\mathbf{C}}$ lies on any ellipse with foci at $\pm 1$ and $x$ lies at any point interior to that ellipse. The exception is (26), which converges for all points $z,x\in{\mathbf{C}}$ since the function on the left-hand side is entire.
An interesting extension of the results presented in this paper (originally uploaded to arXiv as Cohl (2011) [6]) has recently been obtained by Szmytkowski (2011) [28], yielding formulas such as
$\displaystyle\sum_{n=0}^{\infty}\frac{n+\mu}{\mu}\mathrm{P}_{n+\mu-1/2}^{\nu-\mu}(t)C_{n}^{\mu}(x)=\frac{\sqrt{\pi}\,(1-t^{2})^{(\nu-\mu)/2}}{2^{\mu-1/2}\Gamma(\mu+1)\Gamma\left(\frac{1}{2}-\nu\right)}$
$\displaystyle\times\left\\{\begin{array}[]{ll}\displaystyle
0&\qquad\mathrm{if}\ -1<x<t<1,\\\\[5.0pt]
\displaystyle(x-t)^{-\nu-1/2}&\qquad\mathrm{if}\ -1<t<x<1,\end{array}\right.$
and
$\displaystyle\sum_{n=0}^{\infty}\frac{n+\mu}{\mu}\mathrm{Q}_{n+\mu-1/2}^{\nu-\mu}(t)C_{n}^{\mu}(x)=\frac{\sqrt{\pi}\,\Gamma\left(\nu+\frac{1}{2}\right)(1-t^{2})^{(\nu-\mu)/2}}{2^{\mu+1/2}\Gamma(\mu+1)}$
$\displaystyle\times\left\\{\begin{array}[]{ll}\displaystyle(t-x)^{-\nu-1/2}&\qquad\mathrm{if}\
-1<x<t<1,\\\\[5.0pt]
\displaystyle(x-t)^{-\nu-1/2}\cos[\pi(\nu+\tfrac{1}{2})]&\qquad\mathrm{if}\
-1<t<x<1,\end{array}\right.$
where $\mbox{Re}\,\mu>-1/2$, $\mbox{Re}\,\nu<1/2$ and
$\mathrm{P}_{\nu}^{\mu},\mathrm{Q}_{\nu}^{\mu}:(-1,1)\to{\mathbf{C}}$ are
Ferrers functions (associated Legendre functions on-the-cut) of the first and
second kind. The Ferrers functions of the first and second kind can be defined
using Olver et al. (2010) [23, (14.3.11-12)].
## 4 Expansion of a power-law fundamental solution of the polyharmonic
equation
A fundamental solution for the polyharmonic equation on Euclidean space
${\mathbf{R}}^{d}$ is a function
${\mathcal{G}}_{k}^{d}:({\mathbf{R}}^{d}\times{\mathbf{R}}^{d})\setminus\\{({\bf
x},{\bf x}):{\bf x}\in{\mathbf{R}}^{d}\\}\to{\mathbf{R}}$ which satisfies the
equation
$(-\Delta)^{k}{{\mathcal{G}}}_{k}^{d}({\bf x},{\bf x}^{\prime})=\delta({\bf
x}-{\bf x}^{\prime}),$ (31)
where $\Delta:C^{p}({\mathbf{R}}^{d})\to C^{p-2}({\mathbf{R}}^{d}),$ $p\geq
2$, is the Laplacian operator on ${\mathbf{R}}^{d}$ defined by
$\Delta:=\sum^{d}_{i=1}\frac{\partial^{2}}{\partial x_{i}^{2}},$
${\bf x}=(x_{1},\ldots,x_{d}),{{\bf
x}^{\prime}}=(x_{1}^{\prime},\ldots,x_{d}^{\prime})\in{\mathbf{R}}^{d}$, and
$\delta$ is the Dirac delta function. Note that we introduce a minus sign into
the equations where the Laplacian is used, such as in (31), to make the
resulting operator positive. By Euclidean space ${\mathbf{R}}^{d}$, we mean
the normed vector space given by the pair $({\mathbf{R}}^{d},\|\cdot\|)$,
where $\|\cdot\|:{\mathbf{R}}^{d}\to[0,\infty)$ is the Euclidean norm on
${\mathbf{R}}^{d}$ defined by $\|{\bf
x}\|:=\sqrt{x_{1}^{2}+\cdots+x_{d}^{2}},$ with inner product
$(\cdot,\cdot):{\mathbf{R}}^{d}\times{\mathbf{R}}^{d}\to{\mathbf{R}}$ defined
as
$({\bf x},{{\bf x}^{\prime}}):=\sum_{i=1}^{d}x_{i}x_{i}^{\prime}.$ (32)
Then ${\mathbf{R}}^{d}$ is a $C^{\infty}$ Riemannian manifold with Riemannian
metric induced from the inner product (32). Set ${\mathbf{S}}^{d-1}=\\{{\bf
x}\in{\mathbf{R}}^{d}:({\bf x},{\bf x})=1\\},$ then ${\mathbf{S}}^{d-1},$ the
$(d-1)$-dimensional unit hypersphere, is a regular submanifold of
${\mathbf{R}}^{d}$ and a $C^{\infty}$ Riemannian manifold with Riemannian
metric induced from that on ${\mathbf{R}}^{d}$.
###### Theorem 4.1.
Let $d,k\in{\mathbf{N}}$. Define
${\mathcal{G}}_{k}^{d}({\bf x},{\bf
x}^{\prime}):=\left\\{\begin{array}[]{ll}{\displaystyle\frac{(-1)^{k+d/2+1}\
\|{\bf x}-{\bf x}^{\prime}\|^{2k-d}}{(k-1)!\ \left(k-d/2\right)!\
2^{2k-1}\pi^{d/2}}\left(\log\|{\bf x}-{\bf
x}^{\prime}\|-\beta_{p,d}\right)}\\\\[2.0pt] \hskip 210.55022pt\mathrm{if}\
d\,\,\mathrm{even},\ k\geq d/2,\\\\[10.0pt]
{\displaystyle\frac{\Gamma(d/2-k)\|{\bf x}-{\bf x}^{\prime}\|^{2k-d}}{(k-1)!\
2^{2k}\pi^{d/2}}}\hskip 95.88564pt\mathrm{otherwise},\end{array}\right.$
where $p=k-d/2$, $\beta_{p,d}\in{\mathbf{Q}}$ is defined as
$\beta_{p,d}:=\frac{1}{2}\left[H_{p}+H_{d/2+p-1}-H_{d/2-1}\right],$ with
$H_{j}\in{\mathbf{Q}}$ being the $j$th harmonic number
$H_{j}:=\sum_{i=1}^{j}\frac{1}{i},$
then ${\mathcal{G}}_{k}^{d}$ is a fundamental solution for $(-\Delta)^{k}$ on
Euclidean space ${\mathbf{R}}^{d}$.
Proof See Cohl (2010) [4] and Boyling (1996) [3].
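For concreteness, the rational constant $\beta_{p,d}$ appearing in Theorem 4.1 is easy to tabulate. The following is a minimal sketch (illustrative only; in this snippet `p` denotes $p=k-d/2$ as in the theorem, not a prime).

```python
from fractions import Fraction

def harmonic(j):
    # j-th harmonic number H_j = sum_{i=1}^{j} 1/i, with H_0 = 0
    return sum(Fraction(1, i) for i in range(1, j + 1))

def beta(p, d):
    # beta_{p,d} = (H_p + H_{d/2 + p - 1} - H_{d/2 - 1}) / 2, for d even and p = k - d/2 >= 0
    assert d % 2 == 0 and p >= 0
    return (harmonic(p) + harmonic(d // 2 + p - 1) - harmonic(d // 2 - 1)) * Fraction(1, 2)

print(beta(0, 2), beta(1, 2), beta(0, 4), beta(1, 4))   # exact rational values
```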
Consider the following functions
${\mathfrak{g}}_{k}^{d},{\mathfrak{l}}_{k}^{d}:({\mathbf{R}}^{d}\times{\mathbf{R}}^{d})\setminus\\{({\bf
x},{\bf x}):{\bf x}\in{\mathbf{R}}^{d}\\}\to{\mathbf{R}}$ defined for $d$ odd
and for $d$ even with $k\leq d/2-1$ as a power-law, namely
${\mathfrak{g}}_{k}^{d}({\bf x},{{\bf x}^{\prime}}):=\|{\bf x}-{{\bf
x}^{\prime}}\|^{2k-d},$ (33)
and for $d$ even, $k\geq d/2$, with logarithmic behavior as
${\mathfrak{l}}_{k}^{d}({\bf x},{{\bf x}^{\prime}}):=\|{\bf x}-{{\bf
x}^{\prime}}\|^{2p}\left(\log\|{\bf x}-{{\bf
x}^{\prime}}\|-\beta_{p,d}\right),$
with $p=k-d/2$. By Theorem 4.1 we see that the functions
${\mathfrak{g}}_{k}^{d}$ and ${\mathfrak{l}}_{k}^{d}$ equal real non-zero
constant multiples of ${\mathcal{G}}_{k}^{d}$ for appropriate parameters.
Therefore by (31), ${\mathfrak{g}}_{k}^{d}$ and ${\mathfrak{l}}_{k}^{d}$ are
fundamental solutions of the polyharmonic equation for appropriate parameters.
In this paper, we only consider functions with power-law behavior, although in
future publications we will consider the logarithmic case (see Cohl (2012) [5]
for the relevant Fourier expansions).
Now we consider the set of hyperspherical coordinate systems which parametrize
points on ${\mathbf{R}}^{d}$. The Euclidean distance between two points
represented in these coordinate systems is given by
$\displaystyle\|{\bf x}-{{\bf
x}^{\prime}}\|=\sqrt{2rr^{\prime}}\left[z-\cos\gamma\right]^{1/2},$
where the toroidal parameter $z\in(1,\infty)$ (cf. (2.6) in Cohl et al. (2001) [8]) is given by
$z:=\frac{r^{2}+{r^{\prime}}^{2}}{\displaystyle 2rr^{\prime}},$
and the separation angle $\gamma\in[0,\pi]$ is given through
$\cos\gamma=\frac{({\bf x},{\bf x^{\prime}})}{rr^{\prime}},$ (34)
where $r,r^{\prime}\in[0,\infty)$ are defined such that $r:=\|{\bf x}\|$ and
$r^{\prime}:=\|{\bf x^{\prime}}\|.$ We will use these quantities to derive
Gegenbauer expansions of power-law fundamental solutions for powers of the
Laplacian ${\mathfrak{g}}_{k}^{d}$ (33) represented in hyperspherical
coordinates.
###### Corollary 4.2.
For $d\in\\{3,4,5,\ldots\\}$, $\nu\in{\mathbf{C}}$, ${\bf x},{{\bf
x}^{\prime}}\in{\mathbf{R}}^{d}$ with $r=\|{\bf x}\|$, $r^{\prime}=\|{{\bf
x}^{\prime}}\|$, and $\cos\gamma=({\bf x},{{\bf x}^{\prime}})/(rr^{\prime})$,
the following formula holds
$\displaystyle\|{\bf x}-{{\bf x}^{\prime}}\|^{\nu}$ $\displaystyle=$
$\displaystyle\frac{e^{i\pi(\nu+d-1)/2}\Gamma\left(\frac{d-2}{2}\right)}{2\sqrt{\pi}\,\Gamma\left(-\frac{\nu}{2}\right)}\frac{\left(r_{>}^{2}-r_{<}^{2}\right)^{(\nu+d-1)/2}}{\left(rr^{\prime}\right)^{(d-1)/2}}$
(35)
$\displaystyle\times\sum_{{n}=0}^{\infty}\left(2{n}+d-2\right)Q_{{n}+(d-3)/2}^{(1-\nu-d)/2}\biggl{(}\frac{r^{2}+{r^{\prime}}^{2}}{2rr^{\prime}}\biggr{)}C_{n}^{d/2-1}(\cos\gamma),$
where $r_{\lessgtr}:={\min\atop\max}\\{r,r^{\prime}\\}$.
Note that (35) is seen to be a generalization of Laplace’s expansion on
${\mathbf{R}}^{3}$ (see for instance Sack (1964) [24])
$\frac{1}{\|{\bf x}-{{\bf
x}^{\prime}}\|}=\sum_{n=0}^{\infty}\frac{r_{<}^{n}}{r_{>}^{n+1}}P_{n}(\cos\gamma),$
which is demonstrated by utilizing (22) and simplifying the associated
Legendre function of the second kind in (35) through
$Q_{-1/2}^{1/2}:{\mathbf{C}}\setminus(-\infty,1]\to{\mathbf{C}}$ defined such
that
$Q_{-1/2}^{1/2}(z)=i\sqrt{\frac{\pi}{2}}(z^{2}-1)^{-1/4}$
(cf. (8.6.10) in Abramowitz & Stegun (1972) [1] and (20)).
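As a quick numerical illustration (not part of the derivation), Laplace's expansion can be checked directly with Legendre polynomials; the values of $r$, $r'$, $\gamma$ and the truncation length below are arbitrary.

```python
import numpy as np
from scipy.special import eval_legendre

r, rp, gamma = 0.8, 1.9, 0.7                 # radii and separation angle of two points in R^3
r_less, r_great = min(r, rp), max(r, rp)
cosg = np.cos(gamma)

exact = 1.0 / np.sqrt(r**2 + rp**2 - 2 * r * rp * cosg)     # 1 / ||x - x'||
series = sum(r_less**n / r_great**(n + 1) * eval_legendre(n, cosg) for n in range(80))
print(exact, series)   # agreement is rapid since r_< / r_> < 1
```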
The addition theorem for hyperspherical harmonics, which generalizes
$P_{n}(\cos\gamma)=\frac{4\pi}{2{n}+1}\sum_{m=-{n}}^{n}Y_{{n},m}({\mathbf{\widehat{x}}})\overline{Y_{{n},m}(\mathbf{\widehat{x}^{\prime})}},$
(36)
where $P_{n}(x)$ is the Legendre polynomial of degree
${n}\in{\mathbf{N}}_{0}$, for $d=3$, is given by
$\sum_{K}Y_{n}^{K}(\widehat{\bf x})\overline{Y_{n}^{K}({\widehat{\bf
x}^{\prime}})}=\frac{\Gamma(d/2)}{2\pi^{d/2}(d-2)}(2n+d-2)C_{n}^{d/2-1}(\cos\gamma),$
(37)
where $K$ stands for a set of $(d-2)$-quantum numbers identifying degenerate
harmonics for a given value of ${n}$ and $d$, and $\gamma$ is the separation
angle (34). The functions $Y_{{n}}^{K}:{\mathbf{S}}^{d-1}\to{\mathbf{C}}$ are
the normalized hyperspherical harmonics, and
$Y_{n,k}:{\mathbf{S}}^{2}\to{\mathbf{C}}$ are the normalized spherical
harmonics for $d=3$. Note that
${\mathbf{\widehat{x}}},{\mathbf{\widehat{x}^{\prime}}}\in{\mathbf{S}}^{d-1}$,
are the vectors of unit length in the direction of ${\bf x},{{\bf
x}^{\prime}}\in{\mathbf{R}}^{d}$ respectively. For a proof of the addition
theorem for hyperspherical harmonics (37), see Wen & Avery (1985) [30] and for
a relevant discussion, see Section 10.2.1 in Fano & Rau (1996) [12]. The
correspondence between (36) and (37) arises from (4) and (23), namely from (22).
One can use the addition theorem for hyperspherical harmonics to expand a
fundamental solution of the polyharmonic equation on ${\mathbf{R}}^{d}$.
Through the use of the addition theorem for hyperspherical harmonics we see
that the Gegenbauer polynomials $C_{n}^{d/2-1}(\cos\gamma)$ are hyperspherical
harmonics when regarded as a function of $\widehat{\bf x}$ only (see Vilenkin
(1968) [29]). Normalization of the hyperspherical harmonics is accomplished
through the following integral
$\int_{{\mathbf{S}}^{d-1}}Y_{n}^{K}({\mathbf{\widehat{x}}})\overline{Y_{n}^{K}({\mathbf{\widehat{x}}})}d\Omega=1,$
where $d\Omega$ is the Riemannian volume measure on ${\mathbf{S}}^{d-1}$. The
degeneracy, i.e., number of linearly independent solutions for a particular
value of ${n}$ and $d$, for the space of hyperspherical harmonics is given by
$(2{n}+d-2)\frac{(d-3+{n})!}{{n}!(d-2)!}$ (38)
(see (9.2.11) in Vilenkin (1968) [29]). The total number of linearly
independent solutions (38) can be determined by counting the total number of
terms in the sum over $K$ in (37). Note that this formula (38) reduces to the
standard result in $d=3$ with a degeneracy given by $2{n}+1$ and in $d=4$ with
a degeneracy given by $({n}+1)^{2}$.
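A short sketch (illustrative only) evaluating (38) and checking the quoted $d=3$ and $d=4$ degeneracies:

```python
from math import factorial

def degeneracy(n, d):
    # number of linearly independent hyperspherical harmonics of degree n on S^{d-1}, eq. (38), d >= 3
    return (2 * n + d - 2) * factorial(d - 3 + n) // (factorial(n) * factorial(d - 2))

for n in range(5):
    print(n, degeneracy(n, 3), 2 * n + 1, degeneracy(n, 4), (n + 1) ** 2)
```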
One can show the consistency of Corollary 4.2 with the result for $d=2$ given
by
$\displaystyle\|{\bf x}-{{\bf
x}^{\prime}}\|^{\nu}=\frac{e^{i\pi(\nu+1)/2}}{\Gamma(-\nu/2)}\frac{(r_{>}^{2}-r_{<}^{2})^{(\nu+1)/2}}{\sqrt{\pi
rr^{\prime}}}\sum_{m=-\infty}^{\infty}e^{im(\phi-\phi^{\prime})}Q_{m-1/2}^{-(\nu+1)/2}\left(\frac{r^{2}+{r^{\prime}}^{2}}{2rr^{\prime}}\right),$
where $\nu\in{\mathbf{C}}\setminus\\{0,2,4,\ldots\\}$, by considering the
limit as $\mu\to 0$ in (1) (see (25) above). These expansions are useful in
that they allow one to perform azimuthal Fourier and Gegenbauer polynomial
analysis for power-law fundamental solutions of the polyharmonic equation on
${\mathbf{R}}^{d}$.
## 5 Conclusion
In this paper, we introduced a generalization of the generating function for
Gegenbauer polynomials which allows one to expand arbitrary powers of the
distance between two points on $d$-dimensional Euclidean space
${\mathbf{R}}^{d}$ in terms of hyperspherical harmonics. This result has
already found physical applications such as in Szmytkowski (2011) [27] who
uses this result to obtain a solution of the momentum-space Schrödinger
equation for bound states of the $d$-dimensional Coulomb problem. The
Gegenbauer expansions presented in this paper can be used in conjunction with
corresponding Fourier expansions (Cohl & Dominici (2010) [7]) to generate
infinite sequences of addition theorems for the Fourier coefficients (see Cohl
(2010) [4]) of these expansions. In future publications, we will present some
of these addition theorems as well as extensions related to Fourier and
Gegenbauer expansions for logarithmic fundamental solutions of the
polyharmonic equation.
## Acknowledgements
Much thanks to T. H. Koornwinder for valuable discussions. Part of this work
was conducted while H. S. Cohl was a National Research Council Research
Postdoctoral Associate in the Applied and Computational Mathematics Division
at the National Institute of Standards and Technology, Gaithersburg, Maryland,
U.S.A.
## References
* [1] M. Abramowitz and I. A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55 of National Bureau of Standards Applied Mathematics Series. U.S. Government Printing Office, Washington, D.C., 1972.
* [2] G. E. Andrews, R. Askey, and R. Roy. Special functions, volume 71 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1999.
* [3] J. B. Boyling. Green’s functions for polynomials in the Laplacian. Zeitschrift für Angewandte Mathematik und Physik, 47(3):485–492, 1996.
* [4] H. S. Cohl. Fourier and Gegenbauer expansions for fundamental solutions of the Laplacian and powers in ${\mathbf{R}}^{d}$ and ${\mathbf{H}}^{d}$. PhD thesis, The University of Auckland, Auckland, New Zealand, 2010. xiv+190 pages.
* [5] H. S. Cohl. Fourier expansions for a logarithmic fundamental solution of the polyharmonic equation. arXiv:1202.1811, 2012.
* [6] H. S. Cohl. On a generalization of the generating function for Gegenbauer polynomials. arXiv:1105.2735, 2012.
* [7] H. S. Cohl and D. E. Dominici. Generalized Heine’s identity for complex Fourier series of binomials. Proceedings of the Royal Society A, 467:333–345, 2011.
* [8] H. S. Cohl, A. R. P. Rau, J. E. Tohline, D. A. Browne, J. E. Cazes, and E. I. Barnes. Useful alternative to the multipole expansion of $1/r$ potentials. Physical Review A: Atomic and Molecular Physics and Dynamics, 64(5):052509, Oct 2001.
* [9] H. S. Cohl and J. E. Tohline. A Compact Cylindrical Green’s Function Expansion for the Solution of Potential Problems. The Astrophysical Journal, 527:86–101, December 1999.
* [10] L. Durand, P. M. Fishbane, and L. M. Simmons, Jr. Expansion formulas and addition theorems for Gegenbauer functions. Journal of Mathematical Physics, 17(11):1933–1948, 1976.
* [11] A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi. Higher transcendental functions. Vol. II. Robert E. Krieger Publishing Co. Inc., Melbourne, Fla., 1981.
* [12] U. Fano and A. R. P. Rau. Symmetries in quantum physics. Academic Press Inc., San Diego, CA, 1996.
* [13] L. Gegenbauer. Über einige bestimmte Integrale. Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften. Mathematische-Naturwissenschaftliche Classe., 70:433–443, 1874.
* [14] L. Gegenbauer. Über die Functionen $C_{n}^{\nu}(x)$. Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften. Mathematische-Naturwissenschaftliche Classe., 75:891–896, 1877.
* [15] L. Gegenbauer. Zur theorie der functionen $C_{n}^{\nu}(x)$. Denkschriften der Kaiserlichen Akademie der Wissenschaften zu Wien. Mathematisch-naturwissenschaftliche Classe, 48:293–316, 1884.
* [16] L. Gegenbauer. Über die Functionen $C_{n}^{\nu}(x)$. Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften. Mathematische-Naturwissenschaftliche Classe., 97:259–270, 1888.
* [17] L. Gegenbauer. Das Additionstheorem der Functionen $C_{n}^{\nu}(x)$. Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften. Mathematische-Naturwissenschaftliche Classe., 102:942–950, 1893.
* [18] E. Heine. Handbuch der Kugelfunctionen, Theorie und Anwendungen (volume 1). Druck und Verlag von G. Reimer, Berlin, 1878.
* [19] E. Heine. Handbuch der Kugelfunctionen, Theorie und Anwendungen (volume 2). Druck und Verlag von G. Reimer, Berlin, 1881.
* [20] M. E. H. Ismail. Classical and quantum orthogonal polynomials in one variable, volume 98 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2005. With two chapters by Walter Van Assche, With a foreword by Richard A. Askey.
* [21] W. Magnus, F. Oberhettinger, and R. P. Soni. Formulas and theorems for the special functions of mathematical physics. Third enlarged edition. Die Grundlehren der mathematischen Wissenschaften, Band 52. Springer-Verlag New York, Inc., New York, 1966.
* [22] F. W. J. Olver. Asymptotics and special functions. AKP Classics. A K Peters Ltd., Wellesley, MA, 1997. Reprint of the 1974 original [Academic Press, New York].
* [23] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST handbook of mathematical functions. Cambridge University Press, Cambridge, 2010.
* [24] R. A. Sack. Generalization of Laplace’s expansion to arbitrary powers and functions of the distance between two points. Journal of Mathematical Physics, 5:245–251, 1964.
* [25] H. M. Srivastava and H. L. Manocha. A treatise on generating functions. Ellis Horwood Series: Mathematics and its Applications. Ellis Horwood Ltd., Chichester, 1984.
* [26] G. Szegő. Orthogonal polynomials. American Mathematical Society Colloquium Publications, Vol. 23. Revised ed. American Mathematical Society, Providence, R.I., 1959.
* [27] R. Szmytkowski. Alternative approach to the solution of the momentum-space Schrödinger equation for bound states of the $N$-dimensional Coulomb problem. Annalen der Physik, 524(6-7):345–352, 2012.
* [28] R. Szmytkowski. Some integrals and series involving the Gegenbauer polynomials and the Legendre functions on the cut $(-1,1)$. Integral Transforms and Special Functions, 23(11):847–852, 2012.
* [29] N. Ja. Vilenkin. Special functions and the theory of group representations. Translated from the Russian by V. N. Singh. Translations of Mathematical Monographs, Vol. 22. American Mathematical Society, Providence, R. I., 1968.
* [30] Z. Y. Wen and J. Avery. Some properties of hyperspherical harmonics. Journal of Mathematical Physics, 26(3):396–403, 1985.
|
arxiv-papers
| 2011-05-13T14:19:20 |
2024-09-04T02:49:18.741570
|
{
"license": "Public Domain",
"authors": "Howard S. Cohl",
"submitter": "Howard Cohl",
"url": "https://arxiv.org/abs/1105.2735"
}
|
1105.2783
|
# $p$-ary sequences with six-valued cross-correlation function: a new
decimation of Niho type
Yuhua Sun1, Hui Li1, Zilong Wang2, and Tongjiang Yan3,4
1 Key Laboratory of Computer Networks and Information Security,
Xidian University, Xi’an 710071, Shaanxi, China
2 State Key Laboratory of Integrated Service Networks,
Xidian University, Xi’an, 710071, Shaanxi, China
3 College of Mathematics and Computational Science,
China University of Petroleum Dongying 257061, Shandong, China
4 State Key Laboratory of Information Security (Institute of Software,
Chinese Academy of Sciences), Beijing 100049,China
###### Abstract
For an odd prime $p$ and $n=2m$, a new decimation
$d=\frac{(p^{m}-1)^{2}}{2}+1$ of Niho type of $m$-sequences is presented.
Using generalized Niho’s Theorem, we show that the cross-correlation function
between a $p$-ary $m$-sequence of period $p^{n}-1$ and its decimated sequence
by the above $d$ is at most six-valued, and it follows easily that the magnitude of the cross-correlation is upper bounded by $4\sqrt{p^{n}}-1$.
Index Terms. $p$-ary $m$-sequence, Niho type, cross-correlation.
00footnotetext: The work is supported by the National Natural Science
Foundation of China (No.60772136), Shandong Provincial Natural Science
Foundation of China (No. ZR2010FM017), the Fundamental Research Funds for the
Central Universities(No.K50510010012) and the open fund of State Key
Laboratory of Information Security(Graduate University of Chinese Academy of
Sciences)(No.F1008001).
## 1 Introduction
It is of great interest to find a decimation value $d$ such that the cross-
correlation between a $p$-ary $m$-sequence $\\{s_{t}\\}$ of period $p^{n}-1$
and its decimation $\\{s_{dt}\\}$ is low. For the case
$\mathrm{gcd}(d,p^{n}-1)=1$, the decimated sequence $\\{s_{dt}\\}$ is also an
$m$-sequence of period $p^{n}-1$. Basic results on the cross-correlation
between two $m$-sequences can be found in [3], [5], [8] and [10]. For the case
$\mathrm{gcd}(d,p^{n}-1)\neq 1$, the reader is referred to [7], [6], and [1].
Let $p$ be a prime and $n=2m$. The cross-correlation functions for decimations of the type $d\equiv 1\ (\mathrm{mod}\ p^{m}-1)$ were first studied by Niho [8]; such decimations are called decimations of Niho type. In [8], for $p=2$ and $d=s(p^{m}-1)+1$, Niho converted the problem of finding the values of cross-correlation functions into the problem of determining the number of solutions of a system of equations. This result is called Niho's Theorem. In 2006, Rosendahl [9]
generalized Niho’s Theorem to nonbinary sequences. In 2007, Helleseth et al.
[4] proved that the cross correlation function between two $m$-sequences that
differ by a decimation $d$ of Niho type is at least four-valued.
When $d=2p^{m}-1\equiv 1\ (\mathrm{mod}\ p^{m}-1)$, the cross correlation
function between a $p$-ary $m$-sequence $\\{s_{t}\\}$ of period $p^{2m}-1$ and
its decimated sequence $\\{s_{dt}\\}$ is four-valued, which was originally
given by Niho [8] for the case $p=2$ and by Helleseth [3] for the case $p>2$.
When $d=3p^{m}-2$, the cross correlation function between two $m$-sequences that differ by $d$ is at most six-valued; in particular, for $p=3$, the cross correlation function is at most five-valued [9].
In this note, we study a new decimation $d=\frac{(p^{m}-1)^{2}}{2}+1$ of Niho
type. Employing generalized Niho’s Theorem, we show that the cross-correlation
function between a $p$-ary $m$-sequence and its decimated sequence by $d$ is
at most six-valued.
The rest of this note is organized as follows. Section 2 presents some
preliminaries and definitions. Using generalized Niho’s Theorem, we give an
alternative proof of a result by Helleseth [3] where
$d=\frac{(p^{m}-1)(p^{m}+1)}{2}+1$ in section 3. A new decimation
$d=\frac{(p^{m}-1)^{2}}{2}+1$ of Niho type is given in section 4. We prove
that the cross correlation function between a $p$-ary $m$-sequence and its
decimated sequence by $d$ takes at most six values.
## 2 Preliminaries
We will use the following notation in the rest of this note. Let $p$ be an odd
prime, $\mathrm{GF}(p^{n})$ the finite field with $p^{n}$ elements and
$\mathrm{GF}(p^{n})^{*}=\mathrm{GF}(p^{n})\backslash\\{0\\}$. The trace
function $\mathrm{Tr}_{m}^{n}$ from the field $\mathrm{GF}(p^{n})$ onto the
subfield $\mathrm{GF}(p^{m})$ is defined as
$\mathrm{Tr}_{m}^{n}(x)=x+x^{p^{m}}+x^{p^{2m}}+\cdots+x^{p^{(l-1)m}},$
where $l=\frac{n}{m}$ is an integer.
We may assume that a $p$-ary $m$-sequence $\{s_{t}\}$ of period $p^{n}-1$ is given by
$s_{t}=\mathrm{Tr}_{1}^{n}(\alpha^{t}),$
where $\alpha$ is a primitive element of the finite field $\mathrm{GF}(p^{n})$ and $\mathrm{Tr}_{1}^{n}$ is the trace function from $\mathrm{GF}(p^{n})$ onto $\mathrm{GF}(p)$. The periodic cross-correlation function $C_{d}(\tau)$ between $\{s_{t}\}$ and its decimated sequence $\{s_{dt}\}$ is defined as
$C_{d}(\tau)=\sum_{t=0}^{p^{n}-2}\omega^{s_{t+\tau}-s_{dt}},$
where $\omega=e^{\frac{2\pi\sqrt{-1}}{p}}$ and $0\leq\tau\leq p^{n}-2$.
We will always assume that $n=2m$ is even in this note unless otherwise
specified.
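As an illustration of this definition (ours, not part of the original note), the following Python sketch evaluates $C_{d}(\tau)$ by brute force from a $p$-ary sequence given as a list of integers modulo $p$; the function name cross_correlation is our own choice.

```python
import cmath

def cross_correlation(s, d, tau, p):
    """Brute-force C_d(tau) for a p-ary sequence s of period len(s) = p^n - 1."""
    L = len(s)
    omega = cmath.exp(2j * cmath.pi / p)   # complex primitive p-th root of unity
    return sum(omega ** ((s[(t + tau) % L] - s[(d * t) % L]) % p)
               for t in range(L))
```

A complete construction of the sequence $s_{t}=\mathrm{Tr}_{1}^{n}(\alpha^{t})$ for the concrete parameters of Example 1 is sketched after that example.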
## 3 An alternative proof of a known result
For $p=2$, Niho [8] presented Niho’s Theorem about decimations of Niho type of
$m$-sequences. Rosendahl [9] generalized this result as follows.
###### Lemma 1
(generalized Niho’s Theorem) [9] Let $p$, $n$, and $m$ be defined as in
section 2. Assume that $d\equiv 1\ (\mathrm{mod}\ p^{m}-1)$, and denote
$s=\frac{d-1}{p^{m}-1}$. Then when $y={\alpha}^{\tau}$ runs through the
nonzero elements of the field $\mathrm{GF}(p^{n})$, $C_{d}(\tau)$ assumes
exactly the values
$-1+\left(N(y)-1\right)\cdot p^{m},$
where $N(y)$ is the number of common solutions of
$x^{2s-1}+y^{p^{m}}x^{s}+yx^{s-1}+1=0,\qquad x^{p^{m}+1}=1.$
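As an aside (our illustration, not part of the original note), the first three values appearing in the theorems below correspond to the smallest solution counts in Lemma 1:
$N(y)=0,\,1,\,2\ \Longrightarrow\ C_{d}(\tau)=-1-p^{m},\ -1,\ -1+p^{m},\ \text{respectively}.$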
In 1976, Helleseth [3] proved the following result. Here, using the
generalized Niho’s Theorem, we give a simple proof.
###### Theorem 1
[3] Let the symbols be defined as in section 2, $p$ be an odd prime and $d=\frac{p^{n}-1}{2}+1$. Then $C_{d}(\tau)\in\{-1-p^{m},\ -1,\ -1+p^{m},\ -1+\frac{p^{m}-1}{2}p^{m},\ -1+\frac{p^{m}+1}{2}p^{m}\}$.
Proof of Theorem 1. Since
$d=\frac{p^{n}-1}{2}+1=\frac{p^{m}+1}{2}(p^{m}-1)+1$, we get
$s=\frac{d-1}{p^{m}-1}=\frac{p^{m}+1}{2}$. By Lemma 1, we have
$C_{d}(\tau)=-1+\left(N(y)-1\right)\cdot p^{m},$
where $y=\alpha^{\tau}$, $0\leq\tau\leq p^{n}-2$, and $N(y)$ is the number of
common solutions of
$x^{(p^{m}+1)-1}+y^{p^{m}}x^{\frac{p^{m}+1}{2}}+yx^{\frac{p^{m}+1}{2}-1}+1=0,\qquad(1)$
$x^{p^{m}+1}=1.\qquad(2)$
Note that Eq. (2) implies
$\displaystyle x^{\frac{p^{m}+1}{2}}=1$ (3)
or
$\displaystyle x^{\frac{p^{m}+1}{2}}=-1.$ (4)
Substituting (3) and (4) into (1) respectively, we get
$C_{d}(\tau)=-1+\left(N_{1}(y)+N_{-1}(y)-1\right)\cdot p^{m},$
where $N_{1}(y)$ is the number of common solutions of
$(3.1.1)\ \ \left\{\begin{array}{ll}(y^{p^{m}}+1)x+(y+1)=0,\\ x^{\frac{p^{m}+1}{2}}=1,\end{array}\right.$
and $N_{-1}(y)$ is the number of solutions of
$(3.1.2)\ \ \left\{\begin{array}{ll}(y^{p^{m}}-1)x+(y-1)=0,\\ x^{\frac{p^{m}+1}{2}}=-1.\end{array}\right.$
Obviously, for $y\neq\pm 1$, each of (3.1.1) and (3.1.2) has at most one solution, so $0\leq N_{1}(y)+N_{-1}(y)\leq 2.$
Let $y=1$. First, it is straightforward to get $N_{-1}(1)=\frac{p^{m}+1}{2}$.
Second, we see that $x=-1$ is the only solution of (3.1.1) for $p^{m}+1\equiv
0\ \mathrm{mod}\ 4$ and $N_{1}(1)=0$ for $p^{m}+1\equiv 2\ \mathrm{mod}\ 4$.
Hence, we have
$N_{1}(1)+N_{-1}(1)=\left\{\begin{array}{ll}1+\frac{p^{m}+1}{2}, & \mathrm{if}\ p^{m}+1\equiv 0\ \mathrm{mod}\ 4,\\ \frac{p^{m}+1}{2}, & \mathrm{if}\ p^{m}+1\equiv 2\ \mathrm{mod}\ 4.\end{array}\right.$
Similarly, for $y=-1$, we have
$N_{1}(-1)+N_{-1}(-1)=\left\{\begin{array}{ll}\frac{p^{m}+1}{2}, & \mathrm{if}\ p^{m}+1\equiv 0\ \mathrm{mod}\ 4,\\ 1+\frac{p^{m}+1}{2}, & \mathrm{if}\ p^{m}+1\equiv 2\ \mathrm{mod}\ 4.\end{array}\right.$
The result follows. $\Box$
In Theorem 1, the value $s$ of the Niho-type decimation $d$, in the sense of Lemma 1, is equal to $\frac{p^{m}+1}{2}$. Motivated by the above proof, in the following section we instead take $s=\frac{p^{m}-1}{2}$, present the corresponding new decimation of Niho type, and determine its cross-correlation values.
## 4 A new decimation of Niho type
In this section, we give a new decimation $d$ of Niho type and show that the cross-correlation function between a $p$-ary $m$-sequence and its decimated sequence by $d$ is at most six-valued.
###### Theorem 2
Let the symbols be defined as in section 2 and let $d=\frac{(p^{m}-1)^{2}}{2}+1$. Then $C_{d}(\tau)\in\{-1+(j-1)\cdot p^{m}\ |\ 0\leq j\leq 5\}$; in particular, $C_{d}(\tau)$ is at most six-valued.
Proof of Theorem 2. Since $d=\frac{(p^{m}-1)^{2}}{2}+1=\frac{p^{m}-1}{2}\cdot(p^{m}-1)+1\equiv 1\ \mathrm{mod}\ (p^{m}-1),$ the value $s$ corresponding to that in Lemma 1 is $\frac{p^{m}-1}{2}$. By the same argument as in Theorem 1, we get
$C_{d}(\tau)=-1+\left(N_{1}(y)+N_{-1}(y)-1\right)\cdot p^{m},$
where $N_{1}(y)$ is the number of solutions of
$(4.1.1)\ \ \left\{\begin{array}{ll}x^{3}+y^{p^{m}}x^{2}+yx+1=0,\\ x^{\frac{p^{m}+1}{2}}=1,\end{array}\right.$
and $N_{-1}(y)$ is the number of solutions of
$(4.1.2)\ \ \left\{\begin{array}{ll}x^{3}-y^{p^{m}}x^{2}-yx+1=0,\\ x^{\frac{p^{m}+1}{2}}=-1.\end{array}\right.$
Since a cubic polynomial over a field has at most three roots, we know that $0\leq N_{1}(y)\leq 3$ and $0\leq N_{-1}(y)\leq 3$, i.e., $0\leq N_{1}(y)+N_{-1}(y)\leq 6$. Further, we will prove that $N_{1}(y)+N_{-1}(y)\neq 6$, i.e., that $0\leq N_{1}(y)+N_{-1}(y)\leq 5$.
Suppose that $N_{1}(y)+N_{-1}(y)=6$. Then $N_{1}(y)=3$ and $N_{-1}(y)=3$,
i.e., both (4.1.1) and (4.1.2) have three solutions. Now, for $i=1,2,3$, let
$x_{i}$ and $x_{i}^{\ast}$ be the solutions of (4.1.1) and (4.1.2),
respectively. Since $x_{i}$ satisfies $x^{\frac{p^{m}+1}{2}}=1$ and
$x_{i}^{\ast}$ satisfies $x^{\frac{p^{m}+1}{2}}=-1$, we know that there exists
some even integer $j_{i}$ satisfying $x_{i}=\alpha^{j_{i}(p^{m}-1)}$ and that
there exists some odd integer $j_{i}^{\ast}$ satisfying
$x_{i}^{\ast}=\alpha^{j_{i}^{\ast}(p^{m}-1)}$, where $i=1,2,3$.
Simultaneously, since $x_{i}$ and $x_{i}^{\ast}$ satisfy the first equations of (4.1.1) and (4.1.2) respectively, we have
$\prod\limits_{i=1}^{3}x_{i}=\alpha^{(p^{m}-1)\sum\limits_{i=1}^{3}j_{i}}=-1,\qquad \prod\limits_{i=1}^{3}x_{i}^{\ast}=\alpha^{(p^{m}-1)\sum\limits_{i=1}^{3}j_{i}^{\ast}}=-1.$
Multiplying the above two equations, we get
$\prod\limits_{i=1}^{3}x_{i}\prod\limits_{i=1}^{3}x_{i}^{\ast}=\alpha^{(p^{m}-1)(\sum\limits_{i=1}^{3}j_{i}+\sum\limits_{i=1}^{3}j_{i}^{\ast})}=1,$
and since $\alpha$ has order $p^{n}-1=(p^{m}-1)(p^{m}+1)$, this implies $p^{m}+1\mid\sum\limits_{i=1}^{3}j_{i}+\sum\limits_{i=1}^{3}j_{i}^{\ast}$. This contradicts the fact that $p^{m}+1$ is even while $\sum\limits_{i=1}^{3}j_{i}+\sum\limits_{i=1}^{3}j_{i}^{\ast}$ is odd (being the sum of three even and three odd integers). Therefore, $N_{1}(y)+N_{-1}(y)\neq 6$, i.e., $0\leq N_{1}(y)+N_{-1}(y)\leq 5$. The result follows. $\Box$
###### Remark 1
The decimated sequence $\{s_{dt}\}$ in Theorem 2 is not necessarily an $m$-sequence. In fact, $d=\frac{p^{m}-1}{2}(p^{m}-1)+1\equiv\frac{p^{m}-1}{2}(-2)+1\equiv 3\ \mathrm{mod}\ (p^{m}+1)$. For $p\equiv-1\ \mathrm{mod}\ 3$ and $m$ odd, we have $\mathrm{gcd}(d,p^{n}-1)=3$, so $\{s_{dt}\}$ is not an $m$-sequence. In the other case, $\mathrm{gcd}(d,p^{n}-1)=1$ and $\{s_{dt}\}$ is an $m$-sequence.
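As a quick numerical check of this remark (our illustration; the parameter pairs below are chosen only as examples), the gcd can be computed directly:

```python
from math import gcd

# (5, 3): p ≡ -1 (mod 3) with m odd; (3, 3): the parameters of Example 1 below
for p, m in [(5, 3), (3, 3)]:
    d = (p**m - 1)**2 // 2 + 1        # the new decimation of Theorem 2
    print(p, m, d, gcd(d, p**(2 * m) - 1))
```

For $(p,m)=(5,3)$ this prints a gcd of $3$, and for $(p,m)=(3,3)$ a gcd of $1$, consistent with the two cases above.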
###### Remark 2
The bound of six values cannot in general be reduced. The following example exhibits a cross-correlation function between an $m$-sequence and its decimated sequence by $d$ that takes exactly six values.
###### Example 1
Let $p=3$, $n=6$, $m=3$ and $d=\frac{(p^{m}-1)^{2}}{2}+1=339$. The polynomial $f(x)=x^{6}+x^{5}+2$ is primitive over $\mathrm{GF}(3)$. Let $\alpha$ be a root of $f(x)$; then $s_{t}=\mathrm{Tr}_{1}^{6}(\alpha^{t})$ and $s_{dt}=\mathrm{Tr}_{1}^{6}(\alpha^{339t})$. A computer experiment gives the following cross-correlation values:
$\begin{array}{lclc}-1-3^{3}&\mathrm{occurs}&246&\mathrm{times},\\ -1&\mathrm{occurs}&284&\mathrm{times},\\ -1+3^{3}&\mathrm{occurs}&144&\mathrm{times},\\ -1+2\cdot 3^{3}&\mathrm{occurs}&42&\mathrm{times},\\ -1+3\cdot 3^{3}&\mathrm{occurs}&3&\mathrm{times},\\ -1+4\cdot 3^{3}&\mathrm{occurs}&9&\mathrm{times}.\end{array}$
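The brute-force computation behind this example is straightforward to reproduce. The sketch below (ours, not part of the original note) represents $\mathrm{GF}(3^{6})$ as coefficient vectors over $\mathrm{GF}(3)$ reduced modulo the primitive polynomial $f(x)=x^{6}+x^{5}+2$, builds $s_{t}=\mathrm{Tr}_{1}^{6}(\alpha^{t})$, and tallies the values of $C_{d}(\tau)$ for $d=339$; it is intended to reproduce the counts listed above.

```python
import cmath
from collections import Counter

P, N = 3, 6
PERIOD = P**N - 1                   # 728
D = (P**(N // 2) - 1)**2 // 2 + 1   # 339, the decimation of Theorem 2

def mul(a, b):
    """Multiply two GF(3^6) elements (coefficient lists) modulo f(x) = x^6 + x^5 + 2."""
    prod = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % P
    # reduce with x^6 = 2*x^5 + 1 (i.e. x^6 = -x^5 - 2 over GF(3))
    for i in range(2 * N - 2, N - 1, -1):
        c, prod[i] = prod[i], 0
        prod[i - 1] = (prod[i - 1] + 2 * c) % P
        prod[i - N] = (prod[i - N] + c) % P
    return prod[:N]

def trace(y):
    """Absolute trace Tr_1^6(y) = y + y^3 + ... + y^(3^5); the result lies in GF(3)."""
    acc, z = [0] * N, y[:]
    for _ in range(N):
        acc = [(u + v) % P for u, v in zip(acc, z)]
        z = mul(mul(z, z), z)       # Frobenius step z -> z^3
    return acc[0]                   # constant term: the trace is a prime-field element

alpha = [0, 1, 0, 0, 0, 0]          # the class of x, a root of the primitive f(x)
elem, s = [1, 0, 0, 0, 0, 0], []
for _ in range(PERIOD):
    s.append(trace(elem))
    elem = mul(elem, alpha)

omega = cmath.exp(2j * cmath.pi / P)
counts = Counter()
for tau in range(PERIOD):
    c = sum(omega ** ((s[(t + tau) % PERIOD] - s[(D * t) % PERIOD]) % P)
            for t in range(PERIOD))
    counts[round(c.real)] += 1      # values are real integers of the form -1 + (j-1)*27
print(sorted(counts.items()))
```

The six keys of the printed counter should correspond to the six values and multiplicities listed in the example.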
## Conclusion
In this note, using the generalized Niho’s Theorem, we give an alternative proof of a known result. By changing the form of the known decimation factor, we obtain a new decimation $d=\frac{(p^{m}-1)^{2}}{2}+1$ of Niho type. We prove that the cross-correlation function between a $p$-ary $m$-sequence of period $p^{n}-1$ and its decimated sequence by $d$ is at most six-valued, and the magnitude of the cross-correlation values is easily seen to be upper bounded by $4\sqrt{p^{n}}-1$.
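For completeness (our added step), the bound follows directly from the value set in Theorem 2: since $n=2m$ gives $p^{m}=\sqrt{p^{n}}$, the value of largest magnitude is $-1+4\cdot p^{m}$, so
$\max_{0\leq\tau\leq p^{n}-2}|C_{d}(\tau)|\leq 4p^{m}-1=4\sqrt{p^{n}}-1.$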
## References
* [1] S.-T. Choi, J.-S. No, H. Chung, On the cross-correlation of a ternary $m$-sequence of period $3^{4k+2}-1$ and its decimated sequence by $\frac{(3^{2k+1}+1)^{2}}{8}$, _ISIT 2010_ , Austin, Texas, U.S.A., June 13-18, 2010.
* [2] H. Dobbertin, T. Helleseth, P.V. Kumar, and H. Martinsen, Ternary $m$-sequences with three-valued cross-correlation function: new decimations of Welch and Niho type, _IEEE Trans. Inf. Theory_, vol. 47 (2001), pp. 1473-1481.
* [3] T. Helleseth, Some results about the cross-correlation function between two maximal linear sequences, _Discrete Math._, vol. 16 (1976), pp. 209-232.
* [4] T. Helleseth, J. Lahtonen, and P. Rosendahl, On Niho type cross-correlation functions of $m$-sequences, _Finite Field and Their Applications_ , vol. 13 (2007) pp. 305-317.
* [5] T. Helleseth and P. V. Kumar, Sequences with low correlation, In V. S. Pless and W.C. Huffman (eds.), Handbook of Coding Theory, Elsevier Science (1998), pp. 1765-1853.
* [6] Z. Hu, Z. Li, D. Mills, E. N. Müller, W. Sun, W. Willems, Y. Yang, and Z. Zhang, On the cross-correlation of sequences with the decimation factor $d=\frac{p^{n}+1}{p+1}-\frac{p^{n}-1}{2},$ _Applicable Algebra in Engineering, Communication and Computing,_ vol. 12 (2001) pp. 255-263.
* [7] E.N. Müller, On the cross-correlation of sequences over $\mathrm{GF}(p)$ with short periods, _IEEE Trans. Inf. Theory,_ vol.45 (1999) pp. 289-295.
* [8] Y. Niho, Multivalued cross-correlation functions between two maximal linear recursive sequences. PhD Thesis, University of Southern California (1972).
* [9] P. Rosendahl, A generalization of Niho’s Theorem, _Des. Codes Cryptogr._, vol. 38 (2006), pp. 331-336.
* [10] H.M. Trachtenberg, On the cross-correlation functions of maximal linear sequences, PhD Thesis, University of Southern California (1970).
|
arxiv-papers
| 2011-05-13T17:23:46 |
2024-09-04T02:49:18.748095
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yuhua Sun, Hui Li, Zilong Wang and Tongjiang Yan",
"submitter": "Zilong Wang",
"url": "https://arxiv.org/abs/1105.2783"
}
|
1105.2815
|
CDF Collaboration222With visitors from aUniversity of Massachusetts Amherst,
Amherst, Massachusetts 01003, bIstituto Nazionale di Fisica Nucleare, Sezione
di Cagliari, 09042 Monserrato (Cagliari), Italy, cUniversity of California
Irvine, Irvine, CA 92697, dUniversity of California Santa Barbara, Santa
Barbara, CA 93106 eUniversity of California Santa Cruz, Santa Cruz, CA 95064,
fCERN,CH-1211 Geneva, Switzerland, gCornell University, Ithaca, NY 14853,
hUniversity of Cyprus, Nicosia CY-1678, Cyprus, iUniversity College Dublin,
Dublin 4, Ireland, jUniversity of Fukui, Fukui City, Fukui Prefecture, Japan
910-0017, kUniversidad Iberoamericana, Mexico D.F., Mexico, lIowa State
University, Ames, IA 50011, mUniversity of Iowa, Iowa City, IA 52242, nKinki
University, Higashi-Osaka City, Japan 577-8502, oKansas State University,
Manhattan, KS 66506, pUniversity of Manchester, Manchester M13 9PL, England,
qQueen Mary, University of London, London, E1 4NS, England, rMuons, Inc.,
Batavia, IL 60510, sNagasaki Institute of Applied Science, Nagasaki, Japan,
tNational Research Nuclear University, Moscow, Russia, uUniversity of Notre
Dame, Notre Dame, IN 46556, vUniversidad de Oviedo, E-33007 Oviedo, Spain,
wTexas Tech University, Lubbock, TX 79609, xUniversidad Tecnica Federico Santa
Maria, 110v Valparaiso, Chile, yYarmouk University, Irbid 211-63, Jordan, ggOn
leave from J. Stefan Institute, Ljubljana, Slovenia,
# First Search for Multijet Resonances in $\sqrt{s}=1.96$ TeV $p\bar{p}$
Collisions
T. Aaltonen Division of High Energy Physics, Department of Physics,
University of Helsinki and Helsinki Institute of Physics, FIN-00014, Helsinki,
Finland B. Álvarez Gonzálezv Instituto de Fisica de Cantabria, CSIC-
University of Cantabria, 39005 Santander, Spain S. Amerio Istituto Nazionale
di Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131
Padova, Italy D. Amidei University of Michigan, Ann Arbor, Michigan 48109,
USA A. Anastassov Northwestern University, Evanston, Illinois 60208, USA A.
Annovi Laboratori Nazionali di Frascati, Istituto Nazionale di Fisica
Nucleare, I-00044 Frascati, Italy J. Antos Comenius University, 842 48
Bratislava, Slovakia; Institute of Experimental Physics, 040 01 Kosice,
Slovakia G. Apollinari Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA J.A. Appel Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA A. Apresyan Purdue University, West Lafayette,
Indiana 47907, USA T. Arisawa Waseda University, Tokyo 169, Japan A.
Artikov Joint Institute for Nuclear Research, RU-141980 Dubna, Russia J.
Asaadi Texas A&M University, College Station, Texas 77843, USA W. Ashmanskas
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA B.
Auerbach Yale University, New Haven, Connecticut 06520, USA A. Aurisano
Texas A&M University, College Station, Texas 77843, USA F. Azfar University
of Oxford, Oxford OX1 3RH, United Kingdom W. Badgett Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA A. Barbaro-Galtieri
Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley, California
94720, USA V.E. Barnes Purdue University, West Lafayette, Indiana 47907, USA
B.A. Barnett The Johns Hopkins University, Baltimore, Maryland 21218, USA P.
Barriacc Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa,
ccUniversity of Siena and ddScuola Normale Superiore, I-56127 Pisa, Italy P.
Bartos Comenius University, 842 48 Bratislava, Slovakia; Institute of
Experimental Physics, 040 01 Kosice, Slovakia M. Bauceaa Istituto Nazionale
di Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131
Padova, Italy G. Bauer Massachusetts Institute of Technology, Cambridge,
Massachusetts 02139, USA F. Bedeschi Istituto Nazionale di Fisica Nucleare
Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale
Superiore, I-56127 Pisa, Italy D. Beecher University College London, London
WC1E 6BT, United Kingdom S. Behari The Johns Hopkins University, Baltimore,
Maryland 21218, USA G. Bellettinibb Istituto Nazionale di Fisica Nucleare
Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale
Superiore, I-56127 Pisa, Italy J. Bellinger University of Wisconsin,
Madison, Wisconsin 53706, USA D. Benjamin Duke University, Durham, North
Carolina 27708, USA A. Beretvas Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA A. Bhatti The Rockefeller University, New York,
New York 10065, USA M. Binkley111Deceased Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA D. Biselloaa Istituto Nazionale di
Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131
Padova, Italy I. Bizjakgg University College London, London WC1E 6BT, United
Kingdom K.R. Bland Baylor University, Waco, Texas 76798, USA B. Blumenfeld
The Johns Hopkins University, Baltimore, Maryland 21218, USA A. Bocci Duke
University, Durham, North Carolina 27708, USA A. Bodek University of
Rochester, Rochester, New York 14627, USA D. Bortoletto Purdue University,
West Lafayette, Indiana 47907, USA J. Boudreau University of Pittsburgh,
Pittsburgh, Pennsylvania 15260, USA A. Boveia Enrico Fermi Institute,
University of Chicago, Chicago, Illinois 60637, USA B. Braua Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA L. Brigliadoriz Istituto
Nazionale di Fisica Nucleare Bologna, zUniversity of Bologna, I-40127 Bologna,
Italy A. Brisuda Comenius University, 842 48 Bratislava, Slovakia; Institute
of Experimental Physics, 040 01 Kosice, Slovakia C. Bromberg Michigan State
University, East Lansing, Michigan 48824, USA E. Brucken Division of High
Energy Physics, Department of Physics, University of Helsinki and Helsinki
Institute of Physics, FIN-00014, Helsinki, Finland M. Bucciantoniobb Istituto
Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena
and ddScuola Normale Superiore, I-56127 Pisa, Italy J. Budagov Joint
Institute for Nuclear Research, RU-141980 Dubna, Russia H.S. Budd University
of Rochester, Rochester, New York 14627, USA S. Budd University of Illinois,
Urbana, Illinois 61801, USA K. Burkett Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA G. Busettoaa Istituto Nazionale di
Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131
Padova, Italy P. Bussey Glasgow University, Glasgow G12 8QQ, United Kingdom
A. Buzatu Institute of Particle Physics: McGill University, Montréal, Québec,
Canada H3A 2T8; Simon Fraser University, Burnaby, British Columbia, Canada V5A
1S6; University of Toronto, Toronto, Ontario, Canada M5S 1A7; and TRIUMF,
Vancouver, British Columbia, Canada V6T 2A3 C. Calancha Centro de
Investigaciones Energeticas Medioambientales y Tecnologicas, E-28040 Madrid,
Spain S. Camarda Institut de Fisica d’Altes Energies, ICREA, Universitat
Autonoma de Barcelona, E-08193, Bellaterra (Barcelona), Spain M. Campanelli
Michigan State University, East Lansing, Michigan 48824, USA M. Campbell
University of Michigan, Ann Arbor, Michigan 48109, USA F. Canelli12 Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA A. Canepa
University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA B. Carls
University of Illinois, Urbana, Illinois 61801, USA D. Carlsmith University
of Wisconsin, Madison, Wisconsin 53706, USA R. Carosi Istituto Nazionale di
Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola
Normale Superiore, I-56127 Pisa, Italy S. Carrillok University of Florida,
Gainesville, Florida 32611, USA S. Carron Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA B. Casal Instituto de Fisica de
Cantabria, CSIC-University of Cantabria, 39005 Santander, Spain M. Casarsa
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA A.
Castroz Istituto Nazionale di Fisica Nucleare Bologna, zUniversity of Bologna,
I-40127 Bologna, Italy P. Catastini Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA D. Cauz Istituto Nazionale di Fisica Nucleare
Trieste/Udine, I-34100 Trieste, ffUniversity of Trieste/Udine, I-33100 Udine,
Italy V. Cavalierecc Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity
of Pisa, ccUniversity of Siena and ddScuola Normale Superiore, I-56127 Pisa,
Italy M. Cavalli-Sforza Institut de Fisica d’Altes Energies, ICREA,
Universitat Autonoma de Barcelona, E-08193, Bellaterra (Barcelona), Spain A.
Cerrif Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley,
California 94720, USA L. Cerritoq University College London, London WC1E 6BT,
United Kingdom Y.C. Chen Institute of Physics, Academia Sinica, Taipei,
Taiwan 11529, Republic of China M. Chertok University of California, Davis,
Davis, California 95616, USA G. Chiarelli Istituto Nazionale di Fisica
Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola
Normale Superiore, I-56127 Pisa, Italy G. Chlachidze Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA F. Chlebana Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA K. Cho Center
for High Energy Physics: Kyungpook National University, Daegu 702-701, Korea;
Seoul National University, Seoul 151-742, Korea; Sungkyunkwan University,
Suwon 440-746, Korea; Korea Institute of Science and Technology Information,
Daejeon 305-806, Korea; Chonnam National University, Gwangju 500-757, Korea;
Chonbuk National University, Jeonju 561-756, Korea D. Chokheli Joint
Institute for Nuclear Research, RU-141980 Dubna, Russia J.P. Chou Harvard
University, Cambridge, Massachusetts 02138, USA W.H. Chung University of
Wisconsin, Madison, Wisconsin 53706, USA Y.S. Chung University of Rochester,
Rochester, New York 14627, USA C.I. Ciobanu LPNHE, Universite Pierre et
Marie Curie/IN2P3-CNRS, UMR7585, Paris, F-75252 France M.A. Cioccicc Istituto
Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena
and ddScuola Normale Superiore, I-56127 Pisa, Italy A. Clark University of
Geneva, CH-1211 Geneva 4, Switzerland G. Compostellaaa Istituto Nazionale di
Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131
Padova, Italy M.E. Convery Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA J. Conway University of California, Davis, Davis,
California 95616, USA M. Corbo LPNHE, Universite Pierre et Marie
Curie/IN2P3-CNRS, UMR7585, Paris, F-75252 France M. Cordelli Laboratori
Nazionali di Frascati, Istituto Nazionale di Fisica Nucleare, I-00044
Frascati, Italy C.A. Cox University of California, Davis, Davis, California
95616, USA D.J. Cox University of California, Davis, Davis, California
95616, USA F. Cresciolibb Istituto Nazionale di Fisica Nucleare Pisa,
bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale Superiore,
I-56127 Pisa, Italy C. Cuenca Almenar Yale University, New Haven,
Connecticut 06520, USA J. Cuevasv Instituto de Fisica de Cantabria, CSIC-
University of Cantabria, 39005 Santander, Spain R. Culbertson Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA D. Dagenhart Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA N. d’Ascenzot
LPNHE, Universite Pierre et Marie Curie/IN2P3-CNRS, UMR7585, Paris, F-75252
France M. Datta Fermi National Accelerator Laboratory, Batavia, Illinois
60510, USA P. de Barbaro University of Rochester, Rochester, New York 14627,
USA S. De Cecco Istituto Nazionale di Fisica Nucleare, Sezione di Roma 1,
eeSapienza Università di Roma, I-00185 Roma, Italy G. De Lorenzo Institut de
Fisica d’Altes Energies, ICREA, Universitat Autonoma de Barcelona, E-08193,
Bellaterra (Barcelona), Spain M. Dell’Orsobb Istituto Nazionale di Fisica
Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola
Normale Superiore, I-56127 Pisa, Italy C. Deluca Institut de Fisica d’Altes
Energies, ICREA, Universitat Autonoma de Barcelona, E-08193, Bellaterra
(Barcelona), Spain L. Demortier The Rockefeller University, New York, New
York 10065, USA J. Dengc Duke University, Durham, North Carolina 27708, USA
M. Deninno Istituto Nazionale di Fisica Nucleare Bologna, zUniversity of
Bologna, I-40127 Bologna, Italy F. Devoto Division of High Energy Physics,
Department of Physics, University of Helsinki and Helsinki Institute of
Physics, FIN-00014, Helsinki, Finland M. d’Erricoaa Istituto Nazionale di
Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131
Padova, Italy A. Di Cantobb Istituto Nazionale di Fisica Nucleare Pisa,
bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale Superiore,
I-56127 Pisa, Italy B. Di Ruzza Istituto Nazionale di Fisica Nucleare Pisa,
bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale Superiore,
I-56127 Pisa, Italy J.R. Dittmann Baylor University, Waco, Texas 76798, USA
M. D’Onofrio University of Liverpool, Liverpool L69 7ZE, United Kingdom S.
Donatibb Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa,
ccUniversity of Siena and ddScuola Normale Superiore, I-56127 Pisa, Italy P.
Dong Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA M.
Dorigo Istituto Nazionale di Fisica Nucleare Trieste/Udine, I-34100 Trieste,
ffUniversity of Trieste/Udine, I-33100 Udine, Italy T. Dorigo Istituto
Nazionale di Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of
Padova, I-35131 Padova, Italy K. Ebina Waseda University, Tokyo 169, Japan
A. Elagin Texas A&M University, College Station, Texas 77843, USA A. Eppig
University of Michigan, Ann Arbor, Michigan 48109, USA R. Erbacher
University of California, Davis, Davis, California 95616, USA D. Errede
University of Illinois, Urbana, Illinois 61801, USA S. Errede University of
Illinois, Urbana, Illinois 61801, USA N. Ershaidaty LPNHE, Universite Pierre
et Marie Curie/IN2P3-CNRS, UMR7585, Paris, F-75252 France R. Eusebi Texas
A&M University, College Station, Texas 77843, USA H.C. Fang Ernest Orlando
Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA S.
Farrington University of Oxford, Oxford OX1 3RH, United Kingdom M. Feindt
Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology,
D-76131 Karlsruhe, Germany J.P. Fernandez Centro de Investigaciones
Energeticas Medioambientales y Tecnologicas, E-28040 Madrid, Spain C.
Ferrazzadd Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa,
ccUniversity of Siena and ddScuola Normale Superiore, I-56127 Pisa, Italy R.
Field University of Florida, Gainesville, Florida 32611, USA G. Flanaganr
Purdue University, West Lafayette, Indiana 47907, USA R. Forrest University
of California, Davis, Davis, California 95616, USA M.J. Frank Baylor
University, Waco, Texas 76798, USA M. Franklin Harvard University,
Cambridge, Massachusetts 02138, USA J.C. Freeman Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA Y. Funakoshi Waseda University,
Tokyo 169, Japan I. Furic University of Florida, Gainesville, Florida 32611,
USA M. Gallinaro The Rockefeller University, New York, New York 10065, USA
J. Galyardt Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
J.E. Garcia University of Geneva, CH-1211 Geneva 4, Switzerland A.F.
Garfinkel Purdue University, West Lafayette, Indiana 47907, USA P. Garosicc
Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity
of Siena and ddScuola Normale Superiore, I-56127 Pisa, Italy H. Gerberich
University of Illinois, Urbana, Illinois 61801, USA E. Gerchtein Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA S. Giaguee
Istituto Nazionale di Fisica Nucleare, Sezione di Roma 1, eeSapienza
Università di Roma, I-00185 Roma, Italy V. Giakoumopoulou University of
Athens, 157 71 Athens, Greece P. Giannetti Istituto Nazionale di Fisica
Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola
Normale Superiore, I-56127 Pisa, Italy K. Gibson University of Pittsburgh,
Pittsburgh, Pennsylvania 15260, USA C.M. Ginsburg Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA N. Giokaris University of Athens,
157 71 Athens, Greece P. Giromini Laboratori Nazionali di Frascati, Istituto
Nazionale di Fisica Nucleare, I-00044 Frascati, Italy M. Giunta Istituto
Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena
and ddScuola Normale Superiore, I-56127 Pisa, Italy G. Giurgiu The Johns
Hopkins University, Baltimore, Maryland 21218, USA V. Glagolev Joint
Institute for Nuclear Research, RU-141980 Dubna, Russia D. Glenzinski Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA M. Gold
University of New Mexico, Albuquerque, New Mexico 87131, USA D. Goldin Texas
A&M University, College Station, Texas 77843, USA N. Goldschmidt University
of Florida, Gainesville, Florida 32611, USA A. Golossanov Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA G. Gomez Instituto de
Fisica de Cantabria, CSIC-University of Cantabria, 39005 Santander, Spain G.
Gomez-Ceballos Massachusetts Institute of Technology, Cambridge,
Massachusetts 02139, USA M. Goncharov Massachusetts Institute of Technology,
Cambridge, Massachusetts 02139, USA O. González Centro de Investigaciones
Energeticas Medioambientales y Tecnologicas, E-28040 Madrid, Spain I. Gorelov
University of New Mexico, Albuquerque, New Mexico 87131, USA A.T. Goshaw
Duke University, Durham, North Carolina 27708, USA K. Goulianos The
Rockefeller University, New York, New York 10065, USA A. Gresele Istituto
Nazionale di Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of
Padova, I-35131 Padova, Italy S. Grinstein Institut de Fisica d’Altes
Energies, ICREA, Universitat Autonoma de Barcelona, E-08193, Bellaterra
(Barcelona), Spain C. Grosso-Pilcher Enrico Fermi Institute, University of
Chicago, Chicago, Illinois 60637, USA R.C. Group University of Virginia,
Charlottesville, VA 22906, USA J. Guimaraes da Costa Harvard University,
Cambridge, Massachusetts 02138, USA Z. Gunay-Unalan Michigan State
University, East Lansing, Michigan 48824, USA C. Haber Ernest Orlando
Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA S.R.
Hahn Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA E.
Halkiadakis Rutgers University, Piscataway, New Jersey 08855, USA A.
Hamaguchi Osaka City University, Osaka 588, Japan J.Y. Han University of
Rochester, Rochester, New York 14627, USA F. Happacher Laboratori Nazionali
di Frascati, Istituto Nazionale di Fisica Nucleare, I-00044 Frascati, Italy
K. Hara University of Tsukuba, Tsukuba, Ibaraki 305, Japan D. Hare Rutgers
University, Piscataway, New Jersey 08855, USA M. Hare Tufts University,
Medford, Massachusetts 02155, USA R.F. Harr Wayne State University, Detroit,
Michigan 48201, USA K. Hatakeyama Baylor University, Waco, Texas 76798, USA
C. Hays University of Oxford, Oxford OX1 3RH, United Kingdom M. Heck
Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology,
D-76131 Karlsruhe, Germany J. Heinrich University of Pennsylvania,
Philadelphia, Pennsylvania 19104, USA M. Herndon University of Wisconsin,
Madison, Wisconsin 53706, USA S. Hewamanage Baylor University, Waco, Texas
76798, USA D. Hidas Rutgers University, Piscataway, New Jersey 08855, USA
A. Hocker Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
W. Hopkinsg Fermi National Accelerator Laboratory, Batavia, Illinois 60510,
USA D. Horn Institut für Experimentelle Kernphysik, Karlsruhe Institute of
Technology, D-76131 Karlsruhe, Germany S. Hou Institute of Physics, Academia
Sinica, Taipei, Taiwan 11529, Republic of China R.E. Hughes The Ohio State
University, Columbus, Ohio 43210, USA M. Hurwitz Enrico Fermi Institute,
University of Chicago, Chicago, Illinois 60637, USA U. Husemann Yale
University, New Haven, Connecticut 06520, USA N. Hussain Institute of
Particle Physics: McGill University, Montréal, Québec, Canada H3A 2T8; Simon
Fraser University, Burnaby, British Columbia, Canada V5A 1S6; University of
Toronto, Toronto, Ontario, Canada M5S 1A7; and TRIUMF, Vancouver, British
Columbia, Canada V6T 2A3 M. Hussein Michigan State University, East Lansing,
Michigan 48824, USA J. Huston Michigan State University, East Lansing,
Michigan 48824, USA G. Introzzi Istituto Nazionale di Fisica Nucleare Pisa,
bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale Superiore,
I-56127 Pisa, Italy M. Ioriee Istituto Nazionale di Fisica Nucleare, Sezione
di Roma 1, eeSapienza Università di Roma, I-00185 Roma, Italy A. Ivanovo
University of California, Davis, Davis, California 95616, USA G. Jain
Rutgers University, Piscataway, New Jersey 08855, USA E. James Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA D. Jang
Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA B. Jayatilaka
Duke University, Durham, North Carolina 27708, USA E.J. Jeon Center for High
Energy Physics: Kyungpook National University, Daegu 702-701, Korea; Seoul
National University, Seoul 151-742, Korea; Sungkyunkwan University, Suwon
440-746, Korea; Korea Institute of Science and Technology Information, Daejeon
305-806, Korea; Chonnam National University, Gwangju 500-757, Korea; Chonbuk
National University, Jeonju 561-756, Korea M.K. Jha Istituto Nazionale di
Fisica Nucleare Bologna, zUniversity of Bologna, I-40127 Bologna, Italy S.
Jindariani Fermi National Accelerator Laboratory, Batavia, Illinois 60510,
USA W. Johnson University of California, Davis, Davis, California 95616, USA
M. Jones Purdue University, West Lafayette, Indiana 47907, USA K.K. Joo
Center for High Energy Physics: Kyungpook National University, Daegu 702-701,
Korea; Seoul National University, Seoul 151-742, Korea; Sungkyunkwan
University, Suwon 440-746, Korea; Korea Institute of Science and Technology
Information, Daejeon 305-806, Korea; Chonnam National University, Gwangju
500-757, Korea; Chonbuk National University, Jeonju 561-756, Korea S.Y. Jun
Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA T.R. Junk
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA T. Kamon
Texas A&M University, College Station, Texas 77843, USA P.E. Karchin Wayne
State University, Detroit, Michigan 48201, USA Y. Katon Osaka City
University, Osaka 588, Japan W. Ketchum Enrico Fermi Institute, University
of Chicago, Chicago, Illinois 60637, USA J. Keung University of
Pennsylvania, Philadelphia, Pennsylvania 19104, USA V. Khotilovich Texas A&M
University, College Station, Texas 77843, USA B. Kilminster Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA D.H. Kim Center for
High Energy Physics: Kyungpook National University, Daegu 702-701, Korea;
Seoul National University, Seoul 151-742, Korea; Sungkyunkwan University,
Suwon 440-746, Korea; Korea Institute of Science and Technology Information,
Daejeon 305-806, Korea; Chonnam National University, Gwangju 500-757, Korea;
Chonbuk National University, Jeonju 561-756, Korea H.S. Kim Center for High
Energy Physics: Kyungpook National University, Daegu 702-701, Korea; Seoul
National University, Seoul 151-742, Korea; Sungkyunkwan University, Suwon
440-746, Korea; Korea Institute of Science and Technology Information, Daejeon
305-806, Korea; Chonnam National University, Gwangju 500-757, Korea; Chonbuk
National University, Jeonju 561-756, Korea H.W. Kim Center for High Energy
Physics: Kyungpook National University, Daegu 702-701, Korea; Seoul National
University, Seoul 151-742, Korea; Sungkyunkwan University, Suwon 440-746,
Korea; Korea Institute of Science and Technology Information, Daejeon 305-806,
Korea; Chonnam National University, Gwangju 500-757, Korea; Chonbuk National
University, Jeonju 561-756, Korea J.E. Kim Center for High Energy Physics:
Kyungpook National University, Daegu 702-701, Korea; Seoul National
University, Seoul 151-742, Korea; Sungkyunkwan University, Suwon 440-746,
Korea; Korea Institute of Science and Technology Information, Daejeon 305-806,
Korea; Chonnam National University, Gwangju 500-757, Korea; Chonbuk National
University, Jeonju 561-756, Korea M.J. Kim Laboratori Nazionali di Frascati,
Istituto Nazionale di Fisica Nucleare, I-00044 Frascati, Italy S.B. Kim
Center for High Energy Physics: Kyungpook National University, Daegu 702-701,
Korea; Seoul National University, Seoul 151-742, Korea; Sungkyunkwan
University, Suwon 440-746, Korea; Korea Institute of Science and Technology
Information, Daejeon 305-806, Korea; Chonnam National University, Gwangju
500-757, Korea; Chonbuk National University, Jeonju 561-756, Korea S.H. Kim
University of Tsukuba, Tsukuba, Ibaraki 305, Japan Y.K. Kim Enrico Fermi
Institute, University of Chicago, Chicago, Illinois 60637, USA N. Kimura
Waseda University, Tokyo 169, Japan M. Kirby Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA S. Klimenko University of Florida,
Gainesville, Florida 32611, USA K. Kondo Waseda University, Tokyo 169, Japan
D.J. Kong Center for High Energy Physics: Kyungpook National University,
Daegu 702-701, Korea; Seoul National University, Seoul 151-742, Korea;
Sungkyunkwan University, Suwon 440-746, Korea; Korea Institute of Science and
Technology Information, Daejeon 305-806, Korea; Chonnam National University,
Gwangju 500-757, Korea; Chonbuk National University, Jeonju 561-756, Korea J.
Konigsberg University of Florida, Gainesville, Florida 32611, USA A.V.
Kotwal Duke University, Durham, North Carolina 27708, USA M. Kreps Institut
für Experimentelle Kernphysik, Karlsruhe Institute of Technology, D-76131
Karlsruhe, Germany J. Kroll University of Pennsylvania, Philadelphia,
Pennsylvania 19104, USA D. Krop Enrico Fermi Institute, University of
Chicago, Chicago, Illinois 60637, USA N. Krumnackl Baylor University, Waco,
Texas 76798, USA M. Kruse Duke University, Durham, North Carolina 27708, USA
V. Krutelyovd Texas A&M University, College Station, Texas 77843, USA T. Kuhr
Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology,
D-76131 Karlsruhe, Germany M. Kurata University of Tsukuba, Tsukuba, Ibaraki
305, Japan S. Kwang Enrico Fermi Institute, University of Chicago, Chicago,
Illinois 60637, USA A.T. Laasanen Purdue University, West Lafayette, Indiana
47907, USA S. Lami Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity
of Pisa, ccUniversity of Siena and ddScuola Normale Superiore, I-56127 Pisa,
Italy S. Lammel Fermi National Accelerator Laboratory, Batavia, Illinois
60510, USA M. Lancaster University College London, London WC1E 6BT, United
Kingdom R.L. Lander University of California, Davis, Davis, California
95616, USA K. Lannonu The Ohio State University, Columbus, Ohio 43210, USA
A. Lath Rutgers University, Piscataway, New Jersey 08855, USA G. Latinocc
Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity
of Siena and ddScuola Normale Superiore, I-56127 Pisa, Italy I. Lazzizzera
Istituto Nazionale di Fisica Nucleare, Sezione di Padova-Trento, aaUniversity
of Padova, I-35131 Padova, Italy T. LeCompte Argonne National Laboratory,
Argonne, Illinois 60439, USA E. Lee Texas A&M University, College Station,
Texas 77843, USA H.S. Lee Enrico Fermi Institute, University of Chicago,
Chicago, Illinois 60637, USA J.S. Lee Center for High Energy Physics:
Kyungpook National University, Daegu 702-701, Korea; Seoul National
University, Seoul 151-742, Korea; Sungkyunkwan University, Suwon 440-746,
Korea; Korea Institute of Science and Technology Information, Daejeon 305-806,
Korea; Chonnam National University, Gwangju 500-757, Korea; Chonbuk National
University, Jeonju 561-756, Korea S.W. Leew Texas A&M University, College
Station, Texas 77843, USA S. Leobb Istituto Nazionale di Fisica Nucleare
Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale
Superiore, I-56127 Pisa, Italy S. Leone Istituto Nazionale di Fisica
Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola
Normale Superiore, I-56127 Pisa, Italy J.D. Lewis Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA C.-J. Lin Ernest Orlando Lawrence
Berkeley National Laboratory, Berkeley, California 94720, USA J. Linacre
University of Oxford, Oxford OX1 3RH, United Kingdom M. Lindgren Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA E. Lipeles
University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA A. Lister
University of Geneva, CH-1211 Geneva 4, Switzerland D.O. Litvintsev Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA C. Liu
University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA Q. Liu Purdue
University, West Lafayette, Indiana 47907, USA T. Liu Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA S. Lockwitz Yale
University, New Haven, Connecticut 06520, USA N.S. Lockyer University of
Pennsylvania, Philadelphia, Pennsylvania 19104, USA A. Loginov Yale
University, New Haven, Connecticut 06520, USA H.K. Lou Rutgers University,
Piscataway, New Jersey 08855, USA D. Lucchesiaa Istituto Nazionale di Fisica
Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131 Padova,
Italy J. Lueck Institut für Experimentelle Kernphysik, Karlsruhe Institute
of Technology, D-76131 Karlsruhe, Germany P. Lujan Ernest Orlando Lawrence
Berkeley National Laboratory, Berkeley, California 94720, USA P. Lukens
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA G. Lungu
The Rockefeller University, New York, New York 10065, USA J. Lys Ernest
Orlando Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
R. Lysak Comenius University, 842 48 Bratislava, Slovakia; Institute of
Experimental Physics, 040 01 Kosice, Slovakia R. Madrak Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA K. Maeshima Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA K. Makhoul
Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA P.
Maksimovic The Johns Hopkins University, Baltimore, Maryland 21218, USA S.
Malik The Rockefeller University, New York, New York 10065, USA G. Mancab
University of Liverpool, Liverpool L69 7ZE, United Kingdom A. Manousakis-
Katsikakis University of Athens, 157 71 Athens, Greece F. Margaroli Purdue
University, West Lafayette, Indiana 47907, USA C. Marino Institut für
Experimentelle Kernphysik, Karlsruhe Institute of Technology, D-76131
Karlsruhe, Germany M. Martínez Institut de Fisica d’Altes Energies, ICREA,
Universitat Autonoma de Barcelona, E-08193, Bellaterra (Barcelona), Spain R.
Martínez-Ballarín Centro de Investigaciones Energeticas Medioambientales y
Tecnologicas, E-28040 Madrid, Spain P. Mastrandrea Istituto Nazionale di
Fisica Nucleare, Sezione di Roma 1, eeSapienza Università di Roma, I-00185
Roma, Italy M. Mathis The Johns Hopkins University, Baltimore, Maryland
21218, USA M.E. Mattson Wayne State University, Detroit, Michigan 48201, USA
P. Mazzanti Istituto Nazionale di Fisica Nucleare Bologna, zUniversity of
Bologna, I-40127 Bologna, Italy K.S. McFarland University of Rochester,
Rochester, New York 14627, USA P. McIntyre Texas A&M University, College
Station, Texas 77843, USA R. McNultyi University of Liverpool, Liverpool L69
7ZE, United Kingdom A. Mehta University of Liverpool, Liverpool L69 7ZE,
United Kingdom P. Mehtala Division of High Energy Physics, Department of
Physics, University of Helsinki and Helsinki Institute of Physics, FIN-00014,
Helsinki, Finland A. Menzione Istituto Nazionale di Fisica Nucleare Pisa,
bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale Superiore,
I-56127 Pisa, Italy C. Mesropian The Rockefeller University, New York, New
York 10065, USA T. Miao Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA D. Mietlicki University of Michigan, Ann Arbor, Michigan
48109, USA A. Mitra Institute of Physics, Academia Sinica, Taipei, Taiwan
11529, Republic of China H. Miyake University of Tsukuba, Tsukuba, Ibaraki
305, Japan S. Moed Harvard University, Cambridge, Massachusetts 02138, USA
N. Moggi Istituto Nazionale di Fisica Nucleare Bologna, zUniversity of
Bologna, I-40127 Bologna, Italy M.N. Mondragonk Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA C.S. Moon Center for High Energy
Physics: Kyungpook National University, Daegu 702-701, Korea; Seoul National
University, Seoul 151-742, Korea; Sungkyunkwan University, Suwon 440-746,
Korea; Korea Institute of Science and Technology Information, Daejeon 305-806,
Korea; Chonnam National University, Gwangju 500-757, Korea; Chonbuk National
University, Jeonju 561-756, Korea R. Moore Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA M.J. Morello Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA J. Morlock Institut für
Experimentelle Kernphysik, Karlsruhe Institute of Technology, D-76131
Karlsruhe, Germany P. Movilla Fernandez Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA A. Mukherjee Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA Th. Muller Institut für
Experimentelle Kernphysik, Karlsruhe Institute of Technology, D-76131
Karlsruhe, Germany P. Murat Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA M. Mussiniz Istituto Nazionale di Fisica Nucleare
Bologna, zUniversity of Bologna, I-40127 Bologna, Italy J. Nachtmanm Fermi
National Accelerator Laboratory, Batavia, Illinois 60510, USA Y. Nagai
University of Tsukuba, Tsukuba, Ibaraki 305, Japan J. Naganoma Waseda
University, Tokyo 169, Japan I. Nakano Okayama University, Okayama 700-8530,
Japan A. Napier Tufts University, Medford, Massachusetts 02155, USA J. Nett
Texas A&M University, College Station, Texas 77843, USA C. Neu University of
Virginia, Charlottesville, VA 22906, USA M.S. Neubauer University of
Illinois, Urbana, Illinois 61801, USA J. Nielsene Ernest Orlando Lawrence
Berkeley National Laboratory, Berkeley, California 94720, USA L. Nodulman
Argonne National Laboratory, Argonne, Illinois 60439, USA O. Norniella
University of Illinois, Urbana, Illinois 61801, USA E. Nurse University
College London, London WC1E 6BT, United Kingdom L. Oakes University of
Oxford, Oxford OX1 3RH, United Kingdom S.H. Oh Duke University, Durham,
North Carolina 27708, USA Y.D. Oh Center for High Energy Physics: Kyungpook
National University, Daegu 702-701, Korea; Seoul National University, Seoul
151-742, Korea; Sungkyunkwan University, Suwon 440-746, Korea; Korea Institute
of Science and Technology Information, Daejeon 305-806, Korea; Chonnam
National University, Gwangju 500-757, Korea; Chonbuk National University,
Jeonju 561-756, Korea I. Oksuzian University of Virginia, Charlottesville,
VA 22906, USA T. Okusawa Osaka City University, Osaka 588, Japan R. Orava
Division of High Energy Physics, Department of Physics, University of Helsinki
and Helsinki Institute of Physics, FIN-00014, Helsinki, Finland L. Ortolan
Institut de Fisica d’Altes Energies, ICREA, Universitat Autonoma de Barcelona,
E-08193, Bellaterra (Barcelona), Spain S. Pagan Grisoaa Istituto Nazionale di
Fisica Nucleare, Sezione di Padova-Trento, aaUniversity of Padova, I-35131
Padova, Italy C. Pagliarone Istituto Nazionale di Fisica Nucleare
Trieste/Udine, I-34100 Trieste, ffUniversity of Trieste/Udine, I-33100 Udine,
Italy E. Palenciaf Instituto de Fisica de Cantabria, CSIC-University of
Cantabria, 39005 Santander, Spain V. Papadimitriou Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA A.A. Paramonov Argonne
National Laboratory, Argonne, Illinois 60439, USA J. Patrick Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA G. Paulettaff Istituto
Nazionale di Fisica Nucleare Trieste/Udine, I-34100 Trieste, ffUniversity of
Trieste/Udine, I-33100 Udine, Italy M. Paulini Carnegie Mellon University,
Pittsburgh, Pennsylvania 15213, USA C. Paus Massachusetts Institute of
Technology, Cambridge, Massachusetts 02139, USA D.E. Pellett University of
California, Davis, Davis, California 95616, USA A. Penzo Istituto Nazionale
di Fisica Nucleare Trieste/Udine, I-34100 Trieste, ffUniversity of
Trieste/Udine, I-33100 Udine, Italy T.J. Phillips Duke University, Durham,
North Carolina 27708, USA G. Piacentino Istituto Nazionale di Fisica
Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola
Normale Superiore, I-56127 Pisa, Italy E. Pianori University of
Pennsylvania, Philadelphia, Pennsylvania 19104, USA J. Pilot The Ohio State
University, Columbus, Ohio 43210, USA K. Pitts University of Illinois,
Urbana, Illinois 61801, USA C. Plager University of California, Los Angeles,
Los Angeles, California 90024, USA L. Pondrom University of Wisconsin,
Madison, Wisconsin 53706, USA K. Potamianos Purdue University, West
Lafayette, Indiana 47907, USA O. Poukhov††footnotemark: Joint Institute for
Nuclear Research, RU-141980 Dubna, Russia F. Prokoshinx Joint Institute for
Nuclear Research, RU-141980 Dubna, Russia A. Pronko Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA F. Ptohosh Laboratori
Nazionali di Frascati, Istituto Nazionale di Fisica Nucleare, I-00044
Frascati, Italy E. Pueschel Carnegie Mellon University, Pittsburgh,
Pennsylvania 15213, USA G. Punzibb Istituto Nazionale di Fisica Nucleare
Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale
Superiore, I-56127 Pisa, Italy J. Pursley University of Wisconsin, Madison,
Wisconsin 53706, USA A. Rahaman University of Pittsburgh, Pittsburgh,
Pennsylvania 15260, USA V. Ramakrishnan University of Wisconsin, Madison,
Wisconsin 53706, USA N. Ranjan Purdue University, West Lafayette, Indiana
47907, USA I. Redondo Centro de Investigaciones Energeticas Medioambientales
y Tecnologicas, E-28040 Madrid, Spain P. Renton University of Oxford, Oxford
OX1 3RH, United Kingdom M. Rescigno Istituto Nazionale di Fisica Nucleare,
Sezione di Roma 1, eeSapienza Università di Roma, I-00185 Roma, Italy F.
Rimondiz Istituto Nazionale di Fisica Nucleare Bologna, zUniversity of
Bologna, I-40127 Bologna, Italy L. Ristori45 Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA A. Robson Glasgow University,
Glasgow G12 8QQ, United Kingdom T. Rodrigo Instituto de Fisica de Cantabria,
CSIC-University of Cantabria, 39005 Santander, Spain T. Rodriguez University
of Pennsylvania, Philadelphia, Pennsylvania 19104, USA E. Rogers University
of Illinois, Urbana, Illinois 61801, USA S. Rolli Tufts University, Medford,
Massachusetts 02155, USA R. Roser Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA M. Rossi Istituto Nazionale di Fisica Nucleare
Trieste/Udine, I-34100 Trieste, ffUniversity of Trieste/Udine, I-33100 Udine,
Italy F. Rubbo Fermi National Accelerator Laboratory, Batavia, Illinois
60510, USA F. Ruffinicc Istituto Nazionale di Fisica Nucleare Pisa,
bbUniversity of Pisa, ccUniversity of Siena and ddScuola Normale Superiore,
I-56127 Pisa, Italy A. Ruiz Instituto de Fisica de Cantabria, CSIC-
University of Cantabria, 39005 Santander, Spain J. Russ Carnegie Mellon
University, Pittsburgh, Pennsylvania 15213, USA V. Rusu Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA A. Safonov Texas A&M
University, College Station, Texas 77843, USA W.K. Sakumoto University of
Rochester, Rochester, New York 14627, USA Y. Sakurai Waseda University,
Tokyo 169, Japan L. Santiff Istituto Nazionale di Fisica Nucleare
Trieste/Udine, I-34100 Trieste, ffUniversity of Trieste/Udine, I-33100 Udine,
Italy L. Sartori Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity of
Pisa, ccUniversity of Siena and ddScuola Normale Superiore, I-56127 Pisa,
Italy K. Sato University of Tsukuba, Tsukuba, Ibaraki 305, Japan V.
Savelievt LPNHE, Universite Pierre et Marie Curie/IN2P3-CNRS, UMR7585, Paris,
F-75252 France A. Savoy-Navarro LPNHE, Universite Pierre et Marie
Curie/IN2P3-CNRS, UMR7585, Paris, F-75252 France P. Schlabach Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA A. Schmidt Institut für
Experimentelle Kernphysik, Karlsruhe Institute of Technology, D-76131
Karlsruhe, Germany E.E. Schmidt Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA M.P. Schmidt††footnotemark: Yale University,
New Haven, Connecticut 06520, USA M. Schmitt Northwestern University,
Evanston, Illinois 60208, USA T. Schwarz University of California, Davis,
Davis, California 95616, USA L. Scodellaro Instituto de Fisica de Cantabria,
CSIC-University of Cantabria, 39005 Santander, Spain A. Scribanocc Istituto
Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena
and ddScuola Normale Superiore, I-56127 Pisa, Italy F. Scuri Istituto
Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena
and ddScuola Normale Superiore, I-56127 Pisa, Italy A. Sedov Purdue
University, West Lafayette, Indiana 47907, USA S. Seidel University of New
Mexico, Albuquerque, New Mexico 87131, USA C. Seitz Rutgers University,
Piscataway, New Jersey 08855, USA Y. Seiya Osaka City University, Osaka 588,
Japan A. Semenov Joint Institute for Nuclear Research, RU-141980 Dubna,
Russia F. Sforzabb Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity
of Pisa, ccUniversity of Siena and ddScuola Normale Superiore, I-56127 Pisa,
Italy A. Sfyrla University of Illinois, Urbana, Illinois 61801, USA S.Z.
Shalhout University of California, Davis, Davis, California 95616, USA T.
Shears University of Liverpool, Liverpool L69 7ZE, United Kingdom P.F.
Shepard University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA M.
Shimojimas University of Tsukuba, Tsukuba, Ibaraki 305, Japan S. Shiraishi
Enrico Fermi Institute, University of Chicago, Chicago, Illinois 60637, USA
M. Shochet Enrico Fermi Institute, University of Chicago, Chicago, Illinois
60637, USA I. Shreyber Institution for Theoretical and Experimental Physics,
ITEP, Moscow 117259, Russia A. Simonenko Joint Institute for Nuclear
Research, RU-141980 Dubna, Russia P. Sinervo Institute of Particle Physics:
McGill University, Montréal, Québec, Canada H3A 2T8; Simon Fraser University,
Burnaby, British Columbia, Canada V5A 1S6; University of Toronto, Toronto,
Ontario, Canada M5S 1A7; and TRIUMF, Vancouver, British Columbia, Canada V6T
2A3 A. Sissakian††footnotemark: Joint Institute for Nuclear Research,
RU-141980 Dubna, Russia K. Sliwa Tufts University, Medford, Massachusetts
02155, USA J.R. Smith University of California, Davis, Davis, California
95616, USA F.D. Snider Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA A. Soha Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA S. Somalwar Rutgers University, Piscataway, New Jersey
08855, USA V. Sorin Institut de Fisica d’Altes Energies, ICREA, Universitat
Autonoma de Barcelona, E-08193, Bellaterra (Barcelona), Spain P. Squillacioti
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA M.
Stancari Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M. Stanitzki Yale University, New Haven, Connecticut 06520, USA R. St. Denis
Glasgow University, Glasgow G12 8QQ, United Kingdom B. Stelzer Institute of
Particle Physics: McGill University, Montréal, Québec, Canada H3A 2T8; Simon
Fraser University, Burnaby, British Columbia, Canada V5A 1S6; University of
Toronto, Toronto, Ontario, Canada M5S 1A7; and TRIUMF, Vancouver, British
Columbia, Canada V6T 2A3 O. Stelzer-Chilton Institute of Particle Physics:
McGill University, Montréal, Québec, Canada H3A 2T8; Simon Fraser University,
Burnaby, British Columbia, Canada V5A 1S6; University of Toronto, Toronto,
Ontario, Canada M5S 1A7; and TRIUMF, Vancouver, British Columbia, Canada V6T
2A3 D. Stentz Northwestern University, Evanston, Illinois 60208, USA J.
Strologas University of New Mexico, Albuquerque, New Mexico 87131, USA G.L.
Strycker University of Michigan, Ann Arbor, Michigan 48109, USA Y. Sudo
University of Tsukuba, Tsukuba, Ibaraki 305, Japan A. Sukhanov University of
Florida, Gainesville, Florida 32611, USA I. Suslov Joint Institute for
Nuclear Research, RU-141980 Dubna, Russia K. Takemasa University of Tsukuba,
Tsukuba, Ibaraki 305, Japan Y. Takeuchi University of Tsukuba, Tsukuba,
Ibaraki 305, Japan J. Tang Enrico Fermi Institute, University of Chicago,
Chicago, Illinois 60637, USA M. Tecchio University of Michigan, Ann Arbor,
Michigan 48109, USA P.K. Teng Institute of Physics, Academia Sinica, Taipei,
Taiwan 11529, Republic of China J. Thomg Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA S. Thomas Rutgers University,
Piscataway, New Jersey 08855, USA J. Thome Carnegie Mellon University,
Pittsburgh, Pennsylvania 15213, USA G.A. Thompson University of Illinois,
Urbana, Illinois 61801, USA E. Thomson University of Pennsylvania,
Philadelphia, Pennsylvania 19104, USA P. Ttito-Guzmán Centro de
Investigaciones Energeticas Medioambientales y Tecnologicas, E-28040 Madrid,
Spain S. Tkaczyk Fermi National Accelerator Laboratory, Batavia, Illinois
60510, USA D. Toback Texas A&M University, College Station, Texas 77843, USA
S. Tokar Comenius University, 842 48 Bratislava, Slovakia; Institute of
Experimental Physics, 040 01 Kosice, Slovakia K. Tollefson Michigan State
University, East Lansing, Michigan 48824, USA T. Tomura University of
Tsukuba, Tsukuba, Ibaraki 305, Japan D. Tonelli Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA S. Torre Laboratori Nazionali di
Frascati, Istituto Nazionale di Fisica Nucleare, I-00044 Frascati, Italy D.
Torretta Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
P. Totaroff Istituto Nazionale di Fisica Nucleare Trieste/Udine, I-34100
Trieste, ffUniversity of Trieste/Udine, I-33100 Udine, Italy M. Trovatodd
Istituto Nazionale di Fisica Nucleare Pisa, bbUniversity of Pisa, ccUniversity
of Siena and ddScuola Normale Superiore, I-56127 Pisa, Italy Y. Tu
University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA F. Ukegawa
University of Tsukuba, Tsukuba, Ibaraki 305, Japan S. Uozumi Center for High
Energy Physics: Kyungpook National University, Daegu 702-701, Korea; Seoul
National University, Seoul 151-742, Korea; Sungkyunkwan University, Suwon
440-746, Korea; Korea Institute of Science and Technology Information, Daejeon
305-806, Korea; Chonnam National University, Gwangju 500-757, Korea; Chonbuk
National University, Jeonju 561-756, Korea A. Varganov University of
Michigan, Ann Arbor, Michigan 48109, USA F. Vázquezk University of Florida,
Gainesville, Florida 32611, USA G. Velev Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA C. Vellidis University of Athens,
157 71 Athens, Greece M. Vidal Centro de Investigaciones Energeticas
Medioambientales y Tecnologicas, E-28040 Madrid, Spain I. Vila Instituto de
Fisica de Cantabria, CSIC-University of Cantabria, 39005 Santander, Spain R.
Vilar Instituto de Fisica de Cantabria, CSIC-University of Cantabria, 39005
Santander, Spain J. Vizán Instituto de Fisica de Cantabria, CSIC-University
of Cantabria, 39005 Santander, Spain M. Vogel University of New Mexico,
Albuquerque, New Mexico 87131, USA G. Volpibb Istituto Nazionale di Fisica
Nucleare Pisa, bbUniversity of Pisa, ccUniversity of Siena and ddScuola
Normale Superiore, I-56127 Pisa, Italy P. Wagner University of Pennsylvania,
Philadelphia, Pennsylvania 19104, USA R.L. Wagner Fermi National Accelerator
Laboratory, Batavia, Illinois 60510, USA T. Wakisaka Osaka City University,
Osaka 588, Japan R. Wallny University of California, Los Angeles, Los
Angeles, California 90024, USA S.M. Wang Institute of Physics, Academia
Sinica, Taipei, Taiwan 11529, Republic of China A. Warburton Institute of
Particle Physics: McGill University, Montréal, Québec, Canada H3A 2T8; Simon
Fraser University, Burnaby, British Columbia, Canada V5A 1S6; University of
Toronto, Toronto, Ontario, Canada M5S 1A7; and TRIUMF, Vancouver, British
Columbia, Canada V6T 2A3 D. Waters University College London, London WC1E
6BT, United Kingdom M. Weinberger Texas A&M University, College Station,
Texas 77843, USA W.C. Wester III Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA B. Whitehouse Tufts University, Medford,
Massachusetts 02155, USA D. Whitesonc University of Pennsylvania,
Philadelphia, Pennsylvania 19104, USA A.B. Wicklund Argonne National
Laboratory, Argonne, Illinois 60439, USA E. Wicklund Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA S. Wilbur Enrico Fermi
Institute, University of Chicago, Chicago, Illinois 60637, USA F. Wick
Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology,
D-76131 Karlsruhe, Germany H.H. Williams University of Pennsylvania,
Philadelphia, Pennsylvania 19104, USA J.S. Wilson The Ohio State University,
Columbus, Ohio 43210, USA P. Wilson Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA B.L. Winer The Ohio State University, Columbus,
Ohio 43210, USA P. Wittichg Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA S. Wolbers Fermi National Accelerator Laboratory,
Batavia, Illinois 60510, USA H. Wolfe The Ohio State University, Columbus,
Ohio 43210, USA T. Wright University of Michigan, Ann Arbor, Michigan 48109,
USA X. Wu University of Geneva, CH-1211 Geneva 4, Switzerland Z. Wu Baylor
University, Waco, Texas 76798, USA K. Yamamoto Osaka City University, Osaka
588, Japan J. Yamaoka Duke University, Durham, North Carolina 27708, USA T.
Yang Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
U.K. Yangp Enrico Fermi Institute, University of Chicago, Chicago, Illinois
60637, USA Y.C. Yang Center for High Energy Physics: Kyungpook National
University, Daegu 702-701, Korea; Seoul National University, Seoul 151-742,
Korea; Sungkyunkwan University, Suwon 440-746, Korea; Korea Institute of
Science and Technology Information, Daejeon 305-806, Korea; Chonnam National
University, Gwangju 500-757, Korea; Chonbuk National University, Jeonju
561-756, Korea W.-M. Yao Ernest Orlando Lawrence Berkeley National
Laboratory, Berkeley, California 94720, USA G.P. Yeh Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA K. Yim Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA J. Yoh Fermi National
Accelerator Laboratory, Batavia, Illinois 60510, USA K. Yorita Waseda
University, Tokyo 169, Japan T. Yoshidaj Osaka City University, Osaka 588,
Japan G.B. Yu Duke University, Durham, North Carolina 27708, USA I. Yu
Center for High Energy Physics: Kyungpook National University, Daegu 702-701,
Korea; Seoul National University, Seoul 151-742, Korea; Sungkyunkwan
University, Suwon 440-746, Korea; Korea Institute of Science and Technology
Information, Daejeon 305-806, Korea; Chonnam National University, Gwangju
500-757, Korea; Chonbuk National University, Jeonju 561-756, Korea S.S. Yu
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA J.C. Yun
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA A.
Zanetti Istituto Nazionale di Fisica Nucleare Trieste/Udine, I-34100 Trieste,
ffUniversity of Trieste/Udine, I-33100 Udine, Italy Y. Zeng Duke University,
Durham, North Carolina 27708, USA S. Zucchelliz Istituto Nazionale di Fisica
Nucleare Bologna, zUniversity of Bologna, I-40127 Bologna, Italy
###### Abstract
We present the first model independent search for three-jet hadronic
resonances within multijet events in $\sqrt{s}=1.96$ TeV $p\bar{p}$ collisions
at the Fermilab Tevatron using the CDF II detector. Pair production of
supersymmetric gluinos and squarks with hadronic R-parity violating decays is
employed as an example of a new physics benchmark for this signature.
Selection criteria based on the kinematic properties of an ensemble of jet
combinations within each event help to extract signal from copious QCD
background. Our background estimates include all-hadronic $t\bar{t}$ decays
that have a signature similar to the signal. No significant excess outside the
top quark mass window is observed in data with an integrated luminosity of 3.2
fb-1. We place 95% confidence level limits on the production cross section
$\sigma(p\bar{p}\rightarrow
XX^{{}^{\prime}})\times\rm{BR}$($\widetilde{g}\widetilde{g}$$\rightarrow
3~{}{\rm jet}+3~{}{\rm jet})$ where
$X,X^{\prime}=\widetilde{g},~{}\widetilde{q}$, or $\widetilde{\bar{q}}$, with
$\widetilde{q}$, $\widetilde{\bar{q}}\rightarrow\widetilde{g}$ + jet, as a
function of gluino mass, in the range of 77 GeV/$c^{2}$ to 240 GeV/$c^{2}$.
###### pacs:
13.85.-t, 11.30.Pb
Most searches for new physics at high energy hadron colliders use signatures
that require leptons, photons, or missing transverse energy
(${\not\\!\\!E_{\rm T}}$) bib (a) in order to suppress backgrounds from QCD.
Final states with multijets and ${\not\\!\\!E_{\rm T}}$ have also been
explored Aaltonen et al. (2009a); Abazov et al. (2008).
In this letter, we present a first new physics search in an entirely hadronic
channel with no ${\not\\!\\!E_{\rm T}}$ signature using data collected with
the Collider Detector at Fermilab (CDF). This data set corresponds to an
integrated luminosity of 3.2 fb-1 of $p\bar{p}$ collisions at $\sqrt{s}=1.96$
TeV at the Tevatron collider. The search utilizes a novel approach Essig ;
Seitz : an ensemble of all possible jet triplets within an event consisting of
at least six jets is used to extract a signal from the multijet QCD
backgrounds. We model the possible new physics origin for this signature with
pair production of $SU(3)_{C}$ adjoint Majorana fermions each one decaying
into three quarks R. Chivukula, M. Golden, and E. Simmons (1991a, b). This
search is sensitive to models such as hadronic R-parity violating
supersymmetry (RPV SUSY) Essig with a gluino, chargino, or neutralino
lightest superpartner, as well as to the hadronic decay modes of pairs of top
quarks or fourth-generation quarks, and it complements existing di-jet
resonance searches at hadron colliders. Moreover, it does not require
$b$-quark jet identification, an important tool often used for top quark
identification.
The CDF II detector is a multi-purpose particle detector consisting of
tracking and calorimeter systems Abulencia et al. (2007). The data were
collected using an online event selection that requires at least four
calorimeter jets Aaltonen et al. (2009b) with uncorrected transverse energy
$E_{T}>$ 15 GeV. A jet is formed by a cluster of calorimeter towers and
reconstructed with a cone algorithm using a fixed cone of $\Delta R=0.4$ Abe
et al. (1992), with $\Delta R=\sqrt{\Delta\eta^{2}+\Delta\phi^{2}}$ bib (a).
In the online selection an additional request is made for the sum of the
transverse energy of all clusters to be larger than 175 GeV. At the analysis
level, jet energies are corrected to account for effects such as non-
linearities in the detector response and multiple $p\bar{p}$ collisions in an
event Bhatti et al. (2006).
Events are selected with at least six jets with transverse momentum ($p_{T}$)
greater than 15 GeV/$c$ and $|\eta|<2.5$. The scalar sum of the most energetic
six jets’ $p_{T}$, $\sum p_{T}$, is required to be greater than 250 GeV/$c$
and events with $\mbox{${\not\\!\\!E_{\rm T}}$}>50$ GeV are removed. Multiple
interactions, resulting in the reconstruction of more than one primary vertex
in the same event, contribute to the multijet background. We require at least
one primary vertex and discard events with more than four primary vertices. To
further reduce this background, we require jets in an event to originate from
near the same point on the beamline. We associate tracks with each jet where
possible bib (b) by requiring $\Delta R$ between the track and the jet to be
less than 0.4. The mean $z$-coordinate of all tracks associated with each jet
($\bar{z_{j}}$ for the $j^{\rm th}$ jet), and the associated standard
deviation ($\delta(z_{j})$) are determined. Events with jets that have
$|\bar{z_{j}}|>60$ cm are discarded. We then evaluate the standard deviation
of the $\bar{z_{j}}$ of all jets in the event ($\delta(z_{\textrm{all}})$) and
select events that have at least four jets with $\delta(z_{j})<4$ cm, and
$\delta(z_{\textrm{all}})<0.5$ cm, consistent with the resolution of tracks
associated with jets. Once the selection is applied, pileup effects are
significantly reduced. Since we select events with at least six jets, we
consider an ensemble of 20 (or more) possible jet triplets. We discard those
triplets that have more than one jet with no $z$ information. In addition, all
jets in the triplet must have $\delta(z_{j})<2.5$ cm, and originate from
within 10 cm of the primary vertex of the event.
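For illustration, a minimal sketch of the triplet-ensemble construction and the vertex-consistency requirements described above; the jet record layout is an assumption, while the numerical thresholds follow the text.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import Optional

@dataclass
class Jet:
    pt: float                # corrected transverse momentum, GeV/c
    eta: float
    phi: float
    zbar: Optional[float]    # mean z of associated tracks, cm (None if no tracks)
    dz: Optional[float]      # standard deviation of associated track z, cm

def accepted_triplets(jets, z_vertex):
    """Ensemble of jet triplets passing the per-triplet quality requirements."""
    kept = []
    for trip in combinations(jets, 3):        # 20 triplets for a 6-jet event
        if sum(1 for j in trip if j.zbar is None) > 1:
            continue                          # at most one jet without z information
        if all(j.zbar is None or (j.dz < 2.5 and abs(j.zbar - z_vertex) < 10.0)
               for j in trip):
            kept.append(trip)
    return kept
```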
The biggest challenge of this analysis is to reduce the large multijet QCD
background. To extract signal from this background, we apply the following
technique: for every accepted triplet we calculate the invariant mass,
$M_{jjj}$, and scalar sum $p_{T}$, $\sum_{jjj}p_{T}$. Triplets made of
uncorrelated jets tend to have $M_{jjj}c\approx\sum_{jjj}p_{T}$, while signal
triplets should have $M_{jjj}$ as close to the mass of the decaying particle
as allowed by jet energy resolution. We then select triplets with
$\sum_{jjj}p_{T}-M_{jjj}c>\Delta$, $\Delta$ being a diagonal offset as
illustrated in Fig. 1. The diagonal offset values are optimized for the best
signal over background ratio separately for each hadronic resonance mass in
this search. The optimized diagonal offset selection greatly reduces the QCD
background and the contribution from incorrect combinations of jets. We note
that for a small fraction of events it is possible for multiple triplets to
pass all selection criteria.
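To make the kinematic selection concrete, the sketch below (continuing the jet record assumed above, with massless-jet four-vectors for brevity) computes $M_{jjj}$ and $\sum_{jjj}p_{T}$ for each triplet and keeps those to the right of the diagonal, $\sum_{jjj}p_{T}-M_{jjj}c>\Delta$.

```python
import numpy as np

def four_momentum(pt, eta, phi):
    """Massless four-vector (E, px, py, pz) in GeV; the jet mass is neglected here."""
    return np.array([pt * np.cosh(eta), pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)])

def triplet_kinematics(trip):
    p = sum(four_momentum(j.pt, j.eta, j.phi) for j in trip)
    m_jjj = np.sqrt(max(p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2, 0.0))
    return m_jjj, sum(j.pt for j in trip)

def diagonal_cut(triplets, delta):
    """Keep triplets with sum pT minus M_jjj (GeV units) above the diagonal offset Delta."""
    kept = []
    for t in triplets:
        m_jjj, sum_pt = triplet_kinematics(t)
        if sum_pt - m_jjj > delta:
            kept.append((m_jjj, sum_pt))
    return kept
```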
Figure 1: Distribution of $M_{jjj}$ versus $\sum_{jjj}p_{T}$ for a pair-
produced RPV gluino with invariant mass 190 GeV/c2 generated with pythia MC.
Triplets to the right of a diagonal offset
($\sum_{jjj}p_{T}-M_{jjj}c=\Delta$), indicated by the dashed line, are
kept. The inset shows the $M_{jjj}$ distribution for the RPV signal MC,
with no QCD background, after a diagonal offset of 195 GeV/$c$ along with a
Gaussian plus a Landau fit; the Landau shows the combinatorial contribution
within the signal jet ensemble. The QCD background distribution resembles that
of the combinatorial contribution, because they are both due to effectively
uncorrelated triplets.
The QCD background is estimated from a 5-jet data sample, which is
statistically independent of the signal sample of $\geq 6$ jets (for brevity
referred to as 6-jet). The 5-jet $M_{jjj}$ distribution is rescaled by the
ratio of the 6-jet to 5-jet population in each $\sum_{jjj}{p_{T}}$ bin. A
Landau function is chosen Essig to fit the scaled 5-jet $M_{jjj}$
distribution. The Landau parameters extracted from the scaled 5-jet $M_{jjj}$
distribution vary by less than 2 GeV/$c^{2}$ from similar fits to the 6-jet
sample, indicating that the scaled 5-jet sample describes the background in
the 6-jet sample well. The contribution to the background from $t\bar{t}$ pair
production is estimated using the pythia Monte Carlo (MC) generator Sjöstrand
et al. (2001) followed by the CDF detector simulation bib (2003). These events
were generated assuming a top quark mass of 172.5 GeV/$c^{2}$ and production
cross section of 7.5 pb. To ensure a proper fit to the QCD background, the fit
is blinded to the mass region corresponding to the top quark, 153
GeV/$c^{2}$$<M_{jjj}<$189 GeV/$c^{2}$. Additionally, we find that truncating
the Landau fit for lower values of $\Delta$ gives an improved description of
the QCD background. The Landau parameters extracted from the fits vary
smoothly as functions of the diagonal offset value. We now have a firm
prediction for the QCD background and fix the parameters when we fit for
signal.
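The background construction just described can be sketched as follows; scipy has no exact Landau density, so the closely related Moyal distribution is used here as a stand-in, and the binning and blinding window are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import moyal

def rescale_weights(sumpt5, sumpt6, sumpt_bins):
    """Per-triplet weights for the 5-jet sample: 6-jet/5-jet population ratio per sum-pT bin."""
    n5, _ = np.histogram(sumpt5, bins=sumpt_bins)
    n6, _ = np.histogram(sumpt6, bins=sumpt_bins)
    ratio = np.divide(n6, n5, out=np.zeros(len(n5)), where=n5 > 0)
    idx = np.clip(np.digitize(sumpt5, sumpt_bins) - 1, 0, len(ratio) - 1)
    return ratio[idx]

def landau_shape(m, norm, loc, scale):
    return norm * moyal.pdf(m, loc=loc, scale=scale)

def fit_background(m5, weights, mass_bins, blind=(153.0, 189.0)):
    """Fit the scaled 5-jet M_jjj spectrum, blinding the top-quark mass window."""
    counts, edges = np.histogram(m5, bins=mass_bins, weights=weights)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = (centers < blind[0]) | (centers > blind[1])
    popt, _ = curve_fit(landau_shape, centers[keep], counts[keep],
                        p0=[counts.max(), centers[np.argmax(counts)], 20.0])
    return popt            # (normalization, location, scale) of the background shape
```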
The signal is modeled using the pythia MC generator. The process
$p\bar{p}\rightarrow XX^{{}^{\prime}}$ where
$X,X^{\prime}=\widetilde{g},~{}\widetilde{q}$, or $\widetilde{\bar{q}}$ is
simulated at several gluino mass values, ranging from 74 GeV/$c^{2}$ to 245
GeV/$c^{2}$ with hadronic $uds$ RPV SUSY decays turned on, allowing gluino
decays to three light jets. Two scenarios of squark masses are considered
($0.5$ TeV/$c^{2}$ $<m_{\widetilde{q}}<0.7$ TeV/$c^{2}$,
$m_{\widetilde{q}}=m_{\widetilde{g}}+10$ GeV/$c^{2}$) and were found to give
equivalent acceptances.
The acceptance of the trigger, reconstruction, and selection requirements for
signal events is determined by fitting the pair produced RPV gluino MC with a
Landau plus Gaussian function, corresponding to the combinatorial contribution
and signal peak respectively. An example is shown in the inset of Fig. 1. The
Gaussian is integrated in a $\pm 1\sigma$ range to extract the number of
signal triplets. This procedure is repeated for various diagonal offset values
and the optimal offset for each hadronic resonance mass is determined. The
acceptance, calculated for these optimal offset values, is $5\times 10^{-5}$,
constant within 20% across all gluino mass points.
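Concretely, for a Gaussian of amplitude $A$ and width $\sigma$ the $\pm 1\sigma$ integral is $A\sigma\sqrt{2\pi}\,\mathrm{erf}(1/\sqrt{2})\approx 0.683\,A\sigma\sqrt{2\pi}$. A minimal sketch of the fit model and yield extraction (the per-bin amplitude convention is an assumption):

```python
import numpy as np
from math import erf, sqrt, pi
from scipy.stats import moyal

def gauss_plus_landau(m, a_sig, mu, sigma, norm_bkg, loc, scale):
    """Signal Gaussian on top of a Landau-like (Moyal) combinatorial shape."""
    return (a_sig * np.exp(-0.5 * ((m - mu) / sigma) ** 2)
            + norm_bkg * moyal.pdf(m, loc=loc, scale=scale))

def n_signal_triplets(a_sig, sigma, bin_width):
    """Signal yield from integrating the fitted Gaussian over +-1 sigma."""
    area = a_sig * sigma * sqrt(2.0 * pi) * erf(1.0 / sqrt(2.0))   # ~68.3% of the full area
    return area / bin_width    # a_sig taken as a per-bin amplitude, hence the division
```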
The expected sensitivity of this analysis in the absence of signal is
determined with a set of background-only experiments (pseudoexperiments). A
pseudoexperiment is constructed with the background modeled by a Landau
function whose parameters are chosen randomly from within the range allowed by
the background shape fits, with the expected amount of $t\bar{t}$ added. Each
pseudoexperiment is fit with the Landau background shape parameters fixed, and
a signal Gaussian whose position is determined by the mass point being fit,
and whose amplitude and width are allowed to vary within a range determined by
the expected signal shape. The number of signal triplets allowed by each
pseudoexperiment is extracted by integrating the Gaussian in the same way as
in the acceptance calculation.
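A schematic of this pseudoexperiment loop is sketched below: background parameters are drawn within their fitted ranges, the expected $t\bar{t}$ contribution is added, and each pseudo-dataset is refit with the background fixed and a signal Gaussian floating at the test mass. The parameter ranges, binning, and $t\bar{t}$ template here are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import moyal

rng = np.random.default_rng(1)

def pseudoexperiment_yields(n_pe, mass_bins, m_test, landau_ranges, ttbar_template, n_ttbar):
    """Signal yields allowed by background-only pseudo-datasets at the test mass m_test."""
    centers = 0.5 * (mass_bins[:-1] + mass_bins[1:])
    yields = []
    for _ in range(n_pe):
        norm, loc, scale = (rng.uniform(lo, hi) for lo, hi in landau_ranges)
        expected = norm * moyal.pdf(centers, loc=loc, scale=scale) + n_ttbar * ttbar_template
        pseudo = rng.poisson(expected)

        def model(m, a_sig, sigma):   # background shape fixed; only the Gaussian floats
            return (norm * moyal.pdf(m, loc=loc, scale=scale) + n_ttbar * ttbar_template
                    + a_sig * np.exp(-0.5 * ((m - m_test) / sigma) ** 2))

        (a_sig, sigma), _ = curve_fit(model, centers, pseudo, p0=[1.0, 10.0], maxfev=5000)
        yields.append(abs(a_sig) * abs(sigma) * np.sqrt(2.0 * np.pi) * 0.6827)
    return np.array(yields)
```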
Two broad categories of systematic uncertainties are accounted for in
extracting a cross section: uncertainties in the shape of the $M_{jjj}$
distribution and uncertainties in the acceptance of the signal. Shape
uncertainties, determined from background and signal fits, are incorporated in
the pseudoexperiments themselves. Acceptance uncertainties arise from modeling
the signal Monte Carlo and include effects of initial and final state
radiation Abulencia et al. (2006) (20%), parton distribution functions (PDFs)
from CTEQ Pumplin et al. (2002) (10%), jet energy scale Bhatti et al. (2006)
(31%) and luminosity Acosta et al. (2002) (6%) uncertainties. The overall
acceptance uncertainty due to these sources is 38%.
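Assuming the quoted sources are independent and combined in quadrature, the total acceptance uncertainty can be reproduced directly:

```python
from math import sqrt

sources = {"ISR/FSR": 0.20, "PDF": 0.10, "jet energy scale": 0.31, "luminosity": 0.06}
total = sqrt(sum(v ** 2 for v in sources.values()))
print(f"total acceptance uncertainty = {total:.1%}")   # ~38.7%, consistent with the 38% quoted above
```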
We search for a hadronic resonance in the data for an invariant mass ($m$) in the
range 77–240 GeV/$c^{2}$ in 9 GeV/$c^{2}$ steps, consistent with jet energy
resolution. For each mass, jet triplets are selected by the optimal diagonal
offset value. The data $M_{jjj}$ distribution is fit in exactly the same way
as the pseudoexperiments. Figure 2 shows the $M_{jjj}$ distribution for $m$ =
112 GeV/$c^{2}$ and 175 GeV/$c^{2}$. The latter fit shows a noticeable excess
consistent in mass with the hadronic decay of the top quark. The Gaussian
component of the fit integrated from 165 GeV/$c^{2}$ to 185 GeV/$c^{2}$,
corresponding to a $\pm 1\sigma$ window around the Gaussian peak, gives $11\pm
5$ triplets. The number of expected QCD background triplets in the same mass
window from the Landau function is $8\pm 1$. The $t\bar{t}$ contribution to
background is evaluated using pythia. It is cross-checked with higher order
$t\bar{t}$ MC generators alpgen Mangano et al. (2003) and mc@nlo Frixione and
Webber (2002), samples that varied the amount of initial and final state
radiation, as well as samples that varied the PDFs within their uncertainties.
These studies lead us to expect between 0.5 and 1.1 triplets from $t\bar{t}$
production in the aforementioned mass range. We note that $\sim$10% of the
triplets in the top mass window originate from two or more combinations in a
jet ensemble of a given event, consistent with the pythia $t\bar{t}$
simulation. We evaluate the significance of the excess using the
pseudoexperiment method described above, which includes systematic
uncertainties on signal acceptance as well as the shape of the $M_{jjj}$
distribution. The observed excess is 2 standard deviations (2$\sigma$) above
the prediction. Additional cross-checks, such as requiring one of the jets to
have originated from a $b$-quark, suggest that the excess is consistent with
coming from top quarks.
Figure 2: $M_{jjj}$ distributions in 3.2 fb-1 data fitted to a Landau
(parameters are extracted from fits to the scaled 5-jet $M_{jjj}$
distribution) plus a Gaussian at (a) 112 GeV/$c^{2}$ (optimal diagonal offset
value 155 GeV/$c$) and (b) 175 GeV/$c^{2}$ (optimal diagonal offset value of
190 GeV/$c$). The fit function in panel (b) includes a Gaussian fixed at
$m=175$ GeV/$c^{2}$.
We do not observe a significant deviation from standard model backgrounds
anywhere in the data. A Bayesian approach is used to place 95% confidence
level limits on $\sigma(p\bar{p}\rightarrow
XX^{{}^{\prime}})\times\rm{BR}$($\widetilde{g}\widetilde{g}$$\rightarrow
3~{}{\rm jet}+3~{}{\rm jet})$ where
$X,X^{\prime}=\widetilde{g},~{}\widetilde{q}$, or $\widetilde{\bar{q}}$, with
$\widetilde{q}$, $\widetilde{\bar{q}}\rightarrow\widetilde{g}$ + jet, versus
gluino mass, shown in Fig. 3. The largest excess observed is the one noted
previously, located near the top quark mass. We find that our background
estimate has a 2.3% probability of producing such a deviation. Comparisons to
the theoretical cross section for $\sigma(p\bar{p}\rightarrow
XX^{{}^{\prime}})\times\rm{BR}$($\widetilde{g}\widetilde{g}$$\rightarrow
3~{}\rm{jet}+3~{}\rm{jet})$ from pythia corrected by a next-to-leading-order
(NLO) $k$-factor calculated using prospino W. Beenakker, R. Hoepker, and M.
Spira are shown in the dashed and dash-dot lines for two different squark
mass scenarios. For a decoupled squark mass ($0.5$ TeV/$c^{2}$
$<m_{\widetilde{q}}<0.7$ TeV/$c^{2}$) we exclude gluinos below a mass of 144
GeV/$c^{2}$ (dashed line). In the case of a squark mass which is nearly
degenerate with the gluino mass ($m_{\widetilde{q}}=m_{\widetilde{g}}+10$
GeV/$c^{2}$) we exclude gluinos below 155 GeV/$c^{2}$ (dash-dot line).
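The limits above come from the full shape-based Bayesian treatment with systematic uncertainties; purely as an illustration, the sketch below computes a 95% confidence level upper limit for a single-bin counting experiment with a flat prior on the cross section and a fixed background, which captures the Bayesian integration step in its simplest form. All numerical inputs are placeholders.

```python
import numpy as np
from scipy.stats import poisson

def bayesian_upper_limit(n_obs, bkg, acc_times_lumi, cl=0.95, sigma_max=200.0, n_grid=20000):
    """95% C.L. upper limit on sigma (pb): flat prior, posterior proportional to the Poisson likelihood."""
    sigma = np.linspace(0.0, sigma_max, n_grid)
    mu = bkg + sigma * acc_times_lumi          # expected yield as a function of sigma
    posterior = poisson.pmf(n_obs, mu)
    cdf = np.cumsum(posterior)
    return sigma[np.searchsorted(cdf / cdf[-1], cl)]

# placeholder inputs: 3.2 fb^-1 = 3200 pb^-1 and the ~5e-5 acceptance quoted in the text
print(bayesian_upper_limit(n_obs=11, bkg=8.0, acc_times_lumi=5e-5 * 3200.0))
```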
We have performed a first search for three-jet hadronic resonances in a six or
more jet final state using a data sample with an integrated luminosity of 3.2
fb-1 collected by the CDF II detector. A novel technique is introduced that
exploits kinematic features within an ensemble of jet combinations that allows
us to extract signal from the QCD background. We observe no significant excess
in the data in an invariant mass range from 77 GeV/$c^{2}$ to 240 GeV/$c^{2}$
and place 95% confidence level limits on the production cross section
$\sigma(p\bar{p}\rightarrow
XX^{{}^{\prime}})\times\rm{BR}$($\widetilde{g}\widetilde{g}$$\rightarrow
3~{}{\rm jet}+3~{}{\rm jet})$ where
$X,X^{\prime}=\widetilde{g},~{}\widetilde{q}$, or $\widetilde{\bar{q}}$, with
$\widetilde{q}$, $\widetilde{\bar{q}}\rightarrow\widetilde{g}$ + jet, versus
gluino mass. The results are presented as limits on RPV gluinos decaying to
three jets, but are more widely applicable to any new particle with a three-
jet decay mode. Two different squark mass scenarios have been considered:
decoupled squarks and squarks nearly degenerate in mass with the gluino. We
can exclude gluinos below 144 GeV/$c^{2}$ and 155 GeV/$c^{2}$ respectively.
Figure 3: The observed (points) and expected (solid black line) 95% confidence
level limits on the production cross section $\sigma(p\bar{p}\rightarrow
XX^{{}^{\prime}})\times\rm{BR}$($\widetilde{g}\widetilde{g}\rightarrow
3~{}{\rm jet}+3~{}{\rm jet}$) where
$X,X^{\prime}=\widetilde{g},~{}\widetilde{q}$, or $\widetilde{\bar{q}}$,
including systematic uncertainties. The shaded bands represent the total
uncertainty on the limit. Also shown is the model cross section from pythia
corrected by an NLO $k$-factor (dash-dot line for $0.5$ TeV/$c^{2}$
$<m_{\widetilde{q}}<0.7$ TeV/$c^{2}$, dashed line for
$m_{\widetilde{q}}=m_{\widetilde{g}}+10$ GeV/$c^{2}$).
###### Acknowledgements.
We thank R. Essig, S. Mrenna, M. Park, and Y. Zhao for assistance with this
analysis. We also thank the Fermilab staff and the technical staffs of the
participating institutions for their vital contributions. This work was
supported by the U.S. Department of Energy and National Science Foundation;
the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education,
Culture, Sports, Science and Technology of Japan; the Natural Sciences and
Engineering Research Council of Canada; the National Science Council of the
Republic of China; the Swiss National Science Foundation; the A.P. Sloan
Foundation; the Bundesministerium für Bildung und Forschung, Germany; the
Korean World Class University Program, the National Research Foundation of
Korea; the Science and Technology Facilities Council and the Royal Society,
UK; the Institut National de Physique Nucleaire et Physique des
Particules/CNRS; the Russian Foundation for Basic Research; the Ministerio de
Ciencia e Innovación, and Programa Consolider-Ingenio 2010, Spain; the Slovak
R&D Agency; and the Academy of Finland.
## References
* bib (a) CDF uses a ($z$, $\phi$, $\theta$) coordinate system with the $z$-axis in the direction of the proton beam; $\phi$ and $\theta$ are the azimuthal and polar angle respectively. The pseudorapidity is defined as $\eta=-\ln(\tan{\theta\over 2})$, and the transverse momentum and energy as $p_{\rm T}=p\sin\theta$ and $E_{\rm T}=E\sin\theta$, respectively. Missing transverse energy ($\mbox{${\not\\!\\!E_{\rm T}}$}=|\mbox{${\not\\!\\!\vec{E}_{\rm T}}$}|$) is defined as $\mbox{${\not\\!\\!\vec{E}_{\rm T}}$}=-\sum_{i}E_{\rm T}^{i}{\bf{\hat{n}_{i}}}$ where ${\bf{\hat{n}_{i}}}$ is a unit vector in the transverse plane that points from the beam-line to the $i^{th}$ calorimeter tower.
* Aaltonen et al. (2009a) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 102, 121801 (2009a).
* Abazov et al. (2008) V. M. Abazov et al. (D0 Collaboration), Phys. Lett. B 660, 449 (2008).
* (4) R. Essig, Ph.D. thesis, Rutgers University, 2008.
* (5) C. Seitz, Masters thesis, Rutgers University, 2011, FERMILAB-MASTERS-2011-01.
* R. Chivukula, M. Golden, and E. Simmons (1991a) R. Chivukula, M. Golden, and E. Simmons, Phys. Lett. B257, 403 (1991a).
* R. Chivukula, M. Golden, and E. Simmons (1991b) R. Chivukula, M. Golden, and E. Simmons, Nucl. Phys. B363, 83 (1991b).
* Abulencia et al. (2007) A. Abulencia et al. (CDF Collaboration), J. Phys. G: Nucl. Part. Phys. 34, 245 (2007).
* Aaltonen et al. (2009b) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 103, 221801 (2009b).
* Abe et al. (1992) F. Abe et al. (CDF Collaboration), Phys. Rev. D 45, 1448 (1992).
* Bhatti et al. (2006) A. Bhatti et al., Nucl. Instrum. Methods A 566, 375 (2006).
* bib (b) Note that tracking efficiency drops significantly beyond $|\eta|>1.0$. Jets at large $|\eta|$ do not have associated tracks and have only limited $z$ information.
* Sjöstrand et al. (2001) T. Sjöstrand et al., Comp. Phys. Comm. 135, 238 (2001).
* bib (2003) Nucl. Instrum. Methods A 506, 250 (2003).
* Abulencia et al. (2006) A. Abulencia et al. (CDF Collaboration), Phys. Rev. D 73, 032003 (2006).
* Pumplin et al. (2002) J. Pumplin et al., Nucl. Instrum. Methods A 447, 1 (2002).
* Acosta et al. (2002) D. Acosta et al., Nucl. Instrum. Methods A 494, 57 (2002).
* Mangano et al. (2003) M. L. Mangano et al., J. High Energy Phys. 07, 001 (2003).
* Frixione and Webber (2002) S. Frixione and B. Webber, J. High Energy Phys. 06, 029 (2002).
* (20) W. Beenakker, R. Hoepker, and M. Spira, “PROSPINO: A Program for the Production of Supersymmetric Particles In Next-to-leading Order QCD,” arXiv:hep-ph/9611232.
|
arxiv-papers
| 2011-05-13T19:44:53 |
2024-09-04T02:49:18.759625
|
{
"license": "Public Domain",
"authors": "CDF Collaboration: T. Aaltonen, et al",
"submitter": "Eva Halkiadakis",
"url": "https://arxiv.org/abs/1105.2815"
}
|
1105.2870
|
# A light charged Higgs boson in two-Higgs doublet model for CDF $Wjj$ anomaly
Chuan-Hung Chen1,2, Cheng-Wei Chiang3,4,2,5, Takaaki Nomura3, Fu-Sheng Yu6
1Department of Physics, National Cheng-Kung University, Tainan 701, Republic
of China
2Physics Division, National Center for Theoretical Sciences, Hsinchu 300,
Republic of China
3Department of Physics and Center for Mathematics and Theoretical Physics,
National Central University, Chungli, Taiwan 32001, Republic of China
4Institute of Physics, Academia Sinica, Taipei 11925, Republic of China
5Department of Physics, University of Wisconsin-Madison, Madison, WI 53706,
USA
6Institute of High Energy Physics and Theoretical Physics Center for Science
Facilities, Chinese Academy of Sciences, Beijing 100049, People's Republic of
China
###### Abstract
Motivated by recent anomalous CDF data on $Wjj$ events, we study a possible
explanation within the framework of the two-Higgs doublet model. We find that
a charged Higgs boson of mass $\sim$ 140 GeV with appropriate couplings can
account for the observed excess. In addition, we consider the flavor-changing
neutral current effects induced at loop level by the charged Higgs boson on
the $B$ meson system to further constrain the model. Our study shows that the
like-sign charge asymmetry $A_{s\ell}^{b}$ can be of ${\cal O}(10^{-3})$ in
this scenario.
Recently the CDF Collaboration reported data indicating an excess of $Wjj$
events where $W$ decayed leptonically Aaltonen:2011mk . The excess shows up as
a broad bump between about 120 and 160 GeV in the distribution of dijet
invariant mass $M_{jj}$. This dijet peak can be attributed to a resonance of
mass in that range, and the estimated production cross section times the dijet
branching ratio is about 4 pb. However, no statistically significant deviation
from the standard model (SM) background is found for $Zjj$ events. Events with
b-jets in the excess region have been checked to be consistent with
background. Moreover, the distribution of the invariant mass of the $\ell\nu
jj$ system in the $M_{jj}$ range of 120 to 160 GeV has been examined and
indicates no evidence of a resonance or quasi-resonant behavior. The DØ
Collaboration also performed a similar analysis, but found no excess $Wjj$
events Abazov:2011af . While waiting for further confirmation from the Large
Hadron Collider at CERN for the result of either experiment, it is
nevertheless worth pursuing the cause of the anomaly observed by CDF.
Many papers have discussed different possible explanations for the excess
Isidori:2011dp ; Buckley:2011vc ; Yu:2011cw ; Eichten:2011sh ; Wang:2011uq ;
Cheung:2011zt ; Kilic:2011sr ; AguilarSaavedra:2011zy ; Nelson:2011us ;
He:2011ss ; Sato:2011ui ; Wang:2011ta ; Anchordoqui:2011ag ; Dobrescu:2011px ;
Jung:2011ua ; Buckley:2011vs ; Zhu:2011ww ; Sullivan:2011hu ; Ko:2011ns ;
Plehn:2011nx ; Jung:2011ue ; Chang:2011wj ; Nielsen:2011wz ; Cao:2011yt ;
Babu:2011yw ; Dutta:2011kh ; Huang:2011ph ; Kim:2011xv ; Carpenter:2011yj ;
Segre:2011ka ; Bhattacherjee:2011yh . Most of them try to explain the excess
by introducing one or more additional new physics particles. Some consider
contributions from exchanging vector bosons, such as $Z^{\prime}$ and/or
$W^{\prime}$ bosons Buckley:2011vc ; Yu:2011cw ; Wang:2011uq ; Cheung:2011zt ;
AguilarSaavedra:2011zy ; Wang:2011ta ; Anchordoqui:2011ag ; Jung:2011ua ;
Ko:2011ns ; Buckley:2011vs ; Chang:2011wj ; Kim:2011xv ; Huang:2011ph , and
neutral color-singlet vector boson Jung:2011ue . Some others analyze the
anomaly considering scalar bosons, such as technipion Eichten:2011sh
(including technirho), super-partners of fermions Isidori:2011dp ;
Kilic:2011sr ; Sato:2011ui (fermions in SUSY model are also considered in
Refs. Isidori:2011dp ; Sato:2011ui ), color octet scalar Dobrescu:2011px ;
Carpenter:2011yj , scalars with flavor symmetry Nelson:2011us ; Zhu:2011ww ;
Babu:2011yw , radion Bhattacherjee:2011yh , scalar doublet with no vacuum
expectation value (VEV) Segre:2011ka , and new Higgs bosons Cao:2011yt ;
Dutta:2011kh . In Ref. Dutta:2011kh , the two-Higgs doublet model (THDM) is
discussed with flavor-changing neutral current (FCNC) interactions allowed
through neutral Higgs boson ($H^{0}$ and $A^{0}$) exchanges. Their result
favors a light charged Higgs boson. However, allowing large Yukawa couplings
to leptons in their work has the problem that lepton pairs will be copiously
produced, which is not the case in the CDF data. There are also attempts to
explain this puzzle within SM He:2011ss ; Sullivan:2011hu ; Plehn:2011nx ;
Nielsen:2011wz .
In this letter, we explore another scenario in the THDM as an explanation. The
fact that the excess dijets are non-b-jets suggests that the new resonance may
not couple universally to quarks. A scalar particle can accommodate this
feature more easily than a gauge particle. We show in Fig. 1 two processes in
the THDM that can possibly contribute to the excess events. The dijets come
from the decay of the charged Higgs boson $H^{\pm}$. Since the CDF
Collaboration does not observe any resonance in the invariant mass spectrum of
$\ell\nu jj$ for the excess events, we require the mass of the
pseudoscalar Higgs boson $A^{0}$ to be sufficiently high. In this case, only
Fig. 1(a) is dominant, with the mass of the charged Higgs boson
$m_{H^{\pm}}\sim 140$ GeV, as suggested by data. Moreover, we assume that the
width of $H^{\pm}$ is sufficiently small in comparison with the jet energy
resolution of the experiment. We note that this is only one possible scenario
in the model. Another scenario is that $H^{\pm}$ and $A^{0}$ are interchanged
in Fig. 1, and so are their masses. We also note in passing that the assumed
mass of $\sim 140$ GeV for $H^{\pm}$ or $A^{0}$ is consistent with the lower
bounds of $76.6$ GeV for $H^{\pm}$ Abbiendi:2008be and 65 GeV for $A^{0}$
Abbiendi:2004gn from LEP experiments.
Figure 1: Diagrams contributing to the $Wjj$ events in the two-Higgs doublet
model.
The Yukawa sector of the THDM is given by
$\displaystyle-{\cal L}_{Y}$ $\displaystyle=$
$\displaystyle\bar{Q}_{L}Y^{U}_{1}U_{R}\tilde{H}_{1}+\bar{Q}_{L}Y^{U}_{2}U_{R}\tilde{H}_{2}$
(1) $\displaystyle+$
$\displaystyle\bar{Q}_{L}Y^{D}_{1}D_{R}H_{1}+\bar{Q}_{L}Y^{D}_{2}D_{R}H_{2}+h.c.~{},$
where $H_{1,2}$ are the two Higgs doublet fields with
$\tilde{H}_{k}=i\tau_{2}H^{*}_{k}$, $Q_{L}$ represents left-handed quark
doublets, $U_{R}$ and $D_{R}$ are respectively right-handed up-type and down-
type quarks, and $Y^{U,D}_{1,2}$ are Yukawa couplings. Here we have suppressed
the generation indices. The fields $H_{1}$ and $H_{2}$ can be rotated so that
only one of the two Higgs doublets develops a VEV. Accordingly, the new
doublets are expressed by
$\displaystyle h$ $\displaystyle=$ $\displaystyle\sin\beta H_{1}+\cos\beta
H_{2}=\left(\begin{array}[]{c}G^{+}\\\ (v+h^{0}+iG^{0})/\sqrt{2}\\\
\end{array}\right)~{},$ (4) $\displaystyle H$ $\displaystyle=$
$\displaystyle\cos\beta H_{1}-\sin\beta H_{2}=\left(\begin{array}[]{c}H^{+}\\\
(H^{0}+iA^{0})/\sqrt{2}\\\ \end{array}\right)~{},$ (7)
where $\sin\beta=v_{1}/v$, $\cos\beta=v_{2}/v$,
$v=\sqrt{v^{2}_{1}+v^{2}_{2}}$, $\langle H\rangle=0$, $\langle
h\rangle=v/\sqrt{2}$. In our scenario, we assume that $H^{\pm}$ has mass $\sim
140$ GeV and is responsible for the excess $Wjj$ events observed by CDF. As a
result, Eq. (1) can be rewritten as
$\displaystyle-{\cal L}_{Y}$ $\displaystyle=$
$\displaystyle\bar{Q}_{L}Y^{U}U_{R}\tilde{h}+\bar{Q}_{L}Y^{D}D_{R}h$ (8)
$\displaystyle+$
$\displaystyle\bar{Q}_{L}\tilde{Y}^{U}U_{R}\tilde{H}+\bar{Q}_{L}\tilde{Y}^{D}D_{R}H~{}$
with
$\displaystyle Y^{F}$ $\displaystyle=$ $\displaystyle\sin\beta
Y^{F}_{1}+\cos\beta Y^{F}_{2}\,,$ $\displaystyle\tilde{Y}^{F}$
$\displaystyle=$ $\displaystyle\cos\beta Y^{F}_{1}-\sin\beta Y^{F}_{2}\,,$ (9)
and $F=U,D$. Here, $Y^{F}$ is proportional to the quark mass matrix while
$\tilde{Y}^{F}$ gives the couplings between the heavier neutral and charged
Higgs bosons and the quarks. Clearly, if $Y^{F}$ and $\tilde{Y}^{F}$ cannot be
diagonalized simultaneously, flavor-changing neutral currents (FCNC’s) will be
induced at tree level and associated with the doublet $H$. If we impose some
symmetry to suppress the tree-level FCNC’s, as in type-II THDM, the couplings
of the new Higgs bosons are always proportional to the quark masses. In this
case, the excess $Wjj$ events should be mostly b-flavored, which is against
the observation. To avoid this problem, instead of imposing symmetry, we find
that $Y^{F}$ and $\tilde{Y}^{F}$ can be simultaneously diagonalized if they
are related by some transformation.
To illustrate the desired relationship between $Y^{F}$ and $\tilde{Y}^{F}$, we
first introduce unitary matrices $V^{F}_{L,R}$ to diagonalize $Y^{F}$ in the
following bi-unitary way:
$\displaystyle Y^{\rm dia}_{F}$ $\displaystyle=$ $\displaystyle
V^{F}_{L}Y^{F}V^{F\dagger}_{R}=\mbox{diag}(Y^{1}_{F},Y^{2}_{F},Y^{3}_{F})~{}.$
(10)
Using
$\displaystyle I_{F}=\left(\begin{array}[]{ccc}0&0&a\\\ 0&b&0\\\ c&0&0\\\
\end{array}\right)\,,$ (14)
where $a$, $b$ and $c$ are arbitrary complex numbers, one can easily see that
$\displaystyle\tilde{Y}^{\rm dia}_{F}=I_{F}Y^{\rm
dia}_{F}I^{T}_{F}=\left(\begin{array}[]{ccc}a^{2}Y^{3}_{F}&0&0\\\
0&b^{2}Y^{2}_{F}&0\\\ 0&0&c^{2}Y^{1}_{F}\\\ \end{array}\right)$ (18)
is still diagonal. Now if $\tilde{Y}^{F}$ and $Y^{F}$ are related by
$\displaystyle\tilde{Y}^{F}=\bar{I}^{F}_{L}Y^{F}\tilde{I}^{F}_{R}~{},$ (19)
where $\bar{I}^{F}_{L}=V^{F^{\dagger}}_{L}I_{F}V^{F}_{L}$ and
$\tilde{I}^{F}_{R}=V^{F\dagger}_{R}I^{T}_{F}V^{F}_{R}$, then $\tilde{Y}^{F}$
and $Y^{F}$ can both be diagonalized by $V^{F}_{L,R}$, as can be explicitly
checked using Eq. (10) and the unitarity of $V^{F}_{L(R)}$. We note that the
matrix $I_{F}$ in Eq. (14) is not unique. More complicated examples can be
found in Ref. Ahn:2010zza . Now if the quark mass hierarchy is such that
$Y^{1}_{F}\ll Y^{2}_{F}\ll Y^{3}_{F}$, we see in Eq. (18) that the hierarchy
pattern in $\tilde{Y}^{F}$ can be inverted with suitable choices of $a,b$ and
$c$. We note that since $a,b$ and $c$ are arbitrary complex numbers, all
elements in $\tilde{Y}^{F}$ are also complex in general. As a result, the
couplings between the $H$ doublet and light quarks are not suppressed by their
masses. Moreover, the coupling to $b$ quarks can be suppressed.
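A quick numerical check of this construction, with illustrative (assumed) hierarchical Yukawa eigenvalues: for $I_{F}$ of the anti-diagonal form in Eq. (14), $I_{F}Y^{\rm dia}_{F}I^{T}_{F}$ is again diagonal and the hierarchy of Eq. (18) is inverted for $|a|,|c|$ of order one.

```python
import numpy as np

Y_dia = np.diag([1e-5, 1e-3, 1.0])       # illustrative eigenvalues Y1 << Y2 << Y3

a, b, c = 1.0 + 0.3j, 0.5j, 2.0          # arbitrary complex numbers, as in Eq. (14)
I_F = np.array([[0, 0, a],
                [0, b, 0],
                [c, 0, 0]], dtype=complex)

Y_tilde = I_F @ Y_dia @ I_F.T
print(np.round(Y_tilde, 6))
# off-diagonal entries vanish; the diagonal is (a^2 Y3, b^2 Y2, c^2 Y1), so the
# couplings of the H doublet to the light generations are no longer mass suppressed
```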
To proceed with the analysis, we write down the relevant interactions in terms of
physical eigenstates:
$\displaystyle-{\cal L}_{H^{\pm},H^{0},A^{0}}$ $\displaystyle=$
$\displaystyle\left(\bar{u}_{R}\boldsymbol{\eta}^{U^{\dagger}}u_{L}+\bar{d}_{L}\boldsymbol{\eta}^{D}d_{R}\right)\frac{H^{0}+iA^{0}}{\sqrt{2}}$
(20) $\displaystyle+$
$\displaystyle\left(-\bar{u}_{R}\boldsymbol{\eta}^{U^{\dagger}}{\bf
V}d_{L}+\bar{u}_{L}{\bf V}\boldsymbol{\eta}^{D}d_{R}\right)H^{+}+h.c.\,,$
where ${\bf V}$ is the Cabibbo-Kobayashi-Maskawa (CKM) matrix, and
$\boldsymbol{\eta}^{F}=$ diag$(\eta^{F}_{1},\eta^{F}_{2},\eta^{F}_{3})$
contains three free parameters. For simplicity and illustration purposes, we
will consider two schemes:
(I): $\displaystyle\eta_{i}\equiv\eta_{i}^{U}=\eta_{i}^{D}~{}\mbox{ with
}i=1,2,3~{};$ (21) (II): $\displaystyle\eta^{U}\equiv\eta_{i}^{U}~{}\mbox{ and
}~{}\eta^{D}\equiv\eta_{j}^{D}~{}\mbox{ for }i=1,2,3\mbox{ and }j=1,2~{}.$
(22)
To suppress the coupling with the $b$ quark, we require that
$\eta_{3}\ll\eta_{1,2}$ in Scheme (I) or $\eta^{D}_{3}\ll 1$ in Scheme (II).
In either scheme, we search for the parameter space that can explain the
excess $Wjj$ events, subject to the constraint
$\sigma_{Wjj}\equiv\sigma(p\bar{p}\rightarrow WH^{\pm})BR(H^{\pm}\rightarrow
jj)=4$ pb, as observed by CDF. Moreover, we consider a $25\%$ uncertainty in
the extracted $\sigma_{Wjj}$. In the scenario of a heavy CP-odd Higgs boson,
we ignore the contribution from Fig. 1(b) and consider only the t-channel
Feynman diagram. The contribution of Fig. 1(b) is an order of magnitude smaller
than that of Fig. 1(a) when $m_{A^{0}}\gtrsim 650$ GeV.
We first consider Scheme (I). Due to small parton distribution functions
(PDF’s) associated with charm and strange quarks in the proton (or anti-
proton), we find that $\eta_{2}$ does not play a significant role in
determining $\sigma_{Wjj}$. Therefore, $\sigma_{Wjj}$ mainly depends on
$\eta_{1}$, the coupling between $H^{\pm}$ and quarks of the first generation,
and the hadronic branching ratio of $H^{\pm}$, ${\cal B}_{\rm jj}\equiv
BR(H^{\pm}\rightarrow jj)$. In Fig. 2, we fix $\eta_{2}=0.1$. The red curves
on the $\eta_{1}$-${\cal B}_{\rm jj}$ plane are contours corresponding to
$\sigma_{Wjj}=(4\pm 1)$ pb. In this analysis, we take
$m_{H^{\pm}}=144$ GeV in accordance with the CDF result Aaltonen:2011mk .
In principle, the same t-channel diagram in Fig. 1 can contribute to $Zjj$
events. However, the couplings of the $Z$ boson to charged leptons are more
suppressed than those of the $W$. The blue curves in Fig. 2 are contours of
$\sigma_{Zjj}\equiv\sigma(p\bar{p}\rightarrow ZH^{\pm})BR(H^{\pm}\rightarrow
jj)$ at around 2.6 pb, which is the cross section of the SM background process
$p\bar{p}\rightarrow ZZ+ZW\rightarrow Zjj$. We see that the preferred
parameter region indicated by the red curves has $\sigma_{Zjj}$ well below the
SM background.
Figure 2: Contours of $\sigma_{Wjj}=(4\pm 1)$ pb (thick red curves) and
$\sigma_{Zjj}=(2.6\pm 0.6)$ pb (thin blue curves) for Scheme (I). In this
scenario, we take $m_{H^{\pm}}=144$ GeV and $A^{0}$ is sufficiently heavy. A
$K$ factor of 1.3 is used in computing the cross section.
Using the extracted parameter space, we then compute the total width of
$H^{\pm}$ using the partial width formula
$\Gamma_{q}=\sum_{i,j=1,2}\frac{3}{16\pi}m_{H^{\pm}}|V_{u_{i}d_{j}}|^{2}[(\eta^{U}_{i})^{2}+(\eta^{D}_{j})^{2}]$
(23)
and ${\cal B}_{\rm jj}$. Note that the $b$-quark coupling has been taken to be
zero in the above formula. When ${\cal B}_{\rm jj}\gtrsim 0.8$ for
$\eta_{1}=\eta_{2}$ or 0.7 for $\eta_{1}\gg\eta_{2}$, the total width
$\Gamma_{H^{\pm}}\lesssim 2$ GeV, consistent with our narrow width
approximation. This suggests that the charged Higgs boson couples dominantly to
quarks instead of leptons.
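A small numerical sketch of Eq. (23) for Scheme (I), with approximate CKM magnitudes and illustrative couplings; since the $b$-quark coupling is set to zero, the total width is simply $\Gamma_{q}/{\cal B}_{\rm jj}$.

```python
from math import pi

m_H = 144.0                                    # GeV, charged Higgs mass used in the text
V = {(1, 1): 0.974, (1, 2): 0.225,             # |V_ud|, |V_us| (approximate)
     (2, 1): 0.225, (2, 2): 0.973}             # |V_cd|, |V_cs| (approximate)

def gamma_jj(eta_u, eta_d):
    """Hadronic width of H+- into first/second-generation quarks, Eq. (23)."""
    return sum(3.0 / (16.0 * pi) * m_H * V[(i, j)] ** 2 * (eta_u[i] ** 2 + eta_d[j] ** 2)
               for i in (1, 2) for j in (1, 2))

eta = {1: 0.2, 2: 0.2}                         # illustrative Scheme (I) couplings
gamma_q = gamma_jj(eta, eta)
print(gamma_q, gamma_q / 0.8)                  # partial width and total width (B_jj = 0.8), in GeV
```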
We now consider two cases in Scheme (II): (a) $\eta^{D}=\eta^{U}$ and (b)
$\eta^{D}=0.1\eta^{U}$. The independent parameters are then $\eta^{U}$ and
${\cal B}_{\rm jj}$. Plots in Fig. 3 show that it is preferred to have
$\eta^{D}<\eta^{U}$ because it helps suppress $Zjj$ production. Likewise,
when ${\cal B}_{\rm jj}\gtrsim 0.8$, the total width $\Gamma_{H^{\pm}}\lesssim
2$ GeV, again consistent with our narrow width approximation.
Figure 3: Same as Fig. 2 but for Scheme (II). Results for $\eta^{D}=\eta^{U}$
are shown in plot (a), and results for $\eta^{D}=0.1\eta^{U}$ are shown in
plot (b).
(a) (b)
We note in passing that one can also consider the scenario where the roles of
$H^{\pm}$ and $A^{0}$ are interchanged, with the former being heavy and the
latter having a mass of $144$ GeV. However, the parameter region for
explaining the $Wjj$ events predicts a $Zjj$ rate very close to the SM
background in Scheme (I), as shown in Fig. 4(a). In Scheme (II), null
deviations of $Zjj$ and b-jets disfavor the small and large $\eta^{D}$
regions, respectively, as shown in Fig. 4(b). Therefore, in comparison, the
previous scenario with a light charged Higgs boson and a heavy CP-odd Higgs boson
is favored. We will thus exclusively consider such a scenario in the following
analysis of low-energy constraints.
Figure 4: Contours of $\sigma(p\bar{p}\rightarrow WA^{0})Br(A^{0}\rightarrow
jj)=(4\pm 1)$ pb (thick red curves) and $\sigma(p\bar{p}\rightarrow
ZA^{0})Br(A^{0}\rightarrow jj)=(2.6\pm 0.6)$ pb (thin blue curves) for Scheme
(I) in plot (a) and Scheme (II) in plot (b). In this scenario, we take
$m_{A^{0}}=144$ GeV and $H^{\pm}$ is sufficiently heavy. A $K$ factor of 1.3
is used in computing the cross section.
(a) (b)
If the charged Higgs boson is a candidate for the new resonance, it will also
induce interesting phenomena in low-energy systems, where the same parameters
are involved. We find that the most interesting processes are the $B\to
X_{s}\gamma$ decay and the like-sign charge asymmetry (CA) in semileptonic
$B_{q}$ ($q=d,s$) decays. To simplify our presentation, we leave detailed
formulas in Appendix A. Using the interactions in Eq. (20), the effective
Hamiltonians for the $b\to s\gamma$ and $\Delta B=2$ processes induced by
$H^{\pm}$, as shown in Fig. 5, are respectively given by
$\displaystyle{\cal H}_{b\to s\gamma}$ $\displaystyle=$
$\displaystyle\frac{V^{*}_{ts}V_{tb}|\eta^{U}_{3}|^{2}}{16m^{2}_{H^{\pm}}}\left(Q_{t}I_{1}(y_{t})+I_{2}(y_{t})\right){\cal
O}_{7\gamma}$ $\displaystyle+$
$\displaystyle\frac{V^{*}_{ts}V_{tb}\eta^{D^{*}}_{2}\eta^{U^{*}}_{3}}{8m^{2}_{H^{\pm}}}\frac{m_{t}}{m_{b}}\left(Q_{t}J_{1}(y_{t})+J_{2}(y_{t})\right){\cal
O}^{\prime}_{7\gamma}\,,$ $\displaystyle{\cal H}(\Delta B=2)$ $\displaystyle=$
$\displaystyle\frac{\left(V^{*}_{tq}V_{tb}|\eta^{U}_{3}|^{2}\right)^{2}}{4(4\pi)^{2}m^{2}_{H^{\pm}}}I_{3}(y_{t})\bar{q}\gamma_{\mu}P_{L}b\bar{q}\gamma^{\mu}P_{L}b$
(24) $\displaystyle-$
$\displaystyle\frac{\left(V^{*}_{tq}V_{tb}\eta^{D^{*}}_{2}\eta^{U^{*}}_{3}\right)^{2}}{2(4\pi)^{2}m^{2}_{H^{\pm}}}y_{t}J_{3}(y_{t})\left(\bar{q}P_{L}b\right)^{2}~{},$
where $y_{t}\equiv m^{2}_{t}/m^{2}_{H^{\pm}}$, ${\cal O}_{7\gamma}$ and ${\cal
O}^{\prime}_{7\gamma}$ are defined in the Appendix, $Q_{t}=2/3$ is the top-
quark electric charge, and
$\displaystyle I_{1}(a)$ $\displaystyle=$
$\displaystyle\frac{2+5a-a^{2}}{6(1-a)^{3}}+\frac{a\ln a}{(1-a)^{4}}\,,$
$\displaystyle J_{1}(a)$ $\displaystyle=$
$\displaystyle\frac{3-a}{(1-a)^{2}}+\frac{\ln a}{(1-a)^{3}}\,,$ $\displaystyle
I_{2}(a)$ $\displaystyle=$
$\displaystyle\frac{1-5a-2a^{2}}{6(1-a)^{3}}-\frac{a^{2}\ln a}{(1-a)^{4}}\,,$
$\displaystyle J_{2}(a)$ $\displaystyle=$
$\displaystyle\frac{1+a}{2(1-a)^{2}}+\frac{a\ln a}{(1-a)^{3}}\,,$
$\displaystyle I_{3}(a)$ $\displaystyle=$
$\displaystyle\frac{1+a}{2(1-a)^{2}}+\frac{a\ln a}{(1-a)^{3}}\,,$
$\displaystyle J_{3}(a)$ $\displaystyle=$
$\displaystyle-\frac{2}{(1-a)^{2}}-\frac{1+a}{(1-a)^{3}}\ln a\,.$ (25)
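The loop functions of Eq. (25) transcribe directly into code for numerical evaluation (note that $I_{3}$ and $J_{2}$ coincide as written above):

```python
from math import log

def I1(a): return (2 + 5*a - a**2) / (6 * (1 - a)**3) + a * log(a) / (1 - a)**4
def I2(a): return (1 - 5*a - 2*a**2) / (6 * (1 - a)**3) - a**2 * log(a) / (1 - a)**4
def I3(a): return (1 + a) / (2 * (1 - a)**2) + a * log(a) / (1 - a)**3
def J1(a): return (3 - a) / (1 - a)**2 + log(a) / (1 - a)**3
def J2(a): return (1 + a) / (2 * (1 - a)**2) + a * log(a) / (1 - a)**3
def J3(a): return -2.0 / (1 - a)**2 - (1 + a) / (1 - a)**3 * log(a)

y_t = (172.5 / 144.0) ** 2   # y_t = m_t^2 / m_H^2 for m_t = 172.5 GeV, m_H = 144 GeV (assumed inputs)
print(I1(y_t), I2(y_t), J1(y_t), J2(y_t), I3(y_t), J3(y_t))
```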
Figure 5: (a) $b\to s\gamma$ transition, where the dot indicates another
possible place to attach the photon and (b) $M^{q}_{12}$ with $q=d,s$ induced
by $H^{\pm}$.
Using the hadronic matrix elements defined by
$\displaystyle\langle
B_{q}|\bar{q}\gamma_{\mu}P_{L}b\bar{q}\gamma^{\mu}P_{L}b|\bar{B}_{q}\rangle$
$\displaystyle=$
$\displaystyle\frac{1}{3}m_{B_{q}}f^{2}_{B_{q}}\hat{B}_{q}\,,$
$\displaystyle\langle B_{q}|\bar{q}P_{L}b\bar{q}P_{L}b|\bar{B}_{q}\rangle$
$\displaystyle\approx$
$\displaystyle-\frac{5}{24}\left(\frac{m^{2}_{B_{q}}}{m_{b}+m_{q}}\right)^{2}m_{B_{q}}f^{2}_{B_{q}}\hat{B}_{q}~{},$
(26)
and the formulas given in the Appendix, the dispersive part of
$B_{q}$-$\overline{B}_{q}$ mixing is found to be
$\displaystyle M^{q}_{12}$ $\displaystyle=$ $\displaystyle
M^{q,SM}_{12}+M^{q,H}_{12}=M^{q,SM}_{12}\Delta_{q}e^{i\phi^{\Delta}_{q}}\,,$
(27)
where
$\displaystyle\Delta_{q}$ $\displaystyle=$
$\displaystyle\left(1+(R^{q}_{H})^{2}+2R^{q}_{H}\cos
2\theta^{q}_{H}\right)^{1/2}\,,$ $\displaystyle\theta^{q}_{H}$
$\displaystyle=$ $\displaystyle
\mbox{arg}\left(\frac{M^{q,H}_{12}}{M^{q,SM}_{12}}\right)\,,\ \ \
R^{q}_{H}=\left|\frac{M^{q,H}_{12}}{M^{q,SM}_{12}}\right|\,,$
$\displaystyle\tan\phi^{\Delta}_{q}$ $\displaystyle=$
$\displaystyle\frac{R^{q}_{H}\sin 2\theta^{q}_{H}}{1+R^{q}_{H}\cos
2\theta^{q}_{H}}\,.$ (28)
Since the charged Higgs boson is heavier than the $W$ boson, its influence on
$\Gamma^{s}_{12}$ is expected to be insignificant. Therefore, we set
$\Gamma^{q}_{12}\approx\Gamma^{q,SM}_{12}$ in our analysis. Using
$\phi_{q}=\mbox{arg}(-M^{q}_{12}/\Gamma^{q}_{12})$, the $H^{\pm}$-mediated
wrong-sign CA defined in Eq. (35) is given by
$\displaystyle a^{q}_{s\ell}(H^{\pm})$ $\displaystyle=$
$\displaystyle\frac{1}{\Delta_{q}}\frac{\sin\phi_{q}}{\sin\phi^{\rm
SM}_{q}}a^{q}_{s\ell}({\rm SM})~{},$ (29)
with $\phi^{\rm SM}_{q}=-2\beta_{q}-\gamma^{\rm SM}_{q}$ Lenz:2011ti and
$\phi_{q}=\phi^{\rm SM}_{q}+\phi^{\Delta}_{q}$. Consequently, the like-sign CA
in Eq. (36) reads $A^{b}_{s\ell}\approx
0.506\,a^{d}_{s\ell}(H^{\pm})+0.494\,a^{s}_{s\ell}(H^{\pm})$, where the SM
contributions are $a^{s}_{s\ell}(\rm SM)\approx 1.9\times 10^{-5}$ and
$a^{d}_{s\ell}(\rm SM)\approx-4.1\times 10^{-4}$ Lenz:2011ti ; Lenz:2006hd .
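For orientation, combining these SM wrong-sign asymmetries with the weights in Eq. (36) gives the SM expectation for the like-sign CA:

```python
a_d_sm, a_s_sm = -4.1e-4, 1.9e-5           # SM wrong-sign asymmetries quoted above
A_b_sm = 0.506 * a_d_sm + 0.494 * a_s_sm
print(f"A_b_sl(SM) ~ {A_b_sm:.1e}")        # about -2.0e-4, an order of magnitude below 10^-3
```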
In the following, we numerically study the charged Higgs contributions to the
$B\to X_{s}\gamma$ decay. In Scheme (I), as
$\eta_{3}=\eta^{U}_{3}=\eta^{D}_{3}\ll 1$ is assumed, it is clear that their
contributions to the $B$ decay are small. Thus, we concentrate on the analysis
of Scheme (II). Using Eqs. (24) and (31), the $H^{\pm}$-mediated Wilson
coefficients for $b\to s\gamma$ are given by
$\displaystyle\delta C_{7}$ $\displaystyle=$
$\displaystyle-\frac{|\eta^{U}|^{2}}{8\sqrt{2}m^{2}_{H^{\pm}}G_{F}}\left(Q_{t}I_{1}(y_{t})+I_{2}(y_{t})\right)\,,$
$\displaystyle\delta C^{\prime}_{7}$ $\displaystyle=$
$\displaystyle-\frac{\eta^{D^{*}}\eta^{U^{*}}}{4\sqrt{2}m^{2}_{H^{\pm}}G_{F}}\frac{m_{t}}{m_{b}}\left(Q_{t}J_{1}(y_{t})+J_{2}(y_{t})\right)\,.$
(30)
Here the enhancement mainly comes from the large $\eta^{U}$ coupling. Taking
Eq. (33) and setting $\eta^{D}=\rho\eta^{U}$ and
$\phi_{H}=\mbox{arg}(\eta^{D^{*}}\eta^{U^{*}})$, one can calculate the
branching ratio of $B\to X_{s}\gamma$ as a function of $\eta^{U}$ and
$\phi_{H}$. The $2\sigma$ range of experimental measurement ${\cal B}(B\to
X_{s}\gamma)=(3.55\pm 0.26)\times 10^{-4}$ TheHeavyFlavorAveragingGroup:2010qj
requires the two parameters to lie within the shaded bands in Fig. 6, where plots
(a) and (b) use $\rho=1$ and $0.5$, respectively. The results show that ${\cal
B}(B\to X_{s}\gamma)$ is insensitive to the new phase $\phi_{H}$ and that the
allowed range of $\eta^{U}$ is compatible with the above analysis for the
$Wjj$ events. In addition, we also show in Fig. 6 the constraint from measured
$\Delta m_{B_{d}}$ (dashed blue curves). We only take into account $\Delta
m_{B_{d}}$ here simply because the measurement $\Delta m_{B_{d}}=0.507\pm
0.005$ ps-1 is more precise and thus stringent than $\Delta m_{B_{s}}=17.78\pm
0.12$ ps-1. It is observed that the measurement of $\Delta m_{B_{d}}$ further
excludes some of the parameter space allowed by the $B\to X_{s}\gamma$ decay.
Finally, we superimpose contours of the like-sign CA (solid red curves) in
Fig. 6. The like-sign CA has a strong dependence on the value of $\rho$. When
$\rho\sim O(1)$, $A^{b}_{s\ell}$ can be of the order of $10^{-3}$. However, it
drops close to the SM prediction when $\rho\sim O(0.1)$.
We now comment on the constraints from $K$-$\overline{K}$ and
$D$-$\overline{D}$ mixings. In the usual THDM, contributions from box diagrams
involving the charged Higgs bosons to the mass difference are important
because the charged Higgs couplings to quarks are proportional to their
masses. Therefore, the Glashow-Iliopoulos-Maiani (GIM) mechanism
Glashow:1970gm is not effective to suppress such new physics effects
Abbott:1979dt . In the scenarios considered in this work, the
$H^{\pm}qq^{\prime}$ couplings are simply proportional to the CKM matrix
elements. Therefore, the box diagrams involving the charged Higgs boson will
have GIM cancellation in the approximation that the masses of quarks in the
first two generations are negligible. Although the third generation fermions
do not have GIM cancellation, the associated CKM matrix elements are much
suppressed. In addition, the new effective operators thus induced will be
further suppressed by powers of $m_{W}/m_{H^{\pm}}$. For example, with
$m_{H^{\pm}}=140$ GeV, $\rho=1$, $\eta^{U}=0.4$ and the dispersive part of
$K$-$\overline{K}$ mixing given in Eq. (42), we obtain $\Delta m_{K}\sim
1.58\times 10^{-17}$ GeV, which is two orders of magnitude smaller than the
current measurement, $(\Delta m_{K})^{\rm exp}=(3.483\pm 0.006)\times
10^{-15}$ GeV PDG08 . We note that unlike the conventional THDM, where the
diagrams with one $W^{\pm}$ and one $H^{\pm}$ in the loop are important K , in
Scheme (II) of our model the GIM mechanism is very effective in the massless
limit of the first two generations of fermions. This has to do with the fact
that the charged Higgs couplings to these quarks are independent of quark
masses. The contributions from diagrams with the top-quark loop are also
negligible due to the suppression of the small CKM matrix elements
$(V_{ts}V^{*}_{td})^{2}$, as in the conventional THDM with small $\tan\beta$.
The relevant formulas for $K$-$\overline{K}$ mixing from these contributions
are given by Eqs. (42) and (A). The constraint from $D$-$\overline{D}$ mixing
is even weaker in view of current measurements delAmoSanchez:2010xz and the
fact that new physics contributions are both GIM and doubly Cabibbo
suppressed.
(a) (b)
Figure 6: Contours of $A^{b}_{s\ell}$ (in units of $10^{-4}$) on the
$\eta^{U}$-$\phi_{H}$ plane, where the shaded band and the dashed curves show
the constraints from the measured ${\cal B}(B\to X_{s}\gamma)$ and $\Delta
m_{B_{d}}$, respectively, within their $2\sigma$ errors.
In summary, we have studied a scenario of the two-Higgs doublet model as a
possible explanation for the excess $Wjj$ events observed by the CDF
Collaboration. In this scenario, the charged Higgs boson has a mass of about
$144$ GeV and decays into the dijets. We find that both Scheme (I) and Scheme
(II) considered in this work can explain the $Wjj$ anomaly while not upsetting
the constraints of $Zjj$ and b-jets being consistent with standard model
expectations. When applying the scenario to low-energy $B$ meson phenomena, we
find that very little constraint can be imposed on Scheme (I) as $\eta_{3}$
couplings to the third generation quarks are assumed to be negligible. Scheme
(II), on the other hand, has constraints from the $B\to X_{s}\gamma$ decay and
$\Delta m_{B_{d}}$. In particular, we find that if $\eta^{D}$ for the first
two generations is of the same order of magnitude as $\eta^{U}$, it is
possible to obtain $A_{s\ell}^{b}\sim{\cal O}(10^{-3})$. Constraints from
$K$-$\overline{K}$ and $D$-$\overline{D}$ mixings are found to be loose
primarily due to the GIM cancellation.
## Acknowledgments
F. S. Y. would like to thank Prof. Cai-Dian Lu and Dr. Xiang-Dong Gao for
useful discussions. C. H. C. was supported by NSC Grant No.
NSC-97-2112-M-006-001-MY3. C. W. C. and T. N. were supported in part by
NSC-97-2112-M-001-004-MY3 and 97-2112-M-008-002-MY3. The work of F. S. Y. was
supported in part by National Natural Science Foundation of China under the
Grant Nos. 10735080 and 11075168 and National Basic Research Program of China
(973) No. 2010CB833000.
## Appendix A $B\to X_{s}\gamma$ and like-sign CA in semileptonic $B_{q}$
decays
For the $B\to X_{s}\gamma$ decay, the effective Hamiltonian is
$\displaystyle{\cal H}_{b\to s\gamma}$ $\displaystyle=$
$\displaystyle-\frac{G_{F}}{\sqrt{2}}V^{*}_{ts}V_{tb}\left(C_{7}(\mu)O_{7\gamma}+C^{\prime}_{7}(\mu)O^{\prime}_{7\gamma}\right)$
(31)
with
$\displaystyle O_{7\gamma}$ $\displaystyle=$
$\displaystyle\frac{em_{b}}{8\pi^{2}}\bar{s}\sigma_{\mu\nu}(1+\gamma_{5})bF^{\mu\nu}\,,$
$\displaystyle O^{\prime}_{7\gamma}$ $\displaystyle=$
$\displaystyle\frac{em_{b}}{8\pi^{2}}\bar{s}\sigma_{\mu\nu}(1-\gamma_{5})bF^{\mu\nu}\,.$
(32)
The branching ratio is given by bsga
$\displaystyle{\cal B}(B\to X_{s}\gamma)_{E_{\gamma}>1.6GeV}$ $\displaystyle=$
$\displaystyle\left[a_{00}+a_{77}\left(|\delta C_{7}|^{2}+|\delta
C^{\prime}_{7}|^{2}\right)\right.$ (33) $\displaystyle+$
$\displaystyle\left.a_{07}Re(\delta C_{7})+a^{\prime}_{07}Re(\delta
C^{\prime}_{7})\right]\times 10^{-4}~{},$
where $a_{00}=3.15\pm 0.23$, $a_{07}=-14.81$, $a_{77}=16.68$, and
$a^{\prime}_{07}=-0.23$. The parameters $\delta C_{7}=C^{NP}_{7}$ and $\delta
C^{\prime}_{7}=C^{\prime\,NP}_{7}$ stand for the new physics contributions.
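For scans over the new-physics parameters, Eq. (33) can be evaluated directly; the sketch below takes complex $\delta C_{7}$ and $\delta C^{\prime}_{7}$ as inputs and uses the coefficients quoted above.

```python
def br_b_to_s_gamma(dC7, dC7p):
    """BR(B -> X_s gamma), E_gamma > 1.6 GeV, from Eq. (33); dC7 and dC7p may be complex."""
    a00, a77, a07, a07p = 3.15, 16.68, -14.81, -0.23
    br = (a00 + a77 * (abs(dC7) ** 2 + abs(dC7p) ** 2)
          + a07 * complex(dC7).real + a07p * complex(dC7p).real)
    return br * 1e-4

print(br_b_to_s_gamma(0.0, 0.0))   # vanishing new physics: ~3.15e-4, vs. the measured (3.55 +- 0.26)e-4
```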
To understand the like-sign CA, we start with a discussion of relevant
phenomena. In the strong interaction eigenbasis, the Hamiltonian for unstable
$\bar{B}_{q}$ and $B_{q}$ states is
$\displaystyle{\bf H}={\bf M^{q}}-i\frac{{\bf\Gamma^{q}}}{2}\,,$ (34)
where $\bf\Gamma^{q}$ ($\bf M^{q}$) denotes the absorptive (dispersive) part
of the $\overline{B}_{q}\leftrightarrow B_{q}$ transition. Accordingly, the
time-dependent wrong-sign CA in semileptonic $B_{q}$ decays is defined and
given PDG08 by
$\displaystyle a^{q}_{s\ell}$ $\displaystyle\equiv$
$\displaystyle\frac{\Gamma(\bar{B}_{q}(t)\to\ell^{+}X)-\Gamma(B_{q}(t)\to\ell^{-}X)}{\Gamma(\bar{B}_{q}(t)\to\ell^{+}X)+\Gamma(B_{q}(t)\to\ell^{-}X)}\,,$
(35) $\displaystyle\approx$ $\displaystyle{\rm
Im}\left(\frac{\Gamma^{q}_{12}}{M^{q}_{12}}\right)\,.$
Here, the assumption of $\Gamma^{q}_{12}\ll M^{q}_{12}$ in the $B_{q}$ system
has been used. Intriguingly, $a^{q}_{s\ell}$ is actually not a time-dependent
quantity. The relation between the wrong and like-sign CAs is defined and
expressed by Abazov:2010hv ; Grossman:2006ce
$\displaystyle A^{b}_{s\ell}$ $\displaystyle=$
$\displaystyle\frac{\Gamma(b\bar{b}\to\ell^{+}\ell^{+}X)-\Gamma(b\bar{b}\to\ell^{-}\ell^{-}X)}{\Gamma(b\bar{b}\to\ell^{+}\ell^{+}X)+\Gamma(b\bar{b}\to\ell^{-}\ell^{-}X)}\,,$
(36) $\displaystyle=$ $\displaystyle
0.506(43)a^{d}_{s\ell}+0.494(43)a^{s}_{s\ell}\,.$
Clearly, the like-sign CA is associated with the wrong-sign CA’s of the
$B_{d}$ and $B_{s}$ systems. Since the direct measurements of $a^{d}_{s\ell}$
and $a^{s}_{s\ell}$ are still quite imprecise, either $b\to d$ or $b\to s$
transition or both can be the source of the unexpectedly large $A^{b}_{s\ell}$
observed experimentally.
In order to explore new physics effects, we parameterize the transition matrix
elements as
$\displaystyle M^{q}_{12}$ $\displaystyle=$ $\displaystyle M^{q,{\rm
SM}}_{12}\Delta^{M}_{q}e^{i\phi^{\Delta}_{q}}\,,$
$\displaystyle\Gamma^{q}_{12}$ $\displaystyle=$ $\displaystyle\Gamma^{q,{\rm
SM}}_{12}\Delta^{\Gamma}_{q}e^{i\gamma^{\Delta}_{q}}~{},$ (37)
for $q=d,s$, where
$\displaystyle M^{q,{\rm SM}[{\rm NP}]}_{12}$ $\displaystyle=$
$\displaystyle\left|M^{q,{\rm SM}[{\rm
NP}]}_{12}\right|e^{2i\bar{\beta}_{q}[\theta^{{\rm NP}}_{q}]}\,,\ \
\Gamma^{q,{\rm SM}}_{12}=\left|\Gamma^{q,{\rm SM}[{\rm
NP}]}_{12}\right|e^{i\gamma^{{\rm SM}[{\rm NP}]}_{q}}\,,$
$\displaystyle\Delta^{M}_{q}$ $\displaystyle=$
$\displaystyle\left|1+r^{M}_{q}e^{2i(\theta^{{\rm
NP}}_{q}-\bar{\beta}_{q})}\right|\,,\ \ r^{M}_{q}=\frac{|M^{q,{\rm
NP}}_{12}|}{|M^{q,{\rm SM}}_{12}|}\,,$ $\displaystyle\Delta^{\Gamma}_{q}$
$\displaystyle=$ $\displaystyle\left|1+r^{\Gamma}_{q}e^{i(\gamma^{{\rm
NP}}_{q}-\gamma^{\rm SM}_{q})}\right|\,,\ \
r^{\Gamma}_{q}=\frac{|\Gamma^{q,{\rm NP}}_{12}|}{|\Gamma^{q,{\rm
SM}}_{12}|}\,,$ $\displaystyle\tan\phi^{\Delta}_{q}$ $\displaystyle=$
$\displaystyle\frac{r^{M}_{q}\sin 2(\theta^{{\rm
NP}}_{q}-\bar{\beta}_{q})}{1+r^{M}_{q}\cos 2(\theta^{{\rm
NP}}_{q}-\bar{\beta}_{q})}\,,\ \
\tan\gamma^{\Delta}_{q}=\frac{r^{\Gamma}_{q}\sin(\gamma^{{\rm
NP}}_{q}-\gamma^{\rm SM}_{q})}{1-r^{\Gamma}_{q}\cos(\gamma^{{\rm
NP}}_{q}-\gamma^{\rm SM}_{q})}\,.$ (38)
Here, the SM contribution is
$\displaystyle M^{q,SM}_{12}$ $\displaystyle=$
$\displaystyle\frac{G^{2}_{F}m^{2}_{W}}{12\pi^{2}}\eta_{B}m_{B_{q}}f^{2}_{B_{q}}\hat{B}_{q}(V^{*}_{tq}V_{tb})^{2}S_{0}(x_{t})~{},$
(39)
with $S_{0}(x_{t})=0.784x_{t}^{0.76}$, $x_{t}=(m_{t}/m_{W})^{2}$ and
$\eta_{B}\approx 0.55$ being the QCD correction to $S_{0}(x_{t})$. The phases
appearing in Eq. (37) are CP-violating phases. Note that
$\bar{\beta}_{d}=\beta_{d}$ and $\bar{\beta}_{s}=-\beta_{s}$. Using
$\phi_{q}=\mbox{arg}(-M^{q}_{12}/\Gamma^{q}_{12})$, the wrong-sign CA in Eq.
(35) with new physics effects on $\Gamma^{q}_{12}$ and $M^{q}_{12}$ can be
derived as
$\displaystyle a^{q}_{s\ell}=\frac{\Delta^{\Gamma}_{q}}{\Delta^{M}_{q}}\,\frac{\sin\phi_{q}}{\sin\phi^{\rm SM}_{q}}\,a^{q}_{s\ell}({\rm SM})$ (40)
with $\phi^{\rm SM}_{q}=2\bar{\beta}_{q}-\gamma^{\rm SM}_{q}$ and
$\phi_{q}=\phi^{\rm SM}_{q}+\phi^{\Delta}_{q}-\gamma^{\Delta}_{q}$.
Furthermore, the mass and rate differences between the heavy and light $B$
mesons are given by
$\displaystyle\Delta m_{B_{q}}=2|M^{q}_{12}|\,,\qquad \Delta\Gamma^{q}=\Gamma_{L}-\Gamma_{H}=2|\Gamma^{q}_{12}|\cos\phi_{q}\,.$ (41)
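To make the interplay of Eqs. (36), (40) and (41) concrete, the following Python sketch evaluates the wrong-sign and like-sign asymmetries; all SM asymmetries, phases and new-physics parameters below are purely illustrative placeholders, not measured or fitted values.

```python
import numpy as np

def wrong_sign_asymmetry(asl_sm, phi_sm, delta_m, delta_gamma, phi_delta, gamma_delta):
    """Eq. (40): a_sl^q = (Delta^Gamma_q / Delta^M_q) * sin(phi_q)/sin(phi_q^SM) * a_sl^q(SM),
    with phi_q = phi_q^SM + phi^Delta_q - gamma^Delta_q."""
    phi_q = phi_sm + phi_delta - gamma_delta
    return (delta_gamma / delta_m) * np.sin(phi_q) / np.sin(phi_sm) * asl_sm

# Purely illustrative placeholder inputs (not measured or fitted values).
asl_d_sm, phi_d_sm = -4.8e-4, -0.085   # SM wrong-sign CA and phase, B_d system
asl_s_sm, phi_s_sm = 2.1e-5, 0.0042    # SM wrong-sign CA and phase, B_s system

asl_d = wrong_sign_asymmetry(asl_d_sm, phi_d_sm, delta_m=1.1, delta_gamma=1.0,
                             phi_delta=-0.3, gamma_delta=0.0)
asl_s = wrong_sign_asymmetry(asl_s_sm, phi_s_sm, delta_m=0.9, delta_gamma=1.0,
                             phi_delta=-0.5, gamma_delta=0.0)

# Eq. (36): the like-sign CA is a weighted sum of the two wrong-sign CAs.
A_b_sl = 0.506 * asl_d + 0.494 * asl_s
print(f"a_sl^d = {asl_d:.3e}, a_sl^s = {asl_s:.3e}, A_sl^b = {A_b_sl:.3e}")
```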
As a comparison, we consider the new physics effect on $K-\bar{K}$ mixing due
to the box diagram with the top quark and the charged Higgs boson in the loop.
This is seen to be the major contribution as other diagrams involving lighter
quarks are GIM suppressed in the massless limit or are smaller even when mass
effects are taken into account. The result for $M_{12}$ from the diagram with both intermediate bosons being charged Higgs bosons is
$\displaystyle M^{K,HH}_{12}\approx\frac{m_{K}f^{2}_{K}(V^{*}_{td}V_{ts})^{2}}{12(4\pi)^{2}m^{2}_{H}}\left\{\left(|\eta^{D}|^{4}+|\eta^{U}|^{4}-\frac{5}{2}|\eta^{D}|^{2}|\eta^{U}|^{2}\right)I_{3}(y_{t})\right.$
$\displaystyle\left.+4y_{t}J_{3}(y_{t})\left[\frac{8}{5}\left(\frac{m_{K}}{m_{s}+m_{d}}\right)^{2}Re(\eta^{D^{*}}\eta^{U})^{2}-\left(\frac{1}{8}+\frac{3}{4}\left(\frac{m_{K}}{m_{s}+m_{d}}\right)^{2}\right)|\eta^{D}|^{2}|\eta^{U}|^{2}\right]\right\}\,.$ (42)
The contribution from the diagram with one $W$ boson and one charged Higgs
boson in the loop is
$\displaystyle M^{K,WH}_{12}\approx\frac{G_{F}f^{2}_{K}m_{K}}{12\sqrt{2}\pi^{2}}\left(V^{*}_{td}V_{ts}\right)^{2}\left[-\left(\frac{1}{4}+\frac{3}{2}\left(\frac{m_{K}}{m_{s}+m_{d}}\right)^{2}\right)\left|\eta^{D}\right|^{2}K_{1}(x_{t},x_{H})+\frac{m^{2}_{t}}{m^{2}_{W}}\left|\eta^{U}\right|^{2}K_{2}(x_{t},x_{H})\right]\,,$
where $x_{t}=m^{2}_{t}/m^{2}_{W}$, $x_{H}=m^{2}_{H^{\pm}}/m^{2}_{W}$ and
$\displaystyle K_{n}(a,b)=\int^{1}_{0}dx_{1}\int^{x_{1}}_{0}dx_{2}\,\frac{x_{2}}{(1+(b-1)x_{1}+(a-b)x_{2})^{n}}\,.$ (43)
Hadronic effects have already been included in Eq. (42) and in the expression for $M^{K,WH}_{12}$ above.
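The loop function of Eq. (43) has no simple closed form for general $n$, but it is straightforward to evaluate numerically; a minimal sketch using SciPy's dblquad, with placeholder mass ratios, is given below.

```python
from scipy.integrate import dblquad

def K(n, a, b):
    """Eq. (43): K_n(a, b) = int_0^1 dx1 int_0^{x1} dx2  x2 / (1 + (b-1) x1 + (a-b) x2)^n."""
    integrand = lambda x2, x1: x2 / (1.0 + (b - 1.0) * x1 + (a - b) * x2) ** n
    # dblquad integrates the inner variable x2 over [0, x1] and the outer x1 over [0, 1].
    value, _err = dblquad(integrand, 0.0, 1.0, lambda x1: 0.0, lambda x1: x1)
    return value

# Placeholder mass ratios: x_t = (m_t/m_W)^2 and x_H = (m_H/m_W)^2.
x_t, x_H = (173.0 / 80.4) ** 2, (300.0 / 80.4) ** 2
print(K(1, x_t, x_H), K(2, x_t, x_H))
```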
## References
* (1) T. Aaltonen et al. [ CDF Collaboration ], Phys. Rev. Lett. 106, 171801 (2011). [arXiv:1104.0699 [hep-ex]].
* (2) V. M. Abazov et al. [D0 Collaboration], Phys. Rev. Lett. 107, 011804 (2011) [arXiv:1106.1921 [hep-ex]].
* (3) G. Isidori, J. F. Kamenik, [arXiv:1103.0016 [hep-ph]].
* (4) M. R. Buckley, D. Hooper, J. Kopp, E. Neil, [arXiv:1103.6035 [hep-ph]].
* (5) F. Yu, [arXiv:1104.0243 [hep-ph]].
* (6) E. J. Eichten, K. Lane and A. Martin, [arXiv:1104.0976 [hep-ph]].
* (7) C. Kilic and S. Thomas, [arXiv:1104.1002 [hep-ph]].
* (8) X. P. M. Wang, Y. K. M. Wang, B. Xiao, J. Xu and S. h. Zhu, [arXiv:1104.1161 [hep-ph]].
* (9) K. Cheung, J. Song, [arXiv:1104.1375 [hep-ph]].
* (10) J. A. Aguilar-Saavedra and M. Perez-Victoria, [arXiv:1104.1385 [hep-ph]].
* (11) X. G. He and B. Q. Ma, [arXiv:1104.1894 [hep-ph]].
* (12) X. P. Wang, Y. K. Wang, B. Xiao, J. Xu and S. h. Zhu, [arXiv:1104.1917 [hep-ph]].
* (13) R. Sato, S. Shirai and K. Yonekura, arXiv:1104.2014 [hep-ph].
* (14) A. E. Nelson, T. Okui and T. S. Roy, [arXiv:1104.2030 [hep-ph]].
* (15) L. A. Anchordoqui, H. Goldberg, X. Huang, D. Lust and T. R. Taylor, [arXiv:1104.2302 [hep-ph]].
* (16) B. A. Dobrescu and G. Z. Krnjaic, [arXiv:1104.2893 [hep-ph]].
* (17) S. Jung, A. Pierce and J. D. Wells, [arXiv:1104.3139 [hep-ph]].
* (18) M. Buckley, P. F. Perez, D. Hooper and E. Neil, [arXiv:1104.3145 [hep-ph]].
* (19) G. Zhu, [arXiv:1104.3227 [hep-ph]].
* (20) Z. Sullivan and A. Menon, [arXiv:1104.3790 [hep-ph]].
* (21) P. Ko, Y. Omura and C. Yu, [arXiv:1104.4066 [hep-ph]].
* (22) T. Plehn and M. Takeuchi, [arXiv:1104.4087 [hep-ph]].
* (23) D. W. Jung, P. Ko and J. S. Lee, [arXiv:1104.4443 [hep-ph]].
* (24) S. Chang, K. Y. Lee and J. Song, [arXiv:1104.4560 [hep-ph]].
* (25) H. B. Nielsen, [arXiv:1104.4642 [hep-ph]].
* (26) B. Bhattacherjee, S. Raychaudhuri, [arXiv:1104.4749 [hep-ph]].
* (27) Q. H. Cao, M. Carena, S. Gori, A. Menon, P. Schwaller, C. E. M. Wagner and L. T. Wang, [arXiv:1104.4776 [hep-ph]].
* (28) K. S. Babu, M. Frank, and S. K. Raia, [arXiv:1104.4782 [hep-ph]].
* (29) B. Dutta, S. Khalil, Y. Mimura, Q. Shafi, [arXiv:1104.5209[hep-ph]].
* (30) X. Huang, [arXiv:1104.5389 [hep-ph]].
* (31) J. E. Kim, S. Shin, [arXiv:1104.5500 [hep-ph]].
* (32) L. M. Carpenter, S. Mantry, [arXiv:1104.5528 [hep-ph]].
* (33) G. Segrè and B. Kayser, [arXiv:1105.1808 [hep-ph]].
* (34) Y. H. Ahn and C. H. Chen, Phys. Lett. B 690, 57 (2010) [arXiv:1002.4216 [hep-ph]].
* (35) C. Amsler et al. [Particle Data Group], Phys. Lett. B667, 1 (2008).
* (36) V. M. Abazov et al. [D0 Collaboration], Phys. Rev. D 82, 032001 (2010) [arXiv:1005.2757 [hep-ex]].
* (37) Y. Grossman, Y. Nir and G. Raz, Phys. Rev. Lett. 97, 151801 (2006) [arXiv:hep-ph/0605028].
* (38) A. Lenz and U. Nierste, arXiv:1102.4274 [hep-ph].
* (39) A. Lenz and U. Nierste, JHEP 0706, 072 (2007) [arXiv:hep-ph/0612167]; A. J. Lenz, AIP Conf. Proc. 1026, 36 (2008) [arXiv:0802.0977 [hep-ph]].
* (40) M. Misiak et al., Phys. Rev. Lett. 98, 022002 (2007) [arXiv:hep-ph/0609232]; M. Misiak and M. Steinhauser, Nucl. Phys. B 840, 271 (2010) [arXiv:1005.1173 [hep-ph]]; S. Descotes-Genon, D. Ghosh, J. Matias and M. Ramon, arXiv:1104.3342 [hep-ph].
* (41) G. Abbiendi et al. [OPAL Collaboration], Euro. Phys. J C40, 317 (2006) [arXiv:hep-ex/0602042].
* (42) G. Abbiendi et al. [OPAL Collaboration], Submitted to Euro. Phys. J [arXiv:0812.0267 [hep-ex]].
* (43) The Heavy Flavor Averaging Group et al., arXiv:1010.1589 [hep-ex].
* (44) S. L. Glashow, J. Iliopoulos and L. Maiani, Phys. Rev. D 2, 1285 (1970).
* (45) M. Bona et al. [UTfit Collaboration], JHEP 0803, 049 (2008) [arXiv:0707.0636 [hep-ph]]; J. L. Evans, B. Feldstein, W. Klemm, H. Murayama and T. T. Yanagida, Phys. Lett. B 703, 599 (2011) [arXiv:1106.1734 [hep-ph]].
* (46) L. F. Abbott, P. Sikivie and M. B. Wise, Phys. Rev. D 21, 1393 (1980).
* (47) P. del Amo Sanchez et al. [The BABAR Collaboration], Phys. Rev. Lett. 105, 081803 (2010) [arXiv:1004.5053 [hep-ex]].
|
arxiv-papers
| 2011-05-14T07:45:33 |
2024-09-04T02:49:18.768912
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Chuan-Hung Chen, Cheng-Wei Chiang, Takaaki Nomura, Yu Fusheng",
"submitter": "Chuan Hung Chen",
"url": "https://arxiv.org/abs/1105.2870"
}
|
1105.2968
|
# Banking retail consumer finance data generator - credit scoring data
repository
Karol Przanowski
email: kprzan@interia.pl
url: http://kprzan.w.interia.pl
###### Abstract
This paper presents two cases of a random banking data generator based on
migration matrices and scoring rules. Such a data generator offers a promising
route towards a rigorous method for comparing various credit scoring
techniques. The influence of one cyclic macro-economic variable on the
stability over time of account and client characteristics is analyzed. The
data are very useful for various analyses aimed at a better understanding of
the complexity of banking processes, and also for students and their research.
Interesting conclusions about crisis behavior are presented, namely that if a
crisis is driven by many factors, covering both application and behavioral
customer characteristics, then it is very difficult to identify these factors
with typical scoring analysis, and the crisis appears everywhere, in every
kind of risk report.
Key words: credit scoring, crisis, banking data generator, retail portfolio.
###### Contents
1. 1 Introduction
2. 2 Detailed description of data generator
1. 2.1 The main options
2. 2.2 Production dataset
3. 2.3 Transaction dataset
4. 2.4 Inserting the Production dataset into the Transaction dataset
5. 2.5 Analytical Base Table – ABT dataset
6. 2.6 Migration matrix adjustment
7. 2.7 Iteration step
8. 2.8 Default definition
9. 2.9 Portfolio segmentation and risk measures
3. 3 General theory
1. 3.1 The main assumption and definition
2. 3.2 Open questions
4. 4 Two case studies
1. 4.1 Common parameters
2. 4.2 The first case study – unstable application characteristic – APP
3. 4.3 The second case study – unstable behavioral characteristic – BEH
4. 4.4 Stability problem
5. 4.5 Various types of risk measures
6. 4.6 Implementation
5. 5 Conclusions
## 1 Introduction
Predictive models, and credit scoring models in particular, are currently very
popular in the management of banking processes [1]. Risk scorecards are
routinely used in the credit acceptance process to optimize and control risk.
Various forms of behavioral scorecards are also used to manage repeat business
and to build PD models for Basel RWA (Risk Weighted Assets) calculations [2].
It is remarkable that a list of about ten account or client characteristics
can predict their future behavior, their style of payments and their
delinquency.
One may say that it is a trivial fact that scorecards are useful and the
methodology is well known; on the other hand, credit scoring can still be
developed further and new techniques should be tested. The main problem today
is that no general framework exists for testing new methods and techniques,
and there is no accepted way of proving their correctness. Many good articles
are based on one particular case study, on a single example of real data
coming from one or a few banks [3], [4] and [5]. From a theoretical point of
view, even when good results and sound arguments are presented to suggest
choosing one method over another, the difference is demonstrated only on that
particular data set; nobody can prove it for other data, and nobody can
guarantee correctness in all cases.
There are also other important reasons why real banking data are not globally
available and cannot be used by every analyst, such as legal constraints or
products that are too new and have too short a data history. These factors
suggest looking for a different approach to testing predictive models in
banking applications.
It is a good idea to develop two parallel approaches: one based on real data
and one based on randomly simulated data. Even though the second cannot
replace real data, it is very useful for understanding relations among various
factors in the data, for appreciating the complexity of the process, and as an
attempt to create a more general class of semi-real data.
Consider some advantages of randomly generated data:
* 1.
Today many analysts try to understand and analyze the last crisis [6]; among
other things, they develop methods for identifying sub-portfolios whose risk
is stable over time. The topic is not easy and cannot be solved by typical
predictive models based on a target variable, as in the case of default risk.
The notion of stability cannot be defined for a particular account or client;
one cannot say that a single account is stable, only a set of accounts can be
tested, so the technique must be developed by a method quite different from
typical predictive modeling with a target variable. A simple conclusion
follows: the more accounts, the more robust the stability testing. With a
random data generator, various scenarios can be tested to see and better
understand the problem.
* 2.
Scoring challenges or scoring "Olympic games". From time to time, various
communities organize contests to find good modelers or to test new techniques.
Sometimes the data come from a case that is "too real": some real processes
are not predictable because they are influenced by many unmeasurable factors.
Even if scoring models are used in practice in such cases, it is not a good
idea to use those data for a contest. The best and fairest solution is to use
a random data generator whose process is directly predictable.
* 3.
Reject inference [4]. This topic still needs development. Random data can also
be generated for cases that would be rejected in reality, so they can be used
for better estimation of risk in blank areas and for gaining experience.
* 4.
Today there are two or more techniques for scorecard building [7]. Comparisons
and analyses are needed to define recommendations: where and under what
conditions one method should be preferred over another. The same applies to
different variable selection methods.
* 5.
Product profitability, bad debts and cut-offs. All these notions can be tested
on random data, broadening the analyst's experience.
* 6.
Random data can also be an important factor in data standardization and
auditing. Imagine that ready-to-run software tools for MIS (Management
Information Systems) and KPI (Key Performance Indicators) reporting are
prepared on a generic data structure, initially loaded with random data. Then
auditing any other data set reduces to just the data upload process.
Simulated data are used in many areas; for example, they are very useful in
research on telecommunication networks with systems like OPNET [8]. Simulated
data have also been developed in the banking area by [9] and [10].
The simplest retail consumer finance portfolio is a portfolio of fixed
installment loans. Here the process can be simplified by the following
assumptions:
* •
for all accounts a single due date in the middle of the month is defined (the 15th of every month),
* •
every client has only one credit,
* •
a client can pay exactly one installment, a few installments, or nothing; there are only two events: payment or missing payment,
* •
delinquency is measured on the end-of-month state by the number of due installments,
* •
all customer and account properties are randomly generated from appropriately defined distributions,
* •
if the number of due installments reaches 7 (180 days past due), the process stops and the account is marked with the bad status; subsequent collection steps are omitted,
* •
if the number of paid installments reaches the total number of installments, the process stops and the account is marked with the closed status,
* •
payments or missing payments are determined by three factors: a score calculated from account characteristics, the migration matrix, and an adjustment of that matrix by one cyclic macro-economic variable,
* •
the score is calculated separately for every due-installments group. In a more general case a different score can be defined for every status: due installments 0, 1, …, and 6.
It is worth emphasizing that risk management today has very good tools for
risk control: even though the crisis came and was not correctly predicted, it
could be detected very quickly. Migration matrix reporting seems to be the
best of these risk control tools.
The goal of this paper can also be formulated in the following way: to create
random data under the condition that typical reporting (migration matrices,
flow rates or roll rates, and vintage or default rates) gives the same results
as observed in reality.
## 2 Detailed description of data generator
### 2.1 The main options
All data are generated from the starting date $T_{s}$ to the ending date $T_{e}$.
The migration matrix $M_{ij}$ (transition matrix) is defined as the fraction
of accounts that move, after one month, from $i$ due installments to $j$ due
installments.
There is one macro-economic variable depending only on time, $E(m)$, where $m$
is the number of months elapsed since $T_{s}$. It should satisfy the simple
condition $0.01<E(m)<0.9$, because it is used to adjust the migration matrix
and therefore influences the risk: in some months it produces slightly higher
risk and in other months lower risk.
### 2.2 Production dataset
The first dataset contains all applications with all available customer
characteristics and credit properties.
Customer characteristics (application data):
* •
Birthday - $T_{Birth}$ – with the distribution $D_{Birth}$
* •
Income – $x^{a}_{Income}$ – $D_{Income}$
* •
Spending - $x^{a}_{Spending}$ – $D_{Spending}$
* •
Four nominal characteristics – $x^{a}_{Nom_{1}},...,x^{a}_{Nom_{4}}$ –
$D_{Nom_{1}},D_{Nom_{2}},...,D_{Nom_{4}}$, in practice they can represent
variables like: job category, marital status, home status, education level, or
others.
* •
Four interval characteristics - $x^{a}_{Int_{1}},...,x^{a}_{Int_{4}}$ –
$D_{Int_{1}},D_{Int_{2}},...,D_{Int_{4}}$, represent variables like: job
seniority, personal account seniority, number of households, housing spending
or others.
Credit properties (loan data):
* •
Installment amount – $x^{l}_{Inst}$ – with the distribution $D_{Inst}$
* •
Number of installments – $x^{l}_{N_{inst}}$ – $D_{N_{inst}}$
* •
Loan amount – $x^{l}_{Amount}=x^{l}_{Inst}\cdot x^{l}_{N_{inst}}$
* •
Date of application (year, month) – $T_{app}$
* •
Id of application
The number of rows per month is generated based on the distribution
$D_{Applications}$.
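As an illustration, the Production dataset of this subsection could be generated with a few lines of Python; the distributions below are simple placeholders standing in for $D_{Income}$, $D_{Inst}$, etc., and the concrete choices of section 4.1 can be substituted directly.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

def generate_production_month(t_app: str, n_apps: int) -> pd.DataFrame:
    """One month of applications: customer characteristics plus credit properties."""
    income = rng.integers(500, 10_000, size=n_apps)                         # stands in for D_Income
    spending = (income * np.abs(rng.normal(size=n_apps)) / 4).astype(int)   # D_Spending
    installment = (income * np.abs(rng.normal(size=n_apps)) / 4).astype(int) + 1
    n_inst = np.clip((30 * np.abs(rng.normal(size=n_apps)) / 4 + 6).astype(int), 6, None)
    df = pd.DataFrame({
        "app_id": np.arange(n_apps),
        "t_app": t_app,
        "income": income,
        "spending": spending,
        "installment": installment,            # x^l_Inst
        "n_installments": n_inst,              # x^l_N_inst
        "loan_amount": installment * n_inst,   # x^l_Amount = x^l_Inst * x^l_N_inst
        "age": np.clip(rng.normal(45, 12, size=n_apps).astype(int), 18, 75),
    })
    # Four nominal and four interval characteristics with placeholder distributions.
    for i in range(1, 5):
        df[f"nom_{i}"] = rng.integers(0, 5, size=n_apps)
        df[f"int_{i}"] = 10 * rng.random(size=n_apps)
    return df

production = generate_production_month("1970-01", n_apps=9_000)
print(production.head())
```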
### 2.3 Transaction dataset
Every row contains the following information (transaction data):
* •
Id of application
* •
Date of application (year, month) – $T_{app}$
* •
Current month – $T_{cur}$
* •
Number of due installments (number of missing payments) – $x^{t}_{n_{due}}$
* •
Number of paid installments – $x^{t}_{n_{paid}}$
* •
Status – $x^{t}_{status}$ – Active (A) when the loan is still being repaid, Closed (C) when it is fully paid, or Bad (B) when $x^{t}_{n_{due}}=7$
* •
Pay days – $x^{t}_{days}$ – the number of days, within the interval $[-15,15]$ before or after the due date in the current month, on which the payment was made; if the payment is missing, pay days is also missing.
### 2.4 Inserting the Production dataset into the Transaction dataset
Every month of the Production dataset updates the Transaction dataset with the
following formulas:
$T_{cur}=T_{app},\hskip 14.22636ptx^{t}_{n_{due}}=0,\hskip
14.22636ptx^{t}_{n_{paid}}=0,\hskip 14.22636ptx^{t}_{status}=A,\hskip
14.22636ptx^{t}_{days}=0.$
It is the process of inserting starting points of new accounts.
### 2.5 Analytical Base Table – ABT dataset
The payment history of every account depends on behavioral data, that is, on
the behavior of previous payments; this is, of course, an assumption of the
data generator.
There are many ways to create behavioral characteristics. Here a simple method
is presented: the last available states are considered and their evolution in
time is measured. All data are prepared in ABT datasets; the notion of
Analytical Base Table is used by the SAS Credit Scoring Solution [11].
Let the current date $T_{cur}$ be fixed. Actual states are calculated for that
date by the formulas (actual data):
$\displaystyle x^{act}_{days}=$ $\displaystyle x^{t}_{days}+15,$
$\displaystyle x^{act}_{n_{paid}}=$ $\displaystyle x^{t}_{n_{paid}},$
$\displaystyle x^{act}_{n_{due}}=$ $\displaystyle x^{t}_{n_{due}},$
$\displaystyle x^{act}_{utl}=$ $\displaystyle
x^{t}_{n_{paid}}/x^{l}_{N_{inst}},$ $\displaystyle x^{act}_{dueutl}=$
$\displaystyle x^{t}_{n_{due}}/x^{l}_{N_{inst}},$ $\displaystyle
x^{act}_{age}=$ $\displaystyle years(T_{Birth},T_{cur}),$ $\displaystyle
x^{act}_{capacity}=$
$\displaystyle(x^{l}_{Inst}+x^{a}_{Spending})/x^{a}_{Income},$ $\displaystyle
x^{act}_{dueinc}=$ $\displaystyle(x^{t}_{n_{due}}\cdot
x^{l}_{Inst})/x^{a}_{Income},$ $\displaystyle x^{act}_{loaninc}=$
$\displaystyle x^{l}_{Amount}/x^{a}_{Income},$ $\displaystyle
x^{act}_{seniority}=$ $\displaystyle T_{cur}-T_{app}+1,$
where $years()$ calculates the difference between two dates in years.
Consider two time series of pay days and due installments for up to eleven
months back from the fixed current date, defined by the formulas:
$\displaystyle x^{act}_{days}(m)=$ $\displaystyle x^{act}_{days}(T_{cur}-m),$
$\displaystyle x^{act}_{n_{due}}(m)=$ $\displaystyle
x^{act}_{n_{due}}(T_{cur}-m),$
where $m=0,1,...,11$.
The characteristics describing the evolution in time can be calculated by the
following formulas.
If every element of the time series for the last $t$ months is available, then
(behavioral data):
$\displaystyle x^{beh}_{days}(t)=$
$\displaystyle(\sum_{m=0}^{t-1}x^{act}_{days}(m))/t,$ $\displaystyle
x^{beh}_{n_{due}}(t)=$
$\displaystyle(\sum_{m=0}^{t-1}x^{act}_{n_{due}}(m))/t,$
where $t=3,6,9,12$.
If not all elements of the time series are available, then (missing-value
imputation):
$\displaystyle x^{beh}_{days}(t)=$ $\displaystyle 15,$ $\displaystyle
x^{beh}_{n_{due}}(t)=$ $\displaystyle 2.$ (2.1)
In other words, the behavioral variables represent average states over the
last 3, 6, 9 or 12 months. The user can easily add many other variables by
replacing the average with another statistic such as MAX or MIN.
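A minimal Python sketch of the ABT construction for a single account is given below, including the missing-value imputation of formula 2.1; the function and column names are illustrative, not part of the generator's specification.

```python
import numpy as np

def actual_variables(days, n_paid, n_due, n_inst, inst, spending, income,
                     loan_amount, age_years, seniority):
    """Actual-state ABT variables for one account at a fixed T_cur."""
    return {
        "days": days + 15,
        "n_paid": n_paid,
        "n_due": n_due,
        "utl": n_paid / n_inst,
        "dueutl": n_due / n_inst,
        "age": age_years,
        "capacity": (inst + spending) / income,
        "dueinc": n_due * inst / income,
        "loaninc": loan_amount / income,
        "seniority": seniority,
    }

def behavioral_variables(days_series, n_due_series):
    """Averages over the last t = 3, 6, 9, 12 monthly states (most recent first);
    if the history is too short, the imputation of formula 2.1 is used."""
    beh = {}
    for t in (3, 6, 9, 12):
        if len(days_series) >= t and len(n_due_series) >= t:
            beh[f"days_{t}"] = float(np.mean(days_series[:t]))
            beh[f"n_due_{t}"] = float(np.mean(n_due_series[:t]))
        else:                                  # missing-value imputation, formula 2.1
            beh[f"days_{t}"], beh[f"n_due_{t}"] = 15.0, 2.0
    return beh

# Example: an account with only 5 months of history.
print(behavioral_variables(days_series=[12, 14, 15, 10, 13],
                           n_due_series=[0, 0, 1, 0, 0]))
```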
### 2.6 Migration matrix adjustment
The macro-economic variable $E(m)$ influences the migration matrix through the
formula:
$M^{adj}_{ij}=\left\{\begin{array}[]{ll}M_{ij}(1-E(m))&\textrm{for }j\leq i,\\ M_{ij}&\textrm{for }j>i+1,\\ M_{ij}+\sum_{k=0}^{i}E(m)M_{ik}&\textrm{for }j=i+1.\end{array}\right.$
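The adjustment can be implemented in a few lines; the following Python sketch applies the formula above to each row of a small illustrative matrix and checks that the rows still sum to one.

```python
import numpy as np

def adjust_migration_matrix(M: np.ndarray, e_m: float) -> np.ndarray:
    """Apply the adjustment above row by row: a fraction E(m) of the mass in
    columns j <= i is shifted to column j = i + 1."""
    M_adj = M.astype(float).copy()
    for i in range(M.shape[0]):
        shifted = e_m * M[i, : i + 1].sum()      # mass taken from columns j <= i
        M_adj[i, : i + 1] *= (1.0 - e_m)
        M_adj[i, i + 1] += shifted               # added to the "one more due installment" column
    return M_adj

# Small illustrative matrix (rows i = 0..2, columns j = 0..3), rows sum to 1.
M = np.array([[0.85, 0.15, 0.00, 0.00],
              [0.25, 0.45, 0.30, 0.00],
              [0.04, 0.24, 0.19, 0.53]])
M_adj = adjust_migration_matrix(M, e_m=0.1)
print(M_adj)
print(M_adj.sum(axis=1))   # rows still sum to 1
```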
### 2.7 Iteration step
This step generates the next month of transactions, from $T_{cur}$ to
$T_{cur}+1$. In every month some accounts are new; for them the Transaction
dataset is simply updated as described in subsection 2.4. Some accounts change
their status according to
$x^{t}_{status}=\left\{\begin{array}[]{ll}C&\textrm{when }x^{act}_{n_{paid}}=x^{l}_{N_{inst}},\\ B&\textrm{when }x^{act}_{n_{due}}=7,\end{array}\right.$
and these accounts are not continued in subsequent months.
For the other active accounts, the event in the next month, payment or missing
payment, is generated. It is based on two scores:
$\displaystyle Score_{Main}=\sum_{\alpha}\beta^{a}_{\alpha}x^{a}_{\alpha}+\sum_{\gamma}\beta^{l}_{\gamma}x^{l}_{\gamma}+\sum_{\delta}\beta^{act}_{\delta}x^{act}_{\delta}+\sum_{\eta}\sum_{t}\beta^{beh}_{\eta}(t)x^{beh}_{\eta}(t)+\beta_{r}\varepsilon+\beta_{0},$ (2.2)
$\displaystyle Score_{Cycle}=\sum_{\alpha}\phi^{a}_{\alpha}x^{a}_{\alpha}+\sum_{\gamma}\phi^{l}_{\gamma}x^{l}_{\gamma}+\sum_{\delta}\phi^{act}_{\delta}x^{act}_{\delta}+\sum_{\eta}\sum_{t}\phi^{beh}_{\eta}(t)x^{beh}_{\eta}(t)+\phi_{r}\epsilon+\phi_{0},$ (2.3)
where $t=3,6,9,12$,
$\alpha=Income,Spending,Nom_{1},...,Nom_{4},Int_{1},...,Int_{4}$,
$\gamma=Inst,N_{Inst},Amount$, $\eta=days,n_{due}$,
$\delta=days,n_{paid},n_{due},utl,dueutl,$
$age,capacity,dueinc,loaninc,seniority$, $\varepsilon$ and $\epsilon$ are
taken from the standardized normal distribution $N$.
Consider the following migration matrix:
$M^{act}_{ij}=\left\{\begin{array}[]{ll}M^{adj}_{ij}&\textrm{when }Score_{Cycle}\leq\textrm{Cutoff},\\ M_{ij}&\textrm{when }Score_{Cycle}>\textrm{Cutoff},\end{array}\right.$
where Cutoff is another parameter, like all the $\beta$'s and $\phi$'s.
For fixed $T_{cur}$ and fixed $x^{act}_{n_{due}}=i$, all active accounts can be
segmented by $Score_{Main}$ so that the groups have the same proportions as the
corresponding elements of the migration matrix $M^{act}_{ij}$: the first group
$g=0$, containing the highest scores, has a share equal to $M^{act}_{i0}$, the
second group $g=1$ has share $M^{act}_{i1}$, …, and the last group $g=7$ has
share $M^{act}_{i7}$.
For a particular account assigned to group $g$, a payment is made in month
$T_{cur}+1$ when $g\leq i$; otherwise the payment is missing.
For a missing payment the Transaction dataset is updated with:
$x^{t}_{n_{paid}}=x^{act}_{n_{paid}},$ $x^{t}_{n_{due}}=g,$
$x^{t}_{days}=\textrm{Missing}.$
For a payment, the update is:
$x^{t}_{n_{paid}}=\min(x^{act}_{n_{paid}}+x^{act}_{n_{due}}-g+1,x^{l}_{N_{inst}}),$
$x^{t}_{n_{due}}=g,$
and $x^{t}_{days}$ is generated from the distribution $D_{days}$.
The described steps are repeated for all months between $T_{s}$ and $T_{e}$.
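The core of the iteration step, segmenting active accounts by $Score_{Main}$ into groups whose shares follow one row of $M^{act}$ and then updating payments, can be sketched in Python as follows; the data structures are simplified placeholders.

```python
import numpy as np

def assign_groups(scores: np.ndarray, row_shares: np.ndarray) -> np.ndarray:
    """Rank accounts by Score_Main (highest first) and split them into groups
    g = 0, 1, ... whose sizes follow the shares of one row of M^act."""
    order = np.argsort(-scores)                      # highest score first
    bounds = np.round(np.cumsum(row_shares) * len(scores)).astype(int)
    groups = np.empty(len(scores), dtype=int)
    start = 0
    for g, end in enumerate(bounds):
        groups[order[start:end]] = g
        start = end
    groups[order[start:]] = len(row_shares) - 1      # numerical remainder, if any
    return groups

def update_account(n_paid, n_due_old, n_inst, g):
    """Payment occurs when g <= i (the old number of due installments)."""
    if g <= n_due_old:   # payment, possibly catching up several installments
        n_paid = min(n_paid + n_due_old - g + 1, n_inst)
    return n_paid, g     # the new number of due installments is g in both cases

# Example: 10 active accounts, all currently with i = 1 due installment.
rng = np.random.default_rng(1)
scores = rng.normal(size=10)
row_i1 = np.array([0.25, 0.45, 0.30])                # M^act[i=1, j=0..2]
print(assign_groups(scores, row_i1))
print(update_account(n_paid=4, n_due_old=1, n_inst=12, g=0))   # pays and catches up
```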
### 2.8 Default definition
Default is a typical credit scoring and Basel II notion. Every account is
tested from the observation point $T_{cur}$ over an outcome period of 3, 6, 9
or 12 months. During that time the maximal number of due installments is
analyzed, namely
$\textrm{MAX}=\textrm{MAX}_{m=0}^{t-1}(x^{act}_{n_{due}}(T_{cur}+m)),$
where $t=3,6,9,12$. Depending on the value of MAX, three default statuses
$\textrm{Default}_{t}$ are defined:
Good: when $\textrm{MAX}\leq 1$, or when $x^{t}_{status}=C$ occurred during the
outcome period.
Bad: when $\textrm{MAX}>3$, or when $x^{t}_{status}=B$ occurred during the
outcome period; for $t=3$ the condition is $\textrm{MAX}>2$.
Indeterminate: in all other cases.
The existence of the Indeterminate status can be questioned; in some analyses
only two statuses are preferred, for example in Basel II. This is also a good
topic for further research, which can be pursued with the data generator
described in this paper.
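A small Python sketch of the default definition is given below; the account history is passed as a list of monthly states and the thresholds follow the rules above.

```python
def default_status(n_due_path, status_path, t):
    """Good / Bad / Indeterminate over an outcome period of t months (t in {3, 6, 9, 12})."""
    max_due = max(n_due_path[:t])
    bad_threshold = 2 if t == 3 else 3
    if max_due <= 1 or "C" in status_path[:t]:
        return "Good"
    if max_due > bad_threshold or "B" in status_path[:t]:
        return "Bad"
    return "Indeterminate"

# Example: an account that slips to 2 due installments and then recovers.
n_due = [0, 0, 1, 1, 2, 2, 1, 0, 0]
status = ["A"] * 9
print(default_status(n_due, status, t=3))   # Good (MAX <= 1 in the first 3 months)
print(default_status(n_due, status, t=9))   # Indeterminate (MAX = 2, never Bad)
```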
### 2.9 Portfolio segmentation and risk measures
Typically, credit scoring is used to control the following sub-portfolios or
processes:
Acceptance process – APP portfolio: the set of all starting points of credits,
where it is decided which applications are accepted or rejected. The
acceptance sub-portfolio is defined as the set of rows of the Transaction
dataset satisfying $T_{cur}=T_{app}$. Every account belongs to that set only
once.
Cross/up-sell process – BEH portfolio: the set of all accounts with a history
longer than 2 months and in good condition (no delinquency). The cross/up-sell,
or behavioral, sub-portfolio is defined as the set of rows of the Transaction
dataset satisfying $x^{act}_{seniority}>2$ and $x^{act}_{n_{due}}=0$. Every
account can belong to that set many times.
Collection process – COL portfolio: the set of all accounts with delinquency,
but at the beginning of the collection process. The collection sub-portfolio
is defined as the set of rows of the Transaction dataset satisfying
$x^{act}_{n_{due}}=1$. Every account can belong to that set many times.
For every sub-portfolio one can calculate and test risk measures called bad
rates, defined as the share of Bad statuses for every observation point and
outcome period.
In reality the definitions of these sub-portfolios can be more complex; the
simplest versions are suggested here for further analysis of the case studies
presented in section 4.
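As an illustration, the sub-portfolio filters and the bad-rate measure can be expressed as simple pandas queries on an ABT-like table; the column names below are placeholders.

```python
import pandas as pd

def sub_portfolios(abt: pd.DataFrame) -> dict:
    """APP / BEH / COL sub-portfolios as row filters on ABT-like data."""
    return {
        "APP": abt[abt["t_cur"] == abt["t_app"]],                   # acceptance process
        "BEH": abt[(abt["seniority"] > 2) & (abt["n_due"] == 0)],   # cross/up-sell process
        "COL": abt[abt["n_due"] == 1],                              # start of collection
    }

def bad_rates(part: pd.DataFrame, default_col: str = "default_9") -> pd.Series:
    """Share of Bad statuses per observation point for one sub-portfolio."""
    return (part[default_col] == "Bad").groupby(part["t_cur"]).mean()

# Tiny illustrative ABT table.
abt = pd.DataFrame({
    "t_cur":     ["1970-03"] * 4 + ["1970-04"] * 4,
    "t_app":     ["1970-03", "1970-01", "1970-01", "1970-02"] * 2,
    "seniority": [1, 3, 3, 2, 2, 4, 4, 3],
    "n_due":     [0, 0, 1, 0, 0, 0, 1, 2],
    "default_9": ["Good", "Good", "Bad", "Good", "Good", "Bad", "Bad", "Bad"],
})
for name, part in sub_portfolios(abt).items():
    print(name, bad_rates(part).to_dict())
```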
## 3 General theory
### 3.1 The main assumption and definition
Definition. The layout
$(T_{s},T_{e},M_{ij},E(m),\beta^{a}_{\alpha},\beta^{l}_{\gamma},\beta^{act}_{\delta},\beta^{beh}_{\eta}(t),\beta_{r},\beta_{0},$
$\phi^{a}_{\alpha},\phi^{l}_{\gamma},\phi^{act}_{\delta},\phi^{beh}_{\eta}(t),\phi_{r},\phi_{0},\varepsilon,\epsilon,D_{Birth},D_{\alpha},D_{\gamma},D_{Applications},D_{days},\textrm{Cutoff})$
with all the rules, symbols, relations and processes described in section 2 is
called the Retail Consumer Finance Data Generator for fixed installment loans,
abbreviated RCFDG.
Theorem (assumption). Every consumer finance portfolio of fixed installment
loans can be approximated by the RCFDG.
The proof of this theorem can always be completed thanks to the terms
$\beta_{r}\varepsilon$ and $\phi_{r}\epsilon$ in formulas 2.2 and 2.3. From an
empirical point of view credit scoring is always used in portfolio control, so
the theorem holds; the real problem is the goodness of fit. The theory is
still too young to define good measures of fit, but it is a proper starting
point for further development of a general theory of consumer finance
portfolios.
Similar ideas and research are presented in [3].
### 3.2 Open questions
The next steps will probably concentrate on:
* •
Finding correct goodness-of-fit statistics measuring the distance between a real consumer finance portfolio and the RCFDG, and testing the properties of those statistics.
* •
Analyzing additional constraints needed to satisfy properties such as: the predictive power of the characteristic $x^{beh}_{days}(3)$ on $\textrm{Default}_{6}$, measured for example by the Gini coefficient [12], should equal $40\%$.
* •
Creating a more general case with all collection processes, more than one credit per customer, more than one macro-economic factor, and other detailed issues.
* •
Analyzing various existing real consumer finance portfolios and finding the set of parameters describing each of them. A principal component analysis (PCA) of all consumer finance portfolios in a particular country, or in the world, could then be developed.
* •
Defining a generalization of the notion of consumer finance portfolio that contains almost all properties of real portfolios.
* •
Using that generalized notion in research on the development of scoring methods, as a general framework for proving methods. For example, the statement that scoring models built on $\textrm{Default}_{3}$ and on $\textrm{Default}_{12}$ produce the same results could be settled by the additional condition that the betas for $t=3$ and $t=12$ should be similar. It is very probable that future research will discover many properties of, and relations among, the betas, the coefficients of the migration matrix and their consequences.
## 4 Two case studies
### 4.1 Common parameters
All random numbers are based on two typical random generators: the uniform
distribution $U$ and the standardized normal distribution $N$; in detail, $U$
returns a number from the interval $(0,1)$ with equal probability.
All common coefficients are the following: $T_{s}=$ 1970.01 (January 1970),
$T_{e}=$ 1976.12 (December 1976),
$M_{ij}=\left[\begin{array}[]{ccccccccc}&j=0&j=1&j=2&j=3&j=4&j=5&j=6&j=7\\ i=0&0.850&0.150&0.000&0.000&0.000&0.000&0.000&0.000\\ i=1&0.250&0.450&0.300&0.000&0.000&0.000&0.000&0.000\\ i=2&0.040&0.240&0.190&0.530&0.000&0.000&0.000&0.000\\ i=3&0.005&0.025&0.080&0.100&0.790&0.000&0.000&0.000\\ i=4&0.000&0.000&0.010&0.080&0.090&0.820&0.000&0.000\\ i=5&0.000&0.000&0.000&0.000&0.020&0.030&0.950&0.000\\ i=6&0.000&0.000&0.000&0.000&0.000&0.010&0.010&0.980\end{array}\right],$
$E(m)=0.01+(1.5+\sin((5\cdot\pi\cdot m)/(T_{e}-T_{s}))+N/5)/8$,
$D_{Applications}=300\cdot 30\cdot(1+N/20)$; if $T_{app}$ is December then
$D_{Applications}$ is multiplied by $1.2$. To define $D_{Birth}$, the age
distribution is defined first: $D_{Age}=(75-18)\cdot(N+4)/7+10+20\cdot U$,
truncated so that $18\leq Age\leq 75$; then $D_{Birth}=T_{app}-D_{Age}\cdot
365.5$. Furthermore, $D_{Income}=int((10000-500)/40\cdot 10\cdot abs(N)+500)$,
$D_{Inst}=int(Income\cdot abs(N)/4)$, $D_{Spending}=int(Income\cdot abs(N)/4)$,
$D_{N_{Inst}}=int(30\cdot abs(N)/4+6)$ with $N_{Inst}=6$ whenever
$N_{Inst}<6$, $D_{Nom_{i}}=int(5\cdot abs(N))$ and $D_{Int_{i}}=10\cdot U$ for
$i=1,2,3,4$; if $x^{act}_{n_{due}}<2$ then $D_{days}=-int(15\cdot(abs(N)/4))$,
otherwise $D_{days}=int(15\cdot(N/4))$, where $int()$ and $abs()$ denote the
integer part and the absolute value, respectively.
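The common parameters above translate directly into code; the following Python sketch assumes that months are indexed $m=0,\dots,83$ (January 1970 to December 1976) and uses NumPy for the $N$ and $U$ generators.

```python
import numpy as np

rng = np.random.default_rng(2)
N_MONTHS = 7 * 12                       # January 1970 .. December 1976

def E(m: int) -> float:
    """Cyclic macro-economic adjustment of section 4.1 (T_e - T_s taken as 84 months)."""
    return 0.01 + (1.5 + np.sin(5 * np.pi * m / N_MONTHS) + rng.normal() / 5) / 8

def monthly_applications(month_index: int) -> int:
    """D_Applications, with a 20% December uplift (month_index 0 = January 1970)."""
    n = 300 * 30 * (1 + rng.normal() / 20)
    if month_index % 12 == 11:
        n *= 1.2
    return int(n)

def draw_income() -> int:
    return int((10_000 - 500) / 40 * 10 * abs(rng.normal()) + 500)

def draw_age() -> float:
    age = (75 - 18) * (rng.normal() + 4) / 7 + 10 + 20 * rng.random()
    return float(np.clip(age, 18, 75))

print([round(E(m), 3) for m in range(0, N_MONTHS, 12)])
print(monthly_applications(11), draw_income(), draw_age())
```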
To avoid scale or unit problems for individual variables, it is suggested to
perform a simple standardization step on the ABT table for every $T_{cur}$
before the score calculation. That idea is quite realistic: even customers who
are good payers can have more problems in a crisis period, so the general
condition of the current month can influence all customers. On the other hand,
in order to present the two cases of interest it was decided to standardize
the variables with global parameters.
The scoring formula for $Score_{Main}$ is calculated based on table 1, namely:
$Score_{Main}=\sum_{i=1}^{28}\beta_{i}(x_{i}-\mu_{i})/\sigma_{i}.$
All beta coefficients could be recalculated without the standardization step,
but in that case they would be more difficult to interpret. A simple study of
table 1 shows that the most significant variables are those with $|\beta|=6$.
Table 1: Scoring formula for $Score_{Main}$.
Index | $x$ – variable | $\mu$ | $\sigma$ | $\beta$
---|---|---|---|---
1 | $x^{a}_{Nom_{1}}$ | 3.5 | 3 | 1
2 | $x^{a}_{Nom_{2}}$ | 3.5 | 3 | 2
3 | $x^{a}_{Nom_{3}}$ | 3.5 | 3 | 1
4 | $x^{a}_{Nom_{4}}$ | 3.5 | 3 | 3
5 | $x^{a}_{Int_{1}}$ | 5 | 2.89 | 1
6 | $x^{a}_{Int_{2}}$ | 5 | 2.89 | -4
7 | $x^{a}_{Int_{3}}$ | 5 | 2.89 | 1
8 | $x^{a}_{Int_{4}}$ | 5 | 2.89 | -2
9 | $x^{act}_{days}$ | 13 | 2.42 | -5
10 | $x^{act}_{utl}$ | 0.36 | 0.28 | -4
11 | $x^{act}_{dueutl}$ | 0.12 | 0.2 | -6
12 | $x^{act}_{n_{due}}$ | 1.3 | 2 | -2
13 | $x^{act}_{age}$ | 53 | 9.9 | 4
14 | $x^{act}_{capacity}$ | 0.4 | 0.21 | -2
15 | $x^{act}_{dueinc}$ | 0.3 | 0.6 | -1
16 | $x^{act}_{loaninc}$ | 2.4 | 2.1 | -2
17 | $x^{a}_{Income}$ | 2395 | 1431 | 2
18 | $x^{l}_{Amount}$ | 5741 | 6804 | -1
19 | $x^{l}_{N_{inst}}$ | 12.3 | 4.63 | -4
20 | $x^{beh}_{n_{due}}(3)$ | 1.4 | 1.6 | -4
21 | $x^{beh}_{days}(3)$ | 14.15 | 1.4 | -6
22 | $x^{beh}_{n_{due}}(6)$ | 1.6 | 1.13 | -5
23 | $x^{beh}_{days}(6)$ | 14.57 | 1.02 | -6
24 | $x^{beh}_{n_{due}}(9)$ | 1.78 | 0.75 | -5
25 | $x^{beh}_{days}(9)$ | 14.78 | 0.72 | -6
26 | $x^{beh}_{n_{due}}(12)$ | 1.89 | 0.48 | -5
27 | $x^{beh}_{days}(12)$ | 14.91 | 0.49 | -6
28 | $\varepsilon$ | 0 | 0.02916 | 1
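A sketch of the $Score_{Main}$ computation: each ABT variable is standardized with the global $\mu$ and $\sigma$ of table 1 and multiplied by its $\beta$; only a handful of the 28 rows are reproduced below, the variable names are placeholders, and the random term $\beta_{r}\varepsilon$ is omitted for clarity.

```python
# (mu, sigma, beta) for a handful of the 28 variables in table 1.
SCORE_PARAMS = {
    "act_days":       (13.0,  2.42, -5),
    "act_dueutl":     (0.12,  0.20, -6),
    "beh_days_3":     (14.15, 1.40, -6),
    "beh_n_due_6":    (1.60,  1.13, -5),
    "income":         (2395,  1431,  2),
    "n_installments": (12.3,  4.63, -4),
}

def score_main(abt_row: dict) -> float:
    """Deterministic part of Score_Main: sum_i beta_i * (x_i - mu_i) / sigma_i.
    The random term beta_r * epsilon of formula 2.2 is omitted for clarity."""
    return sum(beta * (abt_row[name] - mu) / sigma
               for name, (mu, sigma, beta) in SCORE_PARAMS.items())

row = {"act_days": 10, "act_dueutl": 0.0, "beh_days_3": 13.0,
       "beh_n_due_6": 0.0, "income": 4000, "n_installments": 12}
print(round(score_main(row), 2))
```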
### 4.2 The first case study – unstable application characteristic – APP
In this case it is assumed that only customers with low income are affected by
a crisis. The application characteristic income is a stable variable over time
in this data generator, and the migration matrix is adjusted by the
macro-economic variable $E(m)$ only for cases with
$x^{a}_{Income}<1800.$
This relation can easily be cast into the general form 2.3.
### 4.3 The second case study – unstable behavioral characteristic – BEH
Here the condition for the migration matrix adjustment is the following:
$x^{beh}_{n_{due}}(6)>0\quad\textrm{and}\quad x^{act}_{seniority}>6,$
where the rule on the seniority variable is added so as not to adjust accounts
whose behavioral variables were imputed according to 2.1. This case represents
a situation in which the crisis affects customers who had some delinquency
during their last 6 months.
### 4.4 Stability problem
Consider the typical scoring model building process, for example on the
behavioral sub-portfolio. Because the two cases are based on two variables,
one application and one behavioral, only the set of these two variables is
considered. To highlight strong instability, the models are analyzed with the
target variable $\textrm{Default}_{9}$.
Every variable is segmented, or binned, into a few attributes described in
tables 2 and 3.
In the case of the unstable application variable (APP), figure 6 confirms, as
expected, that attribute 2 is very stable over time and that accounts from
that group are not very sensitive to crisis changes. In contrast, attribute 1
is very unstable. The same groups in the case of the unstable behavioral
variable (BEH) are both unstable, see figure 7. The same group, accounts from
attribute 2, is shown in figure 5 for both cases on a finer scale, to
demonstrate that the APP case can indeed select accounts that are not
sensitive to the crisis. Even though the data generator is a simplification of
real data, this conclusion is very useful: some application data can be
valuable in risk management for identifying sub-segments with risk that is
stable over time.
The same conclusions cannot be formulated for the behavioral variable
$x^{beh}_{n_{due}}(6)$. Figure 3 presents the risk evolution for the three
attributes of that variable; none of them is stable. The most stable attribute
is number 3. In the BEH case that attribute is not stable either, see figure
4. To confirm this, figure 2 shows attribute 3 alone for both cases; any
reader can see that both cases have unstable risk. Even though in the BEH case
attribute 3 is expected to have stable risk, due to the rule for the migration
matrix adjustment, this expectation fails. The reason follows from a correct
understanding of the process. The typical scoring approach is based on the
principle that historical information up to the observation point is able to
predict behavior during the outcome period. Up to the observation point the
account did not have any delinquency, so the variable $x^{beh}_{n_{due}}(6)=0$.
After that point, in the following months, the account can have due
installments; those transitions can be adjusted by the macro-economic
variable, and in the end the group can become unstable.
This observation is very important for further research on the crisis. It
should be emphasized that typical scoring methods used on the three types of
sub-portfolios, APP, BEH and COL, cannot correctly discover the rule of the
crisis adjustment and cannot identify sub-segments that are stable over time.
Of course, scoring can also be used, just as in this paper, for prediction of
migration states; to be very clear, not for prediction of default statuses but
for prediction of transitions. The best method is probably survival analysis
[13], [14] with time covariates (time-dependent variables), which naturally
captures the factor of being a better or worse payer at the right time: in the
typical scoring model this factor is considered only up to the observation
point, whereas in a survival model it can also be taken into account after the
observation point, which is more realistic.
Many other cases of data generators with more complex rules for
$Score_{Cycle}$ have been produced. If both types of variables, application
and behavioral, are taken together, the case becomes too complicated and
instability appears everywhere; in that case it is not possible to find a
stable factor. This conclusion is also very important for crisis analysis,
because it describes the nature of a crisis: if it is a strong event with an
impact on both behavioral and application characteristics, then risk
management can only try to find sub-segments that are more stable than others,
or whose maximal risk does not exceed the expected boundary.
Table 2: Simple binning for two variables in the case APP.

Characteristic | Attribute number | Condition | Bad rate on $\textrm{Default}_{9}$ | Population percent | Gini on $\textrm{Default}_{9}$
---|---|---|---|---|---
$x^{beh}_{n_{due}}(6)$ | 1 | $x^{act}_{seniority}<6$ | 16.77% | 37.09% | 51.34%
$x^{beh}_{n_{due}}(6)$ | 2 | $x^{beh}_{n_{due}}(6)>0$ and $x^{act}_{seniority}\geq 6$ | 6.48% | 22.49% |
$x^{beh}_{n_{due}}(6)$ | 3 | otherwise | 1.07% | 40.42% |
$x^{a}_{Income}$ | 1 | $x^{a}_{Income}<1800$ | 20.11% | 18.32% | 36.29%
$x^{a}_{Income}$ | 2 | $x^{a}_{Income}\geq 1800$ | 4.72% | 81.68% |
Table 3: Simple binning for two variables in the case BEH.

Characteristic | Attribute number | Condition | Bad rate on $\textrm{Default}_{9}$ | Population percent | Gini on $\textrm{Default}_{9}$
---|---|---|---|---|---
$x^{beh}_{n_{due}}(6)$ | 1 | $x^{act}_{seniority}<6$ | 19.49% | 40.05% | 46.54%
$x^{beh}_{n_{due}}(6)$ | 2 | $x^{beh}_{n_{due}}(6)>0$ and $x^{act}_{seniority}\geq 6$ | 14.04% | 16.52% |
$x^{beh}_{n_{due}}(6)$ | 3 | otherwise | 1.74% | 43.43% |
$x^{a}_{Income}$ | 1 | $x^{a}_{Income}<1800$ | 12.09% | 39.49% | 5.04%
$x^{a}_{Income}$ | 2 | $x^{a}_{Income}\geq 1800$ | 10.09% | 60.51% |
### 4.5 Various types of risk measures
Let the crisis be defined as the time when risk is highest. The most popular
reporting for risk management is based on bad rates, vintages and flow rates.
Figure 1 presents bad rates for three different sub-portfolios (application,
behavioral and collection), together with one flow rate. A simple conclusion
is that the crisis does not show up at the same time in every report: some
curves reach their local maximum of risk earlier than others. The difference
in time is significant and can be almost 6 months, so it is very important to
know which kinds of reports can indicate a crisis as quickly as possible. It
should be emphasized that bad rate reports present, in the standard way, the
evolution of risk by observation points, whereas the crisis can occur between
the observation point and the end of the outcome period. Flow rate reports
seem to pinpoint the crisis time more precisely.
Figure 1: Risk measures on $\textrm{Default}_{9}$ compared across the sub-portfolios APP, BEH and COL, together with one flow rate $M_{23}$.
Figure 2: Risk measures on $\textrm{Default}_{9}$ for attribute 3 of the variable $x^{beh}_{n_{due}}(6)$ in the two cases APP and BEH.
Figure 3: Risk measures on $\textrm{Default}_{9}$ for the attributes of the variable $x^{beh}_{n_{due}}(6)$ in the case APP.
Figure 4: Risk measures on $\textrm{Default}_{9}$ for the attributes of the variable $x^{beh}_{n_{due}}(6)$ in the case BEH.
Figure 5: Risk measures on $\textrm{Default}_{9}$ for attribute 2 of the variable $x^{a}_{Income}$ in the two cases APP and BEH.
Figure 6: Risk measures on $\textrm{Default}_{9}$ for the attributes of the variable $x^{a}_{Income}$ in the case APP.
Figure 7: Risk measures on $\textrm{Default}_{9}$ for the attributes of the variable $x^{a}_{Income}$ in the case BEH.
### 4.6 Implementation
All data were prepared with the SAS System [11], using code written manually
in SAS 4GL with the modules Base SAS and SAS/STAT. For the case of the
unstable behavioral variable (BEH), the Production dataset has 779,993 rows
(about 90 MB) and the Transaction dataset has 8,969,413 rows (about 400 MB).
The total computation time per case is about 4 hours.
## 5 Conclusions
Even though the data are generated by a random simulation process, which is
not fully realistic, the conclusions make it possible to better understand the
nature of the crisis.
The banking data generator offers a promising route towards a rigorous method
for comparing various credit scoring techniques. It is probable that in the
future many randomly generated data sets will become a new repository for
testing and comparison.
In the first case, with an unstable application variable such as income, it is
possible to split the portfolio into two parts, stable and unstable over time.
In the second case, with an unstable behavioral characteristic, the task is
more complicated and such a split is not possible; some sub-segments can have
better stability, but they always fluctuate. Moreover, if a crisis is driven
by many factors, both from application-form customer characteristics and from
customer behavior, it is very difficult to identify these factors, and the
crisis appears in every kind of report.
The generated data are very useful for various analyses and research. There
are many rows and many bad default statuses, so an analyst can carry out many
good exercises to improve his experience.
## References
* [1] Edward Huang. Scorecard specification, validation and user acceptance: A lesson for modellers and risk managers. Credit Scoring Conference CRC, Edinburgh, 2007.
* [2] Basel Committee on Banking Supervision. International convergence of capital measurement and capital standards. A Revised Framework, Updated November 2005. http://www.bis.org.
* [3] Madhur Malik & Lyn C Thomas. Modelling credit risk in portfolios of consumer loans: Transition matrix model for consumer credit ratings. Credit Scoring Conference CRC, Edinburgh, 2009.
* [4] Edward Huang, Christopher Scott. Credit risk scorecard design, validation and user acceptance: A lesson for modellers and risk managers. Credit Scoring Conference CRC, Edinburgh, 2007.
* [5] Izabela Majer. Application scoring: logit model approach and the divergence method compared. Warsaw School of Economics – SGH, Working Paper No. 10-06, 2010.
* [6] Elizabeth Mays. Systematic risk effects on consumer lending products. Credit Scoring Conference CRC, Edinburgh, 2009.
* [7] Naeem Siddiqi. Credit risk scorecards: Developing and implementing intelligent credit scoring. Wiley and SAS Business Series, 2005.
* [8] OPNET Technologies Inc. http://www.opnet.com.
* [9] Bala Supramaniam, Mahadevan; Shanmugam. Simulating retail banking for banking students. Reports – Evaluative, Practitioners and Researchers ERIC Identifier: ED503907, 2009.
* [10] H. J. Watson. Computer simulation in business. New York: John Wiley & Sons, 1981.
* [11] SAS Institute Inc. http://www.sas.com.
* [12] Basel Committee on Banking Supervision. Validation of internal rating systems. Working Paper No. 14, February 2005. http://www.bis.org.
* [13] Tony Bellotti, Jonathan Crook. Credit scoring with macroeconomic variables using survival analysis. Journal of the Operational Research Society Key: citeulike:4083586, 2009.
* [14] Jonathan Crook. Dynamic consumer risk models: an overview. Credit Scoring Conference CRC, Edinburgh, 2008.
|
arxiv-papers
| 2011-05-15T20:54:07 |
2024-09-04T02:49:18.777469
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Karol Przanowski",
"submitter": "Karol Przanowski",
"url": "https://arxiv.org/abs/1105.2968"
}
|
1105.3254
|
# New metric tensors for anisotropic mesh generation
Hehu Xie111LSEC, ICMSEC, Academy of Mathematics and Systems Science, CAS,
Beijing 100080, China email: hhxie@lsec.cc.ac.cn Xiaobo Yin222Department of
Mathematics, Central China Normal University, Wuhan 430079, China email:
yinxb@lsec.cc.ac.cn
> Abstract. A new anisotropic mesh adaptation strategy for finite element
> solution of elliptic differential equations is presented. It generates
> anisotropic adaptive meshes as quasi-uniform ones in some metric space, with
> the metric tensor computed from the a posteriori error estimates proposed in
> [36]. The new metric tensor captures more comprehensive anisotropy
> information about the true solution than existing ones. Numerical results
> show that this approach can be successfully applied to Poisson and steady
> convection-dominated problems. The superior accuracy and efficiency of the
> new metric tensor over others is illustrated on various numerical examples
> of complex two-dimensional simulations.
>
> Keywords. anisotropic; mesh adaptation; metric tensor.
>
> AMS subject classification. 65N30, 65N50
## 1 Introduction
Nowadays, many computational simulations of partial differential equations
(PDEs) involve adaptive triangulations. Mesh adaptation aims at improving the
efficiency and the accuracy of numerical solutions by concentrating more nodes
in regions of large solution variations than in other regions of the
computational domain. As a consequence, the number of mesh nodes required to
achieve a given accuracy can be dramatically reduced thus resulting in a
reduction of the computational cost. Traditionally, isotropic mesh adaptation
has received much attention, where almost regular mesh elements are only
adjusted in size based on an error estimate. However, in regions of large
solution gradient, adaptive isotropic meshes usually contain too many
elements. Moreover, the problem at hand may exhibit strong anisotropic
features, with solutions that change much more significantly in one direction
than in the others, such as boundary layers, shock waves, interfaces, and edge
singularities. In such cases it is advantageous to reflect this anisotropy in
the discretization by using meshes with anisotropic elements (sometimes also
called elongated elements). These elements have a small mesh
size in the direction of the rapid variation of the solution and a larger mesh
size in the perpendicular direction. Indeed anisotropic meshes have been used
successfully in many areas, for example in singular perturbation and flow
problems [1, 2, 4, 20, 21, 31, 39] and in adaptive procedures [5, 6, 8, 10,
22, 31, 32]. For problems with very different length scales in different
spatial directions, long and thin triangles turn out to be better choices than
shape regular ones if they are properly used.
Compared to traditionally used isotropic ones, anisotropic meshes are more
difficult to generate, requiring a full control of both the shape, size, and
orientation of elements. It is necessary to have as much information as
possible on the nature and local behavior of the solution. We need to convert
somehow what we know about the solution to something having the dimension of a
length and containing directional information. In practice, they are commonly
generated as quasi-uniform meshes in the metric space determined by a tensor
(or a matrix-valued function) specifying the shape, size, and orientation of
elements on the entire physical domain.
Such a metric tensor is often given on a background mesh, either prescribed by
the user or chosen as the mesh from the previous iteration in an adaptive
solver. So far, several meshing strategies have been developed for generating
anisotropic meshes according to a metric prescription. Examples include blue
refinement [27, 28], directional refinement [32], Delaunay-type triangulation
method [5, 6, 10, 31], advancing front method [18], bubble packing method
[35], local refinement and modification [20, 33]. On the other hand,
variational methods have received much attention in the recent years typically
for generating structured meshes as they are especially well suited for finite
difference schemes. In these approaches, an adaptive mesh is considered as the
image of a computational mesh under a coordinate transformation determined by
a functional [7, 15, 23, 25, 26, 29]. Readers are referred to [17] for an
overview.
Among these meshing strategies, the definition of the metric tensor based on
the Hessian of the solution is nowadays widely adopted in the meshing
community. This choice is largely motivated by the interesting numerical
results of D’Azevedo [13] and of D’Azevedo and Simpson [14] on linear
interpolation for quadratic functions on triangles. For example,
Castro-D$\acute{\imath}$az et al. [10], Habashi et al. [20], and Remacle et
al. [33] define their metric tensor as
$\displaystyle\mathcal{M}=|H(u)|\equiv
R\left(\begin{array}[]{cc}|\lambda_{1}|&0\\\
0&|\lambda_{2}|\end{array}\right)R^{T}$ (1.3)
where the Hessian of function $u$ has the eigen-decomposition
$H(u)=R\,\,\mbox{diag}(\lambda_{1},\lambda_{2})R^{T}$. To guarantee its
positive definiteness and avoid unrealistic metric, $\mathcal{M}$ in (1.3) is
modified by imposing the maximal and minimal edge lengths. Hecht [22] uses
$\displaystyle\mathcal{M}=\frac{1}{\epsilon_{0}\cdot\mbox{Coef}^{2}}\frac{|H(u)|}{\max\\{\mbox{CutOff},|u|\\}}$
(1.4)
for the relative error and
$\displaystyle\mathcal{M}=\frac{1}{\epsilon_{0}\cdot\mbox{Coef}^{2}}\frac{|H(u)|}{\sup(u)-\inf(u)}$
(1.5)
for the absolute error, where $\epsilon_{0}$, Coef, and CutOff are the user
specified parameters used for setting the level of the linear interpolation
error (with default value 10-2), the value of a multiplicative coefficient on
the mesh size (with default value 1), and the limit value of the relative
error evaluation (with default value 10-5), respectively. In [19], George and
Hecht define the metric tensor for various norms of the interpolation error as
$\displaystyle\mathcal{M}={\Big{(}}\frac{c_{0}}{\epsilon_{0}}{\Big{)}}^{\nu}R\left(\begin{array}[]{cc}|\lambda_{1}|^{\nu}&0\\ 0&|\lambda_{2}|^{\nu}\end{array}\right)R^{T}$ (1.8)
where $c_{0}$ is a constant, $\epsilon_{0}$ is a given error threshold, and
$\nu=1$ for the $L^{\infty}$ norm and the $H^{1}$ semi-norm and $\nu=1/2$ for
$L^{2}$ norm of the error. It is emphasized that the definitions (1.3)-(1.5)
are based on the results of [13], while (1.8) rests largely on heuristic
considerations. Huang [24] develops the metric tensors as
$\displaystyle\mathcal{M}_{0}=\frac{1}{\sigma}\cdot{\Big{(}}\frac{\alpha}{\epsilon_{0}}{\Big{)}}\det{\Big{(}}I+\frac{1}{\alpha}|H(u)|{\Big{)}}^{-\frac{1}{6}}{\Big{[}}I+\frac{1}{\alpha}|H(u)|{\Big{]}}$
(1.9)
for the $L^{2}$ norm and
$\displaystyle\mathcal{M}_{1}=\frac{1}{\sigma}\cdot{\Big{(}}\frac{\alpha}{\epsilon_{0}}{\Big{)}}^{2}{\Big{[}}I+\frac{1}{\alpha}|H(u)|{\Big{]}}$
(1.10)
for the $H^{1}$ semi-norm.
This list is certainly incomplete, but from these papers we can draw two
conclusions. First, compared with isotropic meshes, significant improvements
in accuracy and efficiency can be gained when a properly chosen anisotropic
mesh is used in the numerical solution of a large class of problems exhibiting
anisotropic solution features. Second, there are still challenges in
extracting fully anisotropic information from the computed solution.
The objective of this paper is to develop a new way to get metric tensors for
anisotropic mesh generation in two dimensions, which captures more
comprehensive anisotropic information than some existing methods. The development
is based on the error estimates obtained in our recent work [36] on linear
interpolation for quadratic functions on triangles. These estimates are
anisotropic in the sense that they allow a full control of the shape of
elements when used within a mesh generation strategy.
The application of the error estimates of [36] to formulate the metric tensor,
$\mathcal{M}$, is based on two factors: on the one hand, as a common practice
in the existing anisotropic mesh generation codes, we assume that the
anisotropic mesh is generated as a quasi-uniform mesh in the metric tensor
$\mathcal{M}$, i.e., a mesh where the elements are equilateral or isosceles
right triangle or other quasi-uniform triangles (shape requirement) in
$\mathcal{M}$ and unitary in size (size requirement) in $\mathcal{M}$. On the
other hand, the anisotropic mesh is required to minimize the error for a given
number of mesh points (or equidistribute the error). Then $\mathcal{M}$ is
constructed from these requirements. We will establish new metric tensors as
$\displaystyle\mathcal{M}_{0}({\bf
x})=\frac{N}{\sigma_{0}}\det{\Big{(}}\mathcal{H}{\Big{)}}^{-\frac{1}{6}}{\Big{[}}\mathcal{H}{\Big{]}}$
(1.11)
for the $L^{2}$ norm and
$\displaystyle\mathcal{M}_{1}({\bf
x})=\frac{N}{\sigma_{1}}{\Big{[}}\frac{\mbox{tr}(\mathcal{H})}{\sqrt{\det(\mathcal{H})}}{\Big{]}}^{\frac{1}{2}}{\Big{[}}\mathcal{H}{\Big{]}}$
(1.12)
for the $H^{1}$ semi-norm, where $N$ is the number of elements in the
triangulation. Under the condition $\mathcal{H}=I+\frac{1}{\alpha}|H(u)|$, our
metric tensor $\mathcal{M}_{0}$ for the $L^{2}$ norm is similar to (1.9).
However, the new metric tensor $\mathcal{M}_{1}$ is essentially different from
the metric tensors mentioned above. The difference lies in the term
$\displaystyle{\Big{[}}\frac{\mbox{tr}(\mathcal{H})}{\sqrt{\det(\mathcal{H})}}{\Big{]}}^{\frac{1}{2}},$
(1.13)
which indicates that our metric tensor captures more comprehensive anisotropic
information about the solution when the term (1.13) varies significantly
between points or elements. In addition, numerical results will show that
the more anisotropic the solution is, the more obvious the superiority of the
new metric tensor is.
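As an illustration of how the proposed tensors could be assembled in practice, the following Python sketch builds $\mathcal{M}_{0}$ and $\mathcal{M}_{1}$ of (1.11) and (1.12) from a recovered Hessian at one point; the Hessian, $N$, $\sigma_{0}$ and $\sigma_{1}$ used here are placeholder values.

```python
import numpy as np

def metric_tensors(H: np.ndarray, n_elements: int, sigma0: float, sigma1: float):
    """Metric tensors (1.11) and (1.12) built from a symmetric matrix H
    (e.g. a recovered Hessian, made positive definite via |H|)."""
    evals, evecs = np.linalg.eigh(H)
    Hm = evecs @ np.diag(np.abs(evals)) @ evecs.T          # |H|, symmetric positive definite
    detH, trH = np.linalg.det(Hm), np.trace(Hm)
    M0 = n_elements / sigma0 * detH ** (-1.0 / 6.0) * Hm             # L2-norm metric (1.11)
    M1 = n_elements / sigma1 * np.sqrt(trH / np.sqrt(detH)) * Hm     # H1-seminorm metric (1.12)
    return M0, M1

# Placeholder anisotropic Hessian with strong curvature in the x-direction.
H = np.array([[100.0, 0.0],
              [0.0,   1.0]])
M0, M1 = metric_tensors(H, n_elements=1000, sigma0=1.0, sigma1=1.0)
print(M0)
print(M1)
```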
The paper is organized as follows. In Section 2, we describe the anisotropic
error estimates on linear interpolation for quadratic functions on triangles
obtained in the recent work [36]. The formulation of the metric tensor is
developed in Section 3. Numerical results are presented in Section 4 for
various examples of complex two-dimensional simulations. Finally, conclusions
are drawn in Section 5.
## 2 Anisotropic estimates for interpolation error
Needless to say, the interpolation error depends on the solution and the size
and shape of the elements in the mesh. Understanding this relation is crucial
for the generation of efficient and effective meshes for the finite element
method. In the mesh generation community, this relation is studied more
closely for the model problem of interpolating quadratic functions. This model
is a reasonable simplification of the cases involving general functions, since
quadratic functions are the leading terms in the local expansion of the linear
interpolation errors. For instance, Nadler [30] derived an exact expression
for the $L^{2}$-norm of the linear interpolation error in terms of the three
sides ${\bf\ell}_{1}$, ${\bf\ell}_{2}$, and ${\bf\ell}_{3}$ of the triangle K
(see Figure 1),
$\displaystyle\|u-u_{I}\|^{2}_{L^{2}(K)}=\frac{|K|}{180}{\Big{[}}{\Big{(}}d_{1}+d_{2}+d_{3}{\Big{)}}^{2}+d_{1}d_{2}+d_{2}d_{3}+d_{1}d_{3}{\Big{]}}.$
(2.1)
where $|K|$ is the area of the triangle, $d_{i}={\bf\ell}_{i}\cdot
H{\bf\ell}_{i}$ with $H$ being the Hessian of $u$. Bank and Smith [3] gave a
formula for the $H^{1}$-seminorm of the linear interpolation error
$\displaystyle\|\nabla(u-u_{I})\|^{2}_{L^{2}(K)}=\frac{1}{4}{\bf v}\cdot B{\bf
v},$
where ${\bf v}=[d_{1},d_{2},d_{3}]^{T}$,
$\displaystyle
B=\frac{1}{48|K|}\left(\begin{array}[]{ccc}|{\bf\ell}_{1}|^{2}+|{\bf\ell}_{2}|^{2}+|{\bf\ell}_{3}|^{2}&2{\bf\ell}_{1}\cdot{\bf\ell}_{2}&2{\bf\ell}_{1}\cdot{\bf\ell}_{3}\\\
2{\bf\ell}_{1}\cdot{\bf\ell}_{2}&|{\bf\ell}_{1}|^{2}+|{\bf\ell}_{2}|^{2}+|{\bf\ell}_{3}|^{2}&2{\bf\ell}_{2}\cdot{\bf\ell}_{3}\\\
2{\bf\ell}_{1}\cdot{\bf\ell}_{3}&2{\bf\ell}_{2}\cdot{\bf\ell}_{3}&|{\bf\ell}_{1}|^{2}+|{\bf\ell}_{2}|^{2}+|{\bf\ell}_{3}|^{2}\end{array}\right).$
Cao [9] derived two exact formulas for the $H^{1}$-seminorm and $L^{2}$-norm
of the interpolation error in terms of the area, aspect ratio, orientation,
and internal angles of the triangle.
Chen, Sun and Xu [11] obtained the error estimate
$\displaystyle\|\nabla(u-u_{I})\|^{2}_{L^{p}(\Omega)}\leq
CN^{-\frac{2}{n}}\|\sqrt[n]{\det H}\|_{L^{\frac{pn}{2p+n}}(\Omega)},1\leq
p\leq\infty,$
where the constant $C$ does not depend on $u$ and $N$. They also showed the
estimate is optimal in the sense that it is a lower bound if $u$ is strictly
convex or concave.
Assuming $u=\lambda_{1}x^{2}+\lambda_{2}y^{2}$, D’Azevedo and Simpson [12]
derived the exact formula for the maximum norm of the interpolation error
$\displaystyle\|(u-u_{I})\|^{2}_{L^{\infty}(K)}=\frac{D_{12}D_{23}D_{31}}{16\lambda_{1}\lambda_{2}|K|^{2}},$
where
$D_{ij}={\bf\ell}_{i}\cdot\mbox{diag}(\lambda_{1},\lambda_{2}){\bf\ell}_{i}$.
Based on the geometric interpretation of this formula, they proved that for a
fixed area the optimal triangle, which produces the smallest maximum
interpolation error, is the one obtained by compressing an equilateral
triangle by factors $\sqrt{\lambda_{1}}$ and $\sqrt{\lambda_{2}}$ along the
two eigenvectors of the Hessian of $u$. Furthermore, the optimal incidence for
a given set of interpolation points is the Delaunay triangulation based on the
stretching map (by factors $\sqrt{\lambda_{1}}$ and $\sqrt{\lambda_{2}}$ along
the two eigenvector directions) of the grid points. Rippa [34] showed that the
mesh obtained this way is also optimal for the $L^{p}$-norm of the error for
any $1\leq p\leq\infty$.
The element-wise error estimates in the following theorem are developed in
[36] using the theory of interpolation for finite elements and proper
numerical quadrature formula.
###### Theorem 2.1.
Let $u$ be a quadratic function and let $u_{I}$ be the Lagrangian linear finite
element interpolation of $u$. The following relationship holds:
$\displaystyle\|\nabla(u-u_{I})\|^{2}_{L^{2}(K)}=\frac{1}{48|K|}\sum_{i=1}^{3}({\bf\ell}_{i+1}\cdot
H{\bf\ell}_{i+2})^{2}|{\bf\ell}_{i}|^{2},$
where the edge indices are understood cyclically, i.e., ${\bf\ell}_{i+3}={\bf\ell}_{i}$ and ${\bf\ell}_{i-3}={\bf\ell}_{i}$.
Naturally,
$\displaystyle\eta_{I}=\sqrt{\sum_{K\in\mathcal{T}_{h}}\frac{1}{48|K|}\sum_{i=1}^{3}({\bf\ell}_{i+1}\cdot
H{\bf\ell}_{i+2})^{2}|{\bf\ell}_{i}|^{2}}$
is set as the a posteriori estimator for
$\|\nabla(u-u_{I})\|_{L^{2}(\Omega)}$. Numerical experiments in [36] show that
the estimators are always asymptotically exact under various isotropic and
anisotropic meshes.
## 3 Metric tensors for anisotropic mesh adaptation
We now use the results of Theorem 2.1 to develop a formula for the metric
tensor. As a common practice in anisotropic mesh generation, we assume that
the metric tensor, $\mathcal{M}({\bf x})$, is used in a meshing strategy in
such a way that an anisotropic mesh is generated as a quasi-uniform mesh in
the metric determined by $\mathcal{M}({\bf x})$. Mathematically, this can be
interpreted as the shape, size and equidistribution requirements as follows.
The shape requirement. The elements of the new mesh, $\mathcal{T}_{h}$, are
(or are close to being) equilateral in the metric.
The size requirement. The elements of the new mesh $\mathcal{T}_{h}$ have a
unitary volume in the metric, i.e.,
$\displaystyle\int_{K}\sqrt{\det(\mathcal{M}({\bf x}))}d{\bf x}=1,\quad\forall
K\in\mathcal{T}_{h}.$ (3.1)
The equidistribution requirement. The anisotropic mesh is required to
minimize the error for a given number of mesh points (or equidistribute the
error on every element).
We now state the most significant contributions of this paper.
### 3.1 Metric tensor for the $H^{1}$ semi-norm
$\mathcal{F}_{K}$$\bf\ell_{1}$$\bf\ell_{2}$$\bf\ell_{3}$$\theta_{1}$$\theta_{2}$$\theta_{3}$$\hat{\theta}_{1}$$\hat{\theta}_{2}$$\hat{\theta}_{3}$$\hat{\ell}_{1}$$\hat{\ell}_{2}$$\hat{\ell}_{3}$
Figure 1: Affine map ${\bf\hat{x}}=\mathcal{F}_{K}{\bf x}$ from $K$ to the
reference triangle $\hat{K}$.
Assume that $H({\bf x})$ is a symmetric positive definite matrix at every
point ${\bf x}$. Let $\mathcal{M}_{1}({\bf x})=\theta_{1}M_{1}({\bf x})$, where
$\theta_{1}$ is to be determined. Here $M_{1}({\bf x})$ is often called the
monitor function. Both the monitor function $M_{1}({\bf x})$ and metric tensor
$\mathcal{M}_{1}({\bf x})$ play the same role in mesh generation, i.e., they
are used to specify the size, shape, and orientation of mesh elements
throughout the physical domain. The only difference lies in the way they
specify the size of elements. Indeed, $M_{1}({\bf x})$ specifies the element
size through the equidistribution condition, while $\mathcal{M}_{1}({\bf x})$
determines the element size through the unitary volume requirement (3.1).
Assume first that $H({\bf x})$ is a constant matrix on $K$, denoted by $H_{K}$;
then so is $M_{1}({\bf x})$, denoted by $M_{1,K}$. Since $H_{K}$ is a symmetric
positive definite matrix, we consider its eigendecomposition
$H_{K}=R^{T}\Lambda R$, where $\Lambda=\mbox{diag}(\lambda_{1},\lambda_{2})$
is the diagonal matrix of the corresponding eigenvalues
($\lambda_{1},\lambda_{2}>0$) and $R$ is the orthogonal matrix having as rows
the eigenvectors of $H_{K}$. Denote by $F_{K}$ and ${\bf t}_{K}$ the matrix
and the vector defining the invertible affine map $\hat{\bf
x}=\mathcal{F}_{K}({\bf x})=F_{K}{\bf x}+{\bf t}_{K}$ from the generic element
$K$ to the reference triangle $\hat{K}$ (see Figure 1).
Let $M_{1,K}=C_{K}H_{K}$, then
$F_{K}=C_{K}^{\frac{1}{2}}\Lambda^{\frac{1}{2}}R$ and
$M_{1,K}=F_{K}^{T}F_{K}$. Mathematically, the shape requirement can be
expressed as
$\displaystyle|\hat{\ell}_{i}|=L\,\,\mbox{and}\,\,\cos\hat{\theta}_{i}=\frac{\hat{\ell}_{i+1}\cdot\hat{\ell}_{i+2}}{L^{2}}=\frac{1}{2},\,i=1,2,3,$
where $L$ is a constant for every element $K$. Together with Theorem 2.1 we
have
$\displaystyle\|\nabla(u-u_{I})\|^{2}_{L^{2}(K)}$ $\displaystyle=$
$\displaystyle\frac{1}{48|K|}\sum_{i=1}^{3}({\bf\ell}_{i+1}\cdot
H_{K}{\bf\ell}_{i+2})^{2}|{\bf\ell}_{i}|^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{48|K|C_{K}^{2}}\sum_{i=1}^{3}({\bf\ell}_{i+1}\cdot
M_{1,K}{\bf\ell}_{i+2})^{2}|{\bf\ell}_{i}|^{2}$ $\displaystyle=$
$\displaystyle\frac{L^{4}}{48|K|C_{K}^{2}}\sum_{i=1}^{3}(\cos\hat{\theta}_{i})^{2}|{\bf\ell}_{i}|^{2}$
$\displaystyle=$
$\displaystyle\frac{C_{K}\sqrt{\lambda_{1}\lambda_{2}}L^{4}}{192|\hat{K}|C_{K}^{2}}\sum_{i=1}^{3}{\Big{|}}C_{K}^{-\frac{1}{2}}R^{-1}\Lambda^{-\frac{1}{2}}\hat{\ell}_{i}{\Big{|}}^{2}$
$\displaystyle=$
$\displaystyle\frac{\sqrt{\lambda_{1}\lambda_{2}}L^{2}}{48\sqrt{3}C_{K}^{2}}\sum_{i=1}^{3}{\Big{|}}\Lambda^{-\frac{1}{2}}\hat{\ell}_{i}{\Big{|}}^{2}$
$\displaystyle=$
$\displaystyle\frac{L^{4}}{32\sqrt{3}C_{K}^{2}}{\Big{(}}\sqrt{{\lambda_{1}}/{\lambda_{2}}}+\sqrt{{\lambda_{2}}/{\lambda_{1}}}{\Big{)}}.$
To satisfy the equidistribution requirement, let
$\displaystyle\|\nabla(u-u_{I})\|^{2}_{L^{2}(K)}={\Big{(}}\sum\limits_{K\in\mathcal{T}_{h}}e_{K}^{2}{\Big{)}}/N=\epsilon_{1}^{2}/N,$
where $N$ is the number of elements of $\mathcal{T}_{h}$. Then
$\displaystyle
C_{K}=L^{2}{\Big{(}}\frac{N}{32\sqrt{3}\epsilon_{1}^{2}}{\Big{)}}^{\frac{1}{2}}{\Big{(}}\sqrt{{\lambda_{1}}/{\lambda_{2}}}+\sqrt{{\lambda_{2}}/{\lambda_{1}}}{\Big{)}}^{\frac{1}{2}}=\overline{C}{\Big{(}}\sqrt{{\lambda_{1}}/{\lambda_{2}}}+\sqrt{{\lambda_{2}}/{\lambda_{1}}}{\Big{)}}^{\frac{1}{2}}=\overline{C}{\Big{[}}\frac{\mbox{tr}(H)}{\sqrt{\mbox{det}(H)}}{\Big{]}}^{\frac{1}{2}},$
where $\overline{C}$ is a constant which depends on the prescribed error and
the number of elements. In general, $H({\bf x})$ is not a
constant matrix on $K$, so $M_{1}({\bf x})$ should take the form
$\displaystyle M_{1}({\bf
x})={\Big{[}}\frac{\mbox{tr}(H)}{\sqrt{\mbox{det}(H)}}{\Big{]}}^{\frac{1}{2}}{\Big{[}}H(u){\Big{]}}={\Big{(}}\sqrt{{\lambda_{1}({\bf
x})}/{\lambda_{2}({\bf x})}}+\sqrt{{\lambda_{2}({\bf x})}/{\lambda_{1}({\bf
x})}}{\Big{)}}^{\frac{1}{2}}{\Big{[}}H(u){\Big{]}},$
since $M_{1}({\bf x})$ is defined only up to a multiplicative constant. Note that
$\lambda_{1}({\bf x})$ and $\lambda_{2}({\bf x})$ are the
eigenvalues of $H(u)$ at the point ${\bf x}$.
To establish $\mathcal{M}_{1}({\bf x})$, the size requirement (3.1) should be
used, which leads to
$\displaystyle\theta_{1}\int_{K}\rho_{1}({\bf x})d{\bf x}=1,$
where
$\displaystyle\rho_{1}({\bf x})=\sqrt{\det(M_{1}({\bf x}))}.$
Summing the above equation over all the elements of $\mathcal{T}_{h}$, one
gets
$\displaystyle\theta_{1}\sigma_{1}=N,$
where
$\displaystyle\sigma_{1}=\int_{\Omega}\rho_{1}({\bf x})d{\bf x}.$
Thus, we get
$\displaystyle\theta_{1}=\frac{N}{\sigma_{1}},$
and as a consequence,
$\displaystyle\mathcal{M}_{1}({\bf
x})=\frac{N}{\sigma_{1}}{\Big{[}}\frac{\mbox{tr}(H)}{\sqrt{\mbox{det}(H)}}{\Big{]}}^{\frac{1}{2}}{\Big{[}}H(u){\Big{]}}=\frac{N}{\sigma_{1}}{\Big{(}}\sqrt{{\lambda_{1}({\bf
x})}/{\lambda_{2}({\bf x})}}+\sqrt{{\lambda_{2}({\bf x})}/{\lambda_{1}({\bf
x})}}{\Big{)}}^{\frac{1}{2}}{\Big{[}}H(u){\Big{]}}.$
### 3.2 Metric tensor for the $L^{2}$ norm
Using the expression (2.1) for the $L^{2}$-norm of the linear interpolation
error derived by Nadler [30] and the condition (3.1), we have
$\displaystyle\|u-u_{I}\|^{2}_{L^{2}(K)}$ $\displaystyle=$
$\displaystyle\frac{|K|}{180}{\Big{[}}{\Big{(}}\sum_{i=1}^{3}d_{i}{\Big{)}}^{2}+d_{1}d_{2}+d_{2}d_{3}+d_{1}d_{3}{\Big{]}}$
$\displaystyle=$
$\displaystyle\frac{|K|}{180C_{K}^{2}}{\Big{[}}{\Big{(}}\sum_{i=1}^{3}|\hat{\ell}_{i}|^{2}{\Big{)}}^{2}+\sum_{i=1}^{3}{\Big{(}}|\hat{\ell}_{i+1}||\hat{\ell}_{i+2}|{\Big{)}}^{2}{\Big{]}}$
$\displaystyle=$
$\displaystyle\frac{L^{4}|K|}{15C_{K}^{2}}=\frac{L^{4}|\hat{K}|}{15C_{K}^{3}\sqrt{\lambda_{1}\lambda_{2}}}=\frac{\sqrt{3}L^{6}}{60C_{K}^{3}\sqrt{\lambda_{1}\lambda_{2}}}.$
To satisfy the equidistribution requirement, let
$\displaystyle\|u-u_{I}\|^{2}_{L^{2}(K)}={\Big{(}}\sum\limits_{K\in\mathcal{T}_{h}}e_{K}^{2}{\Big{)}}/N=\epsilon_{0}^{2}/N.$
Using a similar argument to that of the last subsection, we readily obtain the monitor
function
$\displaystyle M_{0}({\bf
x})=\det{\Big{(}}H{\Big{)}}^{-\frac{1}{6}}{\Big{[}}H(u){\Big{]}},$
and the metric tensor
$\displaystyle\mathcal{M}_{0}({\bf
x})=\frac{N}{\sigma_{0}}\det{\Big{(}}H{\Big{)}}^{-\frac{1}{6}}{\Big{[}}H(u){\Big{]}}$
for the $L^{2}$ norm.
### 3.3 Practical use of the metric tensors
So far we have assumed that $H({\bf x})$ is a symmetric positive definite matrix at
every point. However, this assumption does not hold in many cases. In order to
obtain a symmetric positive definite matrix, the following procedure is often
implemented. First, the Hessian $H$ is modified into
$|H|=R^{T}\,\,\mbox{diag}(|\lambda_{1}|,|\lambda_{2}|)R$ by taking the
absolute values of its eigenvalues ([21]). Since $|H|$ is only positive semi-
definite, $\mathcal{M}_{0}$ and $\mathcal{M}_{1}$ cannot be applied directly
to generate anisotropic meshes. To avoid this difficulty, we regularize
the expression with two flooring parameters, $\alpha_{0}>0$ for
$\mathcal{M}_{0}$ and $\alpha_{1}>0$ for $\mathcal{M}_{1}$, respectively
([23]). Replacing $|H|$ with
$\displaystyle\mathcal{H}=\alpha_{1}I+|H|,$
we obtain the modified metric tensor
$\displaystyle\mathcal{M}_{1}({\bf
x})=\frac{N}{\sigma_{1}}{\Big{[}}\frac{\mbox{tr}(\mathcal{H})}{\sqrt{\det(\mathcal{H})}}{\Big{]}}^{\frac{1}{2}}{\Big{[}}\mathcal{H}{\Big{]}}.$
(3.2)
Similarly, replacing $|H|$ with
$\displaystyle\mathcal{H}=\alpha_{0}I+|H|$
leads to
$\displaystyle\mathcal{M}_{0}({\bf
x})=\frac{N}{\sigma_{0}}\det{\Big{(}}\mathcal{H}{\Big{)}}^{-\frac{1}{6}}{\Big{[}}\mathcal{H}{\Big{]}}.$
(3.3)
The two modified metric tensors (3.2) and (3.3) are suitable for practical
mesh generation.
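To indicate how (3.2) and (3.3) can be assembled from an approximate Hessian field in practice, the following Python sketch treats the element-wise Hessians, element areas, and flooring parameters as given inputs and approximates $\sigma_{0}$ and $\sigma_{1}$ by one-point sums over the elements. The function names and numerical values are illustrative assumptions; this is not part of the BAMG tool chain.

```python
import numpy as np

def spd_regularize(H, alpha):
    """Modified Hessian: |H| from the eigendecomposition, plus the flooring
    term alpha*I, giving a symmetric positive definite matrix."""
    lam, R = np.linalg.eigh(np.asarray(H, dtype=float))  # H = R diag(lam) R^T
    absH = (R * np.abs(lam)) @ R.T
    return alpha * np.eye(2) + absH

def modified_metric_tensors(hessians, areas, alpha0=1e-3, alpha1=1e-3):
    """Element-wise modified metric tensors (3.2) and (3.3); sigma_0 and
    sigma_1 are approximated by element-wise one-point quadrature."""
    N = len(hessians)
    Hreg0 = [spd_regularize(H, alpha0) for H in hessians]
    Hreg1 = [spd_regularize(H, alpha1) for H in hessians]
    M0 = [np.linalg.det(Hk) ** (-1.0 / 6.0) * Hk for Hk in Hreg0]
    M1 = [np.sqrt(np.trace(Hk) / np.sqrt(np.linalg.det(Hk))) * Hk for Hk in Hreg1]
    sigma0 = sum(a * np.sqrt(np.linalg.det(m)) for a, m in zip(areas, M0))
    sigma1 = sum(a * np.sqrt(np.linalg.det(m)) for a, m in zip(areas, M1))
    return [N / sigma0 * m for m in M0], [N / sigma1 * m for m in M1]

# Two synthetic elements: one strongly anisotropic Hessian, one mild.
hessians = [np.array([[100.0, 0.0], [0.0, 1.0]]), np.array([[2.0, 0.5], [0.5, 2.0]])]
areas = [0.01, 0.01]
metric0, metric1 = modified_metric_tensors(hessians, areas)
print(metric1[0])
```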
## 4 Numerical experiments
We have argued that our metric tensor captures more anisotropic
information for a given problem. It is therefore crucial to check whether our
metric tensor performs better than those without the term
$\displaystyle{\Big{[}}\frac{\mbox{tr}(\mathcal{H})}{\sqrt{\mbox{det}(\mathcal{H})}}{\Big{]}}^{\frac{1}{2}}\quad\mbox{or}\quad{\Big{(}}\sqrt{{\lambda_{1}({\bf
x})}/{\lambda_{2}({\bf x})}}+\sqrt{{\lambda_{2}({\bf x})}/{\lambda_{1}({\bf
x})}}{\Big{)}}^{\frac{1}{2}}.$ (4.1)
### 4.1 Mesh adaptation tool
All the presented experiments are performed by using the BAMG software [22].
Given a background mesh and an approximate solution, BAMG generates a new mesh
according to the metric. The code allows the user to supply his/her own metric
tensor defined on a background mesh. In our computation, the background mesh
has been taken as the most recent mesh available.
### 4.2 Comparisons between two types of metric tensors
Specifically, in every example the PDE is discretized using linear triangular
finite elements. Two series of adaptive meshes with almost the same number of
elements are compared (the iterative procedure for solving the PDEs is shown in
Figure 2).
Figure 2: Iterative procedure for adaptive mesh solution of PDEs.
The finite element solutions are computed by using the metric tensor of
modified Hessian
$\displaystyle\mathcal{M}_{1}^{m}({\bf x})=\frac{N}{\sigma_{1}}|H|,$ (4.2)
and the new metric tensor
$\displaystyle\mathcal{M}_{1}^{n}({\bf
x})=\frac{N}{\sigma_{1}}{\Big{[}}\frac{\mbox{tr}(|H|)}{\sqrt{\det(|H|)}}{\Big{]}}^{\frac{1}{2}}|H|.$
(4.3)
Figure 3: Example 4.1. The $H^{1}$ and $H^{2}$ semi-norms of discretization
error are plotted as functions of the number of elements ($nbt$) in (a) and
(b), respectively.
Notice that the formulas of the metric tensors involve second order
derivatives of the physical solution. Generally speaking, one can assume that
the nodal values of the solution or their approximations are available, as is
typical in the numerical solution of partial differential equations. One can
then obtain approximations of the second order derivatives by applying a gradient
recovery technique (such as [40] and [38]) twice, or a Hessian recovery
technique (fitting piecewise quadratic polynomials in the least-squares
sense to nodal values of the computed solution [37]) just once, although their
convergence has been analyzed only on isotropic meshes. In our computations,
we use the technique [40] twice.
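The "apply gradient recovery twice" idea can be sketched as follows for a P1 triangulation. For brevity the sketch uses a simple area-weighted averaging of element gradients to the nodes rather than the specific recovery operator of [40], so it should be read as a schematic variant, not the exact technique used in the computations; the mesh and test function at the end are illustrative.

```python
import numpy as np

def element_gradients(nodes, tris, values):
    """Constant gradient of the piecewise-linear interpolant on each triangle."""
    grads, areas = np.zeros((len(tris), 2)), np.zeros(len(tris))
    for k, (i, j, m) in enumerate(tris):
        p = nodes[[i, j, m]]
        A = np.hstack([p, np.ones((3, 1))])
        coef = np.linalg.solve(A, values[[i, j, m]])     # u_h = a*x + b*y + c
        grads[k] = coef[:2]
        areas[k] = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                             - (p[1, 1] - p[0, 1]) * (p[2, 0] - p[0, 0]))
    return grads, areas

def recover_nodal_gradient(nodes, tris, values):
    """Area-weighted average of adjacent element gradients at each node."""
    grads, areas = element_gradients(nodes, tris, values)
    num, den = np.zeros((len(nodes), 2)), np.zeros(len(nodes))
    for k, tri in enumerate(tris):
        for i in tri:
            num[i] += areas[k] * grads[k]
            den[i] += areas[k]
    return num / den[:, None]

def recover_nodal_hessian(nodes, tris, values):
    """Apply gradient recovery twice, then symmetrize the result."""
    g = recover_nodal_gradient(nodes, tris, values)
    Hx = recover_nodal_gradient(nodes, tris, g[:, 0])    # gradient of du/dx
    Hy = recover_nodal_gradient(nodes, tris, g[:, 1])    # gradient of du/dy
    H = np.stack([Hx, Hy], axis=1)                       # (n_nodes, 2, 2)
    return 0.5 * (H + np.transpose(H, (0, 2, 1)))

# Structured test mesh on the unit square; u = x^2 + x*y has Hessian [[2,1],[1,0]].
n = 11
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
nodes = np.column_stack([xs.ravel(), ys.ravel()])
tris = []
for j in range(n - 1):
    for i in range(n - 1):
        p = j * n + i
        tris += [(p, p + 1, p + n + 1), (p, p + n + 1, p + n)]
u = nodes[:, 0] ** 2 + nodes[:, 0] * nodes[:, 1]
print(recover_nodal_hessian(nodes, np.array(tris), u)[len(nodes) // 2])
```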
In the current computations, each run is stopped after 10 iterations, by which
point the adaptive procedure has essentially stabilized.
Denote by $nbt$ the number of elements (triangles in the 2D case) in the
current mesh. The number of triangles or nodes is adjusted when necessary by
trial and error through modification of the multiplicative coefficient of
the metric tensors.
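The overall adaptive procedure of Figure 2 then reduces to the loop sketched below; `solve_pde`, `recover_nodal_hessian`, `build_metric`, and `remesh_with_bamg` are hypothetical placeholders for the finite element solver, the Hessian recovery discussed above, the metric construction (3.2), and a call to the mesh generator, respectively. They are named only to make the control flow explicit.

```python
def adaptive_solve(initial_mesh, max_iters=10, alpha=1e-3):
    """Schematic solve-estimate-adapt loop (cf. Figure 2). All helper
    functions are hypothetical placeholders, not real solver or BAMG calls."""
    mesh = initial_mesh
    for _ in range(max_iters):                  # each run stopped after 10 iterations
        u_h = solve_pde(mesh)                   # linear FE solution on the current mesh
        H = recover_nodal_hessian(mesh, u_h)    # approximate Hessian field
        metric = build_metric(mesh, H, alpha)   # e.g. the modified tensor (3.2)
        mesh = remesh_with_bamg(mesh, metric)   # background mesh = most recent mesh
    return mesh, solve_pde(mesh)
```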
Example 4.1. This example, though numerically solved in
$\Omega\equiv(0,1)\times(0,1),$ is in fact one-dimensional:
$\displaystyle-\kappa\triangle u+\frac{\partial u}{\partial x_{1}}$
$\displaystyle=$ $\displaystyle f,$
with $f=0$, $u(x_{1}=0,x_{2})=0$, $u(x_{1}=1,x_{2})=1$, and $\frac{\partial
u}{\partial n}=0$ along the top and bottom sides of the square (taken from
[8]). The exact solution is given by
$\displaystyle u({\bf
x})=\frac{1-e^{\frac{x_{1}}{\kappa}}}{1-e^{\frac{1}{\kappa}}},$
with a boundary layer of width $\kappa$ at $x_{1}\approx 1$. We have set
$\kappa=0.0015$, so that convection is dominant and the Galerkin method yields
oscillatory solutions unless the mesh is highly refined at the boundary layer.
Figure 3 (a) compares the $H^{1}$ semi-norms of the discretization errors for the
two metric tensors. In (b), the $H^{2}$ semi-norm of the error is computed from the
difference between the Hessian of $u$ and the recovered one, which reflects
the quality of the meshes to a certain extent.
Figure 4: Example 4.2. (a) Anisotropic mesh obtained with the metric tensor
$\mathcal{M}_{1}^{m}({\bf x})$: $nbt=4244$, $|e|_{H^{1}}=0.3727$, and
$|e|_{H^{2}}=1762$. (b) Anisotropic mesh obtained with the metric tensor
$\mathcal{M}_{1}^{n}({\bf x})$: $nbt=4243$, $|e|_{H^{1}}=0.2842$, and
$|e|_{H^{2}}=1101$.
Example 4.2. We study the Poisson equation (taken from [16])
$\left\\{\begin{array}[]{lll}-\triangle
u&=&f,\quad\mbox{in}\quad\Omega\equiv(0,1)\times(0,1),\\\
u&=&0,\quad\mbox{on}\quad\partial\Omega,\end{array}\right.$
where $f$ has been chosen in such a way that the exact solution is $u({\bf
x})=[1-e^{-\alpha x_{1}}-(1-e^{-\alpha})x_{1}]4x_{2}(1-x_{2})$, with $\alpha$
chosen to be 1000. Notice that the function $u$ exhibits an exponential layer
along the boundary $x_{1}=0$ with an initial slope of order $\alpha$.
Figure 5: Example 4.2. The $H^{1}$ and $H^{2}$ semi-norms of discretization
error are plotted as functions of the number of elements ($nbt$) in (a) and
(b), respectively.
Figure 6: Example 4.3. $\beta=40$, (a) Anisotropic mesh obtained with the
metric tensor $\mathcal{M}_{1}^{m}({\bf x})$: $nbt=892$, $|e|_{H^{1}}=0.2581$,
$|e|_{H^{2}}=102.0$. (b) Anisotropic mesh obtained with the metric tensor
$\mathcal{M}_{1}^{n}({\bf x})$: $nbt=891$, $|e|_{H^{1}}=0.1893$,
$|e|_{H^{2}}=57.57$.
Figure 4 contains the two meshes obtained with the two different metric tensors
(4.2) and (4.3). It is easily seen that the two meshes are obviously different:
the mesh produced by the new metric tensor (4.3) captures more anisotropic
features of the solution $u$ than that produced by (4.2). In other words, the
term (4.1) supplies more comprehensive information about the exact solution.
Figure 5 shows the $H^{1}$ and $H^{2}$ semi-norms of the error, analogous to
Figure 3.
Example 4.3. Let $\Omega\equiv(0,1)\times(0,1)$, and $u$ be the solution of
$\displaystyle-\triangle
u=\beta(\beta-1)x_{1}^{\beta-2}(1-x_{2}^{2\beta})+2\beta(2\beta-1)x_{2}^{2\beta-2}(1-x_{1}^{\beta})$
with boundary conditions $u=0$ along the sides $x_{1}=1$ and $x_{2}=1$, and
$\partial u/\partial n=0$ along $x_{1}=0$ and $x_{2}=0$ (taken from [8]). The
exact solution $u({\bf x})=(1-x_{1}^{\beta})(1-x_{2}^{2\beta})$ exhibits two
boundary layers along the right and top sides, the latter being stronger than
the former.
Figure 7: Example 4.3. The $H^{1}$ semi-norms of discretization error are
plotted as functions of $nbt$ under the conditions (a) $\beta=5$, (b)
$\beta=10$, (c) $\beta=20$, (d) $\beta=40$, respectively.
Figure 6 shows the two meshes obtained with the two different metric tensors,
and the difference is obvious. This comparison again indicates that our metric
tensor captures more comprehensive anisotropic information about the solution
$u$. Figures 7 and 8 contain the $H^{1}$ and $H^{2}$ semi-norms of the error,
respectively, for various values of the parameter $\beta$. The larger $\beta$
becomes, the more significant the anisotropy of the solution. We can therefore
conclude that the more anisotropic the solution is, the more pronounced the
difference becomes. In fact, the difference stems from the term (4.1), which
indicates that our metric tensor captures more comprehensive anisotropic
information about the solution when this term varies significantly between
points or elements.
Figure 8: Example 4.3. The $H^{2}$ semi-norms of discretization error are
plotted as functions of $nbt$ under the conditions (a) $\beta=5$, (b)
$\beta=10$, (c) $\beta=20$, (d) $\beta=40$, respectively.
## 5 Conclusions
In the previous sections we have developed a new anisotropic mesh adaptation
strategy for the finite element solution of elliptic differential equations. Note
that the new metric tensor $\mathcal{M}_{0}$ for the $L^{2}$ norm is similar
to (1.9) ([24]). However, the new metric tensor $\mathcal{M}_{1}$ is
essentially different from existing metric tensors. The difference lies
in the term (1.13), which indicates that our metric tensor captures more
comprehensive anisotropic information about the solution when this term varies
significantly between points or elements. Numerical results also show
that this approach is superior to existing ones for Poisson and
steady convection-dominated problems.
## References
* [1] D. Ait-Ali-Yahia, W. Habashi, A. Tam, M.-G. Vallet, M. Fortin, A directionally adaptive methodology using an edge-based error estimate on quadrilateral grids, Int. J. Numer. Methods Fluids 23 (1996) 673-690.
* [2] T. Apel, G. Lube, Anisotropic mesh refinement in stabilized Galerkin methods, Numer. Math. 74(3) (1996) 261-282.
* [3] R.E. Bank, R.K. Smith, Mesh smoothing using a posteriori error estimates, SIAM J. Numer. Anal. 34 (1997) 979-997.
* [4] R. Becker, An adaptive finite element method for the incompressible Navier-Stokes equations on time-dependent domains, Ph.D. thesis, Ruprecht-Karls-Universität Heidelberg, 1995.
* [5] H. Borouchaki, P.L. George, F. Hecht, P. Laug and E. Saltel, Delaunay mesh generation governed by metric specifications Part I. Algorithms, finite elem. anal. des. 25 (1997) 61-83.
* [6] H. Borouchaki, P.L. George, B. Mohammadi, Delaunay mesh generation governed by metric specifications Part II. Applications, finite elem. anal. des. 25 (1997) 85-109.
* [7] J.U. Brackbill, J.S. Saltzman, Adaptive zoning for singular problems in two dimensions, J. Comput. Phys. 46 (1982) 342-368.
* [8] G. Buscaglia, E. Dari, Anisotropic Mesh Optimization and its Application in Adaptivity, Int. J. Numer. Meth. Eng. 40(22) (1997) 4119-4136.
* [9] W. Cao, On the error of linear interpolation and the orientation, aspect ratio, and internal angles of a triangle, SIAM J. Numer. Anal. 43(1) (2005) 19-40.
* [10] M.J. Castro-Díaz, F. Hecht, B. Mohammadi, O. Pironneau, Anisotropic unstructured mesh adaption for flow simulations, Internat. J. Numer. Methods Fluids 25(4) (1997) 475-491.
* [11] L. Chen, P. Sun, J. Xu, Optimal anisotropic meshes for minimizing interpolation errors in $L^{p}$-norm, Math. Comp. 76(257) (2007) 179-204.
* [12] E.F. D’Azevedo, R.B. Simpson, On optimal interpolation triangle incidences, SIAM J. Sci. Statist. Comput. 10 (1989) 1063-1075.
* [13] E.F. D’Azevedo, Optimal triangular mesh generation by coordinate transformation, SIAM J. Sci. Stat. Comput. 12 (1991) 755-786.
* [14] E.F. D’Azevedo, R.B. Simpson, On optimal triangular meshes for minimizing the gradient error, Numer. Math. 59 (1991) 321-348.
* [15] A.S. Dvinsky, Adaptive grid generation from harmonic maps on Riemannian manifolds, J. Comput. Phys. 95 (1991) 450-476.
* [16] L. Formaggia, S. Perotto, Anisotropic error estimates for elliptic problems, Numer. Math. 94(1) (2003) 67-92.
* [17] P. Frey, P.L. George, Mesh Generation: Application to Finite Elements, Hermes Science, Oxford and Paris, 2000.
* [18] R.V. Garimella, M.S. Shephard, Boundary layer meshing for viscous flows in complex domain. in: Proceedings of the 7th International Meshing Roundtable, Sandia National Laboratories, Albuquerque, NM, 1998, 107-118.
* [19] P.L. George, F. Hecht. Nonisotropic grids, in: J.F. Thompson, B.K. Soni, N.P. Weatherill, (Eds.), Handbook of Grid Generation, CRC Press, Boca Raton, 1999 20.1-20.29.
* [20] W.G. Habashi, J. Dompierre, Y. Bourgault, D. Ait-Ali-Yahia, M. Fortin, M.-G. Vallet, Anisotropic mesh adaptation: towards user-indepedent, mesh-independent and solver-independent CFD. Part I: general principles, Int. J. Numer. Meth. Fluids 32 (2000) 725-744.
* [21] W.G. Habashi, M. Fortin, J. Dompierre, M.-G. Vallet, Y. Bourgault, Anisotropic mesh adaptation: a step towards a mesh-independent and user-independent CFD, Barriers and challenges in computational fluid dynamics (Hampton, VA, 1996), 99-117, Kluwer Acad. Publ., Dordrecht, 1998.
* [22] F. Hecht, Bidimensional anisotropic mesh generator, Technical Report, INRIA, Rocquencourt, 1997.
* [23] W. Huang. Measuring mesh qualities and application to variational mesh adaptation. SIAM J. Sci. Comput. 26(5) (2005) 1643-1666.
* [24] W. Huang, Metric tensors for anisotropic mesh generation, J. Comput. Phys. 204(2) (2005) 633-665.
* [25] O.P. Jacquotte, A mechanical model for a new grid generation method in computational fluid dynamics, Comput. Meth. Appl. Mech. Eng. 66 (1988) 323-338.
* [26] P. Knupp, L. Margolin, M. Shashkov, Reference jacobian optimization-based rezone strategies for arbitrary lagrangian eulerian methods, J. Comput. Phys. 176 (2002) 93-128.
* [27] R. Kornhuber, R. Roitzsch, On adaptive grid refinement in the presence of internal or boundary layers, IMPACT Comput. Sci. Eng. 2 (1990) 40-72.
* [28] J. Lang, An adaptive finite element method for convection-diffusion problems by interpolation techniques, Technical Report TR 91-4, Konrad-Zuse-Zentrum Berlin, 1991.
* [29] R. Li, T. Tang, and P. Zhang, Moving mesh methods in multiple dimensions based on harmonic maps, J. Comput. Phys. 170(2) (2001) 562-588.
* [30] E.J. Nadler, Piecewise linear approximation on triangulations of a planar region, Ph.D. Thesis, Division of Applied Mathematics, Brown University, Providence, RI, 1985.
* [31] J. Peraire, M. Vahdati, K. Morgan, O.C. Zienkiewicz, Adaptive remeshing for compressible flow computation, J. Comp. Phys. 72(2) (1987) 449-466.
* [32] W. Rachowicz, An anisotropic h-adaptive finite element method for compressible Navier-Stokes equations, Comput. Meth. Appl. Mech. Eng. 146 (1997) 231-252.
* [33] J. Remacle, X. Li, M.S. Shephard, and J.E. Flaherty, Anisotropic adaptive simulation of transient flows using discontinuous Galerkin methods, Int. J. Numer. Meth. Eng., 62(7) (2005) 899-923.
* [34] S. Rippa, Long and thin triangles can be good for linear interpolation, SIAM J. Numer. Anal. 29 (1992) 257-270.
* [35] S. Yamakawa and K. Shimada, High quality anisotropic tetrahedral mesh generation via ellipsoidal bubble packing. in: Proceedings of the 9th International Meshing Roundtable, Sandia National Laboratories, Albuquerque, NM, 2000. Sandia Report 2000-2207, 263-273.
* [36] X. Yin, H. Xie, A-posteriori error estimators suitable for moving mesh methods under anisotropic meshes, to appear.
* [37] X. Zhang, Accuracy concern for Hessian metric, Internal Note, CERCA.
* [38] Z. Zhang, A. Naga, A new finite element gradient recovery method: superconvergence property, SIAM J. Sci. Comput. 26(4) (2005) 1192-1213.
* [39] O.C. Zienkiewicz, J. Wu, Automatic directional refinement in adaptive analysis of compressible flows, Int. J. Numer. Meth. Eng. 37 (1994) 2189-2210.
* [40] O.C. Zienkiewicz, J. Zhu, A simple error estimator and adaptive procedure for practical engineering analysis, Int. J. Numer. Meth. Eng. 24 (1987) 337-357.
|
arxiv-papers
| 2011-05-17T01:36:27 |
2024-09-04T02:49:18.787174
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Xiaobo Yin and Hehu Xie",
"submitter": "Xiaobo Yin",
"url": "https://arxiv.org/abs/1105.3254"
}
|
1105.3313
|
# Simultaneous wavelength translation and amplitude modulation of single
photons from a quantum dot
Matthew T. Rakher matthew.rakher@gmail.com Current Address: Departement
Physik, Universität Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland
Center for Nanoscale Science and Technology, National Institute of Standards
and Technology, Gaithersburg, MD 20899, USA Lijun Ma Information Technology
Laboratory, National Institute of Standards and Technology, Gaithersburg, MD
20899, USA Marcelo Davanço Center for Nanoscale Science and Technology,
National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
Maryland NanoCenter, University of Maryland, College Park, MD 20742, USA
Oliver Slattery Information Technology Laboratory, National Institute of
Standards and Technology, Gaithersburg, MD 20899, USA Xiao Tang Information
Technology Laboratory, National Institute of Standards and Technology,
Gaithersburg, MD 20899, USA Kartik Srinivasan Center for Nanoscale Science
and Technology, National Institute of Standards and Technology, Gaithersburg,
MD 20899, USA
###### Abstract
Hybrid quantum information devices that combine disparate physical systems
interacting through photons offer the promise of combining low-loss
telecommunications wavelength transmission with high fidelity visible
wavelength storage and manipulation. The realization of such systems requires
control over the waveform of single photons to achieve spectral and temporal
matching. Here, we experimentally demonstrate the simultaneous wavelength
conversion and temporal shaping of single photons generated by a quantum dot
emitting near 1300 nm with an exponentially-decaying waveform (lifetime
$\approx$1.5 ns). Quasi-phase-matched sum-frequency generation with a pulsed
1550 nm laser creates single photons at 710 nm with a controlled amplitude
modulation at 350 ps timescales.
###### pacs:
The integration of disparate quantum systems is an ongoing effort in the
development of distributed quantum networks Kimble (2008). Two challenges in
hybrid schemes which use photons for coupling include the differences in
transition frequencies and linewidths among the component systems. Previous
experiments using non-linear optical media to translate (or transduce) photons
from one wavelength to another while preserving quantum-mechanical properties
Huang and Kumar (1992); Tanzilli _et al._ (2005); McGuinness _et al._ (2010); Rakher
_et al._ (2010) provide a means to overcome the first impediment. The second
constraint has been addressed through single photon waveform manipulation
using direct electro-optic modulation Kolchin _et al._ (2008);
Specht _et al._ (2009); Rakher and Srinivasan (2011), $\Lambda$-type cavity-QED McKeever
_et al._ (2004); Keller _et al._ (2004), and atomic ensemble systems Eisaman _et al._
(2004); Chen _et al._ (2010), and conditional, non-local operations in spontaneous
parametric downconversion Baek _et al._ (2008). Here, we use pulsed frequency
upconversion to simultaneously change the frequency and temporal amplitude
profile of single photons produced by a semiconductor quantum dot (QD).
Triggered single photons that have an exponentially decaying temporal profile
with a time constant of 1.5 ns and a wavelength of 1300 nm are converted to
photons that have a Gaussian temporal profile with a controllable full-width
at half-maximum (FWHM) as narrow as 350 ps $\pm$ 16 ps and a wavelength of 710
nm. The use of a high conversion efficiency nonlinear waveguide and low-loss
fiber optics results in a 16 $\%$ overall efficiency in producing such
frequency converted, amplitude-modulated photons. The simultaneous combination
of wavepacket manipulation and quantum frequency conversion may be valuable in
integrating telecommunications-band semiconductor QDs with broadband visible
wavelength quantum memories Reim _et al._ (2010); Saglamyurek _et al._ (2011) as
part of a distributed quantum network, for the creation of ultra-high
bandwidth Kielpinski _et al._ (2011), indistinguishable photons from
disparate quantum emitters, and for efficient, temporally-resolved photon
counting at the ps level.
Figure 1: Schematic of the experiment for simultaneous wavelength translation
and amplitude modulation of single photons from a quantum dot.
Single epitaxially-grown semiconductor QDs are promising stable, bright, and
scalable on-demand sources of single photons Shields (2007), with improvements
in photon extraction efficiency Strauf _et al._ (2007); Claudon _et al._
(2010), suppression of multi-photon events Claudon _et al._ (2010); Ates _et
al._ (2009), and photon indistinguishability Santori _et al._ (2002); Ates
_et al._ (2009) indicative of their potential for high performance in quantum
information applications. On the other hand, the dominant and most mature
system for such QD single photon source developments has been the InGaAs/GaAs
material system, whose band structure constrains the range of available
emission energies, and temporal control of the photon wavepacket shape remains
a challenge in these systems. Despite recent progress Fernandez _et al._
(2009), access to three-level Raman transitions in which the temporal shape is
determined by the pump mode profile, a staple of trapped atom and ion systems
McKeever _et al._ (2004); Keller _et al._ (2004); Vasilev _et al._ (2010)
is typically not available. Instead, most QDs are two-level systems in which
the emitted photons have an exponentially decaying temporal profile, and
temporal shaping must occur after photon generation. As we describe below, the
method used in this work produces wavelength-translated, single photon
wavepackets with a temporal profile inherited from the classical pump beam
used in the frequency upconversion process. Though this technique is not
lossless, it can still be quite efficient, is flexible, straightforward to use
on existing devices, and operates on the nanosecond timescales requisite for
QD experiments and for which classical coherent pulse shaping techniques
Weiner (2000) are difficult to implement. In comparison to direct amplitude
modulation of single photon wavepackets Kolchin _et al._ (2008);
Specht _et al._ (2009); Rakher and Srinivasan (2011), the technique presented here has
lower loss, can operate on much faster timescales using existing mode-locked
laser technologies, and importantly, simultaneously changes the wavelength of
the photons. This is necessary for integration with visible wavelength quantum
systems and provides a method to overcome spectral and temporal
distinguishability of disparate quantum sources.
We generated single photons at 1.3 $\mu$m from a single InAs QD embedded in a
GaAs mesa SM_ . The QD sample is cooled to a temperature of $\approx$7 K,
excited with a repetitively pulsed (50 MHz) laser, and its photoluminescence
(PL) is out-coupled into a single mode fiber as depicted in Fig. 1. The PL is
directed either into a grating spectrometer for characterization or into the
pulsed upconversion setup for simultaneous wavelength translation and
amplitude modulation. The PL spectrum from a single QD measured by the
spectrometer is shown in Fig. 2(a). It displays two sharp peaks corresponding
to two excitonic charge configurations, $X^{+}$ near 1296 nm, and $X^{0}$ near
1297 nm. Photons emitted at the $X^{0}$ transition wavelength will be used for
the experiments described hereafter.
PL from the $X^{0}$ transition is directed into an upconversion setup where it
is combined with a strong 1550 nm pulse in a periodically-poled LiNbO3 (PPLN)
waveguide SM_ . A simplified schematic of the experimental timing sequence is
shown in Fig. 1. The pump pulse is created by gating the output of a tunable
laser with an electro-optic modulator (EOM). An electrical pulse generator
drives the EOM synchronously with the 780 nm QD excitation laser, but at half
the repetition rate (25 MHz), using a trigger signal from the delay generator.
These instruments combine to generate electrical pulses with controllable FWHM
($\tau_{mod}$) and delay ($\Delta T_{mod}$) as shown in Fig. 1, and the
resulting optical pulses have an extinction ratio $>20$ dB. The modulated 1550
nm pump signal is amplified to produce a peak power of 85 mW entering the PPLN
waveguide where it interacts with a 1300 nm QD single photon to create a
single photon at the sum-frequency near 710 nm. This $\chi^{(2)}$ process is
made efficient through quasi-phase-matching by periodic poling Fejer _et al._
(1992) as well as the tight optical confinement of the waveguide Chanvillard
_et al._ (2000). Previous measurements using a continuous-wave (CW) pump in
the same setup demonstrated single-photon conversion efficiencies
$\gtrsim~{}75$ $\%$ Rakher _et al._ (2010), and others have measured
efficiencies near unity with attenuated laser pulses Langrock _et al._
(2005); Vandevender and Kwiat (2004); *ref:Albota_Wong_upconversion;
*ref:Xu_Tang. Light exiting the PPLN is spectrally filtered to isolate the 710
nm photons, which are detected by Si single photon counting avalanche
detectors (SPADs) for excited state lifetime and second-order correlation
measurement ($g^{(2)}(\tau)$).
Figure 2: (a) PL spectrum of a single QD after 60 s integration showing two
excitonic transitions, $X^{+}$ (1296 nm) and $X^{0}$ (1297 nm). (b-c) Second-
order intensity correlation, $g^{(2)}(\tau)$, of single photons upconverted
with a CW pump, where $g^{(2)}(0)=0.41\pm 0.02$ (b), and a pulsed pump with
$\tau_{mod}$=500 ps, where $g^{(2)}(0)=0.45\pm 0.03$ (c), for an integration time
of 7200 s.
First, we compare the measured $g^{(2)}(\tau)$ for photons that are
upconverted using a 1550 nm CW pump (Fig. 2(b)) and 500 ps pulses (Fig. 2(c))
with the same peak power of 85 mW. Both are antibunched with $g^{(2)}(0)<0.5$,
showing that the signal is dominantly composed of single photons in both
cases. However, pulsed pumping reduces events that are uncorrelated in time
with the QD single photons and contribute a constant background. This unwanted
background results from upconversion of anti-Stokes Raman photons from the
strong (CW) 1550 nm beam Langrock _et al._ (2005), and is seen in Fig. 2(b)
but not in Fig. 2(c). For understanding the origin of the non-zero
$g^{(2)}(0)$ value, the background is helpful in distinguishing the fraction
due to anti-Stokes Raman photons from that due to upconversion of multi-photon
emission from the QD sample Rakher _et al._ (2010). For a practical
implementation, however, it adds a constant level to the communications
channel and pulsed upconversion removes this noise without gating of the
detector. Ideally, Fig. 2(c) would only show peaks spaced by 40 ns, due to the
25 MHz repetition rate of the EOM. In practice, the small peaks spaced 20 ns
from the large peaks are due to imperfect extinction of the EOM and pulse
generator, resulting in upconversion of QD photons when the EOM is nominally
off.
Figure 3: (a,b) Temporal profile of the upconverted photons using a CW (blue)
and $\tau_{mod}=$260 ps pulsed (maroon) 1550 nm pump on linear (a) and log (b)
scales. Inset: 1550 nm pump pulse measured by the optical communications
analyzer. (c) Same as (a) but using a reduced timing jitter SPAD. (d) Temporal
profile of upconverted photons using $\tau_{mod}=\\{0.5,1.25,2.5,5.1\\}$ ns
along with a CW pump. All measurements are taken with 1200 s integration.
Next, we perform time-resolved measurements of the upconverted 710 nm photons.
In recent work Rakher _et al._ (2010) using a CW 1550 nm pump beam, the
temporal amplitude profile of the upconverted 710 nm photon exactly matched
that of the original 1300 nm photon, and was used to measure the QD lifetime
with dramatically better dynamic range than with a telecommunications SPAD.
Here, the pulsed 1550 nm pump not only upconverts the QD photon to 710 nm, but
also modulates its temporal amplitude profile because $\tau_{mod}$ is less
than the lifetime of the QD transition ($\approx 1.5$ ns). Figure 3(a)
displays the temporal amplitude profile of 710 nm single photons generated
using a 1550 nm pulse with $\tau_{mod}=260$ ps (maroon), along with that of
single photons generated with a CW pump (blue) for comparison. The measured
480 ps $\pm$ 16 ps FWHM of the upconverted photon is limited by the
$\approx$350 ps timing jitter of the Si SPAD and its uncertainty is given by
the timebase of the photon counting board. The same plot is reproduced in Fig.
3(b) on a log scale, with an apparent increase in the dynamic range due to the
removal of CW anti-Stokes Raman photons. This same measurement was performed
using a SPAD with a reduced timing jitter ($\approx 50$ ps), and the resulting
data is shown in Fig. 3(c) corresponding to a FWHM of 350 ps $\pm$ 16 ps.
Here, the resulting FWHM is not limited by the detector timing jitter but by
an effective broadening of the pump pulse in the frequency conversion process
SM_ . Even so, taken together with the commercial availability of 40 GHz EOMs
and drivers for 1550 nm lasers, these results demonstrate a first step towards
the creation of quantum light sources that are modulated to operate near the
capacity of telecommunications channels Kielpinski _et al._ (2011). To show
the versatility of the setup, Fig. 3(d) shows the temporal profile of QD
single photons after upconversion using pump pulse widths of $\tau_{mod}=$ 500
ps, 1.25 ns, 2.5 ns, and 5.1 ns along with a CW pump for comparison. By simply
adjusting the pulse generator that drives the EOM, one can create single
photons of arbitrary width and shape SM_ .
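The shape of the curves in Fig. 3 can be understood with a simple numerical model: if, in the low-conversion limit, the instantaneous conversion probability is taken to be proportional to the pump intensity, the detected 710 nm profile is approximately the product of the QD's exponential wavepacket and the pump gate, convolved with the detector timing jitter. The Python sketch below implements this approximation; the Gaussian pulse shapes and parameter values are assumptions made for illustration and do not reproduce the full treatment in the supplemental material.

```python
import numpy as np

def gaussian(t, t0, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def detected_profile(tau_qd=1.5, tau_mod=0.26, delay=0.3,
                     jitter_fwhm=0.35, dt=0.005, t_max=10.0):
    """Detected 710 nm temporal profile (arbitrary units); times in ns.
    Assumes conversion probability proportional to the pump intensity."""
    t = np.arange(0.0, t_max, dt)
    photon = np.exp(-t / tau_qd)                     # QD wavepacket (t >= 0)
    pump = gaussian(t, delay, tau_mod)               # pump gate
    converted = photon * pump                        # low-conversion limit
    jitter = gaussian(t, t_max / 2.0, jitter_fwhm)   # detector response
    detected = np.convolve(converted, jitter, mode="same")
    return t, detected / detected.max()

t, profile = detected_profile()
above = np.where(profile >= 0.5)[0]
print("approximate detected FWHM: %.0f ps" % (1e3 * (t[above[-1]] - t[above[0]])))
```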
In addition to changing $\tau_{mod}$, the delay between the arrival of the QD
single photon and pump pulse, $\Delta T_{mod}$, can also be varied (Fig. 1).
Figure 4(a)-(b) show the result of such a measurement in linear (a) and log
(b) scale for $\Delta T_{mod}=\\{0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.35\\}$ ns under
pulsed pumping with $\tau_{mod}=260$ ps. The inset of (a) shows a similar
measurement using the reduced timing jitter SPAD. The peak heights nicely
follow the decay curve of the CW profile, shown in blue for comparison. This
measurement suggests that pulsed frequency upconversion could be used for
achieving high timing resolution in experiments on single quantum emitters.
These time-correlated single photon counting experiments are currently limited
by the timing jitter of the SPAD, which is typically $>50$ ps. The time-domain
sampling enabled by pulsed upconversion Shah (1988); Ma _et al._ (2011) provides a
timing resolution set by $\tau_{mod}$, which is limited by the quasi-phase-
matching spectral bandwidth of the non-linear material. For the PPLN waveguide
used here, the bandwidth ($\approx 0.35$ nm) corresponds to a minimum
$\tau_{mod}\approx$10 ps, while sub-ps timing resolution should be available
in broader bandwidth systems Kuzucu _et al._ (2008); Suchowski _et al._ (2010).
Sub-ps 1550 nm pulses can be generated by mode-locked fiber lasers, and if
applied as in Fig. 4(a)-(b), could trace out emission dynamics with a timing
resolution 1-2 orders of magnitude better than the typical timing jitter of a
SPAD, allowing, for example, measurement of beat phenomena within the QD
Flissikowski _et al._ (2001) or time-domain observation of vacuum Rabi
oscillations in cavity quantum electrodynamics Srinivasan _et al._ (2008).
Figure 4: (a,b) Temporal profile of the upconverted photons using a CW (blue)
and $\tau_{mod}=$260 ps pulsed (various colors) 1550 nm pump on linear (a) and
log (b) scales for delays $\Delta
T_{mod}=\\{0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.35\\}$ ns. Inset: Similar measurement
as (a), but measured with reduced timing jitter SPAD. All measurements are
taken with 1200 s integration. (c) Net conversion efficiency and photon count
rate as a function of the pump pulse FWHM, $\tau_{mod}$. Inset: Focus on sub-
ns regime.
The data from Figs. 2 and 3 indicate that while the quantum nature of the
photon has been inherited from the QD emission near 1300 nm, its temporal
profile has been inherited from the strong pump pulse near 1550 nm. This is a
direct consequence of the nonlinear nature of the upconversion process.
However, because QD-generated single photons have a coherence time that is
typically less than twice the lifetime, they are not perfectly
indistinguishable Santori _et al._ (2002). This arises due to interaction of
the confined carriers in the QD with the surrounding crystal and, for the type
of QD considered here, yields a coherence time of $\approx 280$ ps Srinivasan
_et al._ (2008) and an indistinguishability of $\approx 10$ $\%$. For our
experiments, this means that each photon is not modulated in the same way and
the resulting histograms are ensemble averages. Nonetheless, the experiments
would proceed in the exact same manner for more ideal photons, such as those
produced with a ns coherence time through near-resonant excitation Ates _et
al._ (2009). In fact, simultaneous frequency translation and amplitude
modulation can be used to generate indistinguishable single photons from non-
identical QDs Patel _et al._ (2010); *ref:Flagg_PRL10. Frequency translation
can move each single photon to the same wavelength while amplitude modulation
can be used to select the coherent part of the wave-packet. Since the quasi-
phase-matching of the PPLN can be tuned by temperature, this offers the
ability to create indistinguishable single photons from QDs spectrally
separated by the entire inhomogeneous linewidth of the QD distribution (which
is usually tens of nanometers) without the need for electrical gates or
modification of the sample.
The single photon manipulation demonstrated here is essentially a combination
of quantum frequency conversion and amplitude modulation. Coherent pulse-
shaping techniques Weiner (2000), which have been used with entangled photon
pairs Pe’er _et al._ (2005), are currently quite difficult to directly apply
to QD single photons due to their narrow spectral bandwidth compared to
photons produced by parametric downconversion, for example. Furthermore,
recent work Kielpinski _et al._ (2011) has suggested that a combination of
frequency upconversion using a spectrally tailored 1550 nm pump beam and
spectral phase correction may be an approach to lossless shaping of QD single
photons. Our work, utilizing a similar sum frequency generation approach,
represents a step towards such a goal. Though our approach is not lossless,
broadband insertion loss (usually $>3$ dB) is avoided in comparison to direct
amplitude modulation of the single photon state because the modulation in
nonlinear mixing approaches such as ours and that of Ref. Kielpinski _et al._
(2011) is performed on the classical pump beam. Nonetheless, the fact that the
pump pulse is temporally shorter than the single photon wave-packet introduces
extra loss. A full derivation of this loss is included in the supplemental
material SM_ , but the result is shown in Fig. 4(c) which plots the net
conversion efficiency as a function of the pump pulse FWHM, $\tau_{mod}$, from
100 ps to 5 ns (inset displays sub-ns regime). The efficiency asymptotically
approaches 75 $\%$, the measured conversion efficiency with a CW pump, and
ranges from 16 $\%$ for $\tau_{mod}=260$ ps to 71 $\%$ for $\tau_{mod}=5$ ns.
For our FTW-based PL collection with 0.1 $\%$ collection efficiency and 50 MHz
excitation rate, this translates to a single photon count rate of 8000
$s^{-1}$ to 36000 $s^{-1}$ as shown on the right axis in Fig. 4(c). Using more
advanced techniques that have demonstrated $>$10 $\%$ collection efficiency
Strauf _et al._ (2007); Claudon _et al._ (2010), the overall production rate of
frequency translated, temporally modulated single photons can easily reach
$10^{6}$ $s^{-1}$.
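The trend of Fig. 4(c) can be estimated with a toy model in which the net efficiency is the CW conversion efficiency multiplied by the fraction of the exponential wavepacket that overlaps the pump gate, maximized over the gate delay. The sketch below implements this estimate; the Gaussian gate shape and the neglect of the pulse broadening discussed in the supplemental material are simplifying assumptions, so the numbers are only indicative of the measured 16 $\%$ to 71 $\%$ range.

```python
import numpy as np

def net_efficiency(tau_mod, tau_qd=1.5, eta_cw=0.75, dt=0.001, t_max=30.0):
    """Toy estimate of the net conversion efficiency: CW efficiency times the
    overlap of the exponential wavepacket with a Gaussian gate of FWHM tau_mod
    (ns), maximized over the gate delay."""
    t = np.arange(0.0, t_max, dt)
    wavepacket = np.exp(-t / tau_qd) / tau_qd         # normalized density
    sigma = tau_mod / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    overlaps = [np.trapz(wavepacket * np.exp(-0.5 * ((t - t0) / sigma) ** 2), t)
                for t0 in np.arange(0.0, 3.0 * tau_qd, 0.05)]
    return eta_cw * max(overlaps)

for tau_mod in (0.26, 0.5, 1.25, 2.5, 5.0):
    print("tau_mod = %.2f ns -> eta ~ %.2f" % (tau_mod, net_efficiency(tau_mod)))
```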
In summary, we have demonstrated simultaneous wavelength translation and
amplitude modulation of a single photon from a quantum dot using pulsed
frequency upconversion. The use of a quasi-phase-matched waveguide and low-
loss fiber optics results in a 16 $\%$ overall conversion efficiency in
producing Gaussian-shaped, single photon pulses near 710 nm with a FWHM of 350
ps from single photons near 1.3 $\mu$m with an exponentially-decaying
wavepacket. Such methods may prove valuable for integrating disparate quantum
systems, creating ultra-high bandwidth indistinguishable single photon
sources, and for achieving high resolution in time-resolved experiments of
single quantum systems.
## References
* Kimble (2008) H. J. Kimble, Nature (London) 453, 1023 (2008).
* Huang and Kumar (1992) J. M. Huang and P. Kumar, Phys. Rev. Lett. 68, 2153 (1992).
* Tanzilli _et al._ (2005) S. Tanzilli _et al._ , Nature 437, 116 (2005).
* McGuinness _et al._ (2010) H. J. McGuinness _et al._ , Phys. Rev. Lett. 105, 093604 (2010).
* Rakher _et al._ (2010) M. T. Rakher _et al._ , Nature Photonics 4, 786 (2010).
* Kolchin _et al._ (2008) P. Kolchin _et al._ , Phys. Rev. Lett. 101, 103601 (2008).
* Specht _et al._ (2009) H. P. Specht _et al._ , Nature Photonics 3, 469 (2009).
* Rakher and Srinivasan (2011) M. T. Rakher and K. Srinivasan, Appl. Phys. Lett. 98, 211103 (2011).
* McKeever _et al._ (2004) J. McKeever _et al._ , Science 303, 1992 (2004).
* Keller _et al._ (2004) M. Keller _et al._ , Nature 431, 1075 (2004).
* Eisaman _et al._ (2004) M. D. Eisaman _et al._ , Phys. Rev. Lett. 93, 233602 (2004).
* Chen _et al._ (2010) J. F. Chen _et al._ ,Phys. Rev. Lett. 104, 183604 (2010).
* Baek _et al._ (2008) S.-Y. Baek, O. Kwon, and Y.-H. Kim, Phys. Rev. A 77, 013829 (2008).
* Reim _et al._ (2010) K. F. Reim _et al._ ,Nature Photonics 4, 218 (2010).
* Saglamyurek _et al._ (2011) E. Saglamyurek _et al._ ,Nature (London) 469, 512 (2011).
* Kielpinski _et al._ (2011) D. Kielpinski, J. F. Corney, and H. M. Wiseman, Phys. Rev. Lett. 106, 130501 (2011).
* Shields (2007) A. J. Shields, Nature Photonics 1, 215 (2007).
* Strauf _et al._ (2007) S. Strauf _et al._ , Nature Photonics 1, 704 (2007).
* Claudon _et al._ (2010) J. Claudon _et al._ , Nature Photonics 4, 174 (2010).
* Ates _et al._ (2009) S. Ates _et al._ , Phys. Rev. Lett. 103, 167402 (2009).
* Santori _et al._ (2002) C. Santori _et al._ , Nature 419, 594 (2002).
* Fernandez _et al._ (2009) G. Fernandez _et al._ , Phys. Rev. Lett. 103, 087406 (2009).
* Vasilev _et al._ (2010) G. S. Vasilev, D. Ljunggren, and A. Kuhn, New Journal of Physics 12, 063024 (2010).
* Weiner (2000) A. M. Weiner, Review of Scientific Instruments 71, 1929 (2000).
* (25) See Supplemental Material at [URL will be inserted by publisher] for experimental details, a full analysis of the total conversion efficiency, and more complex amplitude profiles.
* Fejer _et al._ (1992) M. M. Fejer _et al._ , IEEE J. Quan. Elec. 28, 2631 (1992).
* Chanvillard _et al._ (2000) L. Chanvillard _et al._ , Appl. Phys. Lett. 76, 1089 (2000).
* Langrock _et al._ (2005) C. Langrock _et al._ , Opt. Lett. 30, 1725 (2005).
* Vandevender and Kwiat (2004) A. Vandevender and P. Kwiat, J. Mod. Opt. 51, 1433 (2004).
* Albota and Wong (2004) M. Albota and F. Wong, Opt. Lett. 29, 1449 (2004).
* Xu _et al._ (2007) H. Xu _et al._ , Opt. Express 15, 7247 (2007).
* Shah (1988) J. Shah, IEEE J. Quan. Elec. 24, 276 (1988).
* Ma _et al._ (2011) L. Ma, J. C. Bienfang, O. Slattery, and X. Tang, Opt. Express 19, 5470 (2011).
* Kuzucu _et al._ (2008) O. Kuzucu _et al._ , Opt. Lett. 33, 2257 (2008).
* Suchowski _et al._ (2010) H. Suchowski _et al._ , Opt. Photon. News 21, 36 (2010).
* Flissikowski _et al._ (2001) T. Flissikowski _et al._ , Phys. Rev. Lett. 86, 3172 (2001).
* Srinivasan _et al._ (2008) K. Srinivasan _et al._ , Phys. Rev. A 78, 033839 (2008).
* Patel _et al._ (2010) R. B. Patel _et al._ , Nature Photonics 4, 632 (2010).
* Flagg _et al._ (2010) E. B. Flagg _et al._ , Phys. Rev. Lett. 104, 137401 (2010).
* Pe’er _et al._ (2005) A. Pe’er _et al._ , Phys. Rev. Lett. 94, 073601 (2005).
|
arxiv-papers
| 2011-05-17T09:19:48 |
2024-09-04T02:49:18.793264
|
{
"license": "Public Domain",
"authors": "Matthew T. Rakher, Lijun Ma, Marcelo Davanco, Oliver Slattery, Xiao\n Tang, Kartik Srinivasan",
"submitter": "Matthew Rakher",
"url": "https://arxiv.org/abs/1105.3313"
}
|
1105.3353
|
# Accurate Modeling of the Cubic and Antiferrodistortive Phases of SrTiO3 with
Screened Hybrid Density Functional Theory
Fadwa El-Mellouhi fadwa.el_mellouhi@qatar.tamu.edu Science Program, Texas A&M
at Qatar, Texas A&M Engineering Building, Education City, Doha, Qatar Edward
N. Brothers ed.brothers@qatar.tamu.edu Science Program, Texas A&M at Qatar,
Texas A&M Engineering Building, Education City, Doha, Qatar Melissa J. Lucero
Department of Chemistry, Rice University, Houston, Texas 77005-1892 Gustavo
E. Scuseria Department of Chemistry, Rice University, Houston, Texas
77005-1892 Department of Physics and Astronomy, Rice University, Houston,
Texas 77005-1892
###### Abstract
We have calculated the properties of SrTiO3 (STO) using a wide array of
density functionals ranging from standard semi-local functionals to modern
range-separated hybrids, combined with several basis sets of varying
size/quality. We show how these combinations’ predictive ability varies
significantly, both for STO’s cubic and antiferrodistortive (AFD) phases, with
the greatest variation in functional/basis set efficacy seen in modeling the
AFD phase. The screened hybrid functionals we utilized predict the structural
properties of both phases in very good agreement with experiment, especially
if used with large (but still computationally tractable) basis sets. The most
accurate results presented in this study, namely those from HSE06/modified-
def2-TZVP, stand as the most accurate modeling of STO to date when compared to
the literature; these results agree well with experimental structural and
electronic properties as well as providing insight into the band structure
alteration during the phase transition.
###### pacs:
71.15.Mb,71.15.Ap,77.80.−e, 77.84.−s
## I Introduction
Strontium titanate (SrTiO3; STO) is a complex oxide perovskite of great
technological interest for its superconductivity,Ueno _et al._ (2008) blue-
light emission,Kan _et al._ (2005) photovoltaic effect,Zhou _et al._ (2009)
and so on. Under normal conditions, bulk SrTiO3 crystallizes in a cubic
perovskite structure; it subsequently undergoes a second order phase
transition at $T_{c}$=105 K to a tetragonal structure with slightly rotated
oxygens around the z-axis, known as the antiferrodistortive (AFD) phase (see
Fig. 1). Many of the interesting properties of STO, either in bulk or in
superlattices formed with other metal oxides, are believed to be caused by the
cubic to AFD phase transition. Examples of this attribution are the high Tc
superconductivity of STO superlattices Reyren _et al._ (2007); Caviglia
_et al._ (2008); Kozuka _et al._ (2009) and their colossal magnetoresistivity.
Gao _et al._ (2009) First-principles calculations (see Ref. Pentcheva and
Pickett, 2010 and references therein) have indicated that the strain-induced
competition between octahedral rotation modes and the lattice distortion in
metal oxide superlattices are behind these interesting properties. Thus, there
is a considerable need Borisevich _et al._ (2010); Chang _et al._ (2010) for
precise theoretical calculations of the structural and electronic properties
of complex oxides, as well as accurate estimation of the phase transition
order parameters, to understand and eventually exploit these phenomena.
Figure 1: (Color online) SrTiO3 unit cells for the a) cubic phase and b)
antiferrodistortive phase; b) shows the TiO6 octahedra’s rotation around the
[001] axis. Sr atom are in green, Ti are in blue and O are in red. The O1
(equatorial) and O2 (axial) labels denote the non-rotating and rotating
oxygens, respectively.
The phase transition of STO is governed by two order parameters. The primary
order parameter is the rotation angle of the TiO6 octahedra ($\theta$). The
experimentally measured Unoki and Sakudo (1967) octahedral rotation of AFD STO
is 1.4° at 77 K and increases as the temperature drops toward the maximum
measured value of 2.1° at 4.2 K. The octahedron’s rotation is believed to be
almost complete Jauch and Palmer (1999) at around 50 K, where
$\theta$=2.01$\pm$0.07° was reported. (As the standard DFT calculations done
in this article do not include temperature, we take the 0 K experimental/target
value to be 2.1°.) The secondary order parameter is the tetragonality of the
unit cell ($c/a$), which increases from 1.00056 Cao _et al._ (2000) to 1.0009
Heidemann and Wettengel (1973) as the temperature decreases from 65 K to 10
K. (As temperature is not included in the standard DFT work done here, we
take the 0 K experimental/target value to be 1.0009.) The AFD phase can also
appear in thin films of STO He _et al._ (2004, 2005, 2003) at much higher Tc
than the bulk, depending on the substrate used, the thickness of deposited STO
film, the strain and the lattice mismatch. For example, 10 nm of STO deposited
on LaAlO3 (LAO) undergoes a transition to the AFD phase at $T_{c}\cong$ 332 K.
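For concreteness, both order parameters can be extracted from a relaxed AFD structure with a few lines of code. The Python sketch below assumes a pseudocubic description in which the rotation angle about [001] follows from the in-plane displacement $\delta$ of the rotating oxygen off the ideal Ti-O-Ti line, $\theta=\arctan(2\delta/a)$; the lattice constant and displacement used are illustrative values chosen to roughly match the experimental targets quoted above.

```python
import numpy as np

def afd_order_parameters(a_pc, c_pc, delta):
    """Tetragonality c/a and TiO6 rotation angle (degrees) about [001], from the
    pseudocubic lattice constants (angstrom) and the in-plane displacement
    delta (angstrom) of the rotating oxygen off the ideal Ti-O-Ti line."""
    return c_pc / a_pc, np.degrees(np.arctan(2.0 * delta / a_pc))

# Illustrative values: a ~ 3.90 angstrom, c/a ~ 1.0009, and an oxygen
# displacement of ~0.07 angstrom, corresponding to roughly a 2 degree rotation.
print(afd_order_parameters(3.90, 3.90 * 1.0009, 0.07))
```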
As the simplest metal oxide perovskite, STO has been extensively studied in
the last decades with different ab initio schemes.Zhukovskii _et al._ (2009);
Eglitis and Vanderbilt (2008); Heifets _et al._ (2006a); Wahl _et al._
(2008) However, it is still a challenging material for theory; only a few of
the previously published works have been able to accurately describe the
structural and electronic properties of both phases of STO. The balance of
this section will consist of a brief review of the theoretical work performed
to date.
Sai and Vanderbilt (2000) carried out one of the first LDA
calculations on STO using a plane-wave basis and ultra-soft pseudopotentials.
LDA predicted an exaggerated tetragonal AFD phase of STO, with octahedral
rotation angles of 6°, significantly overestimating the 2.1° rotation measured
experimentally. Unoki and Sakudo (1967) Using LDA with other basis sets Uchida
_et al._ (2003) shows similar issues, predicting rotations of up to 8.4°.
Wahl _et al._ (2008) used a plane-wave basis while simulating STO
with LDA Vosko _et al._ (1980), PBE Perdew _et al._ (1996, 1997) and
PBEsol Staroverov _et al._ (2003, 2004). (See Section II for further
descriptions of these density functionals). LDA underestimated experimental
lattice constants, while PBE overestimated them; both methods had band-gaps
that were seriously underestimated compared to experiment. This
underestimation is well known for these functionals; see e.g. Ref. Mori-
Sánchez _et al._ , 2008 and references therein. PBEsol was found to reproduce
accurately the experimental structure, but considerably underestimated the
band gaps. For the AFD phase, the octahedral angle $\theta$ was found to be
very sensitive to the functional used; all three overestimate the AFD
deformation, with LDA worse than PBE and PBEsol splitting the difference.
Rondinelli and Spaldin (2010) applied the LSDA+$U$
correction to cubic STO and found that while it corrects the band gap, the
calculated octahedral rotation angle remains overestimated at 5.7°. To date, none of the post-DFT corrections that improve band gaps has successfully corrected the octahedral rotation overestimation, and many authors attribute this to the argument proposed by Sai and Vanderbilt Sai and Vanderbilt (2000) that the overestimation can be caused by the exchange and correlation terms in DFT not capturing quantum zero-point fluctuations.
Piskunov et al. Piskunov _et al._ (2004) conducted one of the most complete
and comprehensive ab initio studies of STO, using Gaussian basis sets
specifically optimized for modeling STO crystals. This study of STO showed
problems when modeling with pure DFT or pure HF, namely underestimated and
overestimated band gaps, respectively; this is a well known problem.Janesko
_et al._ (2009) Hybrid functionals, specifically B3PWBecke (1993) and
B3LYP,Lee _et al._ (1988) gave more reasonable results, with direct band gaps
overestimated by 5% for B3PW and 3.5% for B3LYP compared to experiment and
indirect band gaps overestimated by 12% for B3PW and 10% for B3LYP. (We will
demonstrate that an important part of this overestimation can be attributed to
the basis set employed; see section III.) The hybrid functionals also gave the
best agreement with experiment for the lattice constant and the bulk modulus,
and generally did better than semilocal functionals in all categories. This
success of hybrid functionals motivated more detailed calculations Zhukovskii
_et al._ (2009); Eglitis and Vanderbilt (2008); Heifets _et al._ (2006a) of
the properties of the cubic and AFD phases of STO, again using the optimized
basis set of Piskunov et al. Piskunov _et al._ (2004) and the B3PW
functional.
Next, Wahl et al. Wahl _et al._ (2008) applied the Heyd-Scuseria-Ernzerhof
Heyd _et al._ (2003, 2006) screened Coulomb hybrid density functional (HSE)
in a plane-wave basis set. HSE performed exceptionally well, doing much better than any of the semilocal functionals, as it gave a very accurate estimate of both the structural and electronic properties of the cubic phase. HSE also showed excellent agreement with the experimental octahedral angle and tetragonality of the unit cell; these constituted, to our knowledge, the most accurately computed STO properties available in the literature for both phases prior to the current study.
As noted above, hybrid functionals have proved their effectiveness in studying
metal oxides, but they are computationally much more demanding than semilocal
functionals. While it would be ideal to do high-accuracy ab initio calculations on metal oxide superlattices using complete basis sets and large supercells, this is
prohibitively expensive at the current level of computer power. Screened
hybrid functionals with only short range exact exchange are computationally
less demanding; they allow the use of large supercells, especially when used
with localized basis sets such as Gaussian functions. We hope to use the most
effective methods/basis sets from this study on more complicated metal oxide
systems, and thus we have concentrated on methods and basis sets that would be
practical for those systems as well as the systems currently under
consideration.
This paper focuses on two tightly linked problems. We are interested in the
degree of completeness (or size) of the localized basis set necessary to
correctly simulate both phases of STO, and in the efficacy of recently
developed functionals (including screened hybrids) in predicting the
properties of STO. To discuss these issues, the paper proceeds as follows: In
Section II, we briefly describe the technical details before turning in
Section III to the basis set optimization/modification technique we used to
make standard basis sets PBC-compatible. In Section IV, we report the results
of semilocal and range separated hybrid functionals applied to the cubic and
the AFD phases of STO. We also show how the quality of the basis set affects the accurate prediction of the octahedral rotation angle in the AFD phase of STO.
Finally, we discuss the results of our best functional/basis set combination
for STO, comparing them with previously published theoretical and experimental
data, with special emphasis on the effect of varying the range separation
parameter in the screened functionals.
## II Computational Details
All calculations were performed using a development version of the Gaussian
suite of programs,Frisch _et al._ with the periodic boundary condition
(PBC)Kudin and Scuseria (2000, 1998a, 1998b) code used throughout. A wide
array of functionals was applied, including: the Local Spin Density
Approximation Vosko _et al._ (1980) (LSDA), the generalized gradient
approximation (GGA) corrected functional of Perdew, Burke and Ernzerhof Perdew
_et al._ (1996, 1997) (PBE), the reparametrization of PBE for solids, PBEsol,
Staroverov _et al._ (2003, 2004) the revised meta-GGA of Tao, Perdew,
Staroverov and ScuseriaTao _et al._ (2003); Perdew _et al._ (2009)
(revTPSS), and finally a modern and highly parametrized meta-GGA functional,
M06L. Zhao and Truhlar (2006, 2008) Two screened hybrid functionals were also
tested, namely the short-range exact exchange functional of Heyd, Scuseria,
and Ernzerhof Krukau _et al._ (2006); Heyd _et al._ (2003) (HSE, with the
2006 errata, also referred to as HSE06) and the exact exchange in middle-range
functional333Originally introduced as HISS-B in Ref Henderson _et al._ , 2007
and called simply HISS as in Ref. Henderson _et al._ , 2008 of Henderson,
Izmaylov, Scuseria, and Savin (HISS).Henderson _et al._ (2007, 2008) Because
regular hybrids with unscreened exact exchange, like B3LYP and B3PW, have a higher computational cost than screened hybrids, we decided to exclude them from this test.
Gaussian basis sets of different quality have been tested for their ability to simulate the properties of STO; the details of these tests and of the basis set modifications are extensive enough to merit their own section, section III.
A few numerical considerations should be mentioned here. During the initial
(or exploratory) calculations for the AFD phase, we found some dependence of
octahedral rotation angle ($\theta$) on initial atomic positions. After
further investigation, this dependence can be attributed to the geometry optimization convergence criteria. Since $\theta$ is so small, very stringent convergence criteria are required.444The standard RMS force threshold in gaussian for
geometry optimizations is 450$\times 10^{-6}$ Hartrees/Bohr. Using “verytight”
convergence, this becomes 1$\times 10^{-6}$. Another setting modified from the default was the pruned DFT integration grid of (99, 590), which corresponds to the Gaussian option “ultrafine”. Note that this
grid is big enough for this system to avoid any of the instabilities with M06L
reported in the literature with small grids.Johnson _et al._ (2009); Wheeler
and Houk (2010) To ensure this, we tested M06L with a larger grid, without
noticing any modification in the calculated properties. Thus, although “ultrafine” is sometimes insufficient for M06L, it is adequate for this system. Other numerical
settings in gaussian were left at the default values, e.g. integral cut-offs,
k-point meshes555Reciprocal space integration used 12$\times$12$\times$12 k
-point mesh for the cubic unit cell, while for the larger AFD supercell, the
default k -point mesh of 8$\times$8$\times$6 was found to be sufficient. , SCF
convergence criterion,666Because we did geometry optimization, this was by
default set to “tight”, or 10-8. and the like.
Finally, the geometry of each phase is worth discussing briefly. The starting
configuration for the cubic phase (see Figure 1(a)) consisted of the
perovskite primitive cell containing 5 atoms at the experimental lattice
constant Abramov _et al._ (1995) ($a_{0}$= 3.890 Å). For the AFD phase, we could not simply use the 5-atom tetragonal unit cell with rotated oxygens and the lattice parameters set to $a=b\neq c$. A 20-atom supercell/simulation cell was necessary, as the phase transition requires a rotation of every pair
of neighboring TiO6 octahedra in opposite directions (figure 1(b)). Thus, the
volume of the AFD supercell is about four times the volume of the cubic phase
with tetragonal lattice constants $a^{*}=b^{*}=\sqrt{2}a$ and $c^{*}=2c$, with
$a$ and $c$ being the lattice parameters of the 5-atom tetragonal unit cell
in the AFD phase. The starting AFD structure of STO was taken from the
experimental structure of Jauch et al. Jauch and Palmer (1999) obtained at 50
K and downloaded as CIF file from the ICSD,ICS with $a^{*}=b^{*}=5.507$ Å and
$c^{*}=7.796$ Å. The starting rotation angle of the TiO6 octahedra was 2.1°, while $c/a-1$ was $10\times 10^{-4}$. Please note that the geometries were only starting
points; as mentioned above all geometries were optimized with the method/basis
set under consideration. In order to avoid introducing any errors coming from
size effects or k-space integration, the calculated properties of the AFD
supercell are always compared with a 20-atom supercell constructed from four
cubic primitive cells (without octahedral rotation or tetragonality) fully
relaxed using the same k -point mesh. It should be noted that the supercell in
the cubic phase is a local minimum and is higher in energy than the supercell
in the AFD phase for all reported calculations. The final (reported) $\theta$
values were determined from Ti-O2-Ti angle measurements, and any octahedral
tilts can be estimated by measuring the Ti-O1-Ti angles (the O1/O2 labels are defined in Figure 1). Finally, all geometric visualization was done using
GaussView. Dennington _et al._
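To make the geometry analysis concrete, the short Python sketch below is our own illustration only (it is not the script used for the production runs, and the Cartesian positions are idealized rather than taken from an optimized structure). Assuming the Ti atoms stay on the Ti-Ti axis, it recovers $\theta$ from a Ti-O2-Ti angle through $\theta=(180°-\angle\mathrm{Ti\text{-}O2\text{-}Ti})/2$ and the tetragonality from the supercell constants, $c/a=c^{*}/(\sqrt{2}a^{*})$.

```python
import numpy as np

def angle_deg(p1, vertex, p2):
    """Angle (degrees) at `vertex` formed by the points p1-vertex-p2."""
    u = np.asarray(p1) - np.asarray(vertex)
    v = np.asarray(p2) - np.asarray(vertex)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Idealized Cartesian positions (Angstrom) for a Ti-O2-Ti link: the opposite
# rotations of the two neighboring octahedra displace the bridging (rotating)
# O2 atom perpendicular to the Ti-Ti axis, while the Ti atoms stay on it.
a_star, c_star, theta_in = 5.507, 7.796, 2.1      # experimental AFD values
d_TiTi = a_star / np.sqrt(2.0)                    # in-plane Ti-Ti distance
ti1 = np.array([0.0, 0.0, 0.0])
ti2 = np.array([d_TiTi, 0.0, 0.0])
o2 = np.array([0.5 * d_TiTi, 0.5 * d_TiTi * np.tan(np.radians(theta_in)), 0.0])

ti_o_ti = angle_deg(ti1, o2, ti2)                 # equals 180 - 2*theta here
theta = 0.5 * (180.0 - ti_o_ti)                   # recovered rotation angle
c_over_a = c_star / (np.sqrt(2.0) * a_star)       # tetragonality of the supercell
print(f"Ti-O2-Ti = {ti_o_ti:.2f} deg  ->  theta = {theta:.2f} deg")
print(f"c/a - 1  = {(c_over_a - 1.0) * 1e4:.1f} x 10^-4")
```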
## III Basis set efficiency for SrTiO3
The challenge in selecting a basis set is always balancing accuracy with
computational cost. In molecular calculations, the computational cost of a
Gaussian basis set is determined by the number of functions used, while in PBC
calculations the spatial extent or diffuseness of the basis set also plays a
major role. The more diffuse a basis set is, the larger the chunk of matter
that must be included in the calculations to avoid numerical issues.
Coupled with the argument that the long density tail is more necessary for
molecular work than work in extended systems, it becomes obvious that basis
sets developed for non-periodic calculations can require modification for PBC
use. This section describes the basis set optimization/modification procedure
we employed to find the appropriate Gaussian basis sets to simulate periodic
STO while keeping within reasonable computational expense. We based our
evaluations of a basis set’s accuracy on cubic STO results using the Heyd-
Scuseria-Ernzerhof Heyd _et al._ (2003, 2006) screened Coulomb hybrid density
functional (HSE06). Krukau _et al._ (2006)
The obvious starting point was the basis sets used in previous
calculations/studies of bulk STO, including:
* •
Gaussian-type basis sets published by Piskunov et al. Piskunov _et al._
(2000) in 2000, optimized using the Hartree-Fock (HF) and density functional
theory (DFT) with Hay-Wadt pseudopotentials Hay and Wadt (1984a, b, c) for Sr
and Ti, denoted here as P1.
* •
The subsequently improved version of P1 published by Piskunov et al. Piskunov
_et al._ (2004) in 2004, which expands on P1 by adding polarization
d-functions to oxygen and making the Ti s and p functions more diffuse,
denoted here as P2.
Table 1: The electronic and structural properties of cubic SrTiO3 computed with HSE06 Krukau _et al._ (2006) and different basis sets. Please see the text for basis set naming conventions. Basis set | P1 | P2 | SZVP | TZVP | Experiment
---|---|---|---|---|---
Direct gap (eV) | 3.87 | 3.80 | 3.59 | 3.59 | 3.75111Reference van Benthem _et al._ , 2001.
Indirect gap (eV) | 3.53 | 3.46 | 3.18 | 3.20 | 3.25111Reference van Benthem _et al._ , 2001.
a0(Å) | 3.900 | 3.908 | 3.887 | 3.902 | 3.890222Reference Abramov _et al._ , 1995., 3.900333Reference Hellwege and Hellwege, 1969.
B(GPa) | 198 | 194 | 204 | 193 | 179222Reference Abramov _et al._ , 1995.,
| | | | | 179$\pm{4.6}$444Reference Fischer _et al._ , 1993.
Tests on P1 and P2 were done with HSE06, because it has been found to give the
best results versus experiment for both structural and electronic
propertiesWahl _et al._ (2008) in older calculations. Both P1 and P2
reproduce the experimental equilibrium lattice constants Abramov _et al._
(1995) (see Table 1) almost perfectly. Cubic STO modeled with P1 has a
slightly higher bulk modulus compared to P2, although the difference between
the two basis sets is fairly minimal for structural properties. A more
important effect is observed for the electronic properties: P1 and P2
overestimate the direct band gap of STO by 0.12 and 0.05 eV respectively, and
seriously overestimate the indirect band gap by 0.28 and 0.21 eV.
It is easy to see that the P2 basis set employed with HSE06 leads to results
that are closer to experiment than P1, a fact noted by PiskunovPiskunov _et
al._ (2004) for a number of functionals. The more important point is that
increasing the size/quality of the basis set made a noticeable change in the
results; the immediate question is whether another increase in basis set size
would bring about similar improvement. In other words, using polarization
d-orbitals for O and diffuse functions for Ti improved the HSE06 results, implying that further improvement could potentially be achieved if more basis set enhancements were implemented, e.g. including titanium core electrons and/or adding more diffuse functions for oxygen.
We decided to optimize some of the Def2- Weigend and Ahlrichs (2005) series of
Gaussian basis sets for use in bulk STO calculations. The original Def2- basis
sets for the atoms of interest in this project included small-exponent diffuse
functions ($\alpha_{min}$ less than 0.10) that are spatially quite extended;
as stated above, this long tail is necessary to improve the DFT results for
molecules but not necessary for crystals. Heyd _et al._ (2005); Strain _et
al._ (1996) Basis sets with large spatial ranges dramatically slow down the
calculation of Coulomb contributions to the total energy of crystals. Thus, to
be useful in PBC calculations, Def2- basis sets must be modified by removing
some of the most diffuse functions.
The series of Def2- basis sets are available up to quadruple zeta valence
quality for a large set of elements.Weigend and Ahlrichs (2005); Weigend
(2006) In the original optimizations, the oxygen, strontium and titanium basis
sets were optimized (using HF and DFT) versus the properties of SrO, TiO and
TiO2 molecules. Strontium has the inner shell electrons replaced with small
core pseudopotentials, Kaupp _et al._ (1991) while the other two atoms
utilize all-electron basis sets; this differs from P1 and P2, which use pseudopotentials on titanium as well. In general, Def2- basis sets are larger
and more expensive than P1 and P2 basis sets, but are expected to give a
better representation of both phases of STO due to greater “completeness.”
To make a Def2- basis set applicable to PBC, the first step is selecting a
maximum allowable diffuseness, or equivalently the smallest acceptable
Gaussian orbital exponent, $\alpha_{min}$. The larger the value of
$\alpha_{min}$, the faster the calculations become, but if $\alpha_{min}$ is
set too high, significant degradation of physical property prediction results.
After the threshold is defined, one pass is made through the basis set to
reset all $\alpha<\alpha_{min}$ to $\alpha_{min}$, and then a second pass is
made through the basis set to remove any redundancies. Note that after
modifying or deleting an element of a contracted basis set, we rely on the code’s internal renormalization, i.e. no attempt is made to reoptimize contraction coefficients.
We first began with the largest Def2- basis sets, Def2-QZVP and Def2-QZVPP,
but these were found to be computationally intractable for bulk STO even for
$\alpha_{min}$ as large as 0.2, and previous experience has shown that $\alpha_{min}$ larger than 0.2 causes physically unacceptable results. We then
moved to the smaller basis sets, Def2-TZVP and Def2-SZVP. We first set
$\alpha_{min}$= 0.12, but found this made the calculations very slow. Our
tests showed that $\alpha_{min}$= 0.15 constitutes a more computationally
efficient choice without important loss in accuracy.
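For clarity, the two-pass modification just described can be summarized in a few lines of Python. The sketch below is our own illustration only: the shell data are hypothetical, exactly how duplicate primitives are pruned is an implementation choice we make here, and any final renormalization of the contraction is left to the electronic-structure code, as noted above.

```python
# Minimal sketch of the two-pass basis set modification. A shell is a list of
# (exponent, coefficient) primitives; the values below are hypothetical.
ALPHA_MIN = 0.15   # smallest acceptable Gaussian exponent (a.u.)

def cap_exponents(shell, alpha_min=ALPHA_MIN):
    """Pass 1: raise every exponent smaller than alpha_min up to alpha_min."""
    return [(max(alpha, alpha_min), coeff) for alpha, coeff in shell]

def remove_redundancies(shell, tol=1e-8):
    """Pass 2: keep only the first primitive among those with (nearly) equal
    exponents, which can appear after the capping pass."""
    kept, seen = [], []
    for alpha, coeff in shell:
        if all(abs(alpha - a) > tol for a in seen):
            kept.append((alpha, coeff))
            seen.append(alpha)
    return kept

# Hypothetical diffuse oxygen p shell: the two smallest exponents fall below
# ALPHA_MIN, become identical after capping, and one of them is then removed.
shell = [(7.843, 0.124), (1.669, 0.478), (0.373, 0.613), (0.095, 0.21), (0.032, 0.05)]
print(remove_redundancies(cap_exponents(shell)))
# -> [(7.843, 0.124), (1.669, 0.478), (0.373, 0.613), (0.15, 0.21)]
```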
Henceforth, the Def2-TZVP and Def2-SZVP, with $\alpha_{min}$ modified and
redundant s functions removed, will be denoted TZVP and SZVP, respectively.
Table 1 summarizes the calculated electronic and structural properties of
cubic STO using our basis set modifications as well as the aforementioned P1
and P2. The optimized basis sets SZVP and TZVP give an overall excellent
agreement with experiment: Abramov _et al._ (1995) direct band gaps are now
underestimated by 0.16 eV while indirect band gaps are now underestimated by
$\sim$0.05 eV. These two new basis sets are larger than the previously
utilized P1 and P2, are more accurate for indirect gaps, as well as for other
measured properties, and due to their greater size are expected to be closer
to the upper limit of HSE06 accuracy for this system. Note also that the
electronic properties of STO remain almost unchanged by moving from a SZVP to
TZVP basis set. The deviation from the experimental lattice constant does not exceed 0.07% and 0.3% for SZVP and TZVP respectively, but the deviation is more substantial for the bulk modulus, reaching 14% for SZVP and 8% for TZVP. Finally, the same
series of basis set optimizations were also performed using HISS and M06L
functionals, which led to the same conclusions regarding the basis set
efficiency; these are not presented here for space reasons.
Before moving on to the results section, a brief mention of the expense of the
various basis sets should be included. In terms of relative CPU time, one SCF
cycle takes about 12 units for TZVP compared to 6 units for SZVP and 1 unit
for P2. All of these basis sets still have potential uses; SZVP or P2, for
example, might be very useful for a rapid investigation of the electronic
properties of some complex STO systems. But, in terms of completeness, TZVP is
the most complete and the closest to the planewave basis set limit, followed
by SZVP, then P2.
## IV Results: Basis set and functional evaluation
In this section we present the calculated properties of SrTiO3, always
discussing the results of each functional using the TZVP basis set first,
followed by a discussion of the sensitivity of the functionals to smaller
basis sets, namely SZVP and P2.
### IV.1 Structural properties of cubic SrTiO3
Table 2: Computed lattice parameter $a_{0}$(Å) and bulk modulus B(GPa) for Cubic STO using different combinations of functionals and basis sets compared to experiment. | HSE06 | HISS | M06L | revTPSS | LSDA | PBE | PBEsol | Experiment
---|---|---|---|---|---|---|---|---
a0(Å) | | | | | | | | 3.890111Reference Abramov _et al._ , 1995., 3.900222Reference Hellwege and Hellwege, 1969.
TZVP | 3.902 | 3.883 | 3.925 | 3.921 | 3.862 | 3.941 | 3.897 |
SZVP | 3.887 | 3.869 | 3.909 | 3.903 | 3.845 | 3.924 | 3.881 |
P2 | 3.908 | 3.891 | 3.930 | 3.920 | 3.870 | 3.946 | 3.903 |
B(GPa) | | | | | | | | 179111Reference Abramov _et al._ , 1995.,
TZVP | 193 | 206 | 187 | 180 | 201 | 169 | 184 | 179$\pm{4.6}$333Reference Fischer _et al._ , 1993.
SZVP | 204 | 218 | 198 | 193 | 214 | 180 | 196 |
P2 | 194 | 205 | 191 | 184 | 203 | 173 | 187 |
The calculated equilibrium lattice constants and the bulk moduli of cubic STO
using different functionals and basis sets are reported in Table
LABEL:tab:ela-cub. Unless otherwise specified, the deviation of theory from experiment is always referenced to the data of Abramov et al., Abramov _et al._ (1995) i.e. these are treated as the target values. Focusing first on the TZVP results, we
observe that the screened hybrids HSE06 and HISS give lattice parameters in
excellent agreement with experiment.
The calculated bulk modulus using HSE06 is fairly close to the experimentally
reported values, although overestimated by 8%. (The same magnitude of
overestimation has also been reported in the HSE planewave calculations of
Wahl et al. Wahl _et al._ (2008).) However, a larger bulk modulus
overestimation of 15% is observed for HISS, which constitutes the largest
deviation from experiment among all the studied functionals.
M06L and revTPSS predict slightly higher equilibrium lattice constants than
screened hybrids do, but their bulk moduli are closer to experiment, with
revTPSS being especially close. LSDA underestimates the lattice constant by
0.03 Å, while PBE predicts lattice constants 0.05 Å larger than experiment.
PBEsol is in excellent agreement with the experimental lattice constant. Thus
PBEsol corrects the LSDA underestimation and the PBE overcorrection to LSDA
for lattice constants; in addition, the PBEsol bulk modulus deviate by less
than 3% from experiment, while LSDA and PBE are off by 11% and 12%,
respectively. This is an example of PBEsol meeting its purpose, as it improves
the PBE lattice constant and bulk modulus for the cubic phase, approaching
very closely the experimental data.
Turning now to the functional sensitivity to basis set size, we observe from
the HSE06 results that the SZVP basis set predicts bond lengths that are very slightly shorter than the TZVP ones and a bulk modulus that is 6% higher. As such, SZVP predicts SrTiO3 to be 14% harder than experiment. P2 behaves in the opposite direction, predicting slightly longer bonds when compared to TZVP, while the bulk moduli are only 1 GPa higher. From table LABEL:tab:ela-cub, this sensitivity of HSE06 to smaller basis sets can be generalized to M06L, revTPSS and the semilocal functionals LSDA, PBE and PBEsol.
Finally, it should be noted that PBEsol results offer the best agreement with
experimental structural properties Abramov _et al._ (1995) of SrTiO3 among
all the studied functionals with the TZVP basis set, followed by the screened
hybrid HSE06 and the meta-GGA revTPSS.
### IV.2 Electronic properties of cubic SrTiO3
Table 3: Direct and indirect band gaps computed for Cubic SrTiO3 using different basis sets and functionals compared to experiment. | HSE06 | HISS | M06L | revTPSS | LSDA | PBE | PBEsol | Experiment
---|---|---|---|---|---|---|---|---
Direct gap(eV) | | | | | | | | 3.75 van Benthem _et al._ (2001)
TZVP | 3.59 | 4.39 | 2.51 | 2.24 | 2.08 | 2.11 | 2.10 |
SZVP | 3.59 | 4.45 | 2.53 | 2.28 | 2.12 | 2.14 | 2.14 |
P2 | 3.80 | 4.56 | 2.63 | 2.52 | 2.34 | 2.33 | 2.34 |
Indirect gap(eV) | | | | | | | | 3.25 van Benthem _et al._ (2001)
TZVP | 3.20 | 3.98 | 2.09 | 1.87 | 1.75 | 1.74 | 1.75 |
SZVP | 3.18 | 4.03 | 2.10 | 1.89 | 1.76 | 1.75 | 1.76 |
P2 | 3.46 | 4.22 | 2.24 | 2.17 | 2.04 | 1.99 | 2.02 |
The computed electronic properties of SrTiO3 are summarized in table
LABEL:tab:elec_cub. As expected, HSE06 gives an excellent estimate of the
electronic properties when used with the large TZVP basis sets. Deviations
from the experimental values are 0.16 eV for the direct gap and 0.05 eV for the
indirect gap. A cursory glance over the rest of table LABEL:tab:elec_cub
indicates that no other functional was comparable to HSE06’s efficacy for band
gaps, i.e. everything else we tried had much larger errors.
The middle-range screened hybrid HISS tends to overestimate the direct and indirect band gaps, by 0.64 and 0.73 eV respectively. M06L and revTPSS tend to
underestimate both band gaps, by an average of $\sim$1.2 and $\sim$1.4 eV
respectively. The semilocal functionals LSDA, PBEsol, PBE underestimate the
experimental band gaps by an average of 45% or 1.5 eV. This was expected, and
is in agreement with the behavior observed earlier in the literature for this
system. Wahl _et al._ (2008) It can be easily seen from these results that
HSE06 is the best functional choice for investigating this system.
Turning to basis set sensitivity, it can be seen from the HSE06 numbers that
band gaps are nearly unaffected by using the smaller SZVP basis sets, but when
used with the still smaller P2 basis set, direct and indirect band gaps
increase by $\sim$0.25 eV versus TZVP. The predicted direct band gap becomes
closer to experiment when using P2 and HSE06, probably due to a cancellation
of errors effect, while the indirect band gap is noticeably worse. This same
sensitivity holds for almost every other functional, with SZVP and TZVP giving
about the same band gaps and P2 opening the band gaps up by a few tenths of an
eV. M06L appears to be slightly less sensitive; no obvious reason for this
exists.
### IV.3 Stability of the AFD phase of STO
This section examines the stability of the AFD phase of STO as calculated by
the various functionals and basis sets previously tested for the cubic phase.
The functional/basis set combinations tested face the challenge of predicting
the AFD octahedral rotation angle, $\theta$, as well as the tetragonality
parameter $c/a$, which as shown in section I is not trivial. The AFD phase
order parameters are evaluated from the relaxed 20-atoms AFD supercells as
described in section II. The performance of each functional with TZVP, followed by an analysis of each functional’s sensitivity to the smaller basis sets, is presented in turn.
Figure 2: (Color online) Performance of different functional/basis set
combinations in predicting the order parameters of the AFD phase transition in
STO. Dashed lines depict the experimental octahedral angle measured at 4.2 K
from Ref. Unoki and Sakudo, 1967 (left) and the tetragonality parameter
obtained at 50 K from Ref. Jauch and Palmer, 1999 (right).
Figure 2 shows that the screened hybrid functional HSE06 is excellent for the
structural properties of AFD, as it was for the cubic phase. Both the rotation
angle $\theta$ and the $c/a$ ratio are in very good agreement with experiment.
These properties are not significantly affected when SZVP is used, but
HSE06/P2 predicts a very small angle for the AFD phase, while retaining a
good $c/a$. This is one area where TZVP noticeably outperforms P2 with HSE06.
HISS and revTPSS behave like HSE06 for both TZVP and SZVP, giving a good estimate of both order parameters. However, they demonstrate a higher sensitivity to the smaller P2 basis set and required the use of a very stringent convergence criterion to finally relax the structure back to a pseudocubic phase with $\theta\approx$0. On the other hand, M06L predicts the
AFD phase to be unstable, and relaxes to a non-rotated structure regardless of
the basis set used.
Table 4: Structural and electronic properties of the antiferrodistortive phase of SrTiO3 compared to previously simulated data and experiments. $a^{*}$ and $c^{*}$ are the lattice parameters of the AFD supercell and $c/a=c^{*}/{\sqrt{2}a^{*}}$. $\Delta E=E_{Cubic}-E_{AFD}$ represents the gain in total energy after the cubic to AFD phase transition, while $\Delta E_{g}$ denotes the corresponding increase in the band gap. | LSDA | PBE | PBEsol | HSE06 | HISS | revTPSS | M06L | Experiment |
---|---|---|---|---|---|---|---|---|---
$a^{*}$(Å) | | | | | | | | |
Present | 5.449 | 5.568 | 5.500 | 5.515 | 5.448 | 5.543 | 5.551 | 5.507555Reference Jauch and Palmer, 1999 (at 50K). |
Ref. Wahl _et al._ , 2008111plane-wave calculation using a different HSE screening parameter. | 5.440 | 5.562 | 5.495 | 5.515 | | | | |
$c^{*}$(Å) | | | | | | | | |
Present | 7.727 | 7.900 | 7.812 | 7.809 | 7.772 | 7.846 | 7.862 | 7.796555Reference Jauch and Palmer, 1999 (at 50K). |
Ref Wahl _et al._ , 2008111plane-wave calculation using a different HSE screening parameter. | 7.755 | 7.897 | 7.818 | 7.808 | | | | |
$(c/a-1)\times 10^{-4}$ | | | | | | | | |
Present | 27 | 32 | 44 | 12 | 14 | 7.6 | 7 | 10555Reference Jauch and Palmer, 1999 (at 50K). |
Ref. Wahl _et al._ , 2008111plane-wave calculation using a different HSE screening parameter. | 80 | 40 | 60 | 10 | | | | |
Others | 40444Reference Hong _et al._ , 2010 using numerical atomic orbitals. | | | | | | | |
$\theta$(°) | | | | | | | | |
Present | 4.14 | 3.54 | 3.81 | 2.01 | 1.92 | 2.01 | 0 | 2.01$\pm$0.07555Reference Jauch and Palmer, 1999 (at 50K). |
Ref. Wahl _et al._ , 2008111plane-wave calculation using a different HSE screening parameter. | 6.05 | 4.74 | 5.31 | 2.63 | | | | 2.1666Reference Unoki and Sakudo, 1967 (at 4.2K). |
Others | 8.40222Reference Uchida _et al._ , 2003., 6333Reference Sai and Vanderbilt, 2000. | | | | | | | |
| 4444Reference Hong _et al._ , 2010 using numerical atomic orbitals. | | | | | | | |
$\Delta E\times 10^{-5}$ (eV) | | | | | | | | |
Present | 1796 | 854 | 44 | 35 | 578 | 258 | 122 | |
Ref. Wahl _et al._ , 2008111plane-wave calculation using a different HSE screening parameter. | 1900 | 700 | 1100 | 200 | | | | |
Indirect band gap (eV) | | | | | | | | |
Present | 1.820 | 1.787 | 1.808 | 3.227 | 3.995 | 1.890 | 2.060 | 3.246 Yamada and Kanemitsu (2010) |
Ref. Wahl _et al._ , 2008111plane-wave calculation using a different HSE screening parameter. | 1.970 | 1.790 | 1.930 | 3.110 | | | | 3.160 Hasegawa _et al._ (2000) |
$\Delta E_{g}$ (meV) | | | | | | | | |
Present | 75 | 49 | 58 | 27 | 15 | 15 | 30 | 50777Reference Yamada and Kanemitsu, 2010 difference between 85 K and 8 K measured gaps. |
Ref. Wahl _et al._ , 2008111plane-wave calculation using a different HSE screening parameter. | 160 | 10 | 110 | 40 | | | | |
The semilocal functionals LSDA, PBEsol and PBE all overestimate the distortion of the AFD phase, predicting $\theta$ roughly twice and $c/a-1$ two to four times the experimental values. The highest overestimation was observed
for LSDA, followed by PBEsol then PBE. Note that our result here is in
excellent qualitative agreement with the behavior found in the planewave
calculations of Wahl et al; Wahl _et al._ (2008) quantitatively, however, the
LSDA, PBEsol and PBE octahedral angles with TZVP are 25-30% lower than the
planewave results Wahl _et al._ (2008); Sai and Vanderbilt (2000); Uchida
_et al._ (2003) (for a detailed numerical comparison see table 4, and ref.
Wahl _et al._ , 2008 has additional comparison with experiment). Similar
behavior has been recently published Hong _et al._ (2010) for LSDA
calculation with finite-range numerical atomic orbitals using a double-$\zeta$
polarized basis set. This indicates that localized basis sets tend to reduce
the AFD octahedral rotation compared to plane-waves but do not succeed in suppressing the DFT overestimation.
When used with the SZVP basis sets, the LSDA, PBE and PBEsol rotation angles
are larger than the TZVP ones. Furthermore, when LSDA, PBEsol and PBE are used
with the P2 basis set, we observe a small and consistent reduction in the octahedral rotation angle of the AFD structure compared to the TZVP results. This demonstrates that semilocal functionals have different degrees of sensitivity to the quality of the localized basis sets used, but the functional choice remains the more important source of error. Thus the
functionals examined here will lead to exaggerated AFD $\theta$ values for all
basis sets considered.
## V Discussion: Physical properties of STO
Before discussing specific issues, there are a few general conclusions we can reach from examining the results in section IV:
1. 1.
HSE06/P2 did a good job of accurately describing the structural properties of the cubic phase as well as providing a decent estimate of the band gap.
However, the failure of HSE06/P2 to correctly model the structure of the AFD
phase indicates that it must be abandoned as a useful combination for this and
related systems.
2. 2.
HSE06/SZVP has the drawback of predicting a stiffer SrTiO3 in the cubic phase,
although it predicts electronic properties as well as TZVP does. It also predicts a
stiffer AFD structure, but the octahedral angle and $c/a$ parameters are very
good.
3. 3.
HSE06/TZVP gave the best agreement with experiment for the cubic phase and for
the AFD phase. It is definitely the most reliable combination of functional
and basis set among all studied variations. Thus HSE06/TZVP can be used with
confidence on more complicated structures, as well as to understand the change
in the electronic structure during the cubic to AFD transition for this
system. More concisely, we believe that this combination is an accurate enough
functional in a good enough basis set to explain phenomena in metal oxides.
Figure 3: Band structure of cubic SrTiO3 calculated with HSE06/TZVP. The
dashed line depicts the Fermi level lying at the valence band maximum ($R$ special point). Figure 4: (Color online) Total electronic density of states
(DOS) of cubic SrTiO3 calculated with HSE06/TZVP. Projected density of states
(PDOS) of the main contributing orbitals are also shown.
### V.1 Band structure alteration by the AFD phase transition
The band structure of the cubic unit cell of STO computed with HSE06/TZVP is
shown in figure 3, with the high symmetry points $\Gamma=(0,0,0)$,
$X=(0,\frac{1}{2},0)$, $M=(\frac{1}{2},\frac{1}{2},0)$ and
$R=(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ labeled, in the first Brillouin zone
of the simple cubic system. The dashed line depicts the Fermi level lying at
the valence band maximum at the $R$ point.
Our band structure agrees qualitatively with previous band structures from
LSDA/PW calculations, which can be seen (for example) in Fig. 5 of Ref. Uchida
_et al._ , 2003, as well as the B3PW/P2 band structure in Ref. Piskunov _et
al._ , 2004 Fig. 2(a), with the exception of a few details. Our direct band
gap ($\Gamma\rightarrow\Gamma$) of 3.59 eV and indirect gap
($R\rightarrow\Gamma$) of 3.2 eV are in better agreement with experiment van
Benthem _et al._ (2001) compared to the underestimation observed in the
LSDA/PW gaps and the overestimation found with B3PW/P2. Thus for a DFT
approach, this diagram is the best band structure to date. (An even more
accurate band structure was computed using the experimental lattice constant
of SrTiO3 by means of a post-LSDA quasiparticle self-consistent GW (QSGW)
correction to the band structure by Hamann and Vanderbilt.Hamann and
Vanderbilt (2009))
Figure 4 shows the total density of states (DOS) as well as the projected
density of states (PDOS) on every atomic orbital. The PDOS of oxygen
represents the sum of the contributions of all three oxygen atoms in the cubic
unit cell. In the energy window shown here, the DOS is dominated by oxygen
2$p$, titanium 3$d$ and strontium 4$d$ states. (All the remaining orbitals
have a negligible contribution, so their PDOS are not shown.) The valence band
(VB) from 0 to $-$6 eV is dominated by oxygen 2$p$ states, with a small
contribution from titanium 3$d$ states in the range $-$3 to $-$6 eV. The
conduction band (CB) is clearly dominated by titanium 3$d$ in the energy range
3.2$-$7 eV, with a smaller contribution coming from the 3 oxygen 2$p$ states
as well. The admixture in the VB and CB between the titanium 3$d$ and oxygen
2$p$ orbitals demonstrates that the Ti$-$O bonds have a partially covalent
character with a small degree of hybridization. (This behavior has been noted
in previously published data. Uchida _et al._ (2003)) Between 7$-$9 eV, the
spectrum is the sum of contributions from oxygen 2$p$, titanium 3$d$ and
strontium 4$d$ orbitals. The higher energy region in the CB (9$-$12 eV) is
dominated by strontium 4$d$ orbitals with small contributions from titanium
3$d$ and with oxygen 2$p$ vanishing at around 10.5 eV.
Figure 5 compares the total electronic densities of states for the cubic and
AFD supercells. As a general trend, the cubic to AFD phase transition does not
lead to a significant modification in the total DOS; both the valence and the
conduction bands experience a slight shift to higher energies (horizontal
arrows) together with some small modifications, indicated by vertical black
arrows in Figure 5. However, the VB shift does not affect the peak at the VB
maximum, while a very small shift to higher energies is observed for the CB
minimum, indicating that the band gap increases by $\approx$27 meV after the
transition. This same behavior holds for all the functionals/basis sets
combinations tested, and is in line with some experimental observations
Hasegawa _et al._ (2000) reporting very small changes in their measured band
gaps due to the cubic-to-tetragonal structural transition. Further
confirmation of this physical effect can be seen in recent photoluminescence
measurements, Yamada and Kanemitsu (2010) which reported that the band gap
increased by 50 meV when the temperature decreased from 85 K to 8 K, a temperature range over which the AFD rotation goes from incomplete to nearly complete.
Figure 5: (Color online) Modification in the total DOS of STO upon the cubic
to AFD phase transition. Horizontal arrows indicate the direction of the
energy shift, while vertical arrows point to the most important changes.
A more detailed comparison between the PDOS for each atomic orbital can give a
better understanding of the origin of these modifications. It is important to
mention that in the AFD supercell, there is one nonrotating O1 atom and two
rotating O2 oxygens for every Sr and Ti atom. Concentrating on the oxygen 2$p$
orbitals, we observe that the non-rotating O1 atoms are nearly unchanged in
the PDOS compared to the cubic phase, with the exception of a tiny shift to
higher energy (see the inset in Figure 6), which can be attributed to the
elongation of the cell along the z-axis. However, the O2 atoms demonstrate a much
more significant shift to higher energies, along with changes in the height
and width of some peaks. This is mainly caused by the octahedral rotation
involving O2 atoms. The titanium 3$d$ and strontium 4$d$ spectra experience
the same aforementioned shift to higher energies in the VB and the CB due to
the elongation of the lattice, with a few noticeable changes in the titanium
3$d$ spectrum at $-$2.9 eV as well as between 5 and 6.5 eV. Most of the modifications observed in the total DOS, with the exception of a few, originate from the changes in the O2 2$p$ and Ti 3$d$ spectra, with the O2 contribution being far more important.
Figure 6: (Color online) Modification of the partial densities of states
(PDOS) for O 2p, Ti 3d and Sr 4d. Left: valence band. Right: conduction band.
### V.2 The effect of the HSE screening parameter, $\omega$
Relying on the assumption that plane-waves are much closer to the infinite
basis set limit than the Gaussian basis sets we used, it is useful to compare
our HSE06/TZVP results with the HSE plane-wave results. To our knowledge, only
Wahl et al. Wahl _et al._ (2008) have published data using plane-waves and a
Heyd-Scuseria-Ernzerhof Heyd _et al._ (2003, 2006); Krukau _et al._ (2006)
style screened Coulomb hybrid density functional for this system. However, a
direct comparison with our present data is not possible because Wahl et al.
used a different screening parameter in their calculations.
Briefly, the HSE functional partitions the coulomb potential into short-range
(SR) and long-range (LR) components:
$E^{HSE}_{xc}=\frac{1}{4}E^{HF,SR}_{x}(\omega)+\frac{3}{4}E^{PBE,SR}_{x}(\omega)+E^{PBE,LR}_{x}(\omega)+E^{PBE}_{c}$
(1)
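For reference, the short-range (SR) and long-range (LR) components entering Eq. (1) follow from the standard error-function partition of the Coulomb operator,
$\frac{1}{r}=\frac{\mathrm{erfc}(\omega r)}{r}+\frac{\mathrm{erf}(\omega r)}{r},$
so that only the short-range part of the exchange carries a fraction of exact exchange, while the long-range part is treated entirely at the PBE level.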
The screening parameter $\omega$ defines the separation range, as it controls
the distance at which the long-range nonlocal interaction becomes negligible,
i.e. it “turns off” exact exchange after a specified distance. Wahl et al.
used $\omega_{1}$=0.159 $a.u.^{-1}$, effectively using an HSE-style
functional, but not either of the functionals HSE03 or HSE06. Krukau et
al.Krukau _et al._ (2006) applied HSE while varying 0.11$\leq\omega\leq$ 0.20
to a number of bulk metals and semiconductors. They concluded that a small
increase of $\omega$ substantially lowers the calculated band gaps, and that the smaller the value $\omega$ takes in this range, the closer the calculated band gaps and lattice constants are to experiment. Based on the above, Krukau et
al., Krukau _et al._ (2006) recommended $\omega_{2}$=0.11 $a.u.^{-1}$ for
both the HF and PBE part of the exchange. This is the value we used in all our
calculations, and this value is part of the formal definition of HSE06.
However, in order to make a comparison between our HSE($\omega_{2}$)/TZVP and
the HSE($\omega_{1}$)/PW data of Wahl et al., we must perform a HSE/TZVP
calculation with $\omega_{1}$ and isolate the screening parameter effect on
the calculated properties of SrTiO3.
Table 5: Variation of the cubic STO lattice parameter(a0 in Å ), bulk modulus (B in GPa), and direct ($E_{g}^{d}$) and indirect ($E_{g}^{i}$) band gaps (in eV) by decreasing the HSE screening parameter from $\omega_{1}$=0.159 $a.u.^{-1}$ to $\omega_{2}$=0.11 $a.u.^{-1}$. Results are from our Gaussian basis set (TZVP) and the plane-wave (PW) calculations in Ref. Wahl _et al._ , 2008. | Gaussian | PW | Experiment
---|---|---|---
| $\omega_{1}\longrightarrow\omega_{2}$ | $\omega_{1}\longrightarrow\omega_{2}$ |
a0 | 3.903 | 3.902 | 3.904 | 3.903555Estimated values if $\omega_{2}$=0.11 $a.u.^{-1}$ was used in plane-wave calculations of Ref. Wahl _et al._ , 2008. | 3.890111Reference Abramov _et al._ , 1995., 3.900222Reference Hellwege and Hellwege, 1969.
B | 192 | 193 | 192 | 193555Estimated values if $\omega_{2}$=0.11 $a.u.^{-1}$ was used in plane-wave calculations of Ref. Wahl _et al._ , 2008. | 179111Reference Abramov _et al._ , 1995., 179$\pm{4.6}$333Reference Fischer _et al._ , 1993.
$E_{g}^{d}$ | 3.37 | 3.59 | 3.47 | 3.67555Estimated values if $\omega_{2}$=0.11 $a.u.^{-1}$ was used in plane-wave calculations of Ref. Wahl _et al._ , 2008. | 3.75444Reference van Benthem _et al._ , 2001.
$E_{g}^{i}$ | 2.96 | 3.20 | 3.07 | 3.27555Estimated values if $\omega_{2}$=0.11 $a.u.^{-1}$ was used in plane-wave calculations of Ref. Wahl _et al._ , 2008. | 3.25444Reference van Benthem _et al._ , 2001.
Table LABEL:tab:omega shows that the HSE($\omega_{1}$)/TZVP lattice constant
and bulk modulus change very slightly when the screening parameter is decreased from $\omega_{1}$ to $\omega_{2}$: the changes are 0.001 Å and 1 GPa, respectively. A much more significant effect is, however, observed for the
band gaps: decreasing the screening parameter from $\omega_{1}$ to $\omega_{2}$ (a reduction of roughly 30%) leads to an increase in the band gaps, effectively a rigid shift of 0.22 and 0.24 eV for the direct and indirect band gaps respectively. Viewed in terms of accuracy, this shift brings the band gaps closer to experiment (see table LABEL:tab:omega), which suggests that $\omega_{2}$ provides better agreement with experiment than $\omega_{1}$ does. The same structural changes and band gap shifts were also found for the smaller basis sets SZVP and P2; these are not presented here, but they demonstrate that this effect is completely independent of the basis set used. Finally, the HSE($\omega_{2}$)/TZVP band gaps are very close
to the HSE($\omega_{2}$)/PW values we estimated, suggesting that our TZVP
basis set is very close in quality to the previously used plane waves, and
thus is closer to the basis set limit.
This section contains one of the most important results of this paper, and as such it should be clearly restated. If we use the same version of HSE used in the plane-wave studies, we can show that our TZVP is a high-quality basis set, as it matches the excellent plane-wave results. If we use the proper $\omega$ in HSE with our basis set, we arrive at the best results/smallest errors versus experiment yet reported for SrTiO3.
Finally, it should be noted that this is not an ad hoc parameterization of
$\omega$ to give the best results for this study. We were able to obtain
results that closely match experiment by using a demonstrably high quality
basis set and a parameter in the density functional determined by a large test
bed of structures and properties.Krukau _et al._ (2006)
### V.3 Screened hybrids compared to regular hybrids.
Table 6: Our most converged direct ($E_{g}^{d}$) and indirect ($E_{g}^{i}$) band gaps (in eV) for cubic STO alongside previously published hybrid functional results done with the P2 basis set. Regular hybrids data are corrected according to the basis set sensitivity effect deduced in section IV.2. | functional/basis | $E_{g}^{d}$ | $E_{g}^{i}$
---|---|---|---
| | Ori. | Corr.${}^{\text{estimated}}$ | Ori. | Corr.${}^{\text{estimated}}$
Exp. van Benthem _et al._ (2001) | | 3.75 | | 3.25 |
Present | HSE06/TZVP | 3.59 | | 3.20 |
Ref. Piskunov _et al._ ,2004 | B3PW/P2 | 3.96 | 3.74 | 3.63 | 3.35
| B3LYP/P2 | 3.89 | 3.67 | 3.57 | 3.30
Ref. Heifets _et al._ ,2006b | B3PW/P2 | 4.02 | 3.80 | 3.70 | 3.42
Ref. Zhukovskii _et al._ ,2009 | B3PW/P2 | — | — | 3.63 | 3.35
Ref. Bilc _et al._ ,2008 | B1-WC/P2111P2 basis set with all electrons for Ti, basis set correction cannot be applied. | 3.91 | | 3.57 |
Table LABEL:tab:all_litt summarizes the calculated band gaps of HSE06/TZVP and
compares them with previously published gaps computed with the regular hybrids
B3PW and B3LYP, done with the P2 basis set. There are noticeable differences
between the results of HSE06 and the regular hybrids, with HSE06/TZVP giving
band gaps very close to experiment while regular hybrids used with P2
overestimate the gap, especially the indirect band gap. The band gap
overestimation is of the same magnitude as we observed in section IV.2 for HSE06/P2
as well as all the other functionals tested on STO with P2. This suggests that
P2 is also behind the band gap overestimation in the regular hybrids data
reported in the literature.Piskunov _et al._ (2004); Heifets _et al._
(2006b); Zhukovskii _et al._ (2009) By comparing the P2 and TZVP band gaps
from table LABEL:tab:elec_cub, we can deduce that the P2 basis set has an
effect (versus a large basis set) of increasing the direct and indirect band
gaps by average values of 0.22 and 0.28 eV respectively. By applying this
P2$\rightarrow$TZVP basis set correction to the regular hybrid B3PW/P2 and
B3LYP/P2 band gaps (see the corrected values in table LABEL:tab:all_litt),
band gaps are brought closer to the experimental values, and thus closer to
the HSE06/TZVP results as well. Consequently, the differences in the computed electronic properties of HSE06, B3PW and B3LYP are considerably attenuated, suggesting that the screened hybrid HSE06 is comparable in accuracy to regular hybrids for STO, while being much more computationally efficient.
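The correction itself is trivial to apply; the short sketch below is our own convenience snippet (the P2-basis gaps are taken from Table 6 and the average shifts from the discussion above), and it reproduces the corrected estimates listed in Table 6 to within 0.01 eV.

```python
# Sketch: applying the average P2 -> TZVP basis-set shifts (direct gaps lowered
# by 0.22 eV, indirect gaps by 0.28 eV) to regular-hybrid band gaps published
# with the P2 basis set. Input values (in eV) are taken from Table 6.
SHIFT = {"direct": -0.22, "indirect": -0.28}

published_p2 = {                       # (direct, indirect) gaps with P2
    "B3PW, Piskunov et al. (2004)":  (3.96, 3.63),
    "B3LYP, Piskunov et al. (2004)": (3.89, 3.57),
    "B3PW, Heifets et al. (2006)":   (4.02, 3.70),
}
experiment = (3.75, 3.25)              # van Benthem et al. (2001)

for label, (eg_d, eg_i) in published_p2.items():
    corrected = (eg_d + SHIFT["direct"], eg_i + SHIFT["indirect"])
    print(f"{label}: {corrected[0]:.2f} / {corrected[1]:.2f} eV "
          f"(experiment {experiment[0]:.2f} / {experiment[1]:.2f} eV)")
```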
The final issue to discuss is the comparison of the structural and elastic
properties of STO computed with HSE06 versus regular hybrids. Perovskite
crystals in the cubic structure have only three independent elastic constants,
namely C11, C12 and C44, as well as a bulk modulus:
$B=\frac{1}{3}(C_{11}+2C_{12})$ (2)
We calculated the elastic constants of STO using HSE06/TZVP, following the methodology described in Ref. Wu _et al._ , 2005. Ideally we would like to compare our cubic elastic constants, calculated at 0 K, with low-temperature data, but experimentally the cubic structure turns into a tetragonal structure below the transition temperature, making any comparison of this kind impossible. Experimentally, Bell and Rupprecht Bell and Rupprecht (1963) found
that the elastic constants of STO measured between 303 and 112 K obey the
following empirical relations:
$\displaystyle C_{11}=334.1\left[1-2.62\times 10^{-4}(T-T_{a})-\frac{0.0992}{(T-T_{a})}\right]$ (3a)
$\displaystyle C_{12}=104.9\left[1-1.23\times 10^{-4}(T-T_{a})+\frac{0.1064}{(T-T_{a})}\right]$ (3b)
$\displaystyle C_{44}=126.7\left[1-1.30\times 10^{-4}(T-T_{a})-\frac{0.1242}{(T-T_{a})}\right]$ (3c)
where the elastic constants are in GPa, $T$ is the temperature and $T_{a}$=108 K is the critical temperature. C11 and C44 reach their maximum values at 133 K, where STO is still cubic, and then start to decrease as $-1/(T-T_{a})$ in the region around the transition temperature; in contrast, C12 continues to increase as $1/(T-T_{a})$ in the same temperature range.
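For convenience, the empirical relations (3a)-(3c) and Eq. (2) can be evaluated directly; the short sketch below is our own helper (not part of the original analysis) and gives values close to the 133 K and room-temperature reference entries used for comparison in Table 7.

```python
# Convenience sketch: evaluating the Bell and Rupprecht empirical relations
# (3a)-(3c) and the bulk modulus of Eq. (2) at room temperature and at 133 K,
# where C11 and C44 reach their maximum values.
def elastic_constants_GPa(T, Ta=108.0):
    """Cubic STO elastic constants (GPa), valid roughly for 112 K < T < 303 K."""
    dT = T - Ta
    C11 = 334.1 * (1.0 - 2.62e-4 * dT - 0.0992 / dT)
    C12 = 104.9 * (1.0 - 1.23e-4 * dT + 0.1064 / dT)
    C44 = 126.7 * (1.0 - 1.30e-4 * dT - 0.1242 / dT)
    return C11, C12, C44

for T in (298.0, 133.0):
    C11, C12, C44 = elastic_constants_GPa(T)
    B = (C11 + 2.0 * C12) / 3.0          # Eq. (2)
    print(f"T = {T:5.1f} K: C11 = {C11:6.1f}, C12 = {C12:6.1f}, "
          f"C44 = {C44:6.1f}, B = {B:6.1f} GPa")
```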
Since we do not know at which temperature the change from the cubic to the tetragonal phase begins to take place, it is better to limit our comparison to data measured at 133 K and above.
compares them with experiment as well as previously published results with
B3PW/P2 and B3LYP/P2. HSE06/TZVP provides excellent lattice constants but
predicts the bulk modulus to be 8% higher than experiment. The elastic
constants from HSE06/TZVP overestimate the experimental data at room
temperature by 10% and the 133 K data by 6%; this was expected given the
overestimation of the bulk modulus. The B3PW hybrid also gave a very good lattice constant and bulk modulus, but its calculated elastic constants are lower than both the room-temperature and the low-temperature experimental values. B3LYP predicted a lattice constant that is 1% too high and a good bulk modulus, and offers the best agreement with the low-temperature elastic constants. In summary, none of the screened or regular hybrids considered was able to give simultaneously excellent bulk moduli and elastic constants; still, HSE06/TZVP offers the best compromise between accuracy and computational efficiency.
Table 7: Calculated elastic constants with HSE06/TZVP for cubic STO compared to experiment and previously published results with the regular hybrid functional B3PW and the P2 basis set from Ref. Piskunov _et al._ , 2004. a0 is in Å, B, C11, C12 and C44 are in GPa | $a_{0}$ | B | C11 | C12 | C44
---|---|---|---|---|---
HSE06/TZVP | 3.902 | 193 | 351.4 | 113 | 137.3
B3PW/P2 | 3.900 | 177 | 316 | 92.7 | 120.1
B3LYP/P2 | 3.940 | 177 | 328.3 | 105.7 | 124.6
Exp. | 3.890 Abramov _et al._ (1995) | 179 Abramov _et al._ (1995) | 317.2 | 102.5 | 123.5111Ref. Bell and Rupprecht, 1963 at room temperature.
| 3.900 Hellwege and Hellwege (1969) | 179$\pm{4.6}$ Fischer _et al._ (1993) | 330 | 105 | 126222Ref. Bell and Rupprecht, 1963: max. measured values for C11 and C44 at 133 K, C12 increase further as temperature drop.
| 3.910 | 184 | | | 128333Landolt–Börnstein Group III Condensed Matter 2002 vol 36, subvol V (Berlin: Springer) chapter 1A (Simple Perovskyte-Type Oxides) pp 116–47
## VI Conclusion
We used the ab-initio code gaussian to simulate the properties of SrTiO3 (STO)
using a large spectrum of functionals, from LSDA, GGAs (PBE and PBEsol) and
meta-GGAs (M06L and revTPSS) to modern range-separated hybrid functionals
(HSE06 and HISS), assessing their ability to predict the properties of the cubic and the AFD phases of STO.
We found that pure DFT functionals tend to overestimate the octahedral
rotation angles of the AFD phase, in agreement with previously reported
results in the literature using plane-wave basis sets of comparable quality.
Wahl _et al._ (2008) Also, basis sets of low quality tend to inhibit the
tetragonality of the AFD phase and sometimes even suppress it, regardless of
the functional used. We therefore constructed a localized basis set of
sufficient completeness (or size) to correctly simulate the TiO6 octahedral
rotation and the cubic phases of STO. We also evaluated the band gap errors
arising from the use of the P2 basis set and from the magnitude of the HSE screening
parameter $\omega$. By applying our basis set and $\omega$ corrections to the
previously published work with regular and screened hybrid functionals on STO,
we showed that the discrepancies between published simulated data can be
explained and that hybrid functionals used with sufficiently big Gaussian-type
basis sets can give results comparable with plane-wave calculations and in
excellent agreement with experiment.
The screened hybrid functional HSE06 predicts the electronic and structural
properties of the cubic and AFD phases in very good agreement with experiment, especially when used with the high-quality TZVP basis set. HSE06/TZVP is the most reliable combination of functional and Gaussian basis set for STO that is computationally tractable with current computer power. It is accurate enough to enable us to understand the changes in the band structure during the cubic to AFD phase transition. The success of HSE06/TZVP encourages its use on more complicated cases such as bond breaking, overbinding and defect formation, where basis set completeness is expected to play a major role.
###### Acknowledgements.
This work is supported by the Qatar National Research Fund (QNRF) through the
National Priorities Research Program (NPRP 481007-20000). We are thankful to
Cris V. Diaconu for the technical support with the band structure code in
gaussian. We are grateful to the research computing facilities at Texas A&M University at Qatar for generous allocations of computer resources.
## References
* Ueno _et al._ (2008) K. Ueno, S. Nakamura, H. Shimotani, A. Ohtomo, N. Kimura, T. Nojima, H. Aoki, Y. Iwasa, and M. Kawasaki, Nat Mater 7, 855 (2008).
* Kan _et al._ (2005) D. Kan, T. Tetashima, R. Kanda, A. Masuno, K. Tanaka, S. Chu, H. Kan, A. Ishizumi, Y. Kanemitsu, Y. Shimakawa, and M. Takano, Nat Mater 4, 816 (2005).
* Zhou _et al._ (2009) N. Zhou, K. Zhao, H. Liu, Z. Lu, H. Zhao, L. Tian, W. Liu, and S. Zhao, J. Appl. Phys. 105, 083110 (2009).
* Reyren _et al._ (2007) N. Reyren, S. Thiel, A. D. Caviglia, L. F. Kourkoutis, G. Hammerl, C. Richter, C. W. Schneider, T. Kopp, A. S. Rueetschi, D. Jaccard, M. Gabay, D. A. Muller, J. M. Triscone, and J. Mannhart, Science 317, 1196 (2007).
* Caviglia _et al._ (2008) A. D. Caviglia, S. Gariglio, N. Reyren, D. Jaccard, T. Schneider, M. Gabay, S. Thiel, G. Hammerl, J. Mannhart, and J. M. Triscone, Nature 456, 624 (2008).
* Kozuka _et al._ (2009) Y. Kozuka, M. Kim, C. Bell, B. G. Kim, Y. Hikita, and H. Y. Hwang, Nature 462, 487 (2009).
* Gao _et al._ (2009) G. M. Gao, C. L. Chen, L. A. Han, and X. S. Cao, J. Appl. Phys. 105, 033707/1 (2009).
* Pentcheva and Pickett (2010) R. Pentcheva and W. E. Pickett, J. Phys.: Condens. Matter 22, 043001/1 (2010).
* Borisevich _et al._ (2010) A. Y. Borisevich, H. J. Chang, M. Huijben, M. P. Oxley, S. Okamoto, M. K. Niranjan, J. D. Burton, E. Y. Tsymbal, Y. H. Chu, P. Yu, R. Ramesh, S. V. Kalinin, and S. J. Pennycook, Phys. Rev. Lett. 105, 087204 (2010).
* Chang _et al._ (2010) Y. J. Chang, A. Bostwick, Y. S. Kim, K. Horn, and E. Rotenberg, Phys. Rev. B 81, 235109 (2010).
* Unoki and Sakudo (1967) H. Unoki and T. Sakudo, J. Phys. Soc. Jpn. 23, 546 (1967).
* Jauch and Palmer (1999) W. Jauch and A. Palmer, Phys. Rev. B 60, 2961 (1999).
* Note (1) As the standard DFT calculations done in this article do not include temperature, we take the 0 K experimental/target value to be 2.1°.
* Cao _et al._ (2000) L. Cao, E. Sozontov, and J. Zegenhagen, Phys. Status Solidi A 181, 387 (2000).
* Heidemann and Wettengel (1973) A. Heidemann and H. Wettengel, Z. Phys. 258, 429 (1973).
* Note (2) As temperature is not included in the standard DFT work done here, we take the 0 K experimental/target value to be 1.0009.
* He _et al._ (2004) F. He, B. O. Wells, Z. G. Ban, S. P. Alpay, S. Grenier, S. M. Shapiro, W. Si, A. Clark, and X. X. Xi, Phys. Rev. B 70, 235405 (2004).
* He _et al._ (2005) F. He, B. O. Wells, and S. M. Shapiro, Phys. Rev. Lett. 94, 176101 (2005).
* He _et al._ (2003) F. He, B. O. Wells, S. M. Shapiro, M. v. Zimmermann, A. Clark, and X. X. Xi, Applied Physics Letters 83, 123 (2003).
* Zhukovskii _et al._ (2009) Y. F. Zhukovskii, E. A. Kotomin, S. Piskunov, and D. E. Ellis, Solid State Communications 149, 1359 (2009).
* Eglitis and Vanderbilt (2008) R. I. Eglitis and D. Vanderbilt, Phys. Rev. B 77, 195408 (2008).
* Heifets _et al._ (2006a) E. Heifets, E. Kotomin, and V. A. Trepakov, J. Phys. cond. Matter 18, 4845 (2006a).
* Wahl _et al._ (2008) R. Wahl, D. Vogtenhuber, and G. Kresse, Phys. Rev. B 78, 104116 (2008).
* Sai and Vanderbilt (2000) N. Sai and D. Vanderbilt, Phys. Rev. B 62, 13942 (2000).
* Uchida _et al._ (2003) K. Uchida, S. Tsuneyuki, and T. Schimizu, Phys. Rev. B 68, 174107 (2003).
* Vosko _et al._ (1980) S. H. Vosko, L. Wilk, and M. Nusair, Can. J. Phys. 58, 1200 (1980).
* Perdew _et al._ (1996) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
* Perdew _et al._ (1997) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 78, 1396 (1997).
* Staroverov _et al._ (2003) V. N. Staroverov, G. E. Scuseria, J. Tao, and J. P. Perdew, J. Chem. Phys. 119, 12129 (2003).
* Staroverov _et al._ (2004) V. N. Staroverov, G. E. Scuseria, J. Tao, and J. P. Perdew, J. Chem. Phys. 121, 11507 (2004).
* Mori-Sánchez _et al._ (2008) P. Mori-Sánchez, A. J. Cohen, and W. Yang, Phys. Rev. Lett. 100, 146401 (2008).
* Rondinelli and Spaldin (2010) J. M. Rondinelli and N. A. Spaldin, Phys. Rev. B 82, 113402 (2010).
* Piskunov _et al._ (2004) S. Piskunov, E. Heifets, R. I. Eglitis, and G. Borstel, Comput. Mater. Sci. 29, 165 (2004).
* Janesko _et al._ (2009) B. G. Janesko, T. M. Henderson, and G. E. Scuseria, Phys. Chem. Chem. Phys. 11, 443 (2009).
* Becke (1993) A. D. Becke, J. Chem. Phys. 98, 5648 (1993).
* Lee _et al._ (1988) C. Lee, W. Yang, and R. G. Parr, Phys. Rev. B 37, 785 (1988).
* Heyd _et al._ (2003) J. Heyd, G. E. Scuseria, and M. Ernzerhof, Journal of Chemical Physics 118, 8207 (2003).
* Heyd _et al._ (2006) J. Heyd, G. E. Scuseria, and M. Ernzerhof, Journal of Chemical Physics 124, 219906 (2006).
* (39) M. J. Frisch, G. W. Trucks, G. E. Schlegel, H. B.and Scuseria, M. A. Robb, J. R. Cheeseman, G. Scalmani, V. Barone, B. Mennucci, G. A. Petersson, H. Nakatsuji, M. Caricato, X. Li, H. P. Hratchian, A. F. Izmaylov, J. Bloino, G. Zheng, J. L. Sonnenberg, M. Hada, M. Ehara, K. Toyota, R. Fukuda, J. Hasegawa, M. Ishida, T. Nakajima, Y. Honda, O. Kitao, H. Nakai, T. Vreven, J. A. Montgomery, Jr., J. E. Peralta, F. Ogliaro, M. Bearpark, J. J. Heyd, E. Brothers, V. N. Kudin, K. N.and Staroverov, R. Kobayashi, J. Normand, K. Raghavachari, A. Rendell, J. C. Burant, S. S. Iyengar, J. Tomasi, M. Cossi, N. Rega, J. M. Millam, M. Klene, J. E. Knox, J. B. Cross, V. Bakken, C. Adamo, J. Jaramillo, R. Gomperts, R. E. Stratmann, O. Yazyev, A. J. Austin, R. Cammi, C. Pomelli, R. L. Ochterski, J. W.and Martin, K. Morokuma, V. G. Zakrzewski, G. A. Voth, P. Salvador, J. J. Dannenberg, S. Dapprich, A. D. Daniels, O. Farkas, J. B. Foresman, J. V. Ortiz, J. Cioslowski, and D. J. Fox, “Gaussian development version, revision h.07+,” .
* Kudin and Scuseria (2000) K. N. Kudin and G. E. Scuseria, Phys. Rev. B 61, 16440 (2000).
* Kudin and Scuseria (1998a) K. N. Kudin and G. E. Scuseria, Chemical Physics Letters 289, 611 (1998a).
* Kudin and Scuseria (1998b) K. N. Kudin and G. E. Scuseria, Chemical Physics Letters 283, 61 (1998b).
* Tao _et al._ (2003) J. Tao, J. P. Perdew, V. N. Staroverov, and G. E. Scuseria, Phys. Rev. Lett. 91, 146401 (2003).
* Perdew _et al._ (2009) J. P. Perdew, A. Ruzsinszky, G. I. Csonka, L. A. Constantin, and J. Sun, Phys. Rev. Lett. 103, 026403/1 (2009).
* Zhao and Truhlar (2006) Y. Zhao and D. G. Truhlar, J. Chem. Phys. 125, 194101 (2006).
* Zhao and Truhlar (2008) Y. Zhao and D. G. Truhlar, J. Chem. Phys. 128, 184109 (2008).
* Krukau _et al._ (2006) A. V. Krukau, O. A. Vydrov, A. F. Izmaylov, and G. E. Scuseria, J. Chem. Phys. 125, 224106/1 (2006).
* Note (3) Originally introduced as HISS-B in Ref Henderson:2007 and called simply HISS as in Ref. Henderson:2008.
* Henderson _et al._ (2007) T. M. Henderson, A. F. Izmaylov, G. E. Scuseria, and A. Savin, J. Chem. Phys. 127, 221103 (2007).
* Henderson _et al._ (2008) T. M. Henderson, A. F. Izmaylov, G. E. Scuseria, and A. Savin, J. Chem. Theory Comput. 4, 1254 (2008).
* Note (4) The standard RMS force threshold in gaussian for geometry optimizations is 450$\times 10^{-6}$ Hartrees/Bohr. Using “verytight” convergence, this becomes 1$\times 10^{-6}$.
* Johnson _et al._ (2009) E. R. Johnson, A. D. Becke, C. D. Sherrill, and G. A. DiLabio, The Journal of Chemical Physics 131, 034111 (2009).
* Wheeler and Houk (2010) S. E. Wheeler and K. N. Houk, Journal of Chemical Theory and Computation 6, 395 (2010), http://pubs.acs.org/doi/pdf/10.1021/ct900639j .
* Note (5) Reciprocal space integration used 12$\times$12$\times$12 k -point mesh for the cubic unit cell, while for the larger AFD supercell, the default k -point mesh of 8$\times$8$\times$6 was found to be sufficient.
* Note (6) Because we did geometry optimization, this was by default set to “tight”, or 10-8.
* Abramov _et al._ (1995) Y. A. Abramov, V. G. Tsirelson, V. E. Zavodnik, S. A. Ivanov, and I. D. Brown, Acta Crystallographica Section B 51, 942 (1995).
* (57) Http://www.fiz-karlsruhe.de/icsd.htm.
* (58) R. Dennington, T. Keith, and J. Millam, “Gaussview Version 5,” Semichem Inc. Shawnee Mission KS 2009.
* Piskunov _et al._ (2000) S. Piskunov, Y. F. Zhukovskii, E. A. Kotomin, and Y. N. Shunin, Comput. Modell. New Technol. 4, 7 (2000).
* Hay and Wadt (1984a) J. P. Hay and R. W. Wadt, J. Chem. Phys 82, 270 (1984a).
* Hay and Wadt (1984b) J. P. Hay and R. W. Wadt, J. Chem. Phys 82, 284 (1984b).
* Hay and Wadt (1984c) J. P. Hay and R. W. Wadt, J. Chem. Phys 82, 299 (1984c).
* van Benthem _et al._ (2001) K. van Benthem, C. Elsasser, and R. H. French, Journal of Applied Physics 90, 6156 (2001).
* Hellwege and Hellwege (1969) K. H. Hellwege and A. M. Hellwege, “Ferroelectrics and related substances, landolt-börnstein, new series, group iii,” (Springer Verlag, Berlin, 1969).
* Fischer _et al._ (1993) G. J. Fischer, Z. Wang, and S. Karato, Phys. Chem. Miner. 20, 97 (1993).
* Weigend and Ahlrichs (2005) F. Weigend and R. Ahlrichs, Phys. Chem. Chem. Phys. 7, 3297 (2005).
* Heyd _et al._ (2005) J. Heyd, J. E. Peralta, G. E. Scuseria, and R. L. Martin, The Journal of Chemical Physics 123, 174101 (2005).
* Strain _et al._ (1996) M. C. Strain, G. E. Scuseria, and M. J. Frisch, Science 271, 51 (1996).
* Weigend (2006) F. Weigend, Phys. Chem. Chem. Phys. 8, 1057 (2006).
* Kaupp _et al._ (1991) M. Kaupp, P. v. R. Schleyer, H. Stoll, and H. Preuss, J. Chem. Phys. 94, 1360 (1991).
* Yamada and Kanemitsu (2010) Y. Yamada and Y. Kanemitsu, Phys. Rev. B 82, 121103 (2010).
* Hasegawa _et al._ (2000) T. Hasegawa, M. Shirai, and K. Tanaka, Journal of Luminescence 87-89, 1217 (2000).
* Hong _et al._ (2010) J. Hong, G. Catalan, J. F. Scott, and E. Artacho, J. Phys.: Condens. Matter 22, 112201/1 (2010).
* Hamann and Vanderbilt (2009) D. R. Hamann and D. Vanderbilt, Phys. Rev. B 79, 045109 (2009).
* Heifets _et al._ (2006b) E. Heifets, E. Kotomin, and V. A. Trepakov, J. Phys.: Condens. Matter 18, 4845 (2006b).
* Bilc _et al._ (2008) D. I. Bilc, R. Orlando, R. Shaltaf, G. M. Rignanese, J. Íñiguez, and P. Ghosez, Phys. Rev. B 77, 165107 (2008).
* Wu _et al._ (2005) Z. Wu, X.-J. Chen, V. V. Struzhkin, and R. E. Cohen, Phys. Rev. B 71, 214103 (2005).
* Bell and Rupprecht (1963) R. O. Bell and G. Rupprecht, Phys. Rev. 129, 90 (1963).
|
arxiv-papers
| 2011-05-17T12:31:19 |
2024-09-04T02:49:18.802221
|
{
"license": "Public Domain",
"authors": "Fadwa El-Mellouhi, Edward N. Brothers, Melissa J. Lucero, and Gustavo\n E. Scuseria",
"submitter": "Fedwa El-Mellouhi",
"url": "https://arxiv.org/abs/1105.3353"
}
|
1105.3406
|
# Amenability and vanishing of $L^{2}$-Betti numbers:
an operator algebraic approach
Vadim Alekseev, Mathematisches Institut, Georg-August-Universität Göttingen, Bunsenstraße 3-5, D-37073 Göttingen, Germany. alekseev@uni-math.gwdg.de
David Kyed, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, B-3001 Leuven, Belgium. David.Kyed@wis.kuleuven.be
###### Abstract.
We introduce a Følner condition for dense subalgebras in finite von Neumann
algebras and prove that it implies dimension flatness of the inclusion in
question. It is furthermore proved that the Følner condition naturally
generalizes the existing notions of amenability and that the ambient von
Neumann algebra of a Følner algebra is automatically injective. As an
application, we show how our techniques unify previously known results
concerning vanishing of $L^{2}$-Betti numbers for amenable groups, quantum
groups and groupoids and moreover provide a large class of new examples of
algebras with vanishing $L^{2}$-Betti numbers.
###### Key words and phrases:
Amenability, $L^{2}$-Betti numbers, operator algebras
###### 2010 Mathematics Subject Classification:
46L10, 43A07, 18G15
The research of the second named author is supported by The Danish Council for
Independent Research $|$ Natural Sciences and the ERC Starting Grant
VNALG-200749
## 1\. Introduction
$L^{2}$-Betti numbers originated from topology, but have later unveiled an
interesting connection to operator algebras. Indeed, while the original
definition of $L^{2}$-Betti numbers for groups involved a geometric
construction [Ati76, CG86], it was later shown by Lück (see also [Far98]) that
they can also be described in terms of certain $\operatorname{Tor}$ modules —
thus using only homological algebra and modules over von Neumann algebras.
This has led to several generalizations of $L^{2}$-Betti numbers: they have
been defined for discrete measured groupoids [Sau05], quantum groups [Kye08b]
and general subalgebras of finite von Neumann algebras [CS05]. Similarly,
$L^{2}$-Betti numbers were introduced in the setting of equivalence relations
in [Gab02] following the original geometric approach, but as shown in [Sau05]
these can also be expressed in terms of homological algebra.
There are two properties which attract attention in all these situations.
Firstly, in each case the definition involves certain operator algebras
canonically associated with the situation at hand. Secondly, for groups,
quantum groups and groupoids it is well known that the $L^{2}$-Betti numbers
vanish in the presence of amenability. In view of this, it is natural to seek
a common operator algebraic reason for this to happen. In doing so, one
firstly observes that the actual reason for the vanishing of $L^{2}$-Betti
numbers for amenable groups and quantum groups is a certain dimension-flatness
property of an inclusion $\mathcal{A}\subseteq M$, where $\mathcal{A}$ is a
strongly dense $\ast$-subalgebra of a finite von Neumann algebra $M$.
Secondly, the key to the proof of this dimension-flatness result is a Følner
condition for the notion of amenability in question. In the present paper we
introduce a Følner type condition for a general weakly dense $*$-subalgebra
$\mathcal{A}$ in a tracial von Neumann algebra $(M,\tau)$ and show how this
leads to dimension-flatness of the inclusion $\mathcal{A}\subseteq M$ and
subsequently to the vanishing of the operator algebraic $L^{2}$-Betti numbers
$\beta_{p}^{(2)}(\mathcal{A},\tau)$. This approach unifies the above mentioned
vanishing results and furthermore provides a large class of new examples of
algebras with vanishing $L^{2}$-Betti numbers (see Section 6). More precisely,
we prove the following:
###### Theorem (see Theorem 4.4 & Corollary 6.5).
If $\mathcal{A}\subseteq M$ satisfies the Følner condition then for any left
$\mathcal{A}$-module $X$ and any $p\geqslant 1$ we have
$\dim_{M}\operatorname{Tor}_{p}^{\mathcal{A}}(M,X)=0,$
and the Connes-Shlyakhtenko $L^{2}$-Betti numbers of $\mathcal{A}$ vanish in
positive degrees.
Secondly, we link our Følner condition to the classical operator algebraic
notion of amenability by proving (a slightly more general version of) the
following:
###### Theorem (see Theorem 5.1).
If $\mathcal{A}\subseteq M$ satisfies the Følner condition then $M$ is injective.
The Følner condition consists of two requirements: an almost invariance
property and a trace approximation property. The almost invariance property
requires that the left action of $\mathcal{A}$ on $L^{2}(M)$ admits almost
invariant subspaces, the almost invariance being measured by a dimension
function. The naive approach would be to use the usual dimension over
$\mathbb{C}$ and require these subspaces to be finite-dimensional; however, it
turns out that the dimension theory over a von Neumann subalgebra
$N\subseteq\mathcal{A}$ can be efficiently applied in order to allow for
almost invariant subspaces which are finitely generated projective $N$-modules
(thus usually having infinite dimension over $\mathbb{C}$). For this reason,
parts of the paper deal with results related to dimension theory for modules
over von Neumann algebras; the reader not familiar with these notions may,
without losing the essential ideas, think of the case $N={\mathbb{C}}$, which
reduces most arguments to finite dimensional linear algebra.
_Structure:_ The paper is organized as follows. In the second section we
recapitulate Lück’s dimension theory for modules over von Neumann algebras and
prove a few results concerning a relative dimension function in this context.
In the third section we introduce the operator-algebraic Følner condition and
draw some easy consequences, preparing for the proof of the main theorem which
is given in the fourth section. The fifth section is devoted to the discussion
concerning the relationship between the Følner condition for an algebra and
injectivity of its enveloping von Neumann algebra. In the sixth section we
discuss examples and show how the Følner condition implies vanishing of
$L^{2}$-Betti numbers in a variety of different instances.
_Assumptions:_ Throughout the paper, all generic von Neumann algebras are
assumed to have separable predual. Our main focus will be on finite von
Neumann algebras, for which separability of the predual is equivalent to
separability of the von Neumann algebra itself for the strong, weak and ultra-
weak topology. These topologies are furthermore all metrizable on bounded
subsets of $M$. In order to be consistent with the general Hilbert
$C^{*}$-module terminology (see below) inner products on Hilbert spaces are
assumed to be linear in the second variable and conjugate linear in the first.
Algebraic tensor products are denoted “$\odot$”, tensor products of Hilbert
spaces by “$\otimes$” and tensor products of von Neumann algebras by
“$\bar{\otimes}$”. Our main setup will consist of an inclusion $N\subseteq M$
of finite von Neumann algebras with a fixed normal tracial state $\tau$ on
$M$. The von Neumann algebra $N$ should be thought of as a “coefficient
algebra” and will in many applications be equal to ${\mathbb{C}}$. We denote
by $H$ the GNS-space $L^{2}(M,\tau)$, by $J\colon H\to H$ the modular
conjugation arising from $\tau$ and by $E\colon M\to N$ the $\tau$-preserving
conditional expectation. Note that $H$ automatically carries a normal right
$N$-action given by $\xi\cdot x:=Jx^{*}J\xi$ which extends the natural right
$N$-module structure on $M$.
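To spell out the last assertion (an elementary computation, included here only for convenience): since $J\Lambda(a)=\Lambda(a^{*})$ for $a\in M$, one has for $m\in M$ and $x\in N$ that
$\Lambda(m)\cdot x=Jx^{*}J\Lambda(m)=Jx^{*}\Lambda(m^{*})=J\Lambda(x^{*}m^{*})=\Lambda(mx),$
so the action $\xi\cdot x:=Jx^{*}J\xi$ indeed restricts to right multiplication on $\Lambda(M)$.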
_Acknowledgements:_ The authors would like to thank Thomas Schick for a
crucial remark at an early stage in the project and Stefaan Vaes for
suggesting an improvement of Theorem 5.1. Moreover, thanks are due to Étienne
Blanchard, Henrik D. Petersen and the anonymous referee for numerous valuable
corrections and suggestions.
## 2\. Dimension theory
In this section we recapitulate parts of the von Neumann dimension theory for
modules over a finite von Neumann algebra and introduce a relative version of
this notion of dimension. The relative dimension function will play a
prominent role in the definition of the Følner condition given in Section 3.
### 2.1. Dimension of modules and the trace on endomorphisms
Consider a finite von Neumann algebra $N$ endowed with a normal, faithful
tracial state $\tau$ and denote by $L^{2}(N)$ the GNS-space arising from
$\tau$ and by $N^{{\operatorname{{op}}}}$ the opposite von Neumann algebra.
Denoting by $\Lambda$ the natural embedding of $N$ into $L^{2}(N)$ the inner
product in $L^{2}(N)$ is therefore given by
$\left\langle\Lambda(x),\Lambda(y)\right\rangle=\tau(x^{*}y)$ for $x,y\in N$.
In what follows, we will often suppress the map $\Lambda$ and simply consider
$N$ as a subspace of $L^{2}(N)$. A Hilbert space $F$ endowed with a normal
$*$-representation of $N^{{\operatorname{{op}}}}$ is called a _finitely
generated normal (right) $N$-module_ if there exists an isometric
$N^{{\operatorname{{op}}}}$-equivariant embedding of $F$ into $L^{2}(N)^{k}$
for some finite integer $k$, where $L^{2}(N)^{k}$ is considered with the
standard $N^{{\operatorname{{op}}}}$-action given by
$a^{{\operatorname{{op}}}}\colon(\xi_{1},\dots,\xi_{k})\mapsto(Ja^{*}J\xi_{1},\dots,Ja^{*}J\xi_{k}).$
Thus, any finitely generated normal $N$-module $F$ is isomorphic to one of the
form $pL^{2}(N)^{k}$ for a projection $p\in{\mathbb{M}}_{k}(N)$ and _the von
Neumann dimension_ of $F$ is then defined as
$\dim_{N}F:=\sum_{i=1}^{k}\tau(p_{ii}).$
The number $\dim_{N}F$ is independent of the choice of $p$ and $k$ (see e.g.
[Lüc02, Section 1.1.2]) and thus depends only on the isomorphism class of $F$.
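For illustration, consider the standard commutative example $N=L^{\infty}([0,1])$ with $\tau(f)=\int_{0}^{1}f(t)\,dt$ (this particular choice is made only for the sake of the example). The characteristic function $p=\chi_{[0,\frac{1}{2}]}$ is a projection in $N$ and $F=pL^{2}(N)$ is a finitely generated normal $N$-module with
$\dim_{N}F=\tau(p)=\tfrac{1}{2},$
so, in contrast with the case $N={\mathbb{C}}$, the von Neumann dimension is generally not an integer.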
###### Remark 2.1.
In most of the existing literature, finitely generated normal right modules
over $N$ are called _finitely generated Hilbert $N$-modules_, but since we are
going to consider finitely generated Hilbert $C^{*}$-modules over $N$ (in the
operator algebraic sense [Lan95]) as well, we have chosen the term “normal
module” in order to avoid unnecessary confusion. We advise the reader not
familiar with these notions to consult the article [Fra01] where a detailed
comparison is carried out.
Next we recall the construction of the trace on the endomorphism von Neumann
algebra $\operatorname{End}_{N}(F)$ associated with a finitely generated
normal $N$-module $F$; here, and in what follows, $\operatorname{End}_{N}(F)$
denotes the von Neumann algebra of all bounded operators on $F$ commuting with
the $N$-action. The trace on $\operatorname{End}_{N}(F)$ was previously
considered by Lück [Lüc02, Section 1.1] and also by Farber [Far98] in the more
general context of von Neumann categories.
###### Lemma 2.2 ([Lüc02, Section 1.1.3]).
Let $F$ be a finitely generated normal $N$-module and consider an operator
$T\in\operatorname{End}_{N}(F)$. Upon choosing an isomorphism $F\simeq
pL^{2}(N)^{k}$ we obtain an induced $*$-isomorphism
$\operatorname{End}_{N}(F)\simeq p{\mathbb{M}}_{k}(N)p$ and we denote the
matrix representing $T$ by $(T_{ij})_{i,j=1}^{k}$. Then the number
$\operatorname{Tr}_{N}(T):=\sum_{i=1}^{k}\tau(T_{ii})$ does not depend on the
choice of isomorphism and defines a normal, faithful, positive trace on
$\operatorname{End}_{N}(F)$ with
$\operatorname{Tr}_{N}(\operatorname{id}_{F})=\dim_{N}F$.
###### Remark 2.3.
In the case $N={\mathbb{C}}$ the normal $N$-module $F$ is simply a finite
dimensional vector space and hence $\operatorname{Tr}_{N}(-)$ is just the
standard, non-normalized trace on $B(F)$.
The first step towards extending the notion of dimension to arbitrary
$N$-modules is to pass from normal modules to Hilbert $C^{*}$-modules over
$N$.
###### Definition 2.4.
A (right) Hilbert $C^{*}$-module over $N$ consists of an algebraic right
$N$-module $\mathcal{X}$ together with a map
$\left\langle\cdot,\cdot\right\rangle_{N}\colon\mathcal{X}\times\mathcal{X}\to
N$ such that
* (i)
For all $x\in\mathcal{X}$ we have ${\left\langle
x,x\right\rangle}_{N}\geqslant 0$ and ${\left\langle x,x\right\rangle}_{N}=0$
only for $x=0$.
* (ii)
For all $x,y\in\mathcal{X}$ we have $\left\langle
x,y\right\rangle_{N}=\left\langle y,x\right\rangle_{N}^{*}$.
* (iii)
For all $x,y,z\in\mathcal{X}$ and $a,b\in N$ we have $\left\langle
z,xa+yb\right\rangle_{N}=\left\langle z,x\right\rangle_{N}a+\left\langle
z,y\right\rangle_{N}b$.
* (iv)
The space $\mathcal{X}$ is complete with respect to the norm
$\|x\|:=\|\left\langle x,x\right\rangle_{N}\|^{\frac{1}{2}}$.
A Hilbert $C^{*}$-module over $N$ is called _finitely generated_ if it is
algebraically finitely generated over $N$ and _projective_ if it is projective
as a right $N$-module. Moreover, the terms “Hilbert module over $N$”, “Hilbert
$C^{*}$-module over $N$” and “Hilbert $N$-module” will be used synonymously in
the sequel.
Note that the finitely generated free $N$-modules $N^{k}$ become Hilbert
$N$-modules when endowed with the $N$-valued inner product $\left\langle
x,y\right\rangle_{{\operatorname{st}}}:=\sum_{i=1}^{k}x_{i}^{*}y_{i}$; we
refer to this as the _standard_ inner product on $N^{k}$. If
$(\mathcal{P},\left\langle\cdot,\cdot\right\rangle_{N})$ is a finitely
generated projective Hilbert $N$-module then
$\tau\circ\left\langle\cdot,\cdot\right\rangle_{N}$ defines a positive
definite ${\mathbb{C}}$-valued inner product on $\mathcal{P}$ and we denote
the Hilbert space completion of $\mathcal{P}$ with respect to this inner
product by $L^{2}(\mathcal{P})$. It was shown by Lück that we get a finitely
generated normal $N$-module in this way and that this construction yields an
equivalence of categories:
###### Theorem 2.5 (Lück, [Lüc02, Lemma 6.23 & Theorem 6.24]).
Every finitely generated projective $N$-module allows an $N$-valued inner
product turning it into a Hilbert $N$-module and the inner product is unique
up to unitary isomorphism. Furthermore, Hilbert space completion constitutes
an equivalence of categories between the category of finitely generated
projective Hilbert $N$-modules and the category of finitely generated normal
$N$-modules.
We remark that a finitely generated projective Hilbert $N$-module
$(\mathcal{P},\left\langle\cdot,\cdot\right\rangle_{N})$ is automatically
self-dual; i.e. $\operatorname{Hom}_{N}(\mathcal{P},N)=\\{\left\langle
x,\cdot\right\rangle_{N}\mid x\in\mathcal{P}\\}$. This follows easily from the
uniqueness of the inner product and the obvious self-duality of the finitely
generated free Hilbert modules
$(N^{k},\left\langle\cdot,\cdot\right\rangle_{{\operatorname{st}}})$.
Furthermore, by [MT05, Lemma 2.3.7] every finitely generated Hilbert submodule
in $\mathcal{P}$ splits off orthogonally as a direct summand and is therefore,
in particular, itself finitely generated and projective. Due to Theorem 2.5,
it makes sense, for a finitely generated projective $N$-module $\mathcal{P}$,
to define the von Neumann dimension of $\mathcal{P}$ as
$\dim_{N}\mathcal{P}=\dim_{N}L^{2}(\mathcal{P})$ where $L^{2}(\mathcal{P})$ is
the Hilbert space completion relative to some choice of Hilbert $N$-module
structure on $\mathcal{P}$. Moreover, for an arbitrary (algebraic) $N$-module
$\mathcal{X}$ its von Neumann dimension is defined as
$\dim_{N}\mathcal{X}=\sup\\{\dim_{N}\mathcal{P}\mid\mathcal{P}\subseteq\mathcal{X},\mathcal{P}\
\text{finitely generated projective}\\}\in[0,\infty].$
This dimension function is no longer faithful (i.e. non-zero modules might be
zero-dimensional) but it still has remarkably good properties; one of its
prime features being additivity with respect to short exact sequences [Lüc02,
Theorem 6.7]. For a submodule $\mathcal{E}$ in an $N$-module $\mathcal{F}$ the
_algebraic closure_ $\overline{\mathcal{E}}^{{\operatorname{alg}}}$ of
$\mathcal{E}$ inside $\mathcal{F}$ is defined as the intersection of the
kernels of those homomorphisms in the dual module
$\operatorname{Hom}_{N}(\mathcal{F},N)$ that vanish on $\mathcal{E}$, and if
$\mathcal{F}$ is finitely generated we have that
$\dim_{N}\mathcal{E}=\dim_{N}\overline{\mathcal{E}}^{{\operatorname{alg}}}$
[Lüc02, Theorem 6.7].
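As an example of this lack of faithfulness (included purely for illustration), let $N=L^{\infty}([0,1])$ with the trace given by Lebesgue integration and consider the right $N$-module $\mathcal{X}=N/J$, where $J$ is the ideal of functions vanishing almost everywhere on a neighbourhood of $0$. Every $x\in\mathcal{X}$ satisfies $x\chi_{(\varepsilon,1]}=0$ for all $\varepsilon>0$, while $\tau(\chi_{(\varepsilon,1]})=1-\varepsilon$; by Sauer's local criterion [Sau05, Theorem 2.4] (which also appears in the proof of Lemma 2.6 below) this forces $\dim_{N}\mathcal{X}=0$ even though $\mathcal{X}\neq 0$.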
###### Lemma 2.6.
If $\mathcal{P}$ is a finitely generated projective Hilbert $N$-module and
$\mathcal{E}\subseteq\mathcal{P}$ a submodule then
${\mathcal{E}}^{\perp\perp}=\overline{\mathcal{E}}^{{\operatorname{alg}}}$ and
$\mathcal{E}^{\perp\perp}$ splits off orthogonally as a direct summand in
$\mathcal{P}$. Furthermore, the Hilbert space closures of $\mathcal{E}$ and
$\mathcal{E}^{\perp\perp}$ in $L^{2}(\mathcal{P})$ coincide.
###### Proof.
Since $\mathcal{P}$ is self-dual, the equality
$\mathcal{E}^{\perp\perp}=\overline{\mathcal{E}}^{\operatorname{alg}}$ follows
directly from the definition of the algebraic closure. Since $\mathcal{P}$ is
finitely generated and projective it now follows from [Lüc02, Theorem 6.7]
that the same is true for $\mathcal{E}^{\perp\perp}$ and by [MT05, Lemma
2.3.7] it therefore follows that $\mathcal{E}^{\perp\perp}$ splits off as an
orthogonal direct summand. Since
$\dim_{N}(\mathcal{E}^{\perp\perp}/\mathcal{E})=0$, Sauer’s local criterion
[Sau05, Theorem 2.4] implies that for every $x\in\mathcal{E}^{\perp\perp}$
there exists a sequence of projections $p_{i}\in N$ such that
$\lim_{i}\tau(p_{i})=1$ and $xp_{i}\in\mathcal{E}$. Thus
$\displaystyle\|x-xp_{i}\|_{2}^{2}$ $\displaystyle=\tau\left(\left\langle
x(1-p_{i}),x(1-p_{i})\right\rangle_{N}\right)$
$\displaystyle=\tau\left(\left\langle
x,x\right\rangle_{N}(1-p_{i})^{2}\right)$
$\displaystyle\leqslant\tau\left(\left\langle
x,x\right\rangle_{N}^{2}\right)^{\frac{1}{2}}\tau\left(1-p_{i}\right)^{\frac{1}{2}}\underset{i\to\infty}{\longrightarrow}0.$
Hence
$\mathcal{E}^{\perp\perp}\subseteq\overline{\mathcal{E}}^{\|\cdot\|_{2}}$ and
since $\mathcal{E}\subseteq\mathcal{E}^{\perp\perp}$ this proves that the two
closures coincide.
∎
If $(\mathcal{F},\left\langle\cdot,\cdot\right\rangle_{N})$ is a finitely
generated projective Hilbert $N$-module we will often identify the algebraic
endomorphism ring $\operatorname{End}_{N}(\mathcal{F})$ with
$\operatorname{End}_{N}(L^{2}(\mathcal{F}))$ and thus also consider
$\operatorname{Tr}_{N}(-)$ as defined on
$\operatorname{End}_{N}(\mathcal{F})$. Recall, e.g. from [KW00, Lemma 1.7],
that every finitely generated projective Hilbert $N$-module
$(\mathcal{F},\left\langle\cdot,\cdot\right\rangle_{N})$ has a _basis_; that
is, there exist $u_{1},\dots,u_{n}\in\mathcal{F}$ such that
$x=\sum_{i=1}^{n}u_{i}\left\langle u_{i},x\right\rangle_{N}$ for each
$x\in\mathcal{F}$. (Note that in general the elements $u_{1},\dots,u_{n}$ are
not linearly independent over $N$ and thus not a basis in the standard sense
of homological algebra.) Furthermore, the matrix $p=(\left\langle
u_{i},u_{j}\right\rangle_{N})_{i,j=1}^{n}$ is a projection in
${\mathbb{M}}_{n}(N)$ and the map
$\alpha\colon(\mathcal{F},\left\langle\cdot,\cdot\right\rangle_{N})\to(pN^{n},\left\langle\cdot,\cdot\right\rangle_{\operatorname{st}})$
given by $\alpha(x)=(\left\langle u_{i},x\right\rangle_{N})_{i=1}^{n}$ is a
unitary isomorphism of Hilbert $N$-modules with $u_{i}=\alpha^{-1}(pe_{i})$,
where $e_{1},\dots,e_{n}$ is the standard basis in $N^{n}$. From this we
obtain the following result.
###### Lemma 2.7.
For a finitely generated projective Hilbert $N$-module
$(\mathcal{F},\left\langle\cdot,\cdot\right\rangle_{N})$ and an endomorphism
$T\in\operatorname{End}_{N}(\mathcal{F})$ we have
$\operatorname{Tr}_{N}(T)=\sum_{i=1}^{n}\tau(\left\langle
u_{i},Tu_{i}\right\rangle_{N})$ for any basis $u_{1},\dots,u_{n}$ in
$\mathcal{F}$.
### 2.2. The relative dimension function
Next we consider a trace-preserving inclusion of $N$ into a bigger finite von
Neumann algebra $(M,\tau)$ acting on its GNS-space $H=L^{2}(M,\tau)$. Note
that any $N$-submodule $\mathcal{F}\subseteq M$ acquires an $N$-valued inner
product arising from the trace-preserving conditional expectation $E\colon
M\to N$ by setting $\left\langle a,b\right\rangle_{N}:=E(a^{*}b)$. Consider
now a finitely generated, projective $N$-submodule $\mathcal{F}\subseteq M$
which is complete with respect to this pre-Hilbert module structure. In other
words, the relation $\left\langle a,b\right\rangle_{N}:=E(a^{*}b)$ defines a
Hilbert $N$-module structure on $\mathcal{F}$ and hence $L^{2}(\mathcal{F})$
can be obtained by completion with respect to the ${\mathbb{C}}$-valued inner
product $\tau\circ\left\langle\cdot,\cdot\right\rangle_{N}$. Since $E$ is
trace-preserving, this completion is exactly the closure of $\mathcal{F}$ in
$H$.
###### Definition 2.8.
An $N$-submodule $\mathcal{F}\subseteq M$ which is a Hilbert $N$-module with
respect to the $N$-valued inner product arising from the conditional
expectation $E\colon M\to N$ is called complete.
Let $\mathcal{F}\subseteq M$ be a complete, non-zero, finitely generated
projective $N$-submodule and denote by $P_{F}\in B(H)$ the projection onto its
closure $F$ in $H$. Note that $F$ is a finitely generated normal $N$-module.
Any operator $T\in(JNJ)^{\prime}\subseteq B(H)$ gives rise to an
$N$-equivariant operator $P_{F}TP_{F}|_{F}$ on $F$ and we may therefore define
a normal state $\varphi_{\mathcal{F}}$ on $(JNJ)^{\prime}$ by setting
(2.1)
$\displaystyle\varphi_{\mathcal{F}}(T)=\frac{\operatorname{Tr}_{N}(P_{F}TP_{F}|_{F})}{\dim_{N}(\mathcal{F})}.$
By choosing a basis $u_{1},\dots,u_{k}\in\mathcal{F}$ for the Hilbert
$N$-module structure arising from $E$, the state $\varphi_{\mathcal{F}}$ can
be computed as
$\varphi_{\mathcal{F}}(T)=(\dim_{N}\mathcal{F})^{-1}\sum_{i=1}^{k}\left\langle\Lambda(u_{i}),T\Lambda(u_{i})\right\rangle$
where the inner product is taken in $H$. This is due to the fact that
$P_{F}TP_{F}|_{\mathcal{F}}\in\operatorname{End}_{N}(\mathcal{F})$ and hence
by Lemma 2.7
$\displaystyle\varphi_{\mathcal{F}}(T)$
$\displaystyle=\frac{\sum_{i=1}^{k}\tau(\left\langle
u_{i},P_{F}TP_{F}u_{i}\right\rangle_{N})}{\dim_{N}\mathcal{F}}$
$\displaystyle=\frac{\sum_{i=1}^{k}\tau(E(u_{i}^{*}(P_{F}TP_{F}u_{i})))}{\dim_{N}\mathcal{F}}$
$\displaystyle=\frac{\sum_{i=1}^{k}\left\langle\Lambda(u_{i}),P_{F}TP_{F}\Lambda(u_{i})\right\rangle}{\dim_{N}\mathcal{F}}$
$\displaystyle=\frac{\sum_{i=1}^{k}\left\langle\Lambda(u_{i}),T\Lambda(u_{i})\right\rangle}{\dim_{N}\mathcal{F}}.$
In what follows we will also consider operators acting on a $k$-fold
amplification $H^{k}=\bigoplus_{1}^{k}H$ of the Hilbert space $H$; these
amplifications are always implicitly assumed equipped with the diagonal normal
right action of $N$. For a subspace $L\subseteq H$ we denote by
$L^{k}\subseteq H^{k}$ its amplification.
###### Definition 2.9.
Let $k\in{\mathbb{N}}$ and a complete, non-zero, finitely generated,
projective right $N$-submodule $\mathcal{F}$ in $M$ be given. For an
$N$-submodule $\mathcal{E}\subseteq H^{k}$ we define _the dimension of
$\mathcal{E}$ relative to $\mathcal{F}$_ as
$\dim_{\mathcal{F}}(\mathcal{E})=\frac{\operatorname{Tr}_{N}(P_{F^{k}}P_{E}P_{F^{k}}|_{F^{k}})}{\dim_{N}\mathcal{F}},$
where $F$ and $E$ are the closures in $H^{k}$ of $\mathcal{F}$ and
$\mathcal{E}$, respectively.
Note that this is well-defined since $P_{E}$ commutes with the right action of
$N$ so that
$P_{F^{k}}P_{E}P_{F^{k}}|_{F^{k}}\in\operatorname{End}_{N}(F^{k})$. Note also
the trivial fact that if two submodules have the same closure in $H^{k}$ then
their relative dimensions agree.
###### Proposition 2.10.
The relative dimension function $\dim_{\mathcal{F}}(-)$ has the following
properties.
* (i)
If $\mathcal{E}_{1}\subseteq\mathcal{E}_{2}\subseteq H^{k}$ are two
$N$-submodules then
$\dim_{\mathcal{F}}(\mathcal{E}_{1})\leqslant\dim_{\mathcal{F}}(\mathcal{E}_{2})$.
* (ii)
If $\mathcal{E}\subseteq\mathcal{F}^{k}$ then
$\dim_{\mathcal{F}}(\mathcal{E})=\frac{\dim_{N}\mathcal{E}}{\dim_{N}\mathcal{F}}$.
In other words, $\dim_{N}\mathcal{E}=\operatorname{Tr}_{N}(P_{E}|_{F^{k}})$
where $P_{E}$ is the projection onto the closure $E$ of $\mathcal{E}$ in
$H^{k}$.
###### Proof.
For the sake of notational convenience we restrict attention to the case
$k=1$, but the same proof goes through in higher dimensions. The first claim
follows directly from positivity of the state $\varphi_{\mathcal{F}}$. To
prove the second claim, first note that $\mathcal{F}$ splits orthogonally as
$\mathcal{E}^{\perp\perp}\oplus\mathcal{G}$ for some Hilbert $N$-submodule
$\mathcal{G}$. Since $\mathcal{F}$ is assumed to be complete, its
$L^{2}$-completion coincides with its closure $F$ inside $H$ and hence
$F=\overline{\mathcal{E}^{\perp\perp}}\oplus\overline{\mathcal{G}}$
as an orthogonal direct sum inside $H$ and
$L^{2}(\mathcal{E}^{\perp\perp})=\overline{\mathcal{E}^{\perp\perp}}$. Denote
by $P_{\mathcal{E}^{\perp\perp}}\in\operatorname{End}_{N}(\mathcal{F})$ the
projection onto the summand ${\mathcal{E}^{\perp\perp}}$ and by
$P_{\mathcal{E}^{\perp\perp}}^{(2)}\in\operatorname{End}_{N}(F)$ its extension
to $F$. Then clearly $P_{\mathcal{E}^{\perp\perp}}^{(2)}$ projects onto the
summand $\overline{\mathcal{E}^{\perp\perp}}$ which by Lemma 2.6 coincides
with $\overline{\mathcal{E}}$. Thus
$P_{F}P_{E}P_{F}|_{F}=P_{\mathcal{E}^{\perp\perp}}^{(2)}$. By choosing a basis
$u_{1},\dots,u_{n}$ for $\mathcal{E}^{\perp\perp}$ and a basis
$v_{1},\dots,v_{l}$ for $\mathcal{G}$ we get a basis for $\mathcal{F}$ and
using this basis to compute the trace we get
$\displaystyle\dim_{\mathcal{F}}\mathcal{E}$
$\displaystyle=\frac{\sum_{i=1}^{n}\left\langle
u_{i},P_{\mathcal{E}^{\perp\perp}}^{(2)}u_{i}\right\rangle+\sum_{j=1}^{l}\left\langle
v_{j},P_{\mathcal{E}^{\perp\perp}}^{(2)}v_{j}\right\rangle}{\dim_{N}\mathcal{F}}$
$\displaystyle=\frac{\sum_{i=1}^{n}\left\langle
u_{i},u_{i}\right\rangle}{\dim_{N}\mathcal{F}}$
$\displaystyle=\frac{\dim_{N}\mathcal{E}^{\perp\perp}}{\dim_{N}\mathcal{F}}$
$\displaystyle=\frac{\dim_{N}\mathcal{E}}{\dim_{N}\mathcal{F}},$
where the last equality follows from Lemma 2.6.
∎
Relative dimension functions were originally introduced by Eckmann [Eck99] in
a topological setting (see also [Ele03]) and the relative dimension function
$\dim_{\mathcal{F}}(-)$ introduced above can be seen as an operator algebraic
analogue of this construction. A similar construction for quantum groups was
considered in [KT09]. We end this section with a small lemma which will turn
out to be useful in what follows.
###### Lemma 2.11.
Let $\mathcal{F}$ be a non-zero, finitely generated, projective $N$-module and
let $\mathcal{S}_{1},\dots,\mathcal{S}_{n}$ be a family of submodules in
$\mathcal{F}$. Suppose that
$\forall
i\in\\{1,\dots,n\\}:\quad\frac{\dim_{N}\mathcal{S}_{i}}{\dim_{N}\mathcal{F}}>1-\varepsilon_{i}$
for some $\varepsilon_{1},\dots,\varepsilon_{n}>0$. Then
$\frac{\dim_{N}\left(\bigcap_{i=1}^{n}\mathcal{S}_{i}\right)}{\dim_{N}\mathcal{F}}>1-\sum_{i=1}^{n}\varepsilon_{i}.$
###### Proof.
By induction it is sufficient to consider the case $n=2$. In this case we have
an exact sequence of $N$-modules
$0\longrightarrow\mathcal{S}_{1}\cap\mathcal{S}_{2}\longrightarrow\mathcal{S}_{1}\oplus\mathcal{S}_{2}\longrightarrow{\mathcal{S}_{1}+\mathcal{S}_{2}}\longrightarrow
0,$
and by [Lüc02, Theorem 6.7] we have
$\dim_{N}(\mathcal{S}_{1}\oplus\mathcal{S}_{2})=\dim_{N}(\mathcal{S}_{1}\cap\mathcal{S}_{2})+\dim_{N}({\mathcal{S}_{1}+\mathcal{S}_{2}})$.
Thus
$\frac{\dim_{N}(\mathcal{S}_{1}\cap\mathcal{S}_{2})}{\dim_{N}\mathcal{F}}=\frac{\dim_{N}\mathcal{S}_{1}+\dim_{N}\mathcal{S}_{2}-\dim_{N}({\mathcal{S}_{1}+\mathcal{S}_{2}})}{\dim_{N}\mathcal{F}}>1-\varepsilon_{1}-\varepsilon_{2}.$
∎
Of course the whole theory has a mirror counterpart for _left_ modules over
$N$ and we will use both theories without further specification throughout the
paper. Moreover, we will not make any notational distinction between the
corresponding dimension functions; so when $\mathcal{X}$ is a _left_
$N$-module $\dim_{N}\mathcal{X}$ will also denote the von Neumann dimension of
$\mathcal{X}$ as a left module and similarly with the relative dimension
functions.
## 3\. The Følner property
Consider again a trace-preserving inclusion of finite von Neumann algebras
$N\subseteq M$ and let $\mathcal{A}$ be an intermediate $\ast$-algebra (i.e.
$N\subseteq\mathcal{A}\subseteq M$) which is strongly dense in $M$. We will
keep this setup fixed throughout the present section and the symbols $N$,
$\mathcal{A}$ and $M$ will therefore refer to objects specified above.
Moreover, $\tau$ will denote the common trace on $N$ and $M$ and $H$ the GNS-
space $L^{2}(M,\tau)$. The aim of this section is to introduce a Følner type
condition for $\mathcal{A}$, relative to the chain
$N\subseteq\mathcal{A}\subseteq M$, and study its basic properties. As already
mentioned, the von Neumann algebra $N$ will be thought of as a “coefficient
algebra”, and we advise the reader to keep the example $N=\mathbb{C}$ in mind
in order to get a more intuitive picture. Recall from Section 2, that an
$N$-submodule $\mathcal{F}$ in $M$ is called _complete_ if it is a Hilbert
module with respect to the $N$-valued inner product arising from the trace-
preserving conditional expectation $E\colon M\to N$.
###### Definition 3.1.
The algebra $\mathcal{A}$ is said to have the strong Følner property with
respect to $N$ if for every finite set $T_{1},\dots,T_{r}\in\mathcal{A}$ and
every $\varepsilon>0$ there exists a complete, non-zero, finitely generated
projective $N$-submodule $\mathcal{F}\subseteq\mathcal{A}$ such that
$\frac{\dim_{N}(T_{i}^{-1}(\mathcal{F})\cap\mathcal{F})}{\dim_{N}(\mathcal{F})}>1-\varepsilon\quad\text{
and }\quad\|\varphi_{\mathcal{F}}-\tau\|_{M_{*}}<\varepsilon$
for all $i\in\\{1,\dots,r\\}$. Here $T_{i}^{-1}(\mathcal{F})$ denotes the
preimage of $\mathcal{F}$ under the left multiplication operator given by
$T_{i}$.
This definition is an operator algebraic analogue of the Følner condition for
discrete groups, where the almost invariant finite subset of the group is
replaced by an almost invariant “finite-dimensional” $N$-submodule
$\mathcal{F}$ in $\mathcal{A}$. In fact, putting $N={\mathbb{C}}$ one can
easily check that the linear span of a subset $F$ in a group $\Gamma$ which is
$\varepsilon$-invariant under the action of another set $S$ gives rise to an
almost invariant submodule $\mathcal{F}$ in the above sense (see Corollary 6.1
for details) for any set of operators in ${\mathbb{C}}\Gamma$ not supported
outside of $S$. The condition regarding the trace approximation is trivially
fulfilled in this case as $\varphi_{\mathcal{F}}(T)=\tau(T)$ for each $T\in
L\Gamma$, a fact due to ${\mathbb{C}}\Gamma$ being spanned by a multiplicative
set of orthogonal unitaries. Since this need not be the case for a general
$\mathcal{A}$, we have to include the approximation property in order to
compare the dimension of an $N$-submodule with the relative dimensions of its
“finite-dimensional approximations”; see Proposition 4.1 for the precise
statement.
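To make this explicit in the simplest case (a worked example; the general statement is Corollary 6.1), take $\Gamma={\mathbb{Z}}$ with generating unitary $u$, with $N={\mathbb{C}}$, $\mathcal{A}={\mathbb{C}}[{\mathbb{Z}}]$ and $M=L{\mathbb{Z}}$, and put $\mathcal{F}_{n}=\operatorname{span}_{{\mathbb{C}}}\\{u^{-n},\dots,u^{n}\\}$. For $T=u$ the preimage $T^{-1}(\mathcal{F}_{n})$ equals $\operatorname{span}_{{\mathbb{C}}}\\{u^{-n-1},\dots,u^{n-1}\\}$, so
$\frac{\dim_{{\mathbb{C}}}(T^{-1}(\mathcal{F}_{n})\cap\mathcal{F}_{n})}{\dim_{{\mathbb{C}}}\mathcal{F}_{n}}=\frac{2n}{2n+1}\underset{n\to\infty}{\longrightarrow}1,$
while for every $T\in L{\mathbb{Z}}$
$\varphi_{\mathcal{F}_{n}}(T)=\frac{1}{2n+1}\sum_{k=-n}^{n}\left\langle\Lambda(u^{k}),T\Lambda(u^{k})\right\rangle=\frac{1}{2n+1}\sum_{k=-n}^{n}\tau(u^{-k}Tu^{k})=\tau(T),$
so the trace approximation is exact, as claimed above.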
###### Remark 3.2.
Strictly speaking, the norm estimate in Definition 3.1 should read
$\|\varphi_{\mathcal{F}}|_{M}-\tau\|_{M_{*}}<\varepsilon$ but for notational
convenience we will consistently suppress the restriction in the sequel. This
should not lead to any confusion as the algebra on which the states are
considered can always be read off the subscript on the norm. Moreover, the
strong Følner property should, more precisely, be called the _right_ strong
Følner property since we have chosen to use _right_ modules in the definition.
However, if ${\mathcal{A}}$ has the right Følner property then it also has the
corresponding left Følner property and vice versa. This can be seen by noting
that if $T_{1},\dots,T_{r}\in{\mathcal{A}}$ and $\varepsilon>0$ are given and
$\mathcal{F}\subseteq\mathcal{A}$ is a right $N$-module satisfying the
conditions in Definition 3.1, then $\mathcal{F}^{*}$ (the adjoint taken inside
$\mathcal{A}$) is a complete, finitely generated projective _left_ Hilbert
$N$-submodule in $M$ (the latter endowed with the inner product $\left\langle
x,y\right\rangle_{N}=E(xy^{*})$) which satisfies the conditions in the left
version of Definition 3.1 for the set of operators
$T_{1}^{*},\dots,T_{r}^{*}$. Note, in particular, that this implies that the
strong Følner property is stable under passing to opposite algebras; i.e. if
the tower $N\subseteq\mathcal{A}\subseteq M$ has the strong Følner property
then the same is true for the tower
$N^{\operatorname{{op}}}\subseteq\mathcal{A}^{\operatorname{{op}}}\subseteq
M^{\operatorname{{op}}}$.
For our purposes the following slight reformulation of the Følner property
will turn out to be convenient.
###### Proposition 3.3.
The algebra ${\mathcal{A}}$ has the strong Følner property with respect to $N$
iff for any finite set $T_{1},\dots,T_{r}\in\mathcal{A}$ there exists a
sequence of complete, non-zero, finitely generated, projective $N$-modules
$\mathcal{P}_{n}\subseteq\mathcal{A}$ with submodules
$\mathcal{S}_{n}\subseteq\mathcal{P}_{n}$ such that
1. (i)
$T_{i}(\mathcal{S}_{n})\subseteq\mathcal{P}_{n}$ for each
$i\in\\{1,\dots,r\\}$ and each $n\in\mathbb{N}$;
2. (ii)
$\displaystyle\lim_{n\to\infty}\frac{\dim_{N}\mathcal{S}_{n}}{\dim_{N}\mathcal{P}_{n}}=1$;
3. (iii)
The sequence $\varphi_{\mathcal{P}_{n}}$ (restricted to $M$) converges
uniformly to the trace $\tau$; i.e.
$\lim_{n\to\infty}\|\varphi_{\mathcal{P}_{n}}-\tau\|_{M_{*}}=0$.
###### Proof.
Clearly (i)-(iii) imply the Følner property. Conversely, if $\mathcal{A}$ has
the Følner property and $T_{1},\dots,T_{r}\in\mathcal{A}$ is given then for
each $n\in{\mathbb{N}}$ we get a complete, non-zero, finitely generated
projective module $\mathcal{F}_{n}\subseteq\mathcal{A}$ such that
$\frac{\dim_{N}(T_{i}^{-1}(\mathcal{F}_{n})\cap\mathcal{F}_{n})}{\dim_{N}\mathcal{F}_{n}}>1-\frac{1}{rn}\quad\text{
and }\quad\|\varphi_{\mathcal{F}_{n}}-\tau\|_{M_{*}}<\frac{1}{n}.$
Putting $\mathcal{P}_{n}:=\mathcal{F}_{n}$ and
$\mathcal{S}_{n}:=\cap_{i=1}^{r}T_{i}^{-1}(\mathcal{F}_{n})\cap\mathcal{F}_{n}$
we clearly have (i) and (iii) satisfied and (ii) follows from Lemma 2.11. ∎
###### Definition 3.4.
If $\mathcal{A}$ has the strong Følner property and $\\{T_{1},\dots,T_{r}\\}$
is a finite subset in $\mathcal{A}$, then we call a sequence
$(\mathcal{P}_{n},\mathcal{S}_{n})$ with the properties in Proposition 3.3 a
_strong Følner sequence_ for the given set of operators.
The final result in this section shows that the von Neumann dimension can be
approximated by relative dimensions in the presence of a strong Følner
sequence.
###### Proposition 3.5.
Let ${\mathcal{A}}$ have the strong Følner property relative to $N$ and let
$(\mathcal{P}_{n},\mathcal{S}_{n})$ be a strong Følner sequence for an
arbitrary finite set in $\mathcal{A}$. If $K\subseteq H^{k}$ is a closed
subspace which is invariant under the diagonal right action of $M$ then
$\dim_{\mathcal{P}_{n}}K\underset{n\to\infty}{\longrightarrow}\dim_{M}K$.
###### Proof.
Denote by $P_{K}\in{\mathbb{M}}_{k}(M)\subseteq B(H^{k})$ the projection onto
$K$. Fix an $n\in{\mathbb{N}}$ and choose a basis
$u_{1},\dots,u_{l}\in\mathcal{P}_{n}$. Then the set
$\\{u_{i}\otimes e_{j}\mid 1\leqslant i\leqslant l,1\leqslant j\leqslant k\\}$
is a basis for the amplification
$\mathcal{P}_{n}^{k}=\mathcal{P}_{n}\otimes{\mathbb{C}}^{k}$; here
$e_{1},\dots,e_{k}$ denotes the standard basis in ${\mathbb{C}}^{k}$. By
computing the trace in this basis one easily gets
$\dim_{\mathcal{P}_{n}}K=\sum_{j=1}^{k}\varphi_{\mathcal{P}_{n}}((P_{K})_{jj})\underset{n\to\infty}{\longrightarrow}\sum_{j=1}^{k}\tau((P_{K})_{jj})=\dim_{M}K,$
where the convergence follows from Proposition 3.3. ∎
## 4\. Dimension flatness
Throughout this section we consider again the setup consisting of a trace-
preserving inclusion $N\subseteq M$ of tracial von Neumann algebras together
with an intermediate $*$-algebra $N\subseteq\mathcal{A}\subseteq M$ which is
weakly dense in $M$. We will also consider a $k$-fold amplification $H^{k}$ of
the GNS-space $H:=L^{2}(M,\tau)$ and the natural left action of
${\mathbb{M}}_{k}(\mathcal{A})\subseteq{\mathbb{M}}_{k}(M)$ thereon. Our aim
is Theorem 4.4 which roughly says that when $\mathcal{A}$ is strongly Følner
then the ring inclusion $\mathcal{A}\subseteq M$ is flat from the point of
view of dimension theory. Before reaching our goal we need a few preparatory
results.
###### Proposition 4.1.
Assume that $\mathcal{A}$ has the strong Følner property relative to $N$ and
let $T=(T_{ij})\in{\mathbb{M}}_{k}(\mathcal{A})$ be given. If
$(\mathcal{P}_{n},\mathcal{S}_{n})$ is a strong Følner sequence for the set of
matrix entries of $T$, then
$\dim_{M}\ker(T)=\lim_{n\to\infty}\dim_{\mathcal{P}_{n}}\ker(T|_{\mathcal{S}_{n}^{k}}).$
The proof is an extension of the corresponding argument in [Ele03].
###### Proof.
By Lück’s dimension theorem [Lüc02, Theorem 6.7] we have
$\dim_{N}\ker(T|_{\mathcal{S}_{n}^{k}})+\dim_{N}{\operatorname{rg\hskip
1.13791pt}}(T|_{\mathcal{S}_{n}^{k}})=\dim_{N}\mathcal{S}_{n}^{k}=k\dim_{N}\mathcal{S}_{n}.$
Since $T_{ij}\mathcal{S}_{n}\subseteq\mathcal{P}_{n}$ we have
${{\operatorname{rg\hskip
1.13791pt}}(T|_{\mathcal{S}_{n}^{k}})}\subseteq\mathcal{P}_{n}^{k}$ and
$\ker(T|_{\mathcal{S}_{n}^{k}})\subseteq\mathcal{P}_{n}^{k}$ and by the basic
properties of the relative dimension function (Proposition 2.10) we now get
(4.1)
$\displaystyle\dim_{\mathcal{P}_{n}}\ker(T|_{\mathcal{S}_{n}^{k}})+\dim_{\mathcal{P}_{n}}{{\operatorname{rg\hskip
1.13791pt}}(T|_{\mathcal{S}_{n}^{k}}})=k\frac{\dim_{N}\mathcal{S}_{n}}{\dim_{N}\mathcal{P}_{n}}.$
Denote by $P$ the kernel projection of $T\colon H^{k}\to H^{k}$ and by $R$ the
projection onto the closure of its range. By Proposition 3.5, for any
$\varepsilon>0$ we can find $n_{0}\in{\mathbb{N}}$ such that for all
$n\geqslant n_{0}$ we have
(4.2) $\displaystyle a_{n}$
$\displaystyle:=\dim_{\mathcal{P}_{n}}\ker(T|_{\mathcal{S}_{n}^{k}})\leqslant\dim_{\mathcal{P}_{n}}\ker(T)\leqslant\dim_{M}\ker(T)+\varepsilon;$
(4.3) $\displaystyle b_{n}$
$\displaystyle:=\dim_{\mathcal{P}_{n}}{{\operatorname{rg\hskip
1.13791pt}}(T|_{\mathcal{S}_{n}^{k}})}\leqslant\dim_{\mathcal{P}_{n}}\overline{{\operatorname{rg\hskip
1.13791pt}}(T)}\leqslant\dim_{M}\overline{{\operatorname{rg\hskip
1.13791pt}}(T)}+\varepsilon.$
By (4.1) we have $\lim_{n}(a_{n}+b_{n})=k$ and by [Lüc02, Theorem 1.12 (2)]
$\dim_{M}\ker(T)+\dim_{M}\overline{{\operatorname{rg\hskip 1.13791pt}}(T)}=k.$
Our task is to prove that $a_{n}$ converges to $\dim_{M}\ker(T)$. If this were
not the case, then by passing to a subsequence we could assume that there exist
$\varepsilon_{0}>0$ and an $n_{1}\in{\mathbb{N}}$ such that
$a_{n}\notin[\dim_{M}\ker(T)-\varepsilon_{0},\dim_{M}\ker(T)+\varepsilon_{0}]$
for $n\geqslant n_{1}$. The estimates (4.2) and (4.3) imply the existence of
an $n_{2}\in{\mathbb{N}}$ such that
$\displaystyle
a_{n}\leqslant\dim_{M}\ker(T)+\frac{\varepsilon_{0}}{2}\quad\text{ and }\quad
b_{n}\leqslant\dim_{M}\overline{{\operatorname{rg\hskip
1.13791pt}}(T)}+\frac{\varepsilon_{0}}{2}$
for $n\geqslant n_{2}$; hence we must have
$a_{n}\leqslant\dim_{M}\ker(T)-\varepsilon_{0}$ for
$n\geqslant{\operatorname{max}}\\{n_{1},n_{2}\\}$. But then from this point on
$a_{n}+b_{n}\leqslant\dim_{M}\ker(T)-\varepsilon_{0}+\dim_{M}\overline{{\operatorname{rg\hskip
1.13791pt}}(T)}+\frac{\varepsilon_{0}}{2}=k-\frac{\varepsilon_{0}}{2},$
contradicting the fact that $\lim_{n}(a_{n}+b_{n})=k$. ∎
Consider again the operator $T\in{\mathbb{M}}_{k}(\mathcal{A})\subseteq
B(H^{k})$ as well as its restriction
$T_{0}\colon\mathcal{A}^{k}\to\mathcal{A}^{k}$.
###### Lemma 4.2.
If $\mathcal{A}$ has the strong Følner property relative to $N$ then the
closure of ${\ker(T_{0})}$ in $H^{k}$ coincides with $\ker(T)$.
###### Proof.
Let $(\mathcal{P}_{n},\mathcal{S}_{n})$ be a strong Følner sequence for the
matrix coefficients of $T$. Denote by $P$ the kernel projection of $T$ and let
$Q$ be the projection onto ${\overline{\ker(T_{0})}}^{\perp}\cap\ker(T)$. We
need to prove that $Q=0$. One easily checks that ${\operatorname{rg\hskip
1.13791pt}}(Q)$ is a finitely generated, normal, right $M$-module and it is
therefore enough to prove that $\dim_{M}{\operatorname{rg\hskip
1.13791pt}}(Q)=0$. Denote by $R$ the projection onto the space
$\overline{\ker(T_{0})}$. Given $\varepsilon>0$, Proposition 4.1 provides an
$n_{0}\in{\mathbb{N}}$ such that for $n\geqslant n_{0}$ we have
$\dim_{M}{\operatorname{rg\hskip
1.13791pt}}(P)=\dim_{M}\ker(T)\leqslant\dim_{\mathcal{P}_{n}}\ker(T|_{\mathcal{S}_{n}^{k}})+\varepsilon\leqslant\dim_{\mathcal{P}_{n}}{\operatorname{rg\hskip
1.13791pt}}(R)+\varepsilon,$
simply because
$\ker(T|_{\mathcal{S}_{n}^{k}})\subseteq\ker(T_{0})\subseteq\overline{\ker(T_{0})}={\operatorname{rg\hskip
1.13791pt}}(R)$. Since $P=R+Q$, Proposition 3.5 implies that we eventually
have
$\displaystyle\dim_{\mathcal{P}_{n}}{\operatorname{rg\hskip
1.13791pt}}(R)+\dim_{\mathcal{P}_{n}}{\operatorname{rg\hskip 1.13791pt}}(Q)$
$\displaystyle=\dim_{\mathcal{P}_{n}}{\operatorname{rg\hskip 1.13791pt}}(P)$
$\displaystyle\leqslant\dim_{M}{\operatorname{rg\hskip
1.13791pt}}(P)+\varepsilon$
$\displaystyle\leqslant\dim_{\mathcal{P}_{n}}{\operatorname{rg\hskip
1.13791pt}}(R)+2\varepsilon$
and hence $\dim_{\mathcal{P}_{n}}{\operatorname{rg\hskip
1.13791pt}}(Q)\leqslant 2\varepsilon$ from a certain point on. Thus
$\dim_{M}{\operatorname{rg\hskip
1.13791pt}}(Q)=\lim_{n}\dim_{\mathcal{P}_{n}}{\operatorname{rg\hskip
1.13791pt}}(Q)=0$. ∎
###### Remark 4.3.
If instead $T_{0}$ is given by _right_ multiplication with a matrix from
${\mathbb{M}}_{k}(\mathcal{A})$ then
$T\in\operatorname{diag}(M)^{\prime}={\mathbb{M}}_{k}(M^{\prime})$ and hence
commutes with the diagonal action of $N$ from the left. By using the obvious
variations of the above results for left modules we therefore also obtain
$\overline{\ker(T_{0})}=\ker(T)$ if $\mathcal{A}$ has the Følner property (see
e.g. Remark 3.2). We will use this variant of the result in the proof of
Theorem 4.4, which is formulated using dimension theory for _left_ modules over
$M$; this is done in order to be consistent with the majority of the references
(e.g. [Lüc02, CS05, Sau05]) on $L^{2}$-Betti numbers in the homological
algebraic context.
We are now ready to state and prove the main theorem of this section. Recall,
that $N\subseteq M$ is a trace-preserving inclusion of finite von Neumann
algebras and that $\mathcal{A}$ denotes an intermediate $*$-algebra which is
weakly dense in $M$.
###### Theorem 4.4 (Dimension flatness).
If $\mathcal{A}$ has the strong Følner property relative to $N$ then the
inclusion $\mathcal{A}\subseteq M$ is dimension flat; that is
$\dim_{M}\operatorname{Tor}_{p}^{\mathcal{A}}(M,X)=0$
for any $p\geqslant 1$ and any left $\mathcal{A}$-module $X$.
Note that if the ring inclusion $\mathcal{A}\subseteq M$ actually were flat
(in the standard sense of homological algebra) then we would have
$\operatorname{Tor}_{p}^{\mathcal{A}}(M,X)=0$ for every left
$\mathcal{A}$-module $X$ and every $p\geqslant 1$. This need not be the case
in our setup (for example, the tower
${\mathbb{C}}\subseteq{\mathbb{C}}[{\mathbb{Z}}\times{\mathbb{Z}}]\subseteq
L({\mathbb{Z}}\times{\mathbb{Z}})$ has the strong Følner property (Corollary
6.1) but is not flat, as
$\operatorname{Tor}_{1}^{{\mathbb{C}}[{\mathbb{Z}}\times{\mathbb{Z}}]}(L({\mathbb{Z}}\times{\mathbb{Z}}),{\mathbb{C}})\neq\\{0\\}$;
see e.g. [LRS99, Theorem 3.7]), but from the point of view of the dimension
function it looks as if it were the case — hence the name “dimension-
flatness”. The first part of the proof of Theorem 4.4 consists of a reduction
to the case when $X$ is finitely presented. This part is verbatim identical to
the corresponding proof for groups due to Lück (see [Lüc02, Theorem 6.37]),
but we include it here for the sake of completeness.
###### Proof.
Let an arbitrary left $\mathcal{A}$-module $X$ be given and choose a short
exact sequence $0\to Y\to F\to X\to 0$ of $\mathcal{A}$-modules in which $F$
is free. Then the corresponding long exact $\operatorname{Tor}$-sequence gives
$\operatorname{Tor}_{p+1}^{\mathcal{A}}(M,X)\simeq\operatorname{Tor}_{p}^{\mathcal{A}}(M,Y),$
and hence it suffices to prove that
$\dim_{M}\operatorname{Tor}_{1}^{\mathcal{A}}(M,X)=0$ for arbitrary $X$.
Recall that $\operatorname{Tor}$ commutes with direct limits and that the
dimension function $\dim_{M}(-)$ is also well behaved with respect to direct
limits [Lüc02, Theorem 6.13]; seeing that an arbitrary module is the directed
union of its finitely generated submodules we may therefore assume $X$ to be
finitely generated. Hence we can find a short exact sequence $0\to Y\to F\to
X\to 0$ with $F$ finitely generated and free. The module $Y$ is the directed
union of its system of finitely generated submodules $(Y_{j})_{j\in J}$ and
therefore $X$ can be realized as the direct limit $\varinjlim_{j}F/Y_{j}$.
Since each of the modules $F/Y_{j}$ is finitely presented by construction this
shows that it suffices to treat the case where $X$ is a finitely presented
module. In this case we may therefore choose a presentation of the form
$\mathcal{A}^{k}\overset{T_{0}}{\longrightarrow}\mathcal{A}^{l}\longrightarrow
X\longrightarrow 0.$
This presentation can be continued to a free resolution
$\cdots\overset{S_{0}}{\longrightarrow}\mathcal{A}^{k}\overset{T_{0}}{\longrightarrow}\mathcal{A}^{l}\longrightarrow
X\longrightarrow 0$
of $X$ that can be used to compute the $\operatorname{Tor}$ functor; in
particular we get
$\operatorname{Tor}_{1}^{\mathcal{A}}(M,X)\simeq\frac{\ker(\operatorname{id}_{M}\otimes_{\mathcal{A}}T_{0})}{{\operatorname{rg\hskip
1.13791pt}}(\operatorname{id}_{M}\otimes_{\mathcal{A}}S_{0})}.$
Denote by $T_{0}^{\operatorname{vN}}\colon M^{k}\to M^{l}$ the map induced by
$\operatorname{id}_{M}\otimes_{\mathcal{A}}T_{0}$ under the natural
identification $M\odot_{\mathcal{A}}\mathcal{A}^{i}=M^{i}$ ($i=k,l$) and by
$T\colon H^{k}\to H^{l}$ its continuous extension (recall that “$\odot$”
denotes the algebraic tensor product and $H$ the GNS-space $L^{2}(M,\tau)$).
Since $M\odot_{\mathcal{A}}-$ is right exact and $S_{0}$ surjects onto
$\ker(T_{0})$, we see that ${\operatorname{rg\hskip
1.13791pt}}(\operatorname{id}_{M}\otimes_{\mathcal{A}}S_{0})\subseteq
M\odot_{\mathcal{A}}\mathcal{A}^{k}$ is identified with the $M$-submodule
$M\ker(T_{0})$ in $M^{k}$ generated by $\ker(T_{0})$; thus
(4.4)
$\displaystyle\dim_{M}\operatorname{Tor}_{1}^{\mathcal{A}}(M,X)=\dim_{M}\ker(T_{0}^{\operatorname{vN}})-\dim_{M}(M\ker(T_{0})).$
We now claim that
(4.5) $\displaystyle\dim_{M}\ker
T_{0}^{\operatorname{vN}}=\dim_{M}\ker(T)\quad\text{ and
}\quad\dim_{M}(M\ker(T_{0}))=\dim_{M}\overline{\ker(T_{0})},$
where the closure is taken in the Hilbert space $H^{k}$. The first equality
follows from the fact that the $L^{2}$-completion functor has an exact and
dimension preserving inverse [Lüc02, Theorem 6.24]. By Lemma 2.6, the
dimension of a submodule in $M^{k}$ coincides with the dimension of its
closure in $H^{k}$, so to prove the second equality it suffices to prove
that $M\ker(T_{0})$ and $\ker(T_{0})$ have the same closure in $H^{k}$. But
this follows from the following simple approximation argument: Take $x\in M$
and $a\in\ker(T_{0})$ and consider $xa\in M\ker(T_{0})$. By picking a net
$x_{\alpha}\in\mathcal{A}$ converging in the strong operator topology to $x$
we obtain that $\|xa-x_{\alpha}a\|_{2}\to 0$ and $x_{\alpha}a\in\ker(T_{0})$;
hence $M\ker(T_{0})\subseteq\overline{\ker(T_{0})}$ and the proof of (4.5) is
complete. By (4.4) and (4.5) it is therefore sufficient to prove that
$\overline{\ker(T_{0})}=\ker(T)$. The adjoint operator $T^{*}\colon H^{l}\to
H^{k}$ is the extension of the operator
$T_{0}^{*}\colon\mathcal{A}^{l}\to\mathcal{A}^{k}$ which multiplies from the
right with the adjoint of the matrix defining $T_{0}$. From this we obtain
$\ker(T_{0}^{*}T_{0}^{\phantom{*}})=\ker(T_{0})$ and since
$\ker(T^{*}T)=\ker(T)$ it is equivalent to prove that
$\overline{\ker(T_{0}^{*}T_{0}^{\phantom{*}})}=\ker(T^{*}T),$
but this follows from Lemma 4.2 and Remark 4.3. ∎
###### Remark 4.5.
A careful examination of the results in this section reveals that the
dimension-flatness theorem can actually be obtained under slightly less
restrictive assumptions. Namely, the proof goes through as long as
$N\subseteq\mathcal{A}\subseteq M$ satisfies the requirements (i)-(iii) from
Proposition 3.3, but with the uniform convergence in (iii) replaced by weak∗
convergence. However, for the results in the following section the uniform
convergence will be of importance which is the reason for it being included in
the definition of the strong Følner property.
## 5\. Operator algebraic amenability
In this section we explore the connection between the strong Følner property
and the existing operator algebraic notions of amenability. Consider therefore
again a trace-preserving inclusion $N\subseteq M$ of finite von Neumann
algebras and an intermediate $*$-algebra $\mathcal{A}$ which is weakly dense
in $M$. The main goal is to prove the following result:
###### Theorem 5.1.
If $\mathcal{A}$ has the strong Følner property relative to $N$ then $M$ is
amenable relative to $N$. In particular, $M$ is injective if $\mathcal{A}$ has
the strong Følner property and $N$ is injective.
The notion of relative amenability for von Neumann algebras dates back to
Popa’s work in [Pop86]; we briefly recall the basics here following the
exposition in [OP10]. Consider again the trace-preserving inclusion
$N\subseteq M$ of finite von Neumann algebras and denote by $Q:=\langle
M,e_{N}\rangle=(JNJ)^{\prime}\cap B(L^{2}(M))$ the basic construction. Recall
that $Q$ has a unique normal, semifinite tracial weight $\Psi\colon
Q_{+}\to[0,\infty]$ with the property that
$\Psi(ae_{N}b)=\tau(ab)\text{ for all }a,b\in N.$
One way to construct the trace $\Psi$ is as follows: Since any normal
representation of $N^{\operatorname{{op}}}$ is an amplification followed by a
reduction, there exists a projection $q\in B(\ell^{2}({\mathbb{N}})\otimes
L^{2}(N))$ and a right $N$-equivariant unitary identification
$L^{2}(M)=q(\ell^{2}({\mathbb{N}})\otimes L^{2}(N)),$
where the right hand side is endowed with the diagonal right $N$-action. This
induces an identification $Q:=(JNJ)^{\prime}\cap
B(L^{2}(M))=q(B(\ell^{2}({\mathbb{N}})\otimes N))q$ and the trace $\Psi$ is
simply the pull back of the restriction of $\operatorname{Tr}\otimes\tau$
under this identification.
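For orientation, consider the extreme case $N={\mathbb{C}}$ (spelled out only as an example): here $JNJ={\mathbb{C}}1$, so $Q=(JNJ)^{\prime}\cap B(L^{2}(M))=B(L^{2}(M))$, the Jones projection $e_{N}$ is the rank-one projection onto ${\mathbb{C}}\Lambda(1)$, and $\Psi$ is the usual semifinite trace $\operatorname{Tr}$ on $B(L^{2}(M))$; indeed, $e_{N}$ has rank one, so $\operatorname{Tr}(ae_{N}b)=ab=\tau(ab)$ for all scalars $a,b\in{\mathbb{C}}$.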
###### Theorem 5.2 ([OP10, Theorem 2.1]).
The following conditions are equivalent:
* (i)
There exists a state $\varphi\colon Q\to{\mathbb{C}}$ such that
$\varphi|_{M}=\tau$ and $\varphi(xT)=\varphi(Tx)$ for all $x\in M$ and $T\in
Q$.
* (ii)
There exists a conditional expectation $E\colon Q\to M$.
* (iii)
There exists a net $\xi_{n}\in L^{2}(Q,\Psi)$ such that
$\lim_{n}\left\langle\xi_{n},x\xi_{n}\right\rangle_{L^{2}(Q,\Psi)}=\tau(x)$
and $\lim_{n}\|[x,\xi_{n}]\|_{2,\Psi}=0$ for every $x\in M$.
If $N\subseteq M$ satisfies one, and hence all, of these conditions then $M$
is said to be amenable relative to $N$ (or $N$ to be coamenable in $M$).
Note that (ii) in the above theorem gives that amenability of $M$ relative to
${\mathbb{C}}$ is equivalent to amenability (a.k.a. injectivity) of $M$. Note
also that if $\mathcal{F}\subseteq M$ is a complete, finitely generated,
projective right $N$-module then the projection $P_{F}$ onto its closure $F$
in $L^{2}(M)$ is an element in $Q$ and
$\dim_{N}(\mathcal{F})=\Psi(P_{F}).$
This follows from the construction of $\Psi$, the equality
$\dim_{N}(\mathcal{F})=\dim_{N}(F)$ and the definition of the von Neumann
dimension for normal $N$-modules; see [Lüc02, Definition 1.8] and the comments
following it. More generally, for any operator $T\in Q$ we have
$\Psi(P_{F}TP_{F})=\operatorname{Tr}_{N}(P_{F}TP_{F})$ where
$\operatorname{Tr}_{N}$ is the trace on $\operatorname{End}_{N}(F)$ considered
in Section 2 (see also Lemma 2.2). We are now ready to give the proof of
Theorem 5.1.
###### Proof of Theorem 5.1.
Assume that $N\subseteq\mathcal{A}\subseteq M$ has the strong Følner property. Since
$M_{*}$ is assumed separable the unit ball $(M)_{1}$ is separable and
metrizable for the strong operator topology and by Kaplansky’s density theorem
the unit ball $(\mathcal{A})_{1}$ is strongly dense in $(M)_{1}$. We may
therefore choose a sequence $\\{T_{i}\\}_{i=1}^{\infty}$ in
$(\mathcal{A})_{1}$ which is strongly dense in $(M)_{1}$ and upon adding
further operators to this sequence we may assume that
$\\{T_{i}\\}_{i=1}^{\infty}$ is $*$-stable. Since $\mathcal{A}$ satisfies the
strong Følner condition we can, for each $T_{1},\dots,T_{n}$, find a complete,
finitely generated, projective $N$-submodule
$\mathcal{P}_{n}\subseteq\mathcal{A}$ and a submodule
$\mathcal{S}_{n}\subseteq\mathcal{P}_{n}$ such that
* •
$T_{i}(\mathcal{S}_{n})\subseteq\mathcal{P}_{n}$ for all
$i\in\\{1,\dots,n\\}$,
* •
$\frac{\dim_{N}\mathcal{S}_{n}}{\dim_{N}\mathcal{P}_{n}}\geqslant
1-\frac{1}{n}$,
* •
$\|\varphi_{\mathcal{P}_{n}}-\tau\|_{M_{*}}\leqslant\frac{1}{n}$.
In what follows we denote by $P_{n}\in B(H)$ the orthogonal projection onto
the closure (in $H$) of $\mathcal{P}_{n}$, by $S_{n}$ the orthogonal
projection onto the closure of $\mathcal{S}_{n}$ and by $S_{n}^{\perp}$ the
difference $P_{n}-S_{n}$. Since $\mathcal{P}_{n}$ is complete and finitely
generated projective, the discussion preceding the proof implies that
$\Psi(P_{n})=\dim_{N}\mathcal{P}_{n}$; in particular $P_{n}\in L^{2}(Q,\Psi)$.
We aim at proving that the unit vectors
$\xi_{n}:=\frac{1}{\sqrt{\dim_{N}\mathcal{P}_{n}}}P_{n}\in L^{2}(Q,\Psi)$
satisfy condition (iii) of Theorem 5.2. The trace approximation is automatic
since
$\left\langle\xi_{n},x\xi_{n}\right\rangle_{L^{2}(Q,\Psi)}=\frac{\Psi(P_{n}xP_{n})}{\dim_{N}\mathcal{P}_{n}}=\frac{\operatorname{Tr}_{N}(P_{n}xP_{n})}{\dim_{N}\mathcal{P}_{n}}=\varphi_{\mathcal{P}_{n}}(x)\underset{n\to\infty}{\longrightarrow}\tau(x)$
for any $x\in M$. To prove the asymptotic commutation property, consider first
$x=T_{i_{1}}\in\\{T_{i}\\}_{i=1}^{\infty}$. Then $x^{*}=T_{i_{2}}$ for some
$i_{2}\in{\mathbb{N}}$ so for
$n\geqslant{\operatorname{max}}\\{i_{1},i_{2}\\}$ we have $P_{n}xS_{n}=xS_{n}$
and $P_{n}x^{*}S_{n}=x^{*}S_{n}$. Hence
$\displaystyle\|xP_{n}-P_{n}x\|_{2,\Psi}^{2}$
$\displaystyle=\Psi((xP_{n}-P_{n}x)^{*}(xP_{n}-P_{n}x))$
$\displaystyle=\Psi(P_{n}x^{*}xP_{n}-P_{n}x^{*}P_{n}x-x^{*}P_{n}xP_{n}+x^{*}P_{n}x)$
$\displaystyle=\Psi(P_{n}x^{*}xP_{n}-P_{n}x^{*}P_{n}xP_{n}-P_{n}xP_{n}x^{*}P_{n}+P_{n}xx^{*}P_{n})$
$\displaystyle=\Psi\left((P_{n}x^{*}xP_{n}-P_{n}x^{*}P_{n}xP_{n}-P_{n}xP_{n}x^{*}P_{n}+P_{n}xx^{*}P_{n})S_{n}^{\perp}\right)$
$\displaystyle=\Psi\left(S_{n}^{\perp}(P_{n}x^{*}xP_{n}-P_{n}x^{*}P_{n}xP_{n}-P_{n}xP_{n}x^{*}P_{n}+P_{n}xx^{*}P_{n})S_{n}^{\perp}\right)$
$\displaystyle=\left|\left\langle
S_{n}^{\perp},(P_{n}x^{*}xP_{n}-P_{n}x^{*}P_{n}xP_{n}-P_{n}xP_{n}x^{*}P_{n}+P_{n}xx^{*}P_{n})S_{n}^{\perp}\right\rangle_{L^{2}(Q,\Psi)}\right|$
$\displaystyle\leqslant\|S_{n}^{\perp}\|_{2,\Psi}\|(P_{n}x^{*}xP_{n}-P_{n}x^{*}P_{n}xP_{n}-P_{n}xP_{n}x^{*}P_{n}+P_{n}xx^{*}P_{n})S_{n}^{\perp}\|_{2,\Psi}$
$\displaystyle\leqslant\|S_{n}^{\perp}\|_{2,\Psi}^{2}\|P_{n}x^{*}xP_{n}-P_{n}x^{*}P_{n}xP_{n}-P_{n}xP_{n}x^{*}P_{n}+P_{n}xx^{*}P_{n}\|_{\infty}$
$\displaystyle\leqslant 4\Psi(S_{n}^{\perp})$
$\displaystyle=4(\dim_{N}\mathcal{P}_{n}-\dim_{N}\mathcal{S}_{n}).$
Hence
$\|[x,\xi_{n}]\|_{2,\Psi}=\frac{\|xP_{n}-P_{n}x\|_{2,\Psi}}{\sqrt{\dim_{N}\mathcal{P}_{n}}}\leqslant
2\sqrt{\frac{\dim_{N}\mathcal{P}_{n}-\dim_{N}\mathcal{S}_{n}}{\dim_{N}\mathcal{S}_{n}}}\underset{n\to\infty}{\longrightarrow}0.$
Now the general case follows from this and an approximation argument: let
$x\in M$ and ${\varepsilon}>0$ be given and assume without loss of generality
that $\|x\|_{\infty}\leqslant 1$. Choose a net
$x_{\alpha}\in\\{T_{i}\\}_{i=1}^{\infty}$ converging strongly to $x$. Then
$x_{\alpha}$ also converges to $x$ in $L^{2}(M,\tau)$ and we may therefore
choose an $\alpha$ such that
$\|x-x_{\alpha}\|_{2,\tau}<\varepsilon.$
We now have
$\|x\xi_{n}-\xi_{n}x\|_{2,\Psi}\leqslant\|(x-x_{\alpha})\xi_{n}\|_{2,\Psi}+\|x_{\alpha}\xi_{n}-\xi_{n}x_{\alpha}\|_{2,\Psi}+\|\xi_{n}(x_{\alpha}-x)\|_{2,\Psi}.$
Considering the first term we get
$\displaystyle\|(x-x_{\alpha})\xi_{n}\|_{2,\Psi}^{2}$
$\displaystyle=\frac{\Psi(P_{n}(x-x_{\alpha})^{*}(x-x_{\alpha})P_{n})}{\dim_{N}\mathcal{P}_{n}}$
$\displaystyle=\varphi_{\mathcal{P}_{n}}((x-x_{\alpha})^{*}(x-x_{\alpha}))$
$\displaystyle\leqslant|(\varphi_{\mathcal{P}_{n}}-\tau)((x-x_{\alpha})^{*}(x-x_{\alpha}))|+\tau((x-x_{\alpha})^{*}(x-x_{\alpha}))$
$\displaystyle\leqslant\|\varphi_{\mathcal{P}_{n}}-\tau\|_{M_{*}}\|x-x_{\alpha}\|_{\infty}^{2}+\|x-x_{\alpha}\|_{2,\tau}^{2}$
$\displaystyle\leqslant\frac{2}{n}+\varepsilon^{2}.$
Thus, for a suitably chosen $n_{1}\in{\mathbb{N}}$ we have
$\|(x-x_{\alpha})\xi_{n}\|_{2,\Psi}\leqslant 2{\varepsilon}$ for all
$n\geqslant n_{1}$. Considering the third term we get, in a completely similar
manner, an $n_{3}\in{\mathbb{N}}$ such that
$\|\xi_{n}(x_{\alpha}-x)\|_{2,\Psi}\leqslant 2\varepsilon$ for $n\geqslant
n_{3}$. Since $x_{\alpha}\in\\{T_{i}\\}_{i=1}^{\infty}$, the second term
converges to zero by what was shown in the first part of the proof, and hence
there exists an $n_{2}\in{\mathbb{N}}$ such that
$\|x_{\alpha}\xi_{n}-\xi_{n}x_{\alpha}\|_{2,\Psi}\leqslant\varepsilon$ for
$n\geqslant n_{2}$. Thus, for
$n\geqslant{\operatorname{max}}\\{n_{1},n_{2},n_{3}\\}$ we have
$\|x\xi_{n}-\xi_{n}x\|_{2,\Psi}\leqslant 5\varepsilon$ and since
$\varepsilon>0$ was arbitrary this completes the proof of the amenability of
$M$ relative to $N$.
We now just have to prove the final claim regarding the injectivity of $M$. So
assume that $N$ is injective and, as before, that $\mathcal{A}\subseteq M$
satisfies the strong Følner condition relative to $N$. Injectivity of $N$ simply
means that $N$ is amenable relative to the subalgebra ${\mathbb{C}}\subseteq
N$ and by what was just proven $M$ is amenable relative to $N$ as well. By
[Pop86, Theorem 3.2.4] this means that $M$ is also amenable relative to
${\mathbb{C}}$; i.e. $M$ is injective.
∎
## 6\. Examples and applications to $L^{2}$-Betti numbers
The first goal in this section is to show how our techniques can be used to
unify the dimension-flatness results known for amenable groups and quantum
groups and, more generally, to provide a vanishing result in the context of
operator algebraic $L^{2}$-Betti numbers. Secondly, in order to convince the
reader that the class of algebras with the strong Følner property is rich and
extends beyond the ones arising from amenable groups and quantum groups, we
give a number of such examples and show how they yield results about vanishing
of $L^{2}$-Betti numbers in various situations. As a first application we re-
obtain the original dimension flatness result of Lück [Lüc02, Theorem 6.37].
###### Corollary 6.1 ($L^{2}$-Betti numbers of amenable groups).
If $\Gamma$ is a countable, discrete, amenable group then ${\mathbb{C}}\Gamma$
has the strong Følner property with respect to the tower
${\mathbb{C}}\subseteq{\mathbb{C}}\Gamma\subseteq L\Gamma$. In particular, the
inclusion ${\mathbb{C}}\Gamma\subseteq L\Gamma$ is dimension flat and
$\beta_{p}^{(2)}(\Gamma)=0$ for $p\geqslant 1$.
###### Proof.
Let $T_{1},\dots,T_{r}\in{\mathbb{C}}\Gamma$ be given and put
$S=\bigcup_{i=1}^{r}\operatorname{supp}(T_{i})\subseteq\Gamma$. Since $\Gamma$
is amenable, we can find (see e.g. [BP92, F.6.8]) a sequence of finite subsets
$F_{n}\subseteq\Gamma$ such that
$\frac{\operatorname{card}(\partial_{S}(F_{n}))}{\operatorname{card}(F_{n})}\underset{n\to\infty}{\longrightarrow}0,$
where $\partial_{S}(F):=\\{\gamma\in F\mid\exists s\in S:s\gamma\notin F\\}.$
Then putting $\mathcal{P}_{n}={\mathbb{C}}[F_{n}]\subseteq{\mathbb{C}}\Gamma$
and $\mathcal{S}_{n}={\mathbb{C}}[F_{n}\setminus\partial_{S}(F_{n})]$ we
clearly have $T_{i}(\mathcal{S}_{n})\subseteq\mathcal{P}_{n}$ for all
$i\in\\{1,\dots,r\\}$ and
$\frac{\dim_{{\mathbb{C}}}\mathcal{S}_{n}}{\dim_{\mathbb{C}}\mathcal{P}_{n}}=\frac{\operatorname{card}(F_{n})-\operatorname{card}(\partial_{S}(F_{n}))}{\operatorname{card}(F_{n})}\underset{n\to\infty}{\longrightarrow}1.$
Denoting by $\rho$ the right regular representation of $\Gamma$ we get for
$T\in L\Gamma$
$\displaystyle\varphi_{\mathcal{F}}(T)=\frac{\sum_{\gamma\in
F}\left\langle\delta_{\gamma},T\delta_{\gamma}\right\rangle}{\operatorname{card}(F)}=\frac{\sum_{\gamma\in
F}\left\langle\delta_{e},\rho_{\gamma}^{*}T\rho_{\gamma}\delta_{e}\right\rangle}{\operatorname{card}(F)}=\frac{\sum_{\gamma\in
F}\left\langle\delta_{e},T\delta_{e}\right\rangle}{\operatorname{card}(F)}=\tau(T).$
Hence ${\mathbb{C}}\Gamma$ has the strong Følner property and dimension
flatness therefore follows from Theorem 4.4. The fact that the $L^{2}$-Betti
numbers vanish in positive degrees follows from this since
$\beta_{p}^{(2)}(\Gamma)=\dim_{L\Gamma}\operatorname{Tor}_{p}^{{\mathbb{C}}\Gamma}(L\Gamma,{\mathbb{C}})=0.\qed$
Next we turn our attention to the class of compact/discrete quantum groups.
Since this is a bit of an aside we shall not elaborate on the basics of
quantum group theory; for background material the reader is referred to the
detailed survey articles [Wor98], [KT99] and [MVD98]. In what follows, we
denote by $\widehat{{\mathbb{G}}}$ a discrete quantum group of Kac type and by
${\mathbb{G}}=(C({\mathbb{G}}),\Delta_{\mathbb{G}})$ its compact dual.
Associated with ${\mathbb{G}}$ is a Hopf $*$-algebra
${\operatorname{Pol}}({\mathbb{G}})$ which is naturally included into a finite
von Neumann algebra $L^{\infty}({\mathbb{G}})$. Furthermore, the von Neumann
algebra $L^{\infty}({\mathbb{G}})$ carries a natural trace $\tau$, namely the
Haar state associated with ${\mathbb{G}}$. In [Kye08a] and [Kye08b]
$L^{2}$-Betti numbers were studied in the context of discrete quantum groups
of Kac type and we re-obtain [Kye08a, Theorem 6.1] by setting
$N={\mathbb{C}}$, $\mathcal{A}={\operatorname{Pol}}({\mathbb{G}})$ and
$M=L^{\infty}({\mathbb{G}})$:
###### Corollary 6.2 ($L^{2}$-Betti numbers of amenable quantum groups).
If $\widehat{{\mathbb{G}}}$ is a discrete amenable quantum group of Kac type
and ${\mathbb{G}}$ is its compact dual then
${\operatorname{Pol}}({\mathbb{G}})$ has the strong Følner property with
respect to the tower
${\mathbb{C}}\subseteq{\operatorname{Pol}}({\mathbb{G}})\subseteq
L^{\infty}({\mathbb{G}})$. In particular, the inclusion
${\operatorname{Pol}}({\mathbb{G}})\subseteq L^{\infty}({\mathbb{G}})$ is
dimension flat and $\beta_{p}^{(2)}(\widehat{{\mathbb{G}}})=0$ for $p\geqslant
1$.
###### Proof.
Choose a complete set $(u^{\alpha})_{\alpha\in I}$ of representatives for the
set of equivalence classes of irreducible unitary corepresentations of
${\mathbb{G}}$ and denote by $n_{\alpha}$ the matrix size of $u^{\alpha}$.
Recall that the corresponding matrix coefficients $u_{ij}^{\alpha}$ constitute
a linear basis for ${\operatorname{Pol}}({\mathbb{G}})$ and that the
normalized matrix coefficients $\sqrt{n_{\alpha}}u_{ij}^{\alpha}$ constitute
an orthonormal basis in
$L^{2}({\mathbb{G}}):=L^{2}({\operatorname{Pol}}({\mathbb{G}}),\tau)$. Let
$T_{1},\dots,T_{r}\in{\operatorname{Pol}}({\mathbb{G}})$ and $\varepsilon>0$
be given and denote by $S$ the joint support of $T_{1},\dots,T_{r}$; that is
$S=\\{u^{\gamma}\mid\exists l\in\\{1,\dots,r\\}\exists
p,q\in\\{1,\dots,n_{\gamma}\\}:\tau(u_{pq}^{\gamma*}T_{l})\neq 0\\}.$
In other words, $S$ consists of all the irreducible corepresentations that have
at least one of their matrix coefficients entering in the linear expansion of
one of the operators in question. According to the quantum Følner condition
[Kye08a, Corollary 4.10] we can find a sequence of finite subsets
$F_{k}\subseteq(u^{\alpha})_{\alpha\in I}$ such that
$\sum_{u^{\alpha}\in\partial_{S}(F_{k})}n_{\alpha}^{2}<\frac{1}{k}\sum_{u^{\alpha}\in
F_{k}}n_{\alpha}^{2},$
where the boundary $\partial_{S}(F)$ is as in [Kye08a, Definition 3.2]. Define
$\displaystyle\mathcal{P}_{k}$
$\displaystyle:={\operatorname{span}}_{\mathbb{C}}\\{u_{ij}^{\alpha}\mid
u^{\alpha}\in F_{k},1\leqslant i,j\leqslant n_{\alpha}\\};$
$\displaystyle\mathcal{S}_{k}$
$\displaystyle:={\operatorname{span}}_{\mathbb{C}}\\{u_{ij}^{\alpha}\mid
u^{\alpha}\in F_{k}\setminus\partial_{S}(F_{k}),1\leqslant i,j\leqslant
n_{\alpha}\\}.$
By construction we now have $T_{l}(\mathcal{S}_{k})\subseteq\mathcal{P}_{k}$
for every $l\in\\{1,\dots,r\\}$ and that
$\frac{\dim_{\mathbb{C}}\mathcal{S}_{k}}{\dim_{\mathbb{C}}\mathcal{P}_{k}}=\frac{\sum_{u^{\alpha}\in
F_{k}\setminus\partial_{S}(F_{k})}n_{\alpha}^{2}}{\sum_{u^{\alpha}\in
F_{k}}n_{\alpha}^{2}}\underset{k\to\infty}{\longrightarrow}1.$
Furthermore, denoting by $P_{\mathcal{P}_{k}}\in B(L^{2}({\mathbb{G}}))$ the
projection onto the closed subspace $\mathcal{P}_{k}\subseteq
L^{2}({\mathbb{G}})$ we get
$\displaystyle\operatorname{Tr}_{\mathbb{C}}(P_{\mathcal{P}_{k}}TP_{\mathcal{P}_{k}})$
$\displaystyle=\sum_{u^{\alpha}\in
F_{k}}\sum_{i,j=1}^{n_{\alpha}}\left\langle\sqrt{n_{\alpha}}u_{ij}^{\alpha},T\sqrt{n_{\alpha}}u_{ij}^{\alpha}\right\rangle$
$\displaystyle=\sum_{u^{\alpha}\in
F_{k}}\sum_{i,j=1}^{n_{\alpha}}n_{\alpha}\tau(u_{ij}^{\alpha*}Tu_{ij}^{\alpha})$
$\displaystyle=\sum_{u^{\alpha}\in
F_{k}}n_{\alpha}\tau\left(\left(\sum_{i,j=1}^{n_{\alpha}}u_{ij}^{\alpha}u_{ij}^{\alpha*}\right)T\right)$
$\displaystyle=\tau(T)\sum_{u^{\alpha}\in F_{k}}n_{\alpha}^{2}$
$\displaystyle=\tau(T)\dim_{{\mathbb{C}}}\mathcal{P}_{k},$
for every $T\in L^{\infty}({\mathbb{G}})$; hence
$\varphi_{\mathcal{P}_{k}}(T)=\tau(T)$. Thus
${\operatorname{Pol}}({\mathbb{G}})$ has the strong Følner property and
dimension flatness follows from Theorem 4.4. The vanishing of the
$L^{2}$-Betti numbers follows from dimension flatness since
$\beta_{p}^{(2)}(\widehat{{\mathbb{G}}}):=\dim_{L^{\infty}({\mathbb{G}})}\operatorname{Tor}_{p}^{{\operatorname{Pol}}({\mathbb{G}})}(L^{\infty}({\mathbb{G}}),{\mathbb{C}})=0.\qed$
Note that Corollary 6.1 is actually just a special case of Corollary 6.2, but
since we expect most readers to be more familiar with groups than quantum
groups we singled out this case in the form of Corollary 6.1 and its proof.
###### Remark 6.3.
From Theorem 5.1 and Corollary 6.1 we re-obtain the classical fact that the
group von Neumann algebra $L\Gamma$ is hyperfinite when $\Gamma$ is an
amenable discrete group. Furthermore, we see that the group algebra
${\mathbb{C}}\Gamma$ associated with a discrete group $\Gamma$ has the strong
Følner property (relative to the chain
${\mathbb{C}}\subseteq{\mathbb{C}}\Gamma\subseteq L\Gamma$) if and only if
$\Gamma$ is amenable: We already saw in Corollary 6.1 that amenability implies
the strong Følner property. Conversely, if ${\mathbb{C}}\Gamma$ has the strong
Følner property then $L\Gamma$ is hyperfinite by Theorem 5.1 and thus $\Gamma$
is amenable (see e.g. [BO08, Theorem 2.6.8 and 9.3.3]). By a similar argument,
using [Rua96, Theorem 4.5] and Corollary 6.2, we obtain that a discrete
quantum group $\widehat{{\mathbb{G}}}$ of Kac type is amenable if and only if
the dual Hopf $*$-algebra ${\operatorname{Pol}}({\mathbb{G}})$ has the strong
Følner property relative to the chain
${\mathbb{C}}\subseteq{\operatorname{Pol}}({\mathbb{G}})\subseteq
L^{\infty}({\mathbb{G}})$. Thus, for algebras arising from groups and quantum
groups the strong Følner condition coincides exactly with the notion of
amenability of the underlying object.
Next we take a step up in generality by considering $L^{2}$-Betti numbers in a
purely operator algebraic context. In [CS05], Connes and Shlyakhtenko
introduce a notion of $L^{2}$-homology and $L^{2}$-Betti numbers for a dense
$*$-subalgebra in a tracial von Neumann algebra $(M,\tau)$. More precisely, if
$\mathcal{A}\subseteq M$ is a weakly dense unital $*$-subalgebra its
$L^{2}$-homology and $L^{2}$-Betti numbers are defined, respectively, as
$H_{p}^{(2)}(\mathcal{A})=\operatorname{Tor}_{p}^{\mathcal{A}\odot\mathcal{A}^{\operatorname{{op}}}}(M\bar{\otimes}M^{{\operatorname{{op}}}},\mathcal{A})\
\text{ and }\
\beta_{p}^{(2)}(\mathcal{A},\tau)=\dim_{M\bar{\otimes}M^{{\operatorname{{op}}}}}H^{(2)}_{p}(\mathcal{A}).$
This definition extends the definition for groups [CS05, Proposition 2.3] by
means of the formula
$\beta_{p}^{(2)}({\mathbb{C}}\Gamma,\tau)=\beta_{p}^{(2)}(\Gamma)$ and it also
fits with the notion for quantum groups studied above [Kye08b, Theorem 4.1].
In order to apply our techniques in the Connes-Shlyakhtenko setting we first
need to prove that the strong Følner property is preserved under forming
algebraic tensor products.
###### Proposition 6.4.
Let $\mathcal{A}$ and $\mathcal{B}$ be dense $*$-subalgebras in the tracial
von Neumann algebras $(M,\tau)$ and $(N,\rho)$, respectively. If $\mathcal{A}$
and $\mathcal{B}$ have the strong Følner property with respect to the towers
${\mathbb{C}}\subseteq\mathcal{A}\subseteq M$ and
${\mathbb{C}}\subseteq\mathcal{B}\subseteq N$ then so does
$\mathcal{A}\odot\mathcal{B}$ with respect to the tower
${\mathbb{C}}\subseteq\mathcal{A}\odot\mathcal{B}\subseteq M\bar{\otimes}N$.
###### Proof.
Let $T_{1},\dots,T_{r}\in\mathcal{A}\odot\mathcal{B}$ be given and write each
$T_{i}$ as $\sum_{k=1}^{N_{i}}a_{k}^{(i)}\otimes b_{k}^{(i)}$. Choose a strong
Følner sequence $\mathcal{P}_{n}^{\prime}\subseteq\mathcal{A}$, with
associated subspaces $\mathcal{S}_{n}^{\prime}$, for the $a_{k}^{(i)}$’s and a
strong Følner sequence $\mathcal{P}_{n}^{\prime\prime}\subseteq\mathcal{B}$,
with associated subspaces $\mathcal{S}_{n}^{\prime\prime}$, for the
$b_{k}^{(i)}$’s. Putting
$\mathcal{P}_{n}:=\mathcal{P}_{n}^{\prime}\odot\mathcal{P}_{n}^{\prime\prime}$
and
$\mathcal{S}_{n}:=\mathcal{S}_{n}^{\prime}\odot\mathcal{S}_{n}^{\prime\prime}$
we clearly have that the sequence $(\mathcal{P}_{n},\mathcal{S}_{n})$
satisfies the first two requirements in Proposition 3.3, and since
$\varphi_{\mathcal{P}_{n}}=\varphi_{\mathcal{P}_{n}^{\prime}}\otimes\varphi_{\mathcal{P}_{n}^{\prime\prime}}$
we get
$\displaystyle\|\varphi_{\mathcal{P}_{n}}-\tau\otimes\rho\|_{(M\bar{\otimes}N)_{*}}$
$\displaystyle=\|\varphi_{\mathcal{P}_{n}^{\prime}}\otimes\varphi_{\mathcal{P}_{n}^{\prime\prime}}-\tau\otimes\varphi_{\mathcal{P}_{n}^{\prime\prime}}+\tau\otimes\varphi_{\mathcal{P}_{n}^{\prime\prime}}-\tau\otimes\rho\|_{(M\bar{\otimes}N)_{*}}$
$\displaystyle\leqslant\|\varphi_{\mathcal{P}_{n}^{\prime}}-\tau\|_{M_{*}}+\|\rho-\varphi_{\mathcal{P}_{n}^{\prime\prime}}\|_{N_{*}}\underset{n\to\infty}{\longrightarrow}0.$
Thus, $(\mathcal{P}_{n},\mathcal{S}_{n})$ is a strong Følner sequence for
$T_{1},\dots,T_{r}$. ∎
###### Corollary 6.5.
If $\mathcal{A}$ has the strong Følner property with respect to the tower
${\mathbb{C}}\subseteq\mathcal{A}\subseteq M$ then the Connes-Shlyakhtenko
$L^{2}$-Betti numbers of $(\mathcal{A},\tau)$ vanish in positive degrees.
###### Proof.
Since $\mathcal{A}$ is strongly Følner so is the opposite algebra
$\mathcal{A}^{\operatorname{{op}}}$ (see e.g. Remark 3.2) and by Proposition
6.4 the same is true for $\mathcal{A}\odot\mathcal{A}^{\operatorname{{op}}}$.
Hence by Theorem 4.4 the inclusion
$\mathcal{A}\odot\mathcal{A}^{\operatorname{{op}}}\subseteq
M\bar{\otimes}M^{\operatorname{{op}}}$ is dimension flat so in particular
$\beta^{(2)}_{p}(\mathcal{A},\tau)=\dim_{M\bar{\otimes}M^{\operatorname{{op}}}}\operatorname{Tor}_{p}^{\mathcal{A}\odot\mathcal{A}^{\operatorname{{op}}}}(M\bar{\otimes}M^{\operatorname{{op}}},\mathcal{A})=0$
for all $p\geqslant 1$. ∎
###### Corollary 6.6 (Actions of amenable groups on probability spaces).
Let $\Gamma$ be a discrete, countable amenable group acting on a probability
space $(X,\mu)$ in a probability measure-preserving way and denote by $\alpha$
the induced action $\Gamma\to\operatorname{Aut}(L^{\infty}(X))$. Consider the
crossed product von Neumann algebra $M=L^{\infty}(X)\rtimes\Gamma$ and its
subalgebras $N=L^{\infty}(X)$ and
$\mathcal{A}=L^{\infty}(X)\rtimes_{\mathrm{alg}}\Gamma$. Then $\mathcal{A}$
has the strong Følner property relative to $N$ and hence the inclusion
$L^{\infty}(X)\rtimes_{\mathrm{alg}}\Gamma\subseteq
L^{\infty}(X)\rtimes\Gamma$ is dimension-flat. Moreover, if the action of
$\Gamma$ is essentially free, then the $L^{2}$-Betti numbers (in the sense of
Sauer [Sau05]) of the groupoid defined by the action vanish in positive
degrees.
We denote by $(u_{g})_{g\in\Gamma}$ the natural unitaries on $L^{2}(X)$
implementing the action; i.e. $\alpha_{g}(a)=u_{g}^{\phantom{*}}au_{g}^{*}$
for all $g\in\Gamma$ and all $a\in L^{\infty}(X)$.
###### Proof.
For any finite set $F\subseteq\Gamma$, the right $L^{\infty}(X)$-module
$\mathcal{F}=\left.\left\\{\sum_{g\in F}a_{g}u_{g}\,\right|\,a_{g}\in
L^{\infty}(X)\right\\}$
is free of dimension $\operatorname{card}(F)$ and the inner product on
$\mathcal{F}$ arising from the conditional expectation $E\colon M\to N$ is
given by
$\left\langle\sum_{g\in F}a_{g}u_{g},\sum_{g\in
F}b_{g}u_{g}\right\rangle_{N}=\sum_{g\in
F}\alpha_{g^{-1}}(a_{g}^{*}b_{g}^{\phantom{*}}).$
From this it follows that the map
$(L^{\infty}(X)^{\operatorname{card}(F)},\left\langle\cdot,\cdot\right\rangle_{\operatorname{st}})\to(\mathcal{F},\left\langle\cdot,\cdot\right\rangle_{N})$
given by $(a_{g})_{g\in F}\mapsto\sum_{g\in F}a_{g}u_{g}$ is a unitary
isomorphism and that $(u_{g})_{g\in F}$ is a basis for $\mathcal{F}$. In
particular, $\mathcal{F}$ is complete and finitely generated projective. The
trace-approximation condition is automatic, because for $T\in M$ we have
$\varphi_{\mathcal{F}}(T)=\frac{\sum_{g\in F}\left\langle
u_{g},Tu_{g}\right\rangle}{\dim_{N}\mathcal{F}}=\frac{\sum_{g\in
F}\tau(u_{g}^{*}Tu_{g}^{\phantom{*}})}{\operatorname{card}(F)}=\tau(T).$
Hence the strong Følner condition for $\mathcal{A}$ relative to $N$ follows
from the classical Følner condition for $\Gamma$ and the inclusion
$L^{\infty}(X)\rtimes_{\mathrm{alg}}\Gamma\subseteq
L^{\infty}(X)\rtimes\Gamma$ is therefore dimension-flat. In the case of an
essentially free action we combine this with [Sau05, Lemma 5.4] and get that
the $L^{2}$-Betti numbers of the groupoid defined by an action of an amenable
group vanish. ∎
###### Remark 6.7.
Note that the last part of the statement in Corollary 6.6 is a well known
result which, for instance, follows from combining [Sau05, Theorem 5.5] and
[CG86, Theorem 0.2].
Next we generalize the above result to the setting of amenable discrete
measured groupoids; $L^{2}$-Betti numbers for discrete measured groupoids were
introduced by Sauer [Sau05] and generalize the notion for equivalence
relations [Gab02]. We recall only the basic definitions and notation here
referring to [Sau05, ADR01] for the general theory of discrete measured
groupoids. Recall that the object set $\mathscr{G}^{0}$ of a discrete measured
groupoid $\mathscr{G}$ comes equipped with a $\mathscr{G}$-invariant measure
$\mu$ which gives rise to a measure $\mu_{\mathscr{G}}$ on $\mathscr{G}$
defined on a Borel subset $A\subseteq\mathscr{G}$ as
$\mu_{\mathscr{G}}(A):=\int_{\mathscr{G}^{0}}\operatorname{card}(s^{-1}(x)\cap
A)\,d\mu(x).$
Here $s\colon\mathscr{G}\to\mathscr{G}^{0}$ denotes the source map of the
groupoid. A measurable subset $K\subseteq\mathscr{G}$ is called _bounded_ if
there exists an $M>0$ such that for almost every $x\in\mathscr{G}^{0}$ both
$\operatorname{card}(s^{-1}(x)\cap K)<M$ and
$\operatorname{card}(s^{-1}(x)\cap K^{-1})<M$. Let $\mathbb{C}\mathscr{G}$ be
the groupoid ring of $\mathscr{G}$, defined as
$\mathbb{C}\mathscr{G}:=\\{f\in
L^{\infty}(\mathscr{G},\mu_{\mathscr{G}})\,|\,\operatorname{supp}(f)\text{ is
a bounded subset of }\mathscr{G}\\}.$
(this is well-defined independently of the choice of the representative of
$f$). The convolution and involution on $\mathbb{C}\mathscr{G}$ are defined as
$(f\ast
g)(\gamma):=\sum_{\begin{subarray}{c}\gamma^{\prime},\gamma^{\prime\prime}\in\mathscr{G},\\\
\gamma^{\prime}\gamma^{\prime\prime}=\gamma\end{subarray}}f(\gamma^{\prime})g(\gamma^{\prime\prime}),$
$f^{*}(\gamma):=\overline{f(\gamma^{-1})}.$
This gives $\mathbb{C}\mathscr{G}$ the structure of a $\ast$-algebra which
forms a strongly dense $\ast$-subalgebra in the groupoid von Neumann algebra
$L\mathscr{G}$ [ADR00]. For arbitrary subsets $K\subseteq\mathscr{G}$,
$A\subseteq\mathscr{G}^{0}$, $B\subseteq\mathscr{G}^{0}$ we denote by
$K_{A}^{B}$ the set $K\cap s^{-1}(A)\cap r^{-1}(B)$. Notice in particular that
for every $x\in\mathscr{G}^{0}$ the set $\mathscr{G}^{x}_{x}$ is a group with
respect to the composition. It is called the stabilizer group of $x$. A
discrete measured groupoid is called ergodic if for every set
$A\subseteq\mathscr{G}^{0}$ of positive measure
$r(s^{-1}(A))=\mathscr{G}^{0}$, and every discrete measured groupoid admits an
ergodic decomposition into a direct integral of ergodic groupoids. If
$K\subseteq\mathscr{G}$ is a bounded measurable subset, then
$\mathcal{F}_{K}:=\\{f\in\mathbb{C}\mathscr{G}\,|\,\operatorname{supp}f\subseteq
K\\}$
is a finitely generated projective right Hilbert
$L^{\infty}(\mathscr{G}^{0})$-module when equipped with the
$L^{\infty}(\mathscr{G}^{0})$-valued inner product arising from the
conditional expectation $E\colon L\mathscr{G}\to L^{\infty}(\mathscr{G}^{0})$.
This can be easily seen as follows: take an $n\in\mathbb{N}$ such that almost
every $x\in\mathscr{G}^{0}$ has at most $n$ preimages from $K$ with respect to
$s$ and decompose $K$ as $K=K_{1}\sqcup K_{2}\sqcup\dots\sqcup K_{n}$ in such
a way that the map $s|_{K_{i}}$ is a measurable isomorphism onto its image
$X_{i}\subseteq\mathscr{G}^{0}$ [Sau05, Lemma 3.1]. Then $\mathcal{F}_{K}$ is
isomorphic as a (pre-)Hilbert $C^{*}$-module over
$L^{\infty}(\mathscr{G}^{0})$ to the direct sum
$\oplus_{i=1}^{n}\mathbf{1}_{X_{i}}L^{\infty}(\mathscr{G}^{0})$ endowed with
the standard inner product. This follows from the fact that for
$f\in\mathcal{F}_{K_{i}}$ and $x\in X_{i}$ we have
$E(f^{*}f)(x)=(f^{*}\ast
f)({\operatorname{id}}_{x})=\sum_{\begin{subarray}{c}\gamma^{\prime},\gamma\in
K_{i},\\\
\gamma^{\prime}\circ\gamma={\operatorname{id}}_{x}\end{subarray}}\overline{f\left({\gamma^{\prime}}^{-1}\right)}f(\gamma)=|f(\gamma(x))|^{2},$
where $\gamma(x)\in K_{i}$ is uniquely determined by the property
$s(\gamma(x))=x$. Note in particular that
$\dim_{L^{\infty}(\mathscr{G}^{0})}\mathcal{F}_{K}=\mu_{\mathscr{G}}(K)$.
Moreover, a direct computation gives the following formula for
$\varphi_{\mathcal{F}_{K}}$:
(6.1)
$\varphi_{\mathcal{F}_{K}}(T)=\frac{\sum_{i=1}^{n}\int_{X_{i}}E(T)\,d\mu}{\mu_{\mathscr{G}}(K)}=\frac{\sum_{i=1}^{n}\int_{X_{i}}E(T)\,d\mu}{\sum_{i=1}^{n}\mu(X_{i})},\quad
T\in L\mathscr{G}.$
The result now is as follows.
###### Corollary 6.8 (Discrete measured amenable groupoids).
Let $\mathscr{G}$ be an amenable discrete measured groupoid for which the
stabilizers $\mathscr{G}_{x}^{x}$ are finite for almost every
$x\in\mathscr{G}^{0}$. Then the $L^{2}$-Betti numbers of $\mathscr{G}$ vanish
in positive degrees:
$\beta_{p}^{(2)}(\mathscr{G}):=\dim_{L\mathscr{G}}\operatorname{Tor}_{p}^{\mathbb{C}\mathscr{G}}(L\mathscr{G},L^{\infty}(\mathscr{G}^{0}))=0.$
In particular, the $L^{2}$-Betti numbers of an amenable equivalence relation
vanish.
Note that Corollary 6.8 is just a special case of [ST10, Theorem 1.1]. In
particular, the assumption that the stabilizers are finite is not important
for the validity of the statement in Corollary 6.8, but only needed in order
for our methods to apply.
###### Proof.
First of all, we observe that by [ST10, Remark 1.7] it is sufficient to prove
the statement for ergodic groupoids, and we will restrict ourselves to this
case for the rest of the proof. Let $\mathscr{R}$ be the equivalence relation
on $\mathscr{G}^{0}$ induced by $\mathscr{G}$; it can be considered as a
quotient groupoid of $\mathscr{G}$ after dividing out all stabilizers and is
known as the derived groupoid of $\mathscr{G}$. Since $\mathscr{G}$ is ergodic
the same is true for $\mathscr{R}$. By amenability of $\mathscr{G}$, the
quotient $\mathscr{R}$ is an amenable equivalence relation [ADR01, Theorem
2.11], which is therefore hyperfinite [Tak03, Theorem XIII.4.10]. That is,
there exists an increasing sequence of subrelations $\mathscr{R}_{n}$ each of
which has finite orbits of cardinality $k_{n}$ and such that
$\mathscr{R}\setminus\bigcup_{n}\mathscr{R}_{n}$ is a zero set. Let
$\mathscr{G}_{n}$ be the subgroupoid of $\mathscr{G}$ generated by the
stabilizers $\mathscr{G}_{x}^{x}$ and the lifts of the morphisms from
$\mathscr{R}_{n}$ to $\mathscr{G}$. This subgroupoid is well-defined because
two lifts of a morphism from $\mathscr{R}_{n}$ differ by an element in the
stabilizer. Let $\mathcal{A}:=\varinjlim\mathbb{C}\mathscr{G}_{n}$ be the
algebraic direct limit of the corresponding groupoid algebras. By construction
of $\mathscr{G}_{n}$, $\mathscr{G}\setminus\bigcup_{n}\mathscr{G}_{n}$ is a
measure zero subset of $\mathscr{G}$ and therefore $\mathcal{A}$ is an
_$L^{\infty}(\mathscr{G}^{0})$ -rank dense_ subalgebra of
$\mathbb{C}\mathscr{G}$; i.e. for each $f\in\mathbb{C}\mathscr{G}$ and for
each $\varepsilon>0$ there exists a projection $p\in
L^{\infty}(\mathscr{G}^{0})$ with $\tau(p)>1-\varepsilon$ such that
$pf\in\mathcal{A}$ (for instance, one can take the characteristic function of
a subset $X_{f}$ in $\mathscr{G}^{0}$ of measure bigger than $1-\varepsilon$
with the property that $f$ is non-zero only on those morphisms from
$\mathscr{G}\setminus\mathscr{G}_{n}$ whose target is in the complement of
$X_{f}$). Furthermore, as $\mathbb{C}\mathscr{G}$ is dimension-compatible as
an $L^{\infty}(\mathscr{G}^{0})$-bimodule [Sau05, Lemma 4.8] the rank-density
together with [Sau05, Theorem 4.11] implies that
(6.2)
$\beta_{p}^{(2)}(\mathscr{G}):=\dim_{L\mathscr{G}}\operatorname{Tor}^{\mathbb{C}\mathscr{G}}_{p}(L\mathscr{G},L^{\infty}(\mathscr{G}^{0}))=\dim_{L\mathscr{G}}\operatorname{Tor}^{\mathcal{A}}_{p}(L\mathscr{G},L^{\infty}(\mathscr{G}^{0})).$
Let us now consider $\mathbb{C}\mathscr{G}_{n}$ as a right
$L^{\infty}(\mathscr{G}^{0})$-module. By construction, it is equal to
$\mathcal{F}_{\mathscr{G}_{n}}$, where $\mathscr{G}_{n}$ is considered as a
bounded subset of $\mathscr{G}$. Its dimension can be easily computed as
$\dim_{L^{\infty}(\mathscr{G}^{0})}\mathbb{C}\mathscr{G}_{n}=k_{n}\cdot\operatorname{card}\mathscr{G}_{x}^{x},$
where $\operatorname{card}\mathscr{G}_{x}^{x}$ is the cardinality of the
stabilizer at $x\in\mathscr{G}^{0}$; it is an essentially constant function on
$\mathscr{G}^{0}$ because of the ergodicity hypothesis. Moreover, (6.1) gives
that
$\varphi_{\mathbb{C}\mathscr{G}_{n}}(T)=\tau(T),\quad T\in L\mathscr{G}.$
Now, putting $M:=L(\mathscr{G})$ and considering
$N:=L^{\infty}(\mathscr{G}^{0})$ as a subalgebra of $\mathcal{A}$ via the
inclusion $\mathscr{G}^{0}\subseteq\mathscr{G}_{n}$, we infer that the
subalgebra $\mathcal{A}$ has the strong Følner property. Therefore the
inclusion $\mathcal{A}\subseteq M$ is dimension-flat and formula (6.2) gives
us the vanishing result for the positive degree $L^{2}$-Betti numbers of
$\mathscr{G}$. ∎
###### Example 6.9 (The irrational rotation algebra).
Let $\theta\in]0,1[$ be an irrational number and recall that the irrational
rotation algebra $A_{\theta}$ is the universal $C^{*}$-algebra generated by
two unitaries $u$ and $v$ subject to the relation $uv=e^{2\pi i\theta}vu$. The
set
$\mathcal{A}_{\theta}:=\left.\left\\{\sum_{p,q\in{\mathbb{Z}}}z_{p,q}u^{p}v^{q}\,\right|\,z_{p,q}\in{\mathbb{C}},z_{p,q}=0\text{
for all but finitely many }(p,q)\in{\mathbb{Z}}^{2}\right\\}$
forms a dense subalgebra in $A_{\theta}$ and carries a natural filtering
sequence of subspaces
$\mathcal{F}_{n}:=\left.\left\\{\sum_{|p|+|q|\leqslant
n}z_{p,q}u^{p}v^{q}\,\right|\,z_{p,q}\in{\mathbb{C}}\right\\}.$
Recall that $A_{\theta}$ has a unique trace which on $\mathcal{A}_{\theta}$ is
given by
$\tau\left(\sum_{p,q\in{\mathbb{Z}}}z_{p,q}u^{p}v^{q}\right)=z_{0,0}$. We aim
at proving that $\mathcal{A}_{\theta}$ has the strong Følner property relative
to $N:={\mathbb{C}}$ and the enveloping von Neumann algebra
$M_{\theta}:=A_{\theta}^{\prime\prime}\subseteq B(L^{2}(A_{\theta},\tau))$.
Let ${\varepsilon}>0$ and $T_{1},\dots,T_{r}\in\mathcal{A}_{\theta}$ be given.
Then there exists an $m_{0}\in{\mathbb{N}}$ such that
$T_{i}(\mathcal{F}_{n})\subseteq\mathcal{F}_{n+m_{0}}$ for each
$i\in\\{1,\dots,r\\}$ and $n\in{\mathbb{N}}$, and we now define
$\mathcal{P}_{n}:=\mathcal{F}_{nm_{0}}$ and
$\mathcal{S}_{n}:=\mathcal{F}_{(n-1)m_{0}}$. Using the fact that
$H=L^{2}(A_{\theta},\tau)$ has an orthonormal basis consisting of the
unitaries
$\\{u^{p}v^{q}\mid p,q\in{\mathbb{Z}}\\}$
it is not difficult to see that $\varphi_{\mathcal{P}_{n}}(T)=\tau(T)$ for
every $T\in M_{\theta}$; furthermore, counting this basis gives
$\dim_{\mathbb{C}}{\mathcal{P}_{n}}=2(nm_{0})^{2}+2nm_{0}+1$, from which it
follows that
$\frac{\dim_{\mathbb{C}}\mathcal{S}_{n}}{\dim_{\mathbb{C}}\mathcal{P}_{n}}\longrightarrow
1$. Thus $(\mathcal{P}_{n},\mathcal{S}_{n})$ is a strong Følner sequence for
$T_{1},\dots,T_{r}$. From our results we therefore obtain that the Connes-
Shlyakhtenko $L^{2}$-Betti numbers
$\beta^{(2)}_{p}(\mathcal{A}_{\theta},\tau)$ vanish for $p\geqslant 1$ as well
as the well-known fact that the enveloping von Neumann algebra $M_{\theta}$ is
hyperfinite.
###### Example 6.10 (UHF-algebras).
Let $A$ be a UHF-algebra; i.e. $A$ is a $C^{*}$-algebraic direct limit of a
sequence $({\mathbb{M}}_{k(n)}({\mathbb{C}}),\alpha_{n})$ of full matrix
algebras. Denoting by $\mathcal{A}$ the algebraic direct limit we get a
natural dense $\ast$-subalgebra in $A$ and the tracial states on the matrix
algebras ${\mathbb{M}}_{k(n)}({\mathbb{C}})$ give rise to a unique tracial
state $\tau$ on $A$, for which the enveloping von Neumann algebra
$M:=A^{\prime\prime}\subseteq B(L^{2}(A,\tau))$ is the hyperfinite II1-factor.
Denote by $\mathcal{F}_{n}$ the image of ${\mathbb{M}}_{k(n)}({\mathbb{C}})$
in $\mathcal{A}$. Then for given $T_{1},\dots,T_{r}\in\mathcal{A}$ there
exists an $n_{0}$ such that they are all in $\mathcal{F}_{n_{0}}$ and thus
$T_{i}(\mathcal{F}_{n})\subseteq\mathcal{F}_{n}$ for all $i\in\\{1,\dots,r\\}$
and $n\geqslant n_{0}$. Putting
$\mathcal{P}_{n}=\mathcal{S}_{n}:=\mathcal{F}_{n_{0}+n}$ we get a strong
Følner sequence (relative to ${\mathbb{C}}\subseteq\mathcal{A}\subseteq M$)
for $T_{1},\dots,T_{r}$: the first two conditions in Proposition 3.3 are
obviously fulfilled, and a direct computation (this can be seen either by
using matrix units or a collection of unitary matrices forming an
orthonormal basis for the Hilbert-Schmidt inner product) shows that
$\varphi_{\mathcal{P}_{n}}(T)=\tau(T)$. Thus the Connes-Shlyakhtenko
$L^{2}$-Betti numbers of $(\mathcal{A},\tau)$ vanish in positive degrees.
###### Remark 6.11.
Note that by [Soł10, Corollary 3.2 & Theorem 3.3] neither the UHF-algebra nor
the irrational rotation algebra is the algebra associated with a discrete
group or quantum group; hence Example 6.10 and Example 6.9 provide honest new
examples of dimension flat inclusions.
###### Remark 6.12.
Generalizing Example 6.10, we may consider an arbitrary finite factor $N$ and
an inductive system of the form $({\mathbb{M}}_{k(n)}(N),\alpha_{n})$ and the
inclusion of the algebraic direct limit $\mathcal{A}$ in the von Neumann
algebraic direct limit $M$. Then, by exactly the same reasoning as in Example
6.10, $\mathcal{A}$ has the strong Følner property relative to the chain
$N\subseteq\mathcal{A}\subseteq M$. Hence the inclusion $\mathcal{A}\subseteq
M$ is dimension flat even though $\mathcal{A}$ can be “far from amenable” ($N$
might for instance be a II1-factor with property (T)). Compare e.g. with
[Lüc02, Conjecture 6.48] where it is conjectured that for $N={\mathbb{C}}$ and
$\Gamma$ a discrete group, dimension flatness of the inclusion
${\mathbb{C}}\Gamma\subseteq L\Gamma$ is equivalent to amenability of
$\Gamma$.
## References
* [ADR00] C. Anantharaman-Delaroche and J. Renault. Amenable groupoids, volume 36 of Monographies de L’Enseignement Mathématique [Monographs of L’Enseignement Mathématique]. L’Enseignement Mathématique, Geneva, 2000. With a foreword by Georges Skandalis and Appendix B by E. Germain.
* [ADR01] C. Anantharaman-Delaroche and J. Renault. Amenable groupoids. In Groupoids in analysis, geometry, and physics (Boulder, CO, 1999), volume 282 of Contemp. Math., pages 35–46. Amer. Math. Soc., Providence, RI, 2001.
* [Ati76] M.F. Atiyah. Elliptic operators, discrete groups and von Neumann algebras. In Colloque “Analyse et Topologie” en l’Honneur de Henri Cartan (Orsay, 1974), pages 43–72. Astérisque, No. 32–33. Soc. Math. France, Paris, 1976.
* [BO08] N.P. Brown and N. Ozawa. C*-algebras and finite-dimensional approximations. American Mathematical Society, 2008.
* [BP92] R. Benedetti and C. Petronio. Lectures on hyperbolic geometry. Universitext. Springer-Verlag, Berlin, 1992.
* [CG86] J. Cheeger and M. Gromov. $L_{2}$-cohomology and group cohomology. Topology, 25(2):189–215, 1986.
* [CS05] A. Connes and D. Shlyakhtenko. $L^{2}$-homology for von Neumann algebras. J. Reine Angew. Math., 586:125–168, 2005.
* [Eck99] B. Eckmann. Approximating $l_{2}$-Betti numbers of an amenable covering by ordinary Betti numbers. Comment. Math. Helv., 74(1):150–155, 1999.
* [Ele03] G. Elek. On the analytic zero divisor conjecture of Linnell. Bull. London Math. Soc., 35(2):236–238, 2003.
* [Far98] M. Farber. von Neumann categories and extended $L^{2}$-cohomology. $K$-Theory, 15(4):347–405, 1998.
* [Fra01] M. Frank. Hilbertian versus Hilbert $W^{*}$-modules and applications to $L^{2}$\- and other invariants. Acta Appl. Math., 68(1-3):227–242, 2001. Noncommutative geometry and operator $K$-theory.
* [Gab02] D. Gaboriau. Invariants $l^{2}$ de relations d’équivalence et de groupes. Publ. Math. Inst. Hautes Études Sci., (95):93–150, 2002.
* [KT99] J. Kustermans and L. Tuset. A survey of $C^{*}$-algebraic quantum groups. I. Irish Math. Soc. Bull., 43:8–63, 1999.
* [KT09] D. Kyed and A. Thom. Applications of Følner’s condition to quantum groups. To appear in J. Noncommut. Geom., 2009. ArXiv:0912.0166.
* [KW00] T. Kajiwara and Y. Watatani. Jones index theory by Hilbert $C^{*}$-bimodules and $K$-theory. Trans. Amer. Math. Soc., 352(8):3429–3472, 2000.
* [Kye08a] D. Kyed. $L^{2}$-Betti numbers of coamenable quantum groups. Münster J. Math., 1(1):143–179, 2008.
* [Kye08b] D. Kyed. $L^{2}$-homology for compact quantum groups. Math. Scand., 103(1):111–129, 2008.
* [Lan95] E. C. Lance. Hilbert $C^{*}$-modules, volume 210 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1995. A toolkit for operator algebraists.
* [LRS99] W. Lück, H. Reich, and T. Schick. Novikov-Shubin invariants for arbitrary group actions and their positivity. In Tel Aviv Topology Conference: Rothenberg Festschrift (1998), volume 231 of Contemp. Math., pages 159–176. Amer. Math. Soc., Providence, RI, 1999.
* [Lüc02] W. Lück. $L^{2}$-invariants: theory and applications to geometry and $K$-theory, volume 44 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, 2002.
* [MT05] V.M. Manuilov and E.V. Troitsky. Hilbert $C^{*}$-modules, volume 226 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 2005. Translated from the 2001 Russian original by the authors.
* [MVD98] A. Maes and A. Van Daele. Notes on compact quantum groups. Nieuw Arch. Wisk. (4), 16(1-2):73–112, 1998.
* [OP10] Narutaka Ozawa and Sorin Popa. On a class of ${\rm II}_{1}$ factors with at most one Cartan subalgebra. Ann. of Math. (2), 172(1):713–749, 2010.
* [Pop86] Sorin Popa. Correspondences. INCREST Preprint, 1986.
* [Rua96] Z.-J. Ruan. Amenability of Hopf von Neumann algebras and Kac algebras. J. Funct. Anal., 139(2):466–499, 1996.
* [Sau05] R. Sauer. $L^{2}$-Betti numbers of discrete measured groupoids. Internat. J. Algebra Comput., 15(5-6):1169–1188, 2005.
* [Soł10] P. M. Sołtan. Quantum spaces without group structure. Proc. Amer. Math. Soc., 138(6):2079–2086, 2010.
* [ST10] R. Sauer and A. Thom. A spectral sequence to compute $L^{2}$-Betti numbers of groups and groupoids. J. Lond. Math. Soc. (2), 81(3):747–773, 2010.
* [Tak03] M. Takesaki. Theory of operator algebras. III, volume 127 of Encyclopaedia of Mathematical Sciences. Springer-Verlag, Berlin, 2003. Operator Algebras and Non-commutative Geometry, 8.
* [Wor98] S.L. Woronowicz. Compact quantum groups. In Symétries quantiques (Les Houches, 1995), pages 845–884. North-Holland, Amsterdam, 1998.
|
arxiv-papers
| 2011-05-17T15:00:39 |
2024-09-04T02:49:18.820187
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Vadim Alekseev, David Kyed",
"submitter": "Vadim Alekseev",
"url": "https://arxiv.org/abs/1105.3406"
}
|
1105.3492
|
# Tailoring Native Defects in LiFePO4: Insights from First-Principles
Calculations
Khang Hoang Center for Computational Materials Science, Naval Research
Laboratory, Washington, DC 20375, USA School of Physics, Astronomy, and
Computational Sciences, George Mason University, Fairfax, Virginia 22030, USA
Michelle Johannes michelle.johannes@nrl.navy.mil Center for Computational
Materials Science, Naval Research Laboratory, Washington, DC 20375, USA
###### Abstract
We report first-principles density-functional theory studies of native point
defects and defect complexes in olivine-type LiFePO4, a promising candidate
for rechargeable Li-ion battery electrodes. The defects are characterized by
their formation energies which are calculated within the GGA+$U$ framework. We
find that native point defects are charged, and each defect is stable in one
charge state only. Removing electrons from the stable defects always generates
defect complexes containing small hole polarons. Defect formation energies,
hence concentrations, and defect energy landscapes are all sensitive to the
choice of atomic chemical potentials which represent experimental conditions.
One can, therefore, suppress or enhance certain native defects in LiFePO4 via
tuning the synthesis conditions. Based on our results, we provide insights on
how to obtain samples in experiments with tailored defect concentrations for
targeted applications. We also discuss the mechanisms for ionic and electronic
conduction in LiFePO4 and suggest strategies for enhancing the electrical
conductivity.
lithium iron phosphate, defects, first-principles, polaron, ionic conduction
## I Introduction
Olivine-type LiFePO4 is a promising candidate for rechargeable Li-ion battery
electrodes.Padhi et al. (1997) The material is known for its structural and
chemical stabilities, high intercalation voltage ($\sim$3.5 V relative to
lithium metal), high theoretical discharge capacity (170 mAh/g), environmental
friendliness, and potentially low costs.Ellis et al. (2010); Manthiram (2011)
The major drawback of LiFePO4 is poor ionic and electronic conduction (with an
electrical conductivity of about $10^{-9}$ S/cm at 298 K)Delacourt et al. (2005)
that limits its applicability to devices. While the conduction can be improved
by, e.g., making LiFePO4 nanoparticles and coating with conductive
carbon,Huang et al. (2001); Ellis et al. (2007) the high processing cost
associated with the manufacturing of carbon-coated LiFePO4 nanoparticles may
make it less competitive than other materials. Another approach is to dope
LiFePO4 with aliovalent impurities (Mg, Ti, Zr, Nb) which was reported to have
enhanced the conductivity by eight orders of magnitude.Chung et al. (2002) The
role of these dopants in the conductivity enhancement, however, remains
controversial.Ravet et al. (2003); Wagemaker et al. (2008) A better
understanding of aliovalent doping, and also better solutions for improving
the performance, first requires a deeper understanding of the fundamental
properties, especially those associated with native defects, which is
currently not available. First-principles density-functional theory (DFT)
studies of native point defects and defect complexes in LiFePO4 can help
address these issues.
It is now generally accepted that LiFePO4 is an insulating, large band-gap
material in which electronic conduction proceeds via hopping of small hole
polarons.Zhou et al. (2004); Maxisch et al. (2006); Ellis et al. (2006);
Zaghib et al. (2007) These polarons may be coupled to other defects such as
lithium vacancies.Maxisch et al. (2006); Ellis et al. (2006) Iron antisites
(FeLi) have also been reported to be present in LiFePO4 samples.Yang et al.
(2002); Maier and Amin (2008); Chen et al. (2008); Chung et al. (2008); Axmann
et al. (2009); Chung et al. (2010) This native defect is believed to be
responsible for the loss of electrochemical activity in LiFePO4 due to the
blockage of lithium channels caused by its low mobility.Axmann et al. (2009);
Chung et al. (2010) Clearly, native defects have strong effects on the
material’s performance. Experimental reports on the defects have, however,
painted different pictures. Some authors reported evidence of some iron and
lithium atoms exchanging sites and forming the antisite pair FeLi-LiFe,Chung
et al. (2008, 2010) while others determined that FeLi is formed in association
with lithium vacancies ($V_{\rm{Li}}$).Maier and Amin (2008); Axmann et al.
(2009) These conflicting reports suggest that the results may be sensitive to
the actual synthesis conditions, and indicate that a better understanding of
the formation of native defects in LiFePO4 is needed in order to produce
samples with controlled defect concentrations.
Computational studies of native defects in LiFePO4 and related compounds have
been reported by several research groups.Maxisch et al. (2006); Morgan et al.
(2004); Islam et al. (2005); Fisher et al. (2008); Adams (2010); Malik et al.
(2010) Notably, Maxisch et al. studied the migration of small hole polarons in
LiFePO4 using first-principles calculations where the polarons were created
both in the absence and in the presence of lithium vacancies.Maxisch et al.
(2006) The first systematic study of native defects in LiFePO4 was, however,
carried out by Islam et al. using interatomic-potential simulations where they
found the antisite pair FeLi-LiFe to be energetically most favorable.Islam et
al. (2005); Fisher et al. (2008) Based on results of first-principles
calculations, Malik et al. recently came to a similar conclusion about the
antisite pair.Malik et al. (2010) Although these studies have provided
valuable information on the native defects in LiFePO4, they have three major
limitations. First, studies that make use of interatomic potentials may not
well describe all the defects in LiFePO4. Second, these studies seem to have
focused on neutral defect complexes and did not explicitly report the
structure and energetics of native point defects as individuals. Third, and
most importantly, none of these previous studies have thoroughly investigated
the dependence of defect formation energies and hence defect concentrations on
the atomic chemical potentials which represent experimental conditions during
synthesis.
We herein report our first-principles studies of the structure, energetics,
and migration of native point defects and defect complexes in LiFePO4. We find
that defect formation is sensitive to the synthesis conditions. Native defects
can occur in the material with high concentrations and therefore are expected
to have important implications for ionic and electronic conduction. We will
show how conflicting experimental data on the native defects can be reconciled
in light of our results and provide general guidelines for producing samples with
tailored defect concentrations. Comparison with previous theoretical works
will be made where appropriate. In the following, we provide technical details
of the calculations and present the theoretical approach. Next, we discuss the
structural and electronic properties of LiFePO4 which form the basis for our
discussion of the formation of native defects in the material. We then present
results of the first-principles calculations for native point defects and
defect complexes, focusing on their formation energies and migration barriers,
and discuss the dependence of defect formation energies on the atomic chemical
potentials. Based on our results, we discuss the implications of native
defects on ionic and electronic conduction, and suggest strategies for
enhancing the electrical conductivity. Finally, we end this Article with some
important conclusions.
## II Methodology
Computational Details. Our calculations were based on density-functional
theory within the GGA+$U$ framework,Anisimov et al. (1991, 1993);
Liechtenstein et al. (1995) which is an extension of the generalized-gradient
approximation (GGA),Perdew et al. (1996) and the projector augmented wave
method,Blöchl (1994); Kresse and Joubert (1999) as implemented in the VASP
code.Kresse and Hafner (1993); Kresse and Furthmüller (1996); Kresse and
Furthmüller (1996) In this work, we used $U$=5.30 eV and $J$=1.00 eV for iron
in all the calculations (unless otherwise noted), i.e., the effective
interaction parameter $U$$-$$J$=4.30 eV (hereafter $U$$-$$J$ will be referred
to as $U$ for simplicity). This value of $U$ is the averaged value based on
those Zhou et al. calculated self-consistently for iron in LiFePO4 (i.e.,
Fe2+: $U$=3.71 eV) and in FePO4 (i.e., Fe3+: $U$=5.90 eV), which has been
shown to correctly reproduce the experimental intercalation potential of
LiFePO4.Zhou et al. (2004) It is known that the results obtained within
GGA+$U$ depend on the value of $U$. However, we have checked the $U$
dependence in our calculations and find that the physics of what we are
presenting is insensitive to the $U$ value for 3.71 eV $\leq U\leq$ 5.90 eV.
Calculations for bulk olivine-type LiFePO4 (orthorhombic $Pnma$; 28 atoms/unit
cell) were performed using a 4$\times$7$\times$9 Monkhorst-Pack
$\mathbf{k}$-point mesh.Monkhorst and Pack (1976) For defect calculations, we
used a (1$\times$2$\times$2) supercell, which corresponds to 112 atoms/cell,
and a 2$\times$2$\times$2 $\mathbf{k}$-point mesh. The plane-wave basis-set
cutoff was set to 400 eV. Convergence with respect to self-consistent
iterations was assumed when the total energy difference between cycles was
less than $10^{-4}$ eV and the residual forces were less than 0.01 eV/Å. In the
defect calculations, the lattice parameters were fixed to the calculated bulk
values, but all the internal coordinates were fully relaxed. The migration of
selected defects in LiFePO4 was studied using the climbing-image nudged
elastic-band method (NEB).Henkelman et al. (2000) All calculations were
performed with spin polarization and, unless otherwise noted, the
antiferromagnetic spin configuration of LiFePO4 was used.Rousse et al. (2003)
Defect Formation Energies. Throughout this Article, we employ defect formation
energies to characterize different native point defects and defect complexes
in LiFePO4. The formation energy of a defect is a crucial factor in
determining its concentration. In thermal equilibrium, the concentration of
the defect X at temperature $T$ can be obtained via the relationVan de Walle
and Neugebauer (2004); Janotti and Van de Walle (2009)
$c(\mathrm{X})=N_{\mathrm{sites}}N_{\mathrm{config}}\mathrm{exp}[-E^{f}(\mathrm{X})/k_{B}T],$
(1)
where $N_{\mathrm{sites}}$ is the number of high-symmetry sites in the lattice
per unit volume on which the defect can be incorporated, and
$N_{\mathrm{config}}$ is the number of equivalent configurations (per site).
Note that the energy in Eq. (1) is, in principle, a free energy; however, the
entropy and volume terms are often neglected because they are negligible at
relevant experimental conditions.Janotti and Van de Walle (2009) It emerges
from Eq. (1) that defects with low formation energies will easily form and
occur in high concentrations.
The formation energy of a defect X in charge state $q$ is defined asVan de
Walle and Neugebauer (2004)
$E^{f}({\mathrm{X}}^{q})=E_{\mathrm{tot}}({\mathrm{X}}^{q})-E_{\mathrm{tot}}({\mathrm{bulk}})-\sum_{i}{n_{i}\mu_{i}}+q(E_{\mathrm{v}}+\Delta V+\epsilon_{F}),$ (2)
where $E_{\mathrm{tot}}(\mathrm{X}^{q})$ and $E_{\mathrm{tot}}(\mathrm{bulk})$
are, respectively, the total energies of a supercell containing the defect X
and of a supercell of the perfect bulk material; $\mu_{i}$ is the atomic
chemical potential of species $i$ (and is referenced to the standard state),
and $n_{i}$ denotes the number of atoms of species $i$ that have been added
($n_{i}$$>$0) or removed ($n_{i}$$<$0) to form the defect. $\epsilon_{F}$ is
the electron chemical potential, i.e., the Fermi level, referenced to the
valence-band maximum in the bulk ($E_{\mathrm{v}}$). $\Delta V$ is the
“potential alignment” term, i.e., the shift in the band positions due to the
presence of the charged defect and the neutralizing background, obtained by
aligning the average electrostatic potential in regions far away from the
defect to the bulk value.Van de Walle and Neugebauer (2004) Note that we
denote defect X in charge state $q$ as Xq. For example, Fe${}_{\rm Li}^{+}$
indicates that defect FeLi occurs with charge $q$=+1, which is equivalent to
Fe${}_{\rm Li}^{\bullet}$ in the Kröger-Vink notation. For a brief discussion
on the use of notations, see, e.g., Ref.Van de Walle and Janotti (2010).
Chemical Potentials. The atomic chemical potentials $\mu_{i}$ are variables
and can be chosen to represent experimental conditions. $\mu_{i}$ can, in
principle, be related to temperatures and pressures via standard thermodynamic
expressions. The chemical potential for O2 in oxygen gas, for example, is
given byOng et al. (2008)
$\mu_{\mathrm{O}_{2}}(T,p)=\mu_{\mathrm{O}_{2}}(T,p_{\circ})+kT{\rm
ln}\frac{p}{p_{\circ}},$ (3)
where $p$ and $p_{\circ}$ are, respectively, the partial pressure and
reference partial pressure of oxygen; $k$ is Boltzmann’s constant. This
expression allows us to calculate $\mu_{\mathrm{O}_{2}}(T,p)$ if we know the
temperature dependence of $\mu_{\mathrm{O}_{2}}(T,p_{\circ})$ at a particular
pressure $p_{\circ}$. In this work, we choose the reference state of
$\mu_{\mathrm{O}_{2}}(T,p)$ to be the total energy of an isolated O2 molecule
($E_{{\rm O}_{2}}^{\rm tot}$).Not
The value of $\mu_{i}$ is subject to various thermodynamic limits. For
LiFePO4, the stability condition requires that
$\mu_{\rm Li}+\mu_{\rm Fe}+\mu_{\rm P}+2\mu_{{\rm O}_{2}}=\Delta H^{f}({\rm
LiFePO}_{4}),$ (4)
where $\Delta H^{f}$ is the formation enthalpy. This condition places a lower
bound on the value of $\mu_{i}$. Additionally, one needs to avoid
precipitating bulk Li, Fe, and P, or forming O2 gas. This sets an upper bound
on the chemical potentials: $\mu_{i}$$\leq$0.Van de Walle and Neugebauer
(2004) There are, however, further constraints imposed by other competing Li-
Fe-P-O2 phases which usually place stronger bounds on $\mu_{i}$. For example,
in order to avoid the formation of Li3PO4,
$3\mu_{\rm Li}+\mu_{\rm P}+2\mu_{{\rm O}_{2}}\leq\Delta H^{f}({\rm Li}_{3}{\rm
PO}_{4}).$ (5)
After taking into account the constraints imposed by all possible competing
phases, one can define the chemical potential range of Li, Fe, and O2 that
stabilizes LiFePO4 which is, in fact, bound in a polyhedron in the three-
dimensional ($\mu_{\rm Li}$, $\mu_{\rm Fe}$, $\mu_{{\rm O}_{2}}$) space. For a
given point in the polyhedron, one can determine the remaining variable
$\mu_{\rm P}$ via Eq. (4). In this work, the formation enthalpies of all
different Li-Fe-P-O2 phases are taken from Ong et al.Ong et al. (2008) who
have computed the energies using a methodology similar to ours. For example,
the calculated formation enthalpy of LiFePO4 at $T$=0 K (with respect to its
constituents) is $-$18.853 eV per formula unit (f.u.),Ong et al. (2008) almost
identical to that (of $-$18.882 eV/f.u.) obtained in our calculations. Ong et
al. have also calculated the phase diagrams of the quaternary Li-Fe-P-O2
system at 0 K that involve all possible phases between Li, Fe, P, and O2.
These phase diagrams show LiFePO4 is stable over a range of the oxygen
chemical potential values, from $-$11.52 eV (where the first Fe2+-containing
phase appears) to $-$8.25 eV (the last of the Fe2+-containing phosphates being
reduced).Ong et al. (2008) This corresponds to $\mu_{{\rm O}_{2}}$ ranging
from $-$3.03 to $-$8.25 eV with respect to our chosen reference ($E_{{\rm
O}_{2}}^{\rm tot}$).
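The bookkeeping behind this construction can be sketched as follows. The snippet below is our illustration only, not the procedure of Ref. Ong et al. (2008): it obtains $\mu_{\rm P}$ from Eq. (4) and then tests inequalities of the type in Eq. (5) for a set of competing phases; the Li3PO4 enthalpy used in the example is a rough placeholder, not a calculated value.

```python
# Illustrative sketch only: test whether a chemical-potential point lies in the
# LiFePO4 stability region.  dH_LiFePO4 = -18.882 eV/f.u. is the value quoted
# in the text; the competing-phase enthalpy below is a placeholder.
def mu_P_from_eq4(mu_Li, mu_Fe, mu_O2, dH_LiFePO4=-18.882):
    # Eq. (4): mu_Li + mu_Fe + mu_P + 2*mu_O2 = Delta H^f(LiFePO4)
    return dH_LiFePO4 - mu_Li - mu_Fe - 2.0 * mu_O2

def in_stability_region(mu_Li, mu_Fe, mu_O2, competing):
    """competing maps a phase name to (n_Li, n_Fe, n_P, n_O2, dH_f); each phase
    imposes n_Li*mu_Li + n_Fe*mu_Fe + n_P*mu_P + n_O2*mu_O2 <= dH_f, cf. Eq. (5)."""
    mu_P = mu_P_from_eq4(mu_Li, mu_Fe, mu_O2)
    ok = all(mu <= 0.0 for mu in (mu_Li, mu_Fe, mu_P, mu_O2))  # upper bounds
    for n_Li, n_Fe, n_P, n_O2, dH in competing.values():
        ok = ok and (n_Li*mu_Li + n_Fe*mu_Fe + n_P*mu_P + n_O2*mu_O2 <= dH)
    return ok

# Point A of Fig. 1; the Li3PO4 enthalpy (-21.0 eV/f.u.) is a placeholder.
phases = {"Li3PO4": (3, 0, 1, 2, -21.0)}
print(in_stability_region(-2.85, -2.18, -4.59, phases))
```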
Figure 1: Chemical-potential diagram for LiFePO4 at $\mu_{{\rm
O}_{2}}$=$-$4.59 eV. The $\mu_{{\rm O}_{2}}$ axis extends out of the page.
Only phases that can be in equilibrium with LiFePO4 are included and the lines
delineating these phases define the stability region of LiFePO4, here shown as
a shaded polygon.
Figure 1 shows the slice of the ($\mu_{\rm Li}$, $\mu_{\rm Fe}$, $\mu_{{\rm
O}_{2}}$) polyhedron in the $\mu_{{\rm O}_{2}}$=$-$4.59 eV plane, constructed
with the calculated formation enthalpies (taken from Ref.Ong et al. (2008))
for different Li-Fe-P-O2 phases. The shaded area (marked by Points A, B, C, D,
and E) shows the range of $\mu_{\rm Li}$ and $\mu_{\rm Fe}$ values where
LiFePO4 is stable. Point A, for example, corresponds to equilibrium of LiFePO4
with Fe2O3 and Fe3(PO4)2. At this point in the chemical-potential diagram, the
system is close to forming Fe-containing secondary phases (i.e., Fe2O3 and
Fe3(PO4)2) and far from forming Li-containing secondary phases. This can be
considered as representing a “Li-deficient” environment. Similarly, Point D
can be considered as representing a “Li-excess” environment, where the system
is close to forming Li-containing secondary phases (i.e., Li4P2O7 and Li3PO4).
Note that “Li-deficient” and “Li-excess” environments in this sense do not
necessarily mean that $\mu_{\rm Li}$ in the latter is higher than in the
former, as seen in Fig. 1. Reasonable choices of the atomic chemical
potentials should be those that ensure the stability of the host compound. In
the next sections we will present our calculated formation energies for
various native defects in LiFePO4 and discuss how these defects are formed
under different experimental conditions.
Defect Complexes. Native point defects in LiFePO4 may not stay isolated but
could instead agglomerate and form defect complexes. For a complex XY
consisting of X and Y, its binding energy $E_{b}$ can be calculated using the
formation energy of the complex and those of its constituentsVan de Walle and
Neugebauer (2004)
$E_{b}=E^{f}({\rm X})+E^{f}({\rm Y})-E^{f}({\rm XY}),$ (6)
where the relation is defined such that a positive binding energy corresponds
to a stable, bound defect complex. Having a positive binding energy, however,
does not mean that the complex will readily form. For example, under thermal
equilibrium, the binding energy $E_{b}$ needs to be greater than the larger of
$E^{f}(\rm X)$ and $E^{f}(\rm Y)$ in order for the complex to have higher
concentration than its constituents.Van de Walle and Neugebauer (2004) For
further discussions on the formation of defect complexes, see, e.g., Ref.Van
de Walle and Neugebauer (2004).
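As a minimal numerical illustration of Eq. (6) and of this criterion, the sketch below uses the Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$ numbers quoted later in the text (formation energies at $\epsilon_{F}$=1.06 eV); small differences from the quoted binding energy reflect rounding.

```python
# Eq. (6): binding energy of a complex XY from the formation energies of XY
# and of its isolated constituents X and Y.
def binding_energy(Ef_X, Ef_Y, Ef_XY):
    return Ef_X + Ef_Y - Ef_XY

# Illustrative inputs: Fe_Li(+) and V_Li(-) at eps_F = 1.06 eV and the
# Fe_Li(+)-V_Li(-) complex (values quoted later in the text, rounded).
Ef_FeLi, Ef_VLi, Ef_complex = 0.42, 0.42, 0.36  # eV
Eb = binding_energy(Ef_FeLi, Ef_VLi, Ef_complex)

# Under thermal equilibrium the complex can outnumber its constituents only if
# E_b exceeds the larger of the constituent formation energies.
print(Eb, Eb > max(Ef_FeLi, Ef_VLi))
```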
## III Bulk Properties
Before presenting our results for native defects in LiFePO4, let us discuss
some basic properties of the pristine compound. Olivine-type LiFePO4 was
reported to crystallize in the orthorhombic space group $Pnma$ with
$a$=10.3377(5), $b$=6.0112(2), and $c$=4.6950(2) Å.Rousse et al. (2003) The
compound can be regarded as an ordered arrangement of Li+, Fe2+, and (PO4)3-
units. Li+ forms Li channels along the $b$-axis whereas Fe2+ stays at the
center of a slightly distorted FeO6 octahedron (interwoven with PO4
tetrahedra). This simple bonding picture will be very useful when interpreting
the structure and energetics of native defects in LiFePO4. The calculated
lattice parameters are $a$=10.461, $b$=6.061, and $c$=4.752 Å, in satisfactory
agreement with the experimental values. The calculated values are slightly
larger than the experimental ones as expected since it is well known that GGA
tends to overestimate the lattice parameters. The calculated magnetic moment
for iron (Fe2+) is 3.76 $\mu_{\rm B}$, comparable to the experimental value of
4.19(5) $\mu_{\rm B}$ at 2 K.Rousse et al. (2003)
Figure 2: Electronic density of states (DOS) of LiFePO4 in (a)
antiferromagnetic (AFM) and (b) ferromagnetic (FM) spin configurations. The
zero of the energy is set to the highest occupied state.
Figure 2 shows the total electronic density of states of LiFePO4 in
antiferromagnetic (AFM) and ferromagnetic (FM) spin configurations. An
analysis of the wavefunctions shows that, in both configurations, the valence-
band maximum (VBM) and conduction-band minimum (CBM) are Fe 3$d$ states.
Between the highly localized Fe $d$ states just below the Fermi level (at 0
eV) and the lower valence band (which consists predominantly of O 2$p$ and Fe
3$d$ states) there is an energy gap of about 0.40 eV (AFM). The Li 2$s$ state
is high up in the conduction band, suggesting that Li donates its electron to
the lattice and becomes Li+. There is strong mixing between P 3$p$ and O 2$p$
states, indicating covalent bonding within the (PO4)3- unit. The calculated
band gap is 3.62 and 3.58 eV for AFM and FM spin configurations, respectively,
in agreement with a previously reported value (of 3.7 eV).Zhou et al. (2004)
Experimentally, LiFePO4 has been reported to have a band gap of about
3.8$-$4.0 eV, obtained from diffuse reflectance measurements.Zhou et al.
(2004); Zaghib et al. (2007) The compound is therefore an insulating, large
band-gap material.
In the GGA+$U$ framework, the electronic structure can depend on the $U$
value. Indeed, we find that the calculated band gap of LiFePO4 is 3.20 and
4.00 eV in the AFM spin configuration for $U$=3.71 and 5.90 eV, respectively,
compared to 3.62 eV obtained in calculations using $U$=4.30 eV mentioned
earlier. The energy gap between the highest valence band (Fe 3$d$ states) and
the lower valence band (predominantly O 2$p$ and Fe 3$d$ states) is also
larger for smaller $U$ value: 0.58 and 0.20 eV for $U$=3.71 and 5.90 eV,
respectively. However, our GGA+$U$ calculations show that the electronic
structure near the band gap region is not sensitive to the choice of $U$
value, for $U$ lying within the range from 3.71 to 5.90 eV. As we illustrate
in the next section, knowing the structural and electronic properties,
especially the nature of the electronic states near the VBM and CBM, is
essential in understanding the formation of native defects in LiFePO4.
## IV Formation of Native Defects
In insulating, large band-gap materials such as LiFePO4, native point defects
are expected to exist in charged states other than neutral, and charge
neutrality requires that defects with opposite charge states coexist in equal
concentrations.Peles and Van de Walle (2007); Hoang and Van de Walle (2009);
Wilson-Short et al. (2009) We therefore investigated various native defects in
LiFePO4 in all possible charge states. These defects include hole polarons
(hereafter denoted as $p^{+}$), lithium vacancies ($V_{\rm Li}$) and
interstitials (Lii), iron antisites (FeLi), lithium antisites (LiFe), iron
vacancies ($V_{\rm Fe}$), and PO4 vacancies ($V_{{\rm PO}_{4}}$). We also
considered defect complexes that consist of certain point defects such as
FeLi-$V_{\rm Li}$ (a complex of FeLi and $V_{\rm Li}$), FeLi-LiFe (a complex
of FeLi and LiFe), and 2FeLi-$V_{\rm Fe}$ (a complex of two FeLi and one
$V_{\rm Fe}$).
Figure 3: Calculated formation energies of native point defects and defect
complexes in LiFePO4, plotted as a function of Fermi level with respect to the
VBM. The energies are obtained at Point A in the chemical-potential diagram
for $\mu_{{\rm O}_{2}}$=$-$4.59 eV (cf. Fig. 1), representing equilibrium with
Fe2O3 and Fe3(PO4)2.
Figure 3 shows the calculated formation energies of relevant native point
defects and defect complexes in LiFePO4 for a representative oxygen chemical
potential value, $\mu_{{\rm O}_{2}}$=$-$4.59 eV, and $\mu_{{\rm Li}}$=$-$2.85
eV, $\mu_{{\rm Fe}}$=$-$2.18 eV, and $\mu_{{\rm P}}$=$-$4.64 eV. This set of
atomic chemical potentials corresponds to Point A in Fig. 1, representing the
limiting case (Li-deficient) where Fe2O3, Fe3(PO4)2, and LiFePO4 are in
equilibrium. The slope in the formation energy plots indicates the charge
state. Positive slope indicates that the defect is positively charged,
negative slope indicates the defect is negatively charged. With the chosen set
of atomic chemical potentials, the positively charged iron antisite Fe${}_{\rm
Li}^{+}$ and negatively charged lithium vacancy ($V_{\rm Li}^{-}$) have the
lowest formation energies among the charged point defects for a wide range of
Fermi-level values. While there are different charged point defects coexisting
in the system with different concentrations, the ones with the lowest
formation energies have the highest concentrations and are dominant.Peles and
Van de Walle (2007); Hoang and Van de Walle (2009); Wilson-Short et al. (2009)
Figure 3 indicates that, in the absence of electrically active impurities that
can affect the Fermi-level position, or when such impurities occur in much
lower concentrations than charged native defects, the Fermi level will be
pinned at $\epsilon_{F}$=1.06 eV, where the formation energies and hence,
approximately, the concentrations of Fe${}_{\rm Li}^{+}$ and $V_{\rm Li}^{-}$
are equal. Also, charged native defects have positive formation energies only
near $\epsilon_{F}$=1.06 eV. Therefore, any attempt to deliberately shift the
Fermi level far away from this position and closer to the VBM or CBM, e.g.,
via doping with acceptors or donors, will result in positively or negatively
charged native defects having negative formation energies, i.e., the native
defects will form spontaneously and counteract the effects of doping.Van de
Walle and Neugebauer (2004); Janotti and Van de Walle (2009); Van de Walle et
al. (2010); Catlow, C. R. A.; Sokol, A. A.; Walsh, A. (2011) This indicates
that LiFePO4 cannot be doped $p$-type or $n$-type. In the following, we
analyze in detail the structure and energetics of the native defects. The
dependence of defect formation energies on the choice of atomic chemical
potentials will be discussed in the next section.
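Because the slope of each line in Fig. 3 is just the defect charge, the pinning position can be read off as the crossing of the lowest-lying positive and negative branches. The sketch below illustrates this construction; the intercepts are hypothetical numbers chosen only so that a $q$=+1 and a $q$=$-$1 defect cross at $\epsilon_{F}$=1.06 eV with a formation energy of 0.42 eV, as quoted above.

```python
# For a charged defect, E^f(eps_F) = E^f(eps_F = 0) + q * eps_F.
def Ef(Ef0, q, eps_F):
    return Ef0 + q * eps_F

def pinning_level(Ef0_pos, q_pos, Ef0_neg, q_neg):
    # Solve Ef0_pos + q_pos*eps = Ef0_neg + q_neg*eps for eps.
    return (Ef0_neg - Ef0_pos) / (q_pos - q_neg)

# Hypothetical intercepts chosen to reproduce the crossing discussed in the
# text (eps_F = 1.06 eV, E^f = 0.42 eV) for a +1 and a -1 defect.
eps = pinning_level(Ef0_pos=-0.64, q_pos=+1, Ef0_neg=1.48, q_neg=-1)
print(eps, Ef(-0.64, +1, eps))  # -> 1.06 0.42
```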
Small Hole Polarons. The creation of a free positively charged (hole) polaron
$p^{+}$ (i.e., $p^{+}$ in the absence of other defects or extrinsic
impurities) involves removing one electron from the LiFePO4 supercell
(hereafter referred to as “the system”). This results in the formation of a
Fe3+ site in the system. The calculated magnetic moment at this (Fe3+) site is
4.28 $\mu_{\rm B}$, compared to 3.76 $\mu_{\rm B}$ at other iron (Fe2+) sites.
The local geometry near the Fe3+ site is slightly distorted with the
neighboring O atoms moving toward Fe3+; the average Fe-O bond length is 2.07
Å, compared to 2.18 Å of the other Fe-O bonds. Note that in pristine FePO4,
the delithiated phase of LiFePO4, the calculated magnetic moment is 4.29
$\mu_{\rm B}$ at the iron (Fe3+) sites, and the calculated average Fe-O bond
length is 2.06 Å. This indicates that a hole (created by removing an electron
from the system) has been successfully stabilized at one of the iron sites and
the lattice geometry is locally distorted, giving rise to a hole polaron in
LiFePO4. Since the local distortion is found to be mostly limited to the
neighboring O atoms of the Fe3+ site, this hole polaron is considered as small
polaron where the hole is “self-trapped” in its own potential.Shluger and
Stoneham (1993); Stoneham et al. (2007) The formation of free hole polarons in
LiFePO4 is necessarily related to the rather strong interaction between Fe
3$d$ and O $p$ states, and the fact that the VBM consists predominantly of the
highly localized $d$ states.
We have investigated the migration path of $p^{+}$ and estimated the energy
barrier using the NEB method.Henkelman et al. (2000) The migration of $p^{+}$
involves an electron and its associated lattice distortion being transferred
from a Fe2+ site to a neighboring Fe3+ site. Since spin conservation is
required in this process, we carried out our calculations not using the
ground-state AFM structure of LiFePO4 but the FM one where all the spins are
aligned in the same direction. We calculated the migration path by sampling
the atomic positions between ground-state configurations. For those
configurations other than ground-state ones, the atomic positions were kept
fixed and only electron density was relaxed self-consistently, similar to the
method presented in Ref.Maxisch et al. (2006). The migration barrier is the
energy difference between the highest-energy configuration and the ground
state. We find that the migration barrier of $p^{+}$ is 0.25 eV between the
two nearest Fe sites approximately in the $b$-$c$ plane, which is comparable
to that (0.22 eV) reported in Ref.Maxisch et al. (2006).
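Once the energies along the sampled path are in hand, the barrier extraction itself is trivial; the sketch below illustrates it with a made-up energy profile (the NEB and constrained-density calculations themselves are, of course, the demanding part).

```python
# Migration barrier = energy of the highest configuration along the path minus
# the ground-state (endpoint) energy.  The profile below is invented purely to
# illustrate the bookkeeping.
def migration_barrier(path_energies):
    e0 = min(path_energies[0], path_energies[-1])  # ground-state endpoints
    return max(path_energies) - e0

path = [0.00, 0.09, 0.21, 0.25, 0.19, 0.08, 0.00]  # eV, hypothetical profile
print(migration_barrier(path))  # -> 0.25
```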
Figure 4: Defects in LiFePO4: (a) $V_{\rm Li}^{0}$ can be regarded as a
complex of $V_{\rm Li}^{-}$ (represented by the empty sphere) and hole polaron
$p^{+}$ (i.e., Fe3+; decorated with the square of the wavefunctions of the
lowest unoccupied state in the electronic structure of LiFePO4 in the presence
of $V_{\rm Li}^{0}$); (b) Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$, a complex of
Fe${}_{\rm Li}^{+}$ and $V_{\rm Li}^{-}$; and (c) Fe${}_{\rm
Li}^{+}$-Li${}_{\rm Fe}^{-}$, a complex of Fe${}_{\rm Li}^{+}$ and Li${}_{\rm
Fe}^{-}$. Large (gray) spheres are Li, medium (blue) spheres Fe, small
(yellow) spheres P, and smaller (red) spheres O.
Vacancies and Interstitials. Negatively charged lithium vacancies ($V_{\rm
Li}^{-}$) are created by removing a Li+ ion from the system. Since, in
LiFePO4, Li donates one electron to the lattice, one expects that the removal
of Li+ causes only a small disturbance in the system. Indeed we see that
lattice relaxations around the void formed by the removed Li+ are negligible.
The energy needed to form $V_{\rm Li}^{-}$ should also be small, consistent
with our results in Fig. 3. $V_{\rm Li}^{0}$, on the other hand, is created by
removing a Li atom (i.e., Li+ and an electron) from the system. This leads to
the formation of a void (at the site of the removed Li+) and an Fe3+ (formed
by the removed electron) at the neighboring Fe site. Similar to the free hole
polaron, the neighboring O atoms of the Fe3+ site in $V_{\rm Li}^{0}$ also
move toward Fe3+, with the average Fe-O distance being 2.07 Å. The calculated
magnetic moment is 4.29 $\mu_{\rm B}$ at the Fe3+ site, equal to that at the
Fe3+ site in the case of a free polaron. $V_{\rm Li}^{0}$, therefore, should
be regarded as a complex of $V_{\rm Li}^{-}$ and $p^{+}$, with the two defects
being 3.26 Å apart. Figure 4(a) shows the structure of $V_{\rm Li}^{0}$. The
binding energy of $V_{\rm Li}^{0}$ is 0.34 eV (with respect to $V_{\rm
Li}^{-}$ and $p^{+}$). Note that this value is 0.42 eV in our calculations
using (1$\times$3$\times$3) supercells which have 252 atoms/cell. Our
estimated binding energy is thus comparable to that of 0.39 and about 0.50 eV
reported by Fisher et al.Fisher et al. (2008) and Maxisch et al.,Maxisch et
al. (2006) respectively. For lithium interstitials, the stable defect is
Li${}_{i}^{+}$, created by adding Li+ into the system. Other charge states of
$V_{\rm Li}$ and Lii are not included in Fig. 3 because they either have too
high energies to be relevant or are unstable.
The migration path of $V_{\rm Li}^{-}$ is calculated by moving a Li+ unit from
a nearby lattice site into the vacancy. The energy barrier for $V_{\rm
Li}^{-}$ is estimated to be 0.32 eV along the $b$-axis and 2.27 eV along the
$c$-axis. This suggests that, in the absence of other native defects and
extrinsic impurities, lithium diffusion in LiFePO4 is highly one-dimensional
along the Li channels ($b$-axis) because the energy barrier to cross between
the channels is too high. The migration path of $V_{\rm Li}^{-}$ is, however,
not a straight line but a curved path along the $b$-axis. Our results are thus
in general agreement with previously reported theoretical studiesMorgan et al.
(2004); Islam et al. (2005); Fisher et al. (2008); Adams (2010) and
experimental observation.Nishimura et al. (2008) The estimated energy barriers
for the migration of $V_{\rm Li}^{-}$ along the $b$ and $c$ axes are lower
than those (0.55 and 2.89 eV, respectively) reported by Islam et al.Islam et
al. (2005) obtained from calculations using inter-atomic potentials, but
closer to those (0.27 and about 2.50 eV) reported by Morgan et al.Morgan et
al. (2004) obtained in GGA calculations with smaller supercells. For $V_{\rm
Li}^{0}$, a complex of $p^{+}$ and $V_{\rm Li}^{-}$, one can estimate the
lower bound of the migration barrier by taking the higher of the migration
energies of the constituents,Wilson-Short et al. (2009) which is 0.32 eV
(along the $b$-axis), the value for $V_{\rm Li}^{-}$.
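The degree of anisotropy implied by these barriers can be expressed as a ratio of Arrhenius hopping rates. The sketch below assumes a common attempt frequency (which cancels in the ratio) and a hypothetical temperature of 300 K; it is meant only to show why the 0.32 versus 2.27 eV barriers make lithium diffusion effectively one-dimensional.

```python
# Ratio of Arrhenius hopping rates along the b- and c-axes, assuming the same
# attempt frequency for both directions (it cancels in the ratio).
import math

K_B = 8.617333e-5  # eV/K

def rate_ratio(Em_fast, Em_slow, T):
    return math.exp(-(Em_fast - Em_slow) / (K_B * T))

# Barriers quoted in the text for V_Li(-): 0.32 eV (b-axis), 2.27 eV (c-axis).
print(rate_ratio(0.32, 2.27, T=300.0))  # enormous ratio: essentially 1D diffusion
```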
Other possible vacancies in LiFePO4 are those associated with Fe2+ and (PO4)3-
units. The creation of $V_{\rm Fe}^{2-}$ corresponds to removing Fe2+ from the
system. We find that this negatively charged defect causes significant
relaxations in the lattice geometry. The neighboring Li+ ions move toward the
defect, resulting in the Li channels being bent near $V_{\rm Fe}^{2-}$ where
Li+ ions are displaced up to 0.27 Å from their original positions. $V_{\rm
Fe}^{-}$ can be considered as a complex of $V_{\rm Fe}^{2-}$ and $p^{+}$ with
the distance between the two defects being 3.81 Å. $V_{\rm Fe}^{3-}$, on the
other hand, corresponds to removing Fe2+ but leaving an electron in the
system. This defect can be regarded as a complex of $V_{\rm Fe}^{2-}$ and a
negatively charged (electron) polaron (hereafter denoted as $p^{-}$). At the
Fe site where the electron polaron resides, which is 7.68 Å from the vacancy,
the calculated magnetic moment is 2.86 $\mu_{\rm B}$; the average Fe-O
distance is 2.30 Å, which is larger than that associated with other Fe sites
(2.18 Å). Finally, $V_{{\rm PO}_{4}}$ is stable as $V_{{\rm PO}_{4}}^{3+}$ as
expected. This positively charged defect corresponds to removing the whole
(PO4)3- unit from the system. With the chosen set of atomic chemical
potentials, $V_{\rm Fe}^{-}$ and $V_{{\rm PO}_{4}}^{3+}$ have very high
formation energies (2.33 eV and 3.56 eV, respectively, at $\epsilon_{F}$=1.06
eV) and are therefore not included in Fig. 3.
Antisite Defects. Lithium antisites LiFe are created by replacing Fe at an Fe
site with Li. Li${}_{\rm Fe}^{-}$ can be considered as replacing Fe2+ with
Li+. Due to the Coulombic interaction, the two nearest Li+ ion neighbors of
Li${}_{\rm Fe}^{-}$ are pulled closer to the negatively charged defect with
the distance being 3.25 Å, compared to 3.32 Å of the equivalent bond in
pristine LiFePO4. Li${}_{\rm Fe}^{0}$, on the other hand, can be regarded as a
complex of Li${}_{\rm Fe}^{-}$ and $p^{+}$ with the distance between the two
defects being 3.98 Å. The binding energy of Li${}_{\rm Fe}^{0}$ (with respect
to Li${}_{\rm Fe}^{-}$ and $p^{+}$) is 0.30 eV. Similarly, one can replace Li
at an Li site with Fe, which creates an iron antisite FeLi. Fe${}_{\rm
Li}^{+}$ corresponds to replacing Li+ with Fe2+, whereas Fe${}_{\rm Li}^{2+}$
can be regarded as a complex of Fe${}_{\rm Li}^{+}$ and $p^{+}$. For
Fe${}_{\rm Li}^{0}$, which corresponds to replacing one Li+ with Fe2+ and
adding an extra electron to the system, the extra electron is stabilized at
the substituting Fe atom, where the calculated magnetic moment is 2.95
$\mu_{\rm B}$. One might also regard Fe${}_{\rm Li}^{0}$ as a complex of
Fe${}_{\rm Li}^{+}$ and $p^{-}$, but in this case the two defects stay at the
same lattice site. With the chosen set of chemical potentials, Fe${}_{\rm
Li}^{0}$ has a very high formation energy (2.04 eV) and is therefore not
included in Fig. 3. Again, other native defects that are not included here are
unstable or have too high formation energies to be relevant.
Defect Complexes. From the above analyses, it is clear that defects such as
$p^{+}$ ($p^{-}$), $V_{\rm Li}^{-}$, $V_{\rm Fe}^{2-}$, Fe${}_{\rm Li}^{+}$,
Li${}_{\rm Fe}^{-}$, and $V_{{\rm PO}_{4}}^{3+}$ can be considered as
elementary native defects in LiFePO4, i.e., the structure and energetics of
other native defects can be interpreted in terms of these basic building
blocks. This is similar to what has been observed in complex hydrides.Wilson-
Short et al. (2009) These elementary defects (except the free polarons) are,
in fact, point defects that are formed by adding and/or removing only Li+,
Fe2+, and (PO4)3- units. They have low formation energies (cf. Fig. 3) because
the addition/removal of these units causes the least disturbance to the
system, which is consistent with the simple bonding picture for LiFePO4
presented in the previous section. The identification of the elementary native
defects, therefore, not only helps us gain a deeper understanding of the
structure and energetics of the defects in LiFePO4 but also has important
implications. For example, one should treat the migration of defects such as
$V_{\rm Li}^{0}$ as that of a $V_{\rm Li}^{-}$ and $p^{+}$ complex with a
finite binding energy, rather than as a single point defect.
In addition to the defect complexes that involve $p^{+}$ and $p^{-}$ such as
$V_{\rm Li}^{0}$, $V_{\rm Fe}^{-}$, $V_{\rm Fe}^{3-}$, Li${}_{\rm Fe}^{0}$,
Fe${}_{\rm Li}^{0}$, and Fe${}_{\rm Li}^{2+}$ described above, we also
considered those consisting of $V_{\rm Li}^{-}$, Fe${}_{\rm Li}^{+}$,
Li${}_{\rm Fe}^{-}$, and $V_{\rm Fe}^{2-}$ such as Fe${}_{\rm
Li}^{+}$-Li${}_{\rm Fe}^{-}$, Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$, and
2Fe${}_{\rm Li}^{+}$-$V_{\rm Fe}^{2-}$. Figure 4(b) shows the structure of
Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$. The distance between Fe${}_{\rm Li}^{+}$
and $V_{\rm Li}^{-}$ is 2.96 Å (along the $b$-axis), compared to 3.03 Å
between the two Li sites in pristine LiFePO4. We find that this complex has a
formation energy of 0.36$-$0.56 eV for reasonable choices of atomic chemical
potentials, and a binding energy of 0.49 eV. With such a relatively high
binding energy, even higher than the formation energy of isolated Fe${}_{\rm
Li}^{+}$ and $V_{\rm Li}^{-}$ (0.42 eV at $\epsilon_{F}$=1.06 eV, cf. Fig. 3),
Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$ is expected to occur with a concentration
larger than either of its constituents under thermal equilibrium conditions
during synthesis.Van de Walle and Neugebauer (2004) In Fe${}_{\rm
Li}^{+}$-$V_{\rm Li}^{-}$, the energy barrier for migrating Fe${}_{\rm
Li}^{+}$ to $V_{\rm Li}^{-}$ is about 0.74 eV, comparable to that (0.70 eV)
reported by Fisher et al.Fisher et al. (2008) This value is twice as high as
the migration barrier of $V_{\rm Li}^{-}$, indicating Fe${}_{\rm Li}^{+}$ has
low mobility.
Figure 4(c) shows the structure of Fe${}_{\rm Li}^{+}$-Li${}_{\rm Fe}^{-}$.
This antisite pair has a formation energy of 0.51 eV. This value is
independent of the choice of chemical potentials because the chemical
potential term in the formation energy formula cancels out, cf. Eq. (2).
Fe${}_{\rm Li}^{+}$-Li${}_{\rm Fe}^{-}$ has a binding energy of 0.44 eV; the
distance between Fe${}_{\rm Li}^{+}$ and Li${}_{\rm Fe}^{-}$ is 3.45 Å,
compared to 3.32 Å between the lithium and iron sites. Finally, we find that
2Fe${}_{\rm Li}^{+}$-$V_{\rm Fe}^{2-}$ has a formation energy of 1.47$-$1.67
eV for reasonable choices of the atomic chemical potentials, and a binding
energy of 1.25 eV. With this high formation energy, the complex is unlikely to
form in LiFePO4, and is therefore not included in Fig. 3. Note that the
formation energies of Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$ and 2Fe${}_{\rm
Li}^{+}$-$V_{\rm Fe}^{2-}$ have the same dependence on the atomic chemical
potentials (both contain the term $-\mu_{\rm Fe}+2\mu_{\rm Li}$) and, hence,
the same dependence on $\mu_{{\rm O}_{2}}$. For any given set of chemical
potentials, the formation energy of 2Fe${}_{\rm Li}^{+}$-$V_{\rm Fe}^{2-}$ is
higher than that of Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$ by 1.11 eV. We also
considered possible lithium and iron Frenkel pairs (i.e., interstitial-vacancy
pairs), but these pairs are unstable toward recombination, probably because
there is no energy barrier or too small of a barrier between the vacancy and
the interstitial.
The above mentioned neutral defect complexes have also been studied by other
research groups using either interatomic-potential simulationsIslam et al.
(2005); Fisher et al. (2008) or first-principles DFT calculations.Malik et al.
(2010) Islam et al. found that Fe${}_{\rm Li}^{+}$-Li${}_{\rm Fe}^{-}$ has a
formation energy of 0.74 eV (or 1.13 eV if the two defects in the pair are
considered as isolated defects) and a binding energy of 0.40 eV, and is
energetically most favorable among possible native defects.Islam et al. (2005)
The reported formation energy is, however, higher than our calculated value by
0.23 eV. This difference may be due to the different methods used in the
calculations. Fisher et al. reported a formation energy of 3.13 eV for
Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$,Fisher et al. (2008) which is much higher
than our calculated value. Note, however, that Fisher et al. assumed the
reaction ${\rm FeO}+2{\rm Li}_{\rm{Li}}^{0}$$\rightarrow$${\rm Fe}_{\rm
Li}^{+}+V_{\rm Li}^{-}+{\rm Li}_{2}{\rm O}$ for the formation of Fe${}_{\rm
Li}^{+}$-$V_{\rm Li}^{-}$ which implies that LiFePO4 is in equilibrium with
FeO and Li2O. This scenario is unlikely to occur, as indicated in the Li-Fe-
P-O2 phase diagrams calculated by Ong et al.,Ong et al. (2008) where
equilibrium between these phases has never been observed. This may also be the
reason that the formation energy of $V_{\rm Li}^{0}$ reported by the same
authors (4.41 eV)Fisher et al. (2008) is much higher than our calculated
values.
Based on first-principles calculations, Malik et al. reported a formation
energy of 0.515$-$0.550 eV for the antisite pair Fe${}_{\rm
Li}^{+}$-Li${}_{\rm Fe}^{-}$,Malik et al. (2010) which is very close to our
calculated value (0.51 eV). For 2Fe${}_{\rm Li}^{+}$-$V_{\rm Fe}^{2-}$, the
formation energy was reported to be of about 1.60$-$1.70 eV for $\mu_{{\rm
O}_{2}}$ ranging from $-$3.03 to $-$8.21 eV,Malik et al. (2010) which is also
comparable to our results. Malik et al., however, obtained a much higher
formation energy for Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$, from about 3.60 to
5.10 eV for the same range of $\mu_{{\rm O}_{2}}$ values.Malik et al. (2010)
This energy is much higher than that obtained in our calculations (0.36$-$0.56
eV). Although we have no explanation for this discrepancy, we observe that the
calculated formation energies of 2Fe${}_{\rm Li}^{+}$-$V_{\rm Fe}^{2-}$ and
Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$ in Malik et al.’s work have distinct
$\mu_{{\rm O}_{2}}$-dependencies (see Figure S1 in the Supporting Information
of Ref. Malik et al. (2010)), instead of having the same dependence on
$\mu_{{\rm O}_{2}}$ as we discussed above, indicating their scheme of
accounting for the atomic chemical potentials differs from the standard
procedure.
## V Tailoring Defect Concentrations
It is important to note that the energy landscape presented in Fig. 3 may
change as one changes the atomic chemical potentials, i.e., synthesis
conditions. The calculated formation energies are a function of four variables
$\mu_{\rm Li}$, $\mu_{\rm Fe}$, $\mu_{\rm P}$, and $\mu_{{\rm O}_{2}}$, which
in turn depend on each other and vary within the established constraints. A
change in one variable leads to changes in the other three. In the following
discussions, we focus on two “knobs” that can be used to experimentally tailor
the formation energy and hence the concentration of different native defects
in LiFePO4, and suppress or enhance certain defects for targeted applications.
One is $\mu_{{\rm O}_{2}}$, which can be controlled through temperature, pressure, and/or oxygen reducing agents. Lower $\mu_{{\rm O}_{2}}$ values
represent the so-called “more reducing environments,” which are usually
associated with higher temperatures and/or lower oxygen partial pressures
and/or the presence of oxygen reducing agents; whereas higher $\mu_{{\rm
O}_{2}}$ values represent “less reducing environments.”Ong et al. (2008) The
other is the degree of lithium off-stoichiometry with respect to LiFePO4
exhibited through the tendency toward formation of Li-containing or Fe-
containing secondary phases in the synthesis of LiFePO4. As discussed
previously, in the environments to which we refer as Li-excess (Li-deficient),
the system is close to forming Li-containing (Fe-containing) secondary phases.
Figure 5: Calculated formation energies of native point defects and defect
complexes in LiFePO4, plotted as a function of Fermi level with respect to the
VBM. The energies are obtained at $\mu_{{\rm O}_{2}}$=$-$3.03 eV, and
equilibrium with Fe2O3 and Fe7(PO4)6 is assumed.
Varying the Atomic Chemical Potentials. Let us assume, for example, Li-
deficient environments and vary $\mu_{{\rm O}_{2}}$ from $-$3.03 eV (where
LiFePO4 first starts to form) to $-$8.25 eV (where it ceases to form).Ong et
al. (2008) This amounts to choosing different cuts along the $\mu_{{\rm
O}_{2}}$ axis in Fig. 1 to give different two-dimensional polygons of LiFePO4
stability. Figure 5 shows the calculated formation energies for $\mu_{{\rm
O}_{2}}$=$-$3.03 eV, assuming equilibrium with Fe2O3 and Fe7(PO4)6 (i.e., Li-
deficient) which gives rise to $\mu_{\rm Li}$=$-$3.41, $\mu_{\rm Fe}$=$-$3.35,
and $\mu_{\rm P}$=$-$6.03 eV. Figure 5 clearly shows changes in the energy
landscape of the defects, compared to Fig. 3. The lowest energy point defects
that determine the Fermi-level position are now $p^{+}$ and $V_{\rm Li}^{-}$.
Near $\epsilon_{F}$=0.59 eV where $p^{+}$ and $V_{\rm Li}^{-}$ have equal
formation energies, $V_{\rm Li}^{0}$ also has the lowest energy. This
indicates that, under high $\mu_{{\rm O}_{2}}$ and Li-deficient environments,
$p^{+}$ and $V_{\rm Li}^{-}$ are the dominant native point defects in LiFePO4
and are likely to exist in the form of the neutral complex $V_{\rm Li}^{0}$.
Note that, with the chosen set of atomic chemical potentials, Li${}_{\rm
Fe}^{-}$ also has a low formation energy, very close to that of $V_{\rm
Li}^{-}$, indicating the presence of a relatively high concentration of
Li${}_{\rm Fe}^{-}$. Similar to $V_{\rm Li}^{-}$, Li${}_{\rm Fe}^{-}$ can
combine with $p^{+}$ to form Li${}_{\rm Fe}^{0}$. However, since Li${}_{\rm
Fe}^{0}$ has a higher formation energy and a smaller binding energy than
$V_{\rm Li}^{0}$, only a small portion of Li${}_{\rm Fe}^{-}$ is expected to be stable in the form of Li${}_{\rm Fe}^{0}$ under thermal equilibrium conditions. Iron vacancies have the lowest energies over a wide range of Fermi-level values, as expected given the very low iron chemical potential.
Figure 6: Calculated formation energies of native point defects and defect
complexes in LiFePO4, plotted as a function of Fermi level with respect to the
VBM. The energies are obtained at $\mu_{{\rm O}_{2}}$=$-$8.21 eV, and
equilibrium with Fe2P and Fe3P is assumed.
Figure 6 shows the calculated formation energies for $\mu_{{\rm
O}_{2}}$=$-$8.21 eV. The formation energies are obtained by assuming
equilibrium with Fe2P and Fe3P (i.e., Li-deficient) which gives rise to
$\mu_{\rm Li}$=$-$1.80, $\mu_{\rm Fe}$=$-$0.24, and $\mu_{\rm P}$=$-$0.39 eV.
We find that Fe${}_{\rm Li}^{+}$ and Li${}_{\rm{Fe}}^{-}$ are now the dominant
native point defects, pinning the Fermi level at $\epsilon_{F}$=2.00 eV. The
complex Fe${}_{\rm Li}^{+}$-Li${}_{\rm{Fe}}^{-}$ has a binding energy of 0.44
eV, comparable to the formation energies of Fe${}_{\rm Li}^{+}$ and
Li${}_{\rm{Fe}}^{-}$ (which are both 0.48 eV at $\epsilon_{F}$=2.00 eV). This
suggests that Fe${}_{\rm Li}^{+}$ and Li${}_{\rm{Fe}}^{-}$ are likely to exist both in the form of Fe${}_{\rm Li}^{+}$-Li${}_{\rm{Fe}}^{-}$ and as isolated point defects. With this set of atomic chemical potentials, we find
that $V_{{\rm PO}_{4}}^{3+}$ has the lowest formation energy near the VBM, and
Fe${}_{\rm Li}^{0}$ has a formation energy of 1.15 eV which, while still high, is lower than those of $V_{\rm Li}^{0}$ ($1.93$ eV) and Li${}_{\rm Fe}^{0}$ ($1.98$ eV), defects that were found to have lower formation energies under different conditions (cf. Figures 3 and 5).
We also investigated the dependence of defect formation energies on
$\mu_{\rm{Li}}$ (and $\mu_{\rm{Fe}}$), i.e., Li-deficiency versus Li-excess,
for a given $\mu_{{\rm O}_{2}}$ value. For $\mu_{{\rm O}_{2}}$=$-4.59$ eV, for
example, the results obtained at Points B and C in Fig. 1 show energy
landscapes that are similar to that at Point A (Li-deficient), namely
Fe${}_{\rm Li}^{+}$ and $V_{\rm Li}^{-}$ are the dominant native point defects
in LiFePO4 and are likely to exist in the form of Fe${}_{\rm Li}^{+}$-$V_{\rm Li}^{-}$. At Point D, where LiFePO4 is in equilibrium with Li4P2O7 and Li3PO4
(Li-excess), we find instead that Fe${}_{\rm Li}^{+}$ and Li${}_{\rm Fe}^{-}$
are energetically most favorable, and are likely to exist as Fe${}_{\rm
Li}^{+}$-Li${}_{\rm{Fe}}^{-}$. The calculated formation energy of $p^{+}$ is
only slightly higher than that of Fe${}_{\rm Li}^{+}$, indicating a coexisting
high concentration of $p^{+}$. The hole polarons in this case are expected to
exist as isolated defects under thermal equilibrium since Li${}_{\rm Fe}^{0}$
has a relatively high formation energy (0.66 eV) and a small binding energy
(0.30 eV). Point E gives results that are similar to those at Point D. In
contrast, when we choose $\mu_{{\rm O}_{2}}$=$-8.21$ eV, we find that
Fe${}_{\rm Li}^{+}$ and Li${}_{\rm Fe}^{-}$ are the most energetically
favorable defects regardless of the choice of phase-equilibrium conditions.
Table 1: Calculated formation energies ($E^{f}$) and migration barriers
($E_{m}$) of the most relevant native point defects and defect complexes in
LiFePO4. (1)$-$(8) are the equilibrium conditions; see text. Binding energies
($E_{b}$) of the defect complexes (with respect to their isolated
constituents) are given in the last column. The formation energy of
2Fe${}_{\rm{Li}}^{+}$-$V_{\rm{Fe}}^{2-}$ is high and thus the complex is not
likely to form, but is also given here for comparison.
Defect | $E^{f}$ (eV): (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | $E_{m}$ (eV) | Constituents | $E_{b}$ (eV)
---|---|---|---|---|---|---|---|---|---|---|---
$p^{+}$ | 0.33 | 0.32 | 0.54 | 0.40 | 0.80 | 0.49 | 1.74 | 1.72 | 0.25 | |
Fe${}_{\rm{Li}}^{+}$ | 0.57 | 0.63 | 0.43 | 0.55 | 0.42 | 0.48 | 0.48 | 0.48 | | |
$V_{\rm{Li}}^{-}$ | 0.33 | 0.41 | 0.43 | 0.47 | 0.42 | 0.56 | 0.53 | 0.55 | 0.32 | |
Li${}_{\rm{Fe}}^{-}$ | 0.39 | 0.32 | 0.52 | 0.40 | 0.53 | 0.48 | 0.48 | 0.48 | | |
$V_{\rm{Li}}^{0}$ | 0.32 | 0.38 | 0.62 | 0.52 | 0.88 | 0.70 | 1.93 | 1.95 | 0.32 _a_ | $V_{\rm{Li}}^{-}$ \+ $p^{+}$ | 0.34, 0.42 _b_
Li${}_{\rm{Fe}}^{0}$ | 0.42 | 0.34 | 0.76 | 0.50 | 1.03 | 0.66 | 1.92 | 1.90 | | Li${}_{\rm{Fe}}^{-}$ \+ $p^{+}$ | 0.30
Fe${}_{\rm{Li}}^{+}$-$V_{\rm{Li}}^{-}$ | 0.41 | 0.55 | 0.37 | 0.53 | 0.36 | 0.55 | 0.52 | 0.56 | | Fe${}_{\rm{Li}}^{+}$ \+ $V_{\rm{Li}}^{-}$ | 0.49
Fe${}_{\rm{Li}}^{+}$-Li${}_{\rm{Fe}}^{-}$ | 0.51 | 0.51 | 0.51 | 0.51 | 0.51 | 0.51 | 0.51 | 0.51 | | Fe${}_{\rm{Li}}^{+}$ \+ Li${}_{\rm{Fe}}^{-}$ | 0.44
2Fe${}_{\rm{Li}}^{+}$-$V_{\rm{Fe}}^{2-}$ | 1.52 | 1.66 | 1.48 | 1.64 | 1.47 | 1.66 | 1.63 | 1.67 | | 2Fe${}_{\rm{Li}}^{+}$ \+ $V_{\rm{Fe}}^{2-}$ | 1.25
_a_ Lower bound, estimated by considering $V_{\rm{Li}}^{0}$ as a complex of
$V_{\rm{Li}}^{-}$ and $p^{+}$ and taking the higher of the migration energies
of the constituents. _b_ The value obtained in calculations using larger
supercells (252 atoms/cell).
Identifying the General Trends. We list in Table 1 the formation energies of
the most relevant native point defects and defect complexes in LiFePO4,
migration barriers of selected point defects, and binding energies of the
defect complexes. The chemical potentials are chosen with representative
$\mu_{{\rm O}_{2}}$ values and Li-deficient and Li-excess environments to
reflect different experimental conditions. Specifically, these conditions
represent equilibrium of LiFePO4 with (1) Fe2O3 and Fe7(PO4)6 and (2)
Li3Fe2(PO4)3 and Li3PO4, for $\mu_{{\rm O}_{2}}$=$-$3.03 eV; (3) Fe2O3 and
Fe3(PO4)2 and (4) Li4P2O7 and Li3PO4, for $\mu_{{\rm O}_{2}}$=$-$3.89 eV; (5)
Fe2O3 and Fe3(PO4)2 (i.e., Point A in Fig. 1) and (6) Li4P2O7 and Li3PO4
(i.e., Point D in Fig. 1), for $\mu_{{\rm O}_{2}}$=$-$4.59 eV; (7) Fe3P and
Fe2P and (8) Li3PO4 and Fe3P, for $\mu_{{\rm O}_{2}}$=$-$8.21 eV. Conditions
(1), (3), (5), and (7) represent Li-deficient environments, whereas (2), (4), (6), and (8) represent Li-excess. Under each condition, the formation energies for
charged defects are taken at the Fermi-level position determined by relevant
charged point defects: (1) $\epsilon_{F}$=0.59 eV (where $p^{+}$ and
$V_{\rm{Li}}^{-}$ have equal formation energies), (2) $\epsilon_{F}$=0.58 eV
($p^{+}$ and Li${}_{\rm{Fe}}^{-}$), (3) $\epsilon_{F}$=0.79 eV (Fe${}_{\rm
Li}^{+}$ and $V_{\rm{Li}}^{-}$), (4) $\epsilon_{F}$=0.66 eV ($p^{+}$ and
Li${}_{\rm{Fe}}^{-}$), (5) $\epsilon_{F}$=1.06 eV (Fe${}_{\rm Li}^{+}$ and
$V_{\rm{Li}}^{-}$), (6) $\epsilon_{F}$=0.74 eV (Fe${}_{\rm Li}^{+}$ and
Li${}_{\rm{Fe}}^{-}$), (7) $\epsilon_{F}$=2.00 eV (Fe${}_{\rm Li}^{+}$ and
Li${}_{\rm{Fe}}^{-}$), and (8) $\epsilon_{F}$=1.98 eV (Fe${}_{\rm Li}^{+}$ and
Li${}_{\rm{Fe}}^{-}$). The results for other $\mu_{{\rm O}_{2}}$ values are
not included in Table 1 because they give results that are similar to those
presented here. For example, the energy landscapes for $\mu_{{\rm
O}_{2}}$=$-$3.25 eV and $\mu_{{\rm O}_{2}}$=$-7.59$ eV are similar to those
for $\mu_{{\rm O}_{2}}$=$-$3.03 eV and $\mu_{{\rm O}_{2}}$=$-4.59$ eV,
respectively.
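The trends discussed below can be read directly off the $E^{f}$ columns of Table 1. As a simple cross-check, the sketch below stores those columns for the four dominant point defects and picks, for each condition, the lowest-energy positive and negative defect.

```python
# E^f columns of Table 1 (eV) for the dominant point defects, conditions (1)-(8).
EF = {
    "p+":     [0.33, 0.32, 0.54, 0.40, 0.80, 0.49, 1.74, 1.72],
    "Fe_Li+": [0.57, 0.63, 0.43, 0.55, 0.42, 0.48, 0.48, 0.48],
    "V_Li-":  [0.33, 0.41, 0.43, 0.47, 0.42, 0.56, 0.53, 0.55],
    "Li_Fe-": [0.39, 0.32, 0.52, 0.40, 0.53, 0.48, 0.48, 0.48],
}

for cond in range(8):
    pos = min(("p+", "Fe_Li+"), key=lambda d: EF[d][cond])
    neg = min(("V_Li-", "Li_Fe-"), key=lambda d: EF[d][cond])
    print(f"condition ({cond + 1}): {pos} / {neg}")
```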
Figure 7: Calculated formation energies of the low-energy positively and
negatively charged point defects (i.e., $p^{+}$, Fe${}_{\rm{Li}}^{+}$,
$V_{\rm{Li}}^{-}$, and Li${}_{\rm{Fe}}^{-}$) and their neutral defect
complexes, plotted as a function of Fermi level with respect to the VBM, under
different conditions: $\mu_{{\rm O}_{2}}$=$-$3.03, $-$3.89, $-$4.59, and
$-$8.21 eV, and Li-deficient and Li-excess environments. Panels (a)$-$(h)
correspond to conditions (1)$-$(8); see text. Only the complex that consists of the lowest-energy negatively and positively charged point defects is included.
In order to help capture the most general trends in the energy landscape of
native defects in LiFePO4 in going from high to low $\mu_{{\rm O}_{2}}$ values
and from Li-deficient to Li-excess environments, we plot in Fig. 7 the
calculated formation energies of the most relevant native point defects and
their lowest-energy complexes obtained under conditions (1)$-$(8). We find
that, at a given Fermi-level position $\epsilon_{F}$, the formation energy of
Fe${}_{\rm{Li}}^{+}$ decreases as $\mu_{{\rm O}_{2}}$ decreases. This is
because $\mu_{\rm Fe}$ increases more rapidly than $\mu_{\rm Li}$ does as
$\mu_{{\rm O}_{2}}$ decreases from $-$3.03 eV to $-$8.21 eV. The formation
energy of $p^{+}$, on the other hand, is independent of the choice of atomic
chemical potentials, and depends only on $\epsilon_{F}$. At high $\mu_{{\rm
O}_{2}}$ values, $p^{+}$ is lower in energy than Fe${}_{\rm{Li}}^{+}$, but
then the two defects switch orders before $\mu_{{\rm O}_{2}}$ reaches $-$3.89
eV (under Li-deficient environments) or $-$4.59 eV (Li-excess). The formation energies of the dominant positive defects, $p^{+}$ and Fe${}_{\rm{Li}}^{+}$, differ by as much as 1.3 eV for some sets of atomic chemical potentials. On
the contrary, the dominant negative defects, $V_{\rm{Li}}^{-}$ and
Li${}_{\rm{Fe}}^{-}$, have comparable formation energies throughout the range
of conditions. The largest formation energy difference between the two defects
is just 0.2 eV. The formation energy of Li${}_{\rm{Fe}}^{-}$ is slightly lower
than that of $V_{\rm{Li}}^{-}$ under Li-excess environments; whereas it is
slightly higher under Li-deficient environments, except near $\mu_{{\rm
O}_{2}}$=$-$8.21 eV where $V_{\rm{Li}}^{-}$ is higher than
Li${}_{\rm{Fe}}^{-}$. The formation energies of both $V_{\rm{Li}}^{-}$ and Li${}_{\rm{Fe}}^{-}$ increase as $\mu_{{\rm O}_{2}}$ decreases. In going from high to low $\mu_{{\rm O}_{2}}$ values, the changes in the formation energy of Fe${}_{\rm{Li}}^{+}$ and those of $V_{\rm{Li}}^{-}$ and Li${}_{\rm{Fe}}^{-}$ lead to a shift of the Fermi level from about 0.6 eV above the VBM to 2.0 eV above the VBM. The variation in the calculated
formation energy of $p^{+}$ as seen in Table 1 is a result of this shift.
Under Li-deficient environments, we find that Fe${}_{\rm{Li}}^{+}$ and
$V_{\rm{Li}}^{-}$ are energetically most favorable over a wide range of
$\mu_{{\rm O}_{2}}$ values, from $-$3.89 to $-$7.59 eV, and are likely to
exist in the form of the neutral complex Fe${}_{\rm{Li}}^{+}$-$V_{\rm{Li}}^{-}$. At the higher end of the range of $\mu_{{\rm O}_{2}}$ values, from $-$3.03 to $-$3.25 eV, $p^{+}$ and $V_{\rm{Li}}^{-}$ are the most favorable, and are likely to exist in the form of the complex $V_{\rm{Li}}^{0}$. Finally, only at the lowest end of the $\mu_{{\rm O}_{2}}$ range do Fe${}_{\rm{Li}}^{+}$ and Li${}_{\rm{Fe}}^{-}$ become the most favorable point defects, and they may exist in the form of the neutral complex Fe${}_{\rm{Li}}^{+}$-Li${}_{\rm{Fe}}^{-}$. Under Li-excess
environments, Li${}_{\rm{Fe}}^{-}$ dominates the negatively charged point
defects in the whole range of $\mu_{{\rm O}_{2}}$ values. This makes $p^{+}$
and Li${}_{\rm{Fe}}^{-}$ the most favorable point defects for $\mu_{{\rm
O}_{2}}$ ranging from $-$3.03 to $-$3.89 eV, and Fe${}_{\rm{Li}}^{+}$ and
Li${}_{\rm{Fe}}^{-}$ the most favorable defects for $\mu_{{\rm O}_{2}}$
ranging from $-$4.59 to $-$8.21 eV. Note that, although the formation energy
difference between $V_{\rm{Li}}^{-}$ and Li${}_{\rm{Fe}}^{-}$ is small (less
than 0.2 eV), the difference in their concentrations can still be significant,
as indicated by the exponential dependence in Eq. (1).
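To make this point concrete, the sketch below evaluates the Boltzmann factor implied by Eq. (1) for a 0.2 eV formation-energy difference at a hypothetical synthesis temperature of 900 K, assuming equal site and configurational prefactors for the two defects.

```python
# Concentration ratio of two defects that differ in formation energy by dEf,
# assuming equal prefactors in Eq. (1); T is a hypothetical temperature.
import math

K_B = 8.617333e-5  # eV/K

def concentration_ratio(dEf, T):
    return math.exp(-dEf / (K_B * T))

print(concentration_ratio(0.2, 900.0))  # ~0.08: roughly an order of magnitude
```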
Overall, we find that the calculated formation energies of the dominant native
point defects are low, from about 0.3 to 0.5 eV for $\mu_{{\rm O}_{2}}$ from
$-$3.03 to $-$8.21 eV (cf. Table 1). With such low formation energies, the
defects will easily form and occur with high concentrations. The dominant
defects may be different, however, if one changes the experimental conditions
during synthesis, as discussed above. This is consistent with the reported
experimental data showing the presence of various native defects in LiFePO4
samples.Ellis et al. (2006); Zaghib et al. (2007); Yang et al. (2002); Maier
and Amin (2008); Chen et al. (2008); Chung et al. (2008); Axmann et al.
(2009); Chung et al. (2010) We note that there are several limitations
inherent in our calculations. The first set of limitations comes from standard
methodological uncertainties contained in the calculated formation enthalpies
and phase diagrams as discussed in Ref.Ong et al. (2008). The second set comes
from the calculation of defect formation energies using supercell models where
supercell finite-size effects are expected.Van de Walle and Neugebauer (2004)
Since applying approximations indiscriminately in an attempt to correct for
finite-size effects tends to “overshoot” and makes the energies even less
accurate,Van de Walle and Neugebauer (2004) we did not include any corrections
pertaining to such effects in our defect calculations. A proper treatment of finite-size effects, if applied, would lead to an increase in the calculated formation energies of the charged native point defects and, hence, in the binding energies of the neutral defect complexes. In spite of these limitations, the general trends discussed above should still hold true. Our results can therefore serve as guidelines for tailoring the defect concentrations in
LiFePO4, and suppressing or enhancing certain defects for targeted
applications.
## VI Electronic and Ionic Conduction
Strictly speaking, lithium vacancies are only stable as $V_{\rm{Li}}^{-}$, and
$V_{\rm{Li}}^{0}$ (which is, in fact, a complex of $V_{\rm{Li}}^{-}$ and
$p^{+}$) cannot be considered the vacancy’s neutral charge state. Likewise,
lithium antisites, iron antisites, and iron vacancies also have one stable
charge state only and occur as, respectively, Li${}_{\rm{Fe}}^{-}$,
Fe${}_{\rm{Li}}^{+}$, and $V_{\rm{Fe}}^{2-}$. Removing/adding electrons
from/to these stable point defects always results in defect complexes
consisting of the point defects and small hole/electron polarons, as presented
in the previous sections. The fact that small polarons can be stabilized, both
in the absence and in the presence of other native defects, is necessarily
related to the electronic structure of LiFePO4 where the VBM and CBM consist
predominantly of the highly localized Fe 3$d$ states. Combined with the fact
that charged native defects have negative formation energies near the VBM and
CBM (cf. Figures 3, 5, and 6), our results therefore indicate that native
defects in LiFePO4 cannot act as sources of band-like hole and electron
conductivities. These defects will, however, act as compensating centers in
donor-like doping (for the negatively charged defects) or acceptor-like doping
(the positively charged defects). The electronic conduction in LiFePO4 thus
occurs via hopping of small hole polarons. This mechanism, in fact, has been
proposed for LiFePO4 in several previous works.Zhou et al. (2004); Maxisch et
al. (2006); Ellis et al. (2006); Zaghib et al. (2007) Zaghib et al.Zaghib et
al. (2007) found experimental evidence of intra-atomic Fe2+$-$Fe3+ transitions
in the optical spectrum of LiFePO4, which indirectly confirms the formation of
small hole polarons. The activation energy for electronic conductivity in
LiFePO4 was estimated to be 0.65 eV,Zaghib et al. (2007) comparable to that of
0.55$-$0.78 eV reported by several other experimental groups.Delacourt et al.
(2005); Ellis et al. (2006); Amin et al. (2008)
In order to compare our results with the measured activation energies, let us
assume two scenarios for hole polaron hopping in LiFePO4. In the first
scenario, we assume self-diffusion of free $p^{+}$ defects. The activation
energy $E_{a}$ for this process is calculated as the summation of the
formation energy and migration barrier of $p^{+}$, where the former is
associated with the intrinsic concentration and the latter with the
mobility,Balluffi et al. (2005)
$E_{a}=E^{f}(p^{+})+E_{m}(p^{+}),$ (7)
which gives $E_{a}$=0.57 eV, if $E^{f}(p^{+})$ is taken under the most
favorable condition where $p^{+}$ has the lowest formation energy (cf. Table
1). In the second scenario, we assume that $p^{+}$ and $V_{\rm{Li}}^{-}$ are
formed via the formation of the neutral complex $V_{\rm{Li}}^{0}$, similar to
cases where defects are created via a Frenkel or Schottky mechanism.Balluffi
et al. (2005) At high temperatures, the activation energy for the diffusion of
$p^{+}$ can be calculated as
$E_{a}=\frac{1}{2}E^{f}(V_{\rm{Li}}^{0})+E_{m}(p^{+}),$ (8)
which results in $E_{a}$=0.41 eV, assuming the condition where
$V_{\rm{Li}}^{0}$ has the lowest formation energy (cf. Table 1). The lower
bound of the activation energy for polaron conductivity is therefore
0.41$-$0.57 eV. This range of $E_{a}$ values is comparable to that obtained in
experiments.Delacourt et al. (2005); Ellis et al. (2006); Zaghib et al.
(2007); Amin et al. (2008)
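Both estimates follow directly from Eqs. (7) and (8) once the lowest formation energies in Table 1 are inserted; a minimal sketch:

```python
# Eqs. (7) and (8) evaluated with the lowest E^f values listed in Table 1.
Em_p = 0.25         # migration barrier of p+ (eV)
Ef_p_min = 0.32     # lowest E^f(p+), condition (2) in Table 1
Ef_VLi0_min = 0.32  # lowest E^f(V_Li^0), condition (1) in Table 1

Ea_free = Ef_p_min + Em_p               # Eq. (7): self-diffusion of free p+
Ea_via_VLi0 = 0.5 * Ef_VLi0_min + Em_p  # Eq. (8): p+ created via V_Li^0
print(Ea_free, Ea_via_VLi0)             # -> 0.57 0.41 (eV), as quoted in the text
```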
Among the native defects, $V_{\rm{Li}}^{-}$ is the most plausible candidate
for ionic conduction in LiFePO4, because of its low formation energy and high
mobility. Using formulae similar to Eqs. (7) and (8), we estimate the lower
bound of the activation energy for self-diffusion of $V_{\rm{Li}}^{-}$ along
the $b$-axis to be 0.48$-$0.65 eV. The reported experimental value is 0.54 eV,Li et al. (2008) or 0.62$-$0.74 eV,Amin et al. (2008) obtained from ionic conductivity measurements carried out on LiFePO4 single crystals, comparable to our calculated values. The diffusion of $V_{\rm{Li}}^{-}$,
however, may be impeded by other native defects or extrinsic impurities that
have lower mobility. The presence of Fe${}_{\rm Li}^{+}$ in LiFePO4 has been
reported and the defect is believed to reduce the electrochemical performance
of the material by blocking the lithium channels.Yang et al. (2002); Maier and
Amin (2008); Chen et al. (2008); Chung et al. (2008); Axmann et al. (2009);
Chung et al. (2010) Indeed, our results also show that Fe${}_{\rm Li}^{+}$
occurs with a high concentration under various conditions (cf. Table 1) and
has low mobility. Whether Fe${}_{\rm Li}^{+}$ is stable in the form of the neutral
complex Fe${}_{\rm{Li}}^{+}$-$V_{\rm{Li}}^{-}$ or
Fe${}_{\rm{Li}}^{+}$-Li${}_{\rm{Fe}}^{-}$ as suggested in several experimental
works,Maier and Amin (2008); Chung et al. (2008); Axmann et al. (2009); Chung
et al. (2010) however, depends on the specific conditions under which the
samples are produced.
What is most fascinating about our results is that one can suppress Fe${}_{\rm{Li}}^{+}$ by choosing suitable experimental conditions during synthesis. In fact, $p^{+}$-rich and Fe${}_{\rm Li}^{+}$-free LiFePO4 samples,
which are believed to be desirable for high intrinsic electronic and ionic
conductivities, can be produced if one maintains high $\mu_{{\rm O}_{2}}$
values (cf. Fig. 7). Of course, $\mu_{{\rm O}_{2}}$ should not be so high that
LiFePO4 becomes unstable toward forming secondary phases. Although LiFePO4
cannot be doped $p$-type or $n$-type as discussed earlier, the incorporation
of suitable electrically active impurities in the material can enhance the
electronic (ionic) conductivity by increasing the concentration of $p^{+}$
($V_{\rm{Li}}^{-}$). These impurities, if present in the samples with a
concentration higher than that of the charged native defects, can shift the
Fermi level,Peles and Van de Walle (2007); Hoang and Van de Walle (2009);
Wilson-Short et al. (2009) and hence lower the formation energy of either
$p^{+}$ or $V_{\rm{Li}}^{-}$. A decrease in the formation energy of $p^{+}$,
however, may result in an increase in the formation energy of
$V_{\rm{Li}}^{-}$ and vice versa. For example, impurities with positive
effective charges (i.e., donor-like doping) may shift the Fermi level to the
right (cf. Fig. 7), resulting in an increased (decreased) formation energy of
$p^{+}$ ($V_{\rm{Li}}^{-}$). Impurities with negative effective charges (i.e.,
acceptor-like doping), on the other hand, may produce the opposite effects,
namely, decreasing (increasing) the formation energy of $p^{+}$
($V_{\rm{Li}}^{-}$). An enhancement in both electronic and ionic
conductivities would, therefore, require a delicate combination of defect-
controlled synthesis, doping with suitable electrically active impurities, and
post-synthesis treatments. An example of the latter would be thermal treatment
which, in fact, has been reported to cause lithium loss in LiFePO4 and lower
the activation energy of the electrical conductivity.Amin and Maier (2008)
## VII Conclusion
In summary, we have carried out comprehensive first-principles studies of
native point defects and defect complexes in LiFePO4. We find that lithium
vacancies, lithium antisites, iron antisites, and iron vacancies each have one
stable charge state only and occur as, respectively, $V_{\rm{Li}}^{-}$,
Li${}_{\rm{Fe}}^{-}$, Fe${}_{\rm{Li}}^{+}$, and $V_{\rm{Fe}}^{2-}$. The
removal/addition of electrons from/to these stable native point defects does
not result in a transition to other charge states of the same defects, but
instead generates small hole/electron polarons. The fact that small polarons
can be stabilized, both in the presence and in the absence of other native
defects, is necessarily related to the electronic structure of LiFePO4. Our
analysis thus indicates that native defects in the material cannot act as
sources of band-like electron and hole conductivities, and the electronic
conduction, in fact, proceeds via hopping of small hole polarons ($p^{+}$).
The ionic conduction, on the other hand, occurs via diffusion of lithium
vacancies.
Among all possible native defects, $p^{+}$, $V_{\rm{Li}}^{-}$,
Li${}_{\rm{Fe}}^{-}$, and Fe${}_{\rm{Li}}^{+}$ are found to have low formation
energies and are hence expected to occur in LiFePO4 with high concentrations.
The dominant point defects in the samples are likely to exist in the form of neutral defect complexes such as $V_{\rm{Li}}^{0}$, Li${}_{\rm{Fe}}^{0}$,
Fe${}_{\rm{Li}}^{+}$-$V_{\rm{Li}}^{-}$, or
Fe${}_{\rm{Li}}^{+}$-Li${}_{\rm{Fe}}^{-}$. The energy landscape of these
defects is, however, sensitive to the choice of atomic chemical potentials
which represent experimental conditions during synthesis. This explains the
conflicting experimental data on defect formation in LiFePO4. Our results also
raise the necessity of having prior knowledge of the native defects in LiFePO4
samples before any useful interpretations of the measured transport data can
be made. We suggest that one can suppress or enhance certain native defects in
LiFePO4 via tuning the experimental conditions during synthesis, and thereby
produce samples with tailored defect concentrations for optimal performance.
The electrical conductivity may be enhanced through an increase of the hole polaron and lithium vacancy concentrations via a combination of defect-controlled
synthesis, incorporation of suitable electrically active impurities that can
shift the Fermi level, and post-synthesis treatments.
###### Acknowledgements.
The authors acknowledge helpful discussions with S. C. Erwin and J. Allen, and
the use of computing facilities at the DoD HPC Centers. K.H. was supported by
Naval Research Laboratory through Grant No. NRL-N00173-08-G001, and M.J. by
the Office of Naval Research.
## References
* Padhi et al. (1997) Padhi, A. K.; Nanjundaswamy, K. S.; Goodenough, J. B. _J. Electrochem. Soc._ 1997, _144_ , 1188–1194.
* Ellis et al. (2010) Ellis, B. L.; Lee, K. T.; Nazar, L. F. _Chem. Mater._ 2010, _22_ , 691–714.
* Manthiram (2011) Manthiram, A. _J. Phys. Chem. Lett._ 2011, _2_ , 176–184.
* Delacourt et al. (2005) Delacourt, C.; Laffont, L.; Bouchet, R.; Wurm, C.; Leriche, J.-B.; Morcrette, M.; Tarascon, J.-M.; Masquelier, C. _J. Electrochem. Soc._ 2005, _152_ , A913–A921.
* Huang et al. (2001) Huang, H.; Yin, S.-C.; Nazar, L. F. _Electrochem. Solid-State Lett._ 2001, _4_ , A170–A172.
* Ellis et al. (2007) Ellis, B.; Kan, W. H.; Makahnouk, W. R. M.; Nazar, L. F. _J. Mater. Chem._ 2007, _17_ , 3248–3254.
* Chung et al. (2002) Chung, S.; Bloking, J.; Chiang, Y. _Nat. Mater._ 2002, _1_ , 123–128.
* Ravet et al. (2003) Ravet, N.; Abouimrane, A.; Armand, M. _Nat. Mater._ 2003, _2_ , 702.
* Wagemaker et al. (2008) Wagemaker, M.; Ellis, B. L.; Lützenkirchen-Hecht, D.; Mulder, F. M.; Nazar, L. F. _Chem. Mater._ 2008, _20_ , 6313–6315.
* Zhou et al. (2004) Zhou, F.; Kang, K.; Maxisch, T.; Ceder, G.; Morgan, D. _Solid State Commun._ 2004, _132_ , 181–186.
* Maxisch et al. (2006) Maxisch, T.; Zhou, F.; Ceder, G. _Phys. Rev. B_ 2006, _73_, 104301.
* Ellis et al. (2006) Ellis, B.; Perry, L. K.; Ryan, D. H.; Nazar, L. F. _J. Am. Chem. Soc._ 2006, _128_ , 11416–11422.
* Zaghib et al. (2007) Zaghib, K.; Mauger, A.; Goodenough, J. B.; Gendron, F.; Julien, C. M. _Chem. Mater._ 2007, _19_ , 3740–3747.
* Yang et al. (2002) Yang, S. F.; Song, Y. N.; Zavalij, P. Y.; Whittingham, M. S. _Electrochem. Commun._ 2002, _4_ , 239–244.
* Maier and Amin (2008) Maier, J.; Amin, R. _J. Electrochem. Soc._ 2008, _155_ , A339–A344.
* Chen et al. (2008) Chen, J. J.; Vacchio, M. J.; Wang, S. J.; Chernova, N.; Zavalij, P. Y.; Whittingham, M. S. _Solid State Ionics_ 2008, _178_ , 1676–1693.
* Chung et al. (2008) Chung, S.-Y.; Choi, S.-Y.; Yamamoto, T.; Ikuhara, Y. _Phys. Rev. Lett._ 2008, _100_ , 125502.
* Axmann et al. (2009) Axmann, P.; Stinner, C.; Wohlfahrt-Mehrens, M.; Mauger, A.; Gendron, F.; Julien, C. M. _Chem. Mater._ 2009, _21_ , 1636–1644.
* Chung et al. (2010) Chung, S.-Y.; Kim, Y.-M.; Choi, S.-Y. _Adv. Funct. Mater._ 2010, _20_ , 4219–4232.
* Morgan et al. (2004) Morgan, D.; Van der Ven, A.; Ceder, G. _Electrochem. Solid-State Lett._ 2004, _7_ , A30–A32.
* Islam et al. (2005) Islam, M. S.; Driscoll, D. J.; Fisher, C. A. J.; Slater, P. R. _Chem. Mater._ 2005, _17_ , 5085–5092.
* Fisher et al. (2008) Fisher, C. A. J.; Prieto, V. M. H.; Islam, M. S. _Chem. Mater._ 2008, _20_ , 5907–5915.
* Adams (2010) Adams, S. _J. Solid State Electrochem._ 2010, _14_ , 1787–1792.
* Malik et al. (2010) Malik, R.; Burch, D.; Bazant, M.; Ceder, G. _Nano Letters_ 2010, _10_ , 4123–4127.
* Anisimov et al. (1991) Anisimov, V. I.; Zaanen, J.; Andersen, O. K. _Phys. Rev. B_ 1991, _44_ , 943–954.
* Anisimov et al. (1993) Anisimov, V. I.; Solovyev, I. V.; Korotin, M. A.; Czyżyk, M. T.; Sawatzky, G. A. _Phys. Rev. B_ 1993, _48_ , 16929–16934.
* Liechtenstein et al. (1995) Liechtenstein, A. I.; Anisimov, V. I.; Zaanen, J. _Phys. Rev. B_ 1995, _52_ , R5467–R5470.
* Perdew et al. (1996) Perdew, J. P.; Burke, K.; Ernzerhof, M. _Phys. Rev. Lett._ 1996, _77_ , 3865–3868.
* Blöchl (1994) Blöchl, P. E. _Phys. Rev. B_ 1994, _50_ , 17953–17979.
* Kresse and Joubert (1999) Kresse, G.; Joubert, D. _Phys. Rev. B_ 1999, _59_ , 1758–1775.
* Kresse and Hafner (1993) Kresse, G.; Hafner, J. _Phys. Rev. B_ 1993, _47_ , 558–561.
* Kresse and Furthmüller (1996) Kresse, G.; Furthmüller, J. _Phys. Rev. B_ 1996, _54_ , 11169–11186.
* Kresse and Furthm ller (1996) Kresse, G.; Furthm ller, J. _Comput. Mat. Sci._ 1996, _6_ , 15–50.
* Zhou et al. (2004) Zhou, F.; Cococcioni, M.; Marianetti, C.; Morgan, D.; Ceder, G. _Phys. Rev. B_ 2004, _70_ , 235121.
* Monkhorst and Pack (1976) Monkhorst, H. J.; Pack, J. D. _Phys. Rev. B_ 1976, _13_ , 5188–5192.
* Henkelman et al. (2000) Henkelman, G.; Uberuaga, B. P.; Jónsson, H. _J. Chem. Phys._ 2000, _113_ , 9901–9904.
* Rousse et al. (2003) Rousse, G.; Rodriguez-Carvajal, J.; Patoux, S.; Masquelier, C. _Chem. Mater._ 2003, _15_ , 4082–4090.
* Van de Walle and Neugebauer (2004) Van de Walle, C. G.; Neugebauer, J. _J. Appl. Phys._ 2004, _95_ , 3851–3879.
* Janotti and Van de Walle (2009) Janotti, A.; Van de Walle, C. G. _Rep. Prog. Phys._ 2009, _72_ , 126501.
* Van de Walle and Janotti (2010) Van de Walle, C. G.; Janotti, A. _IOP Conf. Ser.: Mater. Sci. Eng._ 2010, _15_ , 012001.
* Ong et al. (2008) Ong, P. S.; Wang, L.; Kang, B.; Ceder, G. _Chem. Mater._ 2008, _20_ , 1798–1807.
* (42) Note that we have applied a shift of 1.36 eV per O2 molecule to $E_{{\rm O}_{2}}^{\rm tot}$ as suggested by Wang et al. [Wang, L.; Maxisch, T.; Ceder, G. Phys. Rev. B 2006, 73, 195107] and discussed in Ref.Ong et al. (2008). This constant shift is to correct for the O2 binding energy error and the error in charge transfer ($d$$\rightarrow$$p$) energy due to improper treatment of correlation.
* Peles and Van de Walle (2007) Peles, A.; Van de Walle, C. G. _Phys. Rev. B_ 2007, _76_ , 214101\.
* Hoang and Van de Walle (2009) Hoang, K.; Van de Walle, C. G. _Phys. Rev. B_ 2009, _80_ , 214109\.
* Wilson-Short et al. (2009) Wilson-Short, G. B.; Janotti, A.; Hoang, K.; Peles, A.; Van de Walle, C. G. _Phys. Rev. B_ 2009, _80_ , 224102.
* Van de Walle et al. (2010) Van de Walle, C. G.; Lyons, J. L.; Janotti, A. _phys. status solidi (a)_ 2010, _207_ , 1024–1036.
* Catlow, C. R. A.; Sokol, A. A.; Walsh, A. (2011) Catlow, C. R. A.; Sokol, A. A.; Walsh, A., _Chem. Commun._ 2011, _47_ , 3386–3388.
* Shluger and Stoneham (1993) Shluger, A. L.; Stoneham, A. M. _J. Phys.: Condens. Matter_ 1993, _5_ , 3049–3086.
* Stoneham et al. (2007) Stoneham, A. M.; Gavartin, J.; Shluger, A. L.; Kimmel, A. V.; Muñoz Ramo, D.; Rønnow, H. M.; Aeppli, G.; Renner, C. _J. Phys.: Condens. Matter_ 2007, _19_ , 255208.
* Nishimura et al. (2008) Nishimura, S.-i.; Kobayashi, G.; Ohoyama, K.; Kanno, R.; Yashima, M.; Yamada, A. _Nature Mater._ 2008, _7_ , 707–711.
* Amin et al. (2008) Amin, R.; Maier, J.; Balaya, P.; Chen, D. P.; Lin, C. T. _Solid State Ionics_ 2008, _179_ , 1683–1687.
* Balluffi et al. (2005) Balluffi, R. W.; Allen, S. M.; Carter, W. C.; Kemper, R. A. _Kinetics of Materials_ ; John Wiley & Sons: New Jersey, 2005.
* Li et al. (2008) Li, J.; Yao, W.; Martin, S.; Vaknin, D. _Solid State Ionics_ 2008, _179_ , 2016–2019.
* Amin and Maier (2008) Amin, R.; Maier, J. _Solid State Ionics_ 2008, _178_ , 1831–1836.
|
arxiv-papers
| 2011-05-17T21:13:27 |
2024-09-04T02:49:18.836025
|
{
"license": "Public Domain",
"authors": "Khang Hoang and Michelle Johannes",
"submitter": "Michelle Johannes",
"url": "https://arxiv.org/abs/1105.3492"
}
|
1105.3509
|
11institutetext: Institute of High Energy Physics, Chinese Academy of
Sciences, 100049 Beijing, China 22institutetext: Theoretical Physics Center
for Science Facilities, Chinese Academy of Sciences, 100049 Beijing, China
# An Inflationary Scenario Taking into Account Possible Dark Energy Effects in the Early Universe
Zhe Chang 1122 Ming-Hua Li 1122 Sai Wang 1122 Xin Li 1122
(Received: date / Revised version: date)
###### Abstract
We investigate the possible effect of cosmological-constant type dark energy during the inflation period of the early universe. This is accommodated by a new dispersion relation in de Sitter space. The modified inflation model of a minimally-coupled scalar field is still able to yield an observation-compatible scale-invariant primordial spectrum, while also having the potential to generate a spectrum with lower power at large scales. A qualitative match to the WMAP 7-year data is presented. We obtain an $\Omega_{\Lambda}$ of the same order as that in the $\Lambda$-CDM model. Possible relations between the de Sitter scenario and Doubly Special Relativity (DSR) are also discussed.
###### pacs:
98.80.CqInflationary universe and 98.80.JkMathematical and relativistic
aspects of cosmology and 95.36.+xDark energy
## 1 Introduction
The anisotropy of the Cosmic Microwave Background (CMB) radiation, which was first discovered by the NASA Cosmic Background Explorer (COBE) satellite in the early 1990s, has been confirmed by subsequent balloon experiments and more recently by the Wilkinson Microwave Anisotropy Probe (WMAP) 7-year results wmap7 0067-0049-192-2-18 . The temperature fluctuations of the CMB are believed by most cosmologists to be generated as quantum fluctuations of a weakly self-coupled scalar matter field $\phi$, which later leads to the exponential inflation of the early universe Star -Mukh2 . The WMAP 7-year results provide a strong confirmation of the inflationary paradigm. With the prestigious $\Lambda$-CDM model (also known as the "concordance model" or the "cosmological standard model") and the basic set of six cosmological parameters H2010- Larson2011- , one can now give an unprecedentedly good global description of the universe, with an accuracy down to the $10\%$ level.
However, various "anomalies" have been reported in the CMB data. One of them is the low-$\ell$ multipole controversy J2003-small Efst Kurki-Suonio2010 : the observed values of $C_{\ell}$ for low $\ell$, especially the quadrupole component $\ell=2$, were reported to be smaller than the standard-model prediction. This issue has been widely investigated over the last decade Kowaski2003- ; Feng2003- ; C2003-Suppressing ; Y2005-Possible ; Kofman:1989ed ; Hodges:1989dw ; Rodrigues2008- ; Copi2010 . Surprisingly, H. Liu and T.-P. Li Li2010- Li2011- claimed that the CMB quadrupole released by the WMAP team is almost completely artificial and that the real quadrupole of the CMB anisotropy should be near zero. C. Bennett et al. Bennett2011 recently examined the power spectrum data from the WMAP 7-year release carefully with respect to the $\Lambda$-CDM model. On the contrary, they reported that the amplitude of the quadrupole is well within the expected $95\%$ confidence range and therefore is not anomalously low.
Considering that the WMAP results have shown strong supporting evidence for the concordance cosmological model, the remarkable agreement between the theory and the observational data should not be taken lightly. In view of the work listed above, however, it may be prudent to leave the low-$\ell$ multipole issue as an open question for further investigation and observation.
Besides, there is another problem that needs to be considered in most inflation models: the "trans-Planckian" problem. Since the inflation period has to last long enough to generate $60$ to $65$ e-foldings in order to solve the flatness and horizon problems of the universe, the wavelengths corresponding to the present large-scale structure must once have been smaller than the Planck length $\ell_{P}$, at which these theories break down. A similar problem appears in black hole physics: the calculation of Hawking radiation becomes ill-defined if one traces the modes infinitely far into the past. To address this issue, two major approaches have been proposed by black hole physicists. One is to apply the stringy space-time uncertainty relation Ho to the fluctuation modes in order to modify the boundary condition Brand2003- . The other is to mimic quantum gravity effects by replacing the linear dispersion relation $\omega^{2}=k^{2}$ (for photons) with a non-standard one derived from a quantum gravity theory J2001-Trans-Planckian .
On the other hand, some cosmologists believe that our universe in its early history can actually be approximated by a de Sitter one. The connection between a de Sitter space and the early universe lies in one of the hottest issues in modern cosmology: the cosmological constant and dark energy Peebles2003- . The cosmological constant $\Lambda$, also taken as one form of dark energy, gives rise to the famous $\Lambda$-CDM model. Although the universe after inflation is described well by the standard cosmological model, the physical implications of a $\Lambda$-type dark energy during the inflation period are rarely discussed. Its possible effects are worth considering for a unified scenario of cosmological theory. Moreover, a de Sitter universe is a cosmological solution to Einstein's field equations of general relativity with a positive cosmological constant $\Lambda$. It models our universe as a spatially flat one and neglects ordinary matter. The dynamics of the universe are dominated by the cosmological constant $\Lambda$, which is thought to correspond to dark energy in our universe. All this gives a physically fitting description of the universe at about a time $t=10^{-33}$ seconds after the fiducial Big Bang.
In this paper, we take into account the effect of a cosmological-constant type dark energy during the inflation period in the early universe. We try to construct a unified scenario of the $\Lambda$-CDM and the inflation model. This is accommodated by a new dispersion relation, namely a dispersion relation in de Sitter space. It stems from the kinematics of free particles in a four dimensional de Sitter space. The CMB TT spectrum under the influence of such a form of dark energy during inflation is presented. We find that for certain parameter values, the modified inflation model yields an angular spectrum with lower power at large scales. An $\Omega_{\Lambda}$ of the same order as that in the $\Lambda$-CDM model is obtained. The relation with Doubly Special Relativity (DSR) is also discussed. All the numerical results in this paper are qualitative. A full Bayesian analysis of the data, like that carried out by the WMAP team, would be necessary to prove that the model is actually statistically more consistent with the observations Bennett2011 .
The rest of the paper is organized as follows. In Section 2, a remarkable dispersion relation in a four dimensional de Sitter space is introduced to investigate possible effects of the $\Lambda$-type dark energy on the single-scalar-field inflation model. In Section 3, we obtain the corresponding primordial spectrum in the modified inflationary scenario. In Section 4, we compare the corresponding CMB angular power spectrum with the WMAP 7-year data; an observation-compatible result is obtained in a qualitative manner. Conclusions and discussions are presented in Section 5, where the relation between our model and Doubly Special Relativity (DSR) is also discussed.
## 2 Dispersion Relation in de Sitter Space
De Sitter space, first discovered by Willem de Sitter in the 1920s, is a
maximally symmetric space in mathematics. It is a space with constant positive
curvature. In the language of general relativity, de Sitter space is the
maximally symmetric, vacuum solution of Einstein’s field equations with a
positive (or physically repulsive) cosmological constant $\Lambda$. It
corresponds to a positive vacuum energy density with negative pressure of our
universe, i.e. one form of dark energy.
A four dimensional de Sitter space (three space dimensions plus one time dimension) describes a cosmological model for the physical universe. It can be realized as a four-dimensional pseudo-sphere embedded in a five dimensional Minkowski flat space with coordinates $\xi_{\mu}$ ($\mu=0,1,2,3,4$), to wit S2004-Cosmic G2003-Beltrami
$-\xi_{0}^{2}+\xi_{1}^{2}+\xi_{2}^{2}+\xi_{3}^{2}+\xi_{4}^{2}=\frac{1}{K}=R^{2}\,,\qquad ds^{2}=-d\xi_{0}^{2}+d\xi_{1}^{2}+d\xi_{2}^{2}+d\xi_{3}^{2}+d\xi_{4}^{2}\,,$ (1)
where $K$ and $R$ denote, respectively, the Riemannian curvature and the radius of the de Sitter spacetime. For mathematical reasons, we adopt the _Beltrami coordinates_ given by
$x_{\mu}\equiv R\frac{\xi_{\mu}}{\xi_{4}}\,,\qquad\mu=0,1,2,3,~\textmd{and}~\xi_{4}\not=0\,.$ (2)
In the Beltrami de Sitter (BdS) space, the line element in (1) can now be rewritten as
$\sigma\equiv\sigma(x,x)=1-K\eta_{\mu\nu}x^{\mu}x^{\nu}\;(>0)\,,\qquad ds^{2}=\left(\frac{\eta^{\mu\nu}}{\sigma}+\frac{K\eta^{\mu\alpha}\eta^{\nu\beta}x_{\alpha}x_{\beta}}{\sigma^{2}}\right)dx_{\mu}dx_{\nu}\,,\qquad\mu,\nu=0,1,2,3\,,$ (3)
where $\eta_{\mu\nu}={\rm diag}(-1,+1,+1,+1)$ is the Minkowski metric.
The five dimensional angular momentum $M_{\mu\nu}$ of a free particle with
mass $m_{0}$ is defined as
$M_{\mu\nu}\equiv
m_{0}\left(\xi_{\mu}\frac{d\xi_{\nu}}{ds}-\xi_{\nu}\frac{d\xi_{\mu}}{ds}\right)\
,~{}~{}~{}~{}~{}\mu=0,1,2,3,4\ ,$ (4)
where $s$ is the affine parameter along the geodesic. In the de Sitter spacetime there is no translation invariance, so one cannot introduce a momentum vector. However, at least formally, one may define a counterpart of the four dimensional momentum $P_{\mu}$ of a free particle in the de Sitter spacetime:
$P_{\mu}\equiv R^{-1}M_{4\mu}=m_{0}\sigma^{-1}\frac{dx_{\mu}}{ds}\
,~{}~{}~{}~{}~{}\mu,\nu=0,1,2,3\,.$ (5)
For the rest of the article, the Greek indices (i.e. $\mu$, $\nu$, $\alpha$,
$\beta$, etc.), if not specifically pointed out, run from $0$ to $3$. The
Latin indices (i.e. $i$, $j$, $k$, etc.) run from $1$ to $3$.
In the same manner, the counterparts of the four dimensional angular momentum
$J_{\mu\nu}$ can be assigned as
$J_{\mu\nu}\equiv
M_{\mu\nu}=x_{\mu}P_{\nu}-x_{\nu}P_{\mu}=m_{0}\sigma^{-1}\left(x_{\mu}\frac{dx_{\nu}}{ds}-x_{\nu}\frac{dx_{\mu}}{ds}\right)\,.$
(6)
Under the de Sitter transformations for a free particle, an invariant (which
turns out to be just ${m^{2}_{0}}$) can be constructed in terms of the angular
momentum $M^{\mu\nu}$ as
$\begin{array}[]{c}m^{2}_{0}=\displaystyle\frac{\lambda}{2}M_{\mu\nu}M^{\mu\nu}=E^{2}-{\bf
P}^{2}+\frac{K}{2}J_{ij}J^{ij}~{},\\\\[14.22636pt] E=P_{0}~{},~{}~{}~{}~{}{\bf
P}=(P_{1},~{}P_{2},~{}P_{3})~{}.\end{array}$ (7)
To discuss the quantum kinematics of a free particle, it is natural to realize
the five dimensional angular momentum $M_{\mu\nu}$ as the infinitesimal
generators $\hat{M}_{\mu\nu}$ of the de Sitter group $SO(1,4)$, to wit
$\hat{M}_{\mu\nu}\equiv-i\left(\xi_{\mu}\frac{\partial}{\partial\xi^{\nu}}-\xi_{\nu}\frac{\partial}{\partial\xi^{\mu}}\right)\
,~{}~{}~{}~{}~{}\mu,\nu=0,1,2,3\,.$ (8)
Since $\xi_{0}(\equiv\sigma(x,x)^{-1/2}x_{0})$ is invariant under the spatial
transformations of $x_{\alpha}$ acted by the subgroup $SO(4)$ of $SO(4,1)$,
two space-like events are considered to be _simultaneous_ if they satisfy
$\sigma(x,~{}x)^{-\frac{1}{2}}x^{0}=\xi^{0}={\rm constant}~{}.$ (9)
Therefore, it is convenient to discuss physics of the de Sitter spacetime in
the coordinate $(\xi^{0},~{}x^{\alpha})$. With the de Sitter invariant (or the
Casimir operator) in the relation (7), one can write down the _equation of
motion of a free scalar particle with mass $m_{0}$ in the de Sitter space_ as
$\left(\frac{K}{2}\hat{M}_{\mu\nu}\hat{M}^{\mu\nu}-m^{2}_{0}\right)\phi(\xi_{0},x_{\alpha})=0~{},$
(10)
where the $\phi(\xi_{0},x_{\alpha})$ denotes the scalar field.
A laborious but straightforward process of solving the equation (10) can be found in S2004-Cosmic , from which one obtains the _dispersion relation_ for a free scalar particle in the de Sitter space:
$E^{2}=m_{0}^{2}+\varepsilon^{\prime 2}+K(2n+l)(2n+l+2)\ ,$ (11)
where $n$ and $l$ refer to the radial and the angular quantum number
respectively. For massless particles such as photons, the above dispersion
relation becomes
$\begin{array}[]{c}\omega^{2}=k^{2}+\varepsilon^{*2}_{\gamma}\
,\\\\[14.22636pt]
\varepsilon^{*}_{\gamma}\equiv\sqrt{K(2n_{\gamma}+l_{\gamma})(2n_{\gamma}+l_{\gamma}+2)}\
,\end{array}$ (12)
where $n_{\gamma}$ and $l_{\gamma}$ denote the radial and the angular quantum number, respectively, while $\omega$ and $k$ are the frequency and the wavenumber.
## 3 The Modified Primordial Spectrum
Given the dispersion relation (12), we are now in the position to calculate
the primordial power spectrum ${\cal P}_{\delta\phi}(k)$, from which later the
anisotropy spectrum of the CMB is obtained.
Let us denote the inhomogeneous perturbation to the inflaton field by
$\delta\phi({\bf x},t)$. In the Fourier representation, the evolution equation
of the primordial perturbation $\delta\sigma_{\bf k}$, which is defined as
$\delta\sigma_{\bf k}\equiv a\delta\phi_{\mathbf{k}}$, reads A2002-
$\delta\sigma^{\prime\prime}_{\bf
k}+\left(k^{2}_{\textmd{eff}}-\frac{2}{\tau^{2}}\right)\delta\sigma_{\bf k}=0\
,~{}~{}~{}~{}~{}k^{2}_{\textmd{eff}}\equiv
k^{2}+\varepsilon^{*2}_{\gamma}~{}.$ (13)
A prime indicates differentiation with respect to the conformal time $\tau$
Dodelson2003- .
The equation (13) has an exact particular solution:
$\delta\sigma_{\bf k}=\frac{e^{-ik_{\textmd{eff}}\tau}}{\sqrt{2k_{\textmd{eff}}}}\left(1+\frac{i}{k_{\textmd{eff}}\tau}\right)=\frac{e^{-i\sqrt{k^{2}+\varepsilon^{*2}_{\gamma}}\,\tau}}{\sqrt{2}\left(k^{2}+\varepsilon^{*2}_{\gamma}\right)^{1/4}}\left(1+\frac{i}{\sqrt{k^{2}+\varepsilon^{*2}_{\gamma}}\,\tau}\right)\,.$ (14)
With the redefinition $\delta\phi_{\textbf{k}}\equiv\delta\sigma_{\textbf{k}}/a$, one has
$\left|\delta\phi_{\textbf{k}}\right|=\left|\frac{1}{a}\cdot\frac{e^{-i\sqrt{k^{2}+\varepsilon^{*2}_{\gamma}}\,\tau}}{\sqrt{2}\left(k^{2}+\varepsilon^{*2}_{\gamma}\right)^{1/4}}\left(1+\frac{i}{\sqrt{k^{2}+\varepsilon^{*2}_{\gamma}}\,\tau}\right)\right|=\frac{1}{\sqrt{2}\,a\left(k^{2}+\varepsilon^{*2}_{\gamma}\right)^{1/4}}\sqrt{1+\frac{1}{\left(k^{2}+\varepsilon^{*2}_{\gamma}\right)\tau^{2}}}\,.$ (15)
The power spectrum of $\delta\phi_{\textbf{k}}$, which is denoted by ${\cal
P}_{\delta\phi}(k)$, is defined as A2002-
$\langle 0|\delta\phi^{*}_{{\bf k}_{1}}\delta\phi_{{\bf
k}_{2}}|0\rangle\equiv\delta^{(3)}\left({\bf k}_{1}-{\bf
k}_{2}\right)\,\frac{2\pi^{2}}{k^{3}}\,{\cal P}_{\delta\phi}(k)\ .$ (16)
For $\textbf{k}_{1}=\textbf{k}_{2}=\textbf{k}$, one has
${\cal
P}_{\delta\phi}(k)\equiv\frac{k^{3}}{2\pi^{2}}\,|\delta\phi_{\textbf{k}}|^{2}\
.$ (17)
Given the results (15) and $a=-1/H\tau$ A2002- , one obtains the primordial
power spectrum of the perturbation $\delta\phi({\bf x},t)$,
${\cal P}_{\delta\phi}(k)=\frac{k^{3}}{2\pi^{2}}\cdot\frac{1}{2a^{2}\sqrt{k^{2}+\varepsilon^{*2}_{\gamma}}}\left(1+\frac{1}{\left(k^{2}+\varepsilon^{*2}_{\gamma}\right)\tau^{2}}\right)=\frac{H^{2}}{4\pi^{2}}\cdot\frac{k^{3}}{\left(k^{2}+\varepsilon^{*2}_{\gamma}\right)^{3/2}}+\frac{H^{2}}{4\pi^{2}}\cdot\frac{k}{\sqrt{k^{2}+\varepsilon^{*2}_{\gamma}}}\cdot k^{2}\tau^{2}\,.$ (18)
According to the super-horizon criterion, the wavelength $\lambda$ of the
primordial perturbation is beyond the horizon if
$-k\tau=\frac{k}{aH}\ll 1\ .$ (19)
Therefore, given the super-horizon condition (19), the second term at the
right hand side of the expression (18) becomes sufficiently small when
compared to the first term and can be neglected. The power spectrum is then
given approximately as
${\cal
P}_{\delta\phi}(k)\simeq\frac{H^{2}}{4\pi^{2}}\cdot\frac{k^{3}}{\left({k^{2}+\varepsilon^{*2}_{\gamma}}\right)^{3/2}}\
.$ (20)
For large $k$ (i.e. high multipole index $\ell$), one recovers the usual
scale-invariant primordial power spectrum of the perturbation
${\cal
P}_{\delta\phi}(k)=\frac{H^{2}}{4\pi^{2}}\cdot\frac{k^{3}}{\left[k^{2}\left(1+\frac{\varepsilon^{*2}_{\gamma}}{k^{2}}\right)\right]^{3/2}}\simeq\frac{H^{2}}{4\pi^{2}}\cdot\frac{k^{2}}{k^{2}+\frac{3}{2}\varepsilon^{*2}_{\gamma}}~{}\sim~{}k^{0}\,.$
(21)
A plot of the primordial spectrum (20) is shown in Fig.1.
Figure 1: The non-dimensional primordial power spectrum ${\cal
P}_{\delta\phi}(k)$. It experiences a cutoff at large scales for certain
values of $\varepsilon^{*}_{\gamma}$. It is also shown in the bottom panel of
Fig.2.
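To make the infrared cutoff concrete, the following minimal Python sketch (an illustration added here, not part of the original analysis; the overall amplitude $H^{2}/4\pi^{2}$ is set to one and the value of $\varepsilon^{*}_{\gamma}$ is taken from the fits discussed in Section 4) compares the modified spectrum (20) with its scale-invariant limit (21):
```python
import numpy as np

eps = 2.5e-4                      # epsilon*_gamma in Mpc^-1, one of the values used in Section 4
k = np.logspace(-5, -1, 9)        # comoving wavenumbers in Mpc^-1

P_modified = k**3 / (k**2 + eps**2)**1.5   # Eq. (20) with H^2/(4 pi^2) = 1
P_flat = np.ones_like(k)                   # scale-invariant limit, Eq. (21) at large k

for ki, pm in zip(k, P_modified):
    print(f"k = {ki:.1e} Mpc^-1   P_modified/P_flat = {pm:.2e}")
```
For $k\gg\varepsilon^{*}_{\gamma}$ the ratio approaches unity, while for $k\lesssim\varepsilon^{*}_{\gamma}$ the power is strongly suppressed, which is the large-scale cutoff shown in Fig.1.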
## 4 The Resulting CMB Spectrum
The temperature fluctuations of the CMB are due to the quantum fluctuations of
a weakly self-coupled scalar matter field $\phi$ , which later results in an
exponential inflation of the early universe Star -Mukh2 . The anisotropic and
inhomogeneous fluctuations $\delta\phi(\textbf{x},t)$ of this scalar field
result in the perturbations of the comoving curvature $\mathcal{R}$. During
inflation, the wavelengths $\lambda$ of these perturbations are then stretched
exponentially out of the horizon $\mathcal{H}$. As time goes by, the horizon
of the universe grows. At some time after inflation, these once “frozen”
perturbations finally reenter the horizon and causal region. For the rest of
the time, they evolve according to the Poisson equation and the collisionless
Boltzmann equation, giving rise to the extant matter and temperature
fluctuations. The primordial perturbation of the photons at the surface of last scattering is the one responsible for the anisotropic power spectrum of the CMB radiation we observe today.
The power spectrum of the comoving curvature perturbation $\mathcal{R}$ is
usually denoted by $\mathcal{P}_{\mathcal{R}}(k)$, which is given as A2002-
$\mathcal{P}_{\mathcal{R}}(k)=\frac{H^{2}}{\dot{\phi}^{2}}~{}{\cal
P}_{\delta\phi}(k)\equiv A^{2}_{s}\left(\frac{k}{aH}\right)^{n_{s}-1}\ ,$ (22)
where $A_{s}$ is the normalized amplitude and $n_{s}$ is the scalar spectral
index. In the slow roll inflation model, $n_{s}-1=2\eta-6\epsilon$. The
anisotropy spectrum of the CMB, indicated by the coefficient $C_{\ell}$ , is
obtained by a line-of-sight integration over the spectrum
$\mathcal{P}_{\mathcal{R}}(k)$ with $\Delta_{\ell}(k,\tau)$, which is the solution of the collisionless Boltzmann equation for the CMB photons, to wit C1995- Seljak
$C_{\ell}=4\pi\int
d^{3}k~{}\mathcal{P}_{\mathcal{R}}(k)|\Delta_{\ell}(k,\tau)|^{2}\ .$ (23)
The solution $\Delta_{\ell}(k,\tau)$ is obtained by numerically solving the
coupled Boltzmann equations of the CMB photons under the adiabatic initial
conditions Bucher2000- .
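In the text, Eq. (23) is evaluated with the full Boltzmann hierarchy through CAMB. Purely as a rough illustration (not the method used for Fig.2 and Fig.3), one can replace $\Delta_{\ell}(k,\tau)$ by a Sachs-Wolfe-type kernel $j_{\ell}(k\chi_{*})$, with an assumed comoving distance to last scattering $\chi_{*}$, to see how the infrared cutoff in (20) depresses only the lowest multipoles:
```python
import numpy as np
from scipy.special import spherical_jn

eps = 2.5e-4         # epsilon*_gamma in Mpc^-1
chi_star = 1.4e4     # assumed comoving distance to last scattering, in Mpc

k = np.logspace(-5, -2, 6000)            # integration grid in Mpc^-1
P_mod = k**3 / (k**2 + eps**2)**1.5      # modified primordial spectrum, Eq. (20)
P_flat = np.ones_like(k)                 # scale-invariant reference spectrum

def C_ell(ell, spec):
    # toy version of Eq. (23): C_ell proportional to  int dk/k  P(k) j_ell(k chi_*)^2
    integrand = spec / k * spherical_jn(ell, k * chi_star) ** 2
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))  # trapezoid rule

for ell in (2, 5, 10, 30):
    print(ell, C_ell(ell, P_mod) / C_ell(ell, P_flat))
```
With these assumed numbers the lowest multipoles come out noticeably suppressed while $\ell\gtrsim 30$ is nearly unchanged, qualitatively mirroring the behavior of the infrared-cutoff curves in Fig.2.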
Figure 2: The CMB angular power spectrum $C_{\ell}$ vs. ${\ell}$, together with the primordial power spectrum ${\cal P}_{\delta\phi}(k)$ vs. $k/(aH)$. Black dots with error bars represent (part of) the TT data of the WMAP 7-year results. The black solid curve in the top panel is the theoretical prediction of the standard $\Lambda$-CDM model with the usual scale-invariant primordial power spectrum ${\cal P}_{\delta\phi}(k)=H^{2}/4\pi^{2}$ (i.e. the spectral index $n_{\textmd{s}}=1$), and with the cosmological parameters fixed at $h=0.73$, $\Omega_{\textmd{b}}h^{2}=0.0226$, $\Omega_{\textmd{cdm}}h^{2}=0.112$, $\Omega_{\Lambda}=0.728$, $\tau=0.087$. The dash-dotted, dashed and dotted curves are obtained with the same model, but with the infrared-cutoff primordial power spectrum (20). From top to bottom they correspond to the theoretical result with $\varepsilon^{*}_{\gamma}=7\times 10^{-5}$ (dash-dotted), $2.5\times 10^{-4}$ (dashed), and $7.6\times 10^{-4}$ (dotted) $\textmd{Mpc}^{-1}$. For $\varepsilon^{*}_{\gamma}=2.5\times 10^{-4}$ $\textmd{Mpc}^{-1}$, the cosmic age is $13.79$ Gyr and the value of $\sigma_{8}$ is $0.809$. The asymptotic behavior of the primordial spectrum ${\cal P}_{\delta\phi}(k)$ at large scales is shown in the bottom panel.
We use a modified version of the publicly available Code for Anisotropies in the Microwave Background (CAMB) CAMB to compute the CMB temperature-temperature (TT) spectra with various $\varepsilon^{*}_{\gamma}$ in the modified primordial spectrum (20). The numerical results are shown in Fig.2
and Fig.3. The black solid curve in Fig.2 indicates the standard scale-
invariant primordial spectrum model with the same cosmological parameters as
those obtained from the best-fit to the WMAP 7-year data. The dashed and
dotted curves represent the theoretical predictions of our model. The cosmic
variance contributions to the errors in both models of inflation (the standard
one and ours) are shown respectively by the dash-dotted upper and lower
contours in the two panels of Fig.3. For a simple numerical analysis, we
arbitrarily choose the value of $\varepsilon^{*}_{\gamma}$ to be $7\times
10^{-5}$, $2.5\times 10^{-4}$, and $7.6\times 10^{-4}$ $\textmd{Mpc}^{-1}$.
The corresponding $\chi^{2}$ are respectively $1530.83063$, $1534.18154$, and
$1560.87900$, while the standard inflation model gives $\chi^{2}=1380.16810$
(in both cases for $1200$ data points). For
$\varepsilon^{*}_{\gamma}=2.5\times 10^{-4}$ $\textmd{Mpc}^{-1}$, the age of
the universe is $13.79$ Gyr and the value of $\sigma_{8}$ is $0.809$. Both of
them are consistent with the values presented in pdg2009 .
From Fig.2, we see that the lower CMB quadrupole is related to the value of
$\varepsilon^{*}_{\gamma}$. It implies that in some sense the lower CMB
quadrupole encodes the information of the geometric properties of the de
Sitter space. A zero $\varepsilon^{*}_{\gamma}$ will land us back at the
standard scale-invariant spectrum model. In order to obtain a more definite
result, more data such as COBE COBE , BOOMERANG Netterfield01 , MAXIMA
Hanany00 , DASI Halverson01 , VSA VSA3 and CBI Pearson02 CBIdata are needed
for further numerical studies. Careful Monte Carlo simulations or a full Bayesian analysis would also be necessary for drawing a firm quantitative conclusion Lewis2002-Monte , but these are beyond the scope of the present work. A simple numerical-recipe-level analysis is enough for a first look at the possible qualitative features of the solution (20).
Figure 3: The CMB angular power spectrum $C_{\ell}$ vs. ${\ell}$ . Black dots
with error bars in the two panels represent (part of) the TT data of the
WMAP-7 results. The contributions of the cosmic variance to the errors in the
two models, the standard one and ours, are shown by the dash-dotted upper and
lower contours in both panels. In the top panel, the black solid curve is the
theoretical prediction of the standard $\Lambda$-CDM model with the usual
scale-invariant primordial power spectrum ${\cal
P}_{\delta\phi}(k)=H^{2}/4\pi^{2}$ (i.e. the spectral index $n_{s}=1$), and
with the cosmological parameters fixed at $h=0.73$,
$\Omega_{\textmd{b}}h^{2}=0.0226$, $\Omega_{\textmd{cdm}}h^{2}=0.112$,
$\Omega_{\Lambda}=0.728$, $\tau=0.087$. In the bottom panel, the curves are
obtained with the same cosmological model, but with the modified power
spectrum (20) when $\varepsilon^{*}_{\gamma}=2.5\times 10^{-4}$
$\textmd{Mpc}^{-1}$. The cosmic age is $13.79$ Gyr and the value of
$\sigma_{8}$ is $0.809$.
## 5 Conclusions and Discussions
In this paper, we studied the possible effects of cosmological-constant type dark energy on the standard inflationary paradigm of modern cosmology. We presented a unified scenario of the $\Lambda$-CDM and the inflation model. This is accommodated by the new dispersion relation (12), which stems from the kinematics of free particles in a four dimensional de Sitter space. We obtained a modified inflation model of a minimally coupled scalar field. The ultraviolet behavior of the primordial spectrum in our model differs little from the usual scale-invariant one, which ensures agreement with the WMAP 7-year observations for the high-$\ell$ components. For certain values of the model parameter $\varepsilon^{*}_{\gamma}$, the model is able to generate a power spectrum with lower power at large scales. From the relation (12) and $R=1/\sqrt{K}$, we approximately have $R\sim 10^{3}$ Mpc for $\varepsilon^{*}_{\gamma}\sim 10^{-4}$ $\textmd{Mpc}^{-1}$. If one agrees that our universe is asymptotic to the Robertson-Walker-like de Sitter space of $R\simeq(3/\Lambda)^{1/2}$ Guo2008- , from $\Omega_{\Lambda}\equiv\Lambda/3H_{0}^{2}$ one finally obtains $\Omega_{\Lambda}\sim 10^{-1}$. This is in reasonable agreement with the currently accepted value of $\Omega_{\Lambda}\simeq 0.72$ pdg2009 .
We note, however, that as pointed out by C. Bennett et al. Bennett2011 , the mean value of the CMB quadrupole component $C_{2}$ predicted by the best-fit $\Lambda$-CDM model lies within the $95\%$ confidence region allowed by the data. If this is true, it means that the measured value of the quadrupole is not anomalously low. So one has to take this rough model of inflation with a few grains of salt. In this paper, we have only offered a theoretical possibility: the inflationary paradigm of a single scalar field with $\Lambda$-type dark energy is still able to yield an observation-compatible scale-invariant primordial spectrum, while having the potential to generate a spectrum with suppressed low-$\ell$ multipoles. The result we obtained is undoubtedly sketchy. As stated at the end of Section 4, a more careful and comprehensive numerical analysis like that done by the WMAP team should be carried out before drawing a firm quantitative conclusion. But that requires professional data analysis techniques which are outside our field of research.
Last, a close relation between de Sitter space and Doubly Special Relativity (DSR) should be noticed. DSR, first proposed by Amelino-Camelia around the start of this millennium amelino2000- , is one of the possible explanations of the GZK feature in the energy spectrum of the ultra-high energy cosmic rays without invoking Lorentz symmetry violation. It is based on two fundamental assumptions:
* •
The principle of relativity still holds, i.e., all the inertial observers are
equivalent;
* •
There exist two observer-independent scales: one is of dimension of velocity,
which is identified with the speed of light $c$; the other is of dimension of
length $\kappa$ (or mass $\kappa^{-1}$), which is identified with the Planck
length (or mass).
The energy-momentum space of DSR is found to be a four dimensional maximally
symmetric one. In differential geometry, such a manifold must be locally
diffeomorphic to one of the three kinds of spaces of constant curvature: _de
Sitter_ , _Minkowski_ , and _Anti-de Sitter_ , of which the sign of curvature
$K$ is respectively $+$, $0$, and $-$. It was first pointed out by J.
Kowalski-Glikman J2002- gac and later reaffirmed by H.-Y. Guo Guo2007- that
DSR can be taken as a theory with its energy-momentum space being a four
dimensional de Sitter space. Different formulations of DSR can be identified
with taking different coordinate systems on this space. In our paper, the
curvature radius $R$ of the de Sitter space plays the role of the length scale
$\kappa$ in DSR and our research is formulated in the Beltrami coordinates.
However, more research, on both the theoretical and the phenomenological side, needs to be done before claiming that the particle kinematics in DSR can be identified with that in a de Sitter space. Relevant research is currently under way.
## References
* (1) N. Jarosik et al., “Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, And Basic Results,” Astrophys. J. Suppl. 192, 14 (2011). (The WMAP 7-year data is publicly available on the website http://lambda.gsfc.nasa.gov/product/map/current/m_products.cfm )
* (2) E. Komatsu et al., “Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation,” Astrophys. J. Suppl. 192, 18 (2011).
* (3) A. A. Starobinsky, “Spectrum Of Relict Gravitational Radiation And The Early State Of The Universe,” JETP Lett. 30, 682 (1979) [Pisma Zh. Eksp. Teor. Fiz. 30, 719 (1979)]; A. A. Starobinsky, “A New Type Of Isotropic Cosmological Models Without Singularity,” Phys. Lett. B 91, 99 (1980).
* (4) V. F. Mukhanov and G. V. Chibisov, “Quantum Fluctuation And ‘Nonsingular’ Universe,” JETP Lett. 33, 532 (1981) [Pisma Zh. Eksp. Teor. Fiz. 33, 549 (1981)].
* (5) A. H. Guth, “The Inflationary Universe: A Possible Solution To The Horizon And Flatness Problems,” Phys. Rev. D 23, 347 (1981).
* (6) A. D. Linde, “A New Inflationary Universe Scenario: A Possible Solution Of The Horizon, Flatness, Homogeneity, Isotropy And Primordial Monopole Problems,” Phys. Lett. B 108, 389 (1982); A. D. Linde, “Coleman-Weinberg Theory And A New Inflationary Universe Scenario,” Phys. Lett. B 114, 431 (1982); A. D. Linde, “Temperature Dependence Of Coupling Constants And The Phase Transition In The Coleman-Weinberg Theory,” Phys. Lett. B 116, 340 (1982); A. D. Linde, “Scalar Field Fluctuations In Expanding Universe And The New Inflationary Universe Scenario,” Phys. Lett. B 116, 335 (1982).
* (7) A. Albrecht and P. J. Steinhardt, “Cosmology For Grand Unified Theories With Radiatively Induced Symmetry Breaking,” Phys. Rev. Lett. 48, 1220 (1982).
* (8) S. W. Hawking, “The Development Of Irregularities In A Single Bubble Inflationary Universe,” Phys. Lett. B 115, 295 (1982); A. A. Starobinsky, “Dynamics Of Phase Transition In The New Inflationary Universe Scenario And Generation Of Perturbations,” Phys. Lett. B 117, 175 (1982); A. H. Guth and S. Y. Pi, “Fluctuations In The New Inflationary Universe,” Phys. Rev. Lett. 49, 1110 (1982); J. M. Bardeen, P. J. Steinhardt and M. S. Turner, “Spontaneous Creation Of Almost Scale - Free Density Perturbations In An Inflationary Universe,” Phys. Rev. D 28, 679 (1983).
* (9) V. F. Mukhanov, “Gravitational Instability Of The Universe Filled With A Scalar Field,” JETP Lett. 41, 493 (1985) [Pisma Zh. Eksp. Teor. Fiz. 41, 402 (1985)]; V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, “Theory Of Cosmological Perturbations,” Phys. Rept. 215, 203 (1992); V. F. Mukhanov, Physical Foundations of Cosmology, Cambridge University Press, 2008.
* (10) H. Kurki-Suonio, “Physics Of The Cosmic Microwave Background And The Planck Mission,” Proceedings of the 2010 CERN Summer School, Raseborg (Finland), submitted for publication in a CERN Yellow Report (2010) [arXiv:1012.5204v1].
* (11) D. Larson et al., “Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Power Spectra And WMAP-Derived Parameters,” Astrophys. J. Suppl. 192, 16 (2011).
* (12) J. M. Cline, P. Crotty and J. Lesgourgues, “Does The Small CMB Quadrupole Moment Suggest New Physics?,” J. Cosmol. Astropart. P. 9, (2003).
* (13) G. Efstathiou, “Is The Low CMB Quadrupole A Signature Of Spatial Curvature?,” Mon. Not. Roy. Astron. Soc. 343, L95 (2003) [arXiv:astro-ph/0303127].
* (14) H. Kurki-Suonio, “Physics of the Cosmic Microwave Background and the Planck Mission,” Proceedings of the 2010 CERN Summer School, Raseborg (Finland), submitted for publication in a CERN Yellow Report [arXiv:1012.5204v1].
* (15) Y. P. Jing and L. Z. Fang, “An Infrared Cutoff Revealed By The Two Years Of COBE - DMR Observations Of Cosmic Temperature Fluctuations,” Phys. Rev. Lett. 73, 1882 (1994) [arXiv:astro-ph/9409072].
* (16) D. Scott and G. F. Smoot, “Cosmic Microwave Background Mini-review,” appearing in the 2010 Review of Particle Physics, available on the PDG website at [http://pdg.lbl.gov/2011/astrophysics-cosmology/astro-cosmo.html or http://pdg.lbl.gov/2011/reviews/rpp2011-rev-cosmic-microwave-background.pdf]
* (17) H. Liu, S.-L. Xiong, and T.-P. Li, “The Origin Of The WMAP Quadrupole,” (2010) [arXiv:1003.1073v2].
* (18) H. Liu and T.-P. Li, “Observational Scan Induced Artificial CMB Anisotropy,” Astrophys. J. (2011), to be published [arXiv:1003.1073v2].
* (19) M. Kawasaki and F. Takahashi, “Inflation Model With Lower Multipoles Of The CMB Suppressed,” Phys. Lett. B 570, 151 (2003).
* (20) B. Feng and X.-M. Zhang, “Double Inflation And The Low CMB Quadrupole,” Phys. Lett. B 570, 145 (2003).
* (21) C. R. Contaldi et al., “Suppressing The Lower Multipoles In The CMB Anisotropies,” J. Cosmol. Astropart. P. 9, (2003).
* (22) Y.-S. Piao, “Possible Explanation To A Low CMB Quadrupole,” Phys. Rev. D 71, 087301 (2005).
* (23) L. Kofman, G. R. Blumenthal, H. Hodges and J. R. Primack, “Generation Of Nonflat And Nongaussian Perturbations From Inflation,” ASP Conf. Ser. 15, 339 (1991).
* (24) J. E. Lidsey, A. R. Liddle, E. W. Kolb, E. J. Copeland, T. Barreiro and M. Abney, “Reconstructing The Inflaton Potential–An overview,” Rev. Mod. Phys. 69, 373 (1997) [arXiv:astro-ph/9508078].
* (25) H. M. Hodges and G. R. Blumenthal, “Arbitrariness Of Inflationary Fluctuation Spectra,” Phys. Rev. D 42, 3329 (1990).
* (26) N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze, “Non-Local Modification of Gravity And The Cosmological Constant Problem,” [arXiv:hep-th/0209227]; N. Arkani-Hamed, S. Dimopoulos, G. Dvali, G. Gabadadze, and A.D. Linde, “Self-Terminating Inflation,” in preparation.
* (27) A. D. Linde, D. A. Linde and A. Mezhlumian, “Do We Live In The Center Of The World?,” Phys. Lett. B 345, 203 (1995) [arXiv:hep-th/9411111]. A. D. Linde, D. A. Linde and A. Mezhlumian, “Nonperturbative Amplifications Of Inhomogeneities In A Self-Reproducing Universe,” Phys. Rev. D 54, 2504 (1996) [arXiv:gr-qc/9601005].
* (28) A. A. Starobinsky, “Spectrum Of Adiabatic Perturbations In The Universe When There Are Singularities In The Inflation Potential,” JETP Lett. 55, 489 (1992) [Pisma Zh. Eksp. Teor. Fiz. 55, 477 (1992)].
* (29) H. M. Hodges, G. R. Blumenthal, L. A. Kofman and J. R. Primack, “Nonstandard Primordial Fluctuations From A Polynomial Inflaton Potential,” Nucl. Phys. B 335, 197 (1990).
* (30) D. C. Rodrigues, “Anisotropic Cosmological Constant And The CMB Quadrupole Anomaly,” Phys. Rev. D 77, 023534 (2008).
* (31) C. J. Copi, “Large-angle Anomalies in the CMB,” Advances in Astronomy 2010, 847541 (2010).
* (32) C. L. Bennett et al., “Seven-Year Wilkinson Microwave Anisotropy Probe (Wmap) Observations: Are There Cosmic Microwave Background Anomalies?” Astrophys. J. Suppl. 192, 17 (2011).
* (33) R. H. Brandenberger and P. M. Ho, “Noncommutative Spacetime, Stringy Spacetime Uncertainty Principle, and Density Fluctuations,” Phys. Rev. D 66, 023517 (2002) [arXiv:hep-th/0203119].
* (34) R. H. Brandenberger, “Trans-Planckian Physics And Inflationary Cosmology,” Proceedings of the 2002 International Symposium on Cosmology and Particle Astrophysics, 101 (2003) [arXiv:hep-th/0210186v2].
* (35) J. Martin and R. H. Brandenberger, “Trans-Planckian Problem Of Inflationary Cosmology,” Phys. Rev. D 63, 123501 (2001).
* (36) P. J. E. Peebles and B. Ratra, “The Cosmological Constant And Dark Energy,” Rev. Mod. Phys. 75, 559 (2003) [arXiv:astro-ph/0207347v2].
* (37) Z. Chang, S.-X. Chen and C.-B. Guan, “Cosmic Ray Threshold Anomaly And Kinematics In The dS Spacetime,” (2004) [arXiv:astro-ph/0402351v1]; Z. Chang, S.-X. Chen, C.-B. Guan and C.-G. Huang, “Cosmic Ray Threshold In An Asymptotically DS Spacetime,” Phys. Rev. D 71, 103007 (2005) [arXiv:astro-ph/0505612v1].
* (38) H.-Y. Guo, C.-G. Huang, Z. Xu and B. Zhou, “On Beltrami Model Of De Sitter Spacetime,”, Mod. Phys. Lett. A 19, 1701 (2004).
* (39) A. Riotto, “Inflation And The Theory Of Cosmological Perturbations,” Lectures given at the: Summer School on Astroparticle Physics and Cosmology (2002) [arXiv:hep-ph/0210162v1].
* (40) S. Dodelson, Modern Cosmology, Elsevier(Singapore) Pte Ltd., 2003.
* (41) C.-P. Ma and E. Bertschinger, ”Cosmological Perturbation-Theory In The Synchronous And Conformal Newtonian Gauges,” Astrophys. J. 455, 7 (1995) [arXiv:astro-ph/9506072v1].
* (42) U. Seljak and M. Zaldarriaga, “A Line Of Sight Approach To Cosmic Microwave Background Anisotropies,” Astrophys. J. 469, 437 (1996) [arXiv:astro-ph/9603033].
* (43) M. Bucher, K. Moodley and N. Turok, “The General Primordial Cosmic Perturbation,” Phys. Rev. D 62, 083508 (2000).
* (44) A. Lewis and A. Challinor, Code for Anisotropies in the Microwave Background(CAMB) (2011) [http://camb.info/] (This code is based on CMBFAST by U. Seljak and M. Zaldarriaga (1996) [http://lambda.gsfc.nasa.gov/toolbox/tb_ cmbfast_ ov.cfm]).
* (45) O. Lahav and A. Liddle, “The Cosmological Parameters,” appearing in the 2010 Review of Particle Physics, available on the PDG website at [http://pdg.lbl.gov/2011/astrophysics-cosmology/astro-cosmo.html or http://pdg.lbl.gov/2011/reviews/rpp2011-rev-cosmic-microwave-background.pdf].
* (46) C. L. Bennett et al., “Four-Year COBE* DMR Cosmic Microwave Background Observations: Maps And Basic Results,” Astrophys. J. Lett. 464, (1996).
* (47) C. B. Netterfield et al., “A Measurement By BOOMERANG Of Multiple Peaks In The Angular Power Spectrum Of The Cosmic Microwave Background,” Astrophys. J. 571, 604 (2002) [arXiv:astro-ph/0104460].
* (48) S. Hanany et al., “MAXIMA-1: A Measurement Of The Cosmic Microwave Background Anisotropy On Angular Scales of 10 Arcminutes To 5 Degrees ,” Astrophys. J. 545, L5 (2000) [arXiv:astro-ph/0005123].
* (49) N. W. Halverson et al., “DASI First Results: A Measurement Of The Cosmic Microwave Background Angular Power Spectrum ,” Astrophys. J. 568, 38 (2002) [arXiv:astro-ph/0104489].
* (50) P. F. Scott et al., “First Results From The Very Small Array—III. The CMB power spectrum,” Mon. Not. Roy. Astron. Soc. 341, 1076 (2003) [arXiv:astro-ph/0205380].
* (51) T. J. Pearson et al., “The Anisotropy of the Microwave Background To $\ell$ = 3500: Mosaic Observations With The Cosmic Background Imager ,” Astrophys. J. 591, 556 (2003) [arXiv:astro-ph/0205388].
* (52) CBI Supplementary Data, 10 July (2002) [http://www.astro.caltech.edu/$\sim$tjp/CBI/data/].
* (53) A. Lewis and S. Bridle, “Cosmological Parameters From CMB And Other Data: A Monte-Carlo Approach,” Phys. Rev. D 66, 103511 (2002) [arXiv:astro-ph/0205436].
* (54) H. Y. Guo, Science in China A, 51, 568 (2008).
* (55) G. Amelino-Camelia, “Relativity In Space-times With Short-Distance Structure Governed By An Observer-Independent (Planckian) Length Scale,” Int.J.Mod.Phys. D 11, 35 (2002) [arXiv:gr-qc/0012051]; G. Amelino-Camelia, “Testable Scenario For Relativity With Minimum-Length,” Phys. Lett. B 510, 255, (2001) [arXiv:hep-th/0012238]; G. Amelino-Camelia, “Doubly Special Relativity,” Nature 418, 34 (2002) [arXiv:gr-qc/0207049].
* (56) J. Kowalski-Glikman, “De Sitter Space As An Arena For Doubly Special Relativity,” Phys. Lett. B 547, 291 (2002).
* (57) J. Kowalski-Glikman, “Introduction To Doubly Special Relativity,” Lect. Notes Phys. 669, 131 (2005).
* (58) H. Y. Guo et al., “Snyder’s Model - De Sitter Special Relativity Duality And De Sitter Gravity”, Class. Quantum Grav. 24, 4009 (2007).
|
arxiv-papers
| 2011-05-18T01:34:29 |
2024-09-04T02:49:18.848178
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhe Chang, Ming-Hua Li, Sai Wang, and Xin Li",
"submitter": "Ming-Hua Li",
"url": "https://arxiv.org/abs/1105.3509"
}
|
1105.3518
|
ON THE NONTRIVIAL REAL ZERO OF THE DIRICHLET L FUNCTION OF A REAL PRIMITIVE CHARACTER
JINHUA FEI
ChangLing Company of Electronic Technology Baoji ShaanXi P.R.China
E-mail: feijinhuayoujian@msn.com
Abstract. This paper proves that the Dirichlet L function of a real primitive character has no nontrivial real zero.
Keywords. Dirichlet L function, real zero
AMS subject classification. 11M20
Let $\,s=\sigma+it\,$ be a complex number. For $\,Re\,s>1$, the Riemann zeta function is defined as
$\zeta(s)=\,\sum_{n=1}^{+\infty}\,\frac{1}{n^{s}}$
and the Dirichlet L function is defined as
$L(s,\chi)=\,\sum_{n=1}^{+\infty}\,\frac{\chi(n)}{n^{s}}$
where $\,\chi(n)$ is a Dirichlet character of mod q .
The product of $\zeta(s)$ and $\,L(s,\chi)$ is
$\zeta(s)L(s,\chi)=\,\sum_{n=1}^{+\infty}\,\,\frac{a_{n}}{n^{s}}$
where $\,\,a_{n}=\sum_{d|n}\chi(d)$ . If $\,n=p_{1}^{\alpha_{1}}\cdots p_{u}^{\alpha_{u}}$ is the standard prime factorization of $\,n$ , then by the multiplicativity of $\,a_{n}$ we have
$a_{n}=\prod_{r=1}^{u}(1+\chi(p_{r})+\cdots+\chi(p_{r}^{\alpha_{r}}))$
It is easy to see that if $\,\chi$ is a real character, then $\,\,a_{n}\geq 0$ .
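As a quick illustration of this nonnegativity (an added sanity check, not part of the original argument), one can take the real primitive character given by the Legendre symbol mod 7 and verify $\,a_{n}\geq 0\,$ numerically:
```python
# a_n = sum over d|n of chi(d), with chi(d) = (d/7) the Legendre symbol, a real primitive character mod 7
def chi(d, p=7):
    if d % p == 0:
        return 0
    return 1 if pow(d, (p - 1) // 2, p) == 1 else -1

def a(n):
    return sum(chi(d) for d in range(1, n + 1) if n % d == 0)

print([a(n) for n in range(1, 13)])          # 1, 2, 0, 3, 0, 0, 1, 4, 1, 0, 2, 0
assert all(a(n) >= 0 for n in range(1, 2001))
```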
LEMMA 1. (1) When $\,\,0<\sigma<1$ , there is a constant $\,\,0\leq\alpha\leq 1$ such that
$\sum_{1\leq n\leq\xi}\,\,\frac{1}{n^{\sigma}}\,=\,\frac{\xi^{1-\sigma}-1}{1-\sigma}\,+\,\alpha\,+\,O\left(\frac{1}{\xi^{\sigma}}\right)$
(2) When $\,\,\sigma>1\,$ and $\,\xi\geq 1\,$ , we have
$\sum_{\xi\leq n}\,\,\frac{1}{n^{\sigma}}\,=\,\frac{1}{(\sigma-1)\xi^{\sigma-1}}\,+\,\,O\left(\frac{1}{\xi^{\sigma}}\right)$
For the proof, see Theorem 2 on page 101 and Example 4 on page 103 of reference [1].
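As an added numerical illustration of part (1), for $\sigma=\frac{1}{2}$ the constant works out to $\alpha\approx 0.54$, which indeed lies in $[0,1]$:
```python
# estimate of the constant alpha in Lemma 1 (1) for sigma = 1/2
sigma, xi = 0.5, 10**6
partial_sum = sum(n ** (-sigma) for n in range(1, xi + 1))
alpha = partial_sum - (xi ** (1 - sigma) - 1) / (1 - sigma)
print(alpha)                 # about 0.54, up to an error of order xi^(-sigma) = 1e-3
assert 0 <= alpha <= 1
```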
LEMMA 2. Let $\,s=\sigma+it$ be a complex number. When $\,\,\sigma>0\,$ and $\,x\,\geq\,4q\,(|t|+2)\,$ , for an arbitrary nonprincipal character $\,\,\chi$ of mod $\,q$ , we have
$L(s,\chi)\,=\,\sum_{1\leq n\leq x}\,\frac{\chi(n)}{n^{s}}\,\,+\,\,O\left(\,q\,x^{-\sigma}\right)$
For the proof, see Theorem 1 on page 447 of reference [2].
LEMMA 3. Let $\chi$ be an arbitrary real primitive character of mod q , $\,q\,\geq 3\,$ , and let $L(s,\chi)$ be the corresponding Dirichlet L function.
(1) There exists a positive absolute constant $\,c_{1}\,$ such that
$L(1,\chi)\,\,\geq\,\,c_{1}\,(\sqrt{q}\log^{2}q)^{-1}$
(2) There exists a positive absolute constant $\,c_{2}\,$ such that any real zero $\beta$ of the function $\,\,L(s,\chi)\,\,$ satisfies
$\beta\,\,\leq\,\,1-c_{2}\,(\sqrt{q}\log^{4}q)^{-1}$
For the proofs, see Theorem 2 on page 296 and Theorem 3 on page 299 of reference [2].
LEMMA 4. Let $\,\,\frac{1}{2}\leq\beta<1\,\,$ be a real zero of the Dirichlet L function $\,\,L(s,\chi)\,\,$ of a real primitive character $\chi$ , and let $\,\,a_{n}=\sum_{d|n}\chi(d)\,\,$ . When the positive integer $\,\,x\geq q^{6}\,\,$ , we have
(1)
$\sum_{1\leq n\leq x}\,\,\frac{a_{n}}{n^{\beta}}\,\,=\,\,\frac{x^{(1-\beta)}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,O\,(\,\,x^{(1-\beta)-\frac{1}{2}}\log x\,\,q^{\frac{3}{2}}\log^{4}q\,)$
(2)
$\sum_{n\,=\,x+1}^{\infty}\,\,\frac{a_{n}}{n^{3}}\,\,=\,\,\frac{L(1,\chi)}{2}(x+1)^{-2}\,\,+\,\,O(\,x^{-\frac{5}{2}}q)$
Proof. (1) When the positive integer $\,\,x\geq q^{6}\,\,$ , we have
$\sum_{1\leq n\leq x}\,\,\frac{a_{n}}{n^{\beta}}\,\,=\,\,\sum_{1\leq n\leq
x}\,\,\frac{1}{n^{\beta}}\,\,\sum_{d|n}\chi(d)=\,\,\sum_{1\leq d\leq
x}\,\,\frac{\chi(d)}{d^{\beta}}\,\,\sum_{m\leq\frac{x}{d}}\frac{1}{m^{\beta}}$
$=\,\,\sum_{1\leq d\leq
x^{\frac{1}{2}}}\,\,\frac{\chi(d)}{d^{\beta}}\,\,\sum_{m\leq\frac{x}{d}}\frac{1}{m^{\beta}}\,\,+\,\,\sum_{x^{\frac{1}{2}}\leq
d\leq
x}\,\,\frac{\chi(d)}{d^{\beta}}\,\,\sum_{m\leq\frac{x}{d}}\frac{1}{m^{\beta}}\,\,=\,\,\sum\nolimits_{1}\,\,+\,\,\sum\nolimits_{2}$
From Lemma 1 (1) , Lemma 2 , and Lemma 3 (2) , we have
$\sum\nolimits_{1}\,\,=\,\,\sum_{1\leq d\leq
x^{\frac{1}{2}}}\,\,\frac{\chi(d)}{d^{\beta}}\left(\frac{x^{1-\beta}}{(1-\beta)d^{1-\beta}}\,\,-\,\,\frac{1}{1-\beta}\,\,+\,\,\alpha\,\,+\,\,O\left(d^{\beta}x^{-\beta}\right)\right)$
$=\frac{x^{1-\beta}}{1-\beta}\,\,\sum_{1\leq d\leq
x^{\frac{1}{2}}}\,\,\frac{\chi(d)}{d}\,\,\,+\,\,\,\left(\alpha-\frac{1}{1-\beta}\right)\sum_{1\leq
d\leq
x^{\frac{1}{2}}}\,\,\frac{\chi(d)}{d^{\beta}}\,\,+\,\,O\left(x^{-\beta+\frac{1}{2}}\right)$
$=\,\,\frac{x^{1-\beta}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,O\left(\frac{x^{\frac{1}{2}-\beta}}{1-\beta}\,\,q\right)\,\,+\,\,O\left(\frac{x^{-\frac{1}{2}\beta}}{1-\beta}\,q\,\right)\,\,+\,\,O\left(x^{-\beta+\frac{1}{2}}\right)$
$=\,\,\frac{x^{1-\beta}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,O\left(x^{(1-\beta)-\frac{1}{2}}\,q^{\frac{3}{2}}\,\log^{4}q\right)\qquad\qquad\qquad\qquad$
$\sum\nolimits_{2}\,\,=\,\,\sum_{1\leq m\leq
x^{\frac{1}{2}}}\,\frac{1}{m^{\beta}}\,\,\sum_{x^{\frac{1}{2}}\leq
d\leq\frac{x}{m}}\,\frac{\chi(d)}{d^{\beta}}\,\,=\,\,O\left(\sum_{1\leq m\leq
x^{\frac{1}{2}}}\,\frac{1}{m^{\beta}}\left(x^{-\frac{1}{2}\beta}\,\sqrt{q}\,\log
q\right)\right)$
$=O\left(x^{\frac{1}{2}(1-\beta)}\,x^{-\frac{1}{2}\beta}\,\log
x\,\sqrt{q}\,\log q\right)\,\,=\,\,O\left(x^{(1-\beta)-\frac{1}{2}}\,\log
x\,\,\sqrt{q}\,\log q\right)$
(2) Set $\,\,y=x+1\,$ , with the positive integer $\,\,x\geq q^{6}$ . Then
$\sum_{y\leq n<\infty}\,\,\frac{a_{n}}{n^{3}}\,\,=\,\,\sum_{y\leq
n<\infty}\frac{1}{n^{3}}\,\sum_{d|n}\chi(d)\,\,=\,\,\sum_{1\leq
d<\infty}\frac{\chi(d)}{d^{3}}\,\,\sum_{\frac{y}{d}\leq
m<\infty}\frac{1}{m^{3}}$
$=\,\,\sum_{1\leq
d<y^{\frac{1}{2}}}\frac{\chi(d)}{d^{3}}\,\,\sum_{\frac{y}{d}\leq
m<\infty}\frac{1}{m^{3}}\,\,+\,\,\sum_{y^{\frac{1}{2}}\leq
d<\infty}\frac{\chi(d)}{d^{3}}\,\,\sum_{\frac{y}{d}\leq
m<\infty}\frac{1}{m^{3}}\,\,=\,\,\sum\nolimits_{3}\,\,+\,\,\sum\nolimits_{4}$
From Lemma 1 (2) and Lemma 2 , we have
$\sum\nolimits_{3}\,\,=\,\,\sum_{1\leq d\leq
y^{\frac{1}{2}}}\,\,\frac{\chi(d)}{d^{3}}\,\,\left(\,\,\frac{d^{2}}{2y^{2}}\,\,+\,\,O\left(\,\frac{d^{3}}{y^{3}}\right)\right)\,\,=\,\,\frac{1}{2y^{2}}\sum_{1\leq
d\leq
y^{\frac{1}{2}}}\,\,\frac{\chi(d)}{d}\,\,+\,\,O\left(\,y^{-\frac{5}{2}}\right)$
$=\,\,\frac{L(1,\chi)}{2y^{2}}\,\,+\,\,O\left(\,y^{-\frac{5}{2}}q\,\right)\,\,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$
$\sum\nolimits_{4}\,\,=\,\,\sum_{1\leq m\leq
y^{\frac{1}{2}}}\,\,\frac{1}{m^{3}}\,\,\sum_{\frac{y}{m}\leq
d<\infty}\,\,\frac{\chi(d)}{d^{3}}\,\,=\,\,O\left(\,\,\sum_{1\leq m\leq
y^{\frac{1}{2}}}\,\,\frac{1}{m^{3}}\,\,\left(\frac{m^{3}\,\sqrt{q}\,\log
q}{y^{3}}\right)\right)\,\,$
$=\,\,O\left(y^{-\frac{5}{2}}\,\sqrt{q}\,\log
q\right)\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$
This completes the proof.
LEMMA 5. Let $\,\,\Gamma(s)\,$ be the Euler $\,\,\Gamma\,\,$ function.
(1) When $\,\,Re\,s>0\,\,$ , we have
$\Gamma(s)\,\,=\,\,\int_{0}^{\infty}\,\,e^{-u}\,u^{s-1}\,d\,u$
(2) When $\,\,y>0\,,\,\,b>0\,\,$ , we have
$e^{-y}\,=\,\,\frac{1}{2\pi i}\int_{(b)}\,y^{-s}\,\Gamma(s)\,d\,s$
where $\,\,\int_{(b)}\,\,=\,\,\int_{b-i\infty}^{b+i\infty}$ .
(3) For an arbitrary complex number $\,s\,$ , we have
$-\,\,\frac{\Gamma^{\prime}(s)}{\Gamma(s)}\,\,=\,\,\frac{1}{s}\,\,+\,\,\gamma\,\,+\,\,\sum_{n=1}^{\infty}\,\left(\frac{1}{n+s}\,-\,\frac{1}{n}\right)$
where $\,\gamma\,$ is Euler's constant.
For the proofs, see page 20, Property 4 on page 45, and Property 8 on page 48 of reference [2].
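Part (3) is the standard series for the logarithmic derivative of $\Gamma$; the following small spot check (an added illustration using SciPy's digamma function) confirms it numerically at $s=0.3$:
```python
# numerical check of Lemma 5 (3) at s = 0.3, truncating the series at N terms
from scipy.special import digamma

s, N = 0.3, 10**6
euler_gamma = 0.5772156649015329
series = sum(1.0 / (n + s) - 1.0 / n for n in range(1, N + 1))
print(-digamma(s), 1.0 / s + euler_gamma + series)   # both about 3.50
```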
LEMMA 6. Let $\,\,\Gamma(s)\,$ be the Euler $\,\,\Gamma\,\,$ function.
(1) When $\,\,\frac{1}{8}\leq\sigma\leq\frac{1}{2}\,$, we have $\,\,\Gamma^{\prime}(\sigma)\,<\,0$; in other words, $\,\,\Gamma(\sigma)$ decreases monotonically on this interval.
(2) $\,\,\Gamma(\frac{1}{3})\,\,\geq\,\,2.67\,\,$ and $\,\,\Gamma(\frac{1}{6})\,\,\leq\,6$ .
Proof. (1) When $\,\,\frac{1}{8}\leq\sigma\leq\frac{1}{2}$ , from Lemma 5 (3) we have
$-\,\,\frac{\Gamma^{\prime}(\sigma)}{\Gamma(\sigma)}\,\,=\,\,\frac{1}{\sigma}\,\,+\,\,\gamma\,\,+\,\,\sum_{n=1}^{\infty}\,\left(\frac{1}{n+\sigma}\,-\,\frac{1}{n}\right)$
so that
$\frac{\Gamma^{\prime}(\sigma)}{\Gamma(\sigma)}\,\,=\,\,-\,\frac{1}{\sigma}\,\,-\,\,\gamma\,\,+\,\,\sigma\,\sum_{n=1}^{\infty}\,\frac{1}{(n+\sigma)n}\,\,\leq\,-\,2\,-\,\gamma\,+\,\frac{1}{2}\,\sum_{n=1}^{\infty}\frac{1}{n^{2}}\,\,=\,\,-\,2\,-\,\gamma\,+\,\frac{\pi^{2}}{12}\,<\,-1$
Since $\,\,\Gamma(\sigma)\,>\,0$ when $\,\,\frac{1}{8}\,\leq\,\sigma\,\leq\,\frac{1}{2}\,$ , it follows that $\,\,\Gamma^{\prime}(\sigma)\,<\,0\,\,$, namely, $\,\,\Gamma(\sigma)$ decreases monotonically on this interval.
(2) From part (1), $\,\,\Gamma(\frac{1}{3})\,\geq\,\Gamma(0.334)\,$. From the functional equation $\,\,\Gamma(s+1)=\,s\,\Gamma(s)\,$ and the table on page 1312 of reference [3], we have
$\Gamma(0.334)\,=\,\frac{\Gamma(1.334)}{0.334}\,\,\geq\,\frac{0.8929}{0.334}\,\geq\,2.67\,\,$ .
Similarly,
$\Gamma(\frac{1}{6})\leq\,\Gamma(0.166)\,=\,\frac{\Gamma(1.166)}{0.166}\,\leq\,\frac{0.928}{0.166}\,\leq\,6$ .
This completes the proof.
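The bounds in part (2) and the monotonicity in part (1) are easy to confirm numerically; a small added spot check with Python's math.gamma gives:
```python
import math

print(math.gamma(1 / 3))   # about 2.6789, consistent with Gamma(1/3) >= 2.67
print(math.gamma(1 / 6))   # about 5.5663, consistent with Gamma(1/6) <= 6

# Gamma is strictly decreasing on [1/8, 1/2], as stated in part (1)
xs = [1 / 8 + i * (3 / 8) / 100 for i in range(101)]
assert all(math.gamma(u) > math.gamma(v) for u, v in zip(xs, xs[1:]))
```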
LEMMA 7. Let $\,\chi\,$ be a real primitive character of mod q , and let $\,\,a_{n}\,=\,\sum_{d|n}\,\chi(d)\,$ . When $\,\,y>0\,\,$ and $\,\,Re\,s>\frac{1}{3}$ , we have
$\int_{0}^{y}\,\left(\,\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)u^{s-1}\,d\,u\,\,=\,\,L(1,\chi)\,\,\Gamma(\frac{1}{3})\,\,\frac{3y^{s-\frac{1}{3}}}{3s-1}\,\,+\,\,\delta\,L(0,\chi)\,\zeta(0)\,\frac{y^{s}}{s}\,\,+\,\,\frac{1}{2\pi i}\int_{(-\frac{1}{3})}\,L(3w,\chi)\,\zeta(3w)\,\Gamma(w)\,\frac{y^{s-w}}{s-w}\,d\,w$
where $\,\,\delta\,=\,\frac{1}{2}(1-\chi(-1))$ .
Proof. From Lemma 5 (2) , when $\,\,u>0\,\,$ , we have
$e^{-u}\,\,=\,\,\frac{1}{2\pi i}\,\,\int_{(2)}\,u^{-w}\,\,\Gamma(w)\,\,d\,w$
Let $n\,$ be a positive integer. Making the change of variable $\,\,u\,\rightarrow\,n^{3}u\,\,$ in the above formula, we have
$e^{-n^{3}u}\,\,=\,\,\frac{1}{2\pi
i}\,\int_{(2)}\,\frac{u^{-w}}{n^{3w}}\,\Gamma(w)\,d\,w$
$\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\,=\,\,\frac{1}{2\pi
i}\,\int_{(2)}\,\left(\,\sum_{n=1}^{+\infty}\,\frac{a_{n}}{n^{3w}}\right)\,u^{-w}\,\Gamma(w)\,d\,w$
$=\,\,\frac{1}{2\pi
i}\,\int_{(2)}\,L(3w,\chi)\,\zeta(3w)\,u^{-w}\,\Gamma(w)\,d\,w$
When the contour is moved to $\,\,Re\,w\,=-\frac{1}{3}$ , the integrand has a pole of order 1 at $\,\,w=\frac{1}{3}\,\,$ and, when $\,\,\delta=1\,$ , a pole of order 1 at $\,\,w=0\,\,$ . From the residue theorem the above formula equals
$=\,\,L(1,\chi)\,\Gamma(\frac{1}{3})\,u^{-\frac{1}{3}}\,\,+\,\,\delta\,L(0,\chi)\,\zeta(0)\,\,+\,\,\frac{1}{2\pi
i}\int_{(-\frac{1}{3})}\,L(3w,\chi)\,\zeta(3w)\,\Gamma(w)\,u^{-w}\,d\,w$
Suppose that $\,\,Re\,s>\frac{1}{3}\,\,$ and $\,\,y>0\,\,$ . Multiplying both sides of the above equation by $\,\,u^{s-1}$ and integrating, we have
$\int_{0}^{y}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{s-1}\,d\,u\,\,=\,\,L(1,\chi)\,\Gamma(\frac{1}{3})\,\int_{0}^{y}\,\,u^{s-\frac{4}{3}}\,d\,u\,\,$
$+\,\,\delta\,L(0,\chi)\,\zeta(0)\,\int_{0}^{y}\,u^{s-1}\,d\,u\,\,+\,\,\frac{1}{2\pi
i}\int_{(-\frac{1}{3})}\,L(3w,\chi)\,\zeta(3w)\,\Gamma(w)\,\left(\int_{0}^{y}\,u^{-w+s-1}\,d\,u\right)\,d\,w$
$=\,\,L(1,\chi)\,\Gamma(\frac{1}{3})\,\frac{3\,y^{s-\frac{1}{3}}}{3\,s-1}\,\,+\,\,\delta\,L(0,\chi)\,\zeta(0)\,\frac{y^{s}}{s}\,\,+\,\,\frac{1}{2\pi
i}\int_{(-\frac{1}{3})}\,L(3w,\chi)\,\zeta(3w)\,\Gamma(w)\,\frac{y^{s-w}}{s-w}\,d\,w$
This completes the proof.
THEOREM. Let $\chi$ be a real primitive character of mod q . The corresponding Dirichlet L function $\,L(s,\chi)\,$ has no nontrivial real zero.
Proof. Let $n$ be a positive integer and $\,\,Re\,s>\frac{1}{3}$ . Making the change of variable $\,\,n^{3}\,u\,\rightarrow\,u\,$ in the formula below and using Lemma 5 (1) , we have
$\int_{0}^{\infty}\,e^{-n^{3}u}\,u^{s-1}\,d\,u\,\,=\,\,\frac{1}{n^{3s}}\,\int_{0}^{\infty}\,e^{-u}\,u^{s-1}\,d\,u\,\,=\,\,\frac{1}{n^{3s}}\,\Gamma(s)$
so
$\Gamma(s)\,\sum_{n=1}^{\infty}\,\frac{a_{n}}{n^{3s}}\,\,=\,\,\int_{0}^{\infty}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{s-1}\,d\,u$
$\Gamma(s)\,L(3s,\chi)\,\zeta(3s)\,\,=\,\,\int_{0}^{\infty}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{s-1}\,d\,u$
$=\,\,\int_{y}^{\infty}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{s-1}\,d\,u\,\,+\,\,\int_{0}^{y}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{s-1}\,d\,u$
where $\,\,y>0\,\,$ .
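The identity at the start of this proof is just Euler's integral after the substitution $\,n^{3}u\rightarrow u\,$; an added numerical spot check (with arbitrarily chosen $n=2$ and $s=0.7$) is:
```python
# spot check of  integral_0^inf e^(-n^3 u) u^(s-1) du = Gamma(s) / n^(3s)  for n = 2, s = 0.7
import math
from scipy.integrate import quad

n, s = 2, 0.7
lhs, _ = quad(lambda u: math.exp(-n**3 * u) * u ** (s - 1), 0, 50)  # tail beyond u = 50 is negligible
rhs = math.gamma(s) / n ** (3 * s)
print(lhs, rhs)   # both about 0.303
```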
From Lemma 7 , we have
$\Gamma(s)\,L(3s,\chi)\,\zeta(3s)\,\,=\,\,\int_{y}^{\infty}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{s-1}\,d\,u$
$+\,\,L(1,\chi)\,\Gamma(\frac{1}{3})\,\frac{3\,y^{s-\frac{1}{3}}}{3\,s-1}\,\,+\,\,\delta\,L(0,\chi)\,\zeta(0)\,\frac{y^{s}}{s}\,\,+\,\,\frac{1}{2\pi
i}\int_{(-\frac{1}{3})}\,L(3w,\chi)\,\zeta(3w)\,\Gamma(w)\,\frac{y^{s-w}}{s-w}\,d\,w$
The above formula provides the analytic continuation of the function $\,\,\Gamma(s)\,L(3s,\chi)\,\zeta(3s)\,$ to the half plane $\,\,Re\,s>-\frac{1}{3}$ ; this function has a pole of order 1 at $\,\,s=\frac{1}{3}\,\,$ and, when $\,\,\delta\,=\,1\,$ , a pole of order 1 at $\,\,s=0\,\,$ .
Now assume that $\,L(s,\chi)\,$ has a nontrivial real zero $\beta$ . From the functional equation of $\,L(s,\chi)\,$ , we may assume that $\frac{1}{2}\leq\beta<1$ .
Now take $\,\,s=\frac{\beta}{3}\,$ , we have
$0\,\,=\,\,\int_{y}^{\infty}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{\frac{\beta}{3}-1}\,d\,u\,\,+\,\,L(1,\chi)\,\Gamma(\frac{1}{3})\,\frac{3\,y^{\frac{\beta-1}{3}}}{\beta-1}\,\,$
$+\,\,\delta\,L(0,\chi)\,\zeta(0)\,\frac{3\,y^{\frac{\beta}{3}}}{{\beta}}\,\,+\,\,\frac{1}{2\pi
i}\int_{(-\frac{1}{3})}\,L(3w,\chi)\,\zeta(3w)\,\Gamma(w)\,\frac{3\,y^{\frac{\beta}{3}-w}}{\beta-3w}\,d\,w$
in other words
$\,\,L(1,\chi)\,\Gamma(\frac{1}{3})\,\frac{3\,y^{\frac{\beta-1}{3}}}{1-\beta}\,\,=\,\,\int_{y}^{\infty}\,\left(\,\sum_{n=1}^{+\infty}\,a_{n}\,e^{-n^{3}u}\,\right)\,u^{\frac{\beta}{3}-1}\,d\,u\,\,$
$+\,\,\delta\,L(0,\chi)\,\zeta(0)\,\frac{3\,y^{\frac{\beta}{3}}}{{\beta}}\,\,+\,\,\frac{1}{2\pi
i}\int_{(-\frac{1}{3})}\,L(3w,\chi)\,\zeta(3w)\,\Gamma(w)\,\frac{3\,y^{\frac{\beta}{3}-w}}{\beta-3w}\,d\,w\,\,=\,\,I_{1}\,\,+\,\,I_{2}\,\,+\,\,I_{3}$
Now take $\,\,y=x^{-3}\,$, where $x$ is a positive integer and $\,\,x\geq
q^{6}\,$. We now estimate each term on the right-hand side of the above formula.
$I_{2}\,\,=\,\,O\left(\delta\,L(1,\chi)\,q^{\frac{1}{2}}\,y^{\frac{1}{6}}\right)\,\,=\,\,O\left(x^{-\frac{1}{2}}q^{\frac{1}{2}}\log
q\right)$
From the functional equations of $\,L(s,\chi)\,$ and $\,\zeta(s)\,$, and the
asymptotic formula for $\,\Gamma(s)$, we have
$|\,I_{3}\,|\,\,\leq\,\,\frac{1}{2\pi}\,\int_{-\infty}^{+\infty}\,|\,L(-1+i\nu\,,\chi)\,|\,|\,\zeta(-1+i\nu)\,|\,|\,\Gamma(-\frac{1}{3}+i\nu)\,|\,\left|\,\frac{3\,y^{\frac{1+\beta}{3}}}{\beta+1-3i\nu}\,\right|\,d\,\nu$
$=\,\,O\left(q^{\frac{3}{2}}\,y^{\frac{1+\beta}{3}}\right)\,\,=\,\,O\left(x^{-\frac{3}{2}}\,q^{\frac{3}{2}}\right)\qquad\qquad\qquad\qquad\qquad$
Now calculate $I_{1}$
$I_{1}\,\,=\,\,\sum_{n=1}^{x}\,a_{n}\,\int_{y}^{\infty}\,e^{-n^{3}u}\,u^{\frac{\beta}{3}-1}\,d\,u\,\,+\,\,\sum_{n=x+1}^{+\infty}\,a_{n}\,\int_{y}^{\infty}\,e^{-n^{3}u}\,u^{\frac{\beta}{3}-1}\,d\,u\,\,=\,\,J_{1}\,\,+\,\,J_{2}$
Now calculate $J_{1}$
$J_{1}\,\,\leq\,\,\sum_{n=1}^{x}\,a_{n}\,\int_{0}^{\infty}\,e^{-n^{3}u}\,u^{\frac{\beta}{3}-1}\,d\,u\,\,$
Applying the change of variable $\,n^{3}u\rightarrow u\,$, the above formula
$=\,\,\sum_{n=1}^{x}\,\frac{a_{n}}{n^{\beta}}\,\int_{0}^{\infty}\,e^{-u}\,u^{\frac{\beta}{3}-1}\,d\,u\,\,=\,\,\Gamma(\frac{\beta}{3})\,\sum_{n=1}^{x}\,\frac{a_{n}}{n^{\beta}}$
By Lemma 4 (1), the above formula
$=\,\,\Gamma(\frac{\beta}{3})\,\frac{x^{(1-\beta)}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,O(\,\,x^{(1-\beta)-\frac{1}{2}}q^{\frac{3}{2}}\,\log
x\,\,\log^{4}q)$
By Lemma 6 (1) and (2), the above formula
$\leq\,\,\Gamma(\frac{1}{6})\,\frac{x^{(1-\beta)}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,O(\,\,x^{(1-\beta)-\frac{1}{2}}\,q^{\frac{3}{2}}\log
x\,\log^{4}q)$
$\leq\,\,6\,\frac{x^{(1-\beta)}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,O(\,\,x^{(1-\beta)-\frac{1}{2}}\,q^{\frac{3}{2}}\log
x\,\log^{4}q)$
Now calculate $J_{2}$
$J_{2}\,\,\leq\,\,y^{\frac{\beta}{3}-1}\,\sum_{n=x+1}^{+\infty}\,a_{n}\,\int_{y}^{\infty}\,e^{-n^{3}u}\,d\,u$
Applying the change of variable $\,\,n^{3}u\rightarrow\,u\,$, the above
formula
$=\,\,y^{\frac{\beta}{3}-1}\,\sum_{n=x+1}^{+\infty}\,\frac{a_{n}}{n^{3}}\,\int_{n^{3}y}^{\infty}\,e^{-u}\,d\,u\,\,\leq\,\,y^{\frac{\beta}{3}-1}\,\sum_{n=x+1}^{+\infty}\,\frac{a_{n}}{n^{3}}\,\int_{1}^{\infty}\,e^{-u}\,d\,u\,\,=\,\,\frac{y^{\frac{\beta}{3}-1}}{e}\,\sum_{n=x+1}^{+\infty}\,\frac{a_{n}}{n^{3}}$
By Lemma 4 (2), the above formula
$=\,\,\frac{y^{\frac{\beta}{3}-1}}{e}\,\,\frac{L(1,\chi)}{2}(x+1)^{-2}\,\,+\,\,O(\,y^{\frac{\beta}{3}-1}x^{-\frac{5}{2}}\,q)\,\,\,\leq\,\,\,\frac{x^{(1-\beta)}}{2e}\,\,L(1,\chi)\,+\,\,O(\,x^{(1-\beta)-\frac{1}{2}}\,q)$
Combining the above calculations, we have
$\,\,L(1,\chi)\,\Gamma(\frac{1}{3})\,\frac{3\,x^{(1-\beta)}}{1-\beta}\,\,\leq\,\,6\,\frac{x^{(1-\beta)}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,\frac{x^{(1-\beta)}}{2e}\,\,L(1,\chi)\,+\,\,O(\,\,x^{(1-\beta)-\frac{1}{2}}q^{\frac{3}{2}}\log
x\log^{4}q)$
By Lemma 6 (2), we have
$8\,\,L(1,\chi)\,\,\frac{x^{(1-\beta)}}{1-\beta}\,\,\leq\,\,6\,\,\frac{x^{(1-\beta)}}{1-\beta}\,\,L(1,\chi)\,\,+\,\,\frac{x^{(1-\beta)}}{2e}\,\,L(1,\chi)\,\,+\,\,O\,(\,\,x^{(1-\beta)-\frac{1}{2}}q^{\frac{3}{2}}\log
x\log^{4}q\,)$
so
$2\,\,L(1,\chi)\,\,\frac{x^{(1-\beta)}}{1-\beta}\,\,\,\leq\,\,\frac{x^{(1-\beta)}}{2e}\,\,L(1,\chi)\,\,+\,\,O\,(\,\,x^{(1-\beta)-\frac{1}{2}}q^{\frac{3}{2}}\log
x\log^{4}q\,)$
Dividing both sides by $\,2\,L(1,\chi)\,\,x^{(1-\beta)}\,\,$ and using Lemma 3
(1), we have
$\frac{1}{1-\beta}\,\,\leq\,\,\frac{1}{4e}\,\,+\,\,O\left(\,x^{-\frac{1}{2}}\,q^{2}\,\log
x\,\log^{6}q\,\right)$
Letting $\,\,x\rightarrow+\infty\,$, we have
$\frac{1}{1-\beta}\,\,\leq\,\,\frac{1}{4e}\,\,\leq\,\,\frac{1}{10}$
so $\,\,\beta\,\,\leq\,\,-9\,\,$, which contradicts
$\,\,\frac{1}{2}\,\,\leq\,\,\beta<\,\,1$.
This completes the proof.
|
arxiv-papers
| 2011-05-18T02:51:15 |
2024-09-04T02:49:18.860370
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "JinHua Fei",
"submitter": "JinHua Fei",
"url": "https://arxiv.org/abs/1105.3518"
}
|
1105.3538
|
# The Exact Schema Theorem
Alden H. Wright
Computer Science Department
University of Montana
Missoula, MT 59812
alden.wright@umontana.edu
http://web-dev.cs.umt.edu/~wright/wright.htm
###### Abstract
A schema is a naturally defined subset of the space of fixed-length binary
strings. The Holland Schema Theorem [Hol75] gives a lower bound on the
expected fraction of a population in a schema after one generation of a simple
genetic algorithm. This paper gives formulas for the exact expected fraction
of a population in a schema after one generation of the simple genetic
algorithm.
Holland’s schema theorem has three parts, one for selection, one for
crossover, and one for mutation. The selection part is exact, whereas the
crossover and mutation parts are approximations. This paper shows how the
crossover and mutation parts can be made exact. Holland’s schema theorem
follows naturally as a corollary.
There is a close relationship between schemata and the representation of the
population in the Walsh basis. This relationship is used in the derivation of
the results, and can also make computation of the schema averages more
efficient.
This paper gives a version of the Vose infinite population model where
crossover and mutation are separated into two functions rather than a single
“mixing” function.
Comment added on May 18, 2011: This paper was written in 1999 with the last
revision done on January 28, 2000. A colleague dissuaded me from submitting it
for publication, but it does contain useful results. It has been published on
my website (formerly http://www.cs.umt.edu/u/wright) since 1999. It has
been referenced by other publications.
On May 18, 2011, I reprocessed the LaTeX source using LaTeX, dvips, and
ps2pdf. (There were some fractions that did not display correctly.) Except for
the addition of this comment, updating my e-mail and website URL, and the
reprocessing, there have been no changes since the version of January 28, 2000.
## 1 Introduction
Holland’s schema theorem [Hol75] has been widely used for the theoretical
analysis of genetic algorithms. However, it has two limitations. First, it
only gives information for a single generation. Second, it is an
approximation, giving only lower bounds on the schema frequencies. This paper
removes the second limitation.
Michael Vose and coworkers have introduced exact models of the simple genetic
algorithm. The Vose infinite population model exactly describes the expected
behavior from one generation to the next. The Markov chain model is an exact
model of the finite population behavior of the simple GA.
Stephens et al. [SW97] describe these models as “fine-grained”. They can be
used to qualitatively describe certain aspects of the behavior of the simple
GA. For example, the fixed points of the infinite population model can be used
to describe phenomena such as punctuated equilibria. (See [VL91] and [Vos99a]
for example.) However, due to the large size of the models, it is generally
impossible to apply these models quantitatively to practical-sized problems.
Thus, as is pointed out in [SW97] and [SWA97], a more coarse-grained version
of these models is needed. Models are needed that describe the behavior of a
subset of the variables included in the exact models. For example, a
higher-level organism may have in the order of magnitude of 100,000 genes.
However, population geneticists generally do not try to model all of these;
instead they may use 1-locus and 2-locus models. Modeling using schemata is
the equivalent technique for string-representation genetic algorithms; they
model the behavior of the GA at a subset of the string positions.
In earlier work, Bridges and Goldberg [BG87] derived an exact expression for
expected number of copies of a string under one generation of selection and
one-point crossover, and they claim that their formulas can be extended to
find the expected number of elements in a schema under the same conditions.
Their formulas are complex and not particularly illuminating.
As mentioned before, Stephens and coworkers ([SW97] and [SWA97]) have results
similar to ours for one-point crossover. Our results are more general than
these results in that they hold for general crossover, and they include mutation.
[SW97] includes references to other related papers. Of particular note is
[Alt95] which relates an exact version of the schema theorem to Price’s
theorem in population genetics.
Chapter 19 of [Vos99b] (which the author had not seen when he wrote this
paper) also contains a version of the exact schema theorem as theorem 19.2 for
mixing, where mixing includes crossover and mutation. Theorem 19.2 assumes
that mutation is independent, which is similar to the assumptions on mutation
in this paper.
## 2 Notation
Let $\Omega$ be the space of length $\ell$ binary strings, and let
$n=2^{\ell}$. For $u,v\in\Omega$, let $u\otimes v$ denote the bitwise-and of
$u$ and $v$, and let $u\oplus v$ denote the bitwise-xor of $u$ and $v$. Let
$\overline{u}$ denote the ones-complement of $u$, and $\\#u$ denote the number
of ones in the binary representation of $u$.
Integers in the interval $[0,n)=[0,2^{\ell})$ are identified with the elements
of $\Omega$ through their binary representation. This correspondence allows
$\Omega$ to be regarded as the product group
$\Omega=Z_{2}\times\ldots\times Z_{2}$
where the group operation is $\oplus$. The elements of $\Omega$ corresponding
to the integers $2^{i}$, $i=0,\ldots,\ell-1$ form a natural basis for
$\Omega$.
We will also use column vectors of length $\ell$ to represent elements of
$\Omega$. Let $\bf 1$ denote the vector of ones (or the integer $2^{\ell}-1$).
Thus, $u^{T}v=\\#(u\otimes v)$, and $\overline{u}={\bf 1}\oplus u$.
For any $u\in\Omega$, let $\Omega_{u}$ denote the subgroup of $\Omega$
generated by $\langle 2^{i}:u\otimes 2^{i}=2^{i}\rangle$. In other words,
$v\in\Omega_{u}$ if and only if $v\otimes u=v$. For example, if $\ell=6$, then
$\Omega_{9}=\\{0,1,8,9\\}=\\{000000,000001,001000,001001\\}$.
A schema is a subset of $\Omega$ where some string positions are specified
(fixed) and some are unspecified (variable). Schemata are traditionally
denoted by pattern strings, where a special symbol is used to denote a
unspecified bit. We use the $*$ symbol for this purpose (Holland used the
$\\#$ symbol). Thus, the schema denoted by the pattern string $10\\!*\\!01*$
is the set of strings $\\{100010,100011,101010,101011\\}$.
Alternatively, we can define a schema to be the set $\Omega_{u}\oplus v$,
where $u,v\in\Omega$, and where $u\otimes v=0$. In this notation, $u$ is a
mask for the variable positions, and $v$ specifies the fixed positions. For
example, the schema $\Omega_{001001}\oplus 100010$ would be the schema
$10\\!*\\!01*$ described above.
This definition makes it clear that a schema $\Omega_{u}\oplus v$ with $v=0$
is a subgroup of $\Omega$, and a schema $\Omega_{u}\oplus v$ is a coset of
this subgroup.
Following standard practice, we will define the order of a schema as the
number of fixed positions. In other words, the order of the schema
$\Omega_{\overline{u}}\oplus v$ is $\\#u$ (since $u$ is a mask for the fixed
positions).
A population for a genetic algorithm over length $\ell$ binary strings is
usually interpreted as a multiset (set with repetitions) of elements of
$\Omega$. A population can also be interpreted as a $2^{\ell}$ dimensional
incidence vector over the index set $\Omega$: if $X$ is a population vector,
then $X_{i}$ is the number of occurrences of $i\in\Omega$ in the population. A
population vector can be normalized by dividing by the population size. For a
normalized population vector $x$, $\sum_{i}x_{i}=1$. Let
$\Lambda=\\{x\in R^{n}:\sum_{i}x_{i}=1\mbox{ and }x_{i}\geq 0\mbox{ for all
}i\in\Omega\\}.$
Thus a normalized population vector is an element of $\Lambda$. Geometrically,
$\Lambda$ is the $n-1$ dimensional unit simplex in $R^{n}$. Note that elements
of $\Lambda$ can be interpreted as probability distributions over $\Omega$.
If $expr$ is a Boolean expression, then
$[expr]=\left\\{\begin{array}[]{ll}1&\mbox{~{}~{}~{}if }expr\mbox{ is true
}\\\ 0&\mbox{~{}~{}~{}if }expr\mbox{ is false }\end{array}\right.$
## 3 The fraction of a population in a schema
Let $X$ be a population (not necessarily normalized). We will be interested in
the fraction $X_{k}^{(u)}$ of the elements of $X$ that are elements of the
schema $\Omega_{\overline{u}}\oplus k$:
$X_{k}^{(u)}=\frac{\sum_{i\in\Omega_{\overline{u}}}X_{i\oplus
k}}{\sum_{i\in\Omega}X_{i}}\mbox{~{}~{}~{}~{}~{}~{}~{}~{}~{}for
}k\in\Omega_{u}.$
Note that here $u$ is a mask for the fixed positions of the schema.
If we divide the numerator and denominator of this fraction by the population
size $r$, and if we let $x=X/r$, then we get
$x_{k}^{(u)}=\frac{\sum_{i\in\Omega_{\overline{u}}}x_{i\oplus
k}}{\sum_{i\in\Omega}x_{i}}=\sum_{i\in\Omega_{\overline{u}}}x_{i\oplus k}$
In other words, for a normalized population $x$, we use the notation
$x_{k}^{(u)}$ to denote the schema average for the schema
$\Omega_{\overline{u}}\oplus k$. Note that $x_{0}^{(0)}=1$ since
$\sum_{i\in\Omega}x_{i}=1$.
Let $x^{(u)}$ denote the vector of schema averages, where the vector is
indexed over $\Omega_{u}$. Note that $\sum_{v\in\Omega_{u}}x_{v}^{(u)}=1$.
For a fixed $u$, the family of schemata $\\{\Omega_{\overline{u}}\oplus
v\>:\>v\in\Omega_{u}\\}$ is called a competing family of schemata.
## 4 The Simple Genetic Algorithm
The material in this section is mostly taken from [Vos99b], [Vos96], and
[VW98a].
The simple genetic algorithm can be described through a heuristic function
${\cal G}:\Lambda\rightarrow\Lambda$. As we will show later, $\cal G$ contains
all of the details of selection, crossover, and mutation. The simple genetic
algorithm is given by:
1 | Choose a random population of size $r$ from $\Omega$.
---|---
2 | Express the population as an incidence vector $X$ indexed over $\Omega$.
3 | Let $y={\cal G}(X/r)$. (Note that $X/r$ and $y$ are probability distributions over $\Omega$.)
4 | for k from 1 to $r$ do
5 | | Select individual $i\in\Omega$ according to the probability distribution $y$.
6 | | Add $i$ to the next generation population $Z$.
7 | endfor
8 | Let $X=Z$.
9 | Go to step 3.
It is shown in [Vos99b] that if $X$ is a population, then $y={\cal G}(X/r)$ is
the expected population after one generation of the simple genetic algorithm.
Thus, the schema theorem is a statement about the schema averages of the
population $y$.
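As an illustration, a minimal Python sketch of one generation of this loop is
given below, assuming the heuristic function $\cal G$ is available as a
callable `G`; the function and variable names are illustrative, not taken from
the paper.

```python
import numpy as np

def simple_ga_generation(X, G, rng):
    """One generation of the simple GA (steps 2-9 above).

    X   : incidence vector over Omega (length 2**ell, nonnegative integers)
    G   : heuristic function Lambda -> Lambda (selection, crossover, mutation)
    rng : a numpy random Generator
    """
    r = int(X.sum())                            # population size
    y = G(X / r)                                # probability distribution over Omega
    draws = rng.choice(len(X), size=r, p=y)     # steps 4-7: sample r individuals
    return np.bincount(draws, minlength=len(X)) # next-generation incidence vector
```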
The heuristic function $\cal G$ can be written as the composition of three
heuristic functions $\cal F$, $\cal C$, and $\cal U$ which describe selection,
crossover, and mutation respectively. In other words, ${\cal G}(x)={\cal
U}({\cal C}({\cal F}(x)))={\cal U\circ C\circ F}(x)$. Later sections describe
each of the three heuristic functions in more detail.
## 5 Selection
The selection heuristic $\cal F$ for proportional selection is given by:
${\cal F}_{k}(x)=\frac{f_{k}x_{k}}{\sum_{j\in\Omega}f_{j}x_{j}}$
where $f_{k}$ denotes the fitness of $k\in\Omega$.
Let $F$ denote the diagonal matrix over $\Omega\times\Omega$ whose diagonal
entries are given by $F_{j,j}=f_{j}$. Then the selection heuristic can be
expressed in terms of matrices by
${\cal F}(x)=\frac{Fx}{{\bf 1}^{T}Fx}$
If $X$ is a finite population represented as an incidence vector over
$\Omega$, and if $x=X/r$, then $x_{k}$ is nonzero only for those $k$ that are
in the population $X$ considered as a multiset. Thus, the computation of
${\cal F}(x)$ is feasible in practice even for long string lengths. Further,
the computation of the schema averages after selection can be done directly
from the definition.
###### Theorem 5.1
(Exact schema theorem for proportional selection.) Let $x\in\Lambda$ be a
population, and let $s={\cal F}(x)$. Then
$s_{k}^{(u)}=\frac{\sum_{j\in\Omega_{\overline{u}}}f_{j\oplus k}x_{j\oplus
k}}{\sum_{j\in\Omega}f_{j}x_{j}}$
We give the following algorithm for computing the schema average vector
$s^{(u)}$ from a finite population $X$. Let $I(u)=\\{i\>:\>0\leq i<\ell\mbox{
and }u_{i}=1\\}$, where $u_{i}$ denotes bit $i$ of $u$. Let $P^{(u)}$ be the
function which projects $\Omega$ into $\Omega_{u}$: for $j\in\Omega$, let
$P_{i}^{(u)}(j)=j_{i}$ for $i\in I(u)$.
---
for each $k\in\Omega_{u}$ do
| $s_{k}^{(u)}\leftarrow 0$
endfor
for each $j\in X$ do $\triangleright$ see note below
| $k\leftarrow P^{(u)}(j)$
| $s_{k}^{(u)}\leftarrow s_{k}^{(u)}+f_{j}$
endfor
$\overline{f}\leftarrow 0$
for each $k\in\Omega_{u}$ do
| $\overline{f}\leftarrow\overline{f}+s_{k}^{(u)}$
endfor
for each $k\in\Omega_{u}$ do
| $s_{k}^{(u)}\leftarrow s_{k}^{(u)}/\overline{f}$
endfor
return $s_{k}^{(u)}$
In this algorithm, the population $X$ is interpreted as a multiset. Thus, it
is assumed that “for each $j\in X$ do” means that the loop following is done
once for each of the possibly multiple occurrences of $j$ in $X$. In an
implementation, it would be useful to identify the elements of $\Omega_{u}$
with the integers in the interval $[0,2^{\\#u})$, and to interpret $s^{(u)}$
as a vector indexed over these integers.
Clearly, the complexity of this algorithm is $\Theta(2^{\\#u}+rK)$, where $K$
denotes the complexity of one fitness evaluation.
We now give an example which we will continue through the remaining sections.
Let $\ell=5$, $u=10=01010_{2}$, $r=5$,
$X=\\{6,7,10,13,21\\}=\\{00110,00111,01010,01101,10101\\}$. The schema sum
vector is $x^{(10)}=<\frac{1}{5},\frac{2}{5},\frac{1}{5},\frac{1}{5}>$. Let
$f_{6}=5$, $f_{7}=3$, $f_{10}=4$, $f_{13}=1$, $f_{21}=7$. This gives
$\overline{f}=20$. The schema sum vector after selection is
$s^{(10)}=\frac{1}{20}<7,8,1,4>$.
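As a concrete check, the algorithm above can be coded directly. The following
Python sketch reproduces this worked example; the function and variable names
are mine, and the elements of $\Omega_{u}$ are packed into integers in the
order $0,2,8,10$ as in the text.

```python
import numpy as np

def schema_averages_after_selection(population, fitness, u, ell):
    """Schema averages s^(u) after proportional selection.

    population : list of integers in [0, 2**ell) (a multiset)
    fitness    : dict mapping string -> fitness value
    u          : mask for the fixed positions of the schema family
    """
    fixed_bits = [i for i in range(ell) if (u >> i) & 1]       # I(u)
    def project(j):                       # P^(u)(j), packed into #u bits
        return sum(((j >> b) & 1) << idx for idx, b in enumerate(fixed_bits))

    s = np.zeros(2 ** len(fixed_bits))
    for j in population:
        s[project(j)] += fitness[j]       # accumulate fitness per schema
    return s / s.sum()                    # divide by the fitness normaliser

# Worked example from the text: ell = 5, u = 10, X = {6,7,10,13,21}.
f = {6: 5, 7: 3, 10: 4, 13: 1, 21: 7}
s = schema_averages_after_selection([6, 7, 10, 13, 21], f, u=10, ell=5)
print(s)   # -> [0.35 0.4  0.05 0.2 ], i.e. s^(10) = <7,8,1,4>/20
```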
## 6 Holland’s Schema Theorem
We can now state Holland’s Schema theorem [Hol75].
As in [VW98a], for $u\in\Omega$, define
$\mbox{hi}(u)=\left\\{\begin{array}[]{ll}0&\mbox{~{}~{}~{}if }u=0\\\
\max\\{i:2^{i}\otimes u>0\\}&\mbox{~{}~{}~{}otherwise}\end{array}\right.$
$\mbox{lo}(u)=\left\\{\begin{array}[]{ll}\ell-1&\mbox{~{}~{}~{}if }u=0\\\
\min\\{i:2^{i}\otimes u>0\\}&\mbox{~{}~{}~{}otherwise}\end{array}\right.$
Intuitively, the function $\mbox{hi}(u)$ returns the index of the high-order
bit of $u$, and $\mbox{lo}(u)$ returns the index of the low-order bit. Let ${\cal
L}(u)=\mbox{hi}(u)-\mbox{lo}(u)$. ${\cal L}(u)$ is often called the defining
length of $u$.
###### Theorem 6.1
(Holland’s approximate schema theorem.) Let $x\in\Lambda$ be a normalized
population, and let $y={\cal G}(x)$, where $\cal G$ includes proportional
selection, one-point crossover with crossover rate $c$, and bitwise mutation
with mutation rate $p$. Then,
$y^{(u)}_{v}\geq\frac{\sum_{j\in\Omega_{\overline{u}}}f_{j\oplus v}x_{j\oplus
v}}{\sum_{j\in\Omega}f_{j}x_{j}}\left(1-c\frac{{\cal
L}(u)}{\ell-1}\right)\left(1-p\right)^{\\#u}$
## 7 The Walsh Basis
The Walsh matrix $W$ has dimension $2^{\ell}$ by $2^{\ell}$, and has elements
defined by
$W_{i,j}=2^{-\ell/2}(-1)^{i^{T}j}=\frac{1}{\sqrt{n}}(-1)^{i^{T}j}$
Note that $W$ is symmetric and orthogonal ($WW=I$). The columns of $W$ define
a basis for $R^{n}$ called the Walsh basis.
As an example, for $\ell=2$,
$W=\frac{1}{2}\left[\begin{array}[]{rrrr}1&1&1&1\\\ 1&-1&1&-1\\\ 1&1&-1&-1\\\
1&-1&-1&1\end{array}\right]$
If $x$ is a vector over $\Omega$, then $\widehat{x}=Wx$ can be interpreted as
$x$ written in the Walsh basis, and if $M$ is a matrix over
$\Omega\times\Omega$, then $\widehat{M}=WMW$ can be interpreted as $M$ written
in the Walsh basis.
We are also interested in vectors and matrices indexed over $\Omega_{u}$. If
$\widehat{x}$ is a vector over $\Omega$ written in the Walsh basis, let
$\widehat{x}_{k}^{(u)}=2^{\\#\overline{u}/2}\widehat{x}_{k}$. Theorem 7.1 will
show that $\widehat{x}^{(u)}$ is the Walsh transform of $x^{(u)}$.
We can define a Walsh matrix $W^{(u)}$ indexed over
$\Omega_{u}\times\Omega_{u}$. For $i,j\in\Omega_{u}$, define
$W_{i,j}^{(u)}=2^{-\\#u/2}(-1)^{i^{T}j}$
The following theorem shows how the schema sum vector is related to the Walsh
coefficients of the population.
###### Theorem 7.1
For any $u\in\Omega$,
$x^{(u)}=W^{(u)}\widehat{x}^{(u)}$
Proof.
$\displaystyle(W^{(u)}\widehat{x}^{(u)})_{k}$ $\displaystyle=$ $\displaystyle
2^{-\\#u/2}\sum_{j\in\Omega_{u}}(-1)^{j^{T}k}\widehat{x}_{j}^{(u)}$ (1)
$\displaystyle=$ $\displaystyle
2^{\\#\overline{u}/2-\\#u/2}\sum_{j\in\Omega_{u}}(-1)^{j^{T}k}\widehat{x}_{j}$
$\displaystyle=$ $\displaystyle
2^{\\#\overline{u}/2-\\#u/2}\;2^{-\ell/2}\sum_{j\in\Omega_{u}}(-1)^{j^{T}k}\sum_{v\in\Omega}(-1)^{j^{T}v}x_{v}$
$\displaystyle=$ $\displaystyle
2^{-\\#u}\sum_{v\in\Omega}x_{v}\sum_{j\in\Omega_{u}}(-1)^{j^{T}(v\oplus k)}$
$\displaystyle=$ $\displaystyle 2^{-\\#u}\sum_{w\in\Omega}x_{w\oplus
k}\sum_{j\in\Omega_{u}}(-1)^{j^{T}w}$ $\displaystyle=$ $\displaystyle
2^{-\\#u}\sum_{w\in\Omega_{\overline{u}}}x_{w\oplus
k}\sum_{j\in\Omega_{u}}(-1)^{j^{T}w}$ $\displaystyle=$ $\displaystyle
2^{-\\#u}\sum_{w\in\Omega_{\overline{u}}}x_{w\oplus k}2^{\\#u}$
$\displaystyle=$ $\displaystyle x_{k}^{(u)}$
To see how the sum over $w\in\Omega$ reduces to a sum over
$w\in\Omega_{\overline{u}}$ in the chain of equalities (1), note that
$\sum_{j\in\Omega_{u}}(-1)^{j^{T}w}=0$ whenever $w\otimes u\neq 0$.
$\blacksquare$
Theorem 7.1 shows that the schema averages of a family of competing schemata
determine the Walsh coefficients of the population in a coordinate subspace in
the Walsh basis. To be more specific, consider $u$ as fixed. Then $x^{(u)}$
denotes the schema averages of the family of competing schemata
$\Omega_{\overline{u}}+k$, where $k$ varies over $\Omega_{u}$. Theorem 7.1
shows that these schema averages determine $\widehat{x}^{(u)}$, which is a
rescaling of the projection of $\widehat{x}$ into the coordinate subspace
generated by the elements of $\Omega_{u}$.
To continue the example started in section 5, if
$s^{(10)}=\frac{1}{20}<7,8,1,4>$, then
$\widehat{s}^{(10)}=W^{(10)}s^{(10)}=\frac{1}{20}<10,-2,5,1>$.
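The same example can be verified numerically. The sketch below (illustrative
names) builds $W^{(u)}$ from its definition, with the elements of
$\Omega_{u}$ taken in increasing order as above.

```python
import numpy as np

def walsh_matrix_u(u, ell):
    """W^(u), indexed over Omega_u in increasing order of its elements."""
    omega_u = [k for k in range(2 ** ell) if k & u == k]
    d = len(omega_u)                            # = 2**#u
    W = np.empty((d, d))
    for a, i in enumerate(omega_u):
        for b, j in enumerate(omega_u):
            W[a, b] = (-1) ** bin(i & j).count("1")
    return W / np.sqrt(d)

W10 = walsh_matrix_u(10, 5)
print(W10 @ np.array([7., 8., 1., 4.]))   # 20 * shat^(10) -> [10. -2.  5.  1.]
```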
## 8 Crossover
If parent strings $i,j\in\Omega$ are crossed using a crossover mask
$m\in\Omega$, the children are $(i\otimes m)\oplus(j\otimes\overline{m})$ and
$(i\otimes\overline{m})\oplus(j\otimes m)$. In the simple genetic algorithm,
one child is chosen randomly from the pair of children.
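For example, a small sketch of mask crossover (names are illustrative; bitwise
operators stand in for $\otimes$, $\oplus$, and the ones-complement):

```python
def crossover_children(i, j, m, ell):
    """Children of parents i, j under crossover mask m, per the formula above."""
    mbar = m ^ ((1 << ell) - 1)            # ones-complement of m within ell bits
    return (i & m) ^ (j & mbar), (i & mbar) ^ (j & m)

# ell = 5, parents 6 = 00110 and 21 = 10101, mask m = 3 = 00011:
print(crossover_children(6, 21, 3, 5))     # -> (22, 5), i.e. 10110 and 00101
```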
For each binary string $m\in\Omega$, let
$\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}$ be the probability of using $m$ as a
crossover mask.
The crossover matrix is given by
$C_{i,j}=\sum_{m\in\Omega}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}[i\otimes
m\oplus j\otimes\overline{m}=0]$
$C_{i,j}$ is the probability of obtaining $0$ as the result of the crossover
of $i$ and $j$.
Let $\sigma_{k}$ be the permutation matrix with $i,j$th entry given by
$[j\oplus i=k]$. Then $(\sigma_{k}x)_{i}=x_{i\oplus k}$. Define the crossover
heuristic ${\cal C}:\Lambda\rightarrow\Lambda$ by
${\cal C}_{k}(x)=(\sigma_{k}x)^{T}C\sigma_{k}x\mbox{~{}~{}~{}~{}~{}~{}for
}k\in\Omega$
Corollary 3.3 of [VW98a] shows that the Walsh transform $\widehat{C}$ of the
crossover matrix is equal to $C$.
Vose and Wright [VW98a] show that the $k$th component of ${\cal C}(x)$ with
respect to the Walsh basis is
$\sqrt{n}\sum_{i\in\Omega_{k}}\widehat{x}_{i}\widehat{x}_{i\oplus
k}\widehat{C}_{i,i\oplus k}$
where $n=2^{\ell}$.
###### Theorem 8.1
(The crossover heuristic in the Walsh basis.) Let $x\in\Lambda$ and let
$\widehat{y}$ denote ${\cal C}(x)$ expressed in the Walsh basis. Then
$\widehat{y}_{k}=\sqrt{n}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\widehat{x}_{k\otimes
m}\widehat{x}_{k\otimes\overline{m}}.$
Proof.
$\displaystyle\widehat{y}_{k}$ $\displaystyle=$
$\displaystyle\sqrt{n}\sum_{i\in\Omega_{k}}\widehat{x}_{i}\widehat{x}_{i\oplus
k}C_{i,i\oplus k}$ $\displaystyle=$
$\displaystyle\sqrt{n}\sum_{i\in\Omega_{k}}\widehat{x}_{i}\widehat{x}_{i\oplus
k}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}[(i\otimes
m=0)\wedge((i\oplus k)\otimes\overline{m}=0)]$ $\displaystyle=$
$\displaystyle\sqrt{n}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\sum_{i\in\Omega_{k}}\widehat{x}_{i}\widehat{x}_{i\oplus
k}[(i\otimes m=0)\wedge((i\oplus k)\otimes\overline{m}=0)]$
The condition in the square brackets can only be satisfied when
$i=k\otimes\overline{m}$, and in this case $i\oplus
k=(k\otimes\overline{m})\oplus k=k\otimes m$. Thus,
$\displaystyle\widehat{y}_{k}$ $\displaystyle=$
$\displaystyle\sqrt{n}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\widehat{x}_{k\otimes
m}\widehat{x}_{k\otimes\overline{m}}$
$\blacksquare$
###### Theorem 8.2
(The crossover heuristic for schema in the Walsh basis.) Let $x\in\Lambda$ and
let $\widehat{y}$ denote ${\cal C}(x)$ expressed in the Walsh basis. Then
$\widehat{y}_{k}^{(u)}=\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\;\;\widehat{x}_{k\otimes
m}^{(u\otimes
m)}\;\widehat{x}_{k\otimes\overline{m}}^{(u\otimes\overline{m})}$
Proof.
$\displaystyle\widehat{y}_{k}^{(u)}$ $\displaystyle=$ $\displaystyle
2^{\\#\overline{u}/2}\widehat{y}_{k}$ $\displaystyle=$ $\displaystyle
2^{\\#\overline{u}+\\#u/2}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\;\;\widehat{x}_{k\otimes
m}\;\widehat{x}_{k\otimes\overline{m}}$ $\displaystyle=$ $\displaystyle
2^{\\#\overline{u}+\\#u/2}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\;\left(2^{-\\#(\overline{u\otimes
m})/2}\;\widehat{x}_{k\otimes m}^{(u\otimes
m)}\right)\left(2^{-\\#(\overline{u\otimes\overline{m}})/2}\;\widehat{x}_{k\otimes\overline{m}}^{(u\otimes\overline{m})}\right)$
Consider the exponents:
$\displaystyle-\\#(\overline{u\otimes
m})/2-\\#(\overline{u\otimes\overline{m}})/2$ $\displaystyle=$
$\displaystyle-\ell/2+\\#(u\otimes m)/2-\ell/2+\\#(u\otimes\overline{m})/2$
$\displaystyle=$ $\displaystyle-\ell+\\#u/2$
Thus,
$\displaystyle\widehat{y}_{k}^{(u)}$ $\displaystyle=$ $\displaystyle
2^{\\#\overline{u}+\\#u/2}2^{-\ell+\\#u/2}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\;\;\widehat{x}_{k\otimes
m}^{(u\otimes
m)}\;\widehat{x}_{k\otimes\overline{m}}^{(u\otimes\overline{m})}$
$\displaystyle=$
$\displaystyle\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\;\;\widehat{x}_{k\otimes
m}^{(u\otimes
m)}\;\widehat{x}_{k\otimes\overline{m}}^{(u\otimes\overline{m})}$
$\blacksquare$
To continue the numerical example, suppose that 1-point crossover with
crossover rate $1/2$ is applied to the $\ell=5$ population for which
$\widehat{s}^{(10)}=\frac{1}{20}<10,-2,5,1>$. We want to compute
$\widehat{y}_{k}^{(10)}$ for $k=0,2,8,10$. For $k=0,2,8$, for every crossover
mask $m$, either $k\otimes m=0$ or $k\otimes\overline{m}=0$, so
$\widehat{y}_{k}^{(10)}=\widehat{s}_{k}^{(k)}\widehat{s}_{0}^{(10\oplus
k)}=\widehat{s}_{k}^{(10)}$.
For $k=10$, there are four possible nontrivial crossover masks, each with
probability $1/8$. For two of these, $k\otimes m\neq 0$ and
$k\otimes\overline{m}\neq 0$. This gives
$\widehat{y}_{10}^{(10)}=\frac{3}{4}\widehat{s}_{10}^{(10)}+\frac{1}{4}\widehat{s}_{2}^{(2)}\widehat{s}_{8}^{(2)}=\frac{3}{4}\widehat{s}_{10}^{(10)}+\frac{1}{4}\left(\sqrt{2}\widehat{s}_{2}^{(10)}\right)\left(\sqrt{2}\widehat{s}_{8}^{(10)}\right)=\frac{3}{4}\cdot\frac{1}{20}+\frac{1}{4}\cdot
2\cdot\frac{-2}{20}\cdot\frac{5}{20}=\frac{3}{80}-\frac{1}{80}=\frac{1}{40}$
Thus, $\widehat{y}^{(10)}=\frac{1}{40}<20,-4,10,1>$.
The following theorem gives a simple formula for the exact change in the
expected schema averages after crossover. It is a restatement of theorem ?? of
[SW97] and theorem ?? of [SWA97]. It can also be easily derived from theorem
19.2 of [Vos99b] by setting the mutation rate to be zero.
###### Theorem 8.3
(Exact schema theorem for crossover.) Let $x$ be a population, and let
$y={\cal C}(x)$. Then
$y_{k}^{(u)}=\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}x_{k\otimes
m}^{(u\otimes m)}x_{k\otimes\overline{m}}^{(u\otimes\overline{m})}$ (3)
Proof.
$\displaystyle y_{k}^{(u)}$ $\displaystyle=$ $\displaystyle
2^{-\\#u/2}\sum_{v\in\Omega_{u}}(-1)^{k^{T}v}\widehat{y}_{v}^{(u)}$
$\displaystyle=$ $\displaystyle
2^{-\\#u/2}\sum_{v\in\Omega_{u}}(-1)^{k^{T}v}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\;\;\widehat{x}_{v\otimes
m}^{(u\otimes
m)}\;\widehat{x}_{v\otimes\overline{m}}^{(u\otimes\overline{m})}$
$\displaystyle=$ $\displaystyle 2^{-\\#u/2}\sum_{i\in\Omega_{u\otimes
m}}\sum_{j\in\Omega_{u\otimes\overline{m}}}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\;\;(-1)^{k^{T}(i\oplus
j)}\;\widehat{x}_{(i\oplus j)\otimes m}^{(u\otimes
m)}\;\;\widehat{x}_{(i\oplus j)\otimes\overline{m}}^{(u\otimes\overline{m})}$
$\displaystyle=$ $\displaystyle
2^{-\\#u/2}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}\sum_{i\in\Omega_{u\otimes
m}}(-1)^{k^{T}i}\widehat{x}_{i}^{(u\otimes
m)}\sum_{j\in\Omega_{u\otimes\overline{m}}}(-1)^{k^{T}j}\widehat{x}_{j}^{(u\otimes\overline{m})}$
$\displaystyle=$ $\displaystyle
2^{-\\#u/2}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}2^{\\#(u\otimes
m)/2}x_{k\otimes m}^{(u\otimes
m)}2^{\\#(u\otimes\overline{m})/2}x_{k\otimes\overline{m}}^{(u\otimes\overline{m})}$
$\displaystyle=$
$\displaystyle\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}x_{k\otimes
m}^{(u\otimes m)}x_{k\otimes\overline{m}}^{(u\otimes\overline{m})}$
$\blacksquare$
Theorems 8.1, 8.2, and 3 show that the effect of crossover using mask $m$ is
to move the population towards linkage equilibrium relative to $m$. Following
the population biologists (see [CK70] for example), we define the population
$x$ to be in linkage equilibrium relative to mask $m$ if
$\widehat{x}_{k}=\widehat{x}_{k\otimes m}\widehat{x}_{k\otimes\overline{m}}$,
or equivalently if $x_{k}^{(u)}=x_{k\otimes m}^{(u\otimes
m)}x_{k\otimes\overline{m}}^{(u\otimes\overline{m})}$ for all $k\in\Omega$. If
a population is in linkage equilibrium with respect to all masks of a family
of crossover masks that separates any pair of bit positions, then the
population will be completely determined by the order 1 schemata averages (or
equivalently the Walsh coefficients $\widehat{x}_{k}$ with $\\#k=1$). This is
formalized in theorem 3.0 of [VW98b] (Geiringer’s theorem).
Continuing the numerical example, suppose that one-point crossover with
crossover rate $1/2$ is applied to the $\ell=5$ population whose schema
averages for $u=10$ are given by $s^{(10)}=\frac{1}{20}<7,8,1,4>$. To apply
theorem 3, we need $s^{(2)}$ and $s^{(8)}$. These are easily obtained from
$s^{(10)}$: $s_{0}^{(2)}=s_{0}^{(10)}+s_{8}^{(10)}=\frac{2}{5}$,
$s_{2}^{(2)}=s_{2}^{(10)}+s_{10}^{(10)}=\frac{3}{5}$,
$s_{0}^{(8)}=s_{0}^{(10)}+s_{2}^{(10)}=\frac{3}{4}$,
$s_{8}^{(8)}=s_{8}^{(10)}+s_{10}^{(10)}=\frac{1}{4}$.
Let $y={\cal C}(s)$. As before, the probability of a crossover mask for which
$u\otimes m\neq 0$ and $u\otimes\overline{m}\neq 0$ is $1/4$. Thus,
$\displaystyle y_{0}^{(10)}$ $\displaystyle=$
$\displaystyle\frac{3}{4}s_{0}^{(10)}+\frac{1}{4}s_{0}^{(2)}s_{0}^{(8)}=\frac{3}{4}\cdot\frac{7}{20}+\frac{1}{4}\cdot\frac{2}{5}\cdot\frac{3}{4}=\frac{27}{80}$
$\displaystyle y_{2}^{(10)}$ $\displaystyle=$
$\displaystyle\frac{3}{4}s_{2}^{(10)}+\frac{1}{4}s_{2}^{(2)}s_{0}^{(8)}=\frac{3}{4}\cdot\frac{8}{20}+\frac{1}{4}\cdot\frac{3}{5}\cdot\frac{3}{4}=\frac{33}{80}$
$\displaystyle y_{8}^{(10)}$ $\displaystyle=$
$\displaystyle\frac{3}{4}s_{8}^{(10)}+\frac{1}{4}s_{0}^{(2)}s_{8}^{(8)}=\frac{3}{4}\cdot\frac{1}{20}+\frac{1}{4}\cdot\frac{2}{5}\cdot\frac{1}{4}=\frac{5}{80}=\frac{1}{16}$
$\displaystyle y_{10}^{(10)}$ $\displaystyle=$
$\displaystyle\frac{3}{4}s_{10}^{(10)}+\frac{1}{4}s_{2}^{(2)}s_{8}^{(8)}=\frac{3}{4}\cdot\frac{4}{20}+\frac{1}{4}\cdot\frac{3}{5}\cdot\frac{1}{4}=\frac{15}{80}=\frac{3}{16}$
One can check that $y^{(10)}$ is the Walsh transform of $\widehat{y}^{(10)}$
computed earlier.
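The formula of Theorem 8.3 can also be evaluated mechanically from the full
post-selection distribution. The sketch below (hypothetical names, exact
rational arithmetic) uses one-point crossover with rate $c=1/2$ and reproduces
the four values just computed.

```python
from fractions import Fraction as F

ell, n, u = 5, 32, 10
# Post-selection distribution s over Omega (support on the five example strings).
x = {6: F(5, 20), 7: F(3, 20), 10: F(4, 20), 13: F(1, 20), 21: F(7, 20)}

def schema_sum(v, k):
    """x^(v)_k: total probability of strings whose fixed positions (mask v) equal k."""
    return sum(p for j, p in x.items() if j & v == k)

# One-point crossover masks for ell = 5 with crossover rate c = 1/2.
chi = {0: F(1, 2)}
for i in range(1, ell):
    chi[(1 << i) - 1] = F(1, 2) / (ell - 1)

def y_schema(k):
    """Exact schema average after crossover (Theorem 8.3)."""
    total = F(0)
    for m in range(n):
        mbar = m ^ (n - 1)
        w = (chi.get(m, F(0)) + chi.get(mbar, F(0))) / 2
        if w:
            total += w * schema_sum(u & m, k & m) * schema_sum(u & mbar, k & mbar)
    return total

print([y_schema(k) for k in (0, 2, 8, 10)])   # 27/80, 33/80, 1/16, 3/16
```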
###### Corollary 8.4
(Approximate schema theorem for crossover.) Let $x$ be a population, and let
$y={\cal C}(x)$. Then
$y_{k}^{(u)}\geq
x_{k}^{(u)}\sum_{m}\frac{\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}+\mbox{\raisebox{2.8903pt}{$\chi$}}_{\overline{m}}}{2}[(u\otimes
m=u)\vee(u\otimes\overline{m}=u)]$
Note that the summation over $m$ includes just those crossover masks that do
not “split” the mask $u$.
Proof. For $u$ such that $u\otimes m=u$, we have:
$\displaystyle u\otimes\overline{m}$ $\displaystyle=$ $\displaystyle
0\mbox{~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} and }$ $\displaystyle
x^{(u\otimes\overline{m})}_{k\otimes\overline{m}}$ $\displaystyle=$
$\displaystyle x^{(0)}_{0}=1\mbox{~{}~{}~{}~{}~{}~{} and }$ $\displaystyle
x^{(u\otimes m)}_{k\otimes m}$ $\displaystyle=$ $\displaystyle x^{(u)}_{k}$
Similarly, for $u$ such that $u\otimes\overline{m}=u$, we have:
$\displaystyle u\otimes m$ $\displaystyle=$ $\displaystyle
0\mbox{~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} and }$ $\displaystyle
x^{(u\otimes m)}_{k\otimes m}$ $\displaystyle=$ $\displaystyle
x^{(0)}_{0}=1\mbox{~{}~{}~{}~{}~{}~{} and }$ $\displaystyle
x^{(u\otimes\overline{m})}_{k\otimes\overline{m}}$ $\displaystyle=$
$\displaystyle x^{(u)}_{k}$
Those terms in the summation of equation (3) for which $(u\otimes
m=u)\vee(u\otimes\overline{m}=u)$ is not true are nonnegative. Thus, if we drop
those terms from the summation, we get the equation of the corollary.
$\blacksquare$
###### Corollary 8.5
(Holland’s approximate schema theorem for 1-point crossover.) Let $x$ be a
population, and let $y={\cal C}(x)$, where ${\cal C}$ is defined through
1-point crossover with a crossover rate of $c$. Then
$y_{k}^{(u)}\geq x_{k}^{(u)}\left(1-c\frac{{\cal L}(u)}{\ell-1}\right)$
Proof. One-point crossover can be defined using $\ell$ crossover masks with a
nonzero probability. The crossover mask $0$ has probability $1-c$, and the
masks of the form $2^{i}-1$, $i=1,\ldots,\ell-1$ have probability
$c/(\ell-1)$. The number of crossover masks such that $(u\otimes
m=u)\vee(u\otimes\overline{m}=u)$ is not true is ${\cal L}(u)$. Thus, the
probability that $(u\otimes m=u)\vee(u\otimes\overline{m}=u)$ is true is
$\left(1-c\frac{{\cal L}(u)}{\ell-1}\right)$
$\blacksquare$
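As a quick numerical check of this bound on the running example (an
illustrative snippet, reusing the values computed above): for $u=10$ the
defining length is ${\cal L}(u)=2$, so with $c=1/2$ and $\ell=5$ the factor is
$3/4$.

```python
c, ell = 0.5, 5
bound_factor = 1 - c * 2 / (ell - 1)       # L(u) = 2 for u = 01010, giving 0.75
s10 = [7/20, 8/20, 1/20, 4/20]             # schema averages after selection
y10 = [27/80, 33/80, 5/80, 15/80]          # exact values after crossover (above)
print(all(y >= bound_factor * s for y, s in zip(y10, s10)))   # -> True
```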
It is not hard to give similar approximate schema theorems for other forms of
crossover, such as two-point crossover and uniform crossover.
## 9 Mutation
In the Vose model, mutation is defined by means of mutation masks. If
$j\in\Omega$, then the result of mutating $j$ using a mutation mask
$m\in\Omega$ is $j\oplus m$. The mutation heuristic is defined by giving a
probability distribution $\mu\in\Lambda$ over mutation masks. In other words,
$\mu_{m}$ is the probability that $m\in\Omega$ is used. Given a population
$x\in\Lambda$, the mutation heuristic ${\cal U}:\Lambda\rightarrow\Lambda$ is
defined by
${\cal U}_{k}(x)=\sum_{j\in\Omega}\mu_{j\oplus k}x_{j}$
The mutation heuristic is a linear operator: it can be defined as
multiplication by the matrix $U$, where $U_{j,k}=\mu_{j\oplus k}$. In other
words, ${\cal U}(x)=Ux$.
In the Walsh basis, the mutation heuristic is represented by a diagonal
matrix.
###### Lemma 9.1
The $k$th component of the mutation heuristic in the Walsh basis is given by
$\sqrt{n}\>\widehat{\mu}_{k}\>\widehat{x}_{k}$
where $n=2^{\ell}$.
Proof. It is sufficient to show that the Walsh transform $\widehat{U}$ of $U$
is diagonal since
$\widehat{{\cal U}(x)}=WUx=(WUW)(Wx)=\widehat{U}\widehat{x}$
The following shows that $\widehat{U}$ is diagonal.
$\displaystyle\widehat{U}_{j,k}$ $\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{v,w}(-1)^{j^{T}v}(-1)^{k^{T}w}U_{v,w}$
$\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{v}\sum_{w}(-1)^{j^{T}v+k^{T}w}\mu_{v\oplus w}$
We now do a change of variable. Let $u=v\oplus w$, which implies that
$w=v\oplus u$.
$\displaystyle\widehat{U}_{j,k}$ $\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{v}\sum_{u}(-1)^{j^{T}v+k^{T}(v\oplus
u)}\mu_{u}$ $\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{v}\sum_{u}(-1)^{(j\oplus
k)^{T}v+k^{T}u}\mu_{u}$ $\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{u}(-1)^{k^{T}u}\mu_{u}\sum_{v}(-1)^{(j\oplus
k)^{T}v}$ $\displaystyle=$ $\displaystyle\sum_{u}(-1)^{k^{T}u}\mu_{u}$
$\displaystyle=$ $\displaystyle\sqrt{n}\widehat{\mu}_{k}[j=k]$
$\blacksquare$
Define $\mu_{k}^{(u)}=\sum_{j\in\Omega_{\overline{u}}}\mu_{k\oplus j}$.
Theorem 7.1 shows that $\mu^{(u)}=W^{(u)}\widehat{\mu}^{(u)}$, where
$\widehat{\mu}_{k}^{(u)}=2^{\\#\overline{u}/2}\widehat{\mu}_{k}$ for all
$k\in\Omega_{u}$.
Define the $2^{\\#u}\times 2^{\\#u}$ matrix $U^{(u)}$ by
$U^{(u)}_{j,k}=\mu^{(u)}_{j\oplus k}$. Note that $U=U^{({\bf 1})}$. The proof
of lemma 9.1 shows that the Walsh transform $\widehat{U^{(u)}}$ of $U^{(u)}$
is diagonal and $\widehat{U^{(u)}}_{k,k}=2^{\\#u/2}\widehat{\mu}_{k}$. Thus,
it is consistent to write $\widehat{U}^{(u)}$ for $\widehat{U^{(u)}}$.
We now assume that each string position $i$, $i=0,1,\ldots,\ell-1$, is mutated
independently of other positions: with a probability of $p_{i}$, the bit at
position $i$ is flipped. If all of the $p_{i}$ are equal to a common value
$p$, then $p$ is called the mutation rate.
Under this assumption, the probability distribution for mutation masks is
given by
$\mu_{m}=\prod_{i=0}^{\ell-1}p_{i}^{m_{i}}(1-p_{i})^{1-m_{i}}$ (4)
where $m_{i}$ denotes bit $i$ of $m$, and where $0^{0}$ is interpreted to be
$1$. For example, the distribution for $\ell=2$ is the vector
$<\begin{array}[]{cccc}(1-p_{0})(1-p_{1})&p_{0}(1-p_{1})&(1-p_{0})p_{1}&p_{0}p_{1}\end{array}>^{T}$
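A small sketch that evaluates this product formula for $\ell=2$ (the flip
probabilities $p_{0}=1/8$ and $p_{1}=1/4$ are arbitrary illustrative values,
not from the paper):

```python
from fractions import Fraction as F

p = [F(1, 8), F(1, 4)]          # hypothetical per-bit flip probabilities p_0, p_1
mu = {m: (p[0] if m & 1 else 1 - p[0]) * (p[1] if m & 2 else 1 - p[1])
      for m in range(4)}
print(mu)   # {0: 21/32, 1: 3/32, 2: 7/32, 3: 1/32}; the values sum to 1
```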
We now want to show that there is an equation similar to (4) for
$\mu_{m}^{(u)}$. The next lemma is a step in that direction. For $u\in\Omega$,
define $I(u)=\\{i\;:\;0\leq i<\ell\mbox{ and }u_{i}=1\\}$.
###### Lemma 9.2
For $v\in\Omega$,
$\sum_{k\in\Omega_{u}}\prod_{i\in I(u)}p_{i}^{k_{i}}(1-p_{i})^{1-k_{i}}=1$ (5)
Proof. The proof is by induction on $\\#u$.
If $\\#u=1$, then $u=2^{j}$ for some $j$, and $I(u)=j$. Also,
$\Omega_{u}=\\{0,u\\}$. Thus, the left side of equation (5) is
$p_{j}^{0}(1-p_{j})^{1}+p_{j}^{1}(1-p_{j})^{0}=(1-p_{j})+p_{j}=1$.
If $\\#u>1$, let $u=v\oplus w$ with $v\otimes w=0$, $\\#v>0$, $\\#w>0$. Then
$\displaystyle\sum_{k\in\Omega_{u}}\prod_{i\in
I(u)}p_{i}^{k_{i}}(1-p_{i})^{1-k_{i}}$ $\displaystyle=$
$\displaystyle\sum_{k\in\Omega_{v}}\sum_{r\in\Omega_{w}}\left(\prod_{i\in
I(v)}p_{i}^{k_{i}}(1-p_{i})^{1-k_{i}}\right)\left(\prod_{j\in
I(w)}p_{j}^{r_{j}}(1-p_{j})^{1-r_{j}}\right)$ $\displaystyle=$
$\displaystyle\left(\sum_{k\in\Omega_{v}}\prod_{i\in
I(v)}p_{i}^{k_{i}}(1-p_{i})^{1-k_{i}}\right)\left(\sum_{r\in\Omega_{w}}\prod_{j\in
I(w)}p_{i}^{r_{j}}(1-p_{j})^{1-r_{j}}\right)$ $\displaystyle=$ $\displaystyle
1$
$\blacksquare$
###### Lemma 9.3
For $u\in\Omega$ and $m\in\Omega_{u}$,
$\mu_{m}^{(u)}=\prod_{i\in I(u)}p_{i}^{m_{i}}(1-p_{i})^{1-m_{i}}$ (6)
Proof.
$\displaystyle\mu_{m}^{(u)}$ $\displaystyle=$
$\displaystyle\sum_{v\in\Omega_{\overline{u}}}\mu_{m\oplus v}$
$\displaystyle=$
$\displaystyle\sum_{v\in\Omega_{\overline{u}}}\left(\prod_{i\in
I(u)}p_{i}^{(m\oplus v)_{i}}(1-p_{i})^{1-(m\oplus
v)_{i}}\right)\left(\prod_{j\in I(\overline{u})}p_{j}^{(m\oplus
v)_{j}}(1-p_{j})^{1-(m\oplus v)_{j}}\right)$ $\displaystyle=$
$\displaystyle\left(\prod_{i\in
I(u)}p_{i}^{m_{i}}(1-p_{i})^{1-m_{i}}\right)\left(\sum_{v\in\Omega_{\overline{u}}}\;\;\prod_{j\in
I(\overline{u})}p_{j}^{(m\oplus v)_{j}}(1-p_{j})^{1-(m\oplus v)_{j}}\right)$
Do a change of variable: let $w=v\oplus(m\otimes\overline{u})$. Then
$\displaystyle\sum_{v\in\Omega_{\overline{u}}}\;\;\prod_{j\in
I(\overline{u})}p_{j}^{(m\oplus v)_{j}}(1-p_{j})^{1-(m\oplus v)_{j}}$
$\displaystyle=$ $\displaystyle\sum_{w\in\Omega_{\overline{u}}}\;\;\prod_{j\in
I(\overline{u})}p_{j}^{w_{j}}(1-p_{j})^{1-w_{j}}$ $\displaystyle=$
$\displaystyle 1$
$\blacksquare$
The next step is to compute the Walsh transform of the mutation probability
distribution under this assumption. It is helpful to do a change of
coordinates. For each $i=0,1,\ldots,\ell-1$, let $q_{i}=1-2p_{i}$. Under this
change of coordinates, equation (6) is equivalent to
$\mu_{m}^{(u)}=2^{-\\#u}\prod_{i\in I(u)}\left(1+(1-2m_{i})q_{i}\right)$
###### Lemma 9.4
For $m\in\Omega_{u}$,
$\widehat{\mu}_{m}^{(u)}=2^{-\\#u/2}\prod_{i\in I(m)}q_{i}$
Proof. The proof is by induction on $\\#u$. For the base case, assume that
$\\#u=1$. Then $u=2^{i}$ for some $i$, and
$\displaystyle\widehat{\mu}^{(u)}$ $\displaystyle=$ $\displaystyle
W^{(u)}\mu^{(u)}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left[\begin{array}[]{cc}1&1\\\
1&-1\end{array}\right]\frac{1}{2}\left[\begin{array}[]{c}1+q_{i}\\\
1-q_{i}\end{array}\right]$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left[\begin{array}[]{c}1\\\
q_{i}\end{array}\right]$
For $\\#u>1$, we have
$\displaystyle\widehat{\mu}_{m}^{(u)}$ $\displaystyle=$ $\displaystyle
2^{-\\#u/2}\sum_{j\in\Omega_{u}}(-1)^{m^{T}j}\;\;2^{-\\#u}\prod_{i\in
I(u)}\left(1+(1-2j_{i})q_{i}\right)$
Let $u=v\oplus w$ where $v\otimes w=0$, $v\neq 0$, and $w\neq 0$.
$\displaystyle\widehat{\mu}_{m}^{(u)}$ $\displaystyle=$ $\displaystyle
2^{-\frac{3}{2}\\#(v\oplus w)}\sum_{j\in\Omega_{v\oplus
w}}(-1)^{m^{T}j}\prod_{i\in I(v\oplus w)}\left(1+(1-2j_{i})q_{i}\right)$
$\displaystyle=$
$\displaystyle\left(2^{-\frac{3}{2}\\#v}\sum_{j\in\Omega_{v}}(-1)^{(m\otimes
v)^{T}j}\prod_{i\in I(v)}\left(1+(1-2j_{i})q_{i}\right)\right)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\left(2^{-\frac{3}{2}\\#w}\sum_{j\in\Omega_{w}}(-1)^{(m\otimes
w)^{T}j}\prod_{i\in I(w)}\left(1+(1-2j_{i})q_{i}\right)\right)$
$\displaystyle=$ $\displaystyle\left(\widehat{\mu}_{m\otimes
v}^{(v)}\right)\left(\widehat{\mu}_{m\otimes w}^{(w)}\right)$ $\displaystyle=$
$\displaystyle\left(2^{-\\#v/2}\prod_{i\in I(m\otimes
v)}q_{i}\right)\left(2^{-\\#w/2}\prod_{i\in I(m\otimes w)}q_{i}\right)$
$\displaystyle=$ $\displaystyle 2^{-\\#u/2}\prod_{i\in I(m)}q_{i}$
$\blacksquare$
###### Lemma 9.5
$\widehat{\mu}_{m}=\frac{1}{\sqrt{n}}\prod_{i\in I(m)}q_{i}$
Proof.
$\widehat{\mu}_{m}=\widehat{\mu}_{m}^{({\bf 1})}=2^{-\\#{\bf 1}/2}\prod_{i\in
I(m)}q_{i}=\frac{1}{\sqrt{n}}\prod_{i\in I(m)}q_{i}$
$\blacksquare$
###### Theorem 9.6
(The mutation heuristic in the Walsh basis.) Let $x\in\Lambda$ be a
population, and let $y={\cal U}(x)$. If $k\in\Omega_{u}$, then
$\widehat{y}_{k}^{(u)}=\widehat{x}_{k}^{(u)}\prod_{i\in I(k)}q_{i}$
Proof.
$\displaystyle\widehat{y}_{k}^{(u)}$ $\displaystyle=$ $\displaystyle
2^{\\#\overline{u}/2}\widehat{y}_{k}$ $\displaystyle=$ $\displaystyle
2^{\\#\overline{u}/2}2^{\ell/2}\widehat{\mu}_{k}\widehat{x}_{k}\mbox{~{}~{}~{}~{}~{}~{}~{}~{}~{}by
lemma \ref{lem:mut_heur}}$ $\displaystyle=$ $\displaystyle
2^{\\#\overline{u}/2}2^{\ell/2}\left(2^{-\\#\overline{u}/2}\widehat{\mu}_{k}^{(u)}\right)\left(2^{-\\#\overline{u}/2}\widehat{x}_{k}^{(u)}\right)$
$\displaystyle=$ $\displaystyle 2^{\\#u/2}\left(2^{-\\#u/2}\prod_{i\in
I(k)}q_{i}\right)\widehat{x}_{k}^{(u)}$ $\displaystyle=$
$\displaystyle\widehat{x}_{k}^{(u)}\;\prod_{i\in I(k)}q_{i}$
$\blacksquare$
Theorem 9.6 shows how mutation affects a population. If $k\neq 0$, and if for
every $i$, $0<p_{i}\leq 1/2$, then $\prod_{i\in I(k)}q_{i}<1$. Thus,
$|\widehat{y}_{k}^{(u)}|<|\widehat{x}_{k}^{(u)}|$. Mutation is decreasing the
magnitude of the schema Walsh coefficients (except for the index 0 coefficient
which is constant at $2^{-\\#u/2}$). If all of these Walsh coefficients were
zero, then Theorem 7.1 shows that all of the corresponding schema averages
would be equal. In other words, mutation drives the population towards
uniformity.
To continue the numerical example, we apply mutation with a mutation rate
of $1/8$ to the population $y$ of the previous section. We start with
$\widehat{y}^{(10)}=\frac{1}{40}<20,-4,10,1>$. Let $z={\cal U}(y)$. For all
$i$, $q=q_{i}=1-2p_{i}=1-1/4=3/4$. Thus,
$\displaystyle\widehat{z}_{0}^{(10)}$ $\displaystyle=$
$\displaystyle\widehat{y}_{0}^{(10)}=\frac{1}{2}$
$\displaystyle\widehat{z}_{2}^{(10)}$ $\displaystyle=$
$\displaystyle\widehat{y}_{2}^{(10)}\cdot
q=\frac{-1}{10}\cdot\frac{3}{4}=-\frac{3}{40}$
$\displaystyle\widehat{z}_{8}^{(10)}$ $\displaystyle=$
$\displaystyle\widehat{y}_{8}^{(10)}\cdot
q=\frac{1}{4}\cdot\frac{3}{4}=\frac{3}{16}$
$\displaystyle\widehat{z}_{10}^{(10)}$ $\displaystyle=$
$\displaystyle\widehat{y}_{10}^{(10)}\cdot
q^{2}=\frac{1}{40}\cdot\frac{9}{16}=\frac{9}{640}$
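In code, this componentwise rescaling is a one-liner (illustrative sketch,
exact rational arithmetic):

```python
from fractions import Fraction as F

q = 1 - 2 * F(1, 8)                           # q_i = 3/4 for every bit
yhat10 = {0: F(20, 40), 2: F(-4, 40), 8: F(10, 40), 10: F(1, 40)}
zhat10 = {k: v * q ** bin(k).count("1") for k, v in yhat10.items()}
print(zhat10)   # {0: 1/2, 2: -3/40, 8: 3/16, 10: 9/640}
```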
###### Lemma 9.7
For $u\in\Omega$,
$\widehat{y}^{(u)}=\widehat{U}^{(u)}\widehat{x}^{(u)}$
Proof. This is just a rewriting of the equation of theorem 9.6 into matrix
form. $\blacksquare$
The following theorem can be easily derived from theorem 19.2 of [Vos99b] by
setting the crossover rate to be zero.
###### Theorem 9.8
(The exact schema theorem for mutation.) Let $x\in\Lambda$ be a population,
and let $y={\cal U}(x)$ where $\cal U$ corresponds to mutating bit $i$ with
probability $p_{i}$ for $i=0,1,\dots\ell-1$. Then
$y^{(u)}=U^{(u)}x^{(u)}$ (9)
Proof.
$\displaystyle y^{(u)}$ $\displaystyle=$ $\displaystyle
W^{(u)}\widehat{y}^{(u)}$ $\displaystyle=$ $\displaystyle
W^{(u)}\widehat{U}^{(u)}\widehat{x}^{(u)}$ $\displaystyle=$ $\displaystyle
W^{(u)}(W^{(u)}U^{(u)}W^{(u)})W^{(u)}x^{(u)}$ $\displaystyle=$ $\displaystyle
U^{(u)}x^{(u)}$
$\blacksquare$
We continue the numerical example. We start with the schema averages computed
in the crossover section: $y^{(10)}=\frac{1}{80}<27,33,5,15>$ and let $z={\cal
U}(y)$ where $\cal U$ corresponds to mutation with a mutation rate of $1/8$.
Recall that $\mu^{(u)}$ is given by equation (6), so
$\mu^{(10)}=<(1-p)^{2},p(1-p),(1-p)p,p^{2}>=\frac{1}{64}<49,7,7,1>$
The entries of $U^{(u)}$ are given by $U_{j,k}^{(u)}=\mu_{j\oplus k}^{(u)}$,
so
$\displaystyle
z^{(10)}=U^{(10)}y^{(10)}=\frac{1}{64}\left[\begin{array}[]{rrrr}49&7&7&1\\\
7&49&1&7\\\ 7&1&49&7\\\ 1&7&7&49\\\
\end{array}\right]\cdot\frac{1}{80}\left[\begin{array}[]{r}27\\\ 33\\\ 5\\\
15\end{array}\right]=\frac{1}{1280}\left[\begin{array}[]{r}401\\\ 479\\\
143\\\ 257\end{array}\right]$
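A minimal sketch (illustrative names, exact rational arithmetic) that builds
$\mu^{(u)}$ from equation (6), forms $U^{(u)}$, and reproduces the vector
above:

```python
from fractions import Fraction as F

ell, u, p = 5, 10, F(1, 8)
omega_u = [k for k in range(2 ** ell) if k & u == k]       # [0, 2, 8, 10]
fixed = [i for i in range(ell) if (u >> i) & 1]            # I(u) = [1, 3]

def mu_u(m):
    """mu^(u)_m from equation (6), with a common mutation rate p."""
    prob = F(1)
    for i in fixed:
        prob *= p if (m >> i) & 1 else 1 - p
    return prob

# U^(u)_{j,k} = mu^(u)_{j xor k}; then z^(10) = U^(10) y^(10)  (equation (9)).
y10 = dict(zip(omega_u, [F(27, 80), F(33, 80), F(5, 80), F(15, 80)]))
z10 = {j: sum(mu_u(j ^ k) * y10[k] for k in omega_u) for j in omega_u}
print([z10[j] for j in omega_u])   # 401/1280, 479/1280, 143/1280, 257/1280
```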
###### Corollary 9.9
(The approximate schema theorem for mutation.) Let $x\in\Lambda$ be a
normalized population, and let $y={\cal U}(x)$. Assume that ${\cal U}$
corresponds to mutation where each bit is mutated (flipped) with probability
$p$. Then
$y_{k}^{(u)}\geq(1-p)^{\\#u}x_{k}^{(u)}$
Proof. The diagonal entries of $U^{(u)}$ are all equal to
$\mu_{0}^{(u)}=\prod_{i\in I(u)}(1-p_{i})$. Under the assumption of
this corollary, $p_{i}=p$ for all $i$, so the diagonal entries of $U^{(u)}$
are all equal to $(1-p)^{\\#u}$. The off-diagonal entries of $U^{(u)}$ are all
nonnegative. If we drop the off-diagonal entries in the computation of
equation (9), we get the result of this corollary. $\blacksquare$
## 10 Computational Complexity
In this section we give the computational complexity of computing the schema
averages for a family of competing schemata after one generation of the
simple GA.
It is more efficient to compute the schema averages after selection using the
normal basis using the algorithm given in section 5, convert to the Walsh
basis using the Fast Walsh transform (see Appendix A), compute the effects of
crossover and mutation in the Walsh basis, and convert back to normal
coordinates using the fast Walsh transform. To convert from $x^{(u)}$ to
$\widehat{x}^{(u)}$ by the fast Walsh transform has complexity
$\Theta(\\#u\cdot 2^{\\#u})$ ([Vos99b]). The complexity of the computation of
theorem 8.2 is $\Theta(\\#u\cdot 2^{\\#u})$ for one or two point crossover
(since the summation over $m$ is $\Theta(\\#u)$). The complexity of the
computation of theorem 9.6 is also $\Theta(\\#u\cdot 2^{\\#u})$. Thus, the
overall computational complexity (assuming an initial finite population and
one or two point crossover) is $\Theta(\\#u\cdot 2^{\\#u}+rK)$ where $K$ was
defined as the cost of doing one function evaluation. Note that the only
dependence on the string length is through $K$. Thus, it is possible to
compute schema averages exactly for very long string lengths.
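For reference, a generic iterative butterfly implementation of the Walsh
transform with the $2^{-\\#u/2}$ normalization of $W^{(u)}$ is sketched below;
this is not the paper's Appendix A code, but it has the stated
$\Theta(\\#u\cdot 2^{\\#u})$ cost when applied to a vector of length
$2^{\\#u}$.

```python
import numpy as np

def fast_walsh(v):
    """Walsh transform of a length-2**d vector, normalised by 2**(-d/2).

    Iterative butterfly; Theta(d * 2**d) additions.
    """
    a = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(a):
        for start in range(0, len(a), 2 * h):
            for i in range(start, start + h):
                a[i], a[i + h] = a[i] + a[i + h], a[i] - a[i + h]
        h *= 2
    return a / np.sqrt(len(a))

# 20 * s^(10) from section 5, indexed by (0, 2, 8, 10):
print(fast_walsh([7, 8, 1, 4]))   # -> [10. -2.  5.  1.], i.e. 20 * shat^(10)
```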
## 11 Conclusion
We have given a version of the Vose infinite population model where the
crossover heuristic function and the mutation heuristic function are separate
functions, rather than combined into a single mixing heuristic function.
We have shown how the expected behavior of a simple genetic algorithm relative
to a family of competing schemata can be computed exactly over one generation.
As was mentioned in section 7, these schema averages over a family of
competing schemata correspond to a coordinate subspace of $\Lambda$ as
expressed in the Walsh basis. In [VW98a], it was shown that the mixing
(crossover and mutation) heuristic is invariant over coordinate subspaces in
the Walsh basis. We have explicitly shown how the Vose infinite population
model (the heuristic function $\cal G$) can be computed on these subspaces. In
fact, the model works in essentially the same way on schema averages as it
does on individual strings.
The formulas are simply stated and easy to understand. They are
computationally feasible to apply even for very long string lengths if the
order of the family of competing schemata is small. (The formulas are
exponential in the order of the schemata.)
A result like the exact schema theorem is most useful if it can be applied
over multiple generations. The results of this paper show that the obstacle to
doing this is selection, rather than crossover and mutation. The result of the
exact schema theorem is the exact schema averages of the family of competing
schemata (or the corresponding Walsh coefficients) after one generation. These
correspond to an “infinite population” which has nonzero components over all
elements of $\Omega$. If the string length is long and no assumptions are made
about the fitness function, then the effect of selection on the schema
averages for the next generation will be computationally infeasible to
compute. Thus, in order to apply the exact schema theorem over multiple
generations for practically realistic string lengths, one will have to make
assumptions about the fitness function. A subsequent paper will explore this
problem.
Acknowledgements: The author would like to thank Yong Zhao, who proofread a
version of this paper.
## References
* [Alt95] Lee Altenberg. The schema theorem and Price’s theorem. In L. Darrell Whitley and Michael D. Vose, editors, Foundations of genetic algorithms 3, pages 23–49. Morgan Kaufmann, 1995.
* [BG87] C. L. Bridges and D. E. Goldberg. An analysis of reproduction and crossover in a binary-coded genetic algorithm. In J. Grefenstette, editor, Proceedings of the Second International Conference on Genetic Algorithms, pages 9–13, Hillsdale, N. J., 1987. Lawrence Erlbaum Associates.
* [CK70] James F. Crow and Motoo Kimura. An Introduction to Population Genetics Theory. Burgess Publishing Company, Minneapolis, Minnesota, 1970.
* [Hol75] John Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, Michigan, 1975.
* [SW97] C. R. Stephens and H. Waelbroeck. Effective degrees of freedom in genetic algorithms and the block hypothesis. In Thomas Bäck, editor, Proceedings of the Seventh International Conference on Genetic Algorithms, pages 34–40, San Mateo, 1997. Morgan Kaufmann.
* [SWA97] C. R. Stephens, H. Waelbroeck, and R. Aguirre. Schemata as building blocks: does size matter. In Foundations of Genetic Algorithms (FOGA-5), pages 117–133, San Mateo, 1997. Morgan Kaufmann.
* [VL91] M. D. Vose and G. E. Liepins. Punctuated equilibria in genetic search. Complex Systems, 5:31–44, 1991.
* [Vos96] M. D. Vose. Modeling simple genetic algorithms. Evolutionary Computation, 3(4):453–472, 1996.
* [Vos99a] M. D. Vose. Random heuristic search. submitted to Theoretical Computer Science, 1999.
* [Vos99b] M. D. Vose. The Simple Genetic Algorithm: Foundations and Theory. MIT Press, Cambridge, MA, 1999.
* [VW98a] M. D. Vose and A. H. Wright. The simple genetic algorithm and the Walsh transform: Part I, theory. Evolutionary Computation, 6(3):253–273, 1998.
* [VW98b] M. D. Vose and A. H. Wright. The simple genetic algorithm and the Walsh transform: Part II, the inverse. Evolutionary Computation, 6(3):275–289, 1998.
Appendix: Table of Notation
$[e]$ | = 1 if $e$ is true, 0 if $e$ is false
---|---
$\ell$ | The string length
$c$ | The arity of the alphabet used in the string representation
$\Omega$ | The set of binary strings of length $\ell$
$n$ | $=c^{\ell}$, the number of elements of $\Omega$
$r$ | The population size
$u\oplus v$ | The strings $u$ and $v$ are bitwise added mod 2 (bitwise XORed)
$u\otimes v$ | The strings $u$ and $v$ are bitwise multiplied mod 2 (bitwise ANDed)
$\overline{u}$ | The ones complement of the string $u$
$\\#u$ | The number of ones in the binary string $u$
$k^{T}j$ | The same as $\\#(k\otimes j)$, the number of ones in $k\otimes j$
$\Lambda$ | The set of nonnegative real-valued vectors indexed over $\Omega$ whose sum is $1$
| = the set of normalized populations
| = the set of probability distributions over $\Omega$
$\Omega_{u}$ | $=\\{k\in\Omega\;:\;u\otimes k=k\\}$
$\Omega_{\overline{u}}\oplus v$ | $=\\{j\oplus v\;:\;j\in\Omega_{\overline{u}}\\}=$ the schema with fixed positions masked by $u$ and specified by $v$
$x_{v}^{(u)}$ | $=\sum_{j\in\Omega_{\overline{u}}}x_{j\oplus v}$ (assuming that $\sum_{j}x_{j}=1$).
| The schema average or sum for the schema $\Omega_{\overline{u}}\oplus v$
$x^{(u)}$ | The vector of schema averages for the family of schemata $\\{\Omega_{\overline{u}}\oplus v\>:\>v\in\Omega_{u}\\}$
$W$ | The Walsh transform matrix, indexed over $\Omega\times\Omega$. $W_{i,j}=\frac{1}{\sqrt{n}}(-1)^{i^{T}j}$
$W^{(u)}$ | The Walsh transform matrix, indexed over $\Omega_{u}\times\Omega_{u}$. $W_{i,j}^{(u)}=2^{-\\#u/2}(-1)^{i^{T}j}$
$\widehat{x}$ | $=Wx$, the Walsh transform of normalized population $x$
$\widehat{x}^{(u)}$ | $=2^{\\#\overline{u}/2}\widehat{x}$, also the Walsh transform $W^{(u)}x^{(u)}$ of $x^{(u)}$ with respect to $\Omega_{u}$
$\mbox{\raisebox{2.8903pt}{$\chi$}}_{m}$ | The probability that $m\in\Omega$ is used as a crossover mask
$\mu_{m}$ | The probability that $m\in\Omega$ is used as a mutation mask
$\mu_{m}^{(u)}$ | $\sum_{j\in\Omega_{\overline{u}}}\mu_{m\oplus j}$
$p_{i}$ | The probability that bit $i$ is flipped in the mutation step
$q_{i}$ | $=1-2p_{i}$
$U$ | The matrix indexed over $\Omega\times\Omega$ and defined by $U_{j,k}=\mu_{j\oplus k}$
$U^{(u)}$ | The matrix indexed over $\Omega_{u}\times\Omega_{u}$ and defined by $U^{(u)}_{j,k}=\mu^{(u)}_{j\oplus k}$
|
arxiv-papers
| 2011-05-18T05:37:36 |
2024-09-04T02:49:18.866584
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Alden H. Wright",
"submitter": "Alden Wright Alden Wright",
"url": "https://arxiv.org/abs/1105.3538"
}
|
1105.3543
|
Physical properties of the circumnuclear starburst ring in the barred Galaxy NGC 1097
Pei-Ying Hsieh1,2,
Satoki Matsushita2,7,
Guilin Liu3,
Paul T. P. Ho2,6,
Nagisa Oi4, and
Ya-Lin Wu2,5,
$^1$ Institute of Astrophysics, National Central University,
No.300, Jhongda Rd., Jhongli City, Taoyuan County 32001, Taiwan, R.O.C.
$^2$ Academia Sinica Institute of Astronomy and
Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan, R.O.C.
$^3$ Astronomy Department, University of Massachusetts,
Amherst, MA 01003, USA
$^4$ Department of Astronomy, School of Science, Graduate University for Advanced Studies,
2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
$^5$ Institute of Astrophysics, National Taiwan University, No. 1, Sec. 4, Roosevelt Road,
Taipei 10617, Taiwan, R.O.C.
$^6$ Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
$^7$ Joint ALMA Office, Alonso de C$\acute{\rm o}$rdova 3107, Vitacura 763 0355, Santiago, Chile
We report high resolution $^{12}$CO(J = 2–1), $^{13}$CO(J = 2–1),
and $^{12}$CO(J = 3–2) imaging of the Seyfert 1/starburst ring
galaxy NGC 1097 with the Submillimeter Array to study the
physical and kinematic properties of the 1-kpc circumnuclear
starburst ring. Individual star clusters as detected in the HST map of Pa$\alpha$
line emission have been used to determine the star formation rate, and are
compared with the properties of the molecular gas.
The molecular ring has been resolved into individual clumps at
the GMA scale of 200–300 pc in all three CO lines.
The intersection between the dust lanes and the starburst ring, which is
associated with the orbit-crowding region, is resolved into two
physically/kinematically distinct features in the
$1\farcs5\times1\farcs0$ (105$\times$70 pc) $^{12}$CO(J = 2–1) map. The clumps associated with the
dust lanes have broader line widths, higher surface gas densities, and
lower star formation rates, while the narrow line clumps associated
with the starburst ring have the opposite characteristics. A Toomre Q value
below unity at the radius of the ring suggests that the molecular ring
is gravitationally unstable and fragments at the scale of the GMAs.
The line widths and surface density of gas mass of the clumps show
an azimuthal variation related to the large scale dynamics. The star
formation rate, on the other hand, is not significantly affected by the
dynamics, but has a correlation with the intensity ratio of $^{12}$CO
(J = 3–2) and $^{12}$CO(J = 2–1), which traces the denser gas
associated with star formation. Our resolved CO map, especially in
the orbit-crowding region, for the first time demonstrates observationally that
the physical/kinematic properties of the GMAs are affected
by the large scale bar-potential dynamics in NGC 1097.
§ INTRODUCTION
NGC 1097 [SB(s)b; de Vaucouleurs et al., 1991] is a nearby
(D = 14.5 Mpc; 1$\arcsec$ = 70 pc, Tully, 1988) barred spiral
galaxy. A pair of dust lanes are located at the leading edges of
the major bar. A radio continuum image at 1.465 GHz shows faint
ridges coinciding with the dust lanes [Hummel et al., 1987]. The nucleus
is thought to be a transition object from LINER to Seyfert 1
[Storchi-Bergmann et al., 2003]. Detailed studies of the nucleus show
morphological and kinematic evidence of nuclear spirals
on the order of 30 pc, which were interpreted as part of the fueling chain
to the very center [Fathi et al., 2006, Davies et al., 2009, van de Ven & Fathi, 2010].
NGC 1097 is also an IRAS bright galaxy [Sanders et al., 2003].
A large fraction of the IR flux
arises from its 1 kpc circumnuclear starburst ring
<cit.>. The starburst ring
hosts “hot spots” composed of super star clusters
identified in HST images [Barth et al., 1995], and was suggested
to have undergone an instantaneous burst of star formation
$\sim$6–7 Myr ago [Kotilainen et al., 2000].
The molecular gas of NGC 1097 in the nuclear region has
been previously mapped in the dense gas tracer of HCN(J = 1–0),
low excitation lines of $^{12}$CO(J = 1–0)
and $^{12}$CO(J = 2–1) <cit.>.
These maps show a central concentration coincident with the peak of the 6-cm radio
continuum core [Hummel et al., 1987], as well as a molecular ring
coincident with the starburst ring. A pair of molecular ridges
coincident with the dust lanes are also detected, and show
non-circular motions, possibly caused by the bar-potential
dynamics <cit.>. The molecular ring
has a typical warm temperature ($T_{\rm K} \sim 100$ K) and
denser gas ($n_{\rm H_{2}} \sim 10^{3}$ cm$^{-3}$) consistent
with the starburst environments [Wild et al., 1992, Aalto et al., 1995].
The molecular ring exhibits a
twin-peak structure in the 4$\arcsec$–10$\arcsec$ resolution
CO and HCN maps, where
a pair of molecular concentrations are located in the intersection
of the molecular dust lanes and
the star forming ring. Its orientation is nearly perpendicular
to the stellar bar.
The twin-peak has higher H$_{2}$ column density
than the surrounding ring, and similar features have been seen in other
barred galaxies, and can be explained by the crowding of
gas streamlines <cit.>.
The gas flow gradually changes direction and migrates
toward the center of the galaxy to accumulate to form a ring
[Schwarz, 1984, Athanassoula, 1992, Piner et al., 1995].
Subsequent enhanced star formation may occur through
gravitational fragmentation stochastically [Elmegreen, 1994],
or dynamically driven by collision of molecular clouds [Combes & Gerin, 1985],
or alternatively originated from the shock compressed gas near the contact point
of the dust lanes and the ring [B$\ddot{\rm o}$ker et al., 2008]. Another intriguing
topic is thus whether the occurrence of the circumnuclear starburst ring
would suppress or boost its nuclear
activities <cit.>.
High spatial/kinematic resolution observations of molecular lines
are essential to study the circumnuclear ring structures, since they are the
sites of star formation and respond to the large scale dynamics.
NGC 1097 is one of the best examples for studying circumnuclear rings,
with its typical structures of dust lanes, starburst ring, and nuclear spirals.
In order to study the physical and kinematic properties
of the starburst ring, especially in the twin-peak region,
we have now obtained higher resolution
$^{12}$CO(J = 2–1) ($1\farcs5\times1\farcs0$),
$^{13}$CO(J = 2–1) ($1\farcs8\times1\farcs4$),
and $^{12}$CO(J = 3–2) ($3\farcs5\times2\farcs1$)
maps down to 100 pc. By virtue of the high angular resolution
multi-J lines, we derive the fundamental properties of
molecular gas as well as star formation in the ring in order
to give a comprehensive view of this system.
§ OBSERVATIONS AND DATA REDUCTION
§.§ SMA observations
We observed NGC 1097 with the Submillimeter Array[
The SMA is a joint project between the Smithsonian
Astrophysical Observatory and the Academia Sinica Institute
of Astronomy and Astrophysics and is funded by the
Smithsonian Institution and the Academia Sinica.] [Ho et al., 2004]
at the summit of Mauna Kea, Hawaii. The array consists of
eight 6-m antennas. Four basic configurations of the antennas
are available. With the compact configuration, two nights of
$^{12}$CO(J = 2–1) data were obtained in 2004 (Paper I).
To achieve higher spatial resolution, we obtained two further
nights of $^{12}$CO(J = 2–1) data with the extended and the
very extended configurations in 2005. To study the excitation
of the gas, we also obtained one night of $^{12}$CO(J = 3–2)
data in 2006 with the compact configuration. All the observations
have the same phase center. We located the
phase center at the 6-cm peak of the nucleus [Hummel et al., 1987]. Detailed observational parameters,
sky conditions, system performances, and calibration sources
are summarized in Table <ref>.
The SMA correlator processes two IF sidebands separated by
10 GHz, with $\sim$2 GHz bandwidth each. The upper/lower
sidebands are each divided into 24 slightly overlapping chunks of
104 MHz width. With the advantage of this wide bandwidth of
the SMA correlator, the receivers were tuned to simultaneously
detect three CO lines in the 230 GHz band. The
$^{12}$CO(J = 2–1) line was set to be in the upper sideband, while
the $^{13}$CO(J = 2–1) and the C$^{18}$O(J = 2–1) lines
were set to be in the lower sideband. For the $^{12}$CO(J = 3–2) line,
we placed the redshifted frequency of 344.35 GHz in the upper sideband.
We calibrated the SMA data with the MIR-IDL software package.
The detailed calibration procedures of the data of compact
configuration were described in Paper I.
For the extended and very extended configurations, where Uranus
is resolved quite severely, we observed two bright quasars
(3C454.3 & 3C111), and adopted a similar bandpass/flux
calibration method as described in Paper I. At 345 GHz, we used
both Uranus and Neptune for the flux and bandpass calibrators.
Mapping and analysis were done with the MIRIAD and NRAO AIPS packages.
The visibility data were CLEANed in AIPS by task IMAGR.
We performed the CLEAN process to deconvolve the dirty
image to clean image. The deconvolution procedure was adopted
with the H$\ddot{\rm o}$gbom algorithm [H$\ddot{\rm o}$gbom, 1974] and the Clark algorithm [Clark, 1980].
We used a loop gain of 10$\%$ and restricted the CLEAN area through
iterative examination.
The CLEAN iterations were typically stopped at the 1.5$\sigma$ residual levels.
The number of CLEAN components in individual channel maps was typically about 300.
All of the 230 GHz visibility data were combined to achieve a better $uv$
coverage and sensitivity. Our 230 GHz continuum data were constructed by
averaging the line-free channels. Due to the sideband leakage at 230 GHz,
only limited bandwidths of 1.3 GHz and 0.5 GHz were usable in the
upper and lower sidebands, respectively, to make a continuum image. Weak
continuum emission at 230 GHz with a peak intensity of 10
mJy beam$^{-1}$ was
detected at about 4$\sigma$ at the southern part of the ring.
The 1$\sigma$ noise level of the continuum emission at 345 GHz
averaged over the line-free bandwidth of 0.4 GHz is 6 mJy beam$^{-1}$,
and there is an
$\sim3\sigma$ detection in the nucleus and in the ring.
In this paper we did not subtract the continuum emission since it is too faint
as compared to the noise level in the line maps.
The spectral line data of $^{12}$CO(J = 2–1) and $^{12}$CO(J = 3–2)
were binned to 10 km s$^{-1}$ resolution. As the $^{13}$CO(J = 2–1)
emission is fainter, those data were binned to 30 km s$^{-1}$ resolution to
increase the S/N.
The C$^{18}$O(J = 2–1) line emission
was detected in the lower sideband of the 230 GHz data. However,
the leakage from the $^{12}$CO(J = 2–1) line in the upper sideband
was significant. Therefore, in this paper we will not make use of
the C$^{18}$O(J = 2–1) data for further analysis.
In this paper, we present the maps with natural
weighting. The angular resolution and rms noise level per channel
are $1\farcs5\times1\farcs0$ and 26 mJy beam$^{-1}$ (400 mK)
for the $^{12}$CO(J = 2–1), $1\farcs8\times 1\farcs4$ and 30 mJy
beam$^{-1}$ (320 mK) for the $^{13}$CO(J = 2–1) data, and
$3\farcs5\times2\farcs1$ and 35 mJy beam$^{-1}$ (47 mK)
for the $^{12}$CO(J = 3–2) data.
We used AIPS task MOMNT to construct the integrated
intensity-weighted maps. The task would reject pixels
lower than the threshold intensity, set to be 2.5–3$\sigma$ in the
CLEANed cube after smoothing in velocity and spatial direction.
The smoothing kernels are 30 km s$^{-1}$ in the velocity
direction, and a factor of two of the synthesized beam in
the spatial direction in the maps we made.
In the following line ratio analysis, we use the maps truncated to the same
$uv$ coverage, as described in Sect. <ref>.
§.§ HST NICMOS Pa${\alpha}$ image
As one of the sample galaxies in the HST NICMOS survey of nearby galaxies
(PI: Daniela Calzetti, GO: 11080), NGC 1097 was observed on November 15, 2007 by
HST equipped with NIC3 camera. NIC3 images have a
$51\arcsec\times51\arcsec$ field of view and a plate scale of
$0\farcs2$ with an undersampled PSF. In this survey, each observation consists
of images taken in two narrowband filters: one centered on the Pa$\alpha$ recombination line
of hydrogen (1.87 $\mu$m) (F187N), and the other on the adjacent narrow-band continuum
exposure (F190N), which provides a reliable continuum subtraction. Each set of exposures was made with a 7-position small ($<1\arcsec$ step) dither pattern. Exposures of 160 and 192 seconds per dither position in F187N and F190N, respectively, reach a 1$\sigma$
detection limit of 2.3$\times$10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ in the
continuum-subtracted Pa$\alpha$ image of NGC 1097.
Using the STSDAS package of IRAF, we removed the NICMOS Pedestal
effect, masked out bad pixels and cosmic rays, and drizzled the dither
images onto a finer ($0\farcs1$ per pixel) grid frame. The resultant drizzled
F190N image is then scaled and subtracted from its F187N peer after
carefully aligning to the latter using foreground stars. The residual shading
effect in the pure Pa$\alpha$ image is removed by subtracting the median
of each column. This strategy works well for the NGC 1097 data which
contain relatively sparse emission features. The PSF of the final
Pa$\alpha$ image has a $0\farcs26$ FWHM. Further details of the
data reduction and image processing are described in Liu et al., 2010.
§ RESULTS
§.§ Morphologies of the molecular gas
The integrated intensity maps of $^{12}$CO(J = 2–1),
$^{12}$CO(J = 3–2) and $^{13}$CO(J = 2–1) lines are shown
in Figure <ref>.
The maps show a central concentration and a ring-like structure
with a radius of 700 pc ($\sim$10$\arcsec$).
The gas distribution of $^{12}$CO(J = 3–2) map is similar to
that of the $^{12}$CO(J = 2–1) map, where the central concentration
has a higher integrated intensity than the ring. The $^{13}$CO(J = 2–1) map,
on the other hand, shows comparable intensity between the ring and the nucleus.
Comparing with the previous observations (Paper I),
the molecular ring and the central concentration have been resolved
into individual clumps, especially for the twin-peak structure
in the molecular ring, with
the higher resolution $^{12}$CO(J = 2–1) map.
We also show the peak brightness temperature map
of the $^{12}$CO(J = 2–1) emission in Figure <ref>
made with the AIPS task SQASH, which extracts the maximum
intensity along the velocity axis at each pixel.
Note that the ring and
the nucleus have comparable brightness temperatures.
The position of the AGN <cit.>,
which is assumed to be the dynamical center of the galaxy,
seems to be offset by $0\farcs7$ to the northwest of the central peak of the
integrated CO maps.
This is also seen in our previous results
and we interpreted it as a result of the intensity weighting in
the integration (Paper I). To confirm whether the dynamical center
derived from the $^{12}$CO(J = 1–0) emission [Kohno et al., 2003] is
consistent with that of the $^{12}$CO(J = 2–1) data, we present
further analysis in Sect. <ref>.
In Figures <ref> and <ref>, we show
the $^{12}$CO(J = 2–1) and $^{12}$CO(J = 3–2) channel
maps overlaid on the archival HST I-band image (F814W) to
show the gas distribution at different velocities. We show
these images at the resolution of 40 km s$^{-1}$, but we use
the resolution of 10 km s$^{-1}$ for actual analysis.
The astrometry of the HST I-band image was corrected using
the known positions of 19 foreground stars in the
USNO-A 2.0 catalog.
After the astrometry correction, we found that the intensity peak of
the nucleus in the HST image has a $0\farcs8$ offset to the northeast of the
AGN position; this may be due to extinction or inaccurate astrometry,
and we cannot rule out either factor. However, this offset is within
the uncertainty of the CO synthesized beam.
Both $^{12}$CO transitions show emission with
a total velocity extent of $\sim$550–600 km s$^{-1}$ at the
2$\sigma$ intensity level. The western molecular ridge/arms
(coincident with the dust lane in the optical image) joins the
southwestern part of the ring from –162 km s$^{-1}$ to 37
km s$^{-1}$, and the eastern molecular ridge joins the northeast
ring from –42 km s$^{-1}$ to 157 km s$^{-1}$. We find that the
velocity extent of these ridges is $\sim$200 km s$^{-1}$ at
the 2$\sigma$ intensity level. In between the molecular ring
and the nuclear disk, there are also extended ridge emission
connecting the nuclear disk and the ring in both lines.
This ridge emission is more significant in the $^{12}$CO(J = 2–1) map
than in the $^{12}$CO(J = 3–2) map. However, this may be an
angular resolution effect: the ridge emission does not have enough
flux density (Jy beam$^{-1}$) to reach the same S/N in the higher
resolution image, which means it has extended structure, i.e., the
faint ridge emission in the high resolution map is more or less
resolved out.
In Figure <ref>, we show the $^{12}$CO(J = 2–1)
integrated intensity map overlaid on the HST I-band (F814W)
archival image to compare the optical and radio morphologies.
The Pa$\alpha$ image (near-IR) is also shown in Figure <ref>.
The optical image shows a pair of spiral arms in the central
1 kpc region. The arms consist of dusty filaments and star clusters.
Two dust lanes at the leading side of the major bar connect to
the stellar arms. Dusty filaments can also be seen to be filling
the area between the spiral arms and the nucleus. The
$^{12}$CO(J = 2–1) map ($1\farcs5\times1\farcs0$)
shows a central concentration, a molecular ring and molecular
ridges with good general correspondence to the optical image.
However, there are significant differences in a detailed comparison.
Although the starburst ring looks like spiral arms in the optical image,
the molecular gas shows a more complete ring, and the molecular
ridges are seen to join the ring smoothly.
Inside of the stellar ring, the major dust lanes are offset from the
molecular ring. Assuming that the $^{12}$CO emission faithfully
traces the total mass, the prominent dark dust lanes are not
significant features, although they join the molecular ridges at the edge
of the ring. The stellar ring and the star formation activity are then
correlated with the peaks in the total mass distribution.
The central molecular concentration corresponds in general with
the stellar nucleus, but is offset to the south of the stellar light.
The nuclear molecular distribution is quite asymmetric with lots of
protrusions at the lower intensity contours, which may
correspond to gas and dust filaments which connect the
nuclear concentration to the molecular ring.
§.§ Properties of the molecular gas
The molecular ring has been resolved into a
complex structure of compact sources immersed in a diffuse emission
with lower surface brightness as shown in Figure <ref>.
At our resolution (100 pc), the contrast between clumps
and diffuse emission is
not high. Hence, the clumps are possibly connected to each other via
the diffuse emission. With such a physical configuration, it is difficult
to uniquely isolate the individual clumps by some clump finding
algorithm <cit.>.
Moreover, the detection of clumps is dependent on the angular resolution
which is available.
In this paper, we select the peaks of the main
structures in the $^{12}$CO(J = 2–1) integrated intensity map, in order
to locate the individual clumps within the ring and the molecular ridges.
We define the clumps located outside of the 10$\arcsec$ radius as the
dust lane clumps, and those within the 10$\arcsec$ radius as the
ring clumps.
The typical size of Giant Molecular Clouds (GMCs)
is on the order of a few to few tens pc [Scoville et al., 1987].
The size of the clumps in the ring of NGC 1097, as detected with our
synthesized beam, is at least $\sim$200 pc. We are therefore detecting
molecular clumps larger than GMCs, most likely a group of GMCs,
namely Giant Molecular Cloud Associations <cit.>.
Here we still use the term “clumps" to describe the individual peaks
at the scale of GMA. We expect that the clumps would be resolved
into individual GMCs when higher angular resolution is available.
In Table <ref>, we show the quantities
measured within one synthesized beam to study the kinematic properties
with a high resolution (i.e., one synthesize beam) in the following sections.
The observed peak brightness temperatures of individual clumps are
in the range from $\sim$2 – 8 K.
To show the general properties of the GMAs, we also measured
the physical properties integrated over their size in Table <ref>.
We define the area of the GMA by measuring
the number of pixels above the threshold intensity of 5$\sigma$
in the $^{12}$CO(J = 2–1) integrated intensity map.
We calculated the “equivalent radius” if the measured areas are modeled as
spherical clumps. The results
are reported in Table <ref>
together with the resulting $M_{\rm H_{2}}$ integrated over the area.
The derived values of $M_{\rm H_2}$ are therefore larger than that
in Table <ref>, which only measured the mass within one synthesized beam
at the intensity peak.
The method to derive the $M_{\rm H_{2}}$ will be described
in Sect. <ref>.
Several factors are essential to consider for fair comparisons
of our GMAs with other galaxies, such as beam size, filling factor, etc.
A rough comparison shows that the GMAs in the starburst ring have
physical extents of $\sim$200–300 pc, similar in scale to
the GMAs in other galaxies <cit.>.
§.§.§ Spectra of the molecular clumps
We show in Figure <ref> the spectra of the individual clumps
selected above at their peak positions (i.e., within a synthesized beam).
Most of the clumps show single Gaussian profiles.
The Gaussian FWHM line widths of the clumps are determined by
Gaussian fitting, and are listed in Table <ref>.
The line widths we observed are quadratic sums of the
intrinsic line widths and the galactic motion across
the synthesized beam, and the relation can be expressed as
\begin{equation}
\sigma_{\rm obs}^{2} = \sigma_{\rm rot}^{2} + \sigma_{\rm int}^{2},
\end{equation}
where $\sigma_{\rm obs}$ is the observed FWHM line width,
$\sigma_{\rm rot}$ is the line width due to the galactic rotation
and the radial motions within the beam, and the $\sigma_{\rm int}$ is the
intrinsic line width. The systematic galactic motion would be negligible if
the spatial resolution is small enough. We subtract $\sigma_{\rm rot}$
as derived from a circular rotation model by fitting the rotation curve
in Sect. <ref>.
The derived intrinsic line width of the clumps are reported in Table <ref>.
In general, there is $\sim$10–20% difference between the
observed and intrinsic line widths, which suggests that
the galactic circular motions do not dominate the line broadening at this
high resolution.
However, the large “intrinsic” velocity dispersion thus derived by subtracting
circular motions could still be dominated by non-circular motions,
especially in the twin-peak region and in the dust lanes.
We will mention this effect in the discussion.
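For reference, the quadrature subtraction of equation (1) can be written as a short Python sketch; the FWHM values below are placeholders rather than entries from Table <ref>:

```python
import numpy as np

# Equation (1): sigma_int = sqrt(sigma_obs^2 - sigma_rot^2).
# Placeholder FWHM values [km/s], not measurements from the paper.
sigma_obs = np.array([55.0, 90.0, 40.0])   # observed line widths
sigma_rot = np.array([20.0, 30.0, 15.0])   # rotation/radial broadening within the beam

sigma_int = np.sqrt(np.clip(sigma_obs**2 - sigma_rot**2, 0.0, None))
print(sigma_int)                           # intrinsic line widths
print(1.0 - sigma_int / sigma_obs)         # fractional difference (~10-20% in the text)
```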
In Table <ref>, we define the clumps based on their location
and their velocity dispersions. The clumps in the ring are further
designated by their velocity dispersion being broader or narrower
than 30 km s$^{-1}$, and are named B1,..., B3 and N1,..., N11,
respectively. The clumps located at the dust lanes are named D1,..., D5.
§.§.§ Mass of the clumps
Radio interferometers have a discrete sampling of the $uv$ coverage
limited by both the shortest and longest baselines. Here, we estimate the
effects due to the missing information.
In Figure <ref>, we convolved our newly combined
SMA data to match the beam size of the JCMT data (21$\arcsec$),
and overlaid the spectra to compare the flux.
The integrated $^{12}$CO(J = 2–1) fluxes of the JCMT and SMA
data are $\sim$120 K km s$^{-1}$ and $\sim$71 K km s$^{-1}$,
respectively. The integrated $^{13}$CO(J = 2–1) fluxes of the JCMT and SMA
data are $\sim$15 K km s$^{-1}$ and $\sim$7 K km s$^{-1}$
respectively. Therefore our SMA data
recover $\sim60\%$ and $\sim47\%$ of the $^{12}$CO(J = 2–1)
and $^{13}$CO(J = 2–1) fluxes measured by the JCMT, respectively.
If the missing flux
is attributed to the extended emission, then the derived fluxes for the compact
clumps will remain reliable.
However, the spectra of the SMA data seem to have similar line profiles as that
of the JCMT. Part of the inconsistency could then possibly be due to
the uncertainty of the flux calibration, which are $15\%$ and $20\%$ for the
JCMT and SMA, respectively.
We note therefore that the following quantities
measured from the flux will have uncertainties of at least $20\%$
from the flux calibration.
There are several ways to calculate the molecular gas mass.
The conventional H$_{2}$/$I_{\rm CO}$ ($X_{\rm CO}$) conversion
factor is one of the methods to derive the gas mass assuming that the
molecular clouds are virialized. First, we use the $^{12}$CO emission
to calculate the molecular H$_{2}$ column density, $N_{\rm H_{2}}$, as
\begin{equation}
N_{\rm H_{2}} = X_{\rm CO} \int{{T_{\rm b}} dv} = X_{\rm CO} I_{\rm CO},
\end{equation}
where $N_{\rm H_{2}}$ for each clump is calculated by
adopting the $X_{\rm CO}$ conversion factor of 3$\times$10$^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ [Solomon et al., 1987], $T_{\rm b}$ is the brightness temperature, and $dv$ is the line width.
The values are
listed in Table <ref>. Note that the $X_{\rm CO}$ has
wide range of 0.5–4$\times$10$^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$
[Young & Scoville, 1991, Strong & Mattox, 1996, Dame et al., 2001, Draine et al., 2007, Meier & Turner, 2001, Meier et al., 2008]. We adopted
3$\times$10$^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ to be consistent
with Paper I.
The surface molecular gas mass density
$\Sigma_{\rm H_2}$ and the molecular gas mass $M_{\rm H_2}$
are thus calculated as,
\begin{equation}
\Sigma_{\rm H_{2}} = N_{\rm H_{2}} m_{\rm H_{2}},
\end{equation}
\begin{equation}
{M}_{\rm H_{2}} = \Sigma_{\rm H_{2}} {\rm d}\Omega,
\end{equation}
where $m_{\rm H_{2}}$ and d$\Omega$ are the mass of a hydrogen molecule
and the solid angle of the integrated area, respectively.
Since this conversion factor is used for the $^{12}$CO(J = 1–0) emission,
we assume the simplest case where the ratio of
$^{12}$CO(J = 2–1)/$^{12}$CO(J = 1–0) is unity.
The H$_{2}$ mass of the clumps measured within one synthesized beam
are listed in Table <ref>. The
molecular gas mass ($M_{\rm gas}$) is obtained by multiplying the H$_{2}$ mass by a factor of 1.36 to account for helium.
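As an illustration of equations (2)–(4), a minimal Python sketch of the column density and mass calculation is given below; the integrated intensity is an example value, while $X_{\rm CO}=3\times10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, the distance of 14.5 Mpc, the $1\farcs5\times1\farcs0$ beam, and the helium factor of 1.36 follow the text:

```python
import numpy as np

# Equations (2)-(4): N_H2 = X_CO * I_CO, Sigma_H2 = N_H2 * m_H2,
# M_H2 = Sigma_H2 * dOmega, here over one synthesized beam.
X_CO  = 3.0e20                 # [cm^-2 (K km/s)^-1], as adopted in the text
m_H2  = 2.0 * 1.6726e-24       # mass of a hydrogen molecule [g] (approximate)
M_sun = 1.989e33               # [g]
pc    = 3.086e18               # [cm]

I_CO  = 500.0                  # integrated intensity [K km/s] (placeholder value)
N_H2  = X_CO * I_CO                                   # [cm^-2]
Sigma_H2 = N_H2 * m_H2 / M_sun * pc**2                # [M_sun/pc^2]

# Gaussian beam area for an assumed 1.5" x 1.0" beam (1" = 70 pc at 14.5 Mpc).
beam_area_pc2 = np.pi / (4.0 * np.log(2.0)) * 105.0 * 70.0
M_H2  = Sigma_H2 * beam_area_pc2                      # [M_sun] within one beam
M_gas = 1.36 * M_H2                                   # helium correction
print(N_H2, Sigma_H2, M_H2, M_gas)
```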
The $^{12}$CO(J = 2–1) is usually optically thick and only traces the
surface properties of the clouds. Optically thinner
$^{13}$CO(J = 2–1) would be a better estimator of the total column density.
We therefore calculate the average $M_{\rm H_{2}}$ in the nucleus and the ring to
compare the $M_{\rm H_{2}}$ derived from both $^{12}$CO(J = 2–1) and $^{13}$CO(J = 2–1) lines.
In the case of $^{12}$CO(J = 2–1), the average $M_{\rm H_{2}}$ of the ring is
about 1.7$\times10^{7}$ M$_{\odot}$ within one synthesized beam, and the
corresponding value for the nucleus is 3.4$\times10^{7}$ M$_{\odot}$.
However, in Paper I we derived an intensity ratio of
$^{12}$CO(J = 2–1)/$^{12}$CO(J = 1–0) of $\sim$2 for the nucleus
in the lower resolution map. Therefore the $M_{\rm H_{2}}$ of the nucleus
is possibly smaller than that derived above.
The conventional Galactic $X_{\rm CO}$ of 3$\times10^{20}$ cm$^{-2}$
(K km s$^{-1}$)$^{-1}$ is often suggested to be overestimated in the galactic center and the
starburst environment by a factor of 2 to 5 <cit.>.
Therefore, our estimated $M_{\rm H_{2}}$ in the ring and the nucleus
might be smaller at least by a factor of 2.
Assuming that the $^{13}$CO(J = 2–1) emission is optically thin and
a $^{13}$CO/H$_{2}$ abundance of 1$\times10^{-6}$ [Solomon et al., 1979],
we calculate $M_{\rm H_{2}}$ for excitation temperatures ($T_{\rm ex}$)
of 20 K and 50 K under LTE conditions.
The $M_{\rm H_{2}}$ of the nucleus averaged over one synthesized beam
are (7.0$\pm$2.5)$\times10^{6}$ M$_{\odot}$ and (1.3$\pm$0.4)$\times10^{7}$
M$_{\odot}$ for 20 K and 50 K, respectively.
The $M_{\rm H_{2}}$ of the ring averaged over one synthesized beam
are (5.8$\pm$2.5)$\times10^{6}$ M$_{\odot}$ and (1.1$\pm$0.4)$\times10^{7}$
M$_{\odot}$ for 20 K and 50 K, respectively. With the assumption of
constant $^{13}$CO/H$_{2}$ abundance, if the $T_{\rm ex}$ is $\le$
20 K, then the conversion factor we adopted for the $^{12}$CO line
is overestimated by a factor of $\sim$3 to 5. The overestimation
is smaller (a factor of 2 to 3) if the gas is as warm as 50 K.
We measured the $^{12}$CO(J = 2–1) flux higher than 3$\sigma$
to derive the total H$_{2}$ mass of the nucleus and the ring.
The total flux of the nucleus and the ring are
490.2$\pm$59.3 Jy km s$^{-1}$ and 2902.2$\pm$476.8 Jy km s$^{-1}$,
respectively. Adopting $^{12}$CO(J = 2–1)/$^{12}$CO(J = 1–0)
intensity ratios of 1.9$\pm$0.2 and 1.3$\pm$0.2
for the nucleus and the ring (Paper I), the $M_{\rm H_2}$ of the
nucleus and the ring are (1.6$\pm$0.3)$\times$10$^{8}$ M$_{\odot}$
and (1.4$\pm$0.3)$\times$10$^{9}$ M$_{\odot}$, respectively.
Thus the nucleus and the ring account for 10% and 90%
of the $M_{\rm H_2}$ within the 2 kpc circumnuclear region, respectively.
We calculate the virial mass of individual clumps by
\begin{equation}
M_{\rm vir} = \frac{2r\sigma_{\rm rms}^{2}}{G},
\end{equation}
where ${\it r}$ is the radius of the clump derived in
Sect. <ref>, and $\sigma_{\rm rms}$ is the three-dimensional
intrinsic root-mean-square velocity dispersion, which is equal to
$\sqrt{3/(8\ln 2)}\,\sigma_{\rm int}$: the factor $1/(8\ln 2)$ converts the
FWHM $\sigma_{\rm int}$ to a one-dimensional dispersion, and the factor of 3
accounts for the fact that we observe $\sigma_{\rm int}$ in only one dimension,
assuming that it is isotropic.
The radius is taken to be the size of the clumps.
The results are shown in
Table <ref>. Note that the virial mass is for GMAs not
GMCs, and we assume that the GMAs are bounded structures. The ratio of the virial mass $M_{\rm vir}$ and
$M_{\rm gas}$ are shown in Table <ref>.
We found that the narrow line clumps have $M_{\rm vir}$/$M_{\rm gas}$
that are more or less about unity, but are larger than unity for the broad
line and dust lane clumps.
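A minimal Python sketch of the virial mass of equation (5), with placeholder clump parameters rather than the values of Table <ref>, is:

```python
import numpy as np

# Equation (5): M_vir = 2 r sigma_rms^2 / G, with
# sigma_rms = sqrt(3 / (8 ln 2)) * sigma_int (FWHM -> 3D rms dispersion).
G = 4.301e-3               # gravitational constant [pc (km/s)^2 / M_sun]

r_pc      = 130.0          # clump equivalent radius [pc] (placeholder)
sigma_int = 50.0           # intrinsic FWHM line width [km/s] (placeholder)
M_gas     = 5.0e7          # gas mass from the CO luminosity [M_sun] (placeholder)

sigma_rms = np.sqrt(3.0 / (8.0 * np.log(2.0))) * sigma_int
M_vir = 2.0 * r_pc * sigma_rms**2 / G      # [M_sun]
print(M_vir, M_vir / M_gas)                # ratio ~1 suggests a bound clump
```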
We plot the general properties of the molecular gas mass of individual
molecular clumps in Figure <ref>.
In Figure <ref>a, the histogram
of the gas mass integrated over their size seems to be a power law with a sharp drop at the
low mass end. This is due to the sensitivity limit,
since the corresponding 3$\sigma$ mass limit is $\sim27\times10^{6}$ M$_{\odot}$ for a clump with a diameter of $3\farcs3$ (the average
of the clumps in Table <ref>).
In Figure <ref>b, the FWHM intrinsic line
widths have a weak correlation with gas mass in the ring.
In Figure <ref>a, we show the azimuthal
variation of $\Sigma_{\rm H_{2}}$ calculated at the emission peaks of the clumps
integrated over one synthesized beam.
If we assume that NGC 1097 has a trailing spiral, then the
direction of rotation is clockwise from east (0$\degr$)
to north (90$\degr$).
The orbit crowding regions lie at azimuthal angles of roughly
0$\degr$ to 45$\degr$ and 180$\degr$ to 225$\degr$.
Note that the dust lane clumps typically have $\Sigma_{\rm H_{2}}$ similar to
the narrow line ring clumps, which are lower than the broad line
ring clumps in the orbit crowding region. The average
$\Sigma_{\rm H_{2}}$ of the narrow line ring and dust lane
clumps is $\sim1800$ M$_{\odot}$ pc$^{-2}$, and
$\sim2300$ M$_{\odot}$ pc$^{-2}$ for the broad line ring clumps.
In Figure <ref>b,
the velocity dispersion of clumps located at the orbit crowding
region and the dust lane are larger than that of the narrow line
ring clumps. The average velocity dispersion of the narrow line
clumps is $\sim50$ km s$^{-1}$, and $\sim90$ km s$^{-1}$
for the broad line ring/dust lane clumps.
Given the similar line brightnesses of the peaks (Table <ref>), the increased
$\Sigma_{\rm H_{2}}$ is probably the result of increased line widths, as
indicated by equation (2).
On the other hand, since the $N_{\rm H_2}$
is proportional to the number density of gas and line-of-sight path (i.e., optical depth),
the higher $\Sigma_{\rm H_2}$ of the broad line clumps may be due
to either a larger number density, or a larger line-of-sight path. We will discuss
these effects in the discussion.
§.§ Young star clusters
To check how the star formation properties are associated with the molecular
clumps in the ring, we compare the 6-cm radio
continuum, V-band, and Pa${\alpha}$ images with the molecular gas images.
The 6-cm radio continuum sources are selected from
the intensity peaks [Hummel et al., 1987]
at an angular resolution of $2\farcs5$. The V-band
clusters ($<$ 13 mag) are selected from
the HST F555W image [Barth et al., 1995].
These V-band selected clusters have typical sizes of 2 pc ($0\farcs03$),
and are suspected to be super star clusters by Barth et al., 1995.
Pa$\alpha$ clusters are identified in our HST F187N image
described in the Sect. <ref>.
The Pa$\alpha$ clusters are identified by SExtractor with
main parameters of detection threshold of 15$\sigma$ (DETECT_THRESH)
and minimum number of pixels above threshold of 1 (DETECT_MINAREA). The
number of deblending sub-thresholds is 50 for DEBLEND_NTHRESH
and 0.0005 for DEBLEND_MINCONT. The parameters
were chosen after extensive testing and visual inspection.
We used 6-cm radio continuum and V-band selected clusters for
the phenomenological comparison with the molecular clumps.
To get a high resolution and uniform sample of star formation
rate (SFR), we use the Pa$\alpha$ clusters in the following analysis.
In the $^{12}$CO(J = 2–1) integrated intensity map (Figure <ref>),
the star clusters and radio continuum sources are located
in the vicinity of molecular clumps within the synthesized beam.
The distribution of the massive star clusters is uniform in the ring instead
of showing clustering in certain regions. The star clusters
do not coincide with most of the CO peaks.
The spatial correlation seems to be better in the peak brightness
temperature map in Figure <ref>. Furthermore, there are no detected
star clusters and radio continuum sources in the dust lane
clumps, namely, clumps D1, D2, D3, and D4.
We corrected the extinction of the Pa$\alpha$ emission by the intensity ratio
of H$\alpha$ (CTIO 1.5 m archived image) and Pa$\alpha$. The PSFs of H$\alpha$ and Pa$\alpha$
are $\sim$1$\arcsec$ and $\sim0\farcs3$, respectively, and an additional
convolving Gaussian kernel has been applied to both H$\alpha$
and Pa$\alpha$ images to match the CO beam size.
The observed intensity ratio of H$\alpha$ and Pa$\alpha$,
(I$_{\rm H\alpha}$/I$_{\rm Pa\alpha}$)$_{\rm o}$, and
the predicted intensity ratio
(I$_{\rm H\alpha}$/I$_{\rm Pa\alpha}$)$_{\rm i}$ are
related through the extinction as follows:
\begin{equation}
\left (\frac{I_{\rm H\alpha}}{I_{\rm Pa\alpha}}\right)_{\rm o} =
\left (\frac{I_{\rm H\alpha}}{I_{\rm Pa\alpha}}\right)_{\rm i} \times
10^{-0.4{E(\rm B-V)}(\kappa_{\rm H{\alpha}}-\kappa_{\rm Pa\alpha})},
\end{equation}
where $E$(B–V) is the color excess, $\kappa_{\rm H\alpha}$ and $\kappa_{\rm Pa\alpha}$
are the extinction coefficients at the wavelength of H$\alpha$
and Pa$\alpha$, respectively.
The predicted intensity ratio of 8.6 is derived for case B
recombination with a temperature of 10000 K
and an electron density of 10$^{4}$ cm$^{-3}$ [Osterbrock, 1989].
The extinction coefficient of H$\alpha$ and Pa$\alpha$ are
adopted from Cardelli's extinction curve [Cardelli et al., 1989],
with $\kappa_{\rm H\alpha}$ = 2.535 and $\kappa_{\rm Pa\alpha}$ = 0.455, respectively.
We derive the color excess $E$(B–V) using equation (6),
and derive the extinction of
Pa$\alpha$ using the following equation,
\begin{equation}
A_{\rm \lambda} = \kappa_{\rm \lambda} E(B-V),
\end{equation}
where $A_{\rm \lambda}$ is the
extinction at wavelength $\lambda$.
We show the derived values for $A_{\rm Pa\alpha}$ and $A_{\rm V}$
in Table <ref>, where $\kappa_{\rm V}$ is 3.1 at V-band.
The average extinction of the clumps at the wavelength of 1.88 $\micron$ (redshifted
Pa$\alpha$ wavelength) is about 0.6 mag, which is quite transparent
as compared with the extinction at the H$\alpha$ line of $\sim$ 4 mag.
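A minimal Python sketch of equations (6) and (7), using the case B ratio of 8.6 and the Cardelli coefficients quoted above but an example observed ratio, is:

```python
import numpy as np

# Equations (6)-(7): color excess from the observed Halpha/Paalpha ratio,
# then the extinction A_lambda = kappa_lambda * E(B-V).
kappa_Ha, kappa_Pa, kappa_V = 2.535, 0.455, 3.1   # Cardelli et al. (1989)
ratio_int = 8.6                                   # intrinsic case B Halpha/Paalpha ratio

ratio_obs = 2.0                                   # example observed ratio (placeholder)

EBV  = np.log10(ratio_int / ratio_obs) / (0.4 * (kappa_Ha - kappa_Pa))
A_Pa = kappa_Pa * EBV                             # extinction at Paalpha [mag]
A_V  = kappa_V * EBV                              # extinction at V band [mag]
print(EBV, A_Pa, A_V)
```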
We calculate SFRs using the Pa${\alpha}$ luminosity based on the
equation in Calzetti et al., 2008 of
\begin{equation}
{\rm SFR} (\rm M_{\odot} ~\rm yr^{-1}) = 4.2\times10^{-41} L_{\rm Pa\alpha} (\rm erg ~s^{-1}).
\end{equation}
The SFR surface density ($\rm\Sigma_{SFR}$) is thus
calculated within the size of the CO synthesized beam ($1\farcs5\times1\farcs0$).
Note that SFR cannot
be determined in the dust lane clumps since we do not detect significant
star clusters in either the Pa$\alpha$ or the H$\alpha$ images.
The low $\rm\Sigma_{SFR}$ in the broad line ring and dust lane clumps is
unlikely to be due to extinction, since the extinction of the broad line
ring clumps is similar to that of the narrow line ring clumps, and the
$\Sigma_{\rm H_{2}}$ of the dust lane clumps is similar to
that of the narrow line ring clumps.
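As an illustration of equation (8), the sketch below converts an example Pa$\alpha$ flux into an SFR and an SFR surface density over one beam; the flux is a placeholder, while the distance, extinction, and beam size follow the text:

```python
import numpy as np

# Equation (8): SFR from the extinction-corrected Paalpha luminosity, and the
# SFR surface density over one 1.5" x 1.0" beam (1" = 70 pc at 14.5 Mpc).
D_cm = 14.5e6 * 3.086e18               # distance in cm
F_Pa = 1.0e-14                         # observed Paalpha flux [erg/s/cm^2] (placeholder)
A_Pa = 0.6                             # average Paalpha extinction from the text [mag]

L_Pa = 4.0 * np.pi * D_cm**2 * F_Pa * 10 ** (0.4 * A_Pa)   # de-reddened luminosity [erg/s]
SFR  = 4.2e-41 * L_Pa                  # [M_sun/yr], equation (8)

beam_area_kpc2 = np.pi / (4.0 * np.log(2.0)) * 0.105 * 0.070   # Gaussian beam area [kpc^2]
Sigma_SFR = SFR / beam_area_kpc2       # [M_sun/yr/kpc^2]
print(SFR, Sigma_SFR)
```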
To compare the star formation activities with the properties of the
molecular gas, we measured the $\rm\Sigma_{SFR}$ at the position
of each clump (Table <ref>). We show the correlation
of $\rm\Sigma_{\rm SFR}$ and the $\rm\Sigma_{H_{2}}$ of the molecular clumps in
Figure <ref>a.
This plot shows very little correlation.
In Figure <ref>, we overlay our data on the plot of $\rm\Sigma_{SFR}$
and $\rm\Sigma_{H_{2}}$ used in Kennicutt (1998) to
compare the small and the large scale star formation.
The average value follows the Kennicutt-Schmidt correlation closely.
However, we have either lower $\rm\Sigma_{\rm SFR}$ or higher $\rm\Sigma_{H_{2}}$
than the global values in Kennicutt, 1998. This might be because our spatial resolution is smaller
than for their data.
Recent investigations have shown that the power scaling
relationship of the spatially-resolved Schmidt-Kennicutt law
remains valid in the sub-kpc scale [Bigiel et al., 2008], to $\sim$200 pc in
M51 [Liu et al., 2010] and M33 [Verley et al., 2010, Bigiel et al., 2010], but becomes
invalid at the scale of GMC/GMAs [Onodera et al., 2010] because the scaling is overcome
by the large scatter. The absence of a correlation in our
100 pc study is thus not surprising because even if the
Schmidt-Kennicutt law is still valid, the scatter is expected
to be as large as $\sim$0.7 dex [Liu et al., 2010], larger than
the dynamical range of the gas density in Figure <ref>a.
Another possible explanation for the inconsistency is the
uncertain conversion factor mentioned in Sect. <ref>,
which is likely to be overestimated in galactic
centers and starburst regions. Moreover, since the
Schmidt-Kennicutt law was derived from global measurements of galaxies
that might be dominated by disk GMCs, our nuclear ring
might not follow the same relation because of its particular
physical conditions.
As for the distribution of $\rm\Sigma_{SFR}$ in the ring,
we find that $\rm\Sigma_{SFR}$ is low in the
broad line clumps in Figure <ref>b, but does not show
an obvious systematic azimuthal variation, unlike $\rm\Sigma_{H_{2}}$ or the
intrinsic line width in Figure <ref>.
In general,
$\rm\Sigma_{SFR}$ is higher in the northern
ring than in the southern ring. This distribution, as an
average quantity, shows no strong dependence
on location within the ring.
§.§ Physical conditions
§.§.§ Intensity ratio of multi-J CO lines
We compare the intensity ratio of the different CO lines on the same spatial
scales by restricting the data to the same $uv$-range from 7.3 k$\lambda$ to
79.6 k$\lambda$ for the $^{12}$CO(J = 2–1), $^{13}$CO(J = 2–1) and $^{12}$CO(J = 3–2)
lines. The matched beam size of all maps is $3\farcs25\times2\farcs55$.
We corrected for the primary beam attenuation in the maps.
We measured the line intensities of individual
clumps in the $uv$-matched integrated intensity maps, and calculated the intensity
ratios in Table <ref>. The $uv$-matched
low resolution maps do have some beam smearing effects on the spectra.
However, an examination of the line profiles and attempts to
correct for line smearing did not affect the derived line ratios
to within the experimental errors.
We estimate the density and temperature of the clumps
with the LVG analysis [Goldreich & Kwan, 1974] in a one-zone
model. The collision rates of CO are from Flower & Launay, 1985
for temperatures from 10 to 250 K, and from McKee et al., 1982
for 500 to 2000 K. We assume $^{12}$CO and $^{13}$CO abundances
with respect to H$_{2}$
of 5$\times10^{-5}$ and 1$\times10^{-6}$,
with the observed velocity gradient of $\sim$ 1 km s$^{-1}$ pc$^{-1}$
of the ring.
We determined the velocity gradient in Paper I from the PV
diagram, and it is consistent with the data in this paper.
The average ratios of the narrow and broad line clumps are used.
Clumps N4, B1, D1, D2, D4, and D5 are excluded
from the averages because of their large uncertainties in R$_{13}$.
Therefore the average R$_{32}$ and R$_{13}$ of the narrow line clumps
are 1.00$\pm$0.02 and 9.90$\pm$2.11, respectively.
The average R$_{32}$ and R$_{13}$ of the broad line clumps
are 0.72$\pm$0.01 and 9.55$\pm$1.56, respectively.
With the constraint of the intensity ratios within the uncertainty,
the estimated temperature and density of the narrow
line clumps are $\ge$250 K and $(4.5\pm3.5)\times10^{3}$
cm$^{-3}$. The broad line clumps have temperatures of $45\pm15$ K
and density of $(8.5\pm1.5)\times10^{2}$ cm$^{-3}$.
The predicted brightness temperature ($T_{\rm b}$) is $\sim$100 K
for the narrow line clumps and $\sim$20 K for the broad line clumps.
However, it seems to be inconsistent with the high/low $\Sigma_{\rm H_2}$
and low/high number density in the broad/narrow line clumps
if we assume a constant scale height for the clumps. The solution may be a
smaller beam filling factor for the narrow line clumps.
In Figure <ref>c, the R$_{32}$ values have a positive correlation with $\rm\Sigma_{SFR}$.
In Figure <ref>d, similar to $\rm\Sigma_{SFR}$,
R$_{32}$ is slightly lower in the broad line ring clumps
and does not show any systematic pattern in the azimuthal direction.
§.§ Kinematics
Figure <ref>a is the intensity weighted isovelocity map of
$^{12}$CO(J = 2–1). The gas motion in the ring
appears to be dominated by circular motion, while it
shows clear non-circular motions in the $^{12}$CO(J = 1–0) map
as indicated by the S-shape nearly parallel to the
dust lanes. As we discussed in Paper I, the non-significant
non-circular motion in the $^{12}$CO(J = 2–1) maps is
perhaps because the dust lanes are not as strongly
detected in $^{12}$CO(J = 2–1) line, along with the fact
that they are closer to the edge of our primary beam, or
the non-circular motion is not prominent at the high spatial resolution.
The circumnuclear gas is in general in solid body rotation.
The velocity gradient of the blueshifted part is slightly steeper
than the redshifted part.
We also show the intensity weighted velocity dispersion map in
Figure <ref>b.
As we mentioned above, the velocity
dispersion is larger in the twin-peak region, and lower in the region
away from the twin-peak region.
The dynamical center of NGC 1097 was derived by Kohno et al., 2003
in their low resolution $^{12}$CO(J = 1–0) map.
With our high resolution $^{12}$CO(J = 2–1) map, we expect to
determine the dynamical center more accurately.
We use the AIPS task GAL to determine the dynamical center. In the task GAL, $^{12}$CO(J = 2–1) intensity-weighted velocity map (Figure <ref>) is used to fit a rotation curve. The deduced kinematic
parameters are summarized in Table 6. We use an exponential curve to fit
the area within 7$\arcsec$ in radius. The observed rotation curve and the fitted model
curve are shown in Figure <ref>. From the fitted parameters, we find that
the offset ($\sim0\farcs3$) of the dynamical center with respect to the position of the AGN is still within
a fraction of the synthesized beam size. The derived $V_{\rm sys}$ has a difference of $\sim$5 km s$^{-1}$
between $^{12}$CO(J = 1–0) and $^{12}$CO(J = 2–1) data, which is less than the velocity resolution of the data. Upon examining
the channel maps of the $^{12}$CO(J = 2–1) data, we find that the peak
of the nuclear emission is
almost coincident with the position of the AGN, with an offset of $0\farcs3$.
We therefore conclude
that the position offset in the integrated intensity map, as mentioned
in Sect. <ref>, is due to the
asymmetric intensity distribution.
§ DISCUSSIONS
§.§ Molecular ring of NGC 1097
§.§.§ Twin-Peak structure
In the low resolution CO maps (Paper I, Kohno et al., 2003),
NGC 1097 shows bright CO twin-peak structure arising at the intersection
of the starburst ring and the dust lanes. CO data at $\ge$300 pc resolution
show that barred galaxies usually have
large central concentrations of molecular gas [Sakamoto et al., 1999].
Kenney et al., 1992 found that in several barred galaxies which host
circumnuclear rings (M101, NGC 3351, NGC 6951),
the central concentrations of molecular gas were resolved into
twin-peak structures when a resolution of $\sim$200 pc is attained.
A pair of CO intensity concentrations are found in these cases,
in the circumnuclear ring, at the intersection of
the ring and the dust lane. Their orientation is
almost perpendicular to the major stellar bar.
The twin-peak structure can be
attributed to the orbit crowding of inflowing gas stream lines.
The gas flow changes from its original orbit (the so-called $x_{1}$ orbit)
when it encounters the shocks, which results in a large deflection
angle and migration to a new orbit (the so-called $x_{2}$ orbit). The gas then
accumulates in the family of $x_{2}$ orbits in the
shape of a ring or nuclear spirals [Athanassoula, 1992, Piner et al., 1995].
Intense massive star formation would follow in the
ring/nuclear spiral once the gas becomes dense enough
to collapse [Elmegreen, 1994].
In our 100 pc resolution CO map, the
starburst molecular ring is resolved into individual
GMAs. In the orbit crowding region, we resolve the twin-peak
into broad line clumps associated with the curved dust
lanes. The narrow line clumps are located away from the
twin-peak and are associated with star formation.
Such “spectroscopic components” have also been seen
in several twin-peak galaxies at the intersection of the dust lanes
and the circumnuclear ring, such as NGC 1365 [Sakamoto et al., 2007],
NGC 4151 [Dumas et al., 2010], NGC 6946 [Schinnerer et al., 2007], and
NGC 6951 [Kohno et al., 1999]. However, most of the spectra
at these intersections show blended narrow/broad line components,
which is perhaps
due to insufficient angular resolution. Our observations
for the first time spatially resolved these two components
toward the twin-peak region of NGC 1097.
It is interesting to note that the circumnuclear ring appears nearly
circular despite the $\sim$42$\degr$ inclination, which indicates
an intrinsically elliptical shape in the galactic plane. The schematic
sketch is shown in Figure <ref>.
The loci of dust lanes are invoked to trace the galactic
shock wave, and their shapes are dependent
on the parameters of the barred potential. In the case of
NGC 1097, the observed dust lanes resemble those in
theoretical studies [Athanassoula, 1992],
with a pair of straight lanes that curve slightly inwards at their
inner ends.
These findings are consistent with the predicted
morphology from bar-driven nuclear inflow.
The physical properties of these clumps are discussed in the following subsection.
§.§.§ The nature of the molecular clumps in the ring
In Figure <ref> and Table <ref>, the peak brightness temperatures
of individual clumps are from 2 to 8 K. These values are lower than the
typical temperatures of molecular gas as expected in the environment of a starburst
($\sim 100$ K), and as estimated by our LVG results in Paper I and this work.
This lower brightness temperature may be due to the small beam filling factor:
\begin{equation}
f_{\rm b} = \left(\frac{\theta^{2}_{\rm s}} {\theta^{2}_{\rm s}+\theta^{2}_{\rm b}}\right) \sim \left(\frac{\theta^{2}_{\rm s}} {\theta^{2}_{\rm b}}\right),
\end{equation}
\begin{equation}
f_{\rm b} = \left(\frac{T_{\rm b}} {T_{\rm c}}\right)
\end{equation}
where $f_{\rm b}$ is the beam filling factor, $\theta_{\rm s}$ and $\theta_{\rm b}$ are
the source size and the beam size, respectively, $T_{\rm b}$ is
the observed brightness temperature, and $T_{\rm c}$ is the actual
temperature of the clouds. We assume here that the source size is
compact and much smaller than the beam ($\theta_{\rm s} \ll \theta_{\rm b}$).
If we assume the LVG-predicted $T_{\rm b}\sim$100 K (Sect. <ref>) for the
narrow line ring clumps and an average observed $T_{\rm b}\sim$5 K,
then $f_{\rm b}\sim$0.05.
This suggests that $\theta_{\rm s}$ is $\sim$20 pc, or that the narrow line
ring clumps are associations of much smaller clumps.
In the case of the broad line ring clumps, with an
average observed $T_{\rm b}$ of $\sim$5 K and
LVG-predicted $T_{\rm b}$ of $\sim$20 K, $f_{\rm b}$ is
$\sim$0.25 and $\theta_{\rm s}$ is $\sim$44 pc.
The inferred size of the broad line ring clumps
is thus a factor of 2 larger than that of the narrow line ring clumps.
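A minimal Python sketch of this filling-factor estimate (equations (9)–(10)), assuming a $\sim$100 pc beam, is:

```python
import numpy as np

# Equations (9)-(10): beam filling factor f_b = T_b / T_c and implied source
# size theta_s = sqrt(f_b) * theta_b for a compact source (theta_s << theta_b).
theta_beam_pc = 100.0                  # assumed beam size [pc]

def source_size(T_obs, T_cloud, theta_beam=theta_beam_pc):
    f_b = T_obs / T_cloud
    return f_b, np.sqrt(f_b) * theta_beam

print(source_size(5.0, 100.0))  # narrow line clumps: f_b ~ 0.05, theta_s ~ 22 pc
print(source_size(5.0, 20.0))   # broad line clumps:  f_b ~ 0.25, theta_s ~ 50 pc
```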
Given the estimated low volume number density of the broad line clumps
in Sect. <ref>, the inconsistency between high $\Sigma_{\rm H_2}$
and low number density mentioned is possibly
due to the large line-of-sight path, or larger scale height of the broad line clumps.
If the assumption of a spherical shape of the clumps holds, the larger
size estimated above seems to be consistent with this scenario. However,
considering the higher opacity of the broad line clumps as suggested by
the high $\Sigma_{\rm H_2}$, it is likely that we are tracing the
diffuse and cold gas at the surface of GMA rather than the dense and warm gas.
On the other hand, the narrow line clumps show the opposite trends
to trace the relatively warm and dense gas.
§.§.§ Azimuthal variation of the line widths and $\Sigma_{\rm H_{2}}$
In Figure <ref>, we found the observed (and intrinsic)
line widths, and $\Sigma_{\rm H_{2}}$, show variations
along the azimuthal direction in the ring.
There are local maxima
of line widths and $\Sigma_{\rm H_{2}}$ at the position of the twin-peak at the orbit crowding regions.
Since the brightness temperature does not vary dramatically
between the emission peaks along the ring, the deduced
peaks in H$_{2}$ column density can be directly attributed to
the increased line widths.
Beam smearing is possibly important as multiple streams may
converge within a synthesized beam. However, as shown in Fig <ref>b,
enhanced line widths occur even over extended regions.
The intrinsic velocity dispersion of both the narrow and the broad line clumps cannot be thermal,
as the implied temperature would be 2000–10000 K.
In the Galactic
GMCs, non-thermal line broadening has been attributed to
turbulence from unknown mechanisms.
In the case of NGC 1097, turbulence could have been generated
by shocks of the orbit crowding
region. Shocked gas as indicated by
H$_{2}$ S(1-0) emission
has been reported toward the twin-peak of NGC 1097
[Kotilainen et al., 2000]. The narrow line clumps further along
the ring may be due to subsequent dissipation of the kinetic
energy from the shock wave.
Similarly large velocity widths of up to $\sim$100 km s$^{-1}$ were also observed
in NGC 6946. With resolutions down to GMC-scale of 10 pc,
multiple components in the spectra were seen at the nuclear twin-peak by
Schinnerer et al., 2007. However, the mechanism to cause the broadened line widths
is not clear at this scale.
As noted earlier, the “intrinsic” line widths we derived contain not only
the random dispersion but also the velocity gradients due to non-circular motions
generated by shock fronts. However, our observations do not have sufficient angular resolution
to resolve the locations and magnitudes of the discrete velocity jumps to be expected across the shock front [Draine & McKee, 1993].
A detailed hydrodynamical model is also needed to quantitatively predict the magnitude of the non-circular velocity gradient in the ring.
§.§ Gravitational stabilities of the GMAs in the starburst ring
How do the GMAs form in the starburst ring? In the ring of NGC 1097,
we consider gravitational
collapse due to Toomre instability [Toomre, 1964].
We estimate the Toomre
Q parameter ($\Sigma_{\rm crit}$/$\Sigma_{\rm gas}$) to see if the rotation of the ring is able
to stabilize against the fragmentation into clumps. Here the Toomre
critical density can be expressed as:
\begin{equation}
\Sigma_{\rm crit} = \alpha
\left(\frac{\kappa \sigma_{\rm int}}{3.36{\rm G}}\right),
\end{equation}
\begin{equation}
\kappa = 1.414
\left(\frac{V}{R}\right)\left(1+\frac{R}{V}\frac{dV}{dR}\right)^{0.5},
\end{equation}
where the constant $\alpha$ is unity,
G is the gravitational constant, $\kappa$ is the epicycle
frequency, $V$ is rotational velocity, and $R$ is the radius of the ring.
If $\Sigma_{\rm H_{2}}$ exceeds $\Sigma_{\rm crit}$,
then the gas will be gravitationally unstable and collapse.
We approximate the velocity gradient d$V$/d$R$ as close to zero
(Figure <ref>)
because of the flat rotation curve of the galaxy at the position of the starburst ring. Therefore $\kappa \sim$
1.414$V$/$R$. $V$ is $\sim$338 km s$^{-1}$ in the galactic plane,
assuming an inclination of $\sim$42$\degr$. We find the ratio
of $\Sigma_{\rm crit}$/$\Sigma_{\rm H_{2}}$ is less than unity
in the ring ($\sim$0.6). Since $\Sigma_{\rm gas}$ consists of H$_{2}$,
HI, and metals, this ratio of 0.6 is an upper limit.
However, HI is often absent in the centers of galaxies,
and NGC 1097 also shows a central HI hole [Higdon & Wallin, 2003].
Hence the HI gas does not make an important contribution in the nuclear region.
A ratio less than unity suggests that the ring is unstable and will
fragment into clumps.
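A minimal Python sketch of this Toomre stability estimate (equations (11)–(12)) under the flat rotation curve approximation is given below; the velocity dispersion and surface density are representative example values:

```python
import numpy as np

# Equations (11)-(12) with dV/dR ~ 0: kappa = 1.414 V / R and
# Sigma_crit = alpha * kappa * sigma / (3.36 G), compared with Sigma_H2.
G = 4.301e-3                  # [pc (km/s)^2 / M_sun]
V, R = 338.0, 700.0           # rotation velocity [km/s] and ring radius [pc] (from the text)
sigma = 30.0                  # gas velocity dispersion [km/s] (example value)
Sigma_H2 = 2000.0             # ring surface density [M_sun/pc^2] (representative value)

kappa = 1.414 * V / R                           # epicyclic frequency [km/s/pc]
Sigma_crit = 1.0 * kappa * sigma / (3.36 * G)   # [M_sun/pc^2], alpha = 1
Q = Sigma_crit / Sigma_H2                       # Q < 1: the ring can fragment into clumps
print(Sigma_crit, Q)
```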
As for whether the GMAs themselves are gravitationally bound or not, in
Sect. <ref> we found that $M_{\rm vir}$/$M_{\rm gas}$
is around unity in the narrow line clumps and larger
than unity by a factor of more than 2 in the broad line clumps.
This seems to suggest that the broad line clumps
are not virialized, probably because of the larger turbulence.
However, several factors could reduce this ratio.
First, since we do not subtract the non-circular motion
in the broad line clumps, the observed
$M_{\rm vir}$/$M_{\rm gas}$ should be an upper limit.
We can estimate how large the non-circular motion is if we
assume the $M_{\rm vir}$/$M_{\rm gas}$ of the broad line clumps
is unity. The magnitude of the non-circular motion is
from 50 to 100 km s$^{-1}$ for the broad line clumps
under this assumption. Athanassoula, 1992 showed that
there is a correlation between the bar axial ratio
and the velocity gradient (jump) across the shock wave.
NGC 1097 has a bar axial ratio of $\sim$2.6 [Men$\acute{\rm e}$ndez-Delmestre et al., 2007],
which indicates the maximum velocity jump is $\sim70$ km
s$^{-1}$ (Figure 12; Athanassoula, 1992). However, this number is supposed
to be an upper limit since it was measured at the strongest strength
of the shock, where the strength of the shock is a
function of position relative to the nucleus. The shock
strength seems to be weaker at the intersection of the
circumnuclear ring than for the outer straight dust lanes,
as suggested in the model. Hence
we expect the velocity gradient caused by shock front
is smaller than $\sim70$ km s$^{-1}$, based on this correlation.
In this case, the broad line clumps still have a 20%
larger velocity dispersion than the narrow line clumps.
Second, the size of the clumps
is another possible factor that could reduce $M_{\rm vir}$/$M_{\rm gas}$
to unity.
In Sect. <ref>, we estimate that the filling factor
of the broad line clumps is $\sim$0.25, and therefore the
intrinsic radius will be smaller by a factor of 0.25$^{1/2}\sim$0.5.
Third, in Sect. <ref>, we point out that the molecular
gas mass derived from $^{12}$CO(J = 2–1) might be
underestimated by at least a factor of $\sim$2.
These factors can also lower $M_{\rm vir}$/$M_{\rm gas}$
to roughly unity in most of the broad line clumps, and hence
the GMAs could also be gravitationally bound in the broad line clumps.
§.§ Star formation in the ring
§.§.§ Extinction
The distribution of
the massive star clusters is uniform in the ring instead
of being highly clustered in certain clumps (Figure <ref>). The star clusters
do not coincide with most of the CO peaks. This could
be due to several reasons, such as extinction or the physical
nature of the star clusters. Of course, since we are comparing
the scale of star clusters (2 pc) with that of GMAs (100 pc),
we cannot establish a physical correlation
from their spatial distribution alone.
Pa$\alpha$ is less affected by dust extinction than the commonly
used H$\alpha$ line.
However, the foreground extinction ($E$(B-V) $\sim$ 1.3) for the
Pa$\alpha$ clusters corresponds to a hydrogen column density of
6.4$\times$10$^{21}$ cm$^{-2}$ [Diplas & Savage, 1994]
averaged over one synthesized beam. This is
much less than the average H$_{2}$ column density for the molecular clumps
in the ring, which is $\sim8.7\times$10$^{22}$ cm$^{-2}$. Therefore,
this suggests that the detected Pa$\alpha$ clusters might be
located on the surface of the clouds instead of being embedded inside
the clumps, or else located away from the clouds. However,
it cannot be ruled out that there are deeply embedded stellar
clusters in the CO peaks in this scenario.
There is also a deficiency of star clusters in the broad
line clumps associated with the dust lanes.
This again could be due to extinction, although we
found the extinction to be similar in the star forming ring and the dust lanes.
However, it is interesting that there is
a spatial offset between the Spitzer 24 $\micron$ peaks and
the CO peaks, and the FIR emission in the dust lane
is intrinsically faint based on
Herschel PACS 70 and 100 $\micron$ maps [Sandstrom et al., 2010].
The long wavelength IR results are less affected by
extinction, and suggest
a lack of newly formed star clusters in the dust lanes.
However, higher resolution FIR observations are needed
to confirm the star formation activity in the broad line clumps.
§.§.§ Molecular gas and star formation
In NGC 1097, the mechanism of the intense star formation in the ring
is still uncertain. It could be induced stochastically by gravitational collapse
in the ring [Elmegreen, 1994]. The
other possible mechanism is that the stars form downstream
of the dust lanes at their junction with the ring, and the
star clusters then continue to orbit along the ring <cit.>. The major
difference is that the latter scenario predicts an age gradient
of the star clusters along the ring, while the ages are randomized in
the former case. Several papers have discussed these mechanisms
in galaxies that host star forming rings, and there is no
clear answer so far <cit.>.
[Sandstrom et al., 2010] tested the above pictures by examining whether there is an azimuthal gradient of dust temperature in the ring of NGC 1097, assuming that a younger population of massive star clusters will heat the dust to higher temperatures than older clusters. There seems to be no gradient, though it is difficult to draw a firm conclusion since a few rounds of galactic rotation might smooth out the age gradient.
We do not aim to solve the above question in this paper
since the most direct way is to measure the age of the star
clusters, and this needs detailed modeling. It is nevertheless interesting to compare the properties of the molecular clumps with those of the star clusters, since molecular clouds are the parent sites of star formation. The one relevant result here is that the line widths are narrower farther away from where the molecular arms join the ring. This could be related to the dissipation of turbulence, which may allow cloud collapse to proceed.
In Figure <ref>(b), we find that, in contrast to the velocity dispersion (Figure <ref>), $\Sigma_{\rm SFR}$ shows no significant azimuthal correlation. Furthermore, R$_{32}$ shows a similar trend to $\Sigma_{\rm SFR}$ as a function of azimuthal angle (Figure <ref>(d)), and R$_{32}$ and $\Sigma_{\rm SFR}$ are correlated in Figure <ref>(c). This suggests that $\Sigma_{\rm SFR}$ and R$_{32}$ are physically related, but might not be associated with the large-scale dynamics in this galaxy, although $\Sigma_{\rm SFR}$ does seem to be suppressed in the broad line clumps. Nevertheless, in Figure <ref>(b) and (d) the scatter of the measured $\Sigma_{\rm SFR}$ is not negligible compared with the variation between the two sides of the ring: the standard deviations are 0.7 and 0.4 M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$, and the mean values are 3.3 and 1.4 M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$, for the northeast and southwest parts of the ring, respectively. With the limited number of points, our results suggest that the star formation activity is generated randomly on local scales rather than following a systematic distribution.
In Figure <ref>, we show how the R$_{32}$ ratio varies for
different densities and kinetic temperatures.
It shows that when R$_{32}$ varies from 0.3 to 0.9, the required
number density of molecular gas changes from 10$^{2}$ cm$^{-3}$ to 5$\times10^{3}$ cm$^{-3}$.
R$_{32}$ appears to depend more on the density than on the temperature when it is below unity. Hence the variation of R$_{32}$ indicates density differences among the clumps.
It is interesting to note that there is a correlation between
R$_{32}$ and SFR,
and some clumps (N1, N2, N7) which have higher SFR values
are spatially close to the HCN(J = 1–0) peaks
[Kohno et al., 2003]. Higher values of R$_{32}$ select denser gas associated with higher SFR, as shown by the large-scale correlation between HCN and FIR emission [Gao & Solomon, 2004]. On the smaller GMC scale, [Lada, 1992] also showed that the efficiency of star formation is higher in dense cores than in the diffuse gas. The R$_{32}$ ratio map can therefore be useful for locating the sites of star formation.
§ SUMMARY
* We show the multi-J CO line maps of NGC 1097
toward the 1 kpc circumnuclear region. The molecular
ring is resolved into individual GMAs in the
star forming ring, the dust lanes, and at the twin-peak structures.
For the first time, the molecular concentration at the twin-peak structure is resolved into two populations of GMAs in terms of velocity dispersion and physical conditions.
The clumps in the starburst ring
have narrower velocity dispersion while the line widths are
broader in the dust lanes, and for some clumps located in the
twin-peak. The physical and kinematic properties are
different for these clumps.
The narrow line clumps have higher temperatures ($\ge$250 K) and densities ($(4.5\pm3.5)\times10^{3}$ cm$^{-3}$) than the broad line clumps (T = 45$\pm$15 K; $n_{\rm H_{2}}$ = $(8.5\pm1.5)\times10^{2}$ cm$^{-3}$), based on the LVG analysis.
* The Toomre-Q parameter is smaller than unity in the molecular ring, suggesting that the GMAs could form via gravitational instability in the ring, where the $\Sigma_{\rm H_{2}}$ of the clumps is large enough to exceed the critical surface density.
The narrow line clumps are gravitationally
bound as shown by the values of $M_{\rm vir}$/$M_{\rm gas}$
which are
nearly unity. Although $M_{\rm vir}$/$M_{\rm gas}$
is larger than unity in the broad line clumps, by
accounting for non-circular motions,
smaller intrinsic source sizes, and the underestimation of molecular
gas mass, we can lower $M_{\rm vir}$/$M_{\rm gas}$
to unity. Therefore both systems are likely to be
gravitationally bound.
* The SFR is correlated with R$_{32}$, suggesting that the star formation activity and the physical conditions of the molecular gas are associated with each other. In contrast to the velocity dispersion and $\Sigma_{\rm H_{2}}$, the SFR and R$_{32}$ are not correlated with the large-scale dynamics. This suggests that the visible star formation activity remains a localized phenomenon. The SFR is lower in the broad line clumps than in the narrow line clumps, which may indicate that star formation is intrinsically suppressed in the dust lanes.
We thank the SMA staff for maintaining the operation of the array.
We appreciate the referee's detailed comments, which helped improve the manuscript.
We thank G. Petitpas for providing the JCMT data.
P.-Y. Hsieh especially acknowledges the fruitful discussions with L.-H. Lin,
K. Sakamoto, N. Scoville, W. Maciejewski, and L. Ho for the manuscript.
This project is funded by NSC 97-2112-M-001-007-MY3
and NSC 97-2112-M-001-021-MY3.
Facilities: SMA, HST (NICMOS)
SMA observation parameters

Parameters | 230 GHz (Compact-N) | 230 GHz (Very extended) | 230 GHz (Compact) | 345 GHz
Date | 2004-07-23, 2004-10-01 | 2005-09-25 | 2005-11-07 | 2006-09-05
Phase center (J2000.0) | $\alpha_{2000}$ = 02$^{\rm h}$46$^{\rm m}$18$\fs$96, $\delta_{2000}$ = –30${\degr}$16${\arcmin}$28 (all tracks)
Primary beams | 52$\arcsec$ (230 GHz) | 36$\arcsec$ (345 GHz)
No. of antennas | 8, 8 | 6 | 7 | 7
Projected baseline range (k$\lambda$) | 5 – 74, 10 – 84 | 23 – 121 | 12 – 390 | 7.2 – 80
Bandwidth (GHz) | 1.989 (all tracks)
Spectral resolution (MHz) | 0.8125, 3.25 | 0.8125 | 0.8125 | 0.8125
Central frequency, LSB/USB (GHz) | 219/228 (230 GHz) | 334/344 (345 GHz)
$\tau_{225}$$^{b}$ | 0.15, 0.3 | 0.06 | 0.1 | 0.06
T$_{\rm sys,DSB}$ (K) | 200, 350 | 110 | 180 | 300
Bandpass calibrators | Uranus, J0423 – 013 | 3C454.3 | 3C454.3, 3C111 | Uranus, Neptune
Absolute flux calibrators$^{a}$ | Uranus (36.8, 34.2), J0423 – 013 (2.8, 2.5) | Uranus (37.2), 3C454.3 (21.3) | Uranus (34.9), 3C454.3 (21.3) | Neptune (21.1)
Gain calibrators | J0132 – 169 | J0132 – 169 | J0132 – 169 | J0132 – 169, J0423 – 013

$^{a}$The numbers in parentheses are the absolute fluxes in Jy.
$^{b}$$\tau_{225}$ is the optical depth measured at 225 GHz.
Physical parameters of the peaks of the molecular clouds
ID | (1) $\delta$R.A. ($\arcsec$) | (2) $\delta$Decl. ($\arcsec$) | (3) $I_{\rm CO}$ (Jy beam$^{-1}$ km s$^{-1}$) | (4) $\delta$$V_{\rm obs}$ (km s$^{-1}$) | (5) $\delta$V$_{\rm int}$ (km s$^{-1}$) | (6) $N_{\rm H_2}$ (10$^{22}$ cm$^{-2}$) | (7) $\Sigma_{\rm H_2}$ (M$_{\odot}$ pc$^{-2}$) | (8) $M_{\rm H_2}$ (10$^{6}$ M$_{\odot}$) | (9) $T_{\rm b}$ (K)
N1 9.0 1.6 17.2 54$\pm$3 42$\pm$4 7.9 1270 10.7 4.1
N2 8.6 4.8 27.9 67$\pm$3 61$\pm$4 12.8 2070 17.3 5.5
N3 3.6 8.4 38.3 69$\pm$1 64$\pm$1 17.6 2840 23.8 7.5
N4 0.6 8.2 18.5 62$\pm$4 56$\pm$4 8.5 1370 11.5 5.8
N5 -4.8 7.8 21.3 38$\pm$1 31$\pm$1 9.8 1580 13.2 6.1
N6 -6.4 5.6 19.6 41$\pm$2 37$\pm$2 9.0 1460 12.2 4.1
N7 -8.6 -2.6 40.3 61$\pm$2 52$\pm$3 18.5 2990 25.0 8.0
N8 -5.4 -8.4 28.4 57$\pm$2 51$\pm$2 13.1 2110 17.6 5.8
N9 -2.8 -9.2 24.6 62$\pm$3 57$\pm$3 11.3 1820 15.2 4.1
N10 4.8 -7.8 19.7 43$\pm$2 37$\pm$2 9.1 1460 12.2 6.1
N11 7.8 -6.6 27.7 52$\pm$4 49$\pm$4 12.7 2050 17.2 5.8
B1 10.2 3.6 27.8 113$\pm$10 109$\pm$11 12.7 2060 17.2 2.4
B2 -7.4 -6.8 34.8 99$\pm$5 96$\pm$5 16.0 2580 21.6 4.6
B3 -5.4 -6.6 31.3 94$\pm$8 90$\pm$8 14.4 2320 19.4 4.1
D1 16.8 -1.4 18.9 100$\pm$10 97$\pm$10 8.7 1400 11.7 1.8
D2 12.8 1.6 21.1 86$\pm$17 82$\pm$18 9.7 1560 13.1 2.0
D3 -17.8 2.6 16.2 81$\pm$7 78 $\pm$ 8 7.5 1200 10.1 1.8
D4 -13.0 -1.0 24.4 84$\pm$5 80$\pm$6 11.2 1810 15.1 3.5
D5 -10.0 -4.0 38.6 118$\pm$4 115$\pm$4 17.7 2860 24.0 4.6
Nu 0 0 50.4 52$\pm$5 - 16.1 3740 23.2 -
Nu 0 0 - 57$\pm$5 - - - - -
Nu 0 0 - 186$\pm$20 - - - - -
We define the peaks based on their location and their
velocity dispersions. The clumps in the dust lanes (molecular
spiral arms) are named D1, ..., D5. The clumps in the ring
are further designated by their velocity dispersion being
broader or narrower than 30 km s$^{-1}$, and named respectively
as B1, ..., B3, and N1, ..., N11. Nu is the ID of the nucleus.
(1) R.A. offsets from the phase center.
(2) Dec. offsets from the phase center.
(3) Integrated CO(J = 2 – 1) intensity. The uncertainty is 2.2 Jy beam$^{-1}$ km s$^{-1}$.
(4) Fitted FWHM for the observed line width.
The nucleus has a multi-Gaussian profile, and we list the line widths fitted with three Gaussians. Note that the $I_{\rm CO}$, $N_{\rm H_{2}}$, and $\Sigma_{\rm H_2}$ of the nucleus are the summed values of the three components.
(5) FWHM line width of intrinsic velocity dispersion.
(6) H$_{2}$ column density with the uncertainty of 1.1$\times$10$^{22}$ cm$^{-2}$.
(7) $\Sigma_{\rm H_{2}}$ with the uncertainty of 170 M$_{\odot}$ pc$^{-2}$.
(8) Mass of molecular H$_{2}$ within the synthesized beam (15$\times$10). The uncertainty is 1.4
$\times$10$^{6}$ M$_{\odot}$.
(9) Peak brightness temperature.
Physical parameters of the GMAs
ID | (1) D | (2) $M_{\rm H_{2}}$ (10$^{6}$ M$_{\odot}$) | (3) $M_{\rm gas}$ (10$^{6}$ M$_{\odot}$) | (4) $M_{\rm vir}$ (10$^{6}$ M$_{\odot}$) | (5) $M_{\rm vir}$/$M_{\rm gas}$
N1 2.9 32.4 $\pm$ 5.3 44.1 $\pm$ 7.2 44.9 $\pm$ 10.2 1.0 $\pm$ 0.3
N2 3.7 71.2 $\pm$ 8.7 96.9 $\pm$ 11.8 118.0 $\pm$ 16.3 1.2 $\pm$ 0.2
N3 3.5 73.6 $\pm$ 7.9 100.1 $\pm$ 10.8 125.4 $\pm$ 5.9 1.3 $\pm$ 0.1
N4 2.2 17.7 $\pm$ 3.0 24.0 $\pm$ 4.1 58.6 $\pm$ 9.6 2.4 $\pm$ 0.6
N5 2.9 37.5 $\pm$ 5.5 51.0 $\pm$ 7.4 25.1 $\pm$ 2.7 0.5 $\pm$ 0.1
N6 2.9 33.8 $\pm$ 5.6 46.0 $\pm$ 7.6 34.6 $\pm$ 5.2 0.8 $\pm$ 0.2
N7 3.4 79.1 $\pm$ 7.6 107.5 $\pm$ 10.3 79.3 $\pm$ 9.9 0.7 $\pm$ 0.1
N8 2.5 51.5 $\pm$ 4.2 70.0 $\pm$ 5.7 57.5 $\pm$ 5.6 0.8 $\pm$ 0.1
N9 3.7 94.5 $\pm$ 8.8 128.5 $\pm$ 12.0 104.2 $\pm$ 14.1 0.8 $\pm$ 0.1
N10 2.8 32.7 $\pm$ 5.2 44.5 $\pm$ 7.1 34.3 $\pm$ 4.1 0.8 $\pm$ 0.2
N11 4.1 133.1 $\pm$ 11.1 181.0 $\pm$ 15.0 85.8 $\pm$ 15.1 0.5 $\pm$ 0.1
B1 3.1 48.6 $\pm$ 6.3 66.1 $\pm$ 8.6 316.6 $\pm$ 71.0 4.8 $\pm$ 1.2
B2 3.3 80.0 $\pm$ 7.2 108.8 $\pm$ 9.9 267.4 $\pm$ 29.8 2.5 $\pm$ 0.4
B3 3.0 77.0 $\pm$ 5.7 104.7 $\pm$ 7.7 208.8 $\pm$ 43.4 2.0 $\pm$ 0.4
D1 3.3 56.5 $\pm$ 6.9 76.9 $\pm$ 9.4 269.1 $\pm$ 62.8 3.5 $\pm$ 0.9
D2 3.5 58.6 $\pm$ 8.2 79.7 $\pm$ 11.1 207.7 $\pm$ 102.0 2.6 $\pm$ 1.3
D3 2.9 30.1 $\pm$ 5.5 41.0 $\pm$ 7.4 152.3 $\pm$ 34.2 3.7 $\pm$ 1.1
D4 5.4 117.7 $\pm$ 18.9 160.0 $\pm$ 25.7 303.0 $\pm$ 47.2 1.9 $\pm$ 0.4
D5 4.1 112.6 $\pm$ 11.0 153.1 $\pm$ 15.0 474.8 $\pm$ 36.8 3.1 $\pm$ 0.4
(1) Diameter of the clumps.
(2) H$_{2}$ mass integrated over the diameter of the clumps.
(3) Gas mass integrated over the diameter of the clumps. The
$M_{\rm gas}$ is the $M_{\rm H_2}$ corrected with the He
fraction of 1.36 in Sect. <ref>.
(4) Virial mass of the clumps.
(5) Ratio of the virial mass to the $M_{\rm gas}$.
We have not corrected for the beam convolution effect as the derived diameters are sufficiently large. We think the assumption of Gaussian shapes may be the greater source of error.
Star formation properties
ID $F_{\rm Pa{\alpha}} $ $E$(B-V)
$A_{\rm Pa\alpha}$
$A_{\rm v}$ $\Sigma_{\rm SFR}$
(1) (2) (3) (4) (5)
(10$^{-14}$ erg s$^{-1}$ cm$^{-2}$) (mag) (mag) (mag) (M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$)
N1 6.03 $\pm$ 0.55 1.41 $\pm$ 0.01 0.64 $\pm$ 0.30 4.36 $\pm$ 0.04 2.74 $\pm$ 0.25
N2 7.01 $\pm$ 0.39 1.06 $\pm$ 0.01 0.48 $\pm$ 0.23 3.30 $\pm$ 0.04 3.18 $\pm$ 0.18
N3 4.00 $\pm$ 0.22 1.02 $\pm$ 0.02 0.46 $\pm$ 0.22 3.16 $\pm$ 0.06 1.81 $\pm$ 0.10
N4 8.95 $\pm$ 0.62 1.21 $\pm$ 0.01 0.55 $\pm$ 0.26 3.75 $\pm$ 0.03 4.06 $\pm$ 0.28
N5 7.52 $\pm$ 0.38 1.00 $\pm$ 0.01 0.46 $\pm$ 0.22 3.11 $\pm$ 0.03 3.41 $\pm$ 0.17
N6 7.94 $\pm$ 0.37 0.97 $\pm$ 0.01 0.44 $\pm$ 0.21 3.00 $\pm$ 0.03 3.60 $\pm$ 0.17
N7 8.97 $\pm$ 0.75 1.36 $\pm$ 0.01 0.62 $\pm$ 0.29 4.20 $\pm$ 0.03 4.07 $\pm$ 0.34
N8 4.21 $\pm$ 0.41 1.45 $\pm$ 0.02 0.66 $\pm$ 0.31 4.50 $\pm$ 0.06 1.91 $\pm$ 0.19
N9 3.72 $\pm$ 0.35 1.41 $\pm$ 0.02 0.64 $\pm$ 0.30 4.36 $\pm$ 0.07 1.69 $\pm$ 0.16
N10 2.90 $\pm$ 0.30 1.47 $\pm$ 0.03 0.67 $\pm$ 0.32 4.56 $\pm$ 0.09 1.32 $\pm$ 0.14
N11 1.92 $\pm$ 0.22 1.49 $\pm$ 0.05 0.68 $\pm$ 0.32 4.62 $\pm$ 0.14 0.87 $\pm$ 0.10
B1 2.55 $\pm$ 0.19 1.17 $\pm$ 0.03 0.53 $\pm$ 0.25 3.62 $\pm$ 0.10 1.16 $\pm$ 0.09
B2 2.60 $\pm$ 0.24 1.35 $\pm$ 0.03 0.61 $\pm$ 0.29 4.18 $\pm$ 0.10 1.18 $\pm$ 0.11
B3 1.29 $\pm$ 0.17 1.44 $\pm$ 0.07 0.65 $\pm$ 0.31 4.45 $\pm$ 0.21 0.58 $\pm$ 0.08
D1 $\le$0.12 - - - -
D2 $\le$0.12 - - - -
D3 $\le$0.12 - - - -
D4 $\le$0.12 - - - -
D5 1.56 $\pm$ 0.16 1.31 $\pm$ 0.05 0.60 $\pm$ 0.28 4.07 $\pm$ 0.17 0.71 $\pm$ 0.07
(1) Pa$\alpha$ flux corrected by the extinction measured in the
CO clumps. The upper limit of the clumps (D1, ..., D4) is
1.21 $\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$.
(2) Color excess.
(3) Extinction at wavelength of Pa$\alpha$ in the unit of magnitude.
(4) Extinction at V-band in the unit of magnitude.
(5) Surface density of SFR.
The upper limit is 0.05 M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$.
Intensity ratios of the CO clumps
ID R$_{32}$ R$_{13}$
(1) (2)
N1 1.18 $\pm$ 0.07 (0.78 $\pm$ 0.18) 10.52 $\pm$ 7.05
N2 0.89 $\pm$ 0.03 (1.02 $\pm$ 0.14) 9.17 $\pm$ 2.92
N3 1.22 $\pm$ 0.04 8.12 $\pm$ 2.35
N4 1.41 $\pm$ 0.09 -
N5 1.68 $\pm$ 0.09 12.44 $\pm$ 9.74
N6 1.19 $\pm$ 0.07 12.47 $\pm$ 9.50
N7 0.92 $\pm$ 0.02 (0.99 $\pm$ 0.06) 8.52 $\pm$ 1.71
N8 0.62 $\pm$ 0.02 (0.50 $\pm$ 0.06) 10.24 $\pm$ 3.17
N9 0.66 $\pm$ 0.03 10.88 $\pm$ 5.14
N10 0.85 $\pm$ 0.05 7.18 $\pm$ 2.73
N11 0.78 $\pm$ 0.03 13.04 $\pm$ 6.45
B1 0.79 $\pm$ 0.03 -
B2 0.71 $\pm$ 0.02 6.51 $\pm$ 1.10
B3 0.72 $\pm$ 0.02 11.49 $\pm$ 3.74
D1 0.76 $\pm$ 0.08 -
D2 0.86 $\pm$ 0.05 -
D3 1.19 $\pm$ 0.12 -
D4 0.72 $\pm$ 0.03 -
D5 0.72 $\pm$ 0.02 (0.39 $\pm$ 0.03) 10.65 $\pm$ 2.59 (8.38 $\pm$ 3.41)
Nu 0.93 $\pm$ 0.02 23.37 $\pm$ 9.79
(1) $^{12}$CO(J = 3–2)/(J = 2–1) intensity ratios derived from CO brightness temperature. Numbers within parenthesis are beam smearing corrected ratios.
(2) $^{12}$CO(J = 2–1)/$^{13}$CO(J = 2–1) intensity ratios derived from CO
brightness temperature. Numbers within parenthesis are beam smearing corrected ratios.
All the quantities are measured with a beam size of
Dynamical parameters fitted by GAL
R.A. 02$^{\rm h}$46$^{\rm m}$18$\fs$95
Decl. –30$\degr$16$\arcmin$29$\farcs$13
Position angle 133$\fdg$0$\pm0\fdg1$
Inclination 41$\fdg$7$\pm0\fdg6$
Systemic velocity (km s$^{-1}$; $V_{\rm sys}$) 1249.0$\pm$0.5
$V_{\rm max}$ (km s$^{-1}$) 387.6$\pm$4.3
$R_{\rm max}$ 9$\farcs$5$\pm$0$\farcs$1
The 6-cm peak of the nucleus is
R.A. = 02$^{\rm h}$46$^{\rm m}$18$\fs$96, Decl. = –30$\degr$16$\arcmin$28$\farcs$897.
[Aalto et al., 1995] Aalto, S., Booth, R. S., Black, J. H. & Johansson, L. E. B.
1995, , 300, 369
[Athanassoula, 1992] Athanassoula, E. 1992a, MNRAS, 259, 328
[Athanassoula, 1992] Athanassoula, E. 1992b, MNRAS, 259, 345
[Barth et al., 1995] Barth, A. J., Ho, L. C.,
Filippenko, A. V., & Sargent, W. L. 1995,
, 110, 1009
[B$\ddot{\rm o}$ker et al., 2008] B$\ddot{\rm o}$ker, T., Falc$\acute{\rm o}$n-Barroso, J., Schinnerer, E., Knapen, J. H., & Ryder, S. 2008, , 135, 479
[Bigiel et al., 2008] Bigiel, F., Leroy, A., Walter, F., Brinks, E., de Blok, W. J. G., Madore, B., & Thornley, M. D. 2008, , 136, 2846
[Bigiel et al., 2010] Bigiel, F., Bolatto, A. D., Leroy, A. K., Blitz, L., Walter, F., Rosolowsky, E. W., Lopez, L. A., & Plambeck, R. L. 2010, , 725, 1159
[Buta et al., 2000] Buta, R., Treuthardt, P. M., Byrd, G. G., & Crocker, D. A.
2000, , 120, 1289
[Cardelli et al., 1989] Cardelli, J. A., Clayton, G. C., & Mathis,
J. S. 1989,
, 345, 245
[Calzetti et al., 2008] Calzetti, D. 2007, NCimB, 122, 971
[Clark, 1980] Clark, B. G. 1980, , 89, 377
[Combes & Gerin, 1985] Combes, F.& Gerin, M. 1985,
, 150, 327
[Dame et al., 2001] Dame, T. M., Hartmann, Dap & Thaddeus, P. 2001,
, 547, 792
[Davies et al., 2009] Davies, R. I., Maciejewski, W., Hicks, E. K. S., Tacconi, L. J., Genzel, R. & Engel, H. 2009
, 702, 114
[Diplas & Savage, 1994] Diplas, A., & Savage, B. D. 1994, , 427, 274
[Draine & McKee, 1993] Draine, B. T., McKee, C. F. 1993, , 31, 373
[de Vaucouleurs et al., 1991] de Vaucouleurs, G.,
de Vaucouleurs, A., Corwin, H. G., Jr., Buta, R. J.,
Paturel, G., & Fouqué, P. 1991, Third Reference
Catalogue of Bright Galaxies (New York: Springer-Verlag)
[Draine et al., 2007] Draine, B. T. et al. 2007,
, 663, 866
[Dumas et al., 2010] Dumas, G., Schinnerer, E., & Mundell, G. C.
2010, , 721, 911
[Elmegreen, 1994] Elmegreen, B. G. 1994,
, 425, 73
[Fathi et al., 2006] Fathi, K., Storchi-Bergmann, T., Riffel, R. A., Winge, C., Axon, D. J., Robinson, A., Capetti, A. & Marconi, A. 2006
, 641, 25
[Flower & Launay, 1985] Flower, D. R. & Launay, J. M. 1985,
, 241, 271
[Gao & Solomon, 2004] Gao, Y., & Solomon, P. M. 2004,
, 606, 271
[Goldreich & Kwan, 1974] Goldreich, P.,
& Kwan, J. 1974,
, 189, 441
[Heckman, 1991] Heckman, T. M. 1991, in Massive
stars in starbursts, Eds: C. Leitherer, N. R. Walborn, T. M. Heckman, C. A. Norman,
Cambridge Univ. Press
[Higdon & Wallin, 2003] Higdon, J. L. & Wallin, J. F. 2003,
, 585, 281
[Ho et al., 1997] Ho, L. C., Filippenko, A. V. & Sargent, W. L. W. 1997,
, 487, 591
[Ho et al., 2004] Ho, P. T. P.,
Moran, J. M., & Lo, F. 2004,
, 616, L1
[H$\ddot{\rm o}$gbom, 1974] H$\ddot{\rm o}$gbom, J. 1974, , 15, 417
[Hsieh et al., 2008] Hsieh, P.-Y., Matsushita, S., Lim, J., Kohno, K., & Sawada-Satoh, S. 2008,
, 683, 70
[Hummel et al., 1987]
Hummel, E., van der Hulst, J. M., & Keel, W. C. 1987,
, 172, 32
[Kenney et al., 1992] Kenney, J. D. P., Wilson, C. D.,
Scoville, N. Z., Devereux, N. A., & Young, J. S. 1992,
, 395, 79
[Kennicutt, 1998] Kennicutt, R. C. Jr. 1998,
, 498, 541
[Kohno et al., 1999] Kohno, K., Kawabe, R., & Vila-Vilaró, B. 1999,
, 511,157
[Kohno et al., 2003] Kohno, K., Ishizuki, S.,
Matsushita, S., Vila-Vilaró, B., & Kawabe, R. 2003,
, 55, L1
[Kotilainen et al., 2000] Kotilainen, J. K.,
Reunanen, J., Laine, S., & Ryder, S. D. 2000, , 353, 834
[Lada, 1992] Lada, E. A. 1992, , 393,25
[Liu et al., 2010] Liu, G., Calzetti, D., Kennicutt, R. C., Jr., Schinnerer, E., Sofue, Y., Komugi, S., & Egusa, F. 2010a, in preparation
[Liu et al., 2010] Liu, G., Koda, J., Calzetti, D., Fukuhara, M., & Momose, R. 2010b, , in press
[Maloney & Black, 1988] Maloney, P. & Black J. H. 1988, , 325, 389
[Mazzuca et al., 2008] Mazzuca, L. M., Knapen, J. H., Veilleux, S., & Regan, M. W. 2008, , 174, 337
[Meier et al., 2008] Meier, D. S., Turner, J. L. & Hurt, R. L. 2008,
, 675, 281
[Mckee et al., 1982] McKee, C. F., Storey, J. W. V., Watson, D. W.,& Green, S. 1982, , 259, 647
[Meier & Turner, 2001] Meier, D. S., & Turner, J. L. 2001,
, 551, 687
[Men$\acute{\rm e}$ndez-Delmestre et al., 2007] Men$\acute{\rm e}$ndez-Delmestre, K., Sheth, K., Schinnerer, E., Jarrett, T. H., & Scoville, N. Z. 2007, , 657, 790
[Onodera et al., 2010] Onodera, S., et al. 2010, , 722, L127
[Muraoka et al., 2009] Muraoka K. et al. 2009, , 706, 1213
[Osterbrock, 1989] Osterbrock, D. E. 1989, Astrophysics
of Gaseous Nebulae and Active Galactic Nuclei (Research supported by the
University of California, John Simon Guggenheim Memorial Foundation,
University of Minnesota, et al.; Mill Valley, CA: University Science Books)
[Piner et al., 1995] Piner, B. G., Stone, J. M., Teuben, P. J. 1995, , 449, 508
[Petitpas & Wilson, 2003] Petitpas, G. R.,
& Wilson, C. D. 2003,
, 587, 649
[Rand & Kulkarni, 1990] Rand, R. J.& Kulkarni, S. R. 1990,
, 349, 43
[Reynaud & Downes, 1997] Reynaud, D., & Downes, D. 1997, , 319, 737
[Sakamoto et al., 1999] Sakamoto, K., Okumura, S. K., Ishizuki, S., Scoville, N. Z. 1999, , 525, 691
[Sakamoto et al., 2007] Sakamoto, K., Ho, P. T. P., Mao, R.-Q., Matsushita, S., Peck, A. B. 2007, , 654, 782
[Sanders et al., 2003] Sanders, D. B., Mazzarella, J. M., Kim, D.-C., Surace, J. A., & Soifer, B. T. 2003, , 126, 1607
[Sandstrom et al., 2010] Sandstrom, K. et al. 2010, , 518, 59
[Schwarz, 1984] Schwarz, M. P. 1984,
, 209, 93
[Schinnerer et al., 2007] Schinnerer, E., B$\ddot{\rm o}$ker, T.,
Emsellem, E., & Downes, D. 2007, , 462, 27
[Scoville et al., 1985] Scoville, N. Z., Soifer, B. T., Neugebauer, G., Matthews, K., Young, J. S. & Yerka, J. 1985,
, 289,129
[Scoville et al., 1987]
Scoville, N. Z., Yun, Min Su, Sanders, D. B., Clemens, D. P., & Waller, W. H. 1987, , 63, 821
[Solomon et al., 1979]
Solomon, P. M., Sanders, D. B., & Scoville, N. Z. 1979, , 232, 89
[Solomon et al., 1987] Solomon, P. M., Rivolo, A. R.,
Barrett, J., & Yahil, A. 1987,
, 319, 730
[Storchi-Bergmann et al., 2003]
Storchi-Bergmann, T., Nemmen da Silva, R., Eracleous, M., Halpern, J. P., Wilson, A. S., Filippenko, A. V., Ruiz, M. T., Smith, R. C., & Nagar, N. M. 2003, , 598, 956
[Strong & Mattox, 1996] Strong, A. W. & Mattox, J. R. 1996,
, 308, 21
[Telesco & Gatley, 1981]
Telesco, C. M., & Gatley, I. 1981, , 247, 11
[Telesco et al., 1993] Telesco, C. M., Dressel, L. L., & Wolstencroft, R. D. 1993,
, 414,120
[Toomre, 1964] Toomre, A. 1964, , 139, 1217
[Tosaki et al., 2007] Tosaki, T., Shioya, Y., Kuno N.,
Hasegawa, T., Nakanishi, K., Matsushita, S. & Kohno, K. 2007
, 59, 33
[Tully, 1988] Tully, R. B. 1988, Nearby Galaxies Catalog
(Cambridge: Cambridge University Press)
[van de Ven & Fathi, 2010] van de Ven, G. & Fathi, K. 2010
, 723, 767
[Verley et al., 2010] Verley, S., Corbelli, E., Giovanardi, C., & Hunt, L. K. 2010, , 510, 64
[Vogel et al., 1988]
Vogel, S. N., Kulkarni, S. R., & Scoville, N. Z. 1988, Nature, 334, 402
[Wada & Norman, 2002] Wada, K.,
& Norman, C. A. 2002, , 566, L21
[Wild et al., 1992] Wild, W. et al. 1992, , 265, 447
[Williams et al., 1994]
Williams, J. P., de Geus, E. J., & Blitz, L. 1994, , 428, 693
[Young & Scoville, 1991] Young, J. S. & Scoville, N. Z. 1991, , 29, 581
Top left image is the $^{12}$CO(J = 2–1) integrated intensity map.
The contour levels are 2, 3, 5, ..., 20, 25, and 30$\sigma$
(1$\sigma$ = 2.3 Jy km s$^{-1}$ beam$^{-1}$). The synthesized beam
is 15$\times$10 (PA = 81). The emission
at (10, –15) is due to sidelobes.
Top right image is the $^{12}$CO(J = 3–2) integrated intensity map. The
contour levels are 5, 7, 9, 10, 15,..., 60, 80, and 100$\sigma$ (1$\sigma$ = 3.0 Jy km s$^{-1}$ beam$^{-1}$).
The synthesized beam is 35$\times$21 (PA = –44).
Bottom left image is the $^{13}$CO(J = 2–1) integrated intensity map. The contour levels are
2, 3, and 4$\sigma$ (1$\sigma$ = 2.1 Jy km s $^{-1}$ beam$^{-1}$).
The synthesized beam is 18$\times$14 (PA = 19).
Bottom right image is the $^{12}$CO(J = 2–1) peak brightness temperature map. The contours are 2, 3, 4, 5, 6, and 7 K.
All of the maps are overlaid with the positions of Pa$\alpha$ star clusters (squares),
6 cm radio continuum sources (crosses), and V-band ($<$ 13 mag) star clusters (circles).
The central cross in each map is the position of the 6-cm nucleus (Hummel et al. 1987),
which is assumed to be the active nucleus. The beam size is
shown in the lower right corner of each map.
The $^{12}$CO(J = 2–1) channel maps are overlaid on the archival HST I-band
(F814W) image with the corrected astrometry.
The contour levels are –2, 2, 4, 8, 16, and 32$\sigma$, where 1$\sigma$ = 20 mJy
beam$^{-1}$ (306 mK) in 40 km s$^{-1}$ resolution.
The velocity (km s$^{-1}$) with respect to the systemic velocity
of 1254 km s$^{-1}$ (Kohno et al. 2003) is labeled in the top left corner of each map.
The beam size (15$\times$10, PA = 81) is shown in the lower
right corner of each map with solid ellipse. The dirty beam is at the bottom right panel
with a contours level of –100, –50, –10, –5, 5, 10, 50, 100% of the peak.
The $^{12}$CO(J = 3–2) channel maps are overlaid on the archival HST I-band
(F814W) image with the corrected astrometry.
The contour levels are -3, 3, 5, 10, 20, 40, 60, and 80$\sigma$, where
1$\sigma$ = 17.5 mJy beam$^{-1}$ (23 mK) in 40 km s$^{-1}$ resolution.
The velocity (km s$^{-1}$) with respect to the systemic velocity
of 1254 km s$^{-1}$ is labeled in the top left corner of each map.
The beam size (35$\times$21, PA = –44) is shown in the lower
right corner with solid ellipse. The dirty beam is at the bottom right panel
with a contours level of –100, –50, –10, –5, 5, 10, 50, 100% of the peak.
Top image is the $^{12}$CO(J = 2–1) integrated map (contours) overlaid on the archival
HST I-band (Filter F814 W) image (grey scale). Astrometry of the HST
image was corrected using background stars with known positions.
The contour levels for the $^{12}$CO(J = 2–1) are 2, 3, 5, ..., 20, 25, and 30 $\sigma$
(1 $\sigma$ = 2.3 Jy km s$^{-1}$ beam$^{-1}$). The IDs for the individual
peaks of clumps are marked. The CO synthesized beam (15$\times$
10) is shown in the lower right corner.
Bottom image is the HST NICMOS Pa$\alpha$ line image (color) overlaid on the $^{12}$CO(J = 2–1) contour. The contour levels are the same as in upper image.
The high resolution (15$\times$10) $^{12}$CO(J = 2–1)
spectra of individual clumps measured at the $^{12}$CO(J = 2–1) peak
position within one beam. The velocity is relative to the systemic velocity of 1254 km s$^{-1}$ and spectral resolution is 10 km s$^{-1}$. The IDs of
the clumps are labeled in each panel.
Left: We show the $^{12}$CO(J = 2–1) spectra, where
the solid line is the JCMT data (Petitpas et al. 2003) and the dotted
line is our SMA data. The intensity scale is the main beam temperature
at 21$\arcsec$ resolution. We only took 6 chunks to make
the SMA maps, so the velocity range is smaller
than that of JCMT.
Right: The $^{13}$CO(J = 2–1) spectra, where
the solid line is the JCMT data, and the dotted line is our SMA data.
The beam size of the two data are matched to 21$\arcsec$.
(a) The number histogram of the total H$_{2}$ mass of the clumps in the ring.
Horizontal and vertical axis are the gas mass and number, respectively.
The gas mass is in units of 10$^{6}$ M$_{\odot}$. The negative horizontal
axis is to show the plot clearly.
(b) The correlation between total H$_{2}$ mass and FWHM intrinsic
line width of the narrow line ring clumps (circles), broad line ring
clumps (squares), and dust lane clumps (triangles).
The H$_{2}$ mass is in units of 10$^{6}$ M$_{\odot}$.
The uncertainties of $\pm1\sigma$ are overlaid on the symbols
with vertical/horizontal bars.
(a) $\Sigma_{\rm H_{2}}$ measured at the position of the intensity
peak in units of M$_{\odot}$ pc$^{-2}$ (see Table <ref>) as a function of azimuthal angle. The east direction corresponds to 0$\degr$, and the angle increases in the clockwise direction. The dashed lines mark position angles from 0$\degr$ to 45$\degr$ and from 180$\degr$ to 225$\degr$,
which roughly correspond to the position of the orbit crowding regions.
The meaning of the symbols are the same as in Figure <ref>b.
(b) The FWHM intrinsic line width of clumps as a function of
azimuthal angle. The dashed lines are the same as in (a).
The uncertainties of $\pm1\sigma$ are overlaid on the symbols
with vertical/horizontal bars.
(a) Surface SFR density (M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$) is shown as
a function of $\Sigma_{\rm H_{2}}$ (M$_{\odot}$ pc$^{-2}$).
The symbols are the same as in Figure <ref>.
The meaning of the symbols are the same as in Figure <ref>b.
(b) Surface SFR density is shown as a function of azimuthal angle.
The dashed lines mark position angles from 0$\degr$ to 45$\degr$ and from 180$\degr$ to 225$\degr$.
(c) Surface SFR density (M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$)
is shown as a correlation of R$_{32}$.
(d) R$_{32}$ is shown as a function of azimuthal angle.
The uncertainties of $\pm1\sigma$ are overlaid on the symbols
with vertical/horizontal bars.
[]The correlation of $\Sigma_{\rm H_{2}}$ and surface
SFR density of individual clumps of NGC 1097
are overlaid on the data used in Kennicutt (1998).
Their data for normal galaxies are represented as plus signs, and
circumnuclear starburst galaxies as asterisks, while their NGC 1097
data point is marked as triangle.
Our spatially resolved clumps of the circumnuclear starburst ring
of NGC 1097 are marked as crosses, and
the average value of the clumps is marked as a square.
[](a) The intensity weighted mean velocity
map (MOM1) of $^{12}$CO(J = 2–1) line
with respect to the systemic velocity (1254 km s$^{-1}$); solid and
dashed lines represent the redshifted and blueshifted velocity
respectively. The first negative contour (close to the central
cross) is 0 km s$^{-1}$, and the contour spacing is 25 km s$^{-1}$.
(b) The intensity weighted velocity dispersion map (MOM2).
The contour interval is 10 km s$^{-1}$; note that the values are not FWHM line widths but the intensity-weighted velocity dispersions about the mean velocity. Therefore the numbers are lower than the FWHM values we derived by line fitting.
We show the rotation curve data (circles) of NGC 1097 overlaid with the curve fitted by GAL (solid line).
[]The LVG calculation of R$_{32}$ is shown as a function of kinetic temperature and H$_{2}$ number density. The R$_{32}$ values are labeled on the contours.
[]The cartoon sketch of the gas morphology in the circumnuclear
region of NGC 1097. The
red circle represents the starburst ring, where the narrow line clumps
are located. The black curves are the dust lane associated with shock
wave, where the broad line clumps are located. The blue line is the
major axis of the large scale stellar bar. We show:
(a) the projected view of the morphology of the starburst ring/dust lane
associated with our observation. It shows a nearly circular starburst ring.
(b) the intrinsic shape of the ring, which is expected to be an ellipse.
|
arxiv-papers
| 2011-05-18T06:23:43 |
2024-09-04T02:49:18.875923
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Pei-Ying Hsieh, Satoki Matsushita, Guilin Liu, Paul T. P. Ho, Nagisa\n Oi, and Ya-Lin Wu",
"submitter": "Pei-Ying Hsieh",
"url": "https://arxiv.org/abs/1105.3543"
}
|
1105.3566
|
# Hybrid quantum repeater with encoding
Nadja K. Bernardes1,2 nadja.bernardes@mpl.mpg.de Peter van Loock1,2,3
peter.vanloock@mpl.mpg.de 1Optical Quantum Information Theory Group, Max
Planck Institute for the Science of Light, Günther-Scharowsky-Str. 1/Bau 24,
91058 Erlangen, Germany 2Institute of Theoretical Physics I, Universität
Erlangen-Nürnberg, Staudtstr. 7/B2, 91058 Erlangen, Germany 3 Institute of
Physics, University of Mainz, Staudingerweg 7, 55128 Mainz, Germany
###### Abstract
We present an encoded hybrid quantum repeater scheme using qubit-repetition
and Calderbank-Shor-Steane codes. For the case of repetition codes, we propose
an explicit implementation of the quantum error-correction protocol. Moreover,
we analyze the entangled-pair distribution rate for the hybrid quantum
repeater with encoding and we clearly identify trade-offs between the
efficiency of the codes, the memory decoherence time, and the local gate
errors. Finally, we show that in the presence of reasonable imperfections our
system can achieve rates of roughly 24 Hz per memory for 20 km repeater
spacing, a final distance of 1280 km, and final fidelity of about 0.95.
quantum repeaters, quantum error correction
###### pacs:
03.67.Hk, 03.67.Pp, 03.67.Bg
††preprint: PRE/003
## I Introduction
In 1982, Wootters, Zurek, and Dieks stated the famous no-cloning theorem
wootters ; dieks . The impossibility to copy an unknown quantum state implies
that common procedures used in classical communication to combat channel
losses, such as amplification, cannot be used in quantum communication. The
problem of distributing entanglement over long distances was then solved in
principle with the proposal of quantum repeaters briegel ; dur . The main idea
behind this proposal is to generate entangled pairs in small segments,
avoiding the exponential decay the distance, and to use entanglement swapping
zukowski and entanglement purification bennett ; deutsch as some of the
building blocks of the protocol. Nonetheless, considering the typically
probabilistic nature of at least some of these steps (generation,
purification, and swapping), the finite decoherence time of the currently
available quantum memories turns out to drastically limit the total
communication distance. However, according to Ref. jiang , with the help of
deterministic quantum error correction (QEC), the initial entangled pairs can
be encoded so that all the swapping steps may be executed at the same time.
This is different from the original approach of a nested scheme for the
repeater (with multiple-round purification and swapping), making the protocol
much faster than usual.
There are many different proposals for implementing a quantum repeater,
utilizing completely different systems, including heralding mechanisms based
on single-photon detection duan ; childress1 ; childress2 ; simon ; sangouard
and schemes based on bright multiphoton signals. Although in the former
schemes, generally, high-fidelity entangled pairs are generated, the latter
schemes are usually more efficient, at least for the initial entanglement
distribution step. In this work, we will concentrate on the so-called hybrid
quantum repeater (HQR) PvLa ; ladd ; PvLb . In this scheme, an entangled pair
is initially generated between the electronic spins of two atoms placed in
not-too-distant cavities through an optical coherent state (the so-called
“qubus”).
The main idea of this paper is to apply QEC to a hybrid quantum repeater
aiming to improve the scheme against practical limitations such as finite
memory decoherence times (relaxing the requirement of perfect memories in our
earlier analysis of the unencoded HQR nadja ) and imperfect two-qubit
operations. More specifically, the QEC codes under consideration here are the
well-known qubit-repetition and Calderbank-Shor-Steane (CSS) codes nielsen .
Due to their transversality property, entanglement connection and error
correction can be performed with the same set of operations. Our treatment is
not restricted to analyzing the in-principle performance of QEC codes for the
hybrid quantum repeater, but it also shows how to actually implement an
encoded HQR.
In Sec. II, we briefly describe the hybrid quantum repeater. The errors
affecting the system are presented in Sec. III and the error correcting
protocol is described in more detail in Sec. IV. In Sec. V, we show how to
implement the repetition codes, starting from the general idea and concluding
with a proposal for a more practical implementation. A protocol using CSS
codes is presented in Sec. VI. A rate analysis of the hybrid quantum repeater
with encoding is presented in Sec. VII. We conclude in Sec. VIII and give more
details of calculations in the appendix.
## II Hybrid Quantum Repeater
A dispersive light-matter interaction provides the essence of the hybrid
quantum repeater. This interaction will occur between an electron spin system
(i.e., a two-level system or a “$\Lambda$-system” as an effective two-level
system) inside a cavity and a bright coherent pulse (probe pulse). Although
the probe and the cavity are in resonance, both are detuned from the
transition between the ground state and the excited state of the atom. More
formally, this interaction is described by the Jaynes-Cummings interaction
Hamiltonian in the limit of large detuning: $H_{int}=\hbar\chi Za^{\dagger}a$,
where $\chi$ is the light-atom coupling strength, $Z$ is the qubit Pauli-$Z$
operator, and $a$ ($a^{\dagger}$) is the annihilation (creation) operator of
the electromagnetic field mode. In practice, this interaction works as a
conditional-phase rotation. Considering the two relevant states of the
electronic spin as $|0\rangle$ and $|1\rangle$, and a probe pulse in a
coherent state $|\alpha\rangle$, we have
$U_{int}^{q}(\theta)\left[(|0\rangle+|1\rangle)|\alpha\rangle\right]=|0\rangle|\alpha
e^{i\theta/2}\rangle+|1\rangle|\alpha e^{-i\theta/2}\rangle$;
$U_{int}^{q}(\theta)=e^{i(\theta/2)Za^{\dagger}a}$ is the operator that
describes the interaction between the probe and the $q$-th qubit, and $\theta$
represents an effective interaction time, $\theta=-2\chi t$.
First the probe (or qubus) interacts with an atomic qubit $A$ initially
prepared in the superposition state $(|0\rangle+|1\rangle)/\sqrt{2}$ placed in
one of the repeater stations resulting in a qubus-qubit entangled state PvLa ;
PvLb ; karsten . Then the qubus is sent to a second qubit $B$ placed in a
neighboring repeater station and interacts with this qubit, also initially
prepared in a superposition state, this time inducing a controlled rotation by
$-\theta/2$. By measuring the qubus (and identifying its state without error,
see below), we are able to conditionally prepare an entangled state between
qubits $A$ and $B$ which has the following form,
$F|\phi^{+}\rangle\langle\phi^{+}|+(1-F)|\phi^{-}\rangle\langle\phi^{-}|,$ (1)
where $|\phi^{\pm}\rangle=(|00\rangle\pm|11\rangle)/\sqrt{2}$ and
$F=[1+e^{-(1-\eta)\alpha^{2}(1-\cos{\theta})}]/2$, with $\alpha$ real. A beam
splitter with transmission $\eta$ may be used to model the
photon losses in the channel. For a standard telecom fiber, where photon loss
is assumed to be 0.17 dB per km, the transmission parameter will be
$\eta(l,L_{att})=e^{-l/L_{att}}$, where $l$ is the transmission distance of
the channel and the attenuation length is assumed to be $L_{att}=25.5$ km.
When the optical measurement of the probe pulse corresponds to the quantum
mechanically optimal, unambiguous (and hence error-free) state discrimination
(USD) of phase-rotated coherent states, an upper bound for the probability of
success to generate an entangled pair can be derived, PvLb
$P_{success}=1-\left(2F-1\right)^{\eta/(1-\eta)}.$ (2)
This bound can be attained, for instance, following the protocol from Ref.
azuma . Note that for this type of measurement, there is a trade-off: for
large $\alpha$ and hence $F\rightarrow\frac{1}{2}$, we have
$P_{success}\rightarrow 1$, as the coherent states become nearly orthogonal
even for small $\theta$; whereas for small $\alpha\ll 1$ and $F\rightarrow 1$,
the coherent states are hard to discriminate, $P_{success}\rightarrow 0$.
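As a quick numerical illustration of this trade-off (a Python sketch with assumed, purely illustrative values of $\alpha$ and $\theta$, not parameters advocated by this paper), one can evaluate $F$ and $P_{success}$ for a 20 km channel:

import numpy as np

L_ATT = 25.5                      # km, fiber attenuation length
l = 20.0                          # km, repeater spacing (example value)
eta = np.exp(-l / L_ATT)          # channel transmission eta(l, L_att)

theta = 0.01                      # effective interaction angle (assumed)
for alpha in (50.0, 100.0, 200.0):   # probe amplitudes (assumed)
    F = 0.5 * (1.0 + np.exp(-(1.0 - eta) * alpha**2 * (1.0 - np.cos(theta))))
    P_success = 1.0 - (2.0 * F - 1.0) ** (eta / (1.0 - eta))
    print(alpha, round(F, 3), round(P_success, 3))
# Larger alpha raises P_success but pushes F toward 1/2, and vice versa.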
Entanglement swapping and purification can also be performed utilizing the
same interaction as described above. A two-qubit entangling gate may be
employed for both steps. A measurement-free, deterministic controlled-phase
gate can be achieved with a sequence of four conditional displacements of a
coherent-state probe interacting with the two qubits. The conditional
displacements can be each decomposed into conditional rotations and
unconditional displacements, so that eventually there is no need for any
operations other than those already introduced above PvLc . The controlled-
phase rotation, single-qubit operations, and measurements are then sufficient
tools to implement the standard purification and swapping protocols deutsch .
## III Errors and error models
In the previous section, it was described that the photon losses in the
transmission channel cause a random phase-flip error in the initial entangled
state. However, considering a more realistic scheme, photon losses will also
cause local gate errors, and we should also take into account imperfect
memories (i.e., memories with finite decoherence times).
According to Ref. louis , dissipation in the quantum gates of our scheme will act on a two-qubit unitary operation $U_{ij}$ as
$\displaystyle U_{ij}\rho U_{ij}^{\dagger}\rightarrow
U_{ij}\left[(1-q_{g}(x))^{2}\rho+\right.$
$\displaystyle\left.q_{g}(x)(1-q_{g}(x))(Z_{i}\rho Z_{i}+Z_{j}\rho
Z_{j})+q_{g}^{2}(x)Z_{i}Z_{j}\rho Z_{j}Z_{i}\right]U_{ij}^{\dagger},$ (3)
with
$q_{g}(x)=\frac{1-e^{-x}}{2},$ (4)
the probability that each qubit suffers a $Z$ error, where
$x=\frac{\pi}{2}\frac{1-T^{2}}{\sqrt{T}(1+T)}$; here $T$ is the local
transmission parameter that incorporates photon losses in the local gates.
Note that this error model is considering a controlled-Z (CZ) gate operation.
For a controlled-not (CNOT) gate, Hadamard operations should be included and
$Z$ errors can be transformed into $X$ errors.
The errors resulting from the imperfect memories are similarly described by a
dephasing model, such that the qubit state $\rho_{A}$ of memory $A$ will be
mapped, after a storage time $t$, to
$\Gamma^{A}_{t}(\rho_{A})=(1-q_{m}(t/2))\rho_{A}+q_{m}(t/2)Z\rho_{A}Z,$ (5)
and an initial two-qubit Bell state between qubits $A$ and $B$ will be
transformed as razavi2
$\Gamma^{A}_{t}\otimes\Gamma^{B}_{t}(|\phi^{\pm}_{AB}\rangle\langle\phi^{\pm}_{AB}|)=(1-q_{m}(t))|\phi^{\pm}_{AB}\rangle\langle\phi^{\pm}_{AB}|+q_{m}(t)|\phi^{\mp}_{AB}\rangle\langle\phi^{\mp}_{AB}|,$
(6)
where $q_{m}(t)=(1-e^{-t/\tau_{c}})/2$ and $\tau_{c}$ is the memory
decoherence time.
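A minimal Python sketch of these two error channels, with assumed illustrative values for the local transmission $T$, the storage time, and $\tau_{c}$, is:

import numpy as np

def q_gate(T_local):
    """Z-error probability per qubit for a gate with local transmission T, Eq. (4)."""
    x = (np.pi / 2.0) * (1.0 - T_local**2) / (np.sqrt(T_local) * (1.0 + T_local))
    return 0.5 * (1.0 - np.exp(-x))

def q_memory(t, tau_c):
    """Memory dephasing probability after a storage time t (decoherence time tau_c), Eq. (5)."""
    return 0.5 * (1.0 - np.exp(-t / tau_c))

print(q_gate(0.99))           # ~0.008 for 1% local photon loss (assumed T)
print(q_memory(2e-4, 0.1))    # ~0.001 for 0.2 ms storage and tau_c = 100 ms (assumed)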
We shall encode our entangled pair in a qubit-repetition code and in a CSS
code. The advantage of these codes is that, due to their resemblance to
classical codes, the logical operations can simply be understood as the
corresponding operations applied upon each physical qubit individually. This
permits doing the entanglement connection (swapping) between different
repeater stations and the syndrome measurements (for error identification) at
the same time, such that the swappings can all be executed simultaneously
jiang . The correction operations will then be performed only on the initial
and the final qubits of the whole protocol. The encoded quantum repeater
protocol operates much faster than the non-encoded scheme, and, as a result,
still performs well even for rather short memory decoherence times.
## IV Quantum repeater with error correction
An $n$-qubit repetition code encodes one logical qubit using $n$ physical
qubits in the following way, $|\bar{0}\rangle=|0\rangle^{\otimes n}$ and
$|\bar{1}\rangle=|1\rangle^{\otimes n}$. These are the simplest QEC codes,
correcting only one type of error (in this case the $X$ error). A more general
family of codes that corrects any kind of error is that of the CSS codes. A CSS code
is constructed from two classical linear codes. Imagine $C_{1}$ is a linear
code that encodes $k_{1}$ bits in $n$ bits and $C_{2}$ a linear code that
encodes $k_{2}$ bits in $n$ bits, such that $C_{2}\subset C_{1}$, and $C_{1}$
and $C_{2}^{\bot}$ both correct $(d-1)/2$ errors ($C_{2}^{\bot}$ is the dual
of code $C_{2}$). The CSS quantum code is defined as the code encoding $k$
qubits, $k=k_{1}-k_{2}$, in $n$ qubits capable of correcting $(d-1)/2$ errors,
and is represented by $[n,k,d]$. (We analyze in this paper only codes with $k=1$; note that the letter $k$ is used in the rest of the paper for the number of rounds of purification. Repetition codes in this paper will also be represented by $[n,k,d]$, more precisely by $[n,1,n]$, since one qubit ($k=1$) is encoded in $n$ physical qubits and the error correcting code will correct $(n-1)/2$ errors.)
According to Ref. jiang , the complete protocol for a quantum repeater with
encoding should, in principle, work as follows: first, an encoded Bell pair
between two repeater stations is generated. Second, entanglement connection is
performed between neighboring stations. Imagine we want to connect the Bell
pairs ($A$,$B$) and ($C$,$D$). We should then realize a Bell measurement on
the qubits $B$ and $C$. More specifically, this measurement can be performed
using a CNOT operation between qubits $B$ and $C$ and a projective
$X$-measurement for qubit $B$ and a $Z$-measurement for qubit $C$. For the
encoded states, we should be able to perform an encoded version of the Bell
measurement. Due to the transversality property of the codes analyzed here,
the encoded version of this operation is the same as the operation applied
individually for each pair of the $2n$ physical qubits at every repeater
station. Provided the system is not noisy and the operations perfect, this
would be enough to distribute entanglement over the whole distance. However,
of course, this is not a realistic case. The remarkable feature of the encoded
scheme jiang is now that when performing the entanglement connections, we are
able to realize the syndrome measurements at the same time, since we are doing
projective measurements on the $2n$ physical qubits. After identifying the
error, error correction should be applied and it is guaranteed that the new
state is a highly entangled state. All the entanglement connection operations
will be performed simultaneously. However, it is important to know exactly
which final entangled state is generated. For this purpose, the measurements
at the entanglement connection steps will determine the Pauli frame of the
final entangled state jiang .
Figure 1: (Color online) Schematic repeater protocol with encoding. In step 1
an encoded entangled pair is distributed. First, at each repeater station
there are $2n$ physical qubits. Here $n=3$. In step 1(i), these qubits ($n$ of
station 1 and $n$ of station 2) are locally prepared in the encoded state
$|\bar{0}\rangle+|\bar{1}\rangle$. By sending and measuring an ancilla qubus
state, the state $|\bar{0}\bar{0}\rangle+|\bar{1}\bar{1}\rangle$ is
distributed among the two neighboring stations in step 1(ii). In step 2, two identical copies of the encoded entangled state are generated, and applying local operations between each of the $n$ physical qubits in stations 1 and 2 gives a purified encoded entangled state. In step 3, the encoded Bell states are
connected, applying Bell measurements individually on each of the $2n$
physical qubits. The outcomes of the Bell measurements on qubits ${a_{i}}$ and
${b_{i}}$ are used to identify the errors and the operations necessary to
recover the desired Bell state, or to determine the resulting “Pauli frame”
(step 4). Errors are represented by a lightning symbol. Red color (striped
lightning) indicates when memory errors occur for the first time, orange color
(empty lightning) symbolizes imperfect entangled states due to losses in the
transmission channel, and blue color (filled lightning) corresponds to errors
in the two-qubit gates.
The whole protocol, especially a version of it adapted to the use of
repetition codes, can now be divided into the following steps: 1) generation and
distribution of the encoded entangled states, 2) purification of the encoded
entangled states, 3) encoded entanglement connection, and 4) Pauli frame
determination, as illustrated in Fig. 1. Note that in this version, we first
encode and then purify, unlike Ref. jiang . However, as in Ref. jiang , first
the codewords are locally prepared and then, with the help of ancilla states,
the encoded entangled state is generated.
There are some peculiarities regarding the different classes of codes and
schemes. In the scheme of Jiang et al., first codeword states are locally
prepared together with $n$ purified physical Bell states between two repeater
stations. An encoded entangled pair is eventually obtained through $n$
pairwise teleportation-based CNOT gates between the local encoded states and
the corresponding halves of the Bell states. In contrast, our scheme for the
repetition code, as described below, does not require any teleportation-based
CNOT gates for the generation of an encoded entangled pair. Consequently,
neglecting the CNOT gates necessary for the local codeword generation, while
the scheme from Ref. jiang needs in total $4n$ CNOT gates to initially
generate a purified encoded entangled pair (for one round of purification), we
need CNOT gates only for the purification step. As a result, we use just $2n$
CNOT gates in the preparation of the purified encoded entangled pair. In
principle, even these purifications could be done without the use of full CNOT
gates pan ; PvL3 ; denis . Even more importantly, our protocol for the qubit-
repetition code uses only a single lossy channel per encoding block (for any
code size $n$), as opposed to the $n$ attenuated Bell pairs in Ref. jiang .
Nevertheless, for a version of the protocol based on the CSS codes, as
described in detail in Sec. VI, we shall follow a similar strategy to that of
Ref. jiang , by teleporting a logical qubit using already prepared Bell
states.
The effective logical error probability for each encoding block, after
encoding a qubit in an $[n,1,d]$ code and performing syndrome measurement and
correction, is
$Q_{n}=\sum_{j=\frac{d+1}{2}}^{n}\binom{n}{j}q_{eff}^{j}(1-q_{eff})^{n-j},$ (7)
where $q_{eff}$ is the effective error probability per physical qubit (more
details of this will be given below). So, the leading order of errors
occurring with probability $q_{eff}$ is reduced to $q_{eff}^{\frac{d+1}{2}}$
through the use of QEC.
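A short Python sketch of Eq. (7), evaluated for a few repetition codes $[n,1,n]$ with an assumed per-qubit error probability, is:

from math import comb

def logical_error(n, d, q_eff):
    """Eq. (7): probability that more than (d-1)/2 of the n physical qubits fail."""
    return sum(comb(n, j) * q_eff**j * (1.0 - q_eff)**(n - j)
               for j in range((d + 1) // 2, n + 1))

q_eff = 0.01                              # assumed per-qubit error probability
for n in (3, 5, 7):                       # repetition codes [n,1,n]
    print(n, logical_error(n, n, q_eff))  # leading order ~ q_eff**((n+1)/2)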
Since there are some subtleties regarding the repetition codes and the CSS
codes, the two families of codes are analyzed separately below.
## V Hybrid quantum repeater with repetition code against memory errors
Although the repetition code is one of the simplest error-correcting codes, it
is not a full quantum error correction code, as it can correct only one type
of error. With this in mind, the qubit repetition code will be used here to
protect the states against phase-flip ($Z$) errors originating from memory
imperfections. For this purpose, the gate errors are considered sufficiently
small such that the dominating error is caused by memory imperfections. (The reason that the $Z$ errors originating from the gate errors are not included in the error correction is that $Z$ and $X$ errors occur with equal probability in the imperfect CNOT gates. In this case, the scheme with the repetition code performs worse than the non-encoded scheme. For example, for the three-repetition code against $X$ errors, the probability of no error will be $(1-q_{z})^{3}(1-3q_{x}^{2}+2q_{x}^{3})$. Without encoding, the probability of no error is $(1-q_{z})(1-q_{x})$. If $q_{x}=q_{z}=q$, it is clear that $(1-q)^{3}(1-3q^{2}+2q^{3})\leq(1-q)(1-q)$.) We produce an encoded entangled
state using a qubit-repetition code $[n,1,d]$. For $n=3$, $|0\rangle$ is
encoded in $|\bar{+}\rangle=|+++\rangle$ and $|1\rangle$ is encoded in
$|\bar{-}\rangle=|---\rangle$, where
$|\pm\rangle=\frac{|0\rangle\pm|1\rangle}{\sqrt{2}}$. The encoded entangled
pairs are connected by applying an encoded Bell measurement between the two
half-nodes of the repeater station. This is done by applying pairwise CNOT
gates on qubits $\left\\{a_{i},b_{i}\right\\}$ and by measuring qubits $2a$ in
the logical basis $\left\\{|\bar{+}\rangle,|\bar{-}\rangle\right\\}$ and
qubits $2b$ in the logical basis
$\left\\{|\bar{0}\rangle,|\bar{1}\rangle\right\\}$, as shown in Fig. 1. The
logical computational basis is defined as
$|\bar{0}\rangle=\frac{|\bar{+}\rangle+|\bar{-}\rangle}{\sqrt{2}}=\frac{1}{2}(|000\rangle+|011\rangle+|101\rangle+|110\rangle)$
and
$|\bar{1}\rangle=\frac{|\bar{+}\rangle-|\bar{-}\rangle}{\sqrt{2}}=\frac{1}{2}(|111\rangle+|100\rangle+|010\rangle+|001\rangle)$,
and it is straightforward to see that by measuring each physical qubit in the
$\left\\{|0\rangle,|1\rangle\right\\}$ basis, if the output is an odd number
of $|0\rangle$, the logical qubit is in the state $|\bar{0}\rangle$,
otherwise, the logical qubit is in the state $|\bar{1}\rangle$. Following this
procedure, we will not only connect the encoded entangled states, but we can
also identify if an error occurred.
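A tiny Python sketch of this logical readout rule for the $n=3$ code (the decoding convention is exactly the parity rule quoted above):

def decode_logical(outcomes):
    """Map n physical {0,1} measurement outcomes to the logical bit: an odd
    number of 0 outcomes means logical 0, otherwise logical 1."""
    zeros = sum(1 for o in outcomes if o == 0)
    return 0 if zeros % 2 == 1 else 1

print(decode_logical((0, 0, 0)))  # -> 0 (three zeros)
print(decode_logical((0, 1, 1)))  # -> 0 (one zero)
print(decode_logical((1, 1, 1)))  # -> 1 (no zeros)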
Ignoring the two-qubit gate for the moment, the probability that a logical
qubit suffers an error after encoding and applying error correction is given
by Eq. (7), where $d=3$ and $q_{eff}=q_{m}(t/2)$:
$Q_{3}=q^{3}_{m}(t/2)+3q^{2}_{m}(t/2)(1-q_{m}(t/2)).$ (8)
For a two-qubit encoded entangled state, the probability that no error occurs is then given by (note that this is exactly the same relation that was shown previously in Eqs. (5) and (6); there $(1-q_{m}(t))=(1-q_{m}(t/2))^{2}+q^{2}_{m}(t/2)$)
$\mathcal{P}_{3}=(1-Q_{3})^{2}+Q_{3}^{2}.$ (9)
The final state after encoding, syndrome measurement, and correction
becomes (note that for the three-qubit phase-flip code, a logical $\bar{Z}=ZZZ$ operation on the codewords should be seen as a logical $\bar{X}=XXX$ operation on the computational basis)
$\displaystyle\mathcal{P}_{3}\left[F|\bar{\phi}^{+}\rangle\langle\bar{\phi}^{+}|+(1-F)|\bar{\psi}^{+}\rangle\langle\bar{\psi}^{+}|\right]+$
(10)
$\displaystyle(1-\mathcal{P}_{3})\left[F|\bar{\phi}^{-}\rangle\langle\bar{\phi}^{-}|+(1-F)|\bar{\psi}^{-}\rangle\langle\bar{\psi}^{-}|\right],$
where the encoded versions of the Bell states are represented by
$|\bar{\phi}^{\pm}\rangle=(|\bar{0}\rangle|\bar{0}\rangle\pm|\bar{1}\rangle|\bar{1}\rangle)/\sqrt{2}$
and
$|\bar{\psi}^{\pm}\rangle=(|\bar{0}\rangle|\bar{1}\rangle\pm|\bar{1}\rangle|\bar{0}\rangle)/\sqrt{2}$.
Although the encoding protects the original state against memory
imperfections, the same does not necessarily happen for the two-qubit gate
imperfections. In fact, the effect of these errors may become even stronger in
the encoded scheme, affecting, in particular, the purification and swapping
steps. We should be very careful here, since the resulting state after the
two-qubit interaction will no longer necessarily remain a mixture of Bell
states. By having this in mind and the error model in Eq. (3), we are able to
estimate the probability of success and the fidelity of the purification and
swapping steps. Before getting into details we should remember that, assuming
perfect two-qubit gates and an initial state of the form
$A|\phi^{+}\rangle\langle\phi^{+}|+B|\phi^{-}\rangle\langle\phi^{-}|+C|\psi^{+}\rangle\langle\psi^{+}|+D|\psi^{-}\rangle\langle\psi^{-}|$
with $|\phi^{\pm}\rangle=(|00\rangle\pm|11\rangle)/\sqrt{2}$ and
$|\psi^{\pm}\rangle=(|01\rangle\pm|10\rangle)/\sqrt{2}$, after purification or
swapping, the state still has the same form, but with new coefficients given
as follows bennett ; deutsch ,
$\displaystyle A^{\prime}_{pur}=\frac{A^{2}+D^{2}}{P_{pur}},\quad\quad
B^{\prime}_{pur}=\frac{2AD}{P_{pur}},$ $\displaystyle
C^{\prime}_{pur}=\frac{B^{2}+C^{2}}{P_{pur}},\quad\quad
D^{\prime}_{pur}=\frac{2BC}{P_{pur}},$ (11) $P_{pur}=(A+D)^{2}+(B+C)^{2},$
(12) $\displaystyle A^{\prime}_{swap}=A^{2}+B^{2}+C^{2}+D^{2},\quad
B^{\prime}_{swap}=2(AB+CD),$ $\displaystyle
C^{\prime}_{swap}=2(AC+BD),\quad\quad\quad D^{\prime}_{swap}=2(BC+AD),$ (13)
and we will use nadja
$P_{swap}\equiv 1.$ (14)
Considering that the final state will be a complicated mixed state, especially
for higher orders of encoding, we include the gate errors by treating these
functions in a worst-case scenario. According to this, lower bounds for
fidelity and probability of success of purification and for the fidelity of
swapping are then given by
$P_{pur,lower}(A,B,C,D)=P_{pur}(A,B,C,D)(1-q_{g}(x))^{4n},$ (15)
$F_{pur,lower}(A,B,C,D)=A^{\prime}_{pur}(A,B,C,D)(1-q_{g}(x))^{4n},$ (16)
$F_{swap,lower}(A,B,C,D)=A^{\prime}_{swap}(A,B,C,D)(1-q_{g}(x))^{2n}.$ (17)
For further details, see Appendix A.
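As a compact Python sketch of Eqs. (11)-(17), the following applies one purification step and one swapping step to a Bell-diagonal state $(A,B,C,D)$ and attaches the worst-case gate-error factors; the numerical values of $F$, $n$, and $q_{g}$ are assumed for illustration only.

def purify(A, B, C, D):
    """Purification map, Eqs. (11)-(12); returns the new coefficients and P_pur."""
    P = (A + D)**2 + (B + C)**2
    return ((A**2 + D**2) / P, 2*A*D / P, (B**2 + C**2) / P, 2*B*C / P), P

def swap(A, B, C, D):
    """Entanglement-swapping map, Eq. (13); P_swap = 1."""
    return (A*A + B*B + C*C + D*D, 2*(A*B + C*D), 2*(A*C + B*D), 2*(B*C + A*D))

F, n, q_g = 0.9, 3, 0.001                 # assumed fidelity, code size, gate error
state = (F, 1.0 - F, 0.0, 0.0)            # phase-flip mixture, as in Eq. (1)

state_p, P_pur = purify(*state)
P_pur_lower = P_pur * (1.0 - q_g)**(4 * n)        # Eq. (15)
F_pur_lower = state_p[0] * (1.0 - q_g)**(4 * n)   # Eq. (16)

state_s = swap(*state_p)
F_swap_lower = state_s[0] * (1.0 - q_g)**(2 * n)  # Eq. (17)

print(P_pur_lower, F_pur_lower, F_swap_lower)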
We purify previously encoded states, but, since during the entanglement
distribution the qubits are already subject to errors, we do error correction
before purification. For this, we first apply a Hadamard operation on the
qubits changing $|+\rangle$ back to $|0\rangle$, and similarly, $|-\rangle$
back to $|1\rangle$, and then we measure the qubits with the aid of an ancilla
state, employing a majority voting. In order to do this, we use a qubus
interacting with the atoms in the cavities. The qubus is measured in the $x$
quadrature, allowing us to find out whether an error occurred, which can then be corrected.
A similar procedure is performed in the implementation of the encoded scheme,
which will become clear soon. Since error correction occurs deterministically
and locally at each repeater station, this does not affect the generation
rates. The purification protocol between the encoded states is very similar to
the original version from Ref. deutsch . First, local operations are applied
on each physical qubit. At side $A$, the $2n$ physical qubits are subject to
the transformation $|0\rangle\rightarrow(|0\rangle+i|1\rangle)/\sqrt{2}$ and
$|1\rangle\rightarrow(i|0\rangle+|1\rangle)/\sqrt{2}$. At side $B$, the $2n$
physical qubits are transformed as
$|0\rangle\rightarrow(|0\rangle-i|1\rangle)/\sqrt{2}$ and
$|1\rangle\rightarrow(-i|0\rangle+|1\rangle)/\sqrt{2}$. On both sides CNOT
operations are applied transversally on each $n$ physical qubits from the
logical control and target qubits. The physical target qubits are measured in
the computational basis, and the logical qubits are identified (remember that
for the repetition code, an odd number of 0s corresponds to the logical state $|\bar{0}\rangle$ and an even number of 0s refers to the logical state $|\bar{1}\rangle$). Whenever the logical qubits measured on both sides coincide, we keep the resulting state, which is a purified encoded entangled state.
The final fidelity of the encoded entangled state, after $k$ rounds of
purification and $N-1$ connections (swappings), is given as a lower bound by
$F_{final}=\underbrace{A^{\prime}_{swap}(...A^{\prime}_{swap}(}_{(\log_{2}{N})-\text{times}}\underbrace{A^{\prime}_{pur}(...A^{\prime}_{pur}}_{k-\text{times}}(A_{eff}(F,t_{k}),B_{eff}(F,t_{k}),C_{eff}(F,t_{k}),D_{eff}(F,t_{k})))))(1-q_{g}(x))^{2n((N-1)+2(2^{k}-1))},$
(18)
where $A_{eff}(F,t)=\mathcal{P}_{n}(t)F$,
$B_{eff}(F,t)=(1-\mathcal{P}_{n}(t))F$,
$C_{eff}(F,t)=\mathcal{P}_{n}(t)(1-F)$,
$D_{eff}(F,t)=(1-\mathcal{P}_{n}(t))(1-F)$, $N=L/L_{0}$ with $L$ the total
distance and $L_{0}$ the fundamental distance between repeater stations,
$T_{0}=2L_{0}/c$ is the minimum time it takes to successfully generate
entanglement over $L_{0}$, and $c$ is the speed of light in an optical fiber
($2\times 10^{8}$ m/s). More details can be found in Appendix A. We should be
careful when defining the dephasing times $t_{k}$. We make use of as many
spatial resources as needed to minimize the required temporal resources, such
that the time considered in Eq. (18), $t_{k}=(k/2+1)T_{0}$, is the minimum
time it takes for the entanglement distribution and $k$ rounds of entanglement
purification to succeed. Notice here that $A_{eff}(F,t_{k})$ is smaller than
the fidelity that we obtain after entanglement distribution and error
correction but before purification, because $t_{k}\geq T_{0}$. The probability
of success for one round of purification will be estimated as
$P_{1}=P_{pur,lower}(A_{eff}(F,t_{1}),B_{eff}(F,t_{1}),C_{eff}(F,t_{1}),D_{eff}(F,t_{1}))$.
In the case of two rounds of purification, the probability of success will be
given by
$P_{2}=P_{pur}(A_{eff}(F,t_{2}),B_{eff}(F,t_{2}),C_{eff}(F,t_{2}),D_{eff}(F,t_{2}))\times$
$P_{pur}(A^{\prime}_{pur}(A_{eff}(F,t_{2}),B_{eff}(F,t_{2}),C_{eff}(F,t_{2}),D_{eff}(F,t_{2})),...,$
$D^{\prime}_{pur}(A_{eff}(F,t_{2}),B_{eff}(F,t_{2}),C_{eff}(F,t_{2}),D_{eff}(F,t_{2})))(1-q_{g}(x))^{12n}$.
The time spent for the encoding was neglected here, since it will be much
shorter than the time spent in classical communication between repeater
stations.
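A minimal, self-contained sketch of how the nested expression in Eq. (18) can be evaluated numerically is given below; it reuses the ideal purification and swapping maps of Eqs. (11)–(14) and treats $F$, $\mathcal{P}_{n}(t_{k})$, and $q_{g}(x)$ as plain input numbers (their explicit forms are given earlier in the text).

```python
import math

def _purify(A, B, C, D):
    P = (A + D)**2 + (B + C)**2
    return (A*A + D*D) / P, 2*A*D / P, (B*B + C*C) / P, 2*B*C / P

def _swap(A, B, C, D):
    return A*A + B*B + C*C + D*D, 2*(A*B + C*D), 2*(A*C + B*D), 2*(B*C + A*D)

def F_final_lower(F, P_dephase, qg, n, N, k):
    """Lower bound of Eq. (18): k purification rounds at the first nesting level,
    then log2(N) swapping levels, times the worst-case gate-error factor."""
    A = P_dephase * F
    B = (1 - P_dephase) * F
    C = P_dephase * (1 - F)
    D = (1 - P_dephase) * (1 - F)
    for _ in range(k):                       # k rounds of purification
        A, B, C, D = _purify(A, B, C, D)
    for _ in range(int(math.log2(N))):       # N - 1 swappings done in log2(N) levels
        A, B, C, D = _swap(A, B, C, D)
    return A * (1 - qg)**(2 * n * ((N - 1) + 2 * (2**k - 1)))

# Example: N = 64 segments of L0 = 20 km (L = 1280 km), 3-qubit repetition code
print(F_final_lower(F=0.98, P_dephase=0.99, qg=1e-3, n=3, N=64, k=2))
```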
How precisely the encoding protocol can be implemented is explained below.
Implementation
The implementation for the repeater with encoding, omitting purification, can
be described as follows. In the unencoded scheme, a probe beam interacts with
two qubits at two neighboring repeater stations. With encoding, it is crucial
to observe (see below) that we may still use only one probe beam; however, $n$
qubits per half node are needed. Initially, the qubits are all in the state
$((|0\rangle+|1\rangle)/\sqrt{2})^{\otimes n}$. It is important for the
encoded scheme that the generation of the locally created encoded states
occurs deterministically, since otherwise the whole protocol would again become too slow. (This is similar to Ref. jiang , where GHZ states are first produced locally and then teleported into the initially created nonlocal Bell pairs. Note that in Ref. jiang , the purification of these initially distributed Bell pairs is also made near-deterministic by using sufficiently many temporal and spatial resources. We shall follow a similar strategy, but in our case for the initial distributions; see Sec. VII.) Through
interaction of the qubits with a coherent state with sufficiently large
amplitude, $\beta\gg 1$ with $\beta$ real, it is possible to prepare the
$n$-qubit state $(|0\rangle^{\otimes n}+|1\rangle^{\otimes n})/\sqrt{2}$, for
example, employing homodyne measurements. This works because the interaction
between qubits and qubus (probe) functions as a controlled-phase rotation and
we are, in principle, able to deterministically distinguish between the phase-
rotated components of $|\beta\rangle$ by measuring the $x$ quadrature (that is
perpendicular to the direction of the phase rotation). For $\beta\gg 1$, this
can be even achieved in an almost error-free fashion. By preparing the qubits
in this way, the transmitted qubus beam (between two stations) will interact
only with one qubit pair from the chains of $n$ qubits. More specifically, let
us take a look at the 3-qubit repetition code as illustrated in Fig. 2. The
qubits are initiated in the state
$\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)^{\otimes 3}$. As shown in
step 2.1, this state interacts with a coherent state $|\beta\rangle$, and this
interaction is described by
$\displaystyle
U_{int}^{1}\left(\theta\right)U_{int}^{2}\left(2\theta\right)U_{int}^{3}\left(-3\theta\right)\left[\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)^{\otimes
3}|\beta\rangle\right]=$
$\displaystyle\frac{1}{2\sqrt{2}}\left[\left(|000\rangle+|111\rangle\right)|\beta\rangle+|001\rangle|\beta
e^{3i\theta}\rangle+|010\rangle|\beta e^{-2i\theta}\rangle+\right.$
$\displaystyle\left.|100\rangle|\beta e^{-i\theta}\rangle+|110\rangle|\beta
e^{-3i\theta}\rangle+|101\rangle|\beta e^{2i\theta}\rangle+|011\rangle|\beta
e^{i\theta}\rangle\right].$ (19)
By measuring the $x$ quadrature of the probe beam, the state
$\frac{|000\rangle+|111\rangle}{\sqrt{2}}$ is deterministically generated up
to a known phase shift nemoto and local bit flip operations. In the next step
2.2, a probe state $|\alpha\rangle$ interacts with only one qubit at each
repeater station. This time, after performing a USD measurement on the qubus,
as was explained in Sec. II, the entangled encoded state
$\frac{|000\rangle|000\rangle+|111\rangle|111\rangle}{\sqrt{2}}$ (20)
is prepared. (Note here that there is an important difference between the
state preparation in step 2.1 and in step 2.2. In the first step, the coherent
state $|\beta\rangle$ interacts with cavities that are locally positioned next
to each other, and by using a sufficiently bright beam, $\beta\gg 1$, homodyne
measurements in the $x$ quadrature will be enough to deterministically prepare
the state up to a known phase shift and local bit flip operations. In the
second step, the probe beam interacts with two cavities spatially separated
from each other. In this case, the effect of photon losses in the channel
depends on the amplitude of the beam, and we cannot make $\alpha$ arbitrarily
large. Consequently, the generation of the entangled state must become non-
deterministic.) To prepare an encoded entangled state in the conjugate basis,
we just have to apply Hadamard operations on the physical qubits immediately
after the local codeword state has been produced and the probe beam has
interacted with one of the qubits. We assumed here that the codeword state at
side $B$ is prepared only at the very moment when the probe qubus arrives at
this side, thus avoiding memory dephasing during the transmission time. For
larger codes ($n>3$), similar sequences of interactions can be found. However,
considering the typical size of $\theta$ and the number of interactions, it
will be more practical to use more than one local qubus beam for the
preparation of the encoded state. For more details, see Appendix B.
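The phase pattern of Eq. (19) can be checked with a few lines of code. We assume here (a convention consistent with Eq. (19)) that a qubit in $|0\rangle$ rotates the qubus by $+\theta_{j}/2$ and a qubit in $|1\rangle$ by $-\theta_{j}/2$ when the programmed rotation angle is $\theta_{j}$, so that $|000\rangle$ and $|111\rangle$ leave the probe unrotated while every other bit string acquires a distinct rotation.

```python
from itertools import product

def qubus_phase(bits, angles):
    """Total probe rotation, assuming each qubit contributes +angle/2 for |0>
    and -angle/2 for |1> (our reading of the convention behind Eq. (19))."""
    return sum(a / 2 if b == 0 else -a / 2 for b, a in zip(bits, angles))

theta = 1.0                               # work in units of theta
angles = [theta, 2 * theta, -3 * theta]   # U^1(theta) U^2(2 theta) U^3(-3 theta)
for bits in product((0, 1), repeat=3):
    print(bits, qubus_phase(bits, angles))
# (0,0,0) and (1,1,1) give 0, while the six remaining strings give the distinct
# multiples {3, -2, -1, -3, 2, 1} of theta, reproducing Eq. (19); an x-quadrature
# measurement therefore heralds |000>+|111> up to known phase shifts and bit flips.
```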
Although the scheme presented here has a fairly simple description, its
experimental implementation can be technologically challenging. For an almost
error-free and deterministic scheme, there is the constraint
$\beta\theta^{2}\gg 1$ (if we allowed for an almost error-free but probabilistic scheme, we would instead have $\beta\theta\gg 1$ using $p$-quadrature measurements), which for small phase shifts of $\theta\sim 10^{-2}$ requires
bright beams or even ultrabright beams (i.e., pulses with mean photon number
larger than $10^{8}$). Notice that the probability of error caused by the
nonorthogonality of the coherent states with finite amplitude is orders of
magnitude smaller than the other errors considered here ($P_{error}<10^{-5}$
for $\beta\theta^{2}>9$ nemoto ) and will be neglected in our analysis.
Moreover, in our scheme the main contribution of losses is through the
transmission channel; losses that happen in the interaction between the qubus
state $|\beta\rangle$ and the atomic qubits are strictly local and hence will
be neglected here. (Note that unlike these encoding and entangling steps, for the purification and swapping, local losses will be included, as the latter require, in the present protocol, full CNOT gates, which are most sensitive to the local losses.) Note that prior to entanglement purification, the only
remaining elements which are probabilistic and which contribute to the
infidelity of the encoded pairs are the postselection of the single qubus beam
and its lossy transmission, respectively. The postselection is then achieved
through a USD measurement (Fig. 2 and Eq. (2)). In the protocol of Ref. jiang
, in contrast, first, all of the $N_{0}>n$ Bell pairs are subject to fiber
attenuations through their channel transmissions (see App. D of Ref. jiang ),
prior to their purifications and the encoding steps (in precisely this order,
as opposed to that of our protocol with first encoding, second, transmission,
and third, purification). As a result, in our scheme, we minimize the effect
of the lossy channel transmissions and the corresponding need for entanglement
purification.
Figure 2: (Color online) Hybrid quantum repeater with repetition code. The
qubits are initiated in the state $\left(|0\rangle+|1\rangle\right)^{\otimes
3}$; the normalization factor is omitted. Step 2.1: interacting the qubits
with a coherent state $|\beta\rangle$. By measuring $|\beta\rangle$, the state
$|000\rangle+|111\rangle$ is prepared. Step 2.2: a probe state
$|\alpha\rangle$ interacts with only one qubit at each repeater station. By
measuring the qubus, the entangled encoded state
$|000\rangle|000\rangle+|111\rangle|111\rangle$ is prepared.
## VI Hybrid quantum repeater with CSS code against memory and gate errors
One important property of a CSS code is that the encoded version of many
important gates can be implemented transversally, i.e., the encoded version of
the gate is simply the same gate applied individually to each physical qubit
in the code. For example, the encoded version of the $X$ operation, i.e., the
logical $X$, is represented by $\bar{X}$ with $\bar{X}=X^{\otimes n}$. Not
only does the $X$ operation have a transverse implementation, but so do any
Pauli and Clifford gates such as $Y$, $Z$, Hadamard, CNOT, and CZ. These are
exactly the operations we will need in our scheme. (An example of a gate that cannot be implemented transversally in a CSS code is the $\pi/8$ gate nielsen , which is a non-Clifford gate.)
Moreover, the encoded versions of some measurements (for instance, in the eigenbasis of $X$, $Y$, and $Z$) also have a transverse implementation.
Consider a measurement in the $\bar{Z}$ basis on an arbitrary encoded state
$a|\bar{0}\rangle+b|\bar{1}\rangle$. The resulting state will be with
probability $\left|a\right|^{2}$ the state $|\bar{0}\rangle$ and with
probability $\left|b\right|^{2}$ the state $|\bar{1}\rangle$. Similarly, if we
measure all the $n$ qubits in the $Z$ basis, we obtain the Hamming weight of the state and, consequently, we get the correct result. (In the present context, the Hamming weight is the number of physical qubits that differ from 0 in a state. The CSS codes have the nice property that if one of the codewords, for example $|\bar{0}\rangle$, has an even Hamming weight, then the complementary codeword, in this example $|\bar{1}\rangle$, has odd Hamming weight.)
Exploring then the possibility of the transverse implementation of encoded
operations and measurements, the protocol for the quantum repeater with
encoding using CSS codes can be executed similarly to what was explained
above. The main difference now is the order of the encoding and the
purification steps. For the repetition codes, purification occurs after
encoding. The same procedure could be applied for the CSS codes, but in this
case the distance $d$ is usually smaller than the number of qubits $n$, and
therefore, when more than $\frac{d-1}{2}$ errors occur, the state is not
necessarily defined in the codeword space anymore. This causes the
purification protocol to work extremely inefficiently. Consequently, for the
CSS codes we follow the same strategy as in Ref. jiang : the encoded entangled
state is prepared by teleporting a logical qubit generated locally using $n$
already prepared purified Bell states distributed between the repeater
stations.
Assuming that the error probabilities $q_{g}$, $q_{m}$, and $(1-F)$ are
sufficiently small, as in Ref. jiang , we estimate an effective error
probability per physical qubit as
$q_{eff}=3q_{m}(t^{\prime}_{k}/2)+2q_{g}(x)+(1-F).$ (21)
If purification occurs, $(1-F)$ will be replaced by $(1-F_{k})$. Here, $F_{k}$
is the fidelity after $k$-rounds of purification using the initial state of
Eq. (1), the purification protocol from Ref. deutsch , and the gate error from
Eq. (3); an explicit formula and further details are presented in Appendix C.
We further exploit that the memory decaying time is
$t^{\prime}_{k}=(k+1)T_{0}/2$, assuming that for the distribution of the
entanglement (and for purification) the qubits suffer memory dephasing just
during the time it takes for classical communication of a successful
distribution (purification) event.
The effective logical error probability is given by combining Eqs. (7, 21).
The final fidelity is given by
$F_{final}=(1-Q_{n})^{2N}.$ (22)
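To make the error bookkeeping explicit, a small sketch is given below; the dephasing probability $q_{m}$, the gate error $q_{g}$, and the logical error probability $Q_{n}$ (Eqs. (3), (6), and (7), which appear earlier in the paper) are taken as plain input numbers rather than recomputed here.

```python
def q_eff(q_m, q_g, F):
    """Effective Z-error probability per physical qubit, Eq. (21).
    q_m stands for q_m(t'_k/2) and q_g for q_g(x); for purified pairs
    the term (1 - F) is replaced by (1 - F_k)."""
    return 3 * q_m + 2 * q_g + (1 - F)

def F_final_css(Q_n, N):
    """Final fidelity of the CSS-encoded repeater, Eq. (22); Q_n is the
    effective logical error probability obtained from Eqs. (7) and (21)."""
    return (1 - Q_n)**(2 * N)

# Example with placeholder numbers (not the parameter set used in the figures):
print(q_eff(q_m=1e-3, q_g=5e-4, F=0.99), F_final_css(Q_n=1e-3, N=64))
```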
Although we do not propose an explicit implementation for the CSS codes, the
generation of the codeword states from the CSS codes (in the form of cluster
states), using weak nonlinearities similar to those employed in the HQR, was
proposed in Ref. louis2 . However, that scheme, in its most practical
manifestation, is probabilistic. This probabilistic feature will drastically
decrease the generation rates and require longer-lasting memories (suppressing
the benefit of the encoding against memory errors). Instead, in Ref. lin1 ,
the codewords are created in a deterministic fashion using a similar hybrid
system. However, the codeword cluster states generated in this proposal are in
fact photonic states, which work badly as a memory. Nonetheless, in principle,
a similar approach appears feasible also in the present context of CSS
encoding for the HQR using atomic memories. We leave a detailed proposal of an
explicit implementation of a CSS HQR for future research.
## VII Rate analysis
Complementary to our analysis in Ref. nadja , the pair creation rates will now
be calculated, assuming, as in Refs. jiang ; perseguers ; razavi2 ; munro2 ,
that there are sufficiently many initial resources, such that it is (almost)
guaranteed that at least one entangled pair will be successfully generated
between two neighboring repeater stations. In other words, for instance, for
the repetition codes, we assume $s\gg 1$, where $s$ is the number of memory
blocks in each half repeater station. In every block there are $n$ memory
qubits, conditionally prepared in the state
$\frac{|\bar{0}\rangle+|\bar{1}\rangle}{\sqrt{2}}$. To give an example, in
Fig. 2, the case of one block, $s=1$, and three physical qubits, $n=3$, is
shown. Assuming that we have to distribute entanglement only for the top
physical qubits of the blocks, the average number of encoded entangled pairs
generated at time $T_{0}$ will be $sP_{0}$, where $P_{0}$ is the probability
of success of generating an entangled pair, in our case given by Eq. (2). The
rate of generating an encoded entangled pair would be given then by
$\frac{sP_{0}}{T_{0}}$. The rate of successfully generating an encoded
entangled pair per each of the $sn$ memories employed in every half node of
the repeater is then $\frac{P_{0}}{nT_{0}}$. Since the swapping step is taken
to be deterministic, this can be considered also as the rate of successful
generation of an encoded entangled pair over the total distance $L$ (without
purification).
For the CSS codes, let us use $s^{\prime}$ as the total number of physical
qubits available at each half node of the repeater station which are involved
in the distribution of the entangled states. The number of distributed entangled pairs is then on average given by $s^{\prime}P_{0}$. Since for each encoding block we need
at least $n$ entangled pairs to teleport the logical qubits, the average
number of encoded entangled pairs is calculated as
$\frac{s^{\prime}P_{0}}{n}$. The rate to generate an encoded pair is then
given by $\frac{s^{\prime}P_{0}}{nT_{0}}$. Note that we use $s^{\prime}$ both
for the “flying” and “stationary” (memory) qubits, such that the rate to
generate an encoded entangled pair per memory can be written as
$\frac{P_{0}}{nT_{0}}$. This is in fact an overestimation of the number of
stationary memories, since not all physical qubits involved need to be memory
qubits.
To summarize, the rate of successful generation of an entangled pair over a
total distance $L$, divided into segments $L_{0}$, per each memory employed in
every half node of the repeater, for both repetition and CSS codes, is given
by
$R_{n}=\frac{P_{0}}{nT_{0}}.$ (23)
Notice here that for the scheme without purification, the memory and gate
errors will affect the final fidelity of the entangled state, but will have no
direct impact on the rates.
Depending on the intended application of the resulting long-distance entangled pair, purification should be included in the QEC protocol. The rates
including purification are described by
$R_{pur,n}=\frac{P_{0}P_{k}}{n2^{k}(k/2+1)T_{0}},$ (24)
where $P_{k}$ is the probability of success for the $k$-th purification step.
For the repetition code, $P_{k}=P_{pur,lower,k}$, defined in Eq. (31) in
Appendix A. In the case of CSS codes, $P_{1}$ is defined using Eq. (37); for
more rounds of purification it is possible to deduce more general expressions
for $P_{k}$ with the help of the results from Appendix C. The factor $2^{k}$ appears because each round of purification requires twice as many entangled pairs at the start. The time it takes to produce an encoded purified entangled pair is $(k/2+1)T_{0}$; $T_{0}$ is the time it takes to successfully distribute the entangled pairs and $kT_{0}/2$ is the time spent to communicate that purification succeeded. Compared to the time spent on
classical communication between repeater stations, the time needed for the
local operations is much shorter, and so these operation times are neglected
here. Since all the swappings happen at the same time, purification will occur
only at the first nesting level, as in Ref. nadja . Let us now discuss the
rates that we obtained.
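The rate formulas of Eqs. (23) and (24) are straightforward to evaluate; the sketch below does so, with the elementary success probability $P_{0}$ (Eq. (2)) and the purification success probability $P_{k}$ treated as inputs, and the example numbers chosen only to illustrate the geometry ($L_{0}=20$ km, $c=2\times 10^{8}$ m/s, hence $T_{0}=2L_{0}/c=2\times 10^{-4}$ s).

```python
def rates(P0, n, T0, k=0, P_k=1.0):
    """Entangled-pair rates per memory qubit, Eqs. (23)-(24)."""
    R_n = P0 / (n * T0)                                  # Eq. (23), no purification
    R_pur = P0 * P_k / (n * 2**k * (k / 2 + 1) * T0)     # Eq. (24), k rounds
    return R_n, R_pur

T0 = 2 * 20e3 / 2e8          # T0 = 2 L0 / c for L0 = 20 km
# P0 and P_k below are placeholders, not values computed from Eq. (2):
print(rates(P0=0.1, n=7, T0=T0, k=2, P_k=0.8))
```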
Figure 3: (Color online) Rates for a HQR without purification (solid line),
with one round of purification (dashed line), and two rounds of purification
(dotted line) in the first nesting level with $L=1280$ km, $L_{0}=20$ km,
$\tau_{c}=0.1$ s, and $1-T=0.1\%$. Blue curves are the scheme without
encoding, red (thick) curves for the scheme with encoding in the $[3,1,3]$
code, and black (thin) curves for the scheme with encoding in the $[7,1,3]$
code.
First, in Fig. 3, the rates are shown to generate an entangled pair for a
total distance of $L=1280$ km, $L_{0}=20$ km, without purification, and with
one and two rounds of purification. Here we considered an imperfect memory
with decoherence time $\tau_{c}=0.1$ s (currently, experimentally available memory times are of the order of ms for electronic spins and of the order of s for nuclear spins), while the parameter of local losses in the CNOT gates is
$1-T=0.1\%$. We compared the performance of various schemes, namely, encoding
with the three-qubit repetition code $[3,1,3]$, and with the Steane code
$[7,1,3]$, and without encoding. We will stick in the rest of our analysis to
two rounds of purification, as typically this turned out to be the best
approach; however, as can be seen in Fig. 3, it is not always the best choice.
We find that in order to achieve $F_{final}>0.9$, encoding is absolutely
necessary in this parameter regime, and that a suitable code is the Steane
code.
In Fig. 4, we plotted the rates for $L=1280$ km, $L_{0}=20$ km, and the
following codes: 3-qubit repetition $[3,1,3]$, 7-qubit repetition $[7,1,7]$,
51-qubit repetition $[51,1,51]$, Steane $[7,1,3]$, Bacon-Shor $[25,1,5]$, and
Golay $[23,1,7]$; and for comparison, also the non-encoded scheme. We plotted
the rates for different values of $\tau_{c}$ and $T$: $1-T=0.1\%$ (top),
$1-T=0.01\%$ (bottom), $\tau_{c}=0.01$ s (left), $\tau_{c}=0.1$ s (center),
and $\tau_{c}=1$ s (right). As expected, repetition codes (which cannot
correct gate errors) perform better when the gate errors are sufficiently
small ($1-T=0.01\%$). We also observe that for $\tau_{c}\leq 0.01$ s, the CSS
codes have a very bad performance. However, $\tau_{c}=0.1$ s, even with
$1-T=0.1\%$, is already enough to achieve high final fidelities
($F_{final}>0.9$) using the CSS codes. It is interesting to notice that for
the parameters presented here, the $[3,1,3]$ code always performs better than
the other repetition codes. This can be understood by noting that the bigger
the repetition code is, the more susceptible it is to gate errors. We would
like to mention that even if we allow for decoherence times of the order of
$\tau_{c}=10$ s, the HQR still cannot tolerate gates with a loss parameter of the
order of $1-T=1\%$. In addition, a scheme with $1-T=0.01\%$ performs almost
identically to a scheme with perfect gates, $1-T=0$.
It is clear that there are trade-offs between the efficiency of the codes
versus the values of decoherence time and the local gate error parameter. As
usually observed in QEC schemes, to make the code more complicated and bigger
(i.e., use larger spaces and bigger circuits) would in principle suppress the
errors more effectively; however, all the extra resources and gates are also
subject to errors, so one not only reduces the existing errors, but also
introduces new sources of errors. Considering $F_{final}=0.95$, for
$\tau_{c}=0.01$ s and $1-T=0.01\%$, the three-repetition code (see Fig. 4,
left bottom) achieves a rate of about 24 pairs per second per employed memory
qubit. For $\tau_{c}=0.1$ s and $1-T=0.1\%$, and the same final fidelity, the
$[23,1,7]$ code (see Fig. 4, center top) can achieve a rate of about 6 pairs
per second per memory. However, for $\tau_{c}=1$ s and $1-T=0.1\%$, the
$[7,1,3]$ code (see Fig. 4, right top) can achieve rates of about 14 pairs per
second per memory. Note that the final fidelities presented here are those
exactly obtained at the time when the entangled pair was distributed over the
entire distance $L$. Consequently, the dephasing errors due to memory
imperfections will continue to degrade the fidelity whenever the final pair is
not immediately consumed and used in an application.
Figure 4: (Color online) Rates for a HQR with two rounds of purification in
the first nesting level with $L=1280$ km, $L_{0}=20$ km, $\tau_{c}=0.01$ s
(left), $\tau_{c}=0.1$ s (center), $\tau_{c}=1$ s (right), $1-T=0.1\%$ (top),
and $1-T=0.01\%$ (bottom). Blue dot-dashed (thin) line is for non-encoded, red
dashed line for the [3,1,3] code, purple dashed (thin) line for the [7,1,7]
code, gray dashed (thick) line for the [51,1,51] code, orange solid line for
the [7,1,3] code, black solid (thin) line for the [23,1,7] code, and green
solid (thick) line for the [25,1,5] code.
## VIII Conclusion
We presented here an explicit protocol for a hybrid quantum repeater including
the use of QEC codes in the presence of imperfect quantum memories and local
gate errors. We showed for the case of repetition codes how encoded states can
be generated utilizing the same interactions as for the unencoded scheme.
Moreover, we calculated the entanglement generation rates and, to properly
compare the different schemes, we computed here the rates per memory qubit.
We showed that our system with the [23,1,7] code, assuming reasonable imperfections, can
achieve rates of 6 pairs per second per memory with final fidelities of about
$F=0.95$ for a repeater spacing of $L_{0}=20$ km, a final distance of $L=1280$
km, local gate errors of $1-T=0.1\%$, and a decoherence time of $\tau_{c}=0.1$
s. For comparison, in the scheme of Ref. munro2 , a rate of 2500 pairs per
second is achieved for $L=1000$ km and final fidelities higher than $F=0.99$,
requiring around 90 qubits per repeater station. Roughly, this corresponds to
a rate of 55 pairs per second per memory. However, in that scheme, the
fundamental distance has a different value, $L_{0}=40$ km, and those authors
assumed perfect local gates, making the comparison not completely fair.
The original encoded repeater of Ref. jiang achieves a generation rate of 100
pairs per second for long distances ($L>1000$ km) with final fidelities of
$F=0.9984$. However, the system parameters used in that analysis are quite
different from those presented here. In Ref. jiang , the fundamental distance
is $L_{0}=10$ km, the decoherence time is $\tau_{c}\approx 7$ ms, the
effective error parameter is $q_{eff}=0.3\%$, and approximately $6n$ qubits at
each station are employed, of which $2n$ are memory qubits, and the $4n$
remaining qubits are employed for the local operations on the memory qubits
for QEC. This leads, for example, for a three-repetition code, to a rate of
approximately 33 pairs per second per memory.
We showed here that the problem of imperfect memories can be circumvented if
we allow for a large number of initial resources and sufficiently good local
gates. We further demonstrated that there are trade-offs between the code’s
efficiency, the decoherence time, and the local gate errors. Depending on
these values, we conclude that QEC codes will not always help, and every
single code will be efficient in a different regime. Our HQR with encoding
using the Golay code $[23,1,7]$ can achieve rates of 1000 bits/s over 1280 km
with final fidelities of about $F=0.95$ provided we have 166 memory qubits per
half node of the repeater station with decoherence times of 100 ms. This
decoherence time has already been exceeded by one order of magnitude in current experiments using nuclear spin systems.
### Acknowledgments
We thank Bill Munro for useful comments. Support from the Emmy Noether Program
of the Deutsche Forschungsgemeinschaft is gratefully acknowledged. In
addition, we thank the BMBF for support through the QuOReP program.
## References
* (1) W. K. Wootters and W. H. Zurek, Nature 299, 802 (1982).
* (2) D. Dieks, Phys. Lett. A 92, 271 (1982).
* (3) H.-J. Briegel, W. Dür, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 81, 5932 (1998).
* (4) W. Dür, H.-J. Briegel, J. I. Cirac, and P. Zoller, Phys. Rev. A 59, 169 (1999).
* (5) M. Żukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert, Phys. Rev. Lett. 71, 4287 (1993).
* (6) C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).
* (7) D. Deutsch et al., Phys. Rev. Lett. 77, 2818 (1996).
* (8) L. Jiang et al., Phys. Rev. A 79, 032325 (2009).
* (9) L.-M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, Nature 414, 413 (2001).
* (10) L. Childress, J. M. Taylor, A. Sørensen, and M. D. Lukin, Phys. Rev. A 72, 052330 (2005).
* (11) L. Childress, J. M. Taylor, A. S. Sørensen, and M. D. Lukin, Phys. Rev. Lett. 96, 070504 (2006).
* (12) C. Simon, H. de Riedmatten, M. Afzelius, N. Sangouard, H. Zbinden, and N. Gisin, Phys. Rev. Lett. 98, 190503 (2007).
* (13) N. Sangouard, C. Simon, H. de Riedmatten, and N. Gisin, Rev. Mod. Phys. 83, 33 (2011).
* (14) P. van Loock et al., Phys. Rev. Lett. 96, 240501 (2006).
* (15) T. D. Ladd, P. van Loock, K. Nemoto, W. J. Munro, and Y. Yamamoto, New J. Phys. 8, 184 (2006).
* (16) P. van Loock, N. Lütkenhaus, W. J. Munro, and K. Nemoto, Phys. Rev. A 78, 062319 (2008).
* (17) N. K. Bernardes, L. Praxmeyer, and P. van Loock, Phys. Rev. A 83, 012323 (2011).
* (18) M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2000).
* (19) K. Kreis and P. van Loock, Phys. Rev. A 85, 032307 (2012).
* (20) K. Azuma et al., Phys. Rev. A 80, 060303(R) (2009).
* (21) P. van Loock et al., Phys. Rev. A 78, 022303 (2008).
* (22) S. G. R. Louis, W. J. Munro, T. P. Spiller, and K. Nemoto, Phys. Rev. A 78, 022326 (2008).
* (23) M. Razavi, M. Piani, and N. Lütkenhaus, Phys. Rev. A 80, 032301 (2009).
* (24) J.-W. Pan, C. Simon, C. Brukner and A. Zeilinger, Nature 410, 1067 (2001).
* (25) P. van Loock, Laser Photonics Rev. 5, 167 (2010).
* (26) D. Gonta and P. van Loock, Phys. Rev. A 84, 042303 (2011).
* (27) K. Nemoto and W. J. Munro, Phys. Rev. Lett. 93, 250502 (2004).
* (28) S. G. R. Louis, K. Nemoto, W. J. Munro, and T. P. Spiller, New J. Phys. 9, 193 (2007).
* (29) Q. Lin and B. He, Phys. Rev. A 82, 022331 (2010).
* (30) W. J. Munro, K. A. Harrison, A. M. Stephens, S. J. Devitt, and K. Nemoto, Nature Photonics 4, 792 (2010).
* (31) S. Perseguers et al., Phys. Rev. A 78, 062324 (2008).
## Appendix A
Imagine we want to purify an entangled state from two initial states between
repeater stations $\mathbf{A}$ and $\mathbf{B}$,
$\rho_{\mathbf{A}_{1}\mathbf{B}_{1}}\otimes\rho_{\mathbf{A}_{2}\mathbf{B}_{2}}$.
Let us consider that the initial states, $\rho_{\mathbf{A}_{1}\mathbf{B}_{1}}$
and $\rho_{\mathbf{A}_{2}\mathbf{B}_{2}}$, are of the form
$A|\phi^{+}\rangle\langle\phi^{+}|+B|\phi^{-}\rangle\langle\phi^{-}|+C|\psi^{+}\rangle\langle\psi^{+}|+D|\psi^{-}\rangle\langle\psi^{-}|$,
where for the present purpose, $A$, $B$, $C$, and $D$ are simply constants.
Following the purification protocol from Refs. bennett ; deutsch and
considering the error model from Eq. (3), the resulting (unnormalized) state
$\rho_{c}$ is
$\displaystyle\rho_{c}$
$\displaystyle=(1-q_{g}(x))^{4}((A^{2}+D^{2})|\phi^{+}\rangle\langle\phi^{+}|+2AD|\phi^{-}\rangle\langle\phi^{-}|$
$\displaystyle+(B^{2}+C^{2})|\psi^{+}\rangle\langle\psi^{+}|+2BC|\psi^{-}\rangle\langle\psi^{-}|)+....$
(25)
The terms represented by (…) are those where at least one error occurred in
the two-qubit gates. Note that for the case without encoding, these terms can
be easily calculated. However, with encoding, especially for large codes, the
explicit derivation of these terms is extremely complicated.
The final fidelity and the probability of success of purification are given by
$F_{pur}=\frac{\langle\phi^{+}|\rho_{c}|\phi^{+}\rangle}{\rm{Tr}\rho_{c}},\quad\textrm{and}$
(26) $P_{pur}=\rm{Tr}\rho_{c}.$ (27)
Since we do not know the exact form of $\rho_{c}$, we will estimate these
quantities in a worst-case scenario, thus aiming at lower bounds. For the
fidelity, one such bound is obtained when the denominator of the fraction
takes on its maximum value and the numerator is just given by the
corresponding terms explicitly shown in Eq. (25), resulting in
$F_{pur,lower}=\frac{(A^{2}+D^{2})(1-q_{g}(x))^{4}}{(A+D)^{2}+(B+C)^{2}}=A^{\prime}_{pur}(1-q_{g}(x))^{4}.$
(28)
The denominator was calculated, assuming $q_{g}\ll 1$, such that in first
order of $q_{g}$, the trace is written as
$\displaystyle\rm{Tr}\rho_{c}=$
$\displaystyle(1-4q_{g}(x))((A+D)^{2}+(B+C)^{2})+4q_{g}(x)\rm{Tr}\rho_{?}$
$\displaystyle\leq$ $\displaystyle(1-((A+D)^{2}+(B+C)^{2}))4q_{g}(x)+$
$\displaystyle((A+D)^{2}+(B+C)^{2}),$ (29)
where $\rho_{?}$ is the term we do not know in Eq. (25) up to coefficients
with dominating order $4q_{g}(x)(1-q_{g}(x))^{3}\approx 4q_{g}(x)$. This
corresponds to the probability that one error occurred in one of the two
qubits at side $\mathbf{A}$ or $\mathbf{B}$. The inequality appears assuming
$\rm{Tr}\rho_{?}\leq 1$. We showed that
$(1-((A+D)^{2}+(B+C)^{2}))4q_{g}(x)+((A+D)^{2}+(B+C)^{2})$ is an upper bound
for the denominator in first order of $q_{g}(x)$. We approximate this bound
$(1-((A+D)^{2}+(B+C)^{2}))4q_{g}(x)+((A+D)^{2}+(B+C)^{2})\approx(A+D)^{2}+(B+C)^{2}$,
assuming that $(1-((A+D)^{2}+(B+C)^{2}))\sim q_{g}(x)$ such that the first
term of the sum can again be neglected. Notice also that the numerical values
in our rate analysis are not noticeably changed by using this approximation
whenever $1-T\leq 0.1\%$. Comparing with the exact formula for one round of
purification and imperfect quantum gates in Eq. (37) in Appendix C, we can see
that this is indeed an upper bound for $\rm{Tr}\rho_{c}$ in first order of
$q_{g}(x)$.
Similarly, we obtain as a lower bound for the probability of success for
purification
$P_{pur,lower}=((A+D)^{2}+(B+C)^{2})(1-q_{g}(x))^{4}.$ (30)
Provided that the gate errors are sufficiently small, this bound represents a
good estimate of the exact value.
The argument used for approximating the fidelity for the swapping is very
similar to that given above. However, an important difference is that the
swapping operation is a trace-preserving operation, such that, including gate
errors, we can guarantee that the probability of success of swapping will
always be 1.
For the encoded state, the number of two-qubit gates necessary to realize the
swapping is equal to the number of physical qubits per block, $n$. This is the
reason why in Eq. (17) the fidelity is multiplied by a factor of
$(1-q_{g}(x))^{2n}$. The same explanation applies to the purification step,
but in this case, we obtain a factor of $(1-q_{g}(x))^{4n}$; here a two-qubit
gate has to be applied to each qubit of every entangled pair.
For more rounds of purification, the same pattern is followed. Considering
sufficiently many initial spatial resources, for the $k$th-round of
purification ($k\geq 1$), the lower bound will be
$\displaystyle P_{pur,lower,k}=$
$\displaystyle(P_{pur}(\underbrace{A^{\prime}_{pur}(...A^{\prime}_{pur}}_{(k-1)-\text{times}}(A,B,C,D))))...(P_{pur}(A,B,C,D))$
$\displaystyle\times(1-q_{g}(x))^{4n(2^{k}-1)},$ (31)
using Eqs. (11, 12). For the total fidelity, after the $k$th-round of
purification and $N-1$ connections, the lower bound is given by
$\underbrace{A^{\prime}_{swap}(...A^{\prime}_{swap}(}_{(\log_{2}{N})-\text{times}}\underbrace{A^{\prime}_{pur}(...A^{\prime}_{pur}}_{k-\text{times}}(A_{eff}(F,t_{k}),B_{eff}(F,t_{k}),C_{eff}(F,t_{k}),D_{eff}(F,t_{k})))))(1-q_{g}(x))^{2n((N-1)+2(2^{k}-1))},$
(32)
using Eqs. (11)–(13).
## Appendix B
Similarly to what was presented for the three-repetition code in Sec. V, the
state $\frac{|\bar{0}\rangle+|\bar{1}\rangle}{\sqrt{2}}$ for the five-repetition code
can be deterministically obtained by an interaction of the qubus with the
atomic qubits described by
$U_{int}^{1}\left(\theta\right)U_{int}^{2}\left(2\theta\right)U_{int}^{3}\left(4\theta\right)U_{int}^{4}\left(8\theta\right)U_{int}^{5}\left(-15\theta\right)$
and an $x$ quadrature measurement on the qubus. Note that depending on the
measured value of $x$, a phase shift and local bit flip operations may still
be applied to change the resulting state to the desired one. In a more
systematic way, for an $n$-repetition code, this interaction sequence can be
written as
$\prod_{j=1}^{n-1}U_{int}^{j}\left(2^{j-1}\theta\right)U_{int}^{n}\left(-(2^{n-1}-1)\theta\right)$.
Assuming that we want to distinguish all the $|\beta e^{\pm
i\theta_{j}}\rangle$ rotated components for different $j$’s, $(2^{n-1}-1)\theta$ must not be bigger than $\pi$. For $\theta\sim 10^{-2}$, this requirement is
not fulfilled for codes with $n\geq 11$, where already for $n=11$,
$(2^{10}-1)\theta\sim 3\pi$.
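A quick numerical check of how fast the largest rotation in the single-qubus sequence grows with the code size (the threshold quoted above is taken from the text; the sketch only tabulates the scaling):

```python
import math

theta = 1e-2   # typical phase shift per interaction, as quoted in the text
for n in range(3, 12):
    max_rot = (2**(n - 1) - 1) * theta   # largest rotation angle in the sequence
    print(n, round(max_rot, 3), round(max_rot / math.pi, 2))
# the largest required rotation grows exponentially with n, which is why the
# multi-qubus scheme of Fig. 5 becomes preferable for larger repetition codes.
```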
Figure 5: (Color online) Preparing the state
$|\bar{0}\rangle+|\bar{1}\rangle$. The normalization factor is omitted. The
qubits are initiated in the state $\left(|0\rangle+|1\rangle\right)^{\otimes
3}$. First, a qubus $|\beta_{1}\rangle$ interacts with atomic qubits 1 and 2.
Then a second qubus $|\beta_{2}\rangle$ interacts with qubits 2 and 3. Both
qubuses have their $x$ quadrature measured. All results are valid, since the
generation is deterministic. Depending on the measurement results, a phase
shift and local bit-flip operations should be applied to the resulting qubit
state.
An alternative scheme uses more qubuses for these interactions. Let us start
with the three-qubit repetition code again. The encoded state
$\frac{|\bar{0}\rangle+|\bar{1}\rangle}{\sqrt{2}}$ is generated as illustrated in
Fig. 5. First, the qubus $|\beta_{1}\rangle$ interacts with the atoms placed
in cavities 1 and 2, with interactions described by
$\displaystyle
U_{int}^{1}\left(\theta\right)U_{int}^{2}\left(-\theta\right)\left[\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)^{\otimes
2}|\beta_{1}\rangle\right]=$
$\displaystyle\frac{1}{2}\left[\left(|00\rangle+|11\rangle\right)|\beta_{1}\rangle+|01\rangle|\beta_{1}e^{i\theta}\rangle+|10\rangle|\beta_{1}e^{-i\theta}\rangle\right].$
(33)
Then a second qubus $|\beta_{2}\rangle$ interacts with the atoms placed in
cavities 2 and 3 as follows,
$\displaystyle
U_{int}^{2}\left(-\theta\right)U_{int}^{3}\left(\theta\right)\left[\frac{\left(|00\rangle+|11\rangle\right)|\beta_{1}\rangle+|01\rangle|\beta_{1}e^{i\theta}\rangle+|10\rangle|\beta_{1}e^{-i\theta}\rangle}{2}\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)|\beta_{2}\rangle\right]=$
$\displaystyle\frac{1}{2\sqrt{2}}\left[\left(|000\rangle+|111\rangle\right)|\beta_{1},\beta_{2}\rangle+|001\rangle|\beta_{1},\beta_{2}e^{-i\theta}\rangle+|010\rangle|\beta_{1}e^{i\theta},\beta_{2}e^{i\theta}\rangle+|100\rangle|\beta_{1}e^{-i\theta},\beta_{2}\rangle\right.$
$\displaystyle\left.+|110\rangle|\beta_{1},\beta_{2}e^{i\theta}\rangle+|101\rangle|\beta_{1}e^{-i\theta},\beta_{2}e^{-i\theta}\rangle+|011\rangle|\beta_{1}e^{i\theta},\beta_{2}\rangle\right].$
(34)
By measuring the $x$ quadrature separately for each of the two qubuses
($|\beta_{1}\rangle$ and $|\beta_{2}\rangle$), the encoded state is
deterministically produced. For larger codes, the same procedure can be
applied: always alternate $\theta/2$-rotations with $-\theta/2$-rotations and
use $n-1$ qubuses interacting only with one pair of atoms of the $n$-qubit
chain. Finally, each atom, ignoring the atoms at the ends of the chain,
interacts with two different qubus states.
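Using the same per-qubit phase convention as in the sketch after Eq. (19), the two-qubus pattern of Eq. (34) can be verified directly; only $|000\rangle$ and $|111\rangle$ leave both qubuses unrotated.

```python
from itertools import product

def phase(bits, angles):
    # |0> contributes +angle/2, |1> contributes -angle/2 (same assumed convention)
    return sum(a / 2 if b == 0 else -a / 2 for b, a in zip(bits, angles))

theta = 1.0
for bits in product((0, 1), repeat=3):
    phi1 = phase(bits[:2], [theta, -theta])    # qubus 1 couples to qubits 1 and 2
    phi2 = phase(bits[1:], [-theta, theta])    # qubus 2 couples to qubits 2 and 3
    print(bits, phi1, phi2)
# only (0,0,0) and (1,1,1) give (0, 0), reproducing the pattern of Eq. (34), so
# measuring the x quadrature of both qubuses deterministically heralds the
# codeword superposition up to known phase shifts and bit flips.
```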
Note that, similarly to Ref. louis2 , the scheme proposed here could also be
used to generate cluster states via weak nonlinearities. However, in Ref.
louis2 , the cluster states are obtained through homodyne measurements in the
$p$ quadrature of the qubus, which makes that scheme probabilistic.
## Appendix C
The effective error probability $q_{eff}$ estimates the probability that a
physical qubit suffers an odd number of $Z$ errors. In fact, assuming these
probabilities are sufficiently small jiang , it estimates the probability that
each physical qubit suffers one $Z$ error. The effective error probability
depends on the error parameters ($(1-F)$, $q_{g}(x)$, and $q_{m}(t)$)
introduced through Eqs. (1, 3, 6). We choose the $Z$-error, because in our
scheme it occurs more frequently than the $X$-error.
Before calculating the effective error probability, we should examine the
effect of a CNOT gate. An error that initially affects only the target qubit
will, after the CNOT operation, also result in an error on the control qubit,
in such a way that these errors (and their probabilities) will accumulate.
The first step for the CSS-encoding protocol presented here is the
entanglement creation (and eventually purification of the entangled states).
For this, the probability that each physical qubit suffers a phase flip is
given by $q_{1}=(1-F_{k})+q_{m}(t/2)$, where $k$ is the number of rounds of
purification. For $k=0$, $F_{0}$ is simply the initial fidelity $F$. After the
entangled, possibly purified, pairs have been created, the logical qubits are
locally prepared and each physical qubit is subject to a $Z$-error probability
of $q_{2}=q_{m}(t/2)$. After this, the encoded entangled state is generated by
teleportation-based CNOT gates, and the errors accumulate, such that the error
probabilities are $q_{3,c}=q_{1}+q_{2}+q_{g}(x)$ and $q_{3,t}=q_{2}$, for
control and target qubits, respectively. After entanglement connections take
place, the accumulated probability for obtaining a wrong output is
$q_{4,c}=q_{3,c}+q_{3,t}+q_{g}(x)$ and $q_{4,t}=q_{2}$. For simplicity, we may
just use the largest from these two values to estimate the effective error
probability per physical qubit, such that
$q_{eff}=3q_{m}(t/2)+(1-F_{k})+2q_{g}(x).$ (35)
The decaying time $t$ is considered to be the time it takes for classical
communication to announce that entanglement distribution succeeded ($T_{0}/2$)
and the time it takes to announce that purification succeeded (again
$T_{0}/2$). Hence $t=t^{\prime}_{k}=(k+1)T_{0}/2$. Note that $t^{\prime}_{k}$
is different from the decaying time $t_{k}$ for the repetition code by
$T_{0}/2$. The reason for this is that for the repetition code protocol, the
logical qubits are already decaying from the very beginning. We should be
careful in defining $F_{k}$, because gate errors must be included here. If we
start with two copies of the entangled state
$A|\phi^{+}\rangle\langle\phi^{+}|+B|\phi^{-}\rangle\langle\phi^{-}|+C|\psi^{+}\rangle\langle\psi^{+}|+D|\psi^{-}\rangle\langle\psi^{-}|$,
following the purification protocol from Ref. deutsch and considering the
gate error model from Eq. (3), the resulting state after one round of
purification is given by
$A^{\prime}|\phi^{+}\rangle\langle\phi^{+}|+B^{\prime}|\phi^{-}\rangle\langle\phi^{-}|+C^{\prime}|\psi^{+}\rangle\langle\psi^{+}|+D^{\prime}|\psi^{-}\rangle\langle\psi^{-}|$,
where
$\displaystyle A^{\prime}=\frac{1}{P_{pur}^{imp}}$
$\displaystyle\left(D^{2}+A^{2}(1+2(-1+q_{g})q_{g})^{2}-2A(-1+q_{g})q_{g}(C+2D+2(B-C-2D)q_{g}+2(-B+C+2D)q_{g}^{2})\right.$
$\displaystyle\left.-2D(-1+q_{g})q_{g}(-2D-2(C+D)(-1+q_{g})q_{g}+B(1+2(-1+q_{g})q_{g}))\right)$
$\displaystyle B^{\prime}=\frac{1}{P_{pur}^{imp}}$
$\displaystyle\left(-2D(-1+q_{g})q_{g}(C+D-2(-B+C+D)q_{g}+2(-B+C+D)q_{g}^{2})+2A^{2}q_{g}(1+q_{g}(-3-2(-2+q_{g})q_{g}))+\right.$
$\displaystyle\left.2A(D(1+2(-1+q_{g})q_{g})^{2}-(-1+q_{g})q_{g}(-2C(-1+q_{g})q_{g}+B(1+2(-1+q_{g})q_{g})))\right)$
$\displaystyle C^{\prime}=\frac{1}{P_{pur}^{imp}}$
$\displaystyle\left(C^{2}+B^{2}(1+2(-1+q_{g})q_{g})^{2}-2C(-1+q_{g})q_{g}(-2C-2(C+D)(-1+q_{g})q_{g}+A(1+2(-1+q_{g})q_{g}))\right.$
$\displaystyle\left.-2B(-1+q_{g})q_{g}(-2A(-1+q_{g})q_{g}+D(1+2(-1+q_{g})q_{g})+C(2+4(-1+q_{g})q_{g}))\right)$
$\displaystyle D^{\prime}=\frac{1}{P_{pur}^{imp}}$
$\displaystyle\left(-2C(-1+q_{g})q_{g}(C+D-2(-A+C+D)q_{g}+2(-A+C+D)q_{g}^{2})+2B^{2}q_{g}(1+q_{g}(-3-2(-2+q_{g})q_{g}))+\right.$
$\displaystyle\left.2B(C(1+2(-1+q_{g})q_{g})^{2}-(-1+q_{g})q_{g}(-2D(-1+q_{g})q_{g}+A(1+2(-1+q_{g})q_{g})))\right),$
(36)
and $P_{pur}^{imp}$ is the purification probability of success given by
$P_{pur}^{imp}=(B+C)^{2}+(A+D)^{2}-2(A-B-C+D)^{2}q_{g}+2(A-B-C+D)^{2}q_{g}^{2}.$
(37)
For the case of $q_{g}=0$, Eqs. (36, 37) are in accordance with Ref. deutsch .
The fidelity after the first round of purification, $F_{1}$, is given by
$A^{\prime}$ when $A=F$, $B=1-F$, and $C=D=0$. For small $q_{g}$ and high
initial fidelity, which is the regime under consideration here, the dominant
coefficient (after $A^{\prime}$) is $B^{\prime}$. Note that
$B^{\prime}\approx(1-F_{1})$ for $q_{g}\ll 1$, and thus the probability that
one physical qubit suffers an error becomes $(1-F_{1})$. For more rounds of
purification, a similar procedure can be performed.
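Since Eq. (37) is compact, it is easy to verify its limits numerically; the sketch below checks that for $q_{g}=0$ it reduces to the ideal $P_{pur}=(A+D)^{2}+(B+C)^{2}$ and shows the small reduction for a finite gate error (values are illustrative only).

```python
def P_pur_imp(A, B, C, D, qg):
    """Purification success probability with imperfect gates, Eq. (37)."""
    s = (A - B - C + D)**2
    return (B + C)**2 + (A + D)**2 - 2 * s * qg + 2 * s * qg**2

F = 0.9
print(P_pur_imp(F, 1 - F, 0.0, 0.0, qg=0.0),   # equals F^2 + (1-F)^2 = 0.82
      P_pur_imp(F, 1 - F, 0.0, 0.0, qg=1e-3))  # slightly reduced by gate errors
```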
We considered the probability of no error per physical qubit immediately after
one round of purification as $(1-q_{m}(t/2))F_{1}$, such that the probability
of one error is approximated by $(1-F_{1})+q_{m}(t/2)$. The purification
protocol, however, can improve the fidelity against memory dephasing that
happened during the entanglement distribution. This can be computed by
calculating $F_{1}$, substituting $A=F(1-q_{m}(t/2))+(1-F)q_{m}(t/2)$,
$B=(1-F)(1-q_{m}(t/2))+Fq_{m}(t/2)$, and $C=D=0$ in $A^{\prime}$, with
$t=T_{0}/2$. Although this strategy can improve the final fidelity, the qubits
decay further after the purification step, and so, for simplicity, we shall
ignore this fact. Indeed, in this case, the probability of success for the
purification should be smaller; however, for small error probabilities, this difference is so small that we may neglect it.
We should notice here that Eq. (35) is not identical, though it is similar, to
the one presented in App. A of Ref. jiang . This is due to the fact that,
although our protocol was inspired by the paper of Jiang et al., there are
some crucial differences. To cite one, in our analysis, we do not assume that
our purified entangled pairs are perfect, and the imperfect generation of an
entangled pair is also included as an error. In addition, the qubits suffer
memory dephasing errors already during the purification step. Finally, our
error model is different from that used in Ref. jiang .
|
arxiv-papers
| 2011-05-18T09:12:00 |
2024-09-04T02:49:18.890440
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Nadja K. Bernardes and Peter van Loock",
"submitter": "Nadja Kolb Bernardes",
"url": "https://arxiv.org/abs/1105.3566"
}
|
1105.3593
|
# The energy analysis for the Monte Carlo simulations of a diffusive shock
Xin Wang Key Laboratory of Solar Activities of National Astronomical
Observatories, Chinese Academy of Sciences, Beijing 100012, China State Key
Laboratory of Space Weather, Chinese Academy of Sciences, Beijing 100080,
China Yihua Yan Key Laboratory of Solar Activities of National Astronomical
Observatories, Chinese Academy of Sciences, Beijing 100012, China e-mail:
wangxin@nao.cas.cn; yyh@nao.cas.cn
###### Abstract
According to the shock jump conditions, the total fluid’s mass, momentum, and energy should be conserved in the entire simulation box. We perform dynamical Monte Carlo simulations with multiple prescribed scattering laws for the energy analysis. The various energy functions of time are obtained by monitoring the total mass, momentum, and energy of the particles in the simulation box. In conclusion, the energy analysis indicates that the smaller the energy losses under the prescribed scattering law, the harder the resulting energy spectrum.
acceleration of particles — solar energetic particles (SEP) — cosmic ray (CR)—
shock waves
## 1 Introduction
The gradual solar energetic particles with a power-law energy spectrum are
generally thought to be accelerated by the first-Fermi acceleration mechanism
at the interplanetary shocks (IPs) (Axford et al., 1977; Krymsky, 1977; Bell,
1978; Blandford & Ostriker, 1978). It is well known that diffusive shocks accelerate particles efficiently, with the accelerated particles scattering off Alfvén wave instabilities that are generated by the accelerated particles themselves (Gosling et al., 1981; Cane et al., 1990; Lee & Ryan,
1986; Berezhko, et.al. , 2003; Pelletier, et.al., 2006). The diffusive shock
acceleration (DSA) is so efficient that the back-reaction of the accelerated
particles on the shock dynamics cannot be neglected. So the theoretical
challenge is how to efficiently model the full shock dynamics (Berezhko & Völk
, 2006; Caprioli, et. al., 2010; Zank, 2000; Li et al., 2003; Lee, 2005). To
efficiently model the shock dynamics and the particles’ acceleration, there
are largely three basic approaches: stationary Monte Carlo simulations, fully
numerical simulations, and semi-analytic solutions. In the stationary Monte
Carlo simulations, the full particle population with a prescribed scattering
law is calculated based on the particle-in-cell (PIC) techniques (Ellison et
al., 1996; Vladimirov et al., 2006). In the fully numerical simulations, a
time-dependent diffusion-convection equation for the CR transport is solved
with coupled gas dynamics conservation laws (Kang & Jones, 2007; Zirakashvili
& Aharonian, 2010). In the semi-analytic approach, the stationary or quasi-
stationary diffusion-convection equations coupled to the gas dynamical
equations are solved (Blasi, et. al., 2007; Malkov, et. al., 2000). Since the
velocity distribution of suprathermal particles in the Maxwellian tail is not
isotropic in the shock frame, the diffusion-convection equation cannot
directly follow the injection from the non-diffusive thermal pool into the
diffusive CR population. So considering both the quasi-stationary analytic
models and the time-dependent numerical models, the injection of particles
into the acceleration mechanism is based on an assumption of the transparency
function for thermal leakage (Blasi, et. al., 2005; Kang & Jones, 2007; Vainio
& Laitinen, 2007). Thus, the dynamical Monte Carlo simulations based on the
PIC techniques are expected to model the shock dynamics time-dependently and
can also eliminate the uncertainty arising from the assumption about the injection
(Knerr, Jokipii & Ellison, 1996; Wang & Yan, 2011). In plasma simulation (PIC
and hybrid), there is no distinction between thermal and non-thermal
particles, hence particle injection is intrinsically defined by the prescribed
scattering properties, and so it is not controlled with a free parameter
(Caprioli, et. al., 2010).
Actually, Wang & Yan (2011) have extended the dynamical Monte Carlo models with an anisotropic scattering law. Unlike the previously used isotropic prescribed scattering law, a Gaussian scattering angular distribution is used as the complete prescribed scattering law. With this extended prescribed scattering law, we obtained a series of similar energy spectra with little difference in the power-law tails. However, it is not
clear how such a prescribed scattering law can affect the particles’ diffusion
and the shock dynamics evolution. To probe these problems, we expect to
diagnose the energy losses in the simulations by monitoring all of the
behaviors of the simulated particles.
In the time-dependent Monte Carlo models coupled with a Gaussian angular
scattering law, the results show that the total energy spectral index and the
compression ratio are both affected by the prescribed scattering law.
Specifically, the total energy spectral index is an increasing function of the
dispersion of the scattering angular distribution, but the subshock’s energy
spectral index is a decreasing function of the dispersion of the scattering angular distribution (Wang & Yan, 2011). In the dynamical Monte Carlo simulations, one finds that the only way for particles to escape upstream is via the free escape boundary (FEB). With the same FEB size, which limits the maximum energy of the accelerated particles, we find that different Gaussian scattering angular distributions generate particles with different maximum energies through the scattering process at the same simulation time.
In an effort to verify the efficiency of the energy transfer from the thermal to the superthermal population and the effect of the shock dynamics evolution on the shock structures, we run a dynamical Monte Carlo code in Matlab with a Gaussian scattering angular distribution, monitoring the particles’ mass, momentum, and energy as functions of time. Our Gaussian scattering angular distribution algorithm consists of four cases involving four specific standard deviation values. The aim is to determine whether the various particle loss functions depend on the prescribed scattering law and whether the various kinds of losses directly determine the total compression ratio and the final energy spectral index, given the same timescale of the shock evolution and the same FEB size.
In Section 2, the basic simulation method is introduced with respect to the Gaussian scattering angular distributions, and the particles’ mass, momentum, and energy are monitored as functions of time in each case. In Section 3, we present
the shock simulation results and the energy analysis for all cases with four
assumptions of scattering angle distributions. Section 4 includes a summary
and the conclusions.
## 2 Method
The Monte Carlo model is a general model, although it is considerably
expensive computationally, and it is important in many applications to include
the dynamical effects of nonlinear DSA in simulations. Since the prescribed
scattering law is used in the Monte Carlo model instead of the field calculation of hybrid simulations (Giacalone, 2004; Winske & Omidi, 2011), we assume that each particle scatters elastically according to a Gaussian distribution in the local plasma frame and that the mean free path (mfp) is proportional to the gyroradius (i.e., $\lambda\propto r_{g}$), where $r_{g}=pc/(qB)$, so that the mfp is proportional to the particle momentum. Under the prescribed scattering law, the injection is determined by those “thermal” particles which manage to diffuse across the shock front, obtain additional energy gains, and become superthermal particles (Ellison et al., 2005).
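As a simple illustration of this prescription (the actual code of the paper is written in Matlab; the Python fragment below is only a sketch, and the rotation about a random transverse axis is our own simplifying assumption), one elastic scattering event with a Gaussian angular deflection and a mean free path proportional to the gyroradius could look as follows.

```python
import numpy as np

def scatter_once(p_vec, sigma, B=1.0, q=1.0, c=1.0, eta=1.0, rng=np.random):
    """One elastic scattering event in the local plasma frame (illustrative).
    The speed |p| is conserved, the polar angle is deflected by a Gaussian
    draw of standard deviation sigma, and the mean free path is taken as
    lambda = eta * r_g with r_g = p c / (q B), i.e. proportional to momentum."""
    p = np.linalg.norm(p_vec)
    mfp = eta * p * c / (q * B)              # lambda ∝ r_g ∝ p
    dtheta = rng.normal(0.0, sigma)          # Gaussian scattering angle
    phi = rng.uniform(0.0, 2 * np.pi)        # azimuth assumed uniform
    theta_new = np.arccos(p_vec[2] / p) + dtheta
    p_new = p * np.array([np.sin(theta_new) * np.cos(phi),
                          np.sin(theta_new) * np.sin(phi),
                          np.cos(theta_new)])
    return p_new, mfp

p_new, lam = scatter_once(np.array([0.0, 0.0, 1.0]), sigma=np.pi / 4)
print(np.linalg.norm(p_new), lam)            # speed preserved; mfp set by momentum
```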
However, the basic theoretical limit to the accelerated particle’s energy
arises from the accelerated particle’s Larmor radius, which must be smaller
than the dimensions of the acceleration region by at least a factor of $v_{s}/c$ (Hillas,
1984), where $v_{s}$ is the shock velocity. The limitation to the maximum
energy of accelerated particles due to the large Larmor radius would be
ameliorated if the scattering angular distribution is varied. In these
simulations, the size of the FEB is set as a finite length scale matched with
the maximum diffusive length scale. Here, we further investigate the
possibility that the accelerated particles scatter off the background magnetic
field in the acceleration region with not only an isotropic scattering angular
distribution, but also with an anisotropic scattering angular distribution.
Actually, this anisotropic distribution would probably produce an important
effect on simulation results. With the same FEB size, a scattering law with an isotropic angular distribution and one with an anisotropic distribution would produce different maximum particle energies. So an anisotropic scattering law in the
theory of the CR-diffusion is also needed (Bell, 2004).
Figure 1: The entire evolutional velocity profiles in four cases. The dashed
line denotes the FEB position in each plot. The precursor is located in the
area between the downstream region and the upstream region in each case.
The particle-in-cell techniques are applied in these dynamical Monte Carlo
simulations. The simulation box is divided up into some number of cells and
the field momentum is calculated at the center of each cell (Forslund, 1985;
Spitkovsky, 2003; Nishikawa, et. al., 2008). The total size of a one-
dimensional simulation box is set as $X_{max}$ and it is divided into $N_{max}$ grid cells. The upstream bulk flow, with speed $U_{0}$ and an initial Maxwellian thermal velocity $V_{L}$ in its local frame, and the inflow from a “pre-inflow box” (PIB) both move along the one-dimensional simulation box.
The parallel magnetic field $B_{0}$ is along the $\hat{x}$ axis direction in
the simulation box. A free escape boundary (FEB) at a finite distance in front of the shock position is used to decouple the escaped particles from the system once the accelerated particles move beyond the position of the FEB.
The simulation box is a dynamical mixture of three regions: upstream,
precursor and downstream. The bulk fluid speed in upstream is $U=U_{0}$, the
bulk fluid speed in downstream is $U=0$, and the bulk fluid speed with a
gradient of velocity in the precursor region is $U_{0}>U>0$. Because a prescribed scattering law is used instead of following the particles’ motion in the fields, the injection of particles from the thermalized downstream into the precursor for diffusive processing is controlled by the free elastic scattering mechanism. To obtain information on all particles in the different regions at any time, we build a database recording the velocities, positions, and times of all particles, as well as the grid indices and bulk speeds.
scattering angle distributions are represented by a Gaussian distribution function with standard deviation $\sigma$ and mean value $\mu$, in four
cases: (1) Case A: $\sigma=\pi$/4, $\mu=0$. (2) Case B: $\sigma=\pi$/2,
$\mu=0$. (3) Case C: $\sigma=\pi$, $\mu=0$. (4) Case D: $\sigma=\infty$,
$\mu=0$. These presented simulations are all based on a one-dimensional
simulation box and the specific parameters are based on Wang & Yan (2011).
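For concreteness, the four cases can be realized as different scattering-angle generators; the short sketch below shows one possible implementation, where Case D ($\sigma=\infty$) is approximated by a uniform draw over $(-\pi,\pi]$ (our reading of the isotropic limit, not necessarily the exact implementation used in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
cases = {"A": np.pi / 4, "B": np.pi / 2, "C": np.pi, "D": np.inf}

def draw_angles(sigma, mu=0.0, size=100000):
    """Sample scattering angles for one case of the Gaussian angular law."""
    if np.isinf(sigma):                       # Case D: isotropic limit
        return rng.uniform(-np.pi, np.pi, size)
    return rng.normal(mu, sigma, size)

for name, sigma in cases.items():
    angles = draw_angles(sigma)
    print(name, round(float(np.std(angles)), 3))
```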
## 3 Results & analysis
We present the entire shock evolution through the velocity profiles of the
time sequence in each case, as shown in Figure 1. The total velocity profiles
are divided into three parts with respect to the shock front and the FEB
locations. The upstream bulk speed $U_{0}$ dynamically slows down in passing
through the precursor region, and its value decreases to zero in the
downstream region (i.e. $U_{d}=0$). The precursor explicitly shows a different
slope of the bulk velocity and a different final FEB location in each case.
The present velocity profiles are similar to the density profiles in the
previous simulations by Wang & Yan (2011), and the different prescribed
scattering laws lead to different shock structures.
Table 1: The Calculated Results
Items | Case A | Case B | Case C | Case D
---|---|---|---|---
$M_{loss}$ | 1037 | 338 | 182 | 127
$P_{loss}$ | 0.0352 | 0.0189 | 0.0123 | 0.0084
$E_{loss}$ | 0.7468 | 0.5861 | 0.4397 | 0.3014
$E_{feb}$ | 0.8393 | 0.5881 | 0.5310 | 0.5022
$E_{in}$ | 1.5861 | 1.1742 | 0.9707 | 0.8036
$E_{tot}$ | 3.3534 | 3.4056 | 3.3574 | 3.4026
$R_{in}$ | 38.25% | 25.67% | 19.98% | 14.80%
$R_{loss}$ | 22.27% | 17.21% | 13.10% | 8.86%
$r_{tot}$ | 8.1642 | 6.3532 | 5.6753 | 5.0909
$r_{sub}$ | 2.0975 | 3.0234 | 3.1998 | 3.9444
$\Gamma_{tot}$ | 0.7094 | 0.7802 | 0.8208 | 0.8667
$\Gamma_{sub}$ | 1.8668 | 1.2413 | 1.1819 | 1.0094
$v_{sh}$ | -0.0419 | -0.0560 | -0.0642 | -0.0733
$v_{sub}$ | 0.0805 | 0.1484 | 0.1613 | 0.2159
$V_{Lmax}$ | 11.4115 | 14.2978 | 17.2347 | 20.5286
$E_{peak}$ | 0.1650keV | 0.1723keV | 0.1986keV | 0.2870keV
$E_{max}$ | 1.23MeV | 1.93MeV | 2.80MeV | 4.01MeV
The units of mass, momentum, and energy are normalized to the proton mass
$m_{p}$, the initial momentum $P_{0}$, and the initial energy $E_{0}$,
respectively. The last two rows are given in physical units.
The various particle losses and the calculated shock results at the end of the
simulation for the four cases are listed in Table 1. The initial box energy is
$E_{0}$. The subshock compression ratio $r_{sub}$ and the total compression
ratio $r_{tot}$ are calculated from the fine velocity structures in the shock
frame in each case. The total energy spectral index $\Gamma_{tot}$ and the
subshock energy spectral index $\Gamma_{sub}$ are deduced from the
corresponding total compression ratio $r_{tot}$ and subshock compression ratio
$r_{sub}$ in each case. The quantities $M_{loss}$, $P_{loss}$ and $E_{loss}$
are the mass, momentum, and energy losses produced by the particles escaping
via the FEB, respectively.
We have monitored the mass, momentum and energy of all particles at each time
step in each case. Figure 2 shows all of the energy functions with respect to
time. The total energy $E_{tot}$ is the summation of all the energy in the
entire simulation at any instant in time. The box energy $E_{box}$ is the
actual energy contained in the simulation box at any instant in time. The
supplement energy $E_{PIB}$ is the cumulative energy from the pre-inflow box
(at the left boundary of the simulation box) that enters the simulation box
with a constant flux. $E_{feb}$ is the energy held in the precursor region.
$E_{out}$ is the cumulative energy that escapes via the FEB. Clearly, in each
plot the total energy in the simulation at any instant in time is not equal to
the actual energy in the box at that instant.
Figure 2: Various energy values vs. time (all normalized to the initial total
energy $E_{0}$ in the simulation box) in each case. All quantities are
calculated in the box frame.
It can be shown that the incoming (upstream) particles decrease their energy
(as viewed in the box frame) as they scatter in the shock precursor region. If
each incoming particle lost a small, fixed amount of energy as it first
encountered the shock, this would produce a constant linear divergence between
the curves for $E_{box}$ and $E_{tot}$. In fact, the cases in these
simulations produce a non-linear divergence between the curves for $E_{box}$
and $E_{tot}$, as is consistent with Figure 2. Such behavior is also evident
from individual particle trajectories. Physically, all of the losses occur in
the precursor region, owing to the “back reaction” of the accelerated ions and
to the particles escaping via the FEB. The various energy functions clearly
show that the cases applying anisotropic scattering laws produce higher energy
losses, while the case applying the isotropic scattering law produces the
lowest energy loss. Consequently, we consider that the prescribed scattering
law dominates the energy losses.
### 3.1 Energy losses
By monitoring each particle in the grid of the simulation box at every time
increment, we obtain the escaped particles’ mass, momentum, and energy loss
functions of time, which are shown in Figure 3. Among these energy functions,
the inverse-flow function is obtained from the particles injected from the
thermalized downstream into the precursor.
Figure 3: The four plots show the mass losses, momentum losses, energy losses
and the inverse energy, respectively. The solid, dashed, dash-dotted and
dotted lines represent Cases A, B, C and D in the first three plots. In the
last plot, the dashed lines marked with dot, plus, and cross symbols, and the
solid line, represent Cases A, B, C and D, respectively. The units are
normalized to the proton mass $m_{p}$, the initial total momentum $P_{0}$ and
the initial total energy $E_{0}$, respectively.
Since the FEB is fixed in front of the shock position with the same size in
each case, once an accelerated particle moves ahead of the shock beyond the
position of the FEB, we exclude this particle from the total system and count
it as the loss term in the mass, momentum, and energy conservation equations.
According to the Rankine-Hugoniot (RH) relationships, the compression ratio of
a nonrelativistic shock with a large Mach number is not allowed to be larger
than the standard value of four (Pelletier, 2001). Owing to the energy losses
which inevitably exist in the simulations, the calculated results show
decreasing mass, momentum, and energy losses from Cases A, B, and C to D.
Simultaneously, the inverse energy injected from the downstream into the
precursor also decreases, with values $(E_{in})_{A}=1.5861$,
$(E_{in})_{B}=1.1742$, $(E_{in})_{C}=0.9707$, and $(E_{in})_{D}=0.8036$ from
Cases A, B, and C to D, respectively. The corresponding energy losses are
$(E_{loss})_{A}=0.7468$, $(E_{loss})_{B}=0.5861$, $(E_{loss})_{C}=0.4397$, and
$(E_{loss})_{D}=0.3014$ in each case. The inverse energy is the sum of the
energy loss $E_{loss}$ and the net energy $E_{feb}$ in the precursor (i.e.
$E_{in}=E_{loss}+E_{feb}$). It is therefore not surprising that the total
compression ratios are all larger than four, because energy losses exist in
all cases. The differences in the energy losses and in the inverse energy can
thus directly affect all aspects of the simulated shocks.
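As a quick consistency check of this bookkeeping, the short sketch below (Python) verifies $E_{in}=E_{loss}+E_{feb}$ for the Table 1 values in all four cases.

```python
# Values copied from Table 1 (normalized to the initial energy E_0).
table = {
    "A": {"E_loss": 0.7468, "E_feb": 0.8393, "E_in": 1.5861},
    "B": {"E_loss": 0.5861, "E_feb": 0.5881, "E_in": 1.1742},
    "C": {"E_loss": 0.4397, "E_feb": 0.5310, "E_in": 0.9707},
    "D": {"E_loss": 0.3014, "E_feb": 0.5022, "E_in": 0.8036},
}

for case, v in table.items():
    lhs = v["E_in"]
    rhs = v["E_loss"] + v["E_feb"]
    print(f"Case {case}: E_in = {lhs:.4f}, E_loss + E_feb = {rhs:.4f}")
    assert abs(lhs - rhs) < 5e-4   # bookkeeping E_in = E_loss + E_feb
```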
### 3.2 Subshock structure
Figure 4 shows the subshock structure in each case at the end of the
simulation time. The structure in each plot consists of three main parts:
precursor, subshock and downstream. The smooth precursor, with a larger scale,
lies between the FEB and the subshock position $X_{sub}$; there the bulk
velocity gradually decreases from the upstream bulk speed $U_{0}$ to
$v_{sub1}$. The size of the precursor is almost equal to the diffusive length
of the maximum-energy particle produced by the diffusive acceleration process.
The sharp subshock, with a much shorter scale, spans only about three grid
lengths and involves a deep deflection in which the bulk speed abruptly
decreases from $v_{sub1}$ to $v_{sub2}$; this three-grid-length scale is
roughly equal to the mean free path of the average thermal particles in the
thermalized downstream. The subshock velocity is defined as the difference
between the bulk speeds at the two boundaries of the subshock:
$v_{sub}=|v_{sub1}-v_{sub2}|.$ (1)
As the upstream bulk speed slows down from $U_{0}$ to zero, the downstream
region grows at the constant shock velocity $v_{sh}$ in each case, and the
bulk speed there is $U=0$ owing to the dissipation processes which
characterize the downstream. The gas subshock is simply an ordinary
discontinuous classical shock embedded in the total shock, which has a
comparably larger scale (Berezhko & Ellison, 1999).
Figure 4: Final subshock fine structures in the four cases. The vertical solid
and dashed lines indicate the positions of the shock front and subshock in
each plot, respectively. The horizontal solid, dashed, dash-dotted and dotted
lines show the values of the shock velocity $v_{sh}$, subshock velocity
$v_{sub2}$, subshock velocity $v_{sub1}$ and initial bulk velocity $U_{0}$,
respectively. Three vertical blocks in each plot represent the three
deflections of velocity: precursor region, subshock region and downstream
region. All values of the velocity are based on the box frame.
Given the fine shock structures, the shock compression ratio can be divided
into two classes: one describes the entire shock and is called the total
compression ratio $r_{tot}$, while the other characterizes the subshock and is
called the subshock compression ratio $r_{sub}$. The two compression ratios
are obtained from the following formulas, respectively:
$r_{tot}=u_{1}/u_{2},$ (2)
$r_{sub}=(v_{sub}+|v_{sh}|)/|v_{sh}|,$ (3)
where $u_{1}=U_{0}+|v_{sh}|$, $u_{2}=|v_{sh}|$, $u_{1}$ ($u_{2}$) is the
upstream (downstream) velocity in the shock frame, $v_{sub}$ is the subshock
velocity determined by Equation 1, and the shock velocity $v_{sh}$ at the end
of the simulation is determined by
$v_{sh}=(X_{max}-X_{sh})/T_{max},$ (4)
where $X_{max}$ is the total length of the simulation box, $T_{max}$ is the
total simulation time, and $X_{sh}$ is the position of the shock at the end of
the simulation. The specific calculated results are listed in Table 1. The
subshock compression ratios are $(r_{sub})_{A}$=2.0975,
$(r_{sub})_{B}$=3.0234, $(r_{sub})_{C}$=3.1998 and $(r_{sub})_{D}$=3.9444 in
Cases A, B, C and D, respectively. The total shock compression ratios,
$(r_{tot})_{A}$=8.1642, $(r_{tot})_{B}$=6.3532, $(r_{tot})_{C}$=5.6753, and
$(r_{tot})_{D}$=5.0909, also correspond to Cases A, B, C and D, respectively.
In comparison, the total shock compression ratios are all larger than the
standard value of four, while the subshock compression ratios are all lower
than four. Additionally, the total shock compression ratio decreases from
Cases A, B, and C to D, while the subshock compression ratio increases from
Cases A, B, and C to D. These differences are naturally attributed to the
different fine subshock structures.
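For reference, the sketch below (Python) converts the compression ratios of Table 1 into energy spectral indices. The conversion formula $\Gamma=(r+2)/[2(r-1)]$, the standard nonrelativistic test-particle expression, is not written out in the text; we use it here as an assumption because it reproduces the tabulated $\Gamma_{tot}$ and $\Gamma_{sub}$.

```python
# Compression ratios and spectral indices copied from Table 1.
r_tot = {"A": 8.1642, "B": 6.3532, "C": 5.6753, "D": 5.0909}
r_sub = {"A": 2.0975, "B": 3.0234, "C": 3.1998, "D": 3.9444}
gamma_tot = {"A": 0.7094, "B": 0.7802, "C": 0.8208, "D": 0.8667}
gamma_sub = {"A": 1.8668, "B": 1.2413, "C": 1.1819, "D": 1.0094}

def spectral_index(r):
    """Assumed test-particle DSA relation: dN/dE ~ E**(-Gamma) with
    Gamma = (r + 2) / (2 * (r - 1)) for nonrelativistic particles."""
    return (r + 2.0) / (2.0 * (r - 1.0))

for case in "ABCD":
    print(f"Case {case}: Gamma_tot = {spectral_index(r_tot[case]):.4f} "
          f"(table {gamma_tot[case]}), Gamma_sub = {spectral_index(r_sub[case]):.4f} "
          f"(table {gamma_sub[case]})")
```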
### 3.3 Maximum energy
Figure 5: Individual particles’ thermal velocities in the local frame versus
their positions, as functions of time, in each plot. The shaded area indicates
the shock front, and the solid line in the bottom plane denotes the position
of the FEB in each case. The irregular curves trace individual particle
trajectories near the shock front over time. The maximum energy of the
accelerated particles in each case is marked with the corresponding velocity
value.
We select individual particles from the phase-space-time database recording
all of the particles’ information. The trajectories of the selected particles
are shown in Figure 5. In each case, one of these trajectories clearly shows
the route of the maximum-energy accelerated particle, which undergoes multiple
collisions with the shock front. The maximum energy is clearly different in
each case, with increasing values of $(V_{Lmax})_{A}=11.4115$,
$(V_{Lmax})_{B}=14.2978$, $(V_{Lmax})_{C}=17.2347$, and
$(V_{Lmax})_{D}=20.5286$ from Cases A, B, and C to D, respectively. Particles
with values higher than the cutoff energy are unavailable because they escape
through the FEB. The statistics show that the number of escaped particles
decreases, with $(n_{esc})_{A}=1037$, $(n_{esc})_{B}=338$,
$(n_{esc})_{C}=182$, and $(n_{esc})_{D}=127$ from Cases A, B, and C to D,
correspondingly. Apart from the maximum-energy particle in each case, some of
the remaining particles gain finite energy from multiple crossings of the
shock, while others gain no additional energy because their small diffusive
length scales give them little probability of crossing back into the
precursor. The statistical data also show that the inverse energy injected
from the downstream back to the upstream is characterized by a decreasing
reflux rate of $(R_{in})_{A}=38.25\%$, $(R_{in})_{B}=25.67\%$,
$(R_{in})_{C}=19.98\%$, and $(R_{in})_{D}=14.80\%$ in Cases A, B, C, and D,
respectively. With the decrease of the inverse energy from Cases A, B, and C
to D, the corresponding energy losses are also reduced, at rates of
$(R_{loss})_{A}=22.27\%$, $(R_{loss})_{B}=17.21\%$, $(R_{loss})_{C}=13.10\%$,
and $(R_{loss})_{D}=8.86\%$, respectively. Although the maximum energy of the
accelerated particles might be expected to be identical because the FEB has
the same size in all four cases, the cutoff energy values are nevertheless
modified by the energy losses in the different cases, which apply different
prescribed Gaussian scattering laws.
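To illustrate how such a phase-space-time database can be queried, the hypothetical sketch below (Python/NumPy) selects the particle that reaches the largest local-frame speed and extracts its trajectory; the record layout and field names are assumptions for illustration, not the actual database format.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical structured array: one row per (particle, time-step) sample.
records = np.zeros(3 * 4, dtype=[("pid", "i4"), ("step", "i4"),
                                 ("x", "f8"), ("v_local", "f8")])
records["pid"] = np.repeat([0, 1, 2], 4)          # three particles
records["step"] = np.tile(np.arange(4), 3)        # four stored time steps each
records["x"] = rng.uniform(0.0, 100.0, records.size)
records["v_local"] = np.abs(rng.normal(1.0, 3.0, records.size))

# Particle attaining the largest local-frame speed anywhere in the run.
imax = np.argmax(records["v_local"])
pid_max = records["pid"][imax]

# Its trajectory ordered in time -- the kind of curve traced in Figure 5.
traj = np.sort(records[records["pid"] == pid_max], order="step")
print(f"particle {pid_max}: V_Lmax = {records['v_local'][imax]:.3f}")
print(traj[["step", "x", "v_local"]])
```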
### 3.4 Energy spectrum
Figure 6: The two plots present the final energy spectra in the downstream
region and the precursor region, respectively. The thick solid line with a
narrow peak at $E=1.3105$ keV in each plot represents the identical initial
Maxwellian energy distribution used in each case. The solid, dashed,
dash-dotted and dotted extended curves with “power-law” tails present the
energy spectral distributions corresponding to Cases A, B, C and D,
respectively. All the energy spectral distributions are plotted in the same
shock frame.
The energy spectra with “power-law” tails are calculated in the shock frame
for the downstream region and the precursor region at the end of the
simulation. The same initial Maxwellian distribution for each case is shown in
each plot of Figure 6. As shown in Figure 6, the calculated energy spectra
indicate that the four extended curves in the downstream region, with
increasing central peak energies $(E_{peak})_{A}=0.1650$ keV,
$(E_{peak})_{B}=0.1723$ keV, $(E_{peak})_{C}=0.1986$ keV, and
$(E_{peak})_{D}=0.2870$ keV, characterize the Maxwellian distributions in the
“heated” downstream from Cases A, B, and C to D, respectively. The total
energy spectral indices, $(\Gamma_{tot})_{A}=0.7094$,
$(\Gamma_{tot})_{B}=0.7802$, $(\Gamma_{tot})_{C}=0.8208$, and
$(\Gamma_{tot})_{D}=0.8667$, indicate that the deviation of the “heated”
downstream Maxwellian distribution from the “power-law” distribution decreases
from Cases A, B, and C to D, correspondingly. The subshock energy spectral
indices, $(\Gamma_{sub})_{A}=1.8668$, $(\Gamma_{sub})_{B}=1.2413$,
$(\Gamma_{sub})_{C}=1.1819$, and $(\Gamma_{sub})_{D}=1.0094$, describe the
“power-law” tail of the energy spectrum in each case and imply that the
spectrum hardens from Cases A, B, and C to D, respectively. The cutoff energy
of the “power-law” tail in the energy spectrum increases, with values
$(E_{max})_{A}$=1.23 MeV, $(E_{max})_{B}$=1.93 MeV, $(E_{max})_{C}$=2.80 MeV
and $(E_{max})_{D}$=4.01 MeV from Cases A, B, C and D, respectively. In the
precursor region, the final energy spectrum divides into two very different
parts in each case. The part ranging from low energies to the central peak
shows irregular fluctuations in each case; these fluctuations indicate that
the cold upstream fluid slows down and becomes a “thermal fluid” through the
nonlinear “back reaction” processes. The other part, beyond the central peak
energy, shows a smooth “power-law” tail in each case.
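For completeness, a minimal sketch of how a “power-law” tail index could be extracted from a particle energy sample (Python): a least-squares line fit to the binned spectrum in log-log space above the thermal peak. The synthetic sample and the fitting window are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic energies (keV): a thermal bulk plus a power-law tail dN/dE ~ E^(-Gamma).
gamma_true = 1.2
thermal = rng.exponential(0.2, 50_000)
tail = 1.0 * (1.0 - rng.uniform(size=5_000)) ** (-1.0 / (gamma_true - 1.0))
energies = np.concatenate([thermal, tail])

# Bin the spectrum and fit log10(dN/dE) versus log10(E) above the thermal peak.
edges = np.logspace(-2, 3, 60)
counts, _ = np.histogram(energies, bins=edges)
centers = np.sqrt(edges[:-1] * edges[1:])
dn_de = counts / np.diff(edges)
window = (centers > 2.0) & (dn_de > 0)           # fit window: well above the peak
slope, intercept = np.polyfit(np.log10(centers[window]),
                              np.log10(dn_de[window]), 1)
print(f"fitted Gamma = {-slope:.2f} (input value {gamma_true})")
```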
Figure 7: Two plots showing the correlation of the compression ratios with the
energy losses and the correlation of the energy spectral indices with the
inverse energy, respectively. The triangles represent the total compression
ratios and the total energy spectral indices of all cases in each plot. The
circles indicate the subshock compression ratios and the subshock energy
spectral indices of all cases in each plot.
As shown in Figure 7, the two kinds of shock compression ratios both depend
clearly on the energy losses across the four presented simulations. From Cases
A, B, and C to D, the total compression ratio decreases together with the
energy losses, with each value larger than the standard value of four, while
the subshock compression ratio increases as the energy losses decrease, with
each value lower than four. Both the total compression ratio and the subshock
compression ratio approach the standard value of four as the energy loss
decreases. According to DSA theory, if the energy loss is kept to a minimum,
the simulation models will more closely fit the realistic physical situation.
Additionally, the energy spectral indices also depend clearly on the inverse
energy injected from the thermalized downstream region into the precursor
region.
## 4 Summary and conclusions
In summary, we have performed dynamical Monte Carlo simulations with Gaussian
scattering angular distributions on the Matlab platform, monitoring the
particles’ mass, momentum and energy at every instant in time. The specific
mass, momentum and energy loss functions with respect to time are presented,
and a series of analyses of the particle losses is obtained for the four
cases. We examine the relationship between the shock compression ratio and the
energy losses, and verify the consistency of the energy spectral index with
the inverse energy injected from the downstream into the precursor region, for
the simulation cases with the prescribed Gaussian scattering angular
distributions.
In conclusion, the relationship of the shock compression ratio with the energy
losses via the FEB verifies that the energy spectral index is determined by
the inverse-energy function of time. In fact, these energy losses depend on
the assumed prescribed scattering law. As expected, the maximum energy of the
accelerated particles is limited by the size of the FEB according to the
maximum mean free path in each case. However, the maximum particle energies
still differ substantially between the cases, even with the same FEB size. We
find that the total energy spectral index increases as the standard deviation
of the scattering angular distribution increases, whereas the subshock energy
spectral index decreases as the standard deviation of the scattering angular
distribution increases. In these simulations with multiple scattering angular
distributions, the prescribed scattering law dominates the energy losses and
the inverse energy. Consequently, a prescribed law that leads to the minimum
energy losses produces a harder subshock energy spectrum than those obtained
in cases with larger energy losses. These relationships motivate the search
for a new prescribed scattering law that minimizes the energy losses, making
the shock compression ratio more closely approach the standard value of four
for a nonrelativistic shock with high Mach number in astrophysics.
The authors would like to thank Drs. G. Li, Hongbo Hu, Siming Liu, Xueshang
Feng, and Gang Qin for many useful and interesting discussions concerning this
work. We are also grateful to Profs. Qijun Fu and Shujuan Wang, as well as the
other members of the solar radio group at NAOC.
## References
* Axford et al. (1977) Axford, W. I., Leer, E., & Skadron, G., 1977, in Proc. 15th Int. Cosmic Ray Conf. (Plovdiv), 132
* Bell (1978) Bell, A. R., 1978, MNRAS, 182, 147.
* Bell (2004) Bell, A. R., 2004, MNRAS, 353, 550.
* Berezhko & Ellison (1999) Berezhko, E. G. & Ellison, D. C. 1999, Astrophys. J., 526, 385
* Berezhko, et.al. (2003) Berezhko, E. G., Ksenofontov, L. T. & Völk H. J. 2003, Astron. Astrophys., 412, L11.
* Berezhko & Völk (2006) Berezhko, E. G., & Völk H. J. 2006, Astron. Astrophys., 451, 981.
* Blandford & Ostriker (1978) Blandford, R. D., & Ostriker, J. ,P. 1978, Astrophys. J., 221, L29.
* Blasi, et. al. (2007) Blasi, P., Amato, E., & Caprioli, D., 2007, M.N.R.A.S., 375,1471
* Blasi, et. al. (2005) Blasi, P., Gabici, S., & Vannoni, G., 2005, M.N.R.A.S., 361,907
* Cane et.al. (1990) Cane, H. V., von Rosenvinge, T. T., & McGuire, R. E., 1990, J. Geophys. Res., 95, 6575.
* Caprioli, et. al. (2010) Caprioli, D., Kang, H., Vladimirov, A. E. & Jones, T. W., 2010, M.N.R.A.S., 407,1773
* Ellison et al. (1996) Ellison, D. C., Baring, M. G., & Jones, F. C. , 1996, Astrophys. J.473,1029
* Ellison, Möbius & Paschmann (1990) Ellison, D. C., Möbius, E. & Paschmann, G.1990, Astrophys. J., 352, 376
* Ellison et al. (2005) Ellison, D. C., Blasi, P., & Gabici, S., 2005, in Proc. 29th Int. Cosmic Ray Conf. (India).
* Forslund (1985) Forslund, D. W., 1985, Space Sci. Rev., 42, 3
* Giacalone (2004) Giacalone, J. , 2004, Astron. Astrophys., 609, 452.
* Gosling et. al. (1981) Gosling, J.T., Asbridge, J.R., Bame, S.J., Feldman, W.C., Zwickl,R. D.,Paschmann, G.,Sckopke, N., & Hynds, R. J. 1981, J. Geophys. Res., 866, 547
* Hillas (1984) Hillas, A. M., 1984, ARA&A, 22, 425.
* Jones & Ellison (1991) Jones, F. C., & Ellison, D. C., 1991, Space Science Reviews, 58, 259.
* Kang & Jones (2007) Kang H. & Jones T.W. 2007, A. Ph., 28, 232
* Knerr, Jokipii & Ellison (1996) Knerr, J. M., Jokipii, J. R. & Ellison, D. C. 1996, Astrophys. J., 458, 641
* Krymsky (1977) Krymsky, G. F., 1977, Akad. Nauk SSSR Dokl., 243, 1306
* Lee & Ryan (1986) Lee, M. A., & Ryan, J. M., 1986, Astrophys. J., 303, 829
* Lee (2005) Lee, M. A., 2005, APJS, 158, 38
* Malkov, et. al. (2000) Malkov, M. A., Diamond, P. H., & Völk, H. J., 2000, Astrophys. J. Lett., 533, 171
* Nishikawa, et. al. (2008) Niemiec, J. , Pohl, M. , Stroman, T. , & Nishikawa, K.-I., 2008, Astrophys. J., 684, 1174.
* Li et al. (2003) Li, G., Zank, G. P., & Rice, W. K. M., 2003, JGR 108,1082
* Ostrowski (1988) Ostrowski, M. , 1988, M.N.R.A.S., 233, 257
* Pelletier (2001) Pelletier, G. 2001, Lecture Notes in Physics, , 576, 58
* Pelletier, et.al. (2006) Pelletier, G. , Lemoine, M. & Marcowith, A. , 2006, Astron. Astrophys., 453,181.
* Spitkovsky (2003) Spitkovsky, A. , 2008, Astrophys. J. Lett., 673, L39.
* Vainio & Laitinen (2007) Vainio, R., & Laitinen, T., 2007, Astrophys. J., 658, 622.
* Vladimirov et al. (2006) Vladimirov, A., Ellison, D. C., & Bykov, A., 2006, Astrophys. J., 652,1246
* Wang & Yan (2011) Wang, X., & Yan, Y., 2011, Astron. Astrophys., 530, A92.
* Winske & Omidi (2011) Winske, D. , & Omidi, N. , 1996, J. Geophys. Res., 101,17287–17304.
* Zank (2000) Zank, G., Rice, W.K.M., & Wu, C. C., 2000, JGR, 105, 25079
* Zirakashvili & Aharonian (2010) Zirakashvili, V. N. & Aharonian, F. A., 2010, Astrophys. J., 708, 965
|
arxiv-papers
| 2011-05-18T11:29:22 |
2024-09-04T02:49:18.901862
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Xin Wang, Yihua Yan",
"submitter": "Xin Wang Mr.",
"url": "https://arxiv.org/abs/1105.3593"
}
|
1105.3638
|
# Corrected portmanteau tests for VAR models with time-varying variance
Valentin Patilea (CREST (Ensai) & IRMAR (UEB), ENSAI - Campus de Ker-Lann,
Rue Blaise Pascal - BP 37203, 35172 BRUZ cedex, France. E-mail:
valentin.patilea@insa-rennes.fr) and Hamdi Raïssi (IRMAR-INSA, 20 avenue des
buttes de Coësmes, CS 70839, F-35708 Rennes Cedex 7, France. E-mail:
hamdi.raissi@insa-rennes.fr)
Abstract: The problem of test of fit for Vector AutoRegressive (VAR) processes
with unconditionally heteroscedastic errors is studied. The volatility
structure is deterministic but time-varying and allows for changes that are
commonly observed in economic or financial multivariate series such as breaks
or smooth transitions. Our analysis is based on the residual autocovariances
and autocorrelations obtained from Ordinary Least Squares (OLS), Generalized
Least Squares (GLS) and Adaptive Least Squares (ALS) estimation of the
autoregressive parameters. The OLS residuals are the standard estimates of
the VAR model errors. To build the GLS residuals we use the GLS estimate of
the VAR coefficients to estimate the model errors that we further standardize
by the time-varying volatility. Hence, the GLS estimates require the knowledge
of the variance structure. The ALS approach is the GLS approach adapted to the
_unknown_ time-varying volatility that is then estimated by kernel smoothing.
The properties of the three types of residual autocovariances and
autocorrelations are derived. In particular it is shown that the ALS and GLS
residual autocorrelations are asymptotically equivalent. It is also found that
the asymptotic distribution of the OLS residual autocorrelations can be quite
different from the standard chi-square asymptotic distribution obtained in a
correctly specified VAR model with iid innovations. As a consequence the
standard portmanteau tests are unreliable in our framework. The correct
critical values of the standard portmanteau tests based on the OLS residuals
are derived. Moreover, modified portmanteau statistics based on ALS residual
autocorrelations are introduced and their asymptotic critical values are
obtained. The finite sample properties of the goodness-of-fit tests we
consider are investigated by Monte Carlo experiments. The theoretical results
are also illustrated using a U.S. economic data set.
Keywords: VAR model; Unconditionally heteroscedastic errors; Residual
autocorrelations; Portmanteau tests.
## 1 Introduction
In econometric analysis, numerous tools are routinely used in the framework
of VAR (Vector AutoRegressive) modeling of time series variables (see
Lütkepohl (2005) and references therein). Nevertheless, it is well known that
these tools are in general noticeably affected by the fitted autoregressive
order. For instance, Thornton and Batten (1985), Stock and Watson (1989) and
Jones (1989) discussed the importance of a well-specified VAR model for tests
of linear Granger causality in mean. Therefore, checking the goodness-of-fit
of the autoregressive order is commonly performed in applied work before
proceeding to the analysis of the dynamics of the time series. The dominant
tests for the adequacy of the autoregressive order are the portmanteau tests,
introduced in the VAR framework by Chitturi (1974) and Hosking (1980). The
properties of the tests based on the residual autocorrelations are well
explored in the case of stationary processes (see e.g. Francq, Roy and Zakoïan
(2005) in the univariate case, and Francq and Raïssi (2007) or Boubacar
Mainassara (2010) in the multivariate case). Duchesne (2005), Brüggemann,
Lütkepohl and Saikkonen (2006) and Raïssi (2010) developed tests for residual
autocorrelation in a cointegrated framework with stationary innovations.
However, many applied studies have pointed out the presence of non-stationary
volatility in economic time series. For instance, Ramey and Vine (2006) found
a declining volatility in the U.S. automobile industry. Watson (1999) noted a
declining volatility of short-term U.S. interest rates and an increasing
volatility of long-term U.S. interest rates. Sensier and van Dijk (2004)
considered 214 U.S. macroeconomic variables and found that approximately 80%
of these variables have a volatility that changes over time. These findings
have stimulated interest among econometricians in the effects of
non-stationary volatility in time series analysis (see e.g. Kim, Leybourne and
Newbold (2002) or Cavaliere and Taylor (2007)).
The present paper is motivated by the need for reliable tools for testing the
adequacy of the autoregressive order of VAR models with non-stationary
volatility. On the one hand, we show that in such cases the use of standard
procedures for testing the adequacy of the autoregressive order can be quite
misleading. On the other hand, valid portmanteau tests based on Ordinary Least
Squares (OLS) and Adaptive Least Squares (ALS) residual autocovariances are
proposed for testing the goodness-of-fit of non-stationary but stable VAR
processes. More precisely, we consider the VAR model of order $p\geq 0$ and
dimension $d\geq 1$
$\displaystyle{X}_{t}={A}_{01}{X}_{t-1}+\dots+{A}_{0p}{X}_{t-p}+u_{t},$ (1.1)
$\displaystyle u_{t}=H_{t}\epsilon_{t},\qquad t=1,2,...$
where the $X_{t}$ are random vectors of dimension $d$ and the $d\times
d-$matrices ${A}_{0i}$, $i\in\\{1,\dots,p\\}$, are such that the process
$(X_{t})$ is stable, that is, $\det(A(z))\neq 0$ for all $|z|\leq 1$, with
${A}(z)=I_{d}-\sum_{i=1}^{p}{A}_{0i}z^{i}$. Here, $H_{t}$ is an _unknown_
$d\times d$ matrix-valued deterministic function of time and $(\epsilon_{t})$
is an innovation process with unit variance that could be serially dependent.
Phillips and Xu (2005) and Xu and Phillips (2008) have already studied the
problem of estimation of such univariate stable autoregressive processes.
Patilea and Raïssi (2010) investigated the estimation and the testing of
parameter restrictions for multivariate stable autoregressive processes as in
(1.1).
The usual way to check the adequacy of a stable VAR(p) model, implemented in
any specialized software, is to assume that the error term $u_{t}$ is second
order stationary, to fix an integer $m>0$ and to test
$\mathcal{H}_{0}:\,\,\mbox{Cov}(u_{t},u_{t-h})=0,\,\,\text{for all }\,0<h\leq
m,$ (1.2)
using a classical (Box-Pierce or Ljung-Box) portmanteau test statistic and
chi-square type critical values. The errors $u_{t}$ are approximated using the
OLS type estimates of the coefficients $A_{0i}$. With the volatility structure
we assumed in model (1.1), the variance of $u_{t}$ depends on $t$ and the
usual chi-square type critical values are in general inaccurate, that is the
asymptotic distribution of the classical portmanteau statistics under
$\mathcal{H}_{0}$ is no longer of chi-square type.
Here we propose two ways to correct this problem. First, we derive the correct
asymptotic distribution of the classical portmanteau test statistics under
$\mathcal{H}_{0}$ and the conditions of model (1.1). This asymptotic
distribution is a weighted sum of $d^{2}m$ independent chi-square
distributions. Next, we indicate how the correct critical values can be
approximated.
To explain our second approach, let us notice that
$\mbox{Cov}(u_{t},u_{t-h})=0$ is equivalent to
$\mbox{Cov}(\epsilon_{t},\epsilon_{t-h})=0$ and the variance of $\epsilon_{t}$
does not depend on the time $t$. Thus an alternative idea for checking the
adequacy of a model like (1.1) is to test
$\mathcal{H}_{0}^{\prime}:\,\,\mbox{Cov}(\epsilon_{t},\epsilon_{t-h})=0,\,\,\text{for
all }\,0<h\leq m.$ (1.3)
The values $\epsilon_{t}$ are approximated by residuals built using a
nonparametric estimate of the deterministic function $H_{t}$ and Adaptive
Least Squares (ALS) type estimates of the coefficients $A_{0i}$ that take into
account the volatility structure. More precisely, to build the ALS residual
vector at time $t$ we use the ALS estimate of the VAR coefficients to estimate
the VAR model error vector at time $t$ that we further standardize by the
nonparametric estimate of time-varying volatility $H_{t}$. Next, we build
classical portmanteau test statistics using the estimates of $\epsilon_{t}$
and we derive the asymptotic distribution under $\mathcal{H}_{0}$. The
asymptotic distribution is again a weighted sum of $d^{2}m$ independent chi-
square distributions and the weights can be easily estimated from the data. In
some important particular cases, including the univariate (i.e. $d=1$)
autoregressive models, we retrieve the standard chi-squared asymptotic
distribution.
The remainder of the paper is organized as follows. In section 2 we specify
the framework of our study and state the asymptotic behavior of the OLS and
the Generalized Least Squares (GLS) estimators of the VAR coefficients. The
asymptotic normality of the OLS and the infeasible GLS residual
autocovariances and autocorrelations is established in section 3. The GLS
residuals are defined as the standardized (by the true volatility $H_{t}$)
estimates of the model error vector obtained with the GLS estimates of the VAR
coefficients. In section 4 we highlight the unreliability of the chi-square
type critical values for standard portmanteau statistics and we derive their
correct critical values in our framework. In section 5 the ALS residual
autocovariances and autocorrelations are introduced. Since the GLS residual
autocovariances and autocorrelations are infeasible, we investigate the
relationship between the GLS and ALS residual autocovariances and
autocorrelations and we show that, in some sense, they are asymptotically
equivalent. This result is used to introduce portmanteau tests based on the
ALS residuals that have the same critical values like those based on the
infeasible GLS residuals. In section 6 we propose suitably modified quadratic
forms of OLS and ALS residual autocovariances in order to obtain alternative
test statistics with chi-square asymptotic distributions under the null
hypothesis. Such modified statistics are nothing but Wald type test statistics
for testing the nullity of a vector of autocovariances. In section 7 some
theoretical comparisons of the asymptotic power, in the Bahadur sense, are
carried out: the classical Box-Pierce portmanteau test versus the test based
on modified quadratic forms of OLS residual autocorrelations, and the ALS
versus the OLS residual autocorrelation based portmanteau tests. A possible
extension of our findings on testing the order of a VAR model to the case of
heteroscedastic co-integrated variables is briefly described in section 8. The
finite sample properties of the different tests considered in this paper are
studied by means of Monte Carlo experiments in section 9. In section 10,
applications to real U.S. economic data sets are used to illustrate the
theoretical results: the U.S. balance on services and balance on merchandise
trade data, and the U.S. energy-transport consumer price indexes. The summary
of our findings and some concluding remarks are given in section 11. The
proofs, tables and figures are relegated to the appendices.
## 2 Parameters estimation
In the following weak convergence is denoted by $\Rightarrow$ while
$\stackrel{{\scriptstyle P}}{{\rightarrow}}$ stands for convergence in
probability. The symbol $\otimes$ denotes the usual Kronecker product for
matrices and $A^{\otimes 2}$ stands for $A\otimes A$. The symbol
$\mbox{vec}(\cdot)$ is used for the column vectorization operator. We denote
by $[a]$ the integer part of a real number $a$. For a squared matrix $A$,
$\mbox{tr}(A)$ denotes the trace. For a random variable $x$ we define
$\parallel x\parallel_{r}=(E\parallel x\parallel^{r})^{1/r}$, where $\parallel
x\parallel$ denotes the Euclidean norm. We also define the $\sigma-$field
$\mathcal{F}_{t}=\sigma(\epsilon_{s}:s\leq t)$. The following conditions on
the innovations process $(u_{t})$ are assumed to hold.
Assumption A1: (i) The $d\times d$ matrices $H_{t}$ are positive definite and
the components $\\{g_{kl}(r):1\leq k,l\leq d\\}$ of the matrix $G(r)$ are
measurable deterministic functions on the interval $(0,1]$, such that
$H_{t}=G(t/T)$ and, $\forall\,1\leq k,l\leq d$,
$\sup_{r\in(0,1]}|g_{kl}(r)|<\infty$ and $g_{kl}(\cdot)$ satisfies a
Lipschitz condition piecewise on a finite number of sub-intervals that
partition $(0,1]$ (the partition may depend on $k,l$). The matrix
$\Sigma(r)=G(r)G(r)^{\prime}$ is assumed positive definite for all $r$.
(ii) The process $(\epsilon_{t})$ is $\alpha$-mixing and such that
$E(\epsilon_{t}\mid\mathcal{F}_{t-1})=0$,
$E(\epsilon_{t}\epsilon_{t}^{\prime}\mid\mathcal{F}_{t-1})=I_{d}$ and
$\sup_{t}\parallel\epsilon_{it}\parallel_{4\mu}<\infty$ for some $\mu>1$ and
all $i\in\\{1,\dots,d\\}$.
The second approach we propose for checking the adequacy of a VAR(p) model
requires the estimation of the innovations $\epsilon_{t}$, and hence we will
need an identification condition for $G(r)$ and an estimate of the matrix
$H_{t}$. The condition that $H_{t}$ is a positive definite matrix identifies
$G(r)$ as the square root of $\Sigma(r)$, and this is a convenient choice for
the mathematical proofs. Nevertheless, one can notice from what follows that
our results could be stated under alternative conditions, for instance that
$H_{t}$ is a lower triangular matrix with diagonal components restricted to be
positive. The conditions on the unknown volatility function $G(r)$ are general
and allow for a large set of dynamics for the innovation variance, for
instance abrupt shifts or piecewise affine functions. This assumption
generalizes to a multivariate framework the specification of the innovation
variance considered in Xu and Phillips (2008). The conditional
homoscedasticity of $(\epsilon_{t})$ imposed in (ii) ensures the
identifiability of $\Sigma(r)$. We call a model like (1.1) with the innovation
process $(u_{t})$ satisfying Assumption A1 a stable VAR(p) model with
time-varying variance.
To introduce the OLS and GLS estimators of the autoregressive parameters, set
the observations $X_{-p+1},\dots,X_{0}$ equal to the null vector of
$\mathbb{R}^{d}$ (or any other initial values) and denote by
$\theta_{0}=(\mbox{vec}\>(A_{01})^{\prime}\dots\mbox{vec}\>(A_{0p})^{\prime})^{\prime}\in\mathbb{R}^{pd^{2}}$
the vector of true parameters. The equation (1.1) becomes
$\displaystyle X_{t}=(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})\theta_{0}+u_{t},\quad t=1,2,\dots$ (2.1) $\displaystyle
u_{t}=H_{t}\epsilon_{t},$
with $\tilde{X}_{t-1}=(X_{t-1}^{\prime},\dots,X_{t-p}^{\prime})^{\prime}$.
Then the OLS estimator is
$\hat{\theta}_{OLS}=\hat{\Sigma}_{\tilde{X}}^{-1}\mbox{vec}\>\left(\hat{\Sigma}_{X}\right),$
where
$\hat{\Sigma}_{\tilde{X}}=T^{-1}\sum_{t=1}^{T}\tilde{X}_{t-1}\tilde{X}_{t-1}^{\prime}\otimes
I_{d}\quad\mbox{and}\quad\hat{\Sigma}_{X}=T^{-1}\sum_{t=1}^{T}X_{t}\tilde{X}_{t-1}^{\prime}.$
Multiplying by $H_{t}^{-1}$ on the left in equation (2.1) we obtain
$\displaystyle H_{t}^{-1}X_{t}=H_{t}^{-1}(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})\theta_{0}+\epsilon_{t},$
and then the GLS estimator is
$\hat{\theta}_{GLS}=\hat{\Sigma}_{\tilde{\underline{X}}}^{-1}\mbox{vec}\>\left(\hat{\Sigma}_{\underline{X}}\right),$
(2.2)
with
$\hat{\Sigma}_{\tilde{\underline{X}}}=T^{-1}\sum_{t=1}^{T}\tilde{X}_{t-1}\tilde{X}_{t-1}^{\prime}\otimes\Sigma_{t}^{-1},\quad\hat{\Sigma}_{\underline{X}}=T^{-1}\sum_{t=1}^{T}\Sigma_{t}^{-1}X_{t}\tilde{X}_{t-1}^{\prime}.$
In general, the GLS estimator is infeasible since it involves the true
volatility matrix.
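To illustrate model (1.1) and the two estimators in a simple setting, the sketch below (Python/NumPy) simulates a bivariate VAR(1) with a single variance break and computes the OLS estimator and the infeasible GLS estimator of (2.2) for $p=1$; the data-generating design (coefficients, break point, volatilities) is an arbitrary illustrative choice, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 2000, 2
A0 = np.array([[0.4, 0.1],
               [0.0, 0.3]])                  # stable VAR(1) coefficient matrix

def G(r):
    """Time-varying volatility G(t/T): one variance break at r = 0.5 (assumed design)."""
    s = np.array([1.0, 1.0]) if r < 0.5 else np.array([3.0, 0.5])
    return np.diag(s)

# Simulate X_t = A0 X_{t-1} + H_t eps_t with H_t = G(t/T).
X = np.zeros((T + 1, d))
H = [G((t + 1) / T) for t in range(T)]
for t in range(T):
    X[t + 1] = A0 @ X[t] + H[t] @ rng.standard_normal(d)
Y, Z = X[1:], X[:-1]                         # X_t and X_{t-1}

# OLS estimator of A0.
A_ols = (Y.T @ Z) @ np.linalg.inv(Z.T @ Z)

# Infeasible GLS estimator (2.2) written out for p = 1: weight by Sigma_t^{-1}.
lhs = np.zeros((d * d, d * d))
rhs = np.zeros(d * d)
for t in range(T):
    Sinv = np.linalg.inv(H[t] @ H[t].T)
    lhs += np.kron(np.outer(Z[t], Z[t]), Sinv)
    rhs += np.kron(Z[t].reshape(-1, 1), Sinv) @ Y[t]   # = vec(Sinv X_t X_{t-1}')
A_gls = np.linalg.solve(lhs, rhs).reshape(d, d, order="F")

print("OLS estimate:\n", np.round(A_ols, 3))
print("GLS estimate:\n", np.round(A_gls, 3))
```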
Due to the stability condition, we can write
$X_{t}=\sum_{i=0}^{\infty}\psi_{i}u_{t-i},$ where $\psi_{0}=I_{d}$ and the
components of the $\psi_{i}$’s are absolutely summable $d\times d-$matrices.
Then
$\tilde{X}_{t}=\sum_{i=0}^{\infty}\tilde{\psi}_{i}u_{t-i}^{p},$
where $u_{t}^{p}$ is given by $u_{t}^{p}=\mathbf{1}_{p}\otimes u_{t}$,
$\mathbf{1}_{p}$ is the vector of ones of dimension $p$, and
$\tilde{\psi}_{i}=diag\\{\psi_{i},\psi_{i-1},\dots,\psi_{i-p+1}\\},$
taking $\psi_{j}=0$ for $j<0$.
Let $\mathbf{1}_{p\times p}$ stand for the $p\times p-$matrix with all
components equal to one. Patilea and Raïssi (2010) proved that under A1
$T^{\frac{1}{2}}(\hat{\theta}_{GLS}-\theta_{0})\Rightarrow\mathcal{N}(0,\Lambda_{1}^{-1}),$
(2.3)
where
$\Lambda_{1}=\int_{0}^{1}\sum_{i=0}^{\infty}\left\\{\tilde{\psi}_{i}(\mathbf{1}_{p\times
p}\otimes\Sigma(r))\tilde{\psi}_{i}^{\prime}\right\\}\otimes\Sigma(r)^{-1}dr,$
and
$T^{\frac{1}{2}}(\hat{\theta}_{OLS}-\theta_{0})\Rightarrow\mathcal{N}(0,\Lambda_{3}^{-1}\Lambda_{2}\Lambda_{3}^{-1}),$
(2.4)
with
$\Lambda_{2}=\int_{0}^{1}\sum_{i=0}^{\infty}\left\\{\tilde{\psi}_{i}(\mathbf{1}_{p\times
p}\otimes\Sigma(r))\tilde{\psi}_{i}^{\prime}\right\\}\otimes\Sigma(r)dr,$
$\Lambda_{3}=\int_{0}^{1}\sum_{i=0}^{\infty}\left\\{\tilde{\psi}_{i}(\mathbf{1}_{p\times
p}\otimes\Sigma(r))\tilde{\psi}_{i}^{\prime}\right\\}dr\otimes I_{d}.$
Moreover, they showed that
$\Lambda_{3}^{-1}\Lambda_{2}\Lambda_{3}^{-1}-\Lambda_{1}^{-1}$ is positive
semi-definite.
## 3 Asymptotic behavior of the residual autocovariances
Let us define the OLS-based estimates of $u_{t}$ and the GLS-based estimates
of $\epsilon_{t}$
$\hat{u}_{t}=X_{t}-(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})\hat{\theta}_{OLS}\quad\text{and}\quad\hat{\epsilon}_{t}=H_{t}^{-1}X_{t}-H_{t}^{-1}(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})\hat{\theta}_{GLS}.$
The corresponding residual autocovariances are defined as
$\hat{\Gamma}_{OLS}^{u}(h)=T^{-1}\sum_{t=h+1}^{T}\hat{u}_{t}\hat{u}_{t-h}^{\prime}\quad\mbox{and}\quad\hat{\Gamma}_{GLS}^{\epsilon}(h)=T^{-1}\sum_{t=h+1}^{T}\hat{\epsilon}_{t}\hat{\epsilon}_{t-h}^{\prime}.$
In general the estimated residuals $\hat{\epsilon}_{t}$ as well as the
autocovariances $\hat{\Gamma}_{GLS}^{\epsilon}(h)$ are not computable since
they depend on the unknown matrices $H_{t}$ and the infeasible estimator
$\hat{\theta}_{GLS}$.
For any fixed integer $m\geq 1$, the estimates of the first $m$ residual
autocovariances are defined by
$\hat{\gamma}_{m}^{u,OLS}\\!=\\!\mbox{vec}\\!\left\\{\\!\left(\hat{\Gamma}^{u}_{OLS}(1),\dots,\hat{\Gamma}^{u}_{OLS}(m)\right)\\!\right\\},\,\,\,\,\hat{\gamma}_{m}^{\epsilon,GLS}\\!=\\!\mbox{vec}\\!\left\\{\\!\left(\hat{\Gamma}^{\epsilon}_{GLS}(1),\dots,\hat{\Gamma}^{\epsilon}_{GLS}(m)\right)\\!\right\\}.$
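Continuing the illustrative VAR(1) sketch above, the OLS residual autocovariances $\hat{\Gamma}^{u}_{OLS}(h)$ and the stacked vector $\hat{\gamma}^{u,OLS}_{m}$ can be computed as follows (Python; `Y`, `Z` and `A_ols` are the hypothetical objects from the previous sketch).

```python
import numpy as np

# OLS residuals of the illustrative VAR(1): u_hat_t = X_t - A_ols X_{t-1}.
U_hat = Y - Z @ A_ols.T                       # shape (T, d)

def gamma_hat(U, h):
    """Residual autocovariance Gamma_hat(h) = T^{-1} sum_{t>h} u_t u_{t-h}'."""
    n = U.shape[0]
    return U[h:].T @ U[:n - h] / n

m = 5
gamma_m = np.concatenate(
    [gamma_hat(U_hat, h).flatten(order="F") for h in range(1, m + 1)])
print("Gamma_hat(1):\n", np.round(gamma_hat(U_hat, 1), 4))
print("stacked vector length:", gamma_m.size)   # equals d^2 * m
```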
To state the asymptotic behavior of $\hat{\gamma}_{m}^{u,OLS}$ and
$\hat{\gamma}_{m}^{\epsilon,GLS}$ let us define
$K=\left(\begin{array}[]{cccc}A_{01}&\dots&A_{0p-1}&A_{0p}\\\
I_{d}&0&\dots&0\\\ &\ddots&\ddots&\vdots\\\ 0&&I_{d}&0\\\ \end{array}\right).$
Note that if $\tilde{u}_{t}=(u_{t}^{\prime},0\dots,0)^{\prime}$,
$\tilde{X}_{t}=K\tilde{X}_{t-1}+\tilde{u}_{t}.$ Now, let
$\Sigma_{G}=\int_{0}^{1}\Sigma(r)dr$, $\Sigma_{G^{\otimes
2}}=\int_{0}^{1}\Sigma(r)^{\otimes 2}dr$ and
$\Phi^{u}_{m}=\sum_{i=0}^{m-1}\left\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\Sigma_{G}\otimes
I_{d}\right\\}\left\\{K^{i\,\prime}\otimes I_{d}\right\\},$ (3.1)
$\Lambda^{u,\theta}_{m}=\sum_{i=0}^{m-1}\left\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\Sigma_{G^{\otimes
2}}\right\\}\left\\{K^{i\,\prime}\otimes I_{d}\right\\},$ (3.2)
$\Lambda^{\epsilon,\theta}_{m}=\sum_{i=0}^{m-1}\left\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\int_{0}^{1}G(r)^{\prime}\otimes
G(r)^{-1}dr\right\\}\left\\{K^{i\,\prime}\otimes I_{d}\right\\},$ (3.3)
$\Lambda^{u,u}_{m}=I_{m}\otimes\Sigma_{G^{\otimes 2}},$ (3.4)
where $e_{m}(j)$ is the vector of dimension $m$ such that the $j$th component
is equal to one and zero elsewhere. (Recall that our identification condition
for $H_{t}$ implies $G(r)=\Sigma(r)^{1/2}$.)
###### Proposition 1
If model (1.1) is correct and Assumption A1 holds true, we have
$T^{\frac{1}{2}}\hat{\gamma}_{m}^{u,OLS}\Rightarrow\mathcal{N}(0,\Sigma^{u,OLS}),$
(3.5)
where
$\Sigma^{u,OLS}=\Lambda^{u,u}_{m}-\Lambda^{u,\theta}_{m}\Lambda_{3}^{-1}\Phi_{m}^{u\,\prime}-\Phi^{u}_{m}\Lambda_{3}^{-1}\Lambda^{u,\theta\,\prime}_{m}+\Phi_{m}^{u}\Lambda_{3}^{-1}\Lambda_{2}\Lambda_{3}^{-1}\Phi_{m}^{u\,\prime},$
(3.6)
$T^{\frac{1}{2}}\hat{\gamma}_{m}^{\epsilon,GLS}\Rightarrow\mathcal{N}(0,\Sigma^{\epsilon,GLS}),$
(3.7)
where
$\Sigma^{\epsilon,GLS}=I_{d^{2}m}-\Lambda^{\epsilon,\theta}_{m}\Lambda_{1}^{-1}\Lambda^{\epsilon,\theta\,^{\prime}}_{m}.$
(3.8)
In the particular case $p=0$, $\Sigma^{u,OLS}=\Lambda^{u,u}_{m}$ and
$\Sigma^{\epsilon,GLS}=I_{d^{2}m}$.
Let us discuss the conclusions of Proposition 1 in some particular situations.
In the case where $\Sigma(\cdot)=\sigma^{2}(\cdot)I_{d}$ for some positive
scalar function $\sigma(\cdot)$, we have
$\Lambda^{\epsilon,\theta}_{m}=\\!\sum_{i=0}^{m-1}\\!\left\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes
I_{d}\otimes I_{d}\right\\}\left\\{K^{i\,\prime}\otimes
I_{d}\right\\}\\!,\quad\Lambda_{1}\\!=\\!\sum_{i=0}^{\infty}\left\\{\tilde{\psi}_{i}(\mathbf{1}_{p\times
p}\otimes I_{d})\tilde{\psi}_{i}^{\prime}\right\\}\otimes I_{d},$
so that in this case the asymptotic distribution of the $\epsilon_{t}$
autocovariance estimates $\hat{\gamma}_{m}^{\epsilon,GLS}$ does not depend on
the volatility function $\Sigma(\cdot)$. Meanwhile, the (asymptotic)
covariance matrix $\Sigma^{u,OLS}$ still depends on the volatility function.
If we suppose that $(u_{t})$ have a time-constant variance
$\Sigma(r)\equiv\Sigma_{u}$, we obtain
$\Lambda_{1}=\\!E\\!\left[\tilde{X}_{t}\tilde{X}_{t}^{\prime}\right]\otimes\Sigma_{u}^{-1},\>\>\Lambda^{\epsilon,\theta}_{m}=\\!E\\!\left[\epsilon_{t}^{m}\tilde{X}_{t}^{\prime}\right]\\!\otimes
G_{u}^{-1},\>\Lambda^{u,u}_{m}=I_{m}\otimes\Sigma_{u}^{\otimes
2},\>\>\Lambda_{3}=\\!E\\!\left[\tilde{X}_{t}\tilde{X}_{t}^{\prime}\right]\\!\otimes
I_{d},$
where $\Sigma_{u}=G_{u}G_{u}^{\prime}$, and
$\Lambda^{u,\theta}_{m}=E\left[u_{t}^{m}\tilde{X}_{t}^{\prime}\right]\\!\otimes\Sigma_{u},\>\>\Lambda_{2}=E\left[\tilde{X}_{t}\tilde{X}_{t}^{\prime}\right]\\!\otimes\Sigma_{u},\>\>\Phi^{u}_{m}=E\left[u_{t}^{m}\tilde{X}_{t}^{\prime}\right]\otimes
I_{d},$
where $u_{t}^{m}\\!=(u_{t}^{\prime},\dots,u_{t-m}^{\prime})^{\prime}$ and
$\epsilon_{t}^{m}\\!=(\epsilon_{t}^{\prime},\dots,\epsilon_{t-m}^{\prime})^{\prime}$.
By straightforward computations
$\Sigma^{u,OLS}=I_{m}\otimes\Sigma_{u}^{\otimes
2}-E\left[u_{t}^{m}\tilde{X}_{t}^{\prime}\right]E\left[\tilde{X}_{t}\tilde{X}_{t}^{\prime}\right]^{-1}E\left[u_{t}^{m}\tilde{X}_{t}^{\prime}\right]^{\prime}\otimes\Sigma_{u},$
(3.9)
$\Sigma^{\epsilon,GLS}=I_{d^{2}m}-E\left[\epsilon_{t}^{m}\tilde{X}_{t}^{\prime}\right]E\left[\tilde{X}_{t}\tilde{X}_{t}^{\prime}\right]^{-1}E\left[\epsilon_{t}^{m}\tilde{X}_{t}^{\prime}\right]^{\prime}\otimes
I_{d}.$ (3.10)
Formula (3.9) (resp. (3.10)) corresponds to the (asymptotic) covariance matrix
obtained in the standard case with an i.i.d. error process of variance
$\Sigma_{u}$ (resp. $I_{d}$), see Lütkepohl (2005), Proposition 4.5. Herein,
some dependence of the error process is allowed. In particular, equation
(3.10) indicates that the homoscedastic (time-constant variance) case is
another situation where $\Sigma^{\epsilon,GLS}$ does not depend on error
process variance $\Sigma_{u}$.
Proposition 1 shows that in general VAR models with time-varying variance the
covariance matrix $\Sigma^{\epsilon,GLS}$ depends on $\Sigma(\cdot)$. For the
sake of simpler notation, hereafter we write $\hat{\Gamma}_{OLS}(h)$ (resp.
$\hat{\gamma}^{OLS}_{m}$) (resp. $\Sigma^{OLS}$) instead of
$\hat{\Gamma}_{OLS}^{u}(h)$ (resp. $\hat{\gamma}^{u,OLS}_{m}$) (resp.
$\Sigma^{u,OLS}$). A similar notational simplification will be applied to
$\hat{\Gamma}^{\epsilon}_{GLS}(h)$, $\hat{\gamma}^{\epsilon,GLS}_{m}$ and
$\Sigma^{\epsilon,GLS}$.
The following example shows that when the error process is heteroscedastic,
the covariance matrices $\Sigma^{OLS}$ and $\Sigma^{GLS}$ can be quite
different and far from the covariance matrices obtained in the stationary
case.
###### Example 3.1
Consider a bivariate $AR(1)$ model $X_{t}=A_{0}X_{t-1}+u_{t}$ with true
parameter $A_{0}$ equal to the zero $2\times 2-$matrix. One can use such a
model to study linear Granger causality in mean between uncorrelated
variables. However in practice one has first to check that the error process
is a white noise. If we assume for simplicity that
$\Sigma(r)=\left(\begin{array}[]{cc}\Sigma_{1}(r)&0\\\ 0&\Sigma_{2}(r)\\\
\end{array}\right),$
we obtain diagonal covariance matrices $\Sigma^{OLS}=diag\\{0_{4\times
4},I_{m-1}\otimes\breve{\Sigma}^{OLS}\\}$ and
$\Sigma^{GLS}=diag\\{\breve{\Sigma}^{GLS},I_{4(m-1)}\\}$, with
${\breve{\Sigma}^{OLS}=\left(\begin{array}[]{cccc}\int_{0}^{1}\Sigma_{1}(r)^{2}dr&0&0&0\\\
0&\int_{0}^{1}\Sigma_{1}(r)\Sigma_{2}(r)dr&0&0\\\
0&0&\int_{0}^{1}\Sigma_{1}(r)\Sigma_{2}(r)dr&0\\\
0&0&0&\int_{0}^{1}\Sigma_{2}(r)^{2}dr\\\ \end{array}\right)}$
and
${\breve{\Sigma}^{GLS}=\left(\begin{array}[]{cccc}0&0&0&0\\\
0&1-\frac{(\int_{0}^{1}\Sigma_{1}(r)^{\frac{1}{2}}\Sigma_{2}(r)^{-\frac{1}{2}}dr)^{2}}{\int_{0}^{1}\Sigma_{1}(r)\Sigma_{2}(r)^{-1}dr}&0&0\\\
0&0&1-\frac{(\int_{0}^{1}\Sigma_{1}(r)^{-\frac{1}{2}}\Sigma_{2}(r)^{\frac{1}{2}}dr)^{2}}{\int_{0}^{1}\Sigma_{2}(r)\Sigma_{1}(r)^{-1}dr}&0\\\
0&0&0&0\\\ \end{array}\right)}.$
We denote by $0_{q\times q}$ the null matrix of dimension $q\times q$. Note
that the matrix $I_{4(m-1)}$ which appears in the expression of $\Sigma^{GLS}$
is a consequence of the assumption $A_{0}=0_{2\times 2}$. If we suppose that
the errors are homoscedastic, that is $\Sigma(r)$ is constant, equal to some
$\Sigma_{u}$, we obtain $\Sigma^{OLS}=diag\\{0_{4\times
4},I_{m-1}\otimes\Sigma_{u}^{\otimes 2}\\}$ and
$\Sigma^{GLS}=diag\\{0_{4\times 4},I_{4(m-1)}\\}$. Therefore in the OLS
approach and if the innovations variance is spuriously assumed constant, the
asymptotic spurious covariance matrix $\Sigma_{S}^{OLS}=diag\\{0_{4\times
4},I_{m-1}\otimes\Sigma_{u,S}^{\otimes 2}\\}$ is used with
$\Sigma_{u,S}=\left(\begin{array}[]{cc}\int_{0}^{1}\Sigma_{1}(r)dr&0\\\
0&\int_{0}^{1}\Sigma_{2}(r)dr\\\ \end{array}\right).$
Now we illustrate the difference between the covariance matrices obtained if
we take into account the unconditional heteroscedasticity of the process and
the case where the process is spuriously supposed homoscedastic. We take
$\Sigma_{1}(r)=\sigma_{10}^{2}+(\sigma_{11}^{2}-\sigma_{10}^{2})\times\mathbf{1}_{\\{r\geq\tau_{1}\\}}(r)$
(3.11)
and
$\Sigma_{2}(r)=\sigma_{20}^{2}+(\sigma_{21}^{2}-\sigma_{20}^{2})\times\mathbf{1}_{\\{r\geq\tau_{2}\\}}(r),$
(3.12)
where $\tau_{i}\in[0,1]$ with $i\in\\{1,2\\}$. This specification of the
volatility function is inspired by Example 1 of Xu and Phillips (2008) (see
also Cavaliere (2004)). In Figure 1, we take $\tau_{1}=\tau_{2}$,
$\sigma_{10}^{2}=\sigma_{20}^{2}=1$ and $\sigma_{11}^{2}=0.5$, so that only
the break dates and $\sigma_{21}^{2}$ vary freely. In Figure 2 only the break
dates vary, with $\tau_{1}\neq\tau_{2}$ in general, and
$\sigma_{10}^{2}=\sigma_{20}^{2}=1$, $\sigma_{11}^{2}=\sigma_{21}^{2}=4$. In
Figures 1 and 2 we plot, in the left graphic, the second diagonal component
$\Sigma^{GLS}(2,2)$ of $\Sigma^{GLS}$ and, in the right graphic, the ratio
$\Sigma^{OLS}(6,6)/\Sigma^{OLS}_{S}(6,6)$.
The left graphic of Figure 1 shows that $\Sigma^{GLS}(2,2)$ can
be far from zero for larger values of $\sigma_{21}$ and when the breaking
point $\tau_{1}$ is located early in the sample. From the right graphic of
Figure 1 we can see that the ratio
$\Sigma^{OLS}(6,6)/\Sigma^{OLS}_{S}(6,6)$ can be far from 1; however, the
relation between this ratio and the variations of $\tau_{1}$ and $\sigma_{21}$
is not clear. From the left graphic of Figure 2, it appears that
$\Sigma^{GLS}(2,2)$ can be far from zero. According to the right graphic of
Figure 2, the relative difference between $\Sigma^{OLS}(6,6)$ and
$\Sigma^{OLS}_{S}(6,6)$ is significantly larger when the breaking points
$\tau_{1}$ and $\tau_{2}$ are located at the end of the sample. This example
shows that the standard results for the analysis of the autocovariances can be
quite misleading when the unconditional homoscedasticity assumption on the
innovation process does not hold.
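For a numerical illustration of Example 3.1, the sketch below (Python) evaluates the non-trivial entries displayed above for the single-break volatilities (3.11)-(3.12); the particular break parameters are arbitrary illustrative values.

```python
import numpy as np

# Single-break volatilities (3.11)-(3.12); the parameters are illustrative only.
tau1, tau2 = 0.4, 0.4
s10, s11 = 1.0, 0.5          # sigma_10^2, sigma_11^2
s20, s21 = 1.0, 4.0          # sigma_20^2, sigma_21^2

N = 200_000
r = np.arange(1, N + 1) / N                  # uniform grid on (0, 1]
S1 = s10 + (s11 - s10) * (r >= tau1)
S2 = s20 + (s21 - s20) * (r >= tau2)

def integral(f):
    """Integral over (0,1], approximated by the Riemann sum on the uniform grid."""
    return f.mean()

# Second diagonal entry of the GLS covariance matrix of Example 3.1.
gls_22 = 1.0 - integral(np.sqrt(S1 / S2)) ** 2 / integral(S1 / S2)

# Ratio Sigma_OLS(6,6) / Sigma_OLS_S(6,6) plotted in the right graphics.
ratio_66 = integral(S1 * S2) / (integral(S1) * integral(S2))

print(f"Sigma_GLS(2,2)                    = {gls_22:.4f}")
print(f"Sigma_OLS(6,6) / Sigma_OLS_S(6,6) = {ratio_66:.4f}")
```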
We also consider the vector of residual autocorrelations: for a given integer
$m\geq 1$, define
$\hat{\rho}_{m}^{OLS}=\mbox{vec}\>\left\\{\left(\hat{R}_{OLS}(1),\dots,\hat{R}_{OLS}(m)\right)\right\\}\quad\mbox{where}\quad\hat{R}_{OLS}(h)=\hat{S}_{u}^{-1}\hat{\Gamma}_{OLS}(h)\hat{S}_{u}^{-1}$
with
$\hat{S}_{u}^{2}=\mbox{Diag}\\{\hat{\sigma}_{u}^{2}(1),\dots,\hat{\sigma}_{u}^{2}(d)\\}$,
$\hat{\sigma}_{u}^{2}(i)=T^{-1}\sum_{t=1}^{T}\hat{u}_{it}^{2}$, and
$\hat{\rho}_{a,m}^{GLS}=\mbox{vec}\>\left\\{\left(\hat{R}_{GLS}(1),\dots,\hat{R}_{GLS}(m)\right)\right\\}\quad\mbox{where}\quad\hat{R}_{GLS}(h)=\hat{S}_{\epsilon}^{-1}\hat{\Gamma}_{GLS}(h)\hat{S}_{\epsilon}^{-1},$
with
$\hat{S}_{\epsilon}^{2}=\mbox{Diag}\\{\hat{\sigma}_{\epsilon}^{2}(1),\dots,$
$\hat{\sigma}_{\epsilon}^{2}(d)\\}$,
$\hat{\sigma}_{\epsilon}^{2}(i)=T^{-1}\sum_{t=1}^{T}\hat{{\epsilon}}_{it}^{2}$.
Since $\epsilon_{t}$ has identity variance matrix, we can also define
$\hat{\rho}_{b,m}^{GLS}=\hat{\gamma}_{m}^{GLS}.$
###### Proposition 2
If model (1.1) is correct and Assumption A1 holds true, we have
$T^{\frac{1}{2}}\hat{\rho}_{m}^{OLS}\Rightarrow\mathcal{N}(0,\Psi^{OLS}),$
(3.13)
where
$\Psi^{OLS}=\\{I_{m}\otimes(S_{u}\otimes
S_{u})^{-1}\\}\Sigma^{OLS}\\{I_{m}\otimes(S_{u}\otimes S_{u})^{-1}\\},$
where $S_{u}^{2}=\mbox{Diag}\\{\Sigma_{G,11},\dots,\Sigma_{G,dd}\\}$.
Moreover,
$T^{\frac{1}{2}}\hat{\rho}_{m}^{GLS}\Rightarrow\mathcal{N}(0,\Sigma^{GLS}),$
(3.14)
where $\hat{\rho}_{m}^{GLS}$ stands for any of $\hat{\rho}_{a,m}^{GLS}$ or
$\hat{\rho}_{b,m}^{GLS}$.
Using Proposition 2, $\hat{S}_{u}$ and a consistent estimator of
$\Sigma^{OLS}$ (which can be built in a similar way to that of
$\Delta_{m}^{OLS}$, see Section 4), one can easily build a consistent estimate
of $\Psi^{OLS}$ and confidence intervals for the OLS residual
autocorrelations.
## 4 Modified portmanteau tests based on OLS estimation
Corrected portmanteau tests based on the OLS residual autocorrelations are
proposed below. We use the standard Box-Pierce statistic, Box and Pierce
(1970), introduced in the VAR framework by Chitturi (1974)
$\displaystyle Q_{m}^{OLS}$ $\displaystyle=$ $\displaystyle
T\sum_{h=1}^{m}\mbox{tr}\left(\hat{\Gamma}_{OLS}^{\prime}(h)\hat{\Gamma}_{OLS}^{-1}(0)\hat{\Gamma}_{OLS}(h)\hat{\Gamma}_{OLS}^{-1}(0)\right)$
(4.1) $\displaystyle=$ $\displaystyle
T\hat{\gamma}^{OLS^{\prime}}_{m}\left(I_{m}\otimes\hat{\Gamma}_{OLS}^{-1}(0)\otimes\hat{\Gamma}_{OLS}^{-1}(0)\right)\hat{\gamma}^{OLS}_{m}.$
We also consider the Ljung-Box statistic (Ljung and Box (1978)) introduced in
the VAR framework by Hosking (1980)
$\displaystyle\tilde{Q}_{m}^{OLS}$ $\displaystyle=$ $\displaystyle
T^{2}\sum_{h=1}^{m}(T-h)^{-1}\mbox{tr}\left(\hat{\Gamma}_{OLS}^{\prime}(h)\hat{\Gamma}_{OLS}^{-1}(0)\hat{\Gamma}_{OLS}(h)\hat{\Gamma}_{OLS}^{-1}(0)\right).$
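Continuing the earlier illustrative sketch (OLS residuals `U_hat` and the autocovariance helper `gamma_hat`), the statistics $Q_{m}^{OLS}$ and $\tilde{Q}_{m}^{OLS}$ can be computed as follows (Python).

```python
import numpy as np

def portmanteau_ols(U, m):
    """Box-Pierce and Ljung-Box statistics of (4.1) from OLS residuals U (T x d)."""
    n = U.shape[0]
    G0_inv = np.linalg.inv(gamma_hat(U, 0))
    q_bp = q_lb = 0.0
    for h in range(1, m + 1):
        Gh = gamma_hat(U, h)
        term = np.trace(Gh.T @ G0_inv @ Gh @ G0_inv)
        q_bp += n * term
        q_lb += n ** 2 / (n - h) * term
    return q_bp, q_lb

Q_m, Q_m_tilde = portmanteau_ols(U_hat, m=5)
print(f"Box-Pierce Q_m = {Q_m:.3f}, Ljung-Box Q~_m = {Q_m_tilde:.3f}")
```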
The following result, a direct consequence of equation (3.5) in Proposition 1,
provides the asymptotic distribution of $Q_{m}^{OLS}$ and
$\tilde{Q}_{m}^{OLS}$.
###### Theorem 4.1
If model (1.1) is correct and Assumption A1 holds true, the statistics
$Q_{m}^{OLS}$ and $\tilde{Q}_{m}^{OLS}$ converge in law to
$U(\delta_{m}^{OLS})=\sum_{i=1}^{d^{2}m}\delta^{ols}_{i}U_{i}^{2},$ (4.2)
as $T\to\infty$, where
$\delta_{m}^{OLS}=(\delta^{ols}_{1},\dots,\delta^{ols}_{d^{2}m})^{\prime}$ is
the vector of the eigenvalues of the matrix
$\Delta_{m}^{OLS}=(I_{m}\otimes\Sigma_{G}^{-1/2}\otimes\Sigma_{G}^{-1/2})\Sigma^{OLS}(I_{m}\otimes\Sigma_{G}^{-1/2}\otimes\Sigma_{G}^{-1/2}),$
$\Sigma_{G}=\int_{0}^{1}\Sigma(r)dr$ and the $U_{i}$’s are independent
$\mathcal{N}(0,1)$ variables.
When the error process is homoscedastic i.i.d. and $m$ is large, it is well
known that the asymptotic distribution of the statistics $Q_{m}^{OLS}$ and
$\tilde{Q}_{m}^{OLS}$ under the null hypothesis $\mathcal{H}_{0}$ can be
approximated by a chi-square law with $d^{2}(m-p)$ degrees of freedom, see Box
and Pierce (1970). In our framework, even for large $m$, the limit
distribution in (4.2) can be very different from a chi-square law. The
following example illustrates this point.
###### Example 4.1
Consider the bivariate process in Example 3.1. Then
$\Delta_{m}^{OLS}=diag\\{0_{4\times 4},I_{m-1}\otimes\breve{\Delta}_{OLS}\\}$
with
$\breve{\Delta}_{OLS}=\left(\begin{array}[]{cccc}\frac{\int_{0}^{1}\Sigma_{1}(r)^{2}dr}{(\int_{0}^{1}\Sigma_{1}(r)dr)^{2}}&0&0&0\\\
0&\frac{\int_{0}^{1}\Sigma_{1}(r)\Sigma_{2}(r)dr}{\int_{0}^{1}\Sigma_{1}(r)dr\int_{0}^{1}\Sigma_{2}(r)dr}&0&0\\\
0&0&\frac{\int_{0}^{1}\Sigma_{1}(r)\Sigma_{2}(r)dr}{\int_{0}^{1}\Sigma_{2}(r)dr\int_{0}^{1}\Sigma_{1}(r)dr}&0\\\
0&0&0&\frac{\int_{0}^{1}\Sigma_{2}(r)^{2}dr}{(\int_{0}^{1}\Sigma_{2}(r)dr)^{2}}\\\
\end{array}\right).$
If we suppose that $\Sigma(r)$ is constant and $A_{0}=0_{2\times 2}$, we
obtain $\breve{\Delta}_{OLS}=I_{4}$, so that the asymptotic distribution of
$Q_{m}^{OLS}$ and $\tilde{Q}_{m}^{OLS}$ is $\chi^{2}(d^{2}(m-p))$ with $p=1$
and $d=2$. However it is easy to see that the $d^{2}(m-p)$ non zero diagonal
elements in $\Delta_{m}^{OLS}$ can be far from one if the error process is
heteroscedastic. By the Jensen inequality, the components
$\breve{\Delta}_{OLS}(1,1)$ and $\breve{\Delta}_{OLS}(4,4)$ are greater than
or equal to one. For illustration, in the right graphics of Figures 1 and 2 we
present the second diagonal element of $\breve{\Delta}_{OLS}$ when the
volatility function is as in (3.11)-(3.12).
Estimates of the weights which appear in (4.2) can be obtained as follows.
First, let us recall the following results proved by Patilea and Raïssi
(2010):
$\hat{\Sigma}_{G^{\otimes
2}}:=T^{-1}\sum_{t=2}^{T}\hat{u}_{t-1}\hat{u}_{t-1}^{\prime}\otimes\hat{u}_{t}\hat{u}_{t}^{\prime}=\Sigma_{G^{\otimes
2}}+o_{p}(1),$ (4.3)
$\hat{\Sigma}_{G}:=T^{-1}\sum_{t=1}^{T}\hat{u}_{t}\hat{u}_{t}^{\prime}=\Sigma_{G}+o_{p}(1),$
(4.4)
$\hat{\Lambda}_{2}:=T^{-1}\sum_{t=1}^{T}\tilde{X}_{t-1}\tilde{X}_{t-1}^{\prime}\otimes\hat{u}_{t}\hat{u}_{t}^{\prime}=\Lambda_{2}+o_{p}(1),$
(4.5)
and
$\hat{\Lambda}_{3}:=\hat{\Sigma}_{\tilde{X}}=\Lambda_{3}+o_{p}(1).$ (4.6)
Consistent estimators of $\Phi^{u}_{m}$ and $\Lambda^{u,\theta}_{m}$ given in (3.1) and (3.2) are easily obtained by replacing $A_{01},\dots,A_{0p}$ with their OLS estimators in $K$ and using (4.3) and (4.4). From this and equations (4.3) to (4.6), one can easily define a consistent estimator of
$\Delta_{m}^{OLS}$. Denote the estimated eigenvalues of $\Delta_{m}^{OLS}$ by
$\hat{\delta}_{m}^{OLS}=(\hat{\delta}_{1}^{ols},\dots,\hat{\delta}_{d^{2}m}^{ols})^{\prime}$.
We are now ready to introduce the OLS residuals-based corrected versions of
the Box-Pierce (resp. Ljung-Box) portmanteau tests for testing the order of
the VAR model (1.1). With the vector $\hat{\delta}_{m}^{OLS}$ at hand, the Box-Pierce (resp. Ljung-Box) procedure at the asymptotic level $\alpha$ consists in rejecting the null hypothesis (1.2) of uncorrelated innovations when $P(U_{OLS}(\hat{\delta}_{m}^{OLS})>Q_{m}^{OLS}\mid X_{1},\dots,X_{T})<\alpha$ (resp. $P(U_{OLS}(\hat{\delta}_{m}^{OLS})>\tilde{Q}_{m}^{OLS}\mid X_{1},\dots,X_{T})<\alpha$), that is, when the conditional $p$-value is smaller than $\alpha$. The $p$-values can be evaluated using the Imhof algorithm (Imhof, 1961) or the saddlepoint method, see e.g. Kuonen (1999).
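As a simple alternative to the Imhof or saddlepoint evaluations, the conditional $p$-value can also be approximated by direct simulation of the limit law (4.2); a minimal sketch, where `delta_hat` stands for the estimated eigenvalues $\hat{\delta}_{m}^{OLS}$ and `q_obs` for the observed statistic (illustrative names):

```python
import numpy as np

def weighted_chi2_pvalue(q_obs, delta_hat, n_draws=200_000, seed=0):
    """Monte Carlo approximation of P( sum_i delta_i U_i^2 > q_obs ),
    with U_1, ..., U_{d^2 m} independent N(0,1), i.e. the limit law (4.2)."""
    rng = np.random.default_rng(seed)
    delta_hat = np.asarray(delta_hat, dtype=float)
    draws = rng.chisquare(df=1, size=(n_draws, delta_hat.size)) @ delta_hat
    return float(np.mean(draws > q_obs))
```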
Let us end this section with some remarks on the particular case
$\Sigma(\cdot)=\sigma^{2}(\cdot)I_{d}$ (that includes the univariate AR(p)
models with time-varying variance). In this case
$\Delta_{m}^{OLS}=\left[\int_{0}^{1}\sigma^{2}(r)dr\right]^{-2}\Sigma^{OLS}=\left[\int_{0}^{1}\sigma^{2}(r)dr\right]^{-2}\left[\int_{0}^{1}\sigma^{4}(r)dr\right]\Sigma^{GLS}=:c_{\sigma}\Sigma^{GLS},$
(4.7)
and clearly, $c_{\sigma}\geq 1$. If in addition $p=0$, by Proposition 1 we
have $\Sigma^{GLS}=I_{d^{2}m}$ and hence
$\delta_{m}^{OLS}=c_{\sigma}(1,\cdots,1)^{\prime}$.
## 5 Adaptive portmanteau tests
An alternative way to build portmanteau tests for VAR(p) models with time-varying variance, which we consider herein, is to use approximations of the innovations $\epsilon_{t}$. A nonparametric estimate of the volatility function is needed
for building such approximations. For this purpose we generalize the approach
of Xu and Phillips (2008) to the multivariate case, see also Patilea and
Raïssi (2010). Let us denote by $A\odot B$ the Hadamard (entrywise) product of
two matrices of same dimension $A$ and $B$. Define the _symmetric_ matrix
$\check{\Sigma}_{t}^{0}=\sum_{i=1}^{T}w_{ti}\odot\hat{u}_{i}\hat{u}_{i}^{\prime},$
where, as before, the $\hat{u}_{i}$’s are the OLS residuals and the $(k,l)$-element, $k\leq l$, of the $d\times d$ matrix of weights $w_{ti}$ is given by
$w_{ti}(b_{kl})=\left(\sum_{i=1}^{T}K_{ti}(b_{kl})\right)^{-1}K_{ti}(b_{kl}),$
with $b_{kl}$ the bandwidth and
$K_{ti}(b_{kl})=\left\\{\begin{array}[]{c}K(\frac{t-i}{Tb_{kl}})\quad\mbox{if}\quad
t\neq i,\\\ 0\quad\mbox{if}\quad t=i.\\\ \end{array}\right.$
The kernel function $K(z)$ is bounded nonnegative and such that
$\int_{-\infty}^{\infty}K(z)dz=1$. For all $1\leq k\leq l\leq d$ the bandwidth
$b_{kl}$ belongs to a range $\mathcal{B}_{T}=[c_{min}b_{T},c_{max}b_{T}]$ with
$c_{min},c_{max}>0$ some constants and $b_{T}\downarrow 0$ at a suitable rate
that will be specified below.
When using the same bandwidth $b_{kl}\in\mathcal{B}_{T}$ for all the cells of $\check{\Sigma}_{t}^{0}$, since the $\hat{u}_{i}$, $i=1,...,T$, are almost surely linearly independent of each other, $\check{\Sigma}_{t}^{0}$ is almost surely positive definite provided $T$ is sufficiently large. When using several bandwidths $b_{kl}$, a regularization of $\check{\Sigma}_{t}^{0}$ could be necessary in order to ensure positive definiteness. Let us consider
$\check{\Sigma}_{t}=\left\\{\left(\check{\Sigma}_{t}^{0}\right)^{2}+\nu_{T}I_{d}\right\\}^{1/2}$
where $\nu_{T}>0$, $T\geq 1$, is a sequence of real numbers decreasing to zero
at a suitable rate that will be specified below. Our simulation experiments
indicate that in applications with moderate and large samples $\nu_{T}$ could even be set equal to 0.
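A minimal sketch of this nonparametric estimator, assuming a common bandwidth $b$ for all cells and a Gaussian kernel; the routine name `smooth_cov` and its arguments are illustrative:

```python
import numpy as np
from scipy.linalg import sqrtm

def smooth_cov(u_hat, b, nu=0.0):
    """Leave-one-out kernel estimate of the time-varying variance Sigma_t.

    u_hat : (T, d) array of OLS residuals.
    b     : common bandwidth (b_kl = b for all cells).
    nu    : regularization constant nu_T; the returned matrices are
            ((Sigma_t^0)^2 + nu * I)^{1/2}.
    Returns an array of shape (T, d, d).
    """
    T, d = u_hat.shape
    outer = np.einsum('ti,tj->tij', u_hat, u_hat)        # u_i u_i' for each i
    sigma = np.empty((T, d, d))
    idx = np.arange(1, T + 1, dtype=float)
    for t in range(1, T + 1):
        k = np.exp(-0.5 * ((t - idx) / (T * b)) ** 2)    # Gaussian K((t-i)/(Tb)); the
        k[t - 1] = 0.0                                   # normalizing constant cancels; K_tt = 0
        w = k / k.sum()
        s0 = np.einsum('i,ijk->jk', w, outer)            # Sigma_t^0
        sigma[t - 1] = np.real(sqrtm(s0 @ s0 + nu * np.eye(d)))
    return sigma
```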
In practice the bandwidths $b_{kl}$ can be chosen by minimization of a cross-
validation criterion like
$\sum_{t=1}^{T}\parallel\check{\Sigma}_{t}-\hat{u}_{t}\hat{u}_{t}^{\prime}\parallel^{2},$
with respect to all $b_{kl}\in\mathcal{B}_{T}$, $1\leq k\leq l\leq d$, where
$\parallel\cdot\parallel$ is some norm for a square matrix, for instance the Frobenius norm, i.e., the square root of the sum of the squared matrix elements. As in Patilea and Raïssi (2010), the theoretical results below are obtained uniformly with respect to the bandwidths $b_{kl}\in\mathcal{B}_{T}$, and this provides a justification for the common cross-validation bandwidth selection approach in the framework we consider.
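A sketch of this bandwidth choice with the Frobenius norm, reusing the `smooth_cov` routine of the previous sketch and a common bandwidth for all cells:

```python
import numpy as np

def cv_bandwidth(u_hat, grid):
    """Select a common bandwidth by minimizing the cross-validation criterion
    sum_t || Sigma_check_t - u_t u_t' ||_F^2 over a grid of candidate values;
    the leave-one-out property of smooth_cov makes this a genuine CV criterion."""
    outer = np.einsum('ti,tj->tij', u_hat, u_hat)
    def criterion(b):
        return np.sum((smooth_cov(u_hat, b) - outer) ** 2)
    scores = [criterion(b) for b in grid]
    return grid[int(np.argmin(scores))]

# e.g. b_hat = cv_bandwidth(u_hat, np.linspace(0.02, 0.30, 50))
```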
Let us now introduce the following adaptive least squares (ALS) estimator
$\hat{\theta}_{ALS}=\check{\Sigma}_{\tilde{\underline{X}}}^{-1}\mbox{vec}\>\left(\check{\Sigma}_{\underline{X}}\right),$
with
$\check{\Sigma}_{\tilde{\underline{X}}}=T^{-1}\sum_{t=1}^{T}\tilde{X}_{t-1}\tilde{X}_{t-1}^{\prime}\otimes\check{\Sigma}_{t}^{-1},\quad\mbox{and}\quad\check{\Sigma}_{\underline{X}}=T^{-1}\sum_{t=1}^{T}\check{\Sigma}_{t}^{-1}X_{t}\tilde{X}_{t-1}^{\prime}.$
The ALS residuals, proxies of the infeasible GLS residuals, are defined as
$\check{\epsilon}_{t}=\check{H}_{t}^{-1}X_{t}-\check{H}_{t}^{-1}(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})\hat{\theta}_{ALS},$ and the adaptive autocovariances and
autocorrelations
$\hat{\Gamma}_{ALS}(h)=\hat{\Gamma}_{ALS}^{\epsilon}(h)=T^{-1}\sum_{t=h+1}^{T}\check{\epsilon}_{t}\check{\epsilon}_{t-h}^{\prime},\quad\hat{R}_{ALS}(h)=\check{S}_{\epsilon}^{-1}\hat{\Gamma}_{ALS}(h)\check{S}_{\epsilon}^{-1},$
where $\check{S}_{\epsilon}=\mbox{Diag}\\{\check{\sigma}_{\epsilon}(1),\dots,$
$\check{\sigma}_{\epsilon}(d)\\}$,
$\check{\sigma}_{\epsilon}^{2}(i)=T^{-1}\sum_{t=1}^{T}\check{{\epsilon}}_{it}^{2}$,
and $\check{H}_{t}$ is the nonparametric estimator obtained from
$\check{\Sigma}_{t}$ and the identification condition on $H_{t}$ (see
Assumption A1(i)), that is $\check{H}_{t}=\check{\Sigma}_{t}^{1/2}$.
Let
$\hat{\gamma}_{m}^{ALS}=\mbox{vec}\\!\\{(\hat{\Gamma}_{ALS}(1),\dots,\hat{\Gamma}_{ALS}(m))\\}$.
Following the notation of the previous section, for a given integer $m\geq 1$,
define the residual autocorrelations
$\hat{\rho}_{a,m}^{ALS}=\mbox{vec}\left\\{\>\left(\hat{R}_{ALS}(1),\dots,\hat{R}_{ALS}(m)\right)\\!\right\\}\quad\mbox{and}\quad\hat{\rho}_{b,m}^{ALS}=\hat{\gamma}_{m}^{ALS}.$
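A sketch of the ALS estimation and of the ALS residuals given the kernel variance estimates $\check{\Sigma}_{t}$; the presample values are handled in the simplest way and all names are illustrative:

```python
import numpy as np
from scipy.linalg import sqrtm

def als_fit(X, p, sigma_check):
    """Adaptive least squares estimation of a VAR(p) with time-varying variance.

    X           : (T, d) array of observations.
    p           : autoregressive order.
    sigma_check : (T, d, d) array of kernel variance estimates Sigma_check_t.
    Returns theta_hat = vec(A_1, ..., A_p) and the ALS residuals
    (only t = p+1, ..., T are used, a simplification of the paper's convention).
    """
    T, d = X.shape
    A = np.zeros((d * d * p, d * d * p))
    b = np.zeros(d * d * p)
    for t in range(p, T):
        x_lags = np.concatenate([X[t - i] for i in range(1, p + 1)])  # X_tilde_{t-1}
        s_inv = np.linalg.inv(sigma_check[t])
        A += np.kron(np.outer(x_lags, x_lags), s_inv)                 # Sigma_check_{X_tilde}
        b += np.kron(x_lags, s_inv @ X[t])                            # vec(Sigma_check_X)
    theta_hat = np.linalg.solve(A, b)
    coef = theta_hat.reshape(d * p, d).T        # (d, d*p) matrix (A_1, ..., A_p)
    eps = np.empty((T - p, d))
    for t in range(p, T):
        x_lags = np.concatenate([X[t - i] for i in range(1, p + 1)])
        H_inv = np.linalg.inv(np.real(sqrtm(sigma_check[t])))         # H_check_t^{-1}
        eps[t - p] = H_inv @ (X[t] - coef @ x_lags)
    return theta_hat, eps
```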
The main result of this section shows that $\hat{\gamma}_{m}^{ALS}$ and
$\hat{\rho}_{a,m}^{ALS}$ are asymptotically equivalent to $\hat{\gamma}_{m}^{GLS}$
and $\hat{\rho}_{a,m}^{GLS}$. This will allow us to define new portmanteau
statistics based on the ALS residuals. For this purpose, we need the following
assumptions.
Assumption A1’: Suppose that all the conditions in Assumption A1(i) hold true.
In addition:
(i) $\inf_{r\in(0,1]}\lambda_{min}(\Sigma(r))>0$ where for any symmetric
matrix $A$ the real value $\lambda_{min}(A)$ denotes its smallest eigenvalue.
(ii) $\sup_{t}\|\epsilon_{kt}\|_{8}<\infty$ for all $k\in\\{1,...,d\\}$.
Assumption A2: (i) The kernel $K(\cdot)$ is a bounded density function defined
on the real line such that $K(\cdot)$ is nondecreasing on $(-\infty,0]$ and
decreasing on $[0,\infty)$ and $\int_{\mathbb{R}}v^{2}K(v)dv<\infty$. The
function $K(\cdot)$ is differentiable except at a finite number of points and the
derivative $K^{\prime}(\cdot)$ is an integrable function. Moreover, the
Fourier Transform $\mathcal{F}[K](\cdot)$ of $K(\cdot)$ satisfies
$\int_{\mathbb{R}}\left|s\mathcal{F}[K](s)\right|ds<\infty$.
(ii) The bandwidths $b_{kl}$, $1\leq k\leq l\leq d$, are taken in the range
$\mathcal{B}_{T}=[c_{min}b_{T},c_{max}b_{T}]$ with $0<c_{min}<c_{max}<\infty$
and $b_{T}+1/Tb_{T}^{2+\gamma}\rightarrow 0$ as $T\rightarrow\infty$, for some
$\gamma>0$.
(iii) The sequence $\nu_{T}$ is such that $T\nu_{T}^{2}\rightarrow 0.$
Below, we say that a sequence of random matrices $A_{T}$, $T\geq 1$ is
$o_{p}(1)$ uniformly with respect to (w.r.t.) $b_{kl}\in\mathcal{B}_{T}$ as
$T\rightarrow\infty$ if $\sup_{1\leq k\leq l\leq
d}\sup_{b_{kl}\in\mathcal{B}_{T}}\|\mbox{vec}\>\left(A_{T}\right)\|\stackrel{{\scriptstyle
P}}{{\longrightarrow}}0$. The following proposition gives the asymptotic
behavior of the variance, autocovariance and autocorrelation estimators based on the ALS estimator of $\theta_{0}$ and the nonparametric estimate of the time-varying variance structure $\Sigma_{t}$. The results hold uniformly w.r.t. the bandwidths.
###### Proposition 3
If model (1.1) is correct and Assumptions A1’ and A2 hold, uniformly w.r.t.
$b\in\mathcal{B}_{T}$
$T^{-1}\sum_{t=1}^{T}\check{H}_{t}^{\prime}\otimes\check{H}_{t}^{-1}=\int_{0}^{1}G(r)^{\prime}\otimes
G(r)^{-1}dr+o_{p}(1),$ (5.1)
$\check{\Sigma}_{\tilde{\underline{X}}}=\Lambda_{1}+o_{p}(1).$ (5.2)
Moreover, given any $m\geq 1$,
$T^{\frac{1}{2}}\left\\{\hat{\gamma}^{ALS}_{m}-\hat{\gamma}^{GLS}_{m}\right\\}=o_{p}(1)\quad\text{and}\quad
T^{\frac{1}{2}}\left\\{\hat{\rho}^{ALS}_{m}-\hat{\rho}^{GLS}_{m}\right\\}=o_{p}(1),$
(5.3)
where $\hat{\rho}^{ALS}_{m}$ (resp. $\hat{\rho}^{GLS}_{m}$) stands for any of
$\hat{\rho}^{ALS}_{a,m}$ and $\hat{\rho}^{ALS}_{b,m}$ (resp.
$\hat{\rho}^{GLS}_{a,m}$ and $\hat{\rho}^{GLS}_{b,m}$).
This asymptotic equivalence result allows us to propose portmanteau test
statistics adapted to the case of time-varying variance. Consider the Box-
Pierce type statistic
$\displaystyle Q_{a,m}^{ALS}$ $\displaystyle=$ $\displaystyle
T\sum_{h=1}^{m}\mbox{tr}\left(\hat{\Gamma}_{ALS}^{\prime}(h)\hat{\Gamma}_{ALS}^{-1}(0)\hat{\Gamma}_{ALS}(h)\hat{\Gamma}_{ALS}^{-1}(0)\right)$
$\displaystyle=$ $\displaystyle
T\hat{\gamma}^{ALS^{\prime}}_{m}\left(I_{m}\otimes\hat{\Gamma}_{ALS}^{-1}(0)\otimes\hat{\Gamma}_{ALS}^{-1}(0)\right)\hat{\gamma}^{ALS}_{m},$
and
$Q_{b,m}^{ALS}=T\hat{\rho}^{ALS^{\prime}}_{b,m}\hat{\rho}^{ALS}_{b,m}.$
Consider also the Ljung-Box type statistics
$\tilde{Q}_{a,m}^{ALS}=T^{2}\sum_{h=1}^{m}(T-h)^{-1}\mbox{tr}\left(\hat{\Gamma}_{ALS}^{\prime}(h)\hat{\Gamma}_{ALS}^{-1}(0)\hat{\Gamma}_{ALS}(h)\hat{\Gamma}_{ALS}^{-1}(0)\right)$
and
$\tilde{Q}_{b,m}^{ALS}=T^{2}\sum_{h=1}^{m}(T-h)^{-1}\mbox{tr}\left(\hat{\Gamma}_{ALS}^{\prime}(h)\hat{\Gamma}_{ALS}(h)\right).$
The following theorem is a direct consequence of (3.7) and Proposition 3 and
hence the proof is omitted.
###### Theorem 5.1
Under the assumptions of Proposition 3, the statistics $Q_{a,m}^{ALS}$,
$Q_{b,m}^{ALS}$ and $\tilde{Q}_{a,m}^{ALS}$, $\tilde{Q}_{b,m}^{ALS}$ converge
in distribution to
$U(\delta_{m}^{ALS})=\sum_{i=1}^{d^{2}m}\delta^{als}_{i}U_{i}^{2},$ (5.4)
as $T\to\infty$, where
$\delta_{m}^{ALS}=(\delta^{als}_{1},\dots,\delta^{als}_{d^{2}m})^{\prime}$ is
the vector of the eigenvalues of $\Sigma^{GLS},$ and the $U_{i}$’s are
independent $\mathcal{N}(0,1)$ variables.
To compute the critical values of the adaptive portmanteau tests, we first
obtain a consistent estimator of $\Lambda^{\epsilon,\theta}_{m}$ given in
(3.3) by replacing $A_{01},\dots,A_{0p}$ by their ALS estimators in $K$ and
using (5.1). Next we consider the estimate of $\Lambda_{1}$ given in (5.2).
Plugging these estimates into the formula (3.8), we obtain a consistent
estimator of $\Sigma^{GLS}$ with eigenvalues
$\hat{\delta}_{m}^{ALS}=(\hat{\delta}_{1}^{als},\dots,\hat{\delta}_{d^{2}m}^{als})^{\prime}$
that consistently estimate $\delta_{m}^{ALS}$.
There are several important particular cases that could be mentioned. In the
case of a VAR(0) model (i.e., the process $(u_{t})$ is observed),
$\Sigma^{GLS}=I_{d^{2}m}$ (see Proposition 1) and hence the asymptotic
distribution of the four test statistics in Theorem 5.1 would be
$\chi^{2}_{d^{2}m}$, that is, independent of the variance structure given by
$\Sigma(\cdot)$. In the general case $p\geq 1$ where the autoregressive
coefficients $A_{0i}$, $i=1,...,p$ have to be estimated, the matrix
$I_{d^{2}m}-\Sigma^{GLS}$ being positive semi-definite, the eigenvalues
$\delta_{1}^{als},...,\delta_{d^{2}m}^{als}$ are smaller than or equal to 1. Since, in some sense, the unconditional heteroscedasticity is removed in the ALS residuals, one could expect the $\chi^{2}_{d^{2}(m-p)}$ asymptotic approximation to be reasonably accurate for the ALS tests. Example 3.1 indicates that this may not be the case: the asymptotic distribution we obtain for the ALS portmanteau statistics can be very different from the $\chi^{2}_{d^{2}(m-p)}$ approximation when the errors are heteroscedastic.
Finally, note that Patilea and Raïssi (2010) pointed out that using the adaptive estimators of the autoregressive parameters instead of the OLS estimators leads to a gain of efficiency, so that it is advisable to compute the kernel smoothing estimator of the variance function $\Sigma(\cdot)$ at the estimation stage. Since the kernel estimator of the variance $\Sigma_{t}$ is then already available for the validation stage, the ALS tests are no more complicated to implement than the OLS tests.
Let us also point out that the eigenvalues
$\delta_{1}^{als},...,\delta_{d^{2}m}^{als}$ will not depend on the variance
structure when $\Sigma(\cdot)=\sigma^{2}(\cdot)I_{d}$ (in particular in the
univariate case), whatever the value of $p$ is. Moreover, using the arguments
of Box and Pierce (1970), see also Brockwell and Davis (1991, pp. 310–311),
one can easily show that for large values of $m$, the law of
$U(\delta_{m}^{ALS})$ is accurately approximated by a $\chi^{2}_{d^{2}(m-p)}$
distribution. However, in the general multivariate setup the asymptotic distribution in (5.4) depends on the variance function $\Sigma(\cdot)$.
## 6 Modified portmanteau statistics with standard chi-square asymptotic
distributions
In the previous sections we considered portmanteau tests for which the
asymptotic critical values are given by weighted sums of chi-squares in the
general VAR(p) case. Using a suitable change of our quadratic forms one can
propose alternative portmanteau test statistics with chi-squared asymptotic
distributions under the null hypothesis. This type of modification was already
proposed in the recent time series literature but in different contexts.
First note that, as remarked above, when testing that the observed process is uncorrelated ($p=0$) and using the standard portmanteau statistic (4.1), we obtain a non standard asymptotic distribution. Then, following the approach of Lobato, Nankervis and Savin (2002), we consider the modified portmanteau test statistic
$\underline{Q}_{m}^{OLS}=T\hat{\gamma}^{OLS^{\prime}}_{m}\left(\hat{\Lambda}_{m}^{u,u}\right)^{-1}\hat{\gamma}^{OLS}_{m},$
where $\hat{\Lambda}_{m}^{u,u}=I_{m}\otimes\hat{\Sigma}_{G^{\otimes 2}}$ with
$\hat{\Sigma}_{G^{\otimes 2}}$ defined in equation (4.3). The invertibility of
$\hat{\Lambda}_{m}^{u,u}$ is guaranteed asymptotically by our assumptions. In
view of Proposition 1 it is clear that under the null hypothesis of
uncorrelated observed process, the asymptotic distribution of the
$\underline{Q}_{m}^{OLS}$ statistic is $\chi^{2}_{d^{2}m}$. Recall that this
kind of statistic correction is not necessary to obtain a standard asymptotic
distribution for the adaptive portmanteau tests when the non correlation of
the observed process is tested.
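A sketch of this modified statistic for an observed (or OLS residual) series stored in a $T\times d$ array `u`; the block-diagonal structure of $\hat{\Lambda}_{m}^{u,u}=I_{m}\otimes\hat{\Sigma}_{G^{\otimes 2}}$ is exploited (illustrative names):

```python
import numpy as np

def modified_Q_ols(u, m):
    """Lobato-Nankervis-Savin type modification of the Box-Pierce statistic,
    underline{Q}_m^{OLS} = T gamma_m' (I_m kron Sigma_hat_{GxG})^{-1} gamma_m.
    Under H0 (p = 0, uncorrelated observations) it is asymptotically
    chi-square with d^2 m degrees of freedom."""
    T, d = u.shape
    # gamma_m = (vec Gamma(1)', ..., vec Gamma(m)')', column-stacking vec
    gam = []
    for h in range(1, m + 1):
        Gh = u[h:].T @ u[:T - h] / T
        gam.append(Gh.flatten(order='F'))
    gam = np.concatenate(gam)
    # Sigma_hat_{G kron G} = T^{-1} sum_{t=2}^T u_{t-1} u_{t-1}' kron u_t u_t'
    S = np.zeros((d * d, d * d))
    for t in range(1, T):
        S += np.kron(np.outer(u[t - 1], u[t - 1]), np.outer(u[t], u[t]))
    S /= T
    block_inv = np.linalg.inv(S)
    Q = 0.0
    for h in range(m):
        g = gam[h * d * d:(h + 1) * d * d]
        Q += T * g @ block_inv @ g
    return Q
```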
This approach can be generalized to the case of VAR($p$) models with possibly
$p>0$ using the approach of Katayama (2008) for building tests with standard
asymptotic distributions. In this part we take $p<m<T$. Let us introduce
$D_{m}^{OLS}=\Phi_{m}\left\\{\Phi_{m}^{\prime}\left(\Lambda_{m}^{u,u}\right)^{-1}\Phi_{m}\right\\}^{-1}\Phi_{m}^{\prime}\left(\Lambda_{m}^{u,u}\right)^{-1}$
$D_{m}^{GLS}=\Lambda_{m}^{\epsilon,\theta}\left\\{\Lambda_{m}^{\epsilon,\theta^{\prime}}\Lambda_{m}^{\epsilon,\theta}\right\\}^{-1}\Lambda_{m}^{\epsilon,\theta^{\prime}}$
so that $(I_{d^{2}m}-D_{m}^{OLS})\Phi_{m}=0$ and
$(I_{d^{2}m}-D_{m}^{GLS})\Lambda_{m}^{\epsilon,\theta}=0$. From the proof of
Proposition 1 (equation (12.18)), it is easy to see that
$(I_{d^{2}m}-D_{m}^{OLS})T^{\frac{1}{2}}\hat{\gamma}_{m}^{OLS}=(I_{d^{2}m}-D_{m}^{OLS})T^{\frac{1}{2}}c_{m}^{u}+o_{p}(1)$
where $T^{1/2}c_{m}^{u}$ is asymptotically normal with mean 0 and variance $\Lambda_{m}^{u,u}$. We deduce that
$(I_{d^{2}m}-D_{m}^{OLS})T^{\frac{1}{2}}\hat{\gamma}_{m}^{OLS}\Rightarrow\mathcal{N}(0,V),$
where
$V=(I_{d^{2}m}-D_{m}^{OLS})\Lambda_{m}^{u,u}(I_{d^{2}m}-D_{m}^{OLS})^{\prime}$.
Now, notice that
$(I_{d^{2}m}-D_{m}^{OLS})\Lambda_{m}^{u,u}=\Lambda_{m}^{u,u}-\Phi_{m}\left\\{\Phi_{m}^{\prime}\left(\Lambda_{m}^{u,u}\right)^{-1}\Phi_{m}\right\\}^{-1}\Phi_{m}^{\prime}=\Lambda_{m}^{u,u}(I_{d^{2}m}-D_{m}^{OLS})^{\prime}.$
From this and the fact that $I_{d^{2}m}-D_{m}^{OLS}$ is a projector, we deduce
that the matrix $AV$ is idempotent, where
$A=(I_{d^{2}m}-D_{m}^{OLS})^{\prime}(\Lambda_{m}^{u,u})^{-1}(I_{d^{2}m}-D_{m}^{OLS}).$
Moreover, since $\Phi_{m}$ is of full column rank $d^{2}p$, it is easy to see
that the rank of $A$ is $d^{2}(m-p)$. A classical result in multivariate data
analysis implies
$T\hat{\gamma}_{m}^{OLS^{\prime}}(I_{d^{2}m}-D_{m}^{OLS})^{\prime}(\Lambda_{m}^{u,u})^{-1}(I_{d^{2}m}-D_{m}^{OLS})\hat{\gamma}_{m}^{OLS}\Rightarrow\chi_{d^{2}(m-p)}^{2}.$
(6.1)
Similarly we obtain
$(I_{d^{2}m}-D_{m}^{GLS})T^{\frac{1}{2}}\hat{\gamma}_{m}^{GLS}\Rightarrow\mathcal{N}(0,I_{d^{2}m}-D_{m}^{GLS})$
(6.2)
and we deduce
$T\hat{\gamma}_{m}^{GLS^{\prime}}(I_{d^{2}m}-D_{m}^{GLS})\hat{\gamma}_{m}^{GLS}\Rightarrow\chi_{d^{2}(m-p)}^{2}.$
(6.3)
The matrices $D_{m}^{OLS}$ and $\Lambda_{m}^{u,u}$ could be estimated as
suggested in Section 4, see comments after equation (4.6), and hence a
modified portmanteau test statistic based on the OLS estimates
$\hat{\gamma}_{m}^{OLS}$ and having standard chi-square critical values could
be derived from equation (6.1). On the other hand, using a nonparametric
estimate of $H_{t}$ one could easily estimate $D_{m}^{GLS}$, see Proposition 3
and the comments after Theorem 5.1. Moreover, Proposition 3 allows us to
replace $\hat{\gamma}_{m}^{GLS}$ with $\hat{\gamma}_{m}^{ALS}$ and thus to
introduce an adaptive portmanteau test with a modified statistic and standard
chi-square critical values based on equation (6.3). Clearly one can consider
similar modification for Ljung-Box type statistics.
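Given estimates of $\Phi_{m}$ and $\Lambda_{m}^{u,u}$ and the vector $\hat{\gamma}_{m}^{OLS}$, the statistic in (6.1) can then be evaluated as in the following sketch (all inputs are assumed to be precomputed; names are illustrative):

```python
import numpy as np

def projected_Q(gamma_m, Phi_m, Lambda_uu, T):
    """Katayama-type corrected statistic of equation (6.1):
    T * gamma_m' (I - D)' Lambda_uu^{-1} (I - D) gamma_m,  with
    D = Phi_m {Phi_m' Lambda_uu^{-1} Phi_m}^{-1} Phi_m' Lambda_uu^{-1}.

    gamma_m   : (d^2 m,) vector of stacked residual autocovariances,
    Phi_m     : (d^2 m, d^2 p) estimated matrix,
    Lambda_uu : (d^2 m, d^2 m) estimated matrix,
    T         : sample size.
    Under H0 the statistic is asymptotically chi-square with d^2 (m - p)
    degrees of freedom."""
    L_inv = np.linalg.inv(Lambda_uu)
    D = Phi_m @ np.linalg.inv(Phi_m.T @ L_inv @ Phi_m) @ Phi_m.T @ L_inv
    r = (np.eye(D.shape[0]) - D) @ gamma_m
    return float(T * r @ L_inv @ r)
```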
The chi-square critical values are certainly more convenient for portmanteau
tests. Moreover, in Section 7 we provide evidence that the test based on the statistic (6.1) could be more powerful, in the Bahadur slope sense, than the test based on the $Q_{m}^{OLS}$ statistic investigated in Theorem 4.1. However, it is not necessarily true that the modified procedures presented in this section are preferable in applications. Indeed, the empirical evidence presented in Section 9 shows that test statistics like those in (6.1) and (6.3) are unstable and lead to poorly controlled levels even with series of a few hundred observations.
## 7 Testing for autocorrelation in heteroscedastic series: some theoretical
power comparisons
In this part we carry out some theoretical power comparisons for the tests we
considered above in the important case where the non correlation of the
observed process $X_{t}=u_{t}$ is tested. The case $p\geq 1$ will be
considered elsewhere. On the one hand, we compare the classical Box-Pierce portmanteau test with the test based on the modified quadratic form of the OLS residual autocorrelations introduced in Section 6. On the other hand, we compare portmanteau tests based on ALS and on OLS residual autocorrelations. For this purpose we use the
Bahadur slope approach that we briefly recall here. Let $Q_{A}$ denote a test
statistic and, for any $x>0$, define $q_{A}(x)=-\log P_{0}(Q_{A}>x)$ where
$P_{0}$ stands for the limit distribution of $Q_{A}$ under the null
hypothesis. Following Bahadur (1960) (see also van der Vaart 1998, chapter
14), consider the _(asymptotic) slope_
$c_{A}(\varrho)=2\lim_{T\rightarrow\infty}T^{-1}q_{A}(Q_{A})$ under a _fixed_
alternative $\mathcal{H}_{1}$ such that the limit exists in probability. The
asymptotic relative efficiency of the test based on $Q_{A}$ with respect to a
competing test based on a test statistic $Q_{B}$ is then defined as the ratio
$ARE_{A,B}(\varrho)=c_{A}(\varrho)/c_{B}(\varrho)$. A relative efficiency
$ARE_{A,B}(\varrho)\geq 1$ suggests that the test given by $Q_{A}$ is better suited to detect $\mathcal{H}_{1}$, because the associated $p$-values decrease at least as fast as the $p$-values of the test based on $Q_{B}$.
For the sake of simplicity we restrict our attention to the BP statistics and
consider the case where one tests the non correlation of the observed process,
while the underlying true process is the autoregressive process of order 1
$u_{t}=Bu_{t-1}+\tilde{H}_{t}\epsilon_{t},$ (7.1)
where $\det(I_{d}-Bz)\neq 0$ for all $|z|\leq 1$ and $B\neq 0$. We keep the
notation $E(X_{t}X_{t}^{\prime})=E(u_{t}u_{t}^{\prime})=\Sigma_{t}$ and we
introduce
$E(\tilde{H}_{t}\epsilon_{t}\epsilon_{t}^{\prime}\tilde{H}_{t}^{\prime})=\tilde{H}_{t}\tilde{H}_{t}^{\prime}:=\tilde{\Sigma}_{t}$.
Under the alternative hypothesis (7.1) we have the relationship
$\Sigma_{t}=\sum_{i=0}^{\infty}B^{i}\tilde{\Sigma}_{t-i}B^{i^{\prime}}.$ (7.2)
Using arguments similar to those of the proofs of Lemmas 12.1 to 12.3 in the Appendix, we deduce that
$\hat{\Gamma}_{OLS}(h)=T^{-1}\sum_{t=h+1}^{T}u_{t}u_{t-h}^{\prime}=B^{h}\int_{0}^{1}\Sigma(r)dr+o_{p}(1)$
(7.3)
and
$\hat{\Gamma}_{GLS}(h)=T^{-1}\sum_{t=h+1}^{T}H_{t}^{-1}u_{t}u_{t-h}^{\prime}H_{t-h}^{-1^{\prime}}=\int_{0}^{1}G(r)^{-1}B^{h}G(r)dr+o_{p}(1).$
(7.4)
Using basic properties of the vec($\cdot$) operator and the Kronecker product
we obtain
$T^{-1}Q_{m}^{OLS}=\mathcal{B}^{\prime}\left\\{I_{m}\otimes\int_{0}^{1}\Sigma(r)dr\otimes\left(\int_{0}^{1}\Sigma(r)dr\right)^{-1}\right\\}\mathcal{B}+o_{p}(1)$
$T^{-1}\underline{Q}_{m}^{OLS}=\mathcal{B}^{\prime}\left\\{I_{m}\otimes\left(\int_{0}^{1}\Sigma(r)dr\otimes I_{d}\right)\left(\int_{0}^{1}\Sigma(r)\otimes\Sigma(r)dr\right)^{-1}\left(\int_{0}^{1}\Sigma(r)dr\otimes I_{d}\right)\right\\}\mathcal{B}+o_{p}(1)$
and
$T^{-1}Q_{i,m}^{ALS}=\mathcal{B}^{\prime}\left\\{I_{m}\otimes\left(\int_{0}^{1}G(r)^{\prime}\otimes
G(r)^{-1}dr\right)^{2}\right\\}\mathcal{B}+o_{p}(1),$
with $i\in\\{a,b\\}$ and
$\mathcal{B}=\mbox{vec}\left\\{(B^{1},\dots,B^{m})\right\\}$.
###### Proposition 4
(i) If Assumption A1 holds true and the observations follow the model (7.1), the asymptotic relative efficiency of the portmanteau test based on $\underline{Q}_{m}^{OLS}$ with respect to the portmanteau test based on $Q_{m}^{OLS}$ is larger than or equal to 1.
(ii) Suppose that $\Sigma(\cdot)=\sigma^{2}(\cdot)I_{d}$ where $\sigma(\cdot)$ is some positive scalar function. Suppose that Assumptions A1’ and A2 hold true and the observations follow the model (7.1). Then the asymptotic relative efficiencies of the portmanteau test based on $Q_{m}^{ALS}$ with respect to the portmanteau tests based on $Q_{m}^{OLS}$ or $\underline{Q}_{m}^{OLS}$ are larger than or equal to 1.
In the first part of Proposition 4 the result is obtained without additional
restriction on $\Sigma(\cdot)$ while in the second part we impose
$\Sigma(\cdot)=\sigma^{2}(\cdot)I_{d}$ which is for instance true in the
univariate case ($d=1$). In the general multivariate case the portmanteau test
based on ALS residual autocorrelations does not necessarily outperform, in
the sense of the Bahadur slope, the tests based on the OLS residual
autocorrelations considered above.
## 8 Extending the scope: testing the order of a heteroscedastic co-
integration model
Consider the case of a unit root multivariate process $(X_{t})$ with time-
varying volatility, see for instance Cavaliere, Rahbek and Taylor (2010) or
Boswijk (2010). With $Z_{0t}:=X_{t}-X_{t-1}$ the model (1.1) can be rewritten
in its error correction form
$\displaystyle Z_{0t}=\Pi_{01}X_{t-1}+\sum_{i=2}^{p}\Pi_{0i}Z_{0t-i+1}+u_{t}$
(8.1) $\displaystyle u_{t}=H_{t}\epsilon_{t}.$
The matrices $\Pi_{0i}$ are functions of the $A_{0i}$’s and are such that the
assumptions of the Granger representation theorem hold (see for instance
Assumption 1 of Cavaliere, Rahbek and Taylor (2010)),
$\Pi_{01}=\alpha_{0}\beta_{0}^{\prime}$ where the $d\times s$-dimensional
matrices $\alpha_{0}$ and $\beta_{0}$ are identified in some appropriate way
(see e.g. Johansen 1995, p. 72, for the identification problem). If $p=1$ the
sum in (8.1) vanishes. In this section we follow Cavaliere, Rahbek and Taylor
(2010) and we slightly strengthen A1 by assuming that $(\epsilon_{t})$ is i.i.d. Then it follows from their Lemma 1 that $(X_{t})$ has a random walk behavior
and also that $(\beta_{0}^{\prime}X_{t})$ is stable. By analogy with the
homoscedastic case, the number $s$ of independent linear stable combinations
in $(\beta_{0}^{\prime}X_{t})$ is the cointegrating rank (see section 2.3 of
Cavaliere, Rahbek and Taylor 2010, for a detailed discussion on the concept of
cointegration in our framework). If $s=0$ the process $(X_{t})$ is not
cointegrated and the procedures described in the previous sections apply
directly to the process $(Z_{0t})$. Many contributions in the literature that
considered the standard homoscedastic framework pointed out that the choice of
the lag length is important for the cointegrating rank analysis, see e.g. Boswijk and Franses (1992, section 4). It seems reasonable to expect that a similar remark remains true with a time-varying variance.
To describe the estimation procedure of (8.1), let us define
$Z_{1t}(\beta)=(X_{t-1}^{\prime}\beta,Z_{0t-1}^{\prime},$
$\dots,Z_{0t-p+1}^{\prime})^{\prime}$ for any $d\times s$-matrix $\beta$ and
rewrite model (8.1) under the form
$Z_{0t}=(Z_{1t}(\beta_{0})^{\prime}\otimes I_{d})\vartheta_{0}+u_{t},$
where
$\vartheta_{0}=\mbox{vec}\left\\{\left(\alpha_{0},\Pi_{02},\dots,\Pi_{0p}\right)\right\\}$.
The estimator of the long run relationships $\hat{\beta}$ can be obtained
using the reduced rank regression method introduced by Anderson (1951).
Cavaliere et al. (2010) showed that in our framework
$T(\hat{\beta}-\beta)=O_{p}(1).$ (8.2)
Now, let us define
$\hat{\vartheta}_{OLS}(\beta)=\hat{\Sigma}_{Z_{1}}^{-1}(\beta)\mbox{vec}\>\left(\hat{\Sigma}_{Z_{0}}(\beta)\right),$
where
$\hat{\Sigma}_{Z_{1}}(\beta)=T^{-1}\sum_{t=1}^{T}Z_{1t}(\beta)Z_{1t}(\beta)^{\prime}\otimes
I_{d}\quad\mbox{and}\quad\hat{\Sigma}_{Z_{0}}(\beta)=T^{-1}\sum_{t=1}^{T}Z_{0t}Z_{1t}(\beta)^{\prime},$
and similarly to (2.2) let us introduce
$\hat{\vartheta}_{GLS}(\beta)=\hat{\Sigma}_{\underline{Z}_{1}}(\beta)^{-1}\mbox{vec}\>\left(\hat{\Sigma}_{\underline{Z}_{0}}(\beta)\right),$
with
$\hat{\Sigma}_{\underline{Z}_{1}}(\beta)=T^{-1}\sum_{t=1}^{T}Z_{1t}(\beta)Z_{1t}(\beta)^{\prime}\otimes\Sigma_{t}^{-1},\quad\hat{\Sigma}_{\underline{Z}_{0}}(\beta)=T^{-1}\sum_{t=1}^{T}\Sigma_{t}^{-1}Z_{0t}Z_{1t}(\beta)^{\prime},$
where the volatility $\Sigma_{t}$ is assumed known. Next, for any fixed
$\beta$, let us define the estimated residuals
$\hat{u}_{t}(\beta)=Z_{0t}-(Z_{1t}(\beta)^{\prime}\otimes
I_{d})\hat{\vartheta}_{OLS}(\beta),$
$\hat{\epsilon}_{t}(\beta)=H_{t}^{-1}Z_{0t}-H_{t}^{-1}(Z_{1t}(\beta)^{\prime}\otimes
I_{d})\hat{\vartheta}_{GLS}(\beta)$
and the corresponding estimated autocovariance matrices
$\hat{\Gamma}_{OLS}(h,\beta)=T^{-1}\sum_{t=h+1}^{T}\hat{u}_{t}(\beta)\hat{u}_{t-h}(\beta)^{\prime}\quad\mbox{and}\quad\hat{\Gamma}_{GLS}(h,\beta)=T^{-1}\sum_{t=h+1}^{T}\hat{\epsilon}_{t}(\beta)\hat{\epsilon}_{t-h}(\beta)^{\prime}.$
From (8.2) we obviously have
$\hat{\Gamma}_{OLS}(h,\hat{\beta})=\hat{\Gamma}_{OLS}^{u}(h,\beta_{0})+o_{p}(T^{-\frac{1}{2}})\quad\mbox{and}\quad\hat{\Gamma}_{GLS}(h,\hat{\beta})=\hat{\Gamma}_{GLS}^{u}(h,\beta_{0})+o_{p}(T^{-\frac{1}{2}}).$
Defining
$\check{\Sigma}_{t}^{0}(\beta)=\sum_{i=1}^{T}w_{ti}\odot\hat{u}_{i}(\beta)\hat{u}_{i}(\beta)^{\prime},$
it is also obvious that
$\check{\Sigma}_{t}^{0}(\hat{\beta})=\check{\Sigma}_{t}^{0}(\beta_{0})+o_{p}(T^{-\frac{1}{2}}).$
It is clear now that one can treat $\beta_{0}$ as known and, following the
lines of the previous sections, one can use
$\hat{\Gamma}_{OLS}(h,\hat{\beta})$ and the ALS version of
$\hat{\Gamma}_{GLS}(h,\hat{\beta})$ to build portmanteau tests for checking
the order $p$ of the model (8.1).
## 9 Monte Carlo experiments
In the sequel $LB_{m}^{OLS}$ and $LB_{m}^{ALS}$ will denote the Ljung-Box type portmanteau tests based on the OLS and on the adaptive approach, with the non standard asymptotic distributions (4.2) and (5.4) respectively. For the sake of brevity, only the results
obtained with the test statistic $\tilde{Q}_{a,m}^{ALS}$ will be reported.
Moreover, since we found similar results for the $BP$ and $LB$ tests, we only
report on $LB$ tests. The $LB$ tests based on modified statistics which are
built using the results in Section 6 are denoted by $\widetilde{LB}_{m}^{OLS}$
and $\widetilde{LB}_{m}^{ALS}$. If we assume that the volatility function is
known, one can also build portmanteau tests using the result in (3.14) in a
similar way to the adaptive portmanteau tests. These infeasible tests denoted
by $LB_{m}^{GLS}$ and $\widetilde{LB}_{m}^{GLS}$ will serve as a benchmark for
comparison with the ALS tests $LB_{m}^{ALS}$ and $\widetilde{LB}_{m}^{ALS}$.
It is clear that the asymptotic critical values of the ALS tests are the same
as the critical values of the GLS tests. In this section we investigate by
simulations the finite sample properties of the ALS and GLS portmanteau tests
and we compare them with the OLS estimation-based tests. In the next section
we study the model adequacy of two real data sets: the US energy and
transportation price indexes for all urban consumers on one hand and the US
balances on merchandise trade and on services on the other hand.
### 9.1 Empirical size
Our Monte Carlo experiments are based on the following Data Generating Process
(DGP) specification
$\left(\begin{array}[]{c}X_{1t}\\\ X_{2t}\\\
\end{array}\right)=\left(\begin{array}[]{cc}0.3&-0.3\\\ 0&-0.1\\\
\end{array}\right)\left(\begin{array}[]{c}X_{1t-1}\\\ X_{2t-1}\\\
\end{array}\right)+\left(\begin{array}[]{cc}\mathfrak{a}&0\\\
0&\mathfrak{a}\\\ \end{array}\right)\left(\begin{array}[]{c}X_{1t-2}\\\
X_{2t-2}\\\ \end{array}\right)+\left(\begin{array}[]{c}u_{1t}\\\ u_{2t}\\\
\end{array}\right),$ (9.1)
where $\mathfrak{a}=0$ in the empirical size part of the study and
$\mathfrak{a}=-0.3$ in the empirical power part. The autoregressive parameters
are such that the stability condition holds and are inspired by the ALS
estimation obtained for the U.S. balance on services and merchandise trade
data (see Table 5 below). In the case of smooth variance structure we consider
$\Sigma(r)=\left(\begin{array}[]{cc}(1+\pi_{1}r)(1+\varpi^{2})&\varpi(1+\pi_{1}r)^{\frac{1}{2}}(0.1+\pi_{2}r)^{\frac{1}{2}}\\\
\varpi(1+\pi_{1}r)^{\frac{1}{2}}(0.1+\pi_{2}r)^{\frac{1}{2}}&(0.1+\pi_{2}r)\\\
\end{array}\right),$ (9.2)
where we take $\pi_{1}=250$ and $\pi_{2}=5$. In order to investigate the
properties of the tests when a volatility break is present we also consider
the following specification
$\Sigma(r)=\left(\begin{array}[]{cc}(6+f_{1}(r))(1+\varpi^{2})&\varpi(6+f_{1}(r))^{\frac{1}{2}}(0.5+f_{2}(r))^{\frac{1}{2}}\\\
\varpi(0.5+f_{2}(r))^{\frac{1}{2}}(6+f_{1}(r))^{\frac{1}{2}}&(0.5+f_{2}(r))(1+\varpi^{2})\\\
\end{array}\right),$ (9.3)
with $f_{1}(r)=54\times\mathbf{1}_{(r\geq 1/2)}(r)$ and
$f_{2}(r)=3\times\mathbf{1}_{(r\geq 1/2)}(r)$. In this case we have a common
volatility break at the date $t=T/2$. In all experiments we fix $\varpi=0.2$.
These volatility specifications are inspired by the real data studies we
consider in the next section. For instance to fix the $(1,1)-$component of the
specification (9.2) we noticed that the last estimated variances with the
balance services and merchandise trade data are all greater than 200. In the
energy-transport price indexes data some of the last estimated variances of
the first component are even greater than 300. The amplitudes of the functions
$f_{1}(\cdot)$ and $f_{2}(\cdot)$ in the volatility specification with an
abrupt break defined in (9.3) were calibrated close to the means of the first
$T/2$ estimated volatilities and of the last $T/2$ volatilities for the
balance services and merchandise trade data. To assess the finite sample
properties of the tests under comparison when the errors are stationary, we
also considered standard i.i.d. Gaussian error processes. For each experiment
$N=1000$ independent trajectories are simulated using DGP (9.1). Samples of
length $T=50$, $T=100$ and $T=200$ are simulated.
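For concreteness, a minimal sketch of the simulation of one trajectory of DGP (9.1) under the smooth volatility specification (9.2); the Cholesky factor is used as a convenient square root of $\Sigma(r)$, and all names are illustrative:

```python
import numpy as np

def simulate_dgp(T, a=0.0, pi1=250.0, pi2=5.0, varpi=0.2, seed=0):
    """Simulate one trajectory of the bivariate DGP (9.1) with the smooth
    volatility (9.2); a = 0 gives the size design (VAR(1) truth) and
    a = -0.3 the power design (VAR(2) truth)."""
    rng = np.random.default_rng(seed)
    A1 = np.array([[0.3, -0.3], [0.0, -0.1]])
    A2 = a * np.eye(2)
    X = np.zeros((T + 2, 2))                       # two pre-sample values set to zero
    for t in range(2, T + 2):
        r = (t - 1) / T                            # rescaled time in (0, 1]
        s1, s2 = 1.0 + pi1 * r, 0.1 + pi2 * r
        Sigma = np.array([[s1 * (1 + varpi ** 2), varpi * np.sqrt(s1 * s2)],
                          [varpi * np.sqrt(s1 * s2), s2]])
        H = np.linalg.cholesky(Sigma)              # any square root with H H' = Sigma
        u = H @ rng.standard_normal(2)
        X[t] = A1 @ X[t - 1] + A2 @ X[t - 2] + u
    return X[2:]
```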
We first study the empirical size of the tests taking $\mathfrak{a}=0$, and
adjusting a VAR(1) model to the simulated processes. The portmanteau tests for
the non correlation of the error terms are applied using $m=5$ and $m=15$ at
the asymptotic nominal level 5%. The results are given in Tables 1, 2 and 3.
Since $N=1000$ replications are performed and assuming that the finite sample
size of the tests is $5\%$, the relative rejection frequencies should be
between the significance limits 3.65% and 6.35% with probability 0.95. The relative rejection frequencies are displayed in bold type when they fall outside these significance limits. Note that the distributions (4.2) and (5.4)
are given for fixed $m$, while the $\chi_{d^{2}(m-p)}^{2}$ approximation (see
discussion after Theorem 4.1 above) should be accurate only for large $m$.
Therefore we only comment on the results for small and moderate samples ($T=50$
and $T=100$) for the standard portmanteau tests with $m=5$, while the results
for large samples ($T=200$) are considered when $m=15$ is taken.
In Table 1 i.i.d. standard Gaussian errors are used, that is, $\Sigma(\cdot)\equiv I_{2}$. In this simple case the relative rejection frequencies of the tests converge to the asymptotic nominal level. In general we do not observe a major loss of efficiency of the $LB_{m}^{ALS}$ test when compared to the standard test. Therefore one can use the $LB_{m}^{ALS}$ test in case of doubt about the presence of unconditional heteroscedasticity. The same remark can be made for the $LB_{m}^{OLS}$ test when $m$ is small. However we note that the $LB_{m}^{OLS}$ test is oversized for small samples and when $m$ is
large. It also appears that the tests based on modified statistics are
oversized when $m$ is large. This can be explained by the fact that the
matrices $\Lambda_{m}^{\epsilon,\theta^{\prime}}\Lambda_{m}^{\epsilon,\theta}$
and $\Phi_{m}^{\prime}(\Lambda_{m}^{u,u})^{-1}\Phi_{m}$ are difficult to
invert in such situations.
In Table 2 heteroscedastic errors with an abrupt volatility break are
considered, while the trending specification (9.2) is used for the volatility
in Table 3. In line with the theoretical results, the relative rejection frequencies of
the ALS, GLS and OLS tests converge to the asymptotic nominal level. As
expected the standard portmanteau test is not valid. In general it emerges
from this part that the $LB_{m}^{ALS}$ and $LB_{m}^{GLS}$ tests have similar
results. It thus seems that estimating the volatility entails no major loss of efficiency when building this kind of test. We did not find a clear advantage for the $LB_{m}^{ALS}$ test when compared to the $LB_{m}^{OLS}$ test in the presented experiments. However, note that the asymptotic distribution (4.2) of the standard portmanteau statistic seems to be estimated with less precision than the asymptotic distribution of the ALS portmanteau statistic. For instance, we see in Table 4 that for $m=5$ the standard deviations of the ALS weights are less than or equal to the standard deviations of the OLS weights. We found that this difference is more marked for $m=15$, which may lead to problems in controlling the error of the first kind for the $LB_{m}^{OLS}$ test, as already noted in
the homoscedastic case (see Table 1 for $m=15$). Other simulation results not
reported here show that the $LB_{m}^{OLS}$ test can be oversized when $m$ is
large as in the homoscedastic case with $m=15$ and we noted that the
estimation of the weights requires a relatively large number of observations
for the OLS approach. A possible explanation is also that $\delta_{i}^{als}\in[0,1]$ while $\delta_{i}^{ols}\in[0,\infty)$ for $i\in\\{1,\dots,d^{2}m\\}$, which may produce a more unstable estimation of $\delta_{i}^{ols}$ in many cases. In addition, the energy-transportation price indexes example below shows that the estimation of the weights may be problematic in the OLS case. Therefore, we recommend choosing a small $m$ when the samples are small and using a large $m$ only when the samples are large, even though the asymptotic results hold true for fixed $m$ when the $LB_{m}^{OLS}$ test is considered. We again note that the tests with modified statistics are in
general clearly oversized. A possible explanation is that White (1980) type
correction matrices are inverted in the modified portmanteau statistics, which may lead to oversized tests, as pointed out by Vilasuso (2001) in the stationary case. We can conclude that with a data generating process close to
our simulation design the $LB_{m}^{ALS}$ test controls reasonably well the
error of the first kind in all the situations we considered.
### 9.2 Empirical power
In the empirical power part of this section, we examine the ability of the
different tests to detect an underspecified autoregressive order. The power
investigation is realized in the Bahadur sense, that is the sample size is
increased while the alternative is kept fixed. More precisely we set
$\mathfrak{a}=-0.3$, and we adjusted a VAR(1) model to the simulated VAR(2)
processes with $T=50,100,200,300$. We simulated $N=1000$ independent
trajectories using DGP (9.1) with standard Gaussian innovations and
heteroscedastic volatility specifications (9.2) and (9.3). The non correlation
of the error process is again tested at the asymptotic nominal level 5%, but
taking $m=10$ in all the experiments. In Figure 3 we consider the
homoscedastic case, errors where an abrupt volatility shift is observed and
errors with smoothly varying variance.
When the variance is constant, it appears that the tests with non standard distributions have power similar to that of the standard test. Therefore we again note that there is no major loss of efficiency when the tests with modified distributions are used while the variance is constant. The tests with standard distributions may seem more powerful when the samples are small, but this mainly comes from the fact that these tests are oversized. When the errors are heteroscedastic, the standard test seems more powerful than the other tests. However, the standard test is oversized in this case and the comparison is again not fair. A similar comment can be made when the tests with modified distributions are compared to the tests with standard distributions. It emerges that the $LB_{m}^{ALS}$ test is more powerful than the $LB_{m}^{OLS}$ test. The relation between the powers of
$\widetilde{LB}_{m}^{OLS}$ and $\widetilde{LB}_{m}^{ALS}$ is not clear.
Finally we can remark that in the presented experiments the GLS type tests are
not necessarily more powerful than the ALS tests.
The results of Section 7 are also illustrated. To this aim, $N=1000$
independent trajectories are simulated using a VAR(1) DGP with
$A_{01}=-0.3I_{2}$. We consider heteroscedastic errors with trending behavior
volatility specification:
$\Sigma(r)=\left(\begin{array}[]{cc}(1+\pi_{1}r)&0\\\ 0&(1+\pi_{1}r)\\\
\end{array}\right)$
where we take $\pi_{1}=150$, and a volatility with an abrupt shift at $T/2$:
$\Sigma(r)=\left(\begin{array}[]{cc}(1+f_{1}(r))&0\\\ 0&(1+f_{1}(r))\\\
\end{array}\right),$
where $f_{1}(r)=10\times\mathbf{1}_{(r\geq 1/2)}(r)$. The lengths of the
simulated series are $T=50,100,200$. The non correlation of the observed
process is tested at the asymptotic nominal level 5% taking $m=10$. From
Figure 4 the $\widetilde{LB}_{OLS}$ test may appear more powerful than the
$LB_{ALS}$ test when the samples are small. However this comes from the fact
that the $\widetilde{LB}_{OLS}$ test is strongly oversized in small samples.
In accordance with the theoretical results, we see that the $LB_{ALS}$ test clearly outperforms the $\widetilde{LB}_{OLS}$ and $LB_{OLS}$ tests as the samples become large.
## 10 Illustrative examples
For our real data illustrations we use two U.S. economic data sets. First we
consider the quarterly U.S. international finance data for the period from
January 1, 1970 to October 1, 2009: the balance on services and the balance on
merchandise trade in billions of Dollars. The length of the balance data
series is $T=159$. We also consider monthly data on the U.S. consumer price
indexes of energy and transportation for all urban consumers for the period
from January 1, 1957 to February 1, 2011. The length of the energy-
transportation series is $T=648$. The series are available seasonally adjusted
from the website of the research division of the Federal Reserve Bank of Saint
Louis.
### 10.1 VAR modeling of the U.S. balance trade data
In our VAR system the first component corresponds to the balance on
merchandise trade and the second corresponds to the balance on services trade.
From Figure 5 it seems that the series have a random walk behavior. We applied
the approach of unit root testing proposed by Beare (2008) in the presence of non constant volatility, using the Augmented Dickey-Fuller (ADF) test for each series. The ADF statistic is $0.72$ for the merchandise trade balance data and is $-0.15$ for the services balance data. These statistics are greater than the 5% critical value $-1.94$ of the ADF test, so that the stability hypothesis has to be rejected for the studied series. Furthermore we also
applied the Kolmogorov-Smirnov (KS) type test for homoscedasticity considered
by Cavaliere and Taylor (2008, Theorem 3) for each series. The KS statistic is
$3.05$ for the merchandise trade balance data and is $7.62$ for the services
balance data. These statistics are greater than the 5% critical value $1.36$,
so that the homoscedasticity hypothesis has to be rejected for our series.
Therefore we consider the first differences of the series to get stable
processes, so that the evolutions of the U.S. balance data are studied in the
sequel. From Figure 6 we see that the first differences of the series are
stable but have a non constant volatility. We adjusted a VAR(1) model to
capture the linear dynamics of the series. The ALS and OLS estimators are
given in Table 5. The standard deviations into brackets are computed using the
results (2.3) and (2.4). In accordance with Patilea and Raïssi (2010), we find that the ALS method seems to estimate the autoregressive parameters better than the OLS method, in the sense that the standard
deviations of the ALS estimators are smaller than those of the OLS estimators.
The bandwidth we use for the ALS estimation, $b=7.67\times 10^{-2}$, is
selected by cross-validation in a given range and using 200 grid points
(Figure 7).
A quick inspection of Figures 8 and 9 suggests that the OLS residuals have a
varying volatility, while the stationarity of the ALS residuals is plausible.
Thus it seems that the volatility of the error process $(u_{t})$ is
satisfactorily estimated by the adaptive method. Now we examine the possible
presence of conditional heteroscedasticity in the ALS residuals. From Figure 10
we see that the autocorrelations of the squared ALS residuals components are
not significant. In addition we also considered the ARCH-LM test of Engle
(1982) with different lags in Table 6 for testing the presence of ARCH effects
in the ALS residuals. It appears that the null hypothesis of conditional
homoscedasticity cannot be rejected at the level 5%. These diagnostics give
some evidence that the conditional homoscedasticity assumption on
$(\epsilon_{t})$ is plausible in our case. To analyze the changes of the
variance of the OLS error terms, we plotted the estimated variances and cross
correlation of the components of the error process in Figures 11 and 12. It
appears from Figure 11 that the variance of the first component of the
residuals does not vary much until the end of the 90’s and then increases. Similarly, the volatility of the second component of the residuals does not vary much until the end of the 80’s and then increases. From Figure 12 we also note that the correlation between the components of the innovations seems to be positive until the beginning of the 90’s and then becomes negative.
Now we turn to the check of the goodness-of-fit of the VAR(1) model adjusted
to the first differences of the series. To illustrate the results of
Proposition 1 we plotted the ALS residual autocorrelations in Figures 13 and
14, and the OLS residual autocorrelations in Figures 15 and 16, where we
denote
$\hat{R}_{OLS}^{ij}(h)=\frac{T^{-1}\sum_{t=h+1}^{T}\hat{u}_{i\>t}\hat{u}_{j\>t-h}}{\hat{\sigma}_{u}(i)\hat{\sigma}_{u}(j)}\quad\mbox{and}\quad\hat{R}_{ALS}^{ij}(h)=\frac{T^{-1}\sum_{t=h+1}^{T}\check{\epsilon}_{i\>t}\check{\epsilon}_{j\>t-h}}{\check{\sigma}_{\epsilon}(i)\check{\sigma}_{\epsilon}(j)}.$
The ALS 95% confidence bounds obtained using (3.14) and (5.3) are displayed in
Figures 13 and 14. In Figures 15 and 16 the standard 95% confidence bounds
obtained using (3.9) and the OLS 95% confidence bounds obtained using (3.13)
are plotted. We can remark that the ALS residual autocorrelations are inside
or not much larger than the ALS significance limits. A similar comment can be
made for the OLS residual autocorrelations when compared to the OLS
significance limits. However we found that the OLS significance limits can be
quite different from the standard significance limits. This can be explained
by the presence of unconditional volatility in the analyzed series. In
particular, we note that $\hat{R}_{OLS}^{21}(5)$ is far from the standard
confidence bounds. We also apply the different portmanteau tests considered in
this paper for testing if the errors are uncorrelated. The test statistics and
$p$-values of the tests are displayed in Tables 7 and 8. It appears that the
$p$-values of the standard tests are very small. Therefore the standard tests
clearly reject the null hypothesis. We also remark that the $p$-values of the
modified tests based on the OLS estimation and of the adaptive tests are far
from zero. Thus in view of the tests introduced in this paper the null
hypothesis is not rejected. These contradictory results can be explained by
the fact that we found that the distribution in (4.2) is very different from
the $\chi^{2}_{d^{2}(m-p)}$ standard distribution. For instance we obtained
$\sup_{i\in\\{1,\dots,d^{2}m\\}}\left\\{\hat{\delta}_{i}^{ols}\right\\}=11.18$
for $m=15$ in our case. Our findings may be viewed as a consequence of the
presence of unconditional heteroscedasticity in the data. Since the
theoretical basis of the standard tests does not cover the case of stable processes with non constant volatility, we can suspect that the results of the standard tests are not reliable. Therefore we can conclude that the practitioner is likely to select too large an autoregressive order in our case
when using the standard tools for checking the adequacy of the VAR model. From
Table 8 we see that the OLS and ALS statistics are quite different. We also
noted that the weights (not reported here) of the sums in (4.2) and in (5.4)
are quite different for our example.
### 10.2 VAR modeling of the U.S. energy-transportation data
In this example the first component of the VAR system corresponds to the
transportation price index and the second corresponds to the energy price
index. We first briefly describe some features of the energy-transportation
price indexes. In Figure 17 we again see that the studied series seem to have a random walk behavior, and we therefore again consider the first differences of the
series. The KS type statistic is $7.05$ for the energy price index and is
$6.81$ for the transportation price index, so that the homoscedasticity
hypothesis has to be rejected for these series. From Figure 18 we see that the
first differences of the series are stable but have a non constant volatility.
We adjusted a VAR(4) model to capture the linear dynamics of the series. The
results in Table 9 indicate that the ALS approach is more precise than the OLS
approach for the estimation of the autoregressive parameters. The bandwidth
obtained by the cross-validation for the ALS estimation is $b=9.53\times
10^{-2}$ (Figure 19). From Figure 21 we see that the OLS residuals seem to
have a varying volatility. The stationarity of the ALS residuals is plausible
(Figure 20) and ARCH-LM tests (not reported here) show that the conditional
homoscedasticity of the ALS residuals cannot be rejected. From Figure 22 we
see that the shapes of the variance structures of the components of the OLS residuals are similar. More precisely, it can be noted that the variance of the components of the OLS residuals is relatively low and seems constant until the beginning of the 80’s. The OLS residual variance then seems to switch to another regime, with a higher but constant variance, from the beginning of the 80’s to the end of the 90’s, and seems to increase from the end of the 90’s. From Figure 12 we also note that the
components of the OLS residuals seem highly correlated.
Now we check the adequacy of the VAR(4) model. The ALS and OLS residual
autocorrelations are given in Figures 24, 25 and 26, 27. The OLS residual
autocorrelations with the standard bounds are given in Figures 28 and 29. From
Figures 24 and 25 it can be noted that the ALS residual autocorrelations are
inside or not much larger than the ALS significance limits. The OLS confidence
bounds seem unreliable since we remark that some confidence bounds
(corresponding to the $\hat{R}_{OLS}^{ij}(2)$’s) are unexpectedly much larger
than the OLS residual autocorrelations. From Figures 28 and 29 it can also be
noted that some of the OLS residual autocorrelations are far from the standard
confidence bounds. However considering the standard confidence bounds in
presence of non constant variance can be misleading in view of our theoretical
findings. In addition we also remark that some standard confidence bounds
(corresponding to the $\hat{R}_{OLS}^{11}(2)$, $\hat{R}_{OLS}^{21}(2)$ and
$\hat{R}_{OLS}^{22}(8)$, $\hat{R}_{OLS}^{12}(8)$) are unexpectedly far from
the residual autocorrelations. The non correlation of the residuals is tested
using the different portmanteau tests considered in this paper. The test
statistics and $p$-values of the tests are displayed in Tables 10 and 11. The
standard tests again lead to selecting overly complicated models since the adequacy of
the VAR(4) model is rejected. We also remark that the $p$-values of the OLS
tests are very large. In fact we found that the OLS tests do not reject the
hypothesis of the adequacy of a VAR(1) model for the studied series. From
Table 12 we see that some of the estimated weights for the asymptotic
distribution of the standard statistic take very large values. It appears that
the OLS method does not seem able to estimate correctly the asymptotic distribution of the standard statistics and may be suspected to have a low power in this example. Thus the OLS portmanteau tests seem unreliable in this example, in contrast to the ALS portmanteau test, whose asymptotic distribution seems well estimated. Finally, note that we
found that the estimators of $\Lambda_{m}^{\epsilon,\theta^{\prime}}\Lambda_{m}^{\epsilon,\theta}$ and $\Phi_{m}^{\prime}(\Lambda_{m}^{u,u})^{-1}\Phi_{m}$ are not invertible, so that the $\widetilde{LB}_{m}^{ALS}$ and $\widetilde{LB}_{m}^{OLS}$ tests are not feasible in this example. In view of the different outputs we presented, it seems that the $LB_{m}^{ALS}$ test is the only test which gives reliable conclusions in this example. This may be explained by the fact that the variance changes much more strongly in this second example than in the first one, which makes the tests based on inverted matrices, or the tests which do not exploit the variance structure, difficult to implement.
## 11 Conclusion
In this paper the problem of specification of the linear autoregressive
dynamics of multivariate series with deterministic but non constant volatility
is studied. Considering such situations is important since it is well known
that economic or financial series commonly exhibit non stationary volatility.
The unreliability of the standard portmanteau tests for testing the adequacy
of the autoregressive order of VAR processes is highlighted through
theoretical results and empirical illustrations. From the statistical
methodology point of view, the main contribution of the paper is two-fold. In
the setup of a stable VAR with time-varying variance, (a) we show how to
compute corrected critical values for the standard portmanteau statistics
implemented in all specialized software; and (b) we propose new portmanteau
statistics based on the model residuals filtered for the non constant
variance. Moreover, we provide some theoretical and empirical power
comparisons of the two approaches and we show that they are well-suited for
replacing the usual test procedures even when the volatility is constant. The
new portmanteau statistics require the estimation of the time-varying variance
that is done by classical kernel smoothing of the outer products of the OLS residual vectors. Another contribution of the paper is that our asymptotic results are derived uniformly in the bandwidth used in the kernel. This makes the theory compatible with practice, where the data are usually used to determine the bandwidth. Our theoretical and
empirical investigation could be extended to other setups, like for instance
the co-integrated systems. We briefly mention this extension but a deeper
investigation is left for future work.
## 12 Appendix A: Proofs
For the sake of a simpler presentation, hereafter we stick to our
identification condition for $H_{t}$ and hence we replace everywhere $G(r)$ by
$\Sigma(r)^{1/2}$. Recall that
$\tilde{X}_{t}=\sum_{i=0}^{\infty}\tilde{\psi}_{i}u_{t-i}^{p}=\sum_{i=0}^{\infty}K^{i}\tilde{u}_{t-i},$
(see pages 2 and 3). Let us introduce
$\Upsilon_{t-1}^{u}=(u_{t-1}^{\prime},\dots,u_{t-m}^{\prime},\tilde{X}_{t-1}^{\prime})^{\prime}=(u_{t-1}^{m^{\prime}},\tilde{X}_{t-1}^{\prime})^{\prime}$
and
$\Upsilon_{t-1}^{\epsilon}=(\epsilon_{t-1}^{\prime},\dots,\epsilon_{t-m}^{\prime},\tilde{X}_{t-1}^{\prime})^{\prime}=(\epsilon_{t-1}^{m^{\prime}},\tilde{X}_{t-1}^{\prime})^{\prime},$
for a given $m>0$. To prove Propositions 1 and 2 we need several preliminary
results that are gathered in Lemmas 12.1 to 12.3 below.
###### Lemma 12.1
Under Assumption A1 we have
$\lim_{T\to\infty}E\left[\tilde{X}_{[Tr]-1}\tilde{X}_{[Tr]-1}^{\prime}\right]=\sum_{i=0}^{\infty}\tilde{\psi}_{i}\left\\{\mathbf{1}_{p\times
p}\otimes\Sigma(r)\right\\}\tilde{\psi}_{i}^{\prime}:=\Omega(r),$ (12.1)
$\lim_{T\to\infty}E\left[\Upsilon_{[Tr]-1}^{u}\Upsilon_{[Tr]-1}^{u^{\prime}}\right]=\Omega^{u}(r),$
(12.2)
$\lim_{T\to\infty}E\left[\Upsilon_{[Tr]-1}^{\epsilon}\Upsilon_{[Tr]-1}^{\epsilon^{\prime}}\right]=\Omega^{\epsilon}(r),$
(12.3)
for values $r\in(0,1]$ where the functions $g_{ij}(\cdot)$ are continuous. The
matrices in (12.2) and (12.3) are given by
$\Omega^{u}(r)=\left(\begin{array}[]{cc}I_{m}\otimes\Sigma(r)&\Theta_{m}^{u}(r)\\\
\Theta_{m}^{u}(r)^{\prime}&\Omega(r)\\\
\end{array}\right),\quad\Theta_{m}^{u}(r)=\sum_{i=0}^{m-1}\left\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\Sigma(r)\right\\}K^{i\,\prime}$
and
$\Omega^{\epsilon}(r)=\left(\begin{array}[]{cc}I_{dm}&\Theta_{m}^{\epsilon}(r)\\\
\Theta_{m}^{\epsilon}(r)^{\prime}&\Omega(r)\\\
\end{array}\right),\quad\Theta_{m}^{\epsilon}(r)=\sum_{i=0}^{m-1}\left\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\Sigma(r)^{1/2}\right\\}K^{i\,\prime}.$
Proof of Lemma 12.1. Statement (12.1) is a direct consequence of Lemma 7.2 in Patilea and Raïssi (2010). For the proof of (12.2) we write (with a common abuse of notation: in Assumption A1 the matrix-valued function $\Sigma(\cdot)$ is not defined for negative values; to remedy this it suffices to extend $\Sigma(\cdot)$ to the left of the origin, for instance by setting $\Sigma(r)$ equal to the identity matrix for $r\leq 0$)
$\displaystyle E(u_{t-1}^{m}\tilde{X}_{t-1}^{\prime})$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{\infty}E(u_{t-1}^{m}\tilde{u}_{t-i-1}^{\prime}K^{i\,\prime})$
$\displaystyle=$
$\displaystyle\sum_{i=0}^{m-1}E(u_{t-1}^{m}\tilde{u}_{t-i-1}^{\prime}K^{i\,\prime})$
$\displaystyle=$
$\displaystyle\sum_{i=0}^{m-1}\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes
H_{t-i-1}H_{t-i-1}^{\prime}\\}K^{i\,\prime}$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{m-1}\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\Sigma((t-i-1)/T)\\}K^{i\,\prime}.$
Therefore
$\displaystyle\lim_{T\to\infty}E(u_{[Tr]-1}^{m}\tilde{X}_{[Tr]-1}^{\prime})$
$\displaystyle=$
$\displaystyle\lim_{T\to\infty}\sum_{i=0}^{m-1}\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\Sigma(([Tr]-i-1)/T)\\}K^{i\,\prime}$
$\displaystyle=$
$\displaystyle\sum_{i=0}^{m-1}\\{e_{m}(i+1)e_{p}(1)^{\prime}\otimes\Sigma(r)\\}K^{i\,\prime}.$
Similarly we have
$\lim_{T\to\infty}E(u_{[Tr]-1}^{m}u_{[Tr]-1}^{m^{\prime}})=I_{m}\otimes\Sigma(r),$
(12.4)
so that using (12.1) we obtain the result (12.2). The proof of (12.3) is
similar. $\quad\square$
Let us define
$v_{t}^{u}=\mbox{vec}(\Upsilon_{t-1}^{u}\Upsilon_{t-1}^{u^{\prime}}\otimes
u_{t}u_{t}^{\prime})$ and
$v_{t}^{\epsilon}=\mbox{vec}(\Upsilon_{t-1}^{\epsilon}\Upsilon_{t-1}^{\epsilon^{\prime}}\otimes\epsilon_{t}\epsilon_{t}^{\prime})$.
The following lemma is similar to Lemma 7.3 of Patilea and Raïssi (2010), and
hence the proof is omitted.
###### Lemma 12.2
Under A1 we have
$T^{-1}\sum_{t=i+1}^{T}\mbox{vec}(u_{t}\tilde{X}_{t-i}^{\prime})\stackrel{{\scriptstyle
P}}{{\longrightarrow}}0,$ (12.5)
$T^{-1}\sum_{t=i+1}^{T}\mbox{vec}(\epsilon_{t}\tilde{X}_{t-i}^{\prime})\stackrel{{\scriptstyle
P}}{{\longrightarrow}}0,$ (12.6)
for $i>0$, and
$T^{-1}\sum_{t=m+1}^{T}\mbox{vec}(u_{t-1}^{m}\tilde{X}_{t-1}^{\prime})\stackrel{{\scriptstyle
P}}{{\longrightarrow}}\lim_{T\to\infty}T^{-1}\sum_{t=m+1}^{T}\mbox{vec}\left\\{E(u_{t-1}^{m}\tilde{X}_{t-1}^{\prime})\right\\},$
(12.7)
$T^{-1}\sum_{t=m+1}^{T}\mbox{vec}(\epsilon_{t-1}^{m}\tilde{X}_{t-1}^{\prime})\stackrel{{\scriptstyle
P}}{{\longrightarrow}}\lim_{T\to\infty}T^{-1}\sum_{t=m+1}^{T}\mbox{vec}\left\\{E(\epsilon_{t-1}^{m}\tilde{X}_{t-1}^{\prime})\right\\}.$
(12.8)
In addition we have
$T^{-1}\sum_{t=1}^{T}v_{t}^{u}\stackrel{{\scriptstyle
P}}{{\longrightarrow}}\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}E(v_{t}^{u})=\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}\mbox{vec}\left\\{E(\Upsilon_{t-1}^{u}\Upsilon_{t-1}^{u^{\prime}})\otimes\Sigma_{t}\right\\},$
(12.9) $T^{-1}\sum_{t=1}^{T}v_{t}^{\epsilon}\stackrel{{\scriptstyle
P}}{{\longrightarrow}}\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}E(v_{t}^{\epsilon})=\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}\mbox{vec}\left\\{E(\Upsilon_{t-1}^{\epsilon}\Upsilon_{t-1}^{\epsilon^{\prime}})\otimes
I_{d}\right\\}.$ (12.10)
###### Lemma 12.3
Under A1 we have
$\hat{\Sigma}_{\tilde{\underline{X}}}=\Lambda_{1}+o_{p}(1),$ (12.11)
$\hat{\Sigma}_{\tilde{X}}=\Lambda_{3}+o_{p}(1).$ (12.12)
In addition we also have
$T^{-\frac{1}{2}}\sum_{t=1}^{T}\Upsilon_{t-1}^{u}\otimes
u_{t}\Rightarrow\mathcal{N}(0,\Xi_{u}),$ (12.13)
$T^{-\frac{1}{2}}\sum_{t=1}^{T}J_{t}^{-1}(\Upsilon_{t-1}^{\epsilon}\otimes\epsilon_{t})\Rightarrow\mathcal{N}(0,\Xi_{\epsilon}),$
(12.14)
where
$J_{t}=\left(\begin{array}[]{cc}I_{d^{2}m}&0\\\ 0&I_{dp}\otimes H_{t}\\\
\end{array}\right)$
and
$\Xi_{u}=\int_{0}^{1}\left(\begin{array}[]{cc}I_{m}\otimes\Sigma(r)^{\otimes
2}&\Theta_{m}^{u}(r)\otimes\Sigma(r)\\\
\Theta_{m}^{u\,\prime}(r)\otimes\Sigma(r)&\Omega(r)\otimes\Sigma(r)\\\
\end{array}\right)dr=\left(\begin{array}[]{cc}\Lambda_{m}^{u,u}&\Lambda_{m}^{u,\theta}\\\
\Lambda_{m}^{u,\theta\,\prime}&\Lambda_{2}\\\ \end{array}\right),$
$\Xi_{\epsilon}=\int_{0}^{1}\left(\begin{array}[]{cc}I_{d^{2}m}&\Theta_{m}^{\epsilon}(r)\otimes\Sigma(r)^{-\frac{1}{2}}\\\
\Theta_{m}^{\epsilon^{\prime}}(r)\otimes\Sigma(r)^{-\frac{1}{2}}&\Omega(r)\otimes\Sigma(r)^{-1}\\\
\end{array}\right)dr=\left(\begin{array}[]{cc}\Lambda_{m}^{u,u}&\Lambda_{m}^{\epsilon,\theta}\\\
\Lambda_{m}^{\epsilon,\theta\,\prime}&\Lambda_{1}\\\ \end{array}\right).$
Proof of Lemma 12.3 Statements (12.11) and (12.12) are direct
consequences of Lemma 7.4 of Patilea and Raïssi (2010). We only give the proofs
of (12.13) and (12.14). To prove (12.14), using the well-known identity
$(B\otimes C)(D\otimes F)=(BD)\otimes(CF)$ for matrices of appropriate
dimensions, we obtain
$J_{t}^{-1}(\Upsilon_{t-1}^{\epsilon}\otimes\epsilon_{t})(\Upsilon_{t-1}^{\epsilon^{\prime}}\otimes\epsilon_{t}^{\prime})J_{t}^{-1}=J_{t}^{-1}(\Upsilon_{t-1}^{\epsilon}\Upsilon_{t-1}^{\epsilon^{\prime}}\otimes\epsilon_{t}\epsilon_{t}^{\prime})J_{t}^{-1}.$
From (12.10) we write
$\displaystyle
T^{-1}\sum_{t=1}^{T}J_{t}^{-1}(\Upsilon_{t-1}^{\epsilon}\Upsilon_{t-1}^{\epsilon^{\prime}}\otimes\epsilon_{t}\epsilon_{t}^{\prime})J_{t}^{-1}$
$\displaystyle\stackrel{{\scriptstyle P}}{{\rightarrow}}$
$\displaystyle\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}J_{t}^{-1}\left[E\left\\{\Upsilon_{t-1}^{\epsilon}\Upsilon_{t-1}^{\epsilon^{\prime}}\right\\}\otimes
I_{d}\right]J_{t}^{-1}.$
Now let us denote the points of discontinuity of the functions $g_{ij}(\cdot)$ by
$\xi_{1},\xi_{2},\dots,\xi_{q}$, where $q$ is a finite number independent of
$T$. We write
$\displaystyle\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}J_{t}^{-1}\left[E(\Upsilon_{t-1}^{\epsilon}\Upsilon_{t-1}^{\epsilon^{\prime}})\otimes
I_{d}\right]J_{t}^{-1}$ $\displaystyle=$
$\displaystyle\lim_{T\to\infty}\sum_{t=1}^{T}\int_{t/T}^{(t+1)/T}J_{[Tr]}^{-1}\left[E(\Upsilon_{[Tr]-1}^{\epsilon}\Upsilon_{[Tr]-1}^{\epsilon^{\prime}})\otimes
I_{d}\right]J_{[Tr]}^{-1}dr+o_{p}(1)$ $\displaystyle=$
$\displaystyle\lim_{T\to\infty}\int_{1/T}^{\xi_{1}}J_{[Tr]}^{-1}\left[E(\Upsilon_{[Tr]-1}^{\epsilon}\Upsilon_{[Tr]-1}^{\epsilon^{\prime}})\otimes
I_{d}\right]J_{[Tr]}^{-1}dr+\dots$ $\displaystyle\dots+$
$\displaystyle\int_{\xi_{q}}^{(T+1)/T}J_{[Tr]}^{-1}\left[E(\Upsilon_{[Tr]-1}^{\epsilon}\Upsilon_{[Tr]-1}^{\epsilon^{\prime}})\otimes
I_{d}\right]J_{[Tr]}^{-1}dr+o_{p}(1).$
Then from (12.3) we obtain
$\displaystyle
T^{-1}\sum_{t=1}^{T}J_{t}^{-1}(\Upsilon_{t-1}^{\epsilon}\Upsilon_{t-1}^{\epsilon^{\prime}}\otimes\epsilon_{t}\epsilon_{t}^{\prime})J_{t}^{-1}$
$\displaystyle\stackrel{{\scriptstyle P}}{{\longrightarrow}}$
$\displaystyle\int_{0}^{1}J(r)^{-1}\left(\begin{array}[]{cc}I_{d^{2}m}&\Theta_{m}^{\epsilon}(r)\otimes
I_{d}\\\ \Theta_{m}^{\epsilon^{\prime}}(r)\otimes I_{d}&\Omega(r)\otimes
I_{d}\\\ \end{array}\right)J(r)^{-1}dr,$
where
$J(r)=\left(\begin{array}[]{cc}I_{d^{2}m}&0\\\
0&I_{dp}\otimes\Sigma(r)^{-\frac{1}{2}}\\\ \end{array}\right)$
and $\Sigma(r)^{1/2}=G(r)=H_{[Tr]}$. Noting that
$J_{t}^{-1}(\Upsilon_{t-1}^{\epsilon}\otimes\epsilon_{t})$ are martingale
differences, we obtain the result (12.14) using the Lindeberg central limit
theorem. Using relations (12.2) and (12.9), the proof of (12.13) is similar.
Finally, the equivalent compact expressions of $\Xi_{\epsilon}$ and $\Xi_{u}$
can be easily derived using elementary properties of the Kronecker product.
$\quad\square$
Proof of Proposition 1 First we establish the result (3.5). Let us define
$\Gamma_{u}(h)=T^{-1}\sum_{t=h+1}^{T}u_{t}u_{t-h}^{\prime}\quad\mbox{and}\quad
c_{m}^{u}=\mbox{vec}\left\\{(\Gamma_{u}(1),\dots,\Gamma_{u}(m))\right\\}.$
Let us first show the asymptotic normality of
$T^{1/2}(c_{m}^{u\,\prime},(\hat{\theta}_{OLS}-\theta_{0})^{\prime})^{\prime}$.
Note that
$c_{m}^{u}=T^{-1}\sum_{t=1}^{T}\tilde{u}_{t-1}^{m}\otimes
u_{t}\quad\mbox{and}\quad\hat{\theta}_{OLS}-\theta_{0}=\hat{\Sigma}_{\tilde{X}}^{-1}\left\\{T^{-1}\sum_{t=1}^{T}(\tilde{X}_{t-1}\otimes
u_{t})\right\\},$
with $\tilde{u}_{t-1}^{m}=(\mathbf{1}_{(0,\infty)}(t-1)\times
u_{t-1}^{\prime},\dots,\mathbf{1}_{(0,\infty)}(t-m)\times
u_{t-m}^{\prime})^{\prime}$. From (12.12) we write
$T^{\frac{1}{2}}\left(\begin{array}[]{c}c_{m}^{u}\\\
\hat{\theta}_{OLS}-\theta_{0}\\\
\end{array}\right)=\dot{\Lambda}_{3}^{-1}\left\\{T^{-\frac{1}{2}}\sum_{t=1}^{T}\Upsilon_{t-1}^{u}\otimes
u_{t}\right\\}+o_{p}(1),$
with
$\dot{\Lambda}_{3}=\left(\begin{array}[]{cc}I_{d^{2}m}&0\\\ 0&\Lambda_{3}\\\
\end{array}\right).$
Then we can obtain from (12.13)
$T^{\frac{1}{2}}\left(\begin{array}[]{c}c_{m}^{u}\\\
\hat{\theta}_{OLS}-\theta_{0}\\\
\end{array}\right)\Rightarrow\mathcal{N}(0,\dot{\Lambda}_{3}^{-1}\Xi_{u}\dot{\Lambda}_{3}^{-1}),$
(12.16)
with $\Xi_{u}$ defined in Lemma 12.3. Now, define
$u_{t}(\theta)=X_{t}-(\tilde{X}_{t-1}^{\prime}\otimes I_{d})\theta$ with
$\theta\in\mathbb{R}^{d^{2}p}$. Considering $\hat{\gamma}_{m}^{u,OLS}$ and
$c_{m}^{u}$ as values of the same function at the points $\theta_{0}$ and
$\hat{\theta}_{OLS}$, by the Mean Value Theorem
$\hat{\gamma}_{m}^{u,OLS}=c_{m}^{u}+T^{-1}\sum_{t=1}^{T}\left\\{\tilde{u}_{t-1}^{m}(\theta)\otimes\frac{\partial
u_{t}(\theta)}{\partial\theta^{\prime}}+\frac{\partial\tilde{u}_{t-1}^{m}(\theta)}{\partial\theta^{\prime}}\otimes
u_{t}(\theta)\right\\}_{\theta=\theta^{*}}(\hat{\theta}_{OLS}-\theta_{0}),$
with $\theta^{*}$ between $\hat{\theta}_{OLS}$ and $\theta_{0}$ (the value
$\theta^{*}$ may be different for different components of
$\hat{\gamma}_{m}^{u,OLS}$ and $c_{m}^{u}$). Using
$T^{1/2}(\hat{\theta}_{OLS}-\theta_{0})=O_{p}(1)$ and since $\partial
u_{t-i}(\theta)\\!/\partial\theta^{\prime}\\!=-(\tilde{X}_{t-i-1}^{\prime}\otimes
I_{d})$, it follows from (12.5) and (12.7) that
$\displaystyle\hat{\gamma}_{m}^{u,OLS}$ $\displaystyle=$ $\displaystyle
c_{m}^{u}+\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}E\left\\{\tilde{u}_{t-1}^{m}\otimes\frac{\partial
u_{t}}{\partial\theta^{\prime}}\right\\}(\hat{\theta}_{OLS}-\theta_{0})+o_{p}(T^{-\frac{1}{2}})$
$\displaystyle=$ $\displaystyle
c_{m}^{u}+\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}-E\left\\{\tilde{u}_{t-1}^{m}\otimes\tilde{X}_{t-1}^{\prime}\otimes
I_{d}\right\\}(\hat{\theta}_{OLS}-\theta_{0})$ $\displaystyle+$ $\displaystyle
o_{p}(T^{-\frac{1}{2}}).$
From (12.2) and using arguments like in the proof of (12.14), it is easy to
see that
$\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}-E\left\\{\tilde{u}_{t-1}^{m}\otimes\tilde{X}_{t-1}^{\prime}\otimes
I_{d}\right\\}=-\int_{0}^{1}\Theta_{m}^{u}(r)dr\otimes
I_{d}+o_{p}(1)=-\Phi_{m}^{u}+o_{p}(1).$
Finally, combining the last two displays we have
$\displaystyle\hat{\gamma}_{m}^{u,OLS}$ $\displaystyle=$ $\displaystyle
c_{m}^{u}-\Phi_{m}^{u}(\hat{\theta}_{OLS}-\theta_{0})+o_{p}(T^{-\frac{1}{2}}),$
(12.18)
so that it follows from (12.16) that $T^{1/2}\hat{\gamma}_{m}^{OLS}$ is
asymptotically normal with covariance matrix
$\Sigma^{u,OLS}=\Lambda^{u,u}_{m}+\Phi^{u}_{m}\Lambda_{3}^{-1}\Lambda_{2}\Lambda_{3}^{-1}\Phi^{u\,\prime}_{m}-\Lambda^{u,\theta}_{m}\Lambda_{3}^{-1}\Phi^{u\,\prime}_{m}-\Phi^{u}_{m}\Lambda_{3}^{-1}\Lambda^{u,\theta\,\prime}_{m}.$
(12.19)
The proof of (3.7) is very similar; here we only present a sketch. Let us
define
$\Gamma_{\epsilon}(h)=T^{-1}\sum_{t=h+1}^{T}\epsilon_{t}\epsilon_{t-h}^{\prime}\quad\mbox{and}\quad
c_{m}^{\epsilon}=\mbox{vec}\left\\{(\Gamma_{\epsilon}(1),\dots,\Gamma_{\epsilon}(m))\right\\}.$
Using (12.11) and (12.14) it can be shown that
$T^{\frac{1}{2}}\left(\begin{array}[]{c}c_{m}^{\epsilon}\\\
\hat{\theta}_{GLS}-\theta_{0}\\\
\end{array}\right)\Rightarrow\mathcal{N}(0,\dot{\Lambda}_{1}^{-1}\Xi_{\epsilon}\dot{\Lambda}_{1}^{-1}),$
(12.20)
with $\Xi_{\epsilon}$ defined in Lemma 12.3 and
$\dot{\Lambda}_{1}=\left(\begin{array}[]{cc}I_{d^{2}m}&0\\\ 0&\Lambda_{1}\\\
\end{array}\right).$
From (12.6), (12.8) and since
$\partial\epsilon_{t-i}(\theta)/\partial\theta^{\prime}=-(\tilde{X}_{t-i-1}^{\prime}\otimes
H_{t}^{-1})$ we write
$\displaystyle\hat{\gamma}_{m}^{\epsilon,GLS}$ $\displaystyle=$ $\displaystyle
c_{m}^{\epsilon}+\lim_{T\to\infty}T^{-1}\sum_{t=m+1}^{T}-E\left\\{\epsilon_{t-1}^{m}\otimes\tilde{X}_{t-1}^{\prime}\otimes
H_{t}^{-1}\right\\}(\hat{\theta}_{GLS}-\theta_{0})+o_{p}(T^{-\frac{1}{2}})$
$\displaystyle=$ $\displaystyle
c_{m}^{\epsilon}-\Lambda^{\epsilon,\theta}_{m}(\hat{\theta}_{GLS}-\theta_{0})+o_{p}(T^{-\frac{1}{2}}).$
By (12.20), $T^{1/2}\hat{\gamma}_{m}^{\epsilon,GLS}$ is asymptotically normal
with covariance matrix
$\Sigma^{\epsilon,GLS}=I_{d^{2}m}-\Lambda^{\epsilon,\theta}_{m}\Lambda_{1}^{-1}\Lambda^{\epsilon,\theta\,\prime}_{m}.$
The particular case where the order of the VAR model is $p=0$ is an easy
consequence of the arguments above in this proof. $\quad\square$
Proof of Proposition 2 For the proof of (3.13), we write
$u_{it}=\sum_{j=1}^{d}h_{ij,t}\epsilon_{jt}\quad\mbox{and}\quad
E(u_{it}^{2})=\sum_{j=1}^{d}h_{ij,t}^{2}=\sigma_{ii,t}^{2},\quad\mbox{say.}$
It is clear from (12.4) that
$\lim_{T\to\infty}E(u_{i[Tr]}^{2})=\sigma_{ii}^{2}(r),$
where $\sigma_{ii}^{2}(r)$ is the $i$th diagonal element of $\Sigma(r)$.
Following arguments similar to those used in Phillips and Xu (2005, p. 303) for
the proof of their Lemma 1 (iii), we write
$T^{-1}\sum_{t=1}^{T}u_{it}^{2}=\int_{0}^{1}\sigma_{ii}^{2}(r)dr+o_{p}(1).$
Let us define $\Sigma^{u}=T^{-1}\sum_{t=1}^{T}u_{t}u_{t}^{\prime}$ and
$\hat{\Sigma}^{u}=T^{-1}\sum_{t=1}^{T}\hat{u}_{t}\hat{u}_{t}^{\prime}$. We
have using again the Mean Value Theorem
$\mbox{vec}(\hat{\Sigma}^{u})=\mbox{vec}(\Sigma^{u})+T^{-1}\sum_{t=1}^{T}\left\\{u_{t}(\theta)\otimes\frac{\partial
u_{t}(\theta)}{\partial\theta^{\prime}}+\frac{\partial
u_{t}(\theta)}{\partial\theta^{\prime}}\otimes
u_{t}(\theta)\right\\}_{\theta=\theta^{*}}(\hat{\theta}_{OLS}-\theta_{0}),$
with $\theta^{*}$ between $\hat{\theta}_{OLS}$ and $\theta_{0}$ (the
value $\theta^{*}$ may be different for different components of
$\mbox{vec}(\hat{\Sigma}^{u})$ and $\mbox{vec}(\Sigma^{u})$). Therefore using
$\partial
u_{t}(\theta)/\partial\theta^{\prime}=-(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})$ and from the consistency of $\hat{\theta}_{OLS}$, we write
$T^{\frac{1}{2}}\mbox{vec}\hat{\Sigma}^{u}=T^{\frac{1}{2}}\mbox{vec}\Sigma^{u}+o_{p}(1),$
and
$T^{-1}\sum_{t=1}^{T}\hat{u}_{it}^{2}=\int_{0}^{1}\sigma_{ii}^{2}(r)dr+o_{p}(1),$
so that the result follows from the Slutsky lemma. We obtain the expression
(3.13) noting that
$\hat{\rho}_{m}^{OLS}=\\{I_{m}\otimes(\hat{S}_{u}\otimes\hat{S}_{u})^{-1}\\}\hat{\gamma}_{m}^{OLS}.$
The proof of (3.14) is similar to that of (3.13) and hence is omitted.
$\quad\square$
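For illustration, the following sketch computes the stacked residual autocovariances $\Gamma_{u}(h)$ and the rescaled autocorrelation vector appearing in the display above. It is a minimal Python example with hypothetical helper names; in particular, $\hat{S}_{u}$ is taken here to be the diagonal matrix of residual standard deviations, an assumption consistent with the proof of Proposition 2 but not the authors' exact code.

```python
import numpy as np

def residual_autocov(u, m):
    """Sample autocovariances Gamma(h) = T^{-1} sum_{t=h+1}^T u_t u_{t-h}',
    for h = 1, ..., m, from a (T x d) array of residuals u."""
    T, _ = u.shape
    return [u[h:].T @ u[:T - h] / T for h in range(1, m + 1)]

def residual_autocorr(u, m):
    """Stacked autocorrelation vector: each Gamma(h) is rescaled on both sides
    by the inverse of S_u, the diagonal matrix of residual standard deviations,
    mirroring rho_m = {I_m kron (S_u kron S_u)^{-1}} gamma_m in the text."""
    s_inv = np.diag(1.0 / np.sqrt(np.mean(u ** 2, axis=0)))
    rhos = [s_inv @ G @ s_inv for G in residual_autocov(u, m)]
    return np.concatenate([R.flatten(order="F") for R in rhos])  # column-wise vec

rng = np.random.default_rng(1)
u = rng.standard_normal((200, 2))        # placeholder residuals, d = 2
print(residual_autocorr(u, m=5).shape)   # d^2 * m = 20 stacked autocorrelations
```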
Proof of Proposition 3 In the following, $c$, $C$, … denote constants with
possibly different values from line to line. To simplify notation, let $b$
denote the $d(d+1)/2$ vector of bandwidths $b_{kl}$, $1\leq k\leq l\leq d$.
Below we will simply write _uniformly w.r.t. $b$_ instead of _uniformly w.r.t.
$b_{kl}$, $1\leq k\leq l\leq d$_, and $\sup_{b}$ instead of
$\sup_{b_{kl}\in\mathcal{B}_{T},1\leq k\leq l\leq d}$. Here the norm
$\|\cdot\|$ is the Frobenius norm, which in particular is sub-multiplicative,
that is $\|AB\|\leq\|A\|\|B\|$, and for a positive definite matrix $A$,
$\|A^{-1}\|\leq C[\lambda_{min}(A)]^{-1}$ with $C$ a constant depending only on the
dimension of $A$. Moreover, $\|A\otimes B\|=\|A\|\|B\|$.
To obtain the asymptotic equivalences in equation (5.3) it suffices to notice
that for all $1\leq i\leq d$, $\hat{\sigma}^{2}_{\epsilon}(i)-1=o_{p}(1)$, and
to prove
$\sup_{1\leq i\leq
d}\sup_{b}\left|\check{\sigma}^{2}_{\epsilon}(i)-\hat{\sigma}^{2}_{\epsilon}(i)\right|=o_{p}(1)$
(12.21)
and
$\sup_{b}\left|T^{\frac{1}{2}}\left\\{\Gamma_{ALS}(h)-\Gamma_{GLS}(h)\right\\}\right|=o_{p}(1),$
(12.22)
for any fixed $h\geq 1$. Let us write
$\displaystyle\check{\epsilon}_{t}-\hat{\epsilon}_{t}$ $\displaystyle=$
$\displaystyle(\check{\Sigma}_{t}^{-\frac{1}{2}}-\Sigma_{t}^{-\frac{1}{2}})u_{t}+\check{\Sigma}_{t}^{-\frac{1}{2}}(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})(\hat{\theta}_{GLS}-\hat{\theta}_{ALS})$
$\displaystyle+(\Sigma_{t}^{-\frac{1}{2}}-\check{\Sigma}_{t}^{-\frac{1}{2}})(\tilde{X}_{t-1}^{\prime}\otimes
I_{d})(\hat{\theta}_{GLS}-\theta_{0})$ $\displaystyle=:$
$\displaystyle(\check{\Sigma}_{t}^{-\frac{1}{2}}-\Sigma_{t}^{-\frac{1}{2}})u_{t}+\delta^{\epsilon}_{t}$
where $\|\delta^{\epsilon}_{t}\|\leq\|\tilde{X}_{t-1}\|\check{R}_{T}(b)$ with
$\check{R}_{T}(b)=d\\!\left\\{\|\hat{\theta}_{GLS}\\!-\\!\hat{\theta}_{ALS}\|\sup_{1\leq
t\leq
T}\left\|\check{\Sigma}_{t}^{-\frac{1}{2}}\right\|+\|\hat{\theta}_{GLS}-\theta_{0}\|\sup_{1\leq
t\leq
T}\left\|\check{\Sigma}_{t}^{-\frac{1}{2}}-\Sigma_{t}^{-\frac{1}{2}}\right\|\right\\}$
By Lemma 12.4-(a,b) and given that
$\hat{\theta}_{GLS}-\theta_{0}=O_{p}(T^{-1/2})$ and
$\sup_{b}\|\hat{\theta}_{GLS}-\hat{\theta}_{ALS}\|=o_{p}(T^{-1/2})$, we obtain
that
$\sup_{b}\check{R}_{T}(b)=o_{p}(T^{-1/2}).$ (12.23)
From this and the moment conditions on $(X_{t})$ induced by Assumption A1,
deduce that (12.21) holds true. On the other hand,
$\displaystyle\Gamma_{ALS}(h)-\Gamma_{GLS}(h)$ $\displaystyle=$
$\displaystyle\frac{1}{T}\sum_{t=h+1}^{T}\hat{\epsilon}_{t}(\check{\epsilon}_{t-h}-\hat{\epsilon}_{t-h})^{\prime}+\frac{1}{T}\sum_{t=h+1}^{T}(\check{\epsilon}_{t}-\hat{\epsilon}_{t})\hat{\epsilon}_{t-h}^{\prime}$
$\displaystyle+\frac{1}{T}\sum_{t=h+1}^{T}(\check{\epsilon}_{t}-\hat{\epsilon}_{t})(\check{\epsilon}_{t-h}-\hat{\epsilon}_{t-h})^{\prime}$
$\displaystyle=:$ $\displaystyle R_{1T}(h)+R_{2T}(h)+R_{3T}(h).$
The terms $R_{1T}(h)$ and $R_{2T}(h)$ can be handled in a similar manner;
hence we only analyze $R_{2T}(h)$. Let us write
$\displaystyle R_{2T}(h)\\!\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\\!\frac{1}{T}\sum_{t=h+1}^{T}\\!\\!\left[(\check{\Sigma}_{t}^{-\frac{1}{2}}\\!-\\!\Sigma_{t}^{-\frac{1}{2}})u_{t}+\delta^{\epsilon}_{t}\right]\\!\\!\left[\Sigma_{t-h}^{-\frac{1}{2}}u_{t-h}\\!-\\!\Sigma_{t-h}^{-\frac{1}{2}}(\tilde{X}_{t-h-1}^{\prime}\\!\otimes\\!I_{d})(\hat{\theta}_{GLS}\\!-\\!\theta_{0})\right]^{\prime}$
$\displaystyle=:$
$\displaystyle\\!\\!\\!\\!\frac{1}{T}\sum_{t=h+1}^{T}(\check{\Sigma}_{t}^{-\frac{1}{2}}\\!-\\!\Sigma_{t}^{-\frac{1}{2}})u_{t}u_{t-h}^{\prime}\Sigma_{t-h}^{-\frac{1}{2}\,\prime}+R_{22T}(h;b)=:R_{21T}(h;b)+R_{22T}(h;b).$
By (12.23), the moment conditions on the innovation process
$(\epsilon_{t})$, the rate of convergence of $\hat{\theta}_{GLS}$ and Lemma
12.4-(b), it is clear that $\sup_{b}\|R_{22T}(h;b)\|=o_{p}(T^{-1/2})$. Next
let us write
$\displaystyle R_{21T}(h)$ $\displaystyle=$
$\displaystyle\frac{1}{T}\sum_{t=h+1}^{T}\left(\check{\Sigma}_{t}^{-\frac{1}{2}}-(\check{\Sigma}_{t}^{0})^{-\frac{1}{2}}\right)u_{t}\epsilon_{t-h}^{\prime}+\frac{1}{T}\sum_{t=h+1}^{T}\left((\check{\Sigma}_{t}^{0})^{-\frac{1}{2}}-\stackrel{{\scriptstyle\circ}}{{\Sigma}}\;\\!\\!\\!_{t}^{-\frac{1}{2}}\right)u_{t}\epsilon_{t-h}^{\prime}$
$\displaystyle+\frac{1}{T}\sum_{t=h+1}^{T}\left(\stackrel{{\scriptstyle\circ}}{{\Sigma}}\;\\!\\!\\!_{t}^{-\frac{1}{2}}-\bar{\Sigma}_{t}^{-\frac{1}{2}}\right)u_{t}\epsilon_{t-h}^{\prime}+\frac{1}{T}\sum_{t=h+1}^{T}\left(\bar{\Sigma}_{t}^{-\frac{1}{2}}-\Sigma_{t}^{-\frac{1}{2}}\right)u_{t}\epsilon_{t-h}^{\prime}$
$\displaystyle=:$ $\displaystyle
R_{211T}(h;b)+R_{212T}(h;b)+R_{213T}(h;b)+R_{214T}(h;b),$
where, like in Patilea and Raïssi (2010),
$\stackrel{{\scriptstyle\circ}}{{\Sigma}}_{t}=\stackrel{{\scriptstyle\circ}}{{\Sigma}}_{t}(b)=\sum_{i=1}^{T}w_{ti}\odot{u}_{i}{u}_{i}^{\prime}\quad\text{and}\quad\bar{\Sigma}_{t}=\bar{\Sigma}_{t}(b)=\sum_{i=1}^{T}w_{ti}\odot\Sigma_{i}.$
From classical matrix norm inequalities (see for instance Horn and Johnson,
1994), we have that for any $d\times d-$positive definite matrices $A$ and
$B$, for $a=1$ or $a=-1$,
$\|A^{-\frac{a}{2}}-B^{-\frac{a}{2}}\|\leq
c_{a}\left(\max\\{\|A^{a}\|,\|B^{a}\|\\}\right)^{\frac{1}{2}}\left\|A^{-\frac{1+a}{2}}\right\|\left\|B^{-\frac{1+a}{2}}\right\|\|A-B\|,$
(12.24)
where $c_{a}$ is a constant that depends only on $d$ (by definition
$A^{0}=B^{0}=I_{d}$). Applying this inequality twice we deduce
$\displaystyle\left\|\check{\Sigma}_{t}^{-\frac{1}{2}}-(\check{\Sigma}_{t}^{0})^{-\frac{1}{2}}\right\|$
$\displaystyle\leq$
$\displaystyle\nu_{T}c_{1}c_{-1}\left\|\check{\Sigma}_{t}^{-1}\right\|\left\|(\check{\Sigma}_{t}^{0})^{-1}\right\|$
$\displaystyle\\!\\!\\!\\!\times\left(\max\\{\|\check{\Sigma}_{t}\|,\|\check{\Sigma}_{t}^{0}\|\\}\right)^{\frac{1}{2}}\left(\max\\{\|[(\check{\Sigma}_{t}^{0})^{2}+\nu_{T}I_{d}]^{-1}\|,\|(\check{\Sigma}_{t}^{0})^{-2}\|\\}\right)^{\frac{1}{2}}\\!.$
Take the norm of $R_{211T}$, use the inequality in the last display, Lemma
12.4-(a) below, the moment conditions on the innovation process and the
condition $T\nu_{T}^{2}\rightarrow\infty$ to deduce that
$\sup_{b}\|R_{211T}(h;b)\|=o_{p}(T^{-1/2})$. Next, using similar matrix norm
inequalities, Lemma 12.4-(a) and the Cauchy-Schwarz inequality,
$\displaystyle\sup_{b}\|R_{212T}(h;b)\|\\!\\!$ $\displaystyle\leq$
$\displaystyle\\!\\!O_{p}(1)\sup_{b}\left\\{\frac{1}{T}\sum_{t=h+1}^{T}\left\|(\check{\Sigma}_{t}^{0})-\stackrel{{\scriptstyle\circ}}{{\Sigma}}\;\\!\\!\\!_{t}\right\|\|u_{t}\epsilon_{t-h}^{\prime}\|\right\\}$
$\displaystyle\leq$
$\displaystyle\\!\\!\\!\\!O_{p}(1)\left(\sup_{b}\left\\{\\!\\!\frac{1}{T}\sum_{t=h+1}^{T}\left\|(\check{\Sigma}_{t}^{0})-\stackrel{{\scriptstyle\circ}}{{\Sigma}}\;\\!\\!\\!_{t}\right\|^{2}\right\\}\right)^{\\!\\!1/2}\\!\\!\left(\frac{1}{T}\\!\sum_{t=h+1}^{T}\\!\\!\|u_{t}\epsilon_{t-h}^{\prime}\|^{2}\right)^{\\!\\!1/2}$
$\displaystyle=$ $\displaystyle
O_{p}(1)O_{p}(T^{-1}b_{T}^{-1})\left(\frac{1}{T}\sum_{t=h+1}^{T}\|u_{t}\epsilon_{t-h}^{\prime}\|^{2}\right)^{1/2},$
where for the equality we used Lemma 7.6-(i) in Patilea and Raïssi (2010).
Deduce that $\sup_{b}\|R_{212T}(h;b)\|=o_{p}(T^{-1/2})$. The uniform rate of
convergence for $R_{213T}(h;b)$ is obtained after replacing
$\stackrel{{\scriptstyle\circ}}{{\Sigma}}_{t}^{-1/2}-\bar{\Sigma}_{t}^{-1/2}$
by a Taylor expansion of the power $-1/2$ function for positive definite
matrices, a key and apparently new ingredient provided in section 12.1 below.
The remainder term of the Taylor expansion can be controlled by taking
expectations and using the Cauchy-Schwarz inequality and Lemma 12.4-(d). The term
under the integral that represents the first-order term of this Taylor
expansion can be treated similarly to the term
$\bar{\Sigma}_{t}^{-1}[\Sigma_{i}-u_{i}u_{i}^{\prime}]\bar{\Sigma}_{t}^{-1}u_{t}\tilde{X}_{t-1}^{\prime}$
in the proof of Proposition 4.1 of Patilea and Raïssi (2010). That is,
we use the CLT for martingale difference sequences indexed by classes of
functions; see Bae, Jun and Levental (2010) and also Bae and Choi (1999). Here
the uniformity has to be considered also with respect to the integration
variable $v$, but this can be handled with very little additional effort, as in
Patilea and Raïssi (2010). The details are omitted. Finally, to derive the
uniform order of $R_{214T}(h;b)$, let us write it as
$R_{214T}(h;b)=R_{214T}(h;b)-R_{214T}(h;b_{T})+R_{214T}(h;b_{T})=:r_{214T}(b)+R_{214T}(h;b_{T}).$
The term $R_{214T}(h;b_{T})$ is centered and the variance of each element of
this matrix decreases to zero at the rate $o(1/T)$ (use Lemma 12.4-(d) and
Assumption A1’ to derive the rate of the variance). Deduce that
$R_{214T}(h;b_{T})=o_{p}(T^{-1/2})$. Next consider the $d^{2}$ stochastic
processes corresponding to the elements of $r_{214T}(b)$ and indexed by
$\vartheta\in[c_{min},c_{max}]$ where $b=\vartheta b_{T}$. For each such
process apply Theorem 1 of Bae, Jun and Levental (2010) to deduce that
$\sup_{b}\|r_{214T}(b)\|=o_{p}(T^{-1/2})$. Finally, deduce that
$\sup_{b}\|R_{214T}(h;b)\|=o_{p}(T^{-1/2})$.
To handle the term $R_{3T}(h)$, let us write
$\displaystyle R_{3T}(h)$ $\displaystyle=$
$\displaystyle\\!\frac{1}{T}\sum_{t=h+1}^{T}\left[(\check{\Sigma}_{t}^{-\frac{1}{2}}-\Sigma_{t}^{-\frac{1}{2}})u_{t}+\delta^{\epsilon}_{t}\right]\\!\left[(\check{\Sigma}_{t-h}^{-\frac{1}{2}}-\Sigma_{t-h}^{-\frac{1}{2}})u_{t-h}+\delta^{\epsilon}_{t-h}\right]^{\prime}$
$\displaystyle=:$
$\displaystyle\frac{1}{T}\sum_{t=h+1}^{T}(\check{\Sigma}_{t}^{-\frac{1}{2}}-\Sigma_{t}^{-\frac{1}{2}})u_{t}u_{t-h}^{\prime}(\check{\Sigma}_{t-h}^{-\frac{1}{2}}-\Sigma_{t-h}^{-\frac{1}{2}})^{\prime}+R_{32T}(h)$
$\displaystyle=:$ $\displaystyle R_{31T}(h)+R_{32T}(h).$
The term $R_{32T}(h)$ can be easily handled by taking the norm and using the
bound on $\delta^{\epsilon}_{t}$ and Lemma 12.4-(b) below. For $R_{31T}(h)$, we
can decompose $\check{\Sigma}_{t}^{-1/2}-\Sigma_{t}^{-1/2}$ into four terms
exactly as we did for $R_{21T}(h)$ and apply the same techniques. The details
are omitted and are available from the authors upon request. $\quad\square$
###### Lemma 12.4
Let $\|\cdot\|$ denote the Frobenius norm. Under the Assumptions of
Proposition 3 we have:
(a) As $T\rightarrow\infty$, for $a=1$ or $a=-1$, we have
$\sup_{1\leq t\leq
T}\sup_{b\in\mathcal{B}_{T}}\left\\{\left\|\check{\Sigma}_{t}^{-\frac{1}{2}}\right\|+\left\|\check{\Sigma}_{t}^{a}\right\|+\left\|(\check{\Sigma}_{t}^{0})^{a}\right\|+\left\|\stackrel{{\scriptstyle\circ}}{{\Sigma}}\;\\!\\!\\!_{t}^{a}\right\|+\left\|\bar{\Sigma}_{t}^{a}\right\|\right\\}=O_{p}(1).$
(b) As $T\rightarrow\infty$,
$\sup_{1\leq t\leq
T}\sup_{b\in\mathcal{B}_{T}}\left\|\check{\Sigma}_{t}^{-\frac{1}{2}}-\Sigma_{t}^{-\frac{1}{2}}\right\|=o_{p}(1).$
(c) As $T\rightarrow\infty$,
$\sup_{b\in\mathcal{B}_{T}}\frac{1}{T}\sum_{t=1}^{T}\left\|\bar{\Sigma}_{t}-\Sigma_{t}\right\|^{2}=o(1).$
(d) As $T\rightarrow\infty$,
$\max_{1\leq t\leq
T}E\left(\sup_{b\in\mathcal{B}_{T}}\|\stackrel{{\scriptstyle\circ}}{{\Sigma}}_{t}-\bar{\Sigma}_{t}\|^{4}\right)=O\left(1/(Tb_{T})^{2}\right).$
(12.25)
The proof of Lemma 12.4 is a direct consequence of Lemmas 7.5 and 7.6 of
Patilea and Raïssi (2010) and Lemma A of Xu and Phillips (2008) applied
elementwise, and hence will be omitted.
Proof of Proposition 4 The notations in this proof are those of section 7.
First let us notice that $q_{A}(x)=x/2\\{1+o(1)\\}$ for large values of $x$,
provided the asymptotic law of a test statistic $Q_{A}$ under the null
hypothesis is $\chi_{m}^{2}$ with some $m\geq 1$. In the case where a test
statistic $Q_{A}$ has the asymptotic distribution of $U(\delta_{m}^{OLS})$
defined in equation (4.2),
$\displaystyle q_{A}(x)$ $\displaystyle=$ $\displaystyle-\log
P(U(\delta_{m}^{OLS})>x)\leq-\log
P\left(\max_{i}\\{\delta_{i}^{OLS}\\}U^{2}>x\right)$ $\displaystyle=$
$\displaystyle\frac{x}{2\max_{i}\\{\delta_{i}^{OLS}\\}}\\{1+o(1)\\},$
and
$\displaystyle q_{A}(x)$ $\displaystyle=$ $\displaystyle-\log
P(U(\delta_{m}^{OLS})>x)\geq-\log
P\left(\max_{i}\\{\delta_{i}^{OLS}\\}\Sigma_{j=1}^{d^{2}m}U_{j}^{2}>x\right)$
$\displaystyle=$
$\displaystyle\frac{x}{2\max_{i}\\{\delta_{i}^{OLS}\\}}\\{1+o(1)\\},$
with $U$ and $U_{j}$ independent $\mathcal{N}(0,1)$ variables. Thus to prove
(i) it suffices to show that
$\int_{0}^{1}\Sigma dr\otimes\left(\int_{0}^{1}\Sigma
dr\right)^{\\!\\!-1}\\!\\!\ll\max_{i}\\{\delta_{i}^{OLS}\\}\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma
dr\\!\otimes\\!I_{d}\\!\right)\Sigma_{G^{\otimes
2}}^{-1}\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma dr\\!\otimes\\!I_{d}\\!\right).$
(12.26)
Herein, for any $A$ and $B$ symmetric matrices, $A\ll B$ means that $B-A$ is
positive semidefinite. Now, in the last display, multiply both sides of the
order relationship on the left and on the right by $\left(\int_{0}^{1}\Sigma
dr\right)^{-1/2}\otimes\left(\int_{0}^{1}\Sigma dr\right)^{1/2}$ and deduce
that it suffices to prove
$\displaystyle I_{d}\otimes I_{d}$ $\displaystyle\ll$
$\displaystyle\\!\\!\max_{i}\\{\delta_{i}^{OLS}\\}\left[\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma
dr\\!\right)^{\\!\\!1/2}\\!\otimes\\!\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma
dr\\!\right)^{\\!\\!1/2}\right]\\!\Sigma_{G^{\otimes
2}}^{-1}\left[\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma
dr\\!\right)^{\\!\\!1/2}\\!\\!\otimes\\!\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma
dr\\!\right)^{\\!\\!1/2}\right]$ $\displaystyle=:$
$\displaystyle\\!\\!\max_{i}\\{\delta_{i}^{OLS}\\}\tilde{\Delta}_{m}^{-1}.$
To obtain (12.26) it remains to notice that
$\Delta_{m}^{OLS}=I_{m}\otimes\tilde{\Delta}_{m}$ and that $\delta_{i}^{OLS}$,
$1\leq i\leq d^{2}m$ are the eigenvalues of $\Delta_{m}^{OLS}$.
In (ii) we suppose $\Sigma(\cdot)=\sigma^{2}(\cdot)I_{d}$ and in this case it
suffices to notice that
$\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma
dr\\!\otimes\\!I_{d}\\!\right)\Sigma_{G^{\otimes
2}}^{-1}\left(\\!\int_{0}^{1}\\!\\!\\!\Sigma
dr\\!\otimes\\!I_{d}\\!\right)=\frac{\left(\int_{0}^{1}\sigma^{2}(r)dr\right)^{2}}{\int_{0}^{1}\sigma^{4}(r)dr}I_{d}\otimes
I_{d}\ll I_{d}\otimes I_{d},$
where for the order relationship we use Cauchy-Schwarz inequality, while
$\left(\int_{0}^{1}G(r)^{\prime}\otimes G(r)^{-1}dr\right)^{2}=I_{d}\otimes
I_{d}.$
$\quad\square$
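Since the modified statistics have the non-standard limit law $U(\delta)=\sum_{i}\delta_{i}U_{i}^{2}$ used in this proof, its tail probabilities can be approximated by simple Monte Carlo, as in the sketch below. This is an illustration only (the weights shown are hypothetical); exact methods such as Imhof (1961) or the saddlepoint approximation of Kuonen (1999), both cited in the references, could be used instead.

```python
import numpy as np

def weighted_chi2_pvalue(x, delta, n_draws=200_000, seed=0):
    """Monte Carlo approximation of P( sum_i delta_i * Z_i^2 > x ),
    with Z_i independent standard normal variables."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_draws, len(delta)))
    draws = (z ** 2) @ np.asarray(delta, dtype=float)
    return float(np.mean(draws > x))

# Hypothetical weights and an illustrative observed statistic
delta_hat = [0.02, 0.06, 0.11, 0.20, 0.89, 0.89]
print(weighted_chi2_pvalue(5.0, delta_hat))
```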
### 12.1 A Taylor expansion of the matrix function $f(A)=A^{-1/2}$
Recall that the differential of a function $F$ that maps an $r\times r$ matrix
$X$ into an $r\times r$ matrix $F(X)$ is defined by the equation
$vec(dF)=d\mathbf{f}$
where $\mathbf{f}$ is an $r^{2}\times 1$ vector function such that
$\mathbf{f}(vec(X))=vec(F(X))$. In other words, the (first-order) differential
of $F$ at $X$ is the $r\times r$ matrix obtained by unstacking the
differential $d\mathbf{f}$ at $vec(X)$. See also Schott (2005), section
9.3. Basic properties of vector differentials imply
$0=d(X^{-1/2}XX^{-1/2})=d(X^{-1/2})X^{1/2}+X^{-1/2}d(X)X^{-1/2}+X^{1/2}d\left(X^{-1/2}\right).$
Now, recall that for any positive definite matrix $A$, the Lyapunov equation
$AY+YA=B$ has a unique solution that can be represented as
$Y=\int_{0}^{\infty}\exp(-vA)B\exp(-vA)dv.$
See Horn and Johnson (1994), section 6.5. All these facts bring us to the
following technical result.
###### Lemma 12.5
Let $A$ and $\widehat{A}$ be two positive definite $r\times r$ matrices such
that $0<c_{1}\leq\lambda_{min}(B)<\infty$ for some constant $c_{1}$, for both $B=A$
and $B=\widehat{A}$, where $\lambda_{min}(B)$ is the smallest eigenvalue of
the symmetric matrix $B$. Moreover, suppose that $\|\widehat{A}-A\|\leq c_{2}$
for some small constant $c_{2}$. Then
$\widehat{A}^{-1/2}-A^{-1/2}=-\int_{0}^{\infty}\exp(-vA^{-1/2})A^{-1}\\{\widehat{A}-A\\}A^{-1}\exp(-vA^{-1/2})dv+R_{n},$
where $R_{n}$ is an $r\times r$ symmetric matrix with $\|R_{n}\|\leq
C\|\widehat{A}-A\|^{2}$ and $C$ is a constant depending only on $c_{1}$ and
$c_{2}$.
Proof of Lemma 12.5 Let $\Delta$ be some arbitrary matrix. By Taylor
expansion, for sufficiently small values of $\varepsilon$ and for some
matrices $G_{i}$, $i=1,2,...$
$(A+\varepsilon\Delta)^{-1/2}=A^{-1/2}+\varepsilon
G_{1}+\varepsilon^{2}G_{2}+...=A^{-1/2}+\varepsilon G_{1}+R_{1}$ (12.27)
with $\|R_{1}\|\leq C_{1}\varepsilon^{2}$ and $C_{1}$ a constant depending on
$c_{1}$ and the norm of $\Delta$. This kind of representation can be derived
from a Taylor formula for the vector function $\mathbf{f}$ defined by the
equation $\mathbf{f}(vec(X))=vec(X^{-1/2})$, considered in a neighborhood of
$vec(A)$ for a positive definite matrix $A$. See, for instance, Schott
(2005), section 9.6. On the other hand, recall that for $B$ a square matrix
with $\|B\|<1$, $(I-B)^{-1}=I+B+B^{2}+...=I+B+R_{2}$, where $R_{2}$ is the
remainder of the expansion with $\|R_{2}\|\leq\|B\|^{2}(1-\|B\|)^{-1}$. Thus
for sufficiently small values of $\varepsilon$ we can write
$\displaystyle(A+\varepsilon\Delta)^{-1}$ $\displaystyle=$ $\displaystyle
A^{-1/2}(I+\varepsilon A^{-1/2}\Delta A^{-1/2})^{-1}A^{-1/2}$ $\displaystyle=$
$\displaystyle A^{-1/2}(I-\varepsilon A^{-1/2}\Delta
A^{-1/2}+\varepsilon^{2}A^{-1/2}\Delta A^{-1}\Delta A^{-1/2}+...)A^{-1/2}$
Taking the square on both sides of the first equality in (12.27) and
identifying the coefficients of the powers of $\varepsilon$ in the last display,
deduce that $G_{1}$ is the solution of the Lyapunov equation
$A^{-1/2}Y+YA^{-1/2}=-A^{-1}\Delta A^{-1}.$
Finally, the result follows by taking
$\Delta=(\widehat{A}-A)/\|\widehat{A}-A\|$ and
$\varepsilon=\|\widehat{A}-A\|$. $\quad\square$
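A quick numerical sanity check of Lemma 12.5 can be done by solving the Lyapunov equation from the proof directly. The sketch below is only an illustration with made-up matrices; it uses SciPy's continuous Lyapunov solver in place of the integral representation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def spd_inv_sqrt(X):
    """X^{-1/2} for a symmetric positive definite matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w ** -0.5) @ V.T

rng = np.random.default_rng(0)
r = 3
S = rng.standard_normal((r, r))
A = S @ S.T + r * np.eye(r)            # a positive definite matrix (made up)
A_hat = A + 0.01 * (S + S.T)           # a small symmetric perturbation

# The first-order term G_1 solves the Lyapunov equation from the proof:
#   A^{-1/2} Y + Y A^{-1/2} = -A^{-1} (A_hat - A) A^{-1}
M = spd_inv_sqrt(A)
B = -np.linalg.inv(A) @ (A_hat - A) @ np.linalg.inv(A)
G1 = solve_continuous_lyapunov(M, B)   # solves M Y + Y M^T = B

lhs = spd_inv_sqrt(A_hat) - spd_inv_sqrt(A)
print(np.max(np.abs(lhs - G1)))        # remainder is of order ||A_hat - A||^2
```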
## References
Ahn, S.K. (1988) Distribution for residual autocovariances in multivariate
autoregressive models with structured parameterization. Biometrika 75,
590-593.
Anderson, T.W. (1951) Estimating linear restrictions on regression
coefficients for multivariate normal distributions. Annals of Mathematical
Statistics 22, 327-351.
Bae, J., and Choi, M.J. (1999) The uniform CLT for martingale difference of
function-indexed process under uniform integrable entropy. Communications
Korean Mathematical Society 14, 581-595.
Bae, J., Jun, D., and Levental, S. (2010) The uniform CLT for martingale
differences arrays under the uniformly integrable entropy. Bulletin of the
Korean Mathematical Society 47, 39-51.
Baek, E., and Brock, W. (1992) A general test for nonlinear Granger causality:
bivariate model. Working paper, University of Wisconsin-Madison.
Bahadur, R.R. (1960) Stochastic comparison of tests. Annals of Mathematical
Statistics 31, 276-295.
Beare B.K. (2008) Unit root testing with unstable volatility. Working paper,
Nuffield College, University of Oxford.
Boswijk, H.P. (2010) Nuisance parameter free inference on cointegration
parameters in the presence of a variance shift. Economics Letters 107,
190-193.
Boswijk, H.P., and Franses, H.F. (1992) Dynamic specification and
cointegration. Oxford Bulletin of Economics and Statistics 54, 369-381.
Boubacar Mainassara, Y. (2010) Multivariate portmanteau test for structural
VARMA models with uncorrelated but non-independent error terms. Working paper,
EQUIPPE, Université Lille 3.
Box, G.E.P., and Pierce, D.A. (1970) Distribution of residual autocorrelations
in autoregressive-integrated moving average time series models. Journal of the
American Statistical Association 65, 1509-1526.
Brockwell, P.J., and Davis, R.A. (1991) Time Series: Theory and methods.
Springer, New York.
Brüggemann R., Lütkepohl H., and Saikkonen P. (2006) Residual autocorrelation
testing for vector error correction models. Journal of Econometrics 134,
579-604.
Cavaliere, G. (2004) Unit root tests under time-varying variance shifts.
Econometric Reviews 23, 259-292.
Cavaliere, G., Rahbek, A. and Taylor, A.M.R. (2010) Testing for co-integration
in vector autoregressions with non-stationary volatility. Journal of
Econometrics 158, 7-24.
Cavaliere, G., and Taylor, A.M.R. (2007) Testing for unit roots in time series
models with non-stationary volatility. Journal of Econometrics 140, 919-947.
Cavaliere, G., and Taylor, A.M.R. (2008) Time-transformed unit root tests for
models with non-stationary volatility. Journal of Time Series Analysis 29,
300-330.
Chitturi, R. V. (1974) Distribution of residual autocorrelations in multiple
autoregressive schemes. Journal of the American Statistical Association 69,
928-934.
Duchesne, P. (2005) Testing for serial correlation of unknown form in
cointegrated time series models. Annals of the Institute of Statistical
Mathematics 57, 575-595.
Edgerton, D., and Shukur, G. (1999) Testing autocorrelation in a system
perspective. Econometric Reviews 18, 343-386.
Engle, R.F. (1982) Autoregressive conditional heteroscedasticity with
estimates of the variance of UK inflation. Econometrica 50, 987-1008.
Francq, C., and Raïssi, H. (2007) Multivariate portmanteau test for
autoregressive models with uncorrelated but nonindependent errors. Journal of
Time Series Analysis 28, 454-470.
Francq, C., Roy, R., and Zakoïan, J.-M. (2005) Diagnostic checking in ARMA
models with uncorrelated errors. Journal of the American Statistical
Association 100, 532-544.
Horn, R.A., and Johnson, C.R. (1994) _Topics in Matrix Analysis_. Cambridge
University Press, Cambridge.
Hosking, J. R. M. (1980) The multivariate portmanteau statistic. Journal of
the American Statistical Association 75, 343-386.
Imhof, J. P. (1961) Computing the distribution of quadratic forms in normal
variables. Biometrika 48, 419-426.
Johansen, S. (1995) Likelihood-Based Inference in Cointegrated Vector
Autoregressive Models. Oxford University Press, New York.
Jones, J.D. (1989) A comparison of lag-length selection techniques in tests of
Granger causality between money growth and inflation: evidence for the US,
1959-86. Applied Economics 21, 809-822.
Katayama, N. (2008) An improvement of the portmanteau statistic. Journal of
Time Series Analysis 29, 359-370.
Kim, T.-H., Leybourne, S., and Newbold, P. (2002) Unit root tests with a break
in innovation variance. Journal of Econometrics 103, 365-387.
Kuonen, D. (1999) Saddlepoint approximations for distributions of quadratic
forms in normal variables. Biometrika 86, 929-935.
Ljung, G.M. and Box, G.E.P. (1978) On measure of lack of fit in time series
models. Biometrika 65, 297-303.
Lobato, I., Nankervis, J.C., and Savin, N.E. (2002) Testing for zero
autocorrelation in the presence of statistical dependence. Econometric Theory
18, 730-743.
Lütkepohl, H. (2005) New Introduction to Multiple Time Series Analysis.
Springer, Berlin.
Patilea, V., and Raïssi, H. (2010) Adaptive estimation of vector
autoregressive models with time-varying variance: application to testing
linear causality in mean. Working document IRMAR-INSA; arXiv:1007.1193v2
Phillips, P.C.B., and Xu, K.L. (2005) Inference in autoregression under
heteroskedasticity. Journal of Time Series Analysis 27, 289-308.
Ramey, V.A., and Vine, D.J. (2006) Declining volatility in the U.S. automobile
industry. The American Economic Review 96, 1876-1889.
Raïssi, H. (2010) Autocorrelation-based tests for vector error correction
models with uncorrelated but nonindependent errors. Test 19, 304-324.
Sensier, M., and van Dijk, D. (2004) Testing for volatility changes in U.S.
macroeconomic time series. Review of Economics and Statistics 86, 833-839.
Schott, J.R. (2005) Matrix analysis for statistics (2nd ed.).Wiley series in
probability and statistics, Wiley, Hoboken, N.J.
Stock, J.H., and Watson, M.W. (1989) Interpreting the evidence on money-income
causality. _Journal of Econometrics_ 40, 161-181.
Thornton, D.L. and Batten, D.S. (1985) Lag-length selection and tests of
Granger causality between money and income. _Journal of Money, Credit, and
Banking_ 17, 164-178.
van der Vaart, A.W. (1998) Asymptotic Statistics. Cambridge University Press,
Cambridge.
Vilasuso, J. (2001) Causality tests and conditional heteroskedasticity: Monte
Carlo evidence. _Journal of Econometrics_ 101, 25-35.
Watson, M.W. (1999) Explaining the increased variability in long-term interest
rates. Federal Reserve Bank of Richmond Economic Quarterly 85/4, 71-96.
White, H. (1980) A heteroskedasticity consistent covariance matrix estimator
and a direct test for heteroskedasticity. _Econometrica_ 48, 817-838.
Xu, K.L., and Phillips, P.C.B. (2008) Adaptive estimation of autoregressive
models with time-varying variances. Journal of Econometrics 142, 265-280.
## 13 Appendix B: Tables and Figures
Table 1: Empirical size (in %) of the portmanteau tests with iid standard
Gaussian errors.
Case | $m=5$, $T=50$ | $m=5$, $T=100$ | $m=5$, $T=200$ | $m=15$, $T=50$ | $m=15$, $T=100$ | $m=15$, $T=200$
---|---|---|---|---|---|---
$LB_{m}^{S}$ | 2.6 | 4.6 | 5.5 | 4.4 | 4.1 | 4.6
$LB_{m}^{OLS}$ | 4.2 | 4.9 | 5.2 | 11.5 | 8.1 | 6.9
$LB_{m}^{ALS}$ | 2.2 | 4.1 | 5.1 | 4.1 | 3.7 | 4.4
$LB_{m}^{GLS}$ | 2.0 | 3.9 | 5.1 | 3.7 | 3.8 | 4.3
$\widetilde{LB}_{m}^{OLS}$ | 14.2 | 9.0 | 6.5 | 30.9 | 15.4 | 10.5
$\widetilde{LB}_{m}^{ALS}$ | 6.3 | 5.9 | 5.6 | 15.8 | 7.4 | 6.8
$\widetilde{LB}_{m}^{GLS}$ | 4.6 | 4.7 | 4.8 | 8.6 | 8.1 | 8.1
Table 2: Empirical size (in %) of the portmanteau tests. The innovations are
heteroscedastic with an abrupt break at $T/2$.
Case | $m=5$, $T=50$ | $m=5$, $T=100$ | $m=5$, $T=200$ | $m=15$, $T=50$ | $m=15$, $T=100$ | $m=15$, $T=200$
---|---|---|---|---|---|---
$LB_{m}^{S}$ | 27.9 | 35.3 | 40.1 | 35.7 | 63.0 | 76.7
$LB_{m}^{OLS}$ | 4.5 | 3.3 | 4.8 | 5.4 | 6.1 | 6.0
$LB_{m}^{ALS}$ | 3.2 | 3.7 | 5.0 | 3.8 | 3.8 | 3.9
$LB_{m}^{GLS}$ | 2.6 | 4.2 | 5.7 | 3.7 | 4.2 | 4.7
$\widetilde{LB}_{m}^{OLS}$ | 28.9 | 13.8 | 9.7 | 30.9 | 21.9 | 15.2
$\widetilde{LB}_{m}^{ALS}$ | 18.6 | 8.7 | 7.1 | 35.0 | 15.9 | 9.0
$\widetilde{LB}_{m}^{GLS}$ | 6.4 | 5.3 | 6.3 | 12.5 | 10.0 | 9.7
Table 3: Empirical size (in %) of the portmanteau tests. The innovations are
heteroscedastic with trending behaviour.
Case | $m=5$, $T=50$ | $m=5$, $T=100$ | $m=5$, $T=200$ | $m=15$, $T=50$ | $m=15$, $T=100$ | $m=15$, $T=200$
---|---|---|---|---|---|---
$LB_{m}^{S}$ | 12.8 | 15.1 | 19.2 | 18.4 | 27.5 | 36.8
$LB_{m}^{OLS}$ | 4.9 | 4.5 | 4.8 | 9.2 | 7.3 | 6.3
$LB_{m}^{ALS}$ | 4.4 | 5.0 | 5.0 | 8.0 | 6.0 | 6.0
$LB_{m}^{GLS}$ | 2.3 | 3.7 | 5.2 | 2.8 | 4.0 | 4.0
$\widetilde{LB}_{m}^{OLS}$ | 32.4 | 22.6 | 15.3 | 38.3 | 25.1 | 16.5
$\widetilde{LB}_{m}^{ALS}$ | 10.0 | 4.2 | 3.3 | 22.7 | 6.7 | 5.2
$\widetilde{LB}_{m}^{GLS}$ | 5.7 | 5.3 | 6.3 | 10.8 | 9.7 | 9.2
Table 4: The empirical means and standard deviations of the weights in the
sums (4.2), (5.4) and their GLS counterparts over the $N=1000$ iterations. The
innovations are heteroscedastic with trending behaviour.
$i$ | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
$\hat{\delta}_{i}^{ols}$ | $0.02_{[0.02]}$ | $0.06_{[0.04]}$ | $0.11_{[0.06]}$ | $0.2_{[0.09]}$ | $0.89_{[0.11]}$
$\hat{\delta}_{i}^{als}$ | $0.03_{[0.02]}$ | $0.05_{[0.02]}$ | $0.09_{[0.03]}$ | $0.11_{[0.03]}$ | $1.00_{[0.00]}$
$\hat{\delta}_{i}^{gls}$ | $0.05_{[0.04]}$ | $0.06_{[0.04]}$ | $0.15_{[0.08]}$ | $0.17_{[0.08]}$ | $1.00_{[0.00]}$
$i$ | 6 | 7 | 8 | 9 | 10
$\hat{\delta}_{i}^{ols}$ | $0.89_{[0.11]}$ | $0.89_{[0.11]}$ | $0.91_{[0.11]}$ | $1.11_{[0.13]}$ | $1.11_{[0.13]}$
$\hat{\delta}_{i}^{als}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$
$\hat{\delta}_{i}^{gls}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$
$i$ | 11 | 12 | 13 | 14 | 15
$\hat{\delta}_{i}^{ols}$ | $1.11_{[0.13]}$ | $1.12_{[0.13]}$ | $1.36_{[0.17]}$ | $1.36_{[0.17]}$ | $1.36_{[0.17]}$
$\hat{\delta}_{i}^{als}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$
$\hat{\delta}_{i}^{gls}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$
$i$ | 16 | 17 | 18 | 19 | 20
$\hat{\delta}_{i}^{ols}$ | $1.36_{[0.17]}$ | $1.69_{[0.27]}$ | $1.71_{[0.27]}$ | $1.71_{[0.27]}$ | $1.71_{[0.27]}$
$\hat{\delta}_{i}^{als}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$
$\hat{\delta}_{i}^{gls}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$ | $1.00_{[0.00]}$
Table 5: The estimators of the autoregressive parameters of the VAR(1) model
for the balance data for the U.S..
Parameter | $\theta_{1}$ | $\theta_{2}$ | $\theta_{3}$ | $\theta_{4}$
---|---|---|---|---
ALS estimate | $0.33_{[0.08]}$ | $0.02_{[0.02]}$ | $-0.35_{[0.30]}$ | $-0.07_{[0.08]}$
OLS estimate | $0.45_{[0.23]}$ | $0.00_{[0.02]}$ | $-1.02_{[0.60]}$ | $0.1_{[0.17]}$
Table 6: The balance data for the U.S.: the $p$-values of the ARCH-LM tests
(in %) for the components of the ALS-residuals of a VAR(1).
lags | 2 | 5 | 10
---|---|---|---
$\check{\epsilon}_{1t}$ | 22.26 | 45.05 | 36.44
$\check{\epsilon}_{2t}$ | 25.32 | 73.32 | 77.18
Table 7: The $p$-values of the portmanteau tests (in %) for the checking of
the adequacy of the VAR(1) model for the U.S. trade balance data.
$m$ | 5 | 15
---|---|---
$LB_{m}^{S}$ | 0.00 | 0.01
$LB_{m}^{OLS}$ | 50.80 | 99.94
$LB_{m}^{ALS}$ | 6.36 | 15.95
$\widetilde{LB}_{m}^{OLS}$ | 0.00 | 7.87
$\widetilde{LB}_{m}^{ALS}$ | 5.61 | 15.50
Table 8: The balance data for the U.S.: the test statistics of the portmanteau
tests used for checking the adequacy of the VAR(1) model. The
$\underline{\tilde{Q}}_{m}^{OLS}$ and $\underline{\tilde{Q}}_{m}^{ALS}$
correspond to the statistics of the LB version of the Katayama portmanteau
tests with standard asymptotic distribution.
$m$ | 5 | 15
---|---|---
$\tilde{Q}_{m}^{OLS}$ | 6.84 | 106.34
$\tilde{Q}_{m}^{ALS}$ | 25.73 | 66.83
$\underline{\tilde{Q}}_{m}^{OLS}$ | 6.84 | 106.34
$\underline{\tilde{Q}}_{m}^{ALS}$ | 48.73 | 66.83
Table 9: The estimators of the autoregressive parameters of the VAR(4) model
for the U.S. energy-transportation price indexes.
Parameter | $\theta_{1}$ | $\theta_{2}$ | $\theta_{3}$ | $\theta_{4}$
---|---|---|---|---
ALS estimate | $0.36_{[0.08]}$ | $0.37_{[0.14]}$ | $0.10_{[0.04]}$ | $0.43_{[0.08]}$
OLS estimate | $0.74_{[0.32]}$ | $1.08_{[0.67]}$ | $-0.08_{[0.13]}$ | $0.10_{[0.28]}$
Parameter | $\theta_{5}$ | $\theta_{6}$ | $\theta_{7}$ | $\theta_{8}$
ALS estimate | $0.06_{[0.08]}$ | $-0.02_{[0.14]}$ | $-0.09_{[0.04]}$ | $-0.13_{[0.08]}$
OLS estimate | $-0.53_{[0.35]}$ | $-1.26_{[0.73]}$ | $0.10_{[0.14]}$ | $0.27_{[0.30]}$
Parameter | $\theta_{9}$ | $\theta_{10}$ | $\theta_{11}$ | $\theta_{12}$
ALS estimate | $0.18_{[0.08]}$ | $0.13_{[0.14]}$ | $-0.05_{[0.04]}$ | $0.01_{[0.08]}$
OLS estimate | $0.21_{[0.24]}$ | $0.13_{[0.52]}$ | $-0.05_{[0.14]}$ | $0.03_{[0.31]}$
Parameter | $\theta_{13}$ | $\theta_{14}$ | $\theta_{15}$ | $\theta_{16}$
ALS estimate | $0.17_{[0.08]}$ | $0.15_{[0.14]}$ | $-0.07_{[0.04]}$ | $0.03_{[0.08]}$
OLS estimate | $0.32_{[0.26]}$ | $0.64_{[0.57]}$ | $-0.17_{[0.15]}$ | $0.31_{[0.32]}$
Table 10: The $p$-values of the portmanteau tests (in %) for the checking of
the adequacy of the VAR(4) model for the U.S. energy-transportation price
indexes (n.a.: not available).
$m$ | 3 | 6 | 12
---|---|---|---
$LB_{m}^{S}$ | n.a. | 2.08 | 0.00
$LB_{m}^{OLS}$ | 100.00 | 100.00 | 100.00
$LB_{m}^{ALS}$ | 85.44 | 98.72 | 10.07
$\widetilde{LB}_{m}^{OLS}$ | n.a. | n.a. | n.a.
$\widetilde{LB}_{m}^{ALS}$ | n.a. | n.a. | n.a.
Table 11: VAR modeling of the energy-transportation price indexes: the test
statistics of the portmanteau tests used for checking the adequacy of the
VAR(4) model.
$m$ | 3 | 6 | 12
---|---|---|---
$\tilde{Q}_{m}^{OLS}$ | 1.87 | 18.06 | 117.51
$\tilde{Q}_{m}^{ALS}$ | 5.03 | 9.50 | 57.69
$\tilde{\underline{Q}}_{m}^{OLS}$ | n.a. | n.a. | n.a.
$\tilde{\underline{Q}}_{m}^{ALS}$ | n.a. | n.a. | n.a.
Table 12: VAR modeling of the energy-transportation price indexes: the weights
of the non standard distributions of the portmanteau tests used for checking
the adequacy of the VAR(4) model with $m=3$.
$i$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
---|---|---|---|---|---|---|---|---|---|---|---|---
$\hat{\delta}_{i}^{ols}$ | 0.05 | 0.48 | 1.94 | 2.05 | 2.66 | 2.85 | 4.97 | 6.58 | 10.51 | 16.06 | 64.14 | 312.18
$\hat{\delta}_{i}^{als}$ | 0.01 | 0.09 | 0.24 | 0.89 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
Figure 1: The asymptotic variance $\Sigma_{GLS}(2,2)$ on the left and the
ratio $\Sigma_{OLS}(6,6)/\Sigma_{OLS}^{S}(6,6)$ on the right for
$\tau_{1}=\tau_{2}$ in Example 3.1.
Figure 2: The same as in Figure 1 but for $\tau_{1}\neq\tau_{2}$ in general.
Figure 3: Empirical power (in %) of the portmanteau tests with $m=10$. The
adequacy of a VAR(1) model to VAR(2) processes is tested. The innovations are
homoscedastic on the right. The variance exhibits a break at $T/2$ on the left
and a trending behavior in the middle.
Figure 4: Empirical power (in %) of the portmanteau tests with $m=10$. The
noncorrelation of VAR(1) processes is tested. The variance has a trending
behavior on the left and exhibits an abrupt shift on the right.
Figure 5: The balance on merchandise trade for the U.S. on the left and the
balance on services for the U.S. on the right in billions of dollars from
1/1/1970 to 10/1/2009, T=160. Data source: the Research Division of the
Federal Reserve Bank of Saint Louis, www.research.stlouis.org.
Figure 6: The differences of the balance on merchandise trade (on the left)
and of the balance on services for the U.S. (on the right).
Figure 7: The cross validation score (CV) for the ALS estimation of the
VAR(1) model for the differences of the balance on merchandise trade and on
services in the U.S..
Figure 8: The ALS residuals of a VAR(1) for the differences of the balance on
merchandise trade and on services in the U.S.. The first component of the ALS
residuals is on the left and the second is on the right.
Figure 9: The same as in Figure 8 but for the OLS residuals.
Figure 10: The balance data for the U.S.: the autocorrelations of the squares
of the first component of the ALS residuals (on the left) and of the second
component of the ALS residuals (on the right).
Figure 11: The balance data for the U.S.: the logarithms of the
$\hat{u}_{1t}^{2}$’s (full line) and the logarithms of the non-parametric
estimates of Var($u_{1t}$) (dotted line) on the left and the same for the
$\hat{u}_{2t}^{2}$’s and Var($u_{2t}$) on the right.
Figure 12: The balance data for the U.S.: estimation of the correlation
between the components of the error process.
Figure 13: The balance data for the U.S.: the ALS residual autocorrelations
$\hat{R}_{ALS}^{11}(h)$ (on the left) and $\hat{R}_{ALS}^{22}(h)$ (on the
right), with obvious notations. The 95% confidence bounds are obtained using
(3.7) and (5.3).
Figure 14: The same as in Figure 13 but for $\hat{R}_{ALS}^{21}(h)$ (on the
left) and $\hat{R}_{ALS}^{12}(h)$ (on the right).
Figure 15: The balance data for the U.S.: the OLS residual autocorrelations
$\hat{R}_{OLS}^{11}(h)$ (on the left) and $\hat{R}_{OLS}^{22}(h)$ (on the
right). The full-line 95% confidence bounds are obtained using (3.13). The
dotted-line 95% confidence bounds are obtained using the standard result
(3.9).
Figure 16: The same as in Figure 15 but for $\hat{R}_{OLS}^{21}(h)$ (on the
left) and $\hat{R}_{OLS}^{12}(h)$ (on the right).
Figure 17: The energy price index (full line) and the transportation price
index (dotted line) in the U.S. from 1/1/1957 to 2/1/2011, T=648. Data source:
the Research Division of the Federal Reserve Bank of Saint Louis,
www.research.stlouis.org.
Figure 18: The differences of the energy price (on the left) and of the
transportation price indexes for the U.S. (on the right).
Figure 19: The cross validation score (CV) for the ALS estimation of the
VAR(4) model for the differences of the energy-transportation price indexes in
the U.S..
Figure 20: The ALS residuals of a VAR(4) model for the differences of the
energy and transportation price indexes for the U.S.. The first component of
the ALS residuals is on the left and the second is on the right.
Figure 21: The same as in Figure 20 but for the OLS residuals.
Figure 22: The energy-transportation data for the U.S.: the logarithms of the
$\hat{u}_{1t}^{2}$’s (full line) and the logarithms of the non-parametric
estimates of Var($u_{1t}$) (dotted line) on the left and the same for the
$\hat{u}_{2t}^{2}$’s and Var($u_{2t}$) on the right.
Figure 23: The energy-transportation data for the U.S.: estimation of the
correlation between the components of the error process.
Figure 24: The energy-transportation data for the U.S.: the ALS residual
autocorrelations $\hat{R}_{ALS}^{11}(h)$ (on the left) and
$\hat{R}_{ALS}^{22}(h)$ (on the right), with obvious notations. The 95%
confidence bounds are obtained using (3.7) and (5.3).
Figure 25: The same as in Figure 24 but for $\hat{R}_{ALS}^{21}(h)$ (on the
left) and $\hat{R}_{ALS}^{12}(h)$ (on the right).
Figure 26: The energy-transportation data for the U.S.: the OLS residual
autocorrelations $\hat{R}_{OLS}^{11}(h)$ (on the left) and
$\hat{R}_{OLS}^{22}(h)$ (on the right). The full-line 95% confidence bounds
are obtained using (3.13). The dotted-line 95% confidence bounds are obtained
using the standard result (3.9).
Figure 27: The same as in Figure 26 but for $\hat{R}_{OLS}^{21}(h)$ (on the
left) and $\hat{R}_{OLS}^{12}(h)$ (on the right).
Figure 28: The energy-transportation data for the U.S.: the OLS residual
autocorrelations $\hat{R}_{OLS}^{11}(h)$ (on the left) and
$\hat{R}_{OLS}^{22}(h)$ (on the right). The dotted-line 95% confidence bounds
are obtained using the standard result (3.9).
Figure 29: The same as in Figure 28 but for $\hat{R}_{OLS}^{21}(h)$ (on the
left) and $\hat{R}_{OLS}^{12}(h)$ (on the right).
|
arxiv-papers
| 2011-05-18T14:09:25 |
2024-09-04T02:49:18.913067
|
{
"license": "Public Domain",
"authors": "Valentin Patilea, Hamdi Ra\\\"issi",
"submitter": "Hamdi Raissi",
"url": "https://arxiv.org/abs/1105.3638"
}
|
1105.3824
|
# Magnetization of multicomponent ferrofluids
I. Szalai1 and S. Dietrich2,3 1Institute of Physics and Mechatronics,
University of Pannonia, H-8201 Veszprém, PO Box 158, Hungary
2Max-Planck-Institut für Intelligente Systeme, Heisenbergstr. 3, D-70569
Stuttgart, Germany
3Institut für Theoretische und Angewandte Physik, Universität Stuttgart,
Pfaffenwaldring 57, D-70569 Stuttgart, Germany szalai@almos.vein.hu,
dietrich@mf.mpg.de
###### Abstract
The solution of the mean spherical approximation (MSA) integral equation for
isotropic multicomponent dipolar hard sphere fluids without external fields is
used to construct a density functional theory (DFT), which includes external
fields, in order to obtain an analytical expression for the external field
dependence of the magnetization of ferrofluidic mixtures. This DFT is based on
a second-order Taylor series expansion of the free energy density functional
of the anisotropic system around the corresponding isotropic MSA reference
system. The ensuing results for the magnetic properties are in quantitative
agreement with our canonical ensemble Monte Carlo simulation data presented
here.
## 1 Introduction
Ferrofluids are colloidal suspensions of single domain ferromagnetic grains
dispersed in a solvent. The stabilization of such suspensions is usually
obtained by coating the magnetic particles with polymer or surfactant layers
or by using electric double layer formation. Since each particle of a
ferrofluid possesses a permanent magnetic dipole moment, upon integrating out
the degress of freedom of the solvent, which gives rise to effective pair
potentials, dispersions of ferrocolloids can be considered as paradigmatic
realizations of dipolar liquids [1]. The effective interactions of such
magnetic particles are often modeled by dipolar hard-sphere (DHS) [2, 3],
dipolar Yukawa [4], or Stockmayer [5] interaction potentials. The most
frequently applied methods to describe ferrofluids encompass mean field
theories [6, 7], thermodynamical perturbation theory [2], integral equation
theories [8, 9, 10], various DFTs [11, 12, 13, 14, 15, 4], as well as Monte
Carlo [16, 17] and molecular dynamics [18, 19, 20] simulations.
Within the framework of DFT and the mean spherical approximation (MSA),
previously we have proposed an analytical equation [4] for the magnetic field
dependence of the magnetization of one-component ferrofluids, which turned out
to be reliable as compared with corresponding Monte Carlo (MC) simulation
data. For this kind of system the effect of an external magnetic field has
been taken into account by a DFT method, which approximates the free energy
functional of the anisotropic system with an external field by a second-order
Taylor series expansion around the corresponding isotropic reference system
without an external field. The expansion coefficients are the direct
correlation functions which for the studied isotropic dipolar hard-sphere
(DHS) and dipolar Yukawa reference systems can be obtained analytically from
Refs. [21, 22].
However, in practice the magnetic colloidal suspensions are often
multicomponent. In order to describe the magnetization of ferrofluidic
mixtures we extend our one-component theory to multicomponent systems. This
extension is based on the multicomponent MSA solution obtained by Adelman and
Deutch [23]. They showed that the properties of equally sized hard spheres
with different dipole moments can be expressed in terms of those of an
effective single component system. Because MSA is a linear response theory
Adelman and Deutch could predict only the initial slope (or zero-field
susceptibility) of the magnetization curve. Using their MSA solutions as those
of a reference system, in the following we present DFT calculations of the
full magnetization curves of equally sized, dipolar hard sphere mixtures.
These results are compared with MC simulation data.
## 2 Microscopic model and MSA solution
We consider dipolar hard-sphere (DHS) fluid mixtures which consist of $C$
components. The constitutive particles have the same diameter $\sigma$ but the
strength $m_{a}$ of the embedded point dipole can be different for the
components $a=1,...,C$. In the following the indices $a$ and $b$ refer to the
components while the indices $i$ and $j$ refer to individual particles. The
system is characterized by the following pair potential:
$w_{ij}^{DHS}({\bf{r}}_{12},\omega_{1},\omega_{2})=w^{HS}_{ij}(r_{12})+w^{DD}_{ij}({\bf{r}}_{12},\omega_{1},\omega_{2}),$
(1)
where $w^{HS}_{ij}$ and $w^{DD}_{ij}$ are the hard-sphere and the dipole-
dipole interaction pair potential, respectively. The hard-sphere pair
potential is given by
$w^{HS}_{ij}(r_{12})=\left\\{\begin{array}[]{lll}\infty&,&r_{12}<\sigma\\\
0&,&r_{12}\geq\sigma.\end{array}\right.\ $ (2)
The dipole-dipole pair potential is
$w^{DD}_{ij}({\bf{r}}_{12},\omega_{1},\omega_{2})=-\frac{m_{i}m_{j}}{r_{12}^{3}}D(\omega_{12},\omega_{1},\omega_{2}),$
(3)
with the rotationally invariant function
$D(\omega_{12},\omega_{1},\omega_{2})=3[\widehat{\mathbf{m}}_{1}(\omega_{1})\cdot\widehat{\mathbf{r}}_{12}][\widehat{\mathbf{m}}_{2}(\omega_{2})\cdot\widehat{\mathbf{r}}_{12}]-[\widehat{\mathbf{m}}_{1}(\omega_{1})\cdot\widehat{\mathbf{m}}_{2}(\omega_{2})],$
(4)
where particle 1 (2) of type $i$ ($j$) is located at ${\mathbf{r}}_{1}$
(${\mathbf{r}}_{2}$) and carries a dipole moment of strength $m_{i}$ ($m_{j}$)
with an orientation given by the unit vector
$\widehat{\mathbf{m}}_{1}(\omega_{1})$
($\widehat{\mathbf{m}}_{2}(\omega_{2})$) with polar angles
$\omega_{1}=(\theta_{1},\phi_{1})$ ($\omega_{2}=(\theta_{2},\phi_{2})$);
${\mathbf{r}}_{12}={\mathbf{r}}_{1}-{\mathbf{r}}_{2}$ is the difference vector
between the center of particle 1 and the center of particle 2 with
$r_{12}=|{\mathbf{r}}_{12}|$.
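For orientation, the pair energy defined by Eqs. (1)-(4) is straightforward to evaluate from the full dipole moment vectors $\mathbf{m}_{1}$, $\mathbf{m}_{2}$. The following minimal Python sketch is our own illustration (the function name and the use of reduced units with $\sigma=1$ are assumptions, not part of the model definition):

```python
# Illustrative sketch only: DHS pair energy of Eqs. (1)-(4) in reduced units.
import numpy as np

def dhs_pair_energy(r12, m1_vec, m2_vec, sigma=1.0):
    """Pair energy for two point dipoles separated by the vector r12."""
    r = np.linalg.norm(r12)
    if r < sigma:                      # hard-sphere overlap, Eq. (2)
        return np.inf
    rhat = r12 / r
    # dipole-dipole term, Eqs. (3)-(4), written with the full moment vectors
    return -(3.0 * np.dot(m1_vec, rhat) * np.dot(m2_vec, rhat)
             - np.dot(m1_vec, m2_vec)) / r**3
```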
Within the framework of MSA Adelman and Deutch [23] presented an analytical
solution for the aforementioned $C$-component isotropic dipolar fluid mixture
in the absence of external fields. The importance of their contribution is
that it provides simple analytic expressions for correlation functions, the
dielectric constant (in our case the zero-field magnetic susceptibility), and
thermodynamic functions. In our envisaged DFT calculations for dipolar
mixtures with external fields we consider the isotropic DHS fluid mixture
without external field as a reference system which is described by the
following MSA second-order direct correlation function:
$\displaystyle
c^{(2)}_{ab}(\mathbf{r}_{12},\omega_{1},\omega_{2},\rho_{1},...,\rho_{C},T,m_{1},...,m_{C})=c_{HS}^{(2)}(r_{12},\rho)+$
$\displaystyle\frac{m_{a}m_{b}}{\widehat{m}^{2}}\left[{c_{D}^{(2)}(r_{12},{\rho},\widehat{n})D(\omega_{12},\omega_{1},\omega_{2})+c_{\Delta}^{(2)}(r_{12},{\rho},\widehat{n})\Delta(\omega_{1},\omega_{2})}\right],$
(5)
where $\rho=N/V=\sum_{a=1}^{C}\rho_{a}$ is the total number density in the
volume $V$ of the system, $\rho_{a}=N_{a}/V$ is the number density of species
$a$, $c_{HS}^{(2)}$ is the one-component hard sphere direct correlation
function, while $c_{D}^{(2)}$ and $c_{\Delta}^{(2)}$ are correlation functions
determined by Wertheim [21] for the one-component dipolar MSA fluid at the
same temperature but evaluated for an effective dipole moment $m=\widehat{m}$
and at an effective packing fraction $\eta=\widehat{\eta}$. Accordingly, in
Eq. (5) we have introduced
$\widehat{m}=\sqrt{\frac{\sum_{a=1}^{C}{m_{a}^{2}}}{C}},\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\widehat{\eta}=\frac{\pi}{6}\sigma^{3}\frac{\sum_{a=1}^{C}m_{a}^{2}\rho_{a}}{\widehat{m}^{2}}$
(6)
and the rotationally invariant function
$\Delta(\omega_{1},\omega_{2})=\widehat{\mathbf{m}}_{1}(\omega_{1})\cdot\widehat{\mathbf{m}}_{2}(\omega_{2}).$
(7)
In order to explain the dependence on $\widehat{n}$, it is convenient to introduce
a new parameter
$\widehat{\xi}\equiv\widehat{\xi}(\widehat{\chi}_{{}_{L}})=\widehat{\eta}\,\widehat{n}$
which is given by the implicit equation
$4\pi\widehat{\chi}_{{}_{L}}=q(2\widehat{\xi}\,)-q(-\widehat{\xi}\,),$ (8)
which has the same form as the corresponding equation for the one-component
system (see Refs. [4, 21]) where
$\widehat{\chi}_{{}_{L}}=\frac{1}{3}\beta\sum_{a=1}^{C}\rho_{a}m_{a}^{2}$ (9)
is the averaged Langevin susceptibility and $\beta=1/({k_{B}T})$ is the
inverse temperature with the Boltzmann constant $k_{B}$. The function $q(x)$
is the reduced inverse compressibility function of hard spheres within the
Percus-Yevick approximation:
$q(x)=\frac{(1+2x)^{2}}{(1-x)^{4}}.$ (10)
For the zero-field (initial) magnetic susceptibility $\chi$ of the mixture the
theory by Adelman and Deutch [23] yields
$\chi=\frac{\widehat{\chi}_{{}_{L}}}{q(-\widehat{\xi}(\widehat{\chi}_{{}_{L}})\,)}\,\,\,\,.$
(11)
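Equations (8)-(11) thus reduce the zero-field susceptibility to a one-dimensional root-finding problem. A minimal Python sketch of this evaluation (our own illustration, with hypothetical function names and reduced units $\beta=\sigma=1$; the parameter values correspond to one of the state points of Fig. 1) is:

```python
# Illustrative sketch: solve Eq. (8) for xi and evaluate the susceptibility, Eq. (11).
import numpy as np
from scipy.optimize import brentq

def q(x):
    """Reduced inverse compressibility of PY hard spheres, Eq. (10)."""
    return (1.0 + 2.0 * x) ** 2 / (1.0 - x) ** 4

def chi_zero_field(chi_L):
    """Zero-field susceptibility of the mixture, Eqs. (8) and (11)."""
    xi = brentq(lambda s: q(2.0 * s) - q(-s) - 4.0 * np.pi * chi_L,
                0.0, 0.5 - 1e-12)      # Eq. (8); q(2*xi) diverges at xi = 1/2
    return chi_L / q(-xi)

# binary mixture with m1* = 0.5, m2* = 1, rho* = 0.6, x1 = 0.5 (cf. Fig. 1)
rho, x1, m1, m2 = 0.6, 0.5, 0.5, 1.0
chi_L = rho * (x1 * m1**2 + (1.0 - x1) * m2**2) / 3.0     # Eq. (9)
print(chi_zero_field(chi_L))
```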
Concerning Eq. (5) Wertheim [21] and Adelman and Deutch [23] showed that
$c^{(2)}_{\Delta}(r_{12},\rho,\widehat{n})=2\widehat{n}[c_{HS}^{(2)}(r_{12},2\widehat{n}\rho)-c_{HS}^{(2)}(r_{12},-\widehat{n}\rho)],$
(12)
$c^{(2)}_{D}(r_{12},{\rho},\widehat{n})=\overline{c}_{D}^{(2)}(r_{12},{\rho},\widehat{n})-3r_{12}^{-3}\int_{0}^{r_{12}}dss^{2}\overline{c}_{D}^{(2)}(s,{\rho},\widehat{n})$
(13)
with
$\overline{c}_{D}^{(2)}(r_{12},{\rho},\widehat{n})=\widehat{n}[2c_{HS}^{(2)}(r_{12},2\widehat{n}\rho)+c_{HS}^{(2)}(r_{12},-\widehat{n}\rho)],$
(14)
where $c_{HS}^{(2)}(r_{12},\rho)$ is the one-component hard sphere Percus-
Yevick correlation function at density $\rho$. The dimensionless quantity
$\widehat{n}=\widehat{\xi}/\widehat{\eta}$ is determined by solving Eq. (8).
We find that $\widehat{n}$ vanishes in the nonpolar limit
$m_{a}\rightarrow{0}$ for all $a$. This can be inferred from the results of
Rushbrooke et al. [24] (obtained originally for one-component dipolar fluids)
according to which the solution of Eq. (8) can be expressed as a power series
in terms of $\widehat{\chi}_{{}_{L}}$:
$\widehat{\xi}=\frac{\pi}{6}\widehat{\chi}_{{}_{L}}-\frac{5\pi^{2}}{48}\widehat{\chi}_{{}_{L}}^{2}+O(\widehat{\chi}_{{}_{L}}^{3}).$
(15)
From Eqs. (15) and (6) it follows that
$\displaystyle\lim_{\\{m_{1},...,m_{C}\\}\rightarrow{0}}\widehat{n}=$
$\displaystyle\lim_{\\{m_{1},...,m_{C}\\}\rightarrow{0}}\frac{\widehat{\xi}}{\widehat{\eta}}=\lim_{\\{m_{1},...,m_{C}\\}\rightarrow{0}}(1+O(\widehat{\chi}_{{}_{L}}))\left(\frac{\beta}{3C\sigma^{3}}\sum_{a=1}^{C}m_{a}^{2}\right)=0.$
(16)
Therefore in this limit the functions $c_{D}^{(2)}$ and $c_{\Delta}^{(2)}$
vanish (see Eqs. (12), (13), and (14)). Due to
$\frac{m_{a}m_{b}}{\widehat{m}^{2}}\leq{C}\frac{m_{a}m_{b}}{m_{a}^{2}+m_{b}^{2}}\leq{C}$,
the prefactor remains bounded, so that, as expected, in the nonpolar limit the rhs of Eq. (5) reduces to the direct
correlation function $c_{HS}^{(2)}$ of a one-component HS fluid with total
density $\rho=\sum_{a=1}^{C}\rho_{a}$.
For a one-component system ($C={1}$) the prefactor
${m_{a}m_{b}/\widehat{m}^{2}}$ equals 1 and
$\widehat{\eta}=\eta=\frac{\pi}{6}\rho\sigma^{3}$ so that Eqs. (8) and (9)
render $\xi$ for the one-component system which indeed yields the one-
component direct correlation function.
Considering the case of a binary mixture of hard spheres and of DHS reveals
the approximate character of Eq. (5). In this case, for the dipolar hard
sphere – hard sphere cross correlations (i.e., $m_{a}\neq{0}$, $m_{b}=0$),
according to Eq. (5) the corresponding correlation function reduces to the
one-component hard sphere direct correlation function, which certainly is a
rough approximation. We note that
$c^{(2)}_{D}(r_{12}\rightarrow{\infty})\sim{r_{12}^{-3}}$ is long-ranged (see
Eq. (13)) while $c^{(2)}_{\Delta}(r_{12}\geq\sigma)=0$ is short-ranged (see
Eq. (12)).
In the following we consider DHS mixtures in a homogeneous external magnetic
field $\mathbf{H}$, the direction of which is taken to coincide with the
direction of the $z$ axis. For a single dipole the magnetic field gives rise
to the following additional contribution to the interaction potential:
$u^{ext}_{i}=-\mathbf{m}_{i}\cdot\mathbf{H}=-m_{i}H\cos\theta_{i},$ (17)
where the angle $\theta_{i}$ measures the orientation of the $i$-th dipole
relative to the field direction.
## 3 Magnetization in an external field
In the following we extend our previous theory [4] to $C$-component and
polydisperse dipolar mixtures in which the particles have the same hard sphere
diameter but different strengths of the dipole moments.
### 3.1 Multicomponent systems
Our analysis is based on the following grand canonical variational functional
$\Omega$, which is an extension to $C$ components of the one-component
functional used in Ref. [4]:
$\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\Omega=F_{DHS}[\rho_{1},...,\rho_{C},\\{\alpha_{1}(\omega),...,\alpha_{C}(\omega)\\},T]-\sum_{a=1}^{C}\rho_{a}\int{d\omega}\alpha_{a}(\omega)(\mu_{a}-u^{ext}_{a}(\omega)),$
(18)
where $F_{DHS}$ is the Helmholtz free energy functional of an anisotropic,
dipolar, equally sized hard sphere fluid mixture and where $\mu_{a}$ and
$\alpha_{a}(\omega)$ are the chemical potential and the orientational
distribution function of the species $a$, respectively. Since the external
field is spatially constant, $\rho_{1},...,\rho_{C}$ are constant, too. Thus
$F_{DHS}$ is a function of $\rho_{1},...,\rho_{C},m_{1},...,m_{C}$ and a
functional of $\alpha_{1}(\omega),...,\alpha_{C}(\omega)$. The Helmholtz free
energy functional consists of the ideal gas and the excess contribution:
$\displaystyle
F_{DHS}=F^{id}[\rho_{1},...,\rho_{C},\\{\alpha_{1}(\omega),...,\alpha_{C}(\omega)\\},T]$
$\displaystyle+F^{exc}_{DHS}[\rho_{1},...,\rho_{C},\\{\alpha_{1}(\omega),...,\alpha_{C}(\omega)\\},T].$
(19)
For the $C$-component mixture the ideal gas contribution has the form
$F^{id}=k_{B}TV\sum_{a=1}^{C}\rho_{a}\left[{\ln(\rho_{a}\Lambda_{a})-1+\int{d\omega}\alpha_{a}(\omega)\ln(4\pi\alpha_{a}(\omega)})\right],$
(20)
where $\Lambda_{a}$ is the de Broglie wavelength of species $a$. If the system
is anisotropic the DHS free energy $F_{DHS}^{exc,\,ai}$ is approximated by a
second-order functional Taylor series, expanded around a homogeneous isotropic
reference system with bulk densities $\rho_{1},...,\rho_{C}$ and an isotropic
free energy $F_{DHS}^{exc,\,i}$:
$\displaystyle{\beta}F_{DHS}^{exc,\,ai}[\rho_{1},...,\rho_{C},\\{\alpha_{1}(\omega),...,\alpha_{C}(\omega)\\},T]={\beta}F_{DHS}^{exc,\,i}(\rho_{1},...,\rho_{C},T)$
$\displaystyle-\sum_{a=1}^{C}\rho_{a}\int{d^{3}r_{1}}{d\omega}\Delta\alpha_{a}(\omega)c_{a}^{(1)}(\rho_{1},...,\rho_{C},T)$
$\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\frac{1}{2}\sum_{a,b=1}^{C}\rho_{a}\rho_{b}\int{d^{3}r_{1}}{d\omega_{1}}\int{d^{3}r_{2}}{d\omega_{2}}\Delta\alpha_{a}(\omega_{1})\Delta\alpha_{b}(\omega_{2})c^{(2)}_{ab}(\mathbf{r}_{12},\omega_{1},\omega_{2},\rho_{1},...,\rho_{C},T),$
(21)
where $\Delta\alpha_{a}(\omega)=\alpha_{a}(\omega)-1/(4\pi)$ is the difference
between the anisotropic ($H\neq{0}$) and the isotropic ($H=0$) orientational
distribution function of the component $a$; $c_{a}^{(1)}$ and $c_{ab}^{(2)}$
(see Eq.(5)) are the first- and second-order direct correlation functions,
respectively, of the components of the isotropic DHS mixtures. Since in the
isotropic system all $c_{a}^{(1)}$ are independent of the dipole orientation
$\omega$, and because $\int{d\omega}\Delta\alpha_{a}(\omega)=0$, only the
second-order direct correlation functions $c_{ab}^{(2)}$ provide a nonzero
contribution to the above free energy functional. Since $c_{ab}^{(2)}$ depends
only on the difference vector ${\mathbf{r}}_{1}-{\mathbf{r}}_{2}$, Eq.(21)
reduces to
$\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!{\beta}F_{DHS}^{exc,\,ai}[\rho_{1},...,\rho_{C},\\{\alpha_{1}(\omega),...,\alpha_{C}(\omega)\\},T]={\beta}F_{DHS}^{exc,\,i}(\rho_{1},...,\rho_{C},T)$
$\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\frac{1}{2}\rho^{2}V\sum_{a,b=1}^{C}x_{a}x_{b}\int{d\omega_{1}}\int{d\omega_{2}}\Delta\alpha_{a}(\omega_{1})\Delta\alpha_{b}(\omega_{2})\int{d^{3}r_{12}\,}c^{(2)}_{ab}(\mathbf{r}_{12},\omega_{1},\omega_{2},\rho_{1},...,\rho_{C},T),$
where $x_{a}=N_{a}/N=\rho_{a}/\rho$ is the mole fraction of the component $a$.
The expression for the excess free energy ${\beta}F_{DHS}^{exc,i}$ of the
isotropic DHS system was also given by Adelman and Deutch [23]. In the case of
a cylindrical sample (elongated along the magnetic field direction) and
homogeneous magnetization all $\alpha_{a}(\omega)$ depend only on the polar
angle $\theta$, and thus they can be expanded in terms of Legendre
polynomials:
$\alpha_{a}(\omega)=\frac{1}{2\pi}\overline{\alpha}_{a}(\cos\theta)=\frac{1}{2\pi}\sum_{l=0}^{\infty}\alpha_{al}P_{l}(\cos\theta),\,\,\,\,a=1,2,...,C\,\,.$
(23)
Due to $\alpha_{a0}=1/2$ one has
$\Delta\alpha_{a}(\omega)=\frac{1}{2\pi}\sum_{l=1}^{\infty}\alpha_{al}P_{l}(\cos\theta).$
(24)
The second-order MSA direct correlation functions of the DHS fluid mixture
(see Eq. (5)) are used to obtain the excess free energy functional. In order
to avoid depolarization effects due to domain formation, we consider sample
shapes of thin cylinders, i.e., needle-shaped volumes $V$. Due to the
properties of $D$, $\Delta$, and $P_{l}$ only the terms $\alpha_{al}$ with
$l\leq 1$ contribute to the excess free energy. Elementary calculation leads
to
$\frac{F_{DHS}^{exc,\,ai}}{V}=f^{exc,\,i}_{DHS}-\frac{2\rho^{2}}{9{\widehat{\chi}}_{{}_{L}}}(1-q(-\widehat{\xi}\,))\sum_{a,b=1}^{C}x_{a}x_{b}m_{a}m_{b}\alpha_{a1}\alpha_{b1},$
(25)
where $f_{DHS}^{exc,\,i}=F_{DHS}^{exc,\,i}/V$. Minimization of the grand
canonical functional with respect to the orientational distribution functions
(note that $f_{DHS}^{exc,\,i}$ does not depend on them) yields
$\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\bar{\alpha}_{a}(\omega)=Z_{a}^{-1}\exp\left(\beta{m_{a}}\left(H+\frac{2\rho}{3\widehat{\chi}}_{{}_{L}}(1-q(-\widehat{\xi}\,))\sum_{b=1}^{C}x_{b}{m_{b}}\alpha_{b1}\right)P_{1}(\cos\theta)\right),$
(26)
with normalization constants $Z_{a}$ which are fixed by the requirements
$\int{d\omega}\alpha_{a}(\omega)=1$. With this normalization the expansion
coefficients $\alpha_{a1}$ are given by
$\\!\\!\\!\\!\\!\\!\\!\\!\alpha_{a1}=\frac{3}{2}L\left[\beta{m_{a}}\left({H+\frac{2(1-q(-\widehat{\xi}\,))}{3\widehat{\chi}_{{}_{L}}}\sum_{b=1}^{C}\rho_{b}m_{b}\alpha_{b1}}\right)\right],\,\,\,\,a=1,2,...,C,$
(27)
where $L(x)=\coth(x)-1/x$ is the Langevin function. Each particle of the
magnetic fluid carries a dipole moment which will be aligned preferentially in
the direction of the external field. This gives rise to a magnetization
$M=\sum_{a=1}^{C}\rho_{a}m_{a}\int{d\omega}\alpha_{a}(\omega)\cos\theta=\frac{2}{3}\sum_{a=1}^{C}\rho_{a}m_{a}\alpha_{a1}.$
(28)
Equations (28) and (27) lead to an implicit equation for the dependence of the
magnetization on the external field:
$\displaystyle
M=\rho\sum_{a=1}^{C}m_{a}x_{a}L\left[{\beta}m_{a}\left(H+\frac{(1-q(-\widehat{\xi}\,))}{\widehat{\chi}_{{}_{L}}}M\right)\right].\
$ (29)
We note that in the limit of weak fields, using the series expansion of the
Langevin function, $L(x\rightarrow{0})={x/3}$, Eq. (29) reduces to Eq. (11) for
the zero-field magnetic susceptibility.
### 3.2 Polydisperse systems
For ferromagnetic grains the dipole moment of a particle is given by
$m(\mathcal{D})=\frac{\pi}{6}M_{s}{\mathcal{D}}^{3},$ (30)
where $M_{s}$ is the bulk saturation magnetization of the core material and
$\mathcal{D}$ is the diameter of the particle. Accordingly, our model of
equally sized particles with different dipole moments applies to systems
composed of materials with distinct saturation magnetizations. Another
possibility consists of considering particles with a magnetic core and a
nonmagnetic shell, which allows one to vary $m$ via changing the core size
with $M_{s}$ fixed and by keeping the overall diameter of the particles fixed
via adjusting the thickness of the shell. For a small number $C$ of components
this is experimentally realizable.
Equation (11) has been extended even to the description of polydisperse
ferrofluids [9, 19]. However, it is unlikely that this extension relates to a
realistic experimental system because it supposes again that the diameters of
all particles are the same. The two possible realizations mentioned above will
be very difficult to implement for a large number $C$ of components, mimicking
polydispersity. If one nonetheless wants to study this kind of system, the
corresponding averaged Langevin susceptibility is the natural extension of Eq.
(9):
${\overline{\chi}}_{{}_{L}}=\frac{1}{3}\beta\rho\int_{0}^{\infty}d\mathcal{D}\,p(\mathcal{D})m^{2}(\mathcal{D}),$
(31)
where $p(\mathcal{D})$ is the probability distribution function for the
magnetic core diameter. The corresponding zero-field susceptibility of the
polydisperse system is
${\overline{\chi}}=\frac{{\overline{\chi}}_{{}_{L}}}{q(-{\overline{\xi}}({\overline{\chi}}_{{}_{L}}))},$
(32)
where $\overline{\xi}$ is the implicit solution of the equation
$4\pi\overline{\chi}_{{}_{L}}=q(2\overline{\xi})-q(-\overline{\xi}).$ (33)
We note that similarly the above equation for the magnetization (see Eq. (29))
can also be extended to polydisperse fluids leading to magnetization curves
$\overline{M}(H)$ defined implicitly by
$\overline{M}=\rho\int_{0}^{\infty}\,d{\mathcal{D}}p(\mathcal{D})m(\mathcal{D})L\left[{\beta}m(\mathcal{D})\left(H+\frac{(1-q(-\overline{\xi}))}{\overline{\chi}_{{}_{L}}}\overline{M}\right)\right].$
(34)
In the limit of weak fields Eq. (34) reduces to the expression in Eq. (32) for
the zero-field susceptibility. In the following we do not assess via MC
simulations the range of validity of Eq. (34) for polydisperse magnetic
fluids; this is left for future studies.
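Nonetheless, Eq. (34) can be evaluated numerically in the same way as Eq. (29). The following Python sketch is purely illustrative (the lognormal form of $p(\mathcal{D})$, the parameter names, and all function names are our own assumptions); it discretizes the diameter distribution into an effective multicomponent mixture and iterates Eq. (34) to self-consistency:

```python
# Illustrative sketch of Eqs. (30)-(34); lognormal p(D) assumed for the example only.
import numpy as np
from scipy.optimize import brentq

def q(x):                                   # Eq. (10)
    return (1.0 + 2.0 * x) ** 2 / (1.0 - x) ** 4

def langevin(x):                            # L(x) = coth(x) - 1/x
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

def polydisperse_magnetization(H, rho, Ms, median_D, sigma_ln, beta=1.0, nodes=400):
    D = np.linspace(1e-3, 6.0 * median_D, nodes)
    p = np.exp(-np.log(D / median_D) ** 2 / (2.0 * sigma_ln**2)) / D
    w = p / p.sum()                          # discrete weights replacing p(D) dD
    m = np.pi / 6.0 * Ms * D**3              # Eq. (30)
    chi_L = beta * rho * np.sum(w * m**2) / 3.0                            # Eq. (31)
    xi = brentq(lambda s: q(2*s) - q(-s) - 4.0*np.pi*chi_L, 0.0, 0.5 - 1e-12)  # Eq. (33)
    lam = (1.0 - q(-xi)) / chi_L
    M = 0.0
    for _ in range(200):                     # fixed point of Eq. (34)
        M_new = rho * np.sum(w * m * langevin(beta * m * (H + lam * M)))
        if abs(M_new - M) < 1e-12:
            break
        M = M_new
    return M
```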
Figure 1: Zero-field susceptibility $\chi$ (Eq. (11)) as a function of the
averaged Langevin susceptibility $\widehat{\chi}_{{}_{L}}\sim{1/T}$ (Eq. (9)).
In terms of these quantities DFT (MSA) predicts the master curve given by the
full line. The MC data correspond to the following choices of the system
parameters. The dipole moments are $m_{1}^{*}=0.5$ and $m_{2}^{*}=1$ in all
cases, the reduced densities are $\rho^{*}=0.6$ (diamonds), $\rho^{*}=0.65$
(circles), and $\rho^{*}=0.7$ (triangles) with the concentrations $x_{1}=0$,
$x_{1}=0.25$, $x_{1}=0.5$, $x_{1}=0.75$, and $x_{1}=1$. The error bars of the
MC data are given by the symbol sizes.
## 4 Monte Carlo simulations
In order to assess the predictions of the DFT presented in Sec. 3 we have
carried out MC simulations for DHS fluid mixtures in the canonical (NVT)
ensemble. Boltzmann sampling and periodic boundary conditions with the
minimum-image convention [25] have been applied. A spherical cutoff of the
dipole-dipole interaction potential at half of the linear extension of the
simulation cell has been applied and the reaction field long-ranged correction
[25] with a conducting boundary condition has been adopted. For obtaining the
magnetization data, after 40,000 equilibration cycles, 0.8-1.0 million
production cycles involving 1024 particles have been used. In the simulations
with an applied field the equilibrium magnetization is obtained from the
equation
$\mathbf{M}=\frac{1}{V}\left\langle{\sum_{i=1}^{N}{\mathbf{m}}_{i}}\right\rangle,$
(35)
Figure 2: Magnetization curves of a binary DHS fluid mixture for five values
of the concentration $x_{1}$ of the species with $m^{*}_{1}=0.5$;
$m_{2}^{*}=1$. The overall number density is $\rho^{*}=0.6$. The curves
correspond to the predictions of the MSA based DFT (Eq. (29)). The symbols are
MC simulation data. Their error bars are given by the symbol sizes.
where the brackets denote the ensemble average. In simulations without
external field the zero-field magnetic susceptibility has been obtained from
the corresponding fluctuation formula
$\chi=\frac{\beta(\langle{{\mathbf{\mathcal{M}}}^{2}}\rangle-\langle{{\mathbf{\mathcal{M}}}}\rangle^{2})}{3V},$
(36)
where $\mathcal{M}=\sum_{i=1}^{N}{\mathbf{m}}_{i}$ is the instantaneous
magnetic dipole moment of the system. Statistical errors have been determined
from the standard deviations of subaverages encompassing 100,000 MC cycles.
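In post-processing, Eq. (36) is evaluated from the recorded instantaneous total dipole moments. A minimal sketch of such an estimator (our own illustration; it assumes that the samples of $\mathcal{M}$ are stored as an $n\times 3$ array in the same reduced units) is:

```python
# Illustrative sketch: zero-field susceptibility from the fluctuation formula, Eq. (36).
import numpy as np

def susceptibility_from_fluctuations(M_samples, volume, beta=1.0):
    """M_samples: (n_samples, 3) array of instantaneous total dipole moments."""
    M = np.asarray(M_samples, dtype=float)
    mean_M2 = np.mean(np.sum(M * M, axis=1))     # <M.M>
    mean_M = M.mean(axis=0)                      # <M>, vanishes at zero field
    return beta * (mean_M2 - np.dot(mean_M, mean_M)) / (3.0 * volume)
```

Block averaging of this estimator over sub-runs (here, 100,000-cycle blocks) yields the quoted statistical errors.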
## 5 Numerical results and discussion
In the following we shall use reduced quantities: $\rho^{*}=\rho\sigma^{3}$ as
the reduced density, $m^{*}_{a}=m_{a}/\sqrt{k_{B}T\sigma^{3}}$ as the
dimensionless dipole moment of species $a$,
$H^{*}=H\sqrt{\sigma^{3}/(k_{B}T)}$ as the reduced magnetic field strength,
and $M^{*}=M\sqrt{\sigma^{3}/(k_{B}T)}$ as the reduced magnetization. The
calculation of the zero-field susceptibility and the magnetization $M(H)$ of
the multicomponent DHS fluid mixtures (with identical particle diameters but
different dipole moments) can be summarized by the sequence of the following
steps:
1) calculation of the Langevin susceptibility according to Eq. (9),
2) solving Eq. (8) for $\widehat{\xi}$,
3) calculation of the zero-field susceptibility according to Eq. (11),
4) calculation of the magnetization for a given value of $H$ according to Eq.
(29) using successive approximation with $M=0$ as the initial value. This
iteration converges rapidly, yielding the limiting result within 5-8 cycles.
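A minimal Python sketch of steps 1)-4) (our own illustration, with hypothetical function names; reduced units $\beta=\sigma=1$ as used in this section) reads:

```python
# Illustrative sketch of steps 1)-4): magnetization M(H) from Eq. (29) in reduced units.
import numpy as np
from scipy.optimize import brentq

def q(x):                                    # Eq. (10)
    return (1.0 + 2.0 * x) ** 2 / (1.0 - x) ** 4

def langevin(x):                             # L(x) = coth(x) - 1/x
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

def magnetization(H, rho, x, m, tol=1e-12, max_iter=200):
    x, m = np.asarray(x, float), np.asarray(m, float)
    chi_L = rho * np.sum(x * m**2) / 3.0                    # step 1, Eq. (9)
    xi = brentq(lambda s: q(2*s) - q(-s) - 4.0*np.pi*chi_L,
                0.0, 0.5 - 1e-12)                           # step 2, Eq. (8)
    # step 3 would evaluate chi = chi_L / q(-xi), Eq. (11) (not needed for M(H) itself)
    lam = (1.0 - q(-xi)) / chi_L          # effective-field prefactor in Eq. (29)
    M = 0.0                               # step 4: successive approximation from M = 0
    for _ in range(max_iter):
        M_new = rho * np.sum(x * m * langevin(m * (H + lam * M)))
        if abs(M_new - M) < tol:
            break
        M = M_new
    return M

# binary mixture of Fig. 2: m1* = 0.5, m2* = 1, rho* = 0.6, x1 = 0.25
for H in (0.5, 2.0, 5.0):
    print(H, magnetization(H, rho=0.6, x=[0.25, 0.75], m=[0.5, 1.0]))
```

Since $L^{\prime}\leq 1/3$, the iteration map has Lipschitz constant at most $1-q(-\widehat{\xi}\,)<1$, which is consistent with the fast convergence noted above.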
Figure 3: Same system as in Fig. 2. For $x_{1}=0.25$ and $x_{1}=0.75$ DFT
(MSA) and the corresponding MC data are compared with the Langevin
magnetization curves. The error bars of the MC data are given by the symbol
sizes.
Figure 1 shows the dependence of the zero-field susceptibility $\chi$ on the
Langevin susceptibility $\chi_{{}_{L}}\sim{1/T}$ (see Eq. (9)) for dipolar
hard sphere mixtures as obtained from Eq. (11) and from the numerical solution
of Eq. (8). This result is compared with MC simulation data for a binary
mixture $[(m_{1}^{*},m_{2}^{*})=(0.5,1)]$ for three total number densities
$\rho^{*}$ and five concentrations $x_{1}$. In these cases DFT (MSA) provides
a good approximation for the initial susceptibility of this binary system
within the range $0\leq{4\pi}\chi_{{}_{L}}\lesssim 2.5$. Within this range the
two-component system with various concentrations and densities can be
described by a single master curve, which moreover is the same for different
systems $(m_{1}^{*},m_{2}^{*},...,m_{C}^{*})$. This is the main statement of
the MSA theory by Adelman and Deutch [23].
Figure 2 displays the magnetization curves of the two-component DHS fluid
mixture with $(m_{1}^{*},m_{2}^{*},\rho^{*})=(0.5,1,0.6)$ for five values of
the concentration. For high values of $H^{*}$ we find excellent, quantitative
agreement for all concentrations between the DFT (MSA) results and the MC
data. For small values of $H^{*}$, especially for $H^{*}=0.5$, i.e., in the
linear response regime, the agreement between the simulation data and the DFT
results is also very good, which matches with the good agreement found for the
zero-field susceptibility (see Fig. 1). Close to the elbow of the
magnetization curves the level of quantitative agreement reduces significantly
for smaller concentrations $x_{1}$ of the magnetically weaker component, while
it remains good for large concentrations $x_{1}$. We note that also for two-
dimensional systems this range is the most sensitive one concerning the
agreement between theoretical results and simulation data [26]. For the same
system Fig. 3 displays a comparison between the DFT results together with the
MC data and the corresponding Langevin theory. This shows that the
interparticle interaction enhances the magnetization relative to the
corresponding values of the Langevin theory. For a three-component DHS fluid
mixture [$(m_{1}^{*},m_{2}^{*},m_{3}^{*})=(0.5,0.75,1)$] Fig. 4 displays the
dependence of the magnetization on the concentration $x_{3}$ for a fixed field
strength $H^{*}=2$ and for three values of the concentration $x_{1}$;
$x_{1}+x_{2}+x_{3}=1$. Since the value $H^{*}=2$ falls into the aforementioned
elbow regime of the magnetization curves, for the small concentration
$x_{1}=0.25$ of the less polar ($m_{1}^{*}=0.5$) component the DFT (MSA)
results underestimate the simulation data. At higher concentrations of $x_{1}$
the agreement between DFT and the simulation data is much better. This is
expected, because for large $x_{1}$ the fluid is dominated by the less polar
particles, for which the dipolar coupling is weaker and the MSA-based DFT is
more accurate.
Figure 4: Dependence of the magnetization on the concentration $x_{3}$ of the
species with $m_{3}^{*}=1$ for a three-component DHS fluid mixture for a fixed
field strength $H^{*}=2$ and concentrations $x_{1}=0.25$, $x_{1}=0.5$, and
$x_{1}=0.75$; $x_{1}+x_{2}+x_{3}=1$. The reduced dipole moments of the species
1 and 2 are $m_{1}^{*}=0.5$ and $m_{2}^{*}=0.75$. The overall number density
is $\rho^{*}=0.6$. The error bars of the MC data are given by the symbol
sizes.
## 6 Summary
We have obtained the following main results:
1) Based on a second-order Taylor series expansion of the anisotropic free
energy functional of equally sized dipolar hard spheres with different dipole
moments and by using the mean spherical approximation (MSA) we have derived an
analytical expression (Eq. (29)) for the magnetization of multicomponent
ferrofluidic mixtures in external fields. This implicit equation extends the
applicability of MSA to the presence of external magnetic fields of arbitrary
strengths. We find quantitative agreement between the results from this DFT
(MSA) and our Monte Carlo simulation data for Langevin susceptibilities
$4\pi\chi_{{}_{L}}\lesssim{2.5}$ (Figs. 2, 3, and 4).
2) As confirmed also by MC simulation data the zero-field susceptibility of
multicomponent ferrofluids can be expressed by a single master curve in terms
of the Langevin susceptibility (Fig. 1). Beyond the linear response regime the
magnetization curves of multicomponent ferrofluids cannot be reduced to a
single master curve (Eq. (29)).
3) By applying the MSA theory for the magnetic susceptibility to polydisperse
systems, we have extended the multicomponent magnetization equation to the
polydisperse case (Eq. (34)).
## Acknowledgments
I. Szalai acknowledges financial support for this work by the Hungarian State
and the European Union within the project TAMOP-4.2.1/B-09/1/KONV-2010-0003.
## References
* [1] Huke B and Lücke M 2004 Rep. Prog. Phys. 67, 1731
* [2] Ivanov A O and Kuznetsova O B 2001 Phys. Rev. E 64, 041405
* [3] Buyevich Y A and Ivanov A O 1992 Physica A 190, 276
* [4] Szalai I and Dietrich S 2008 J. Phys.: Condensed Matter 20, 204122
* [5] Russier V and Douzi M 1994 J. Colloid Interface Sci. 162, 356
* [6] Debye P 1912 Z. Phys. 13, 97
* [7] Sano K and Doi M 1983 J. Phys. Soc. Jpn. 52, 2810
* [8] Martin G A R, Bradbury A and Chantrell R W 1987 J. Magn. Magn. Mat. 65, 177
* [9] Morozov K I and Lebedev A V 1990 J. Magn. Magn. Mater. 85, 51
* [10] Klapp S H L and Forstmann F 1999 Phys. Rev. E 60, 3183
* [11] Teixeira P I and Telo da Gama M M 1991 J. Phys.: Condensed Matter 3, 111
* [12] Frodl P and Dietrich S 1992 Phys. Rev. A 45, 7330
* [13] Groh B and Dietrich S 1994 Phys. Rev. E 50, 3814
* [14] Groh B and Dietrich S 1996 Phys. Rev. E 53, 2509
* [15] Szalai I and Dietrich S 2009 Eur. Phys. J. E 28, 347
* [16] Kristóf T and Szalai I 2003 Phys. Rev. E 68, 041109
* [17] Trasca R A and Klapp S H L 2008 J. Chem. Phys. 129, 084702
* [18] Holm C and Weis J J 2005 Curr. Opin. Colloid Inter. Sci. 10, 133
* [19] Ivanov A O, Kantorovich S S, Reznikov E N, Holm C, Pshenichnikov A F and Lebedev A V 2007 Phys. Rev. E 75, 061405
* [20] Jordanovic J and Klapp S H L 2009 Phys. Rev. E 79, 021405
* [21] Wertheim M S 1971 J. Chem. Phys. 55, 4291
* [22] Henderson D, Boda D, Chan K-Y and Szalai I 1999 J. Chem. Phys. 110, 7348
* [23] Adelman S A and Deutch J M 1973 J. Chem. Phys. 59, 3971
* [24] Rushbrooke G S, Stell G and Høye J S 1973 Molec. Phys. 26, 1199
* [25] Allen M P and Tildesley D J, Computer Simulation of Liquids (Clarendon, Oxford, 2001)
* [26] Kristóf T and Szalai I 2008 J. Phys.: Condensed Matter 20, 204111
|
arxiv-papers
| 2011-05-19T09:38:50 |
2024-09-04T02:49:18.936620
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "I. Szalai and S. Dietrich",
"submitter": "Istvan Szalai",
"url": "https://arxiv.org/abs/1105.3824"
}
|
1105.3870
|
# Nonlinear elliptic boundary value problems at resonance with nonlinear
Wentzell boundary conditions
Ciprian G. Gal C. G. Gal, Department of Mathematics, University of
Missouri,Columbia, MO 65211 (USA) galc@missouri.edu and Mahamadi Warma M.
Warma, University of Puerto Rico, Department of Mathematics (Rio Piedras
Campus), PO Box 70377 San Juan PR 00936-8377 (USA) mjwarma@gmail.com,
warma@uprrp.edu Dedicated to the 70th birthday of Jerome A. Goldstein
###### Abstract.
In the first part of the article, we give necessary and sufficient conditions
for the solvability of a class of nonlinear elliptic boundary value problems
with nonlinear boundary conditions involving the $q$-Laplace-Beltrami
operator. In the second part, we give some additional results on existence and
uniqueness and we study the regularity of the weak solutions for these classes
of nonlinear problems. More precisely, we show some global a priori estimates
for these weak solutions in an $L^{\infty}$-setting.
###### Key words and phrases:
Nonlinear Wentzell-Robin boundary conditions, necessary and sufficient
conditions for existence of weak solutions, subdifferentials, a priori
estimates, boundary value problems at resonance.
###### 2010 Mathematics Subject Classification:
35J65, 35D30, 35B45
## 1\. Introduction
Let $\Omega\subset\mathbf{R}^{N},$ $N\geq 1,$ be a bounded domain with a
Lipschitz boundary $\partial\Omega$ and consider the following nonlinear
boundary value problem with nonlinear second order boundary conditions:
$\begin{cases}-\Delta_{p}u+\alpha_{1}\left(u\right)=f\left(x\right),&\text{ in
}\Omega,\\\ &\\\ b\left(x\right)\left|\nabla
u\right|^{p-2}\partial_{\mathbf{n}}u-\rho
b\left(x\right)\Delta_{q,\Gamma}u+\alpha_{2}\left(u\right)=g\left(x\right),&\text{
on }\partial\Omega,\end{cases}$ (1.1)
where $b\in L^{\infty}\left(\partial\Omega\right),$ $b(x)\geq b_{0}>0,$ for
some constant $b_{0}$, $\rho$ is either $0$ or $1,$ and $\alpha_{1},$
$\alpha_{2}\in C\left(\mathbb{R},\mathbb{R}\right)$ are monotone nondecreasing
functions such that $\alpha_{i}\left(0\right)=0$. Moreover,
$\Delta_{p}u=\mbox{div}(\left|\nabla u\right|^{p-2}\nabla u)$ is the
$p$-Laplace operator, $p\in\left(1,+\infty\right)$ and $f\in
L^{2}\left(\Omega,dx\right),$ $g\in L^{2}(\partial\Omega,\sigma)$ are given
real-valued functions. Here, $dx$ denotes the usual $N$-dimensional Lebesgue
measure in $\Omega$ and $\sigma$ denotes the restriction to $\partial\Omega$
of the $(N-1)$-dimensional Hausdorff measure. Recall that $\sigma$ coincides
with the usual Lebesgue surface measure since $\Omega$ has a Lipschitz
boundary, and $\partial_{\mathbf{n}}u$ denotes the normal derivative of $u$ in
direction of the outer normal vector $\overrightarrow{\mathbf{n}}$.
Furthermore, $\Delta_{q,\Gamma}$ is defined as the generalized $q$-Laplace-
Beltrami operator on $\partial\Omega,$ that is,
$\Delta_{q,\Gamma}u=\mbox{div}_{\Gamma}(\left|\nabla_{\Gamma}u\right|^{q-2}\nabla_{\Gamma}u),$
$q\in\left(1,+\infty\right)$. In particular, $\Delta_{2}=\Delta$ and
$\Delta_{2,\Gamma}=\Delta_{\Gamma}$ become the well-known Laplace and Laplace-
Beltrami operators on $\Omega$ and $\partial\Omega,$ respectively. Here, for
any real valued function $v,$
$\mbox{div}_{\Gamma}v=\sum_{i=1}^{N-1}\partial_{\tau_{i}}v,$
where $\partial_{\tau_{i}}v$ denotes the directional derivative of $v$ along
the tangential directions $\tau_{i}$ at each point on the boundary, whereas
$\nabla_{\Gamma}v=\left(\partial_{\tau_{1}}v,...,\partial_{\tau_{N-1}}v\right)$
denotes the tangential gradient at $\partial\Omega$. It is worth mentioning
again that when $\rho=0$ in (1.1), the boundary conditions are of lower order
than the order of the $p$-Laplace operator, while for $\rho=1,$ we deal with
boundary conditions which have the same differential order as the operator
acting in the domain $\Omega.$ Such boundary conditions arise in many
applications, such as phase-transition phenomena (see, e.g., [13, 14] and the
references therein) and have been studied by several authors (see, e.g., [2,
12, 16, 24, 28]).
In a recent paper [12], the authors have formulated necessary and sufficient
conditions for the solvability of (1.1) when $p=q=2,$ by establishing a sort
of "nonlinear Fredholm alternative" for such elliptic boundary value problems.
We shall now state their main result. Defining two real parameters
$\lambda_{1},$ $\lambda_{2}\in\mathbb{R}_{+}$ by
$\lambda_{1}=\int_{\Omega}dx,\text{
}\lambda_{2}=\int_{\partial\Omega}\frac{d\sigma}{b},$ (1.2)
this result reads that a necessary condition for the existence of a weak
solution of (1.1) is that
$\int_{\Omega}f\left(x\right)dx+\int_{\partial\Omega}g\left(x\right)\frac{d\sigma}{b\left(x\right)}\in\left(\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)\right),$
(1.3)
while a sufficient condition is
$\int_{\Omega}f\left(x\right)dx+\int_{\partial\Omega}g\left(x\right)\frac{d\sigma}{b\left(x\right)}\in\mbox{int}\left(\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)\right),$
(1.4)
where $\mathcal{R}(\alpha_{j})$ denotes the range of $\alpha_{j}$, $j=1,2$ and
$\mbox{int}(G)$ denotes the interior of the set $G$.
Relation (1.3) turns out to be both necessary and sufficient if either of the
sets $\mathcal{R}\left(\alpha_{1}\right)$ or
$\mathcal{R}\left(\alpha_{2}\right)$ is an open interval. This particular
result was established in [12, Theorem 3], by employing methods from convex
analysis involving subdifferentials of convex, lower semicontinuous
functionals on suitable Hilbert spaces. As an application of our results, we
can consider the following boundary value problem
$\left\\{\begin{array}[]{ll}-\Delta
u+\alpha_{1}\left(u\right)=f\left(x\right),&\text{in }\Omega,\\\
b\left(x\right)\partial_{n}u=g\left(x\right),&\text{on
}\partial\Omega,\end{array}\right.$ (1.5)
which is only a special case of (1.1) (i.e., $\rho=0,$ $\alpha_{2}\equiv 0$
and $p=2$). According to [12, Theorem 3] (see also (1.4)), this problem has a
weak solution if
$\int_{\Omega}f\left(x\right)dx+\int_{\partial\Omega}g\left(x\right)\frac{d\sigma}{b\left(x\right)}\in\mbox{int}\left(\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)\right),$
(1.6)
which yields the result of Landesman and Lazer [17] for $g\equiv 0$. This last
condition is both necessary and sufficient when the interval
$\mathcal{R}\left(\alpha_{1}\right)$ is open. This was put into an abstract
context and significantly extended by Brezis and Haraux [8]. Their work was
much further extended by Brezis and Nirenberg [9]. The goal of the present
article is comparable to that of [12] since we want to establish similar
conditions to (1.4) and (1.6) for the existence of solutions to (1.1) when
$p,q\neq 2,$ with main emphasis on the generality of the boundary conditions.
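As a concrete illustration of the known result (1.5)-(1.6) mentioned above, take for instance $\alpha_{1}(s)=\arctan s$ in (1.5). Then $\mathcal{R}(\alpha_{1})=(-\frac{\pi}{2},\frac{\pi}{2})$ is an open interval, $\lambda_{1}=|\Omega|$, and (1.6) becomes
$-\frac{\pi}{2}\left|\Omega\right|<\int_{\Omega}f\left(x\right)dx+\int_{\partial\Omega}g\left(x\right)\frac{d\sigma}{b\left(x\right)}<\frac{\pi}{2}\left|\Omega\right|,$
which, by the above discussion, is both necessary and sufficient for the existence of a weak solution of (1.5).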
Recall that $\lambda_{1}$ and $\lambda_{2}$ are given by (1.2). Let
$\mathbb{I}$ be the interval
$\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right).$
Our first main result is as follows (see Section 4 also).
###### Theorem 1.1.
Let $\alpha_{j}:\mathbb{R}\rightarrow\mathbb{R}$ $(j=1,2)$ be odd, monotone
nondecreasing, continuous functions such that $\alpha_{j}(0)=0$. Assume that
the functions $\Lambda_{j}(t):=\int_{0}^{|t|}\alpha_{j}(s)ds$ satisfy
$\Lambda_{j}(2t)\leq C_{j}\Lambda_{j}(t),\;\mbox{ for all }\;t\in\mathbb{R},$
(1.7)
for some constants $C_{j}>1$, $j=1,2$. If $u$ is a weak solution of (1.1) (in
the sense of Definition 4.10 below), then
$\int_{\Omega}f\left(x\right)dx+\int_{\partial\Omega}g\left(x\right)\frac{d\sigma}{b\left(x\right)}\in\mathbb{I}.$
(1.8)
Conversely, if
$\int_{\Omega}f\left(x\right)dx+\int_{\partial\Omega}g\left(x\right)\frac{d\sigma}{b\left(x\right)}\in
int\left(\mathbb{I}\right),$ (1.9)
then (1.1) has a weak solution.
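For example, the power nonlinearities $\alpha_{j}(s)=|s|^{r_{j}-1}s$ with $r_{j}>0$ satisfy all the assumptions of Theorem 1.1: they are odd, nondecreasing and continuous with $\alpha_{j}(0)=0$, and $\Lambda_{j}(t)=|t|^{r_{j}+1}/(r_{j}+1)$ fulfills (1.7) with $C_{j}=2^{r_{j}+1}$; in this case $\mathbb{I}=\mathbb{R}$ and (1.9) holds for all data. The bounded nonlinearity $\alpha_{j}(s)=\arctan s$ also qualifies, since the concavity of $\arctan$ on $[0,\infty)$ gives $\Lambda_{j}(2t)\leq 4\Lambda_{j}(t)$; if both $\alpha_{1}$ and $\alpha_{2}$ are of this type, then $\mathbb{I}$ is a bounded open interval and (1.9) is a genuine restriction on the data $(f,g)$.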
Our second main result of the paper deals with a modified version of (1.1)
which is obtained by replacing the functions $\alpha_{1}\left(s\right),$
$\alpha_{2}\left(s\right)$ in (1.1) by
$\overline{\alpha}_{1}\left(s\right)+\left|s\right|^{p-2}s$ and
$\overline{\alpha}_{2}\left(s\right)+\rho b\left|s\right|^{q-2}s$,
respectively, and also allowing $\overline{\alpha}_{1},$
$\overline{\alpha}_{2}$ to depend on $x\in\overline{\Omega}$. Under additional
assumptions on $\overline{\alpha}_{1},\overline{\alpha}_{2}$ and under higher
integrability properties for the data $\left(f,g\right)$, the next theorem
provides us with conditions for unique solvability results for solutions to
such boundary value problems. Then, we obtain some regularity results for
these solutions. In addition to these results, the continuous dependence of
the solution to (1.1) with respect to the data $\left(f,g\right)$ can be also
established. In particular, we prove the following
###### Theorem 1.2.
Let all the assumptions of Theorem 1.1 be satisfied for the functions
$\overline{\alpha}_{1},$ $\overline{\alpha}_{2}$. Moreover, for each $j=1,2$,
assume that $\overline{\alpha}_{j}\left(t\right)/t\rightarrow 0,$ as
$t\rightarrow 0$ and $\overline{\alpha}_{j}\left(t\right)/t\rightarrow\infty,$
as $t\rightarrow\infty$, respectively.
(a) Then, for every $\left(f,g\right)\in L^{p_{1}}(\Omega)\times
L^{q_{1}}(\partial\Omega)$ with
$p_{1}>\max\left\\{1,\frac{N}{p}\right\\},\text{
}q_{1}>\left\\{\begin{array}[]{ll}\max\left\\{1,\frac{N-1}{p-1}\right\\},&\text{if
}\rho\in\left\\{0,1\right\\},\\\
\max\left\\{1,\frac{N-1}{p}\right\\},&\text{if }\rho=1\text{ and
}p=q,\end{array}\right.$
there exists a unique weak solution to problem (1.1) (in the sense of
Definition 5.3 below) which is bounded.
(b) Let $\overline{\alpha}_{j},$ $j=1,2,$ be such that
$c_{j}\left|\overline{\alpha}_{j}(\xi-\eta)\right|\leq\left|\overline{\alpha}_{j}(\xi)-\overline{\alpha}_{j}(\eta)\right|,\text{
}\mbox{ for all }\;\xi,\eta\in\mathbb{R},$
for some constants $c_{j}\in(0,1]$. Then, the weak (bounded) solution of
problem (1.1) depends continuously on the data $\left(f,g\right)$. More
precisely, let us denote by $u_{F_{j}}$ the unique solution corresponding to the data
$F_{j}:=\left(f_{j},g_{j}\right)\in L^{p_{1}}(\Omega)\times
L^{q_{1}}(\partial\Omega),$ for each $j=1,2$. Then, the following estimate
holds:
$\|u_{F_{1}}-u_{F_{2}}\|_{L^{\infty}(\Omega)}+\|u_{F_{1}}-u_{F_{2}}\|_{L^{\infty}(\partial\Omega)}\leq
Q\left(\|f_{1}-f_{2}\|_{L^{p_{1}}(\Omega)},\|g_{1}-g_{2}\|_{L^{q_{1}}(\partial\Omega)}\right),$
for some nonnegative function $Q:\mathbb{R}_{+}^{2}\rightarrow\mathbb{R}_{+}$,
$Q\left(0,0\right)=0$, which can be computed explicitly.
We organize the paper as follows. In Section 2, we introduce some notations
and recall some well-known results about Sobolev spaces, maximal monotone
operators and Orlicz type spaces which will be needed throughout the article.
In Section 3, we show that the subdifferential of a suitable functional
associated with problem (1.1) satisfies a sort of "quasilinear" version of the
Fredholm alternative (cf. Theorem 3.5), which is needed in order to obtain the
result in Theorem 1.1. Finally, in Sections 4 and 5, we provide detailed
proofs of Theorem 1.1 and Theorem 1.2. We also illustrate the application of
these results with some examples.
## 2\. Preliminaries and notations
In this section we put together some well-known results on nonlinear forms,
maximal monotone operators and Sobolev spaces. For more details on maximal
monotone operators, we refer to the monographs [4, 7, 20, 21, 27]. We will
also introduce some notations.
### 2.1. Maximal monotone operators
Let $H$ be a real Hilbert space with scalar product $(\cdot,\cdot)_{H}$.
###### Definition 2.1.
Let $A:\;D(A)\subset H\to H$ be a closed (nonlinear) operator. The operator
$A$ is said to be:
1. (i)
_monotone_ , if for all $u,v\in D(A)$ one has
$(Au-Av,u-v)_{H}\geq 0.$
2. (ii)
_maximal monotone_ , if it is monotone and the operator $I+A$ is invertible.
Next, let $V$ be a real reflexive Banach space which is densely and
continuously embedded into the real Hilbert space $H$, and let $V^{\prime}$ be
its dual space such that $V\hookrightarrow H\hookrightarrow V^{\prime}$.
###### Definition 2.2.
Let $\mathcal{A}:\;V\times V\to\mathbb{R}$ be a continuous map.
1. (a)
The map $\mathcal{A}:\;V\times V\to\mathbb{R}$ is called a _nonlinear form_ on
$H$ if for all $u\in V$ one has $\mathcal{A}(u,\cdot)\in V^{\prime}$, that is,
if $\mathcal{A}$ is linear and bounded in the second variable.
2. (b)
The nonlinear form $\mathcal{A}:\;V\times V\to\mathbb{R}$ is said to be:
1. (i)
_monotone_ if $\mathcal{A}(u,u-v)-\mathcal{A}(v,u-v)\geq 0\;\mbox{ for all
}\;u,v\in V$;
2. (ii)
_hemicontinuous_ if $\displaystyle\lim_{t\downarrow
0}\mathcal{A}(u+tv,w)=\mathcal{A}(u,w),\;\;\forall\;u,v,w\in V$ ;
3. (iii)
_coercive_ , if
$\displaystyle\lim_{\|v\|_{V}\to+\infty}\frac{\mathcal{A}(v,v)}{\|v\|_{V}}=+\infty$.
Now, let $\varphi:\;H\rightarrow(-\infty,+\infty]$ be a proper, convex, lower
semicontinuous functional with effective domain
$D(\varphi):=\\{u\in H:\;\varphi(u)<\infty\\}.$
The subdifferential $\partial\varphi$ of the functional $\varphi$ is defined
by
$\begin{cases}\displaystyle D(\partial\varphi)&:=\\{u\in
D(\varphi):\;\exists\;w\in H\;\forall\;v\in
D(\varphi):\;\varphi(v)-\varphi(u)\geq(w,v-u)_{H}\\};\\\ &\\\
\displaystyle\partial\varphi(u)&:=\\{w\in H:\;\forall\;v\in
D(\varphi):\;\varphi(v)-\varphi(u)\geq(w,v-u)_{H}\\}.\end{cases}$
By a classical result of Minty [20] (see also [7, 21]), $\partial\varphi$ is a
maximal monotone operator.
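For instance, for $H=\mathbb{R}$ and $\varphi(u)=|u|$ one has $\partial\varphi(u)=\\{\mbox{sgn}(u)\\}$ for $u\neq 0$ and $\partial\varphi(0)=[-1,1]$; in particular, $\partial\varphi$ may be multivalued.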
### 2.2. Functional setup
Let $\Omega\subset\mathbb{R}^{N}$ be a bounded domain with a Lipschitz
boundary $\partial\Omega$. For $1<p<\infty$, we let $W^{1,p}(\Omega)$ be the
first order Sobolev space, that is,
$W^{1,p}(\Omega)=\\{u\in L^{p}(\Omega):\;\nabla u\in(L^{p}(\Omega))^{N}\\}.$
Then $W^{1,p}(\Omega),$ endowed with the norm
$\|u\|_{W^{1,p}(\Omega)}:=\left(\|u\|_{\Omega,p}^{p}+\|\nabla
u\|_{\Omega,p}^{p}\right)^{1/p}$
is a Banach space, where we have set
$\|u\|_{\Omega,p}^{p}:=\int_{\Omega}|u|^{p}\;dx.$
Since $\Omega$ has a Lipschitz boundary, it is well-known that there exists a
constant $C>0$ such that
$\|u\|_{\Omega,p_{s}}\leq C\|u\|_{W^{1,p}(\Omega)},\;\text{for all }u\in
W^{1,p}(\Omega),$ (2.1)
where $p_{s}=\frac{pN}{N-p}$ if $p<N,$ and $1\leq p_{s}<\infty$ if $N=p$.
Moreover the trace operator $\mbox{Tr}(u):=u_{|_{\partial\Omega}}$ initially
defined for $u\in C^{1}(\bar{\Omega})$ has an extension to a bounded linear
operator from $W^{1,p}(\Omega)$ into $L^{q_{s}}(\partial\Omega)$ where
$q_{s}:=\frac{p(N-1)}{N-p}$ if $p<N$, and $1\leq q_{s}<\infty$ if $N=p$.
Hence, there is a constant $C>0$ such that
$\|u\|_{\partial\Omega,q_{s}}\leq C\|u\|_{W^{1,p}(\Omega)},\;\text{for
all}\;u\in W^{1,p}(\Omega).$ (2.2)
Throughout the remainder of this article, for $1<p<N$, we let
$p_{s}:=\frac{pN}{N-p}\;\mbox{ and }\;q_{s}:=\frac{p(N-1)}{N-p}.$ (2.3)
If $p>N$, one has that
$W^{1,p}(\Omega)\hookrightarrow C^{0,1-\frac{N}{p}}(\bar{\Omega}),$ (2.4)
that is, the space $W^{1,p}(\Omega)$ is continuously embedded into
$C^{0,1-\frac{N}{p}}(\bar{\Omega})$. For more details, we refer to [23,
Theorem 4.7] (see also [19, Chapter 4]).
For $1<q<\infty$, we define the Sobolev space $W^{1,q}(\partial\Omega)$ to be
the completion of the space $C^{1}(\partial\Omega)$ with respect to the norm
$\|u\|_{W^{1,q}(\partial\Omega)}:=\left(\int_{\partial\Omega}|u|^{q}\;d\sigma+\int_{\partial\Omega}|\nabla_{\Gamma}u|^{q}\;d\sigma\right)^{1/q},$
where we recall that $\nabla_{\Gamma}u$ denotes the tangential gradient of the
function $u$ at the boundary $\partial\Omega$. It is also well-known that
$W^{1,q}(\partial\Omega)$ is continuously embedded into
$L^{q_{t}}(\partial\Omega)$ where $q_{t}:=\frac{q(N-1)}{N-1-q}$ if $1<q<N-1,$
and $1\leq q_{t}<\infty$ if $q=N-1$. Hence, for $1<q\leq N-1$, there exists a
constant $C>0$ such that
$\|u\|_{q_{t},\partial\Omega}\leq C\|u\|_{W^{1,q}(\partial\Omega)},\text{ for
all }u\in W^{1,q}(\partial\Omega).$ (2.5)
Let $\lambda_{N}$ denote the $N$-dimensional Lebesgue measure and let the
measure $\mu:=\lambda_{N}|_{\Omega}\oplus\sigma$ on $\overline{\Omega}$ be
defined for every measurable set $A\subset\overline{\Omega}$ by
$\mu(A):=\lambda_{N}(\Omega\cap A)+\sigma(A\cap\partial\Omega).$
For $p,q\in[1,\infty],$ we define the Banach space
$X^{p,q}(\overline{\Omega},\mu):=\\{F=(f,g):\;f\in L^{p}(\Omega)\mbox{ and
}g\in L^{q}(\partial\Omega)\\}$
endowed with the norm
$\|F\|_{X^{p,q}(\overline{\Omega})}=\||F\||_{p,q}:=\|f\|_{\Omega,p}+\|g\|_{\partial\Omega,q},$
if $1\leq p,q<\infty,$ and
$\|F\|_{X^{\infty,\infty}(\overline{\Omega},\mu)}=\||F\||_{\infty}:=\max\\{\|f\|_{\Omega,\infty},\|g\|_{\partial\Omega,\infty}\\}.$
If $p=q$, we will simply denote $\||F\||_{p,p}=\||F\||_{p}$.
Identifying each function $u\in W^{1,p}(\Omega)$ with
$U=(u,u|_{\partial\Omega})$, we have that $W^{1,p}(\Omega)$ is a subspace of
$X^{p,p}(\overline{\Omega},\mu)$.
For $1<p,q<\infty,$ we endow
$\mathcal{V}_{1}:=\\{U:=(u,u|_{\partial\Omega}),u\in
W^{1,p}(\Omega),\;u|_{\partial\Omega}\in W^{1,q}(\partial\Omega)\\}$
with the norm
$\|U\|_{\mathcal{V}_{1}}:=\|u\|_{W^{1,p}(\Omega)}+\|u\|_{W^{1,q}(\partial\Omega)},$
while
$\mathcal{V}_{0}:=\\{U=(u,u|_{\partial\Omega}):\;u\in W^{1,p}(\Omega)\\}$
is endowed with the norm
$\|U\|_{\mathcal{V}_{0}}:=\|u\|_{W^{1,p}(\Omega)}.$
It follows from (2.1)-(2.2) that $\mathcal{V}_{0}$ is continuously embedded
into $X^{p_{s},q_{s}}(\overline{\Omega},\mu),$ with $p_{s}$ and $q_{s}$ given
by (2.3), for $1<p<N$. Moreover, by (2.1) and (2.5), $\mathcal{V}_{1}$ is
continuously embedded into $X^{p_{s},q_{t}}(\overline{\Omega},\mu)$.
### 2.3. Musielak-Orlicz type spaces
For the convenience of the reader, we introduce the Orlicz and Musielak-Orlicz
type spaces and prove some properties of these spaces which will be frequently
used in the sequel (see Section 5).
###### Definition 2.3.
Let $(X,\Sigma,\nu)$ be a complete measure space. We call a function
$B:X\times\mathbb{R}\to[0,\infty]$ a _Musielak-Orlicz function_ on $X$ if
1. (a)
$B(x,\cdot)$ is non-trivial, even, convex for $\nu$-a.e. $x\in X$;
2. (b)
$B(x,\cdot)$ is vanishing and continuous at $0$ for $\nu$-a.e. $x\in X$;
3. (c)
$B(x,\cdot)$ is left continuous on $[0,\infty);$
4. (d)
$B(\cdot,t)$ is $\Sigma$-measurable for all $t\in[0,\infty)$;
5. (e)
$\displaystyle\lim_{t\rightarrow\infty}\frac{B(x,t)}{t}=\infty$.
The _complementary Musielak-Orlicz function_ $\widetilde{B}$ is defined by
$\widetilde{B}(x,t):=\sup\\{s|t|-B(x,s):s>0\\}.$
It follows directly from the definition that for $t,s\geq 0$ (and hence for
all $t,s\in\mathbb{R}$)
$st\leq B(x,t)+\widetilde{B}(x,s).$
###### Definition 2.4.
We say that a Musielak-Orlicz function $B$ satisfies the
$(\triangle_{\alpha}^{0})$-condition $(\alpha>1)$ if there exists a set
$X_{0}$ of $\nu$-measure zero and a constant $C_{\alpha}>1$ such that
$B(x,\alpha t)\leq C_{\alpha}B(x,t),$
for all $t\in\mathbb{R}$ and every $x\in X\setminus X_{0}$.
We say that $B$ satisfies the $(\nabla_{2}^{0})$-condition if there is a set
$X_{0}$ of $\nu$-measure zero and a constant $c>1$ such that
$B(x,t)\leq\frac{1}{2c}B(x,ct),$
for all $t\in\mathbb{R}$ and all $x\in X\setminus X_{0}$.
###### Definition 2.5.
A function $\Phi:\mathbb{R}\rightarrow[0,\infty)$ is called an
${\mathcal{N}}$-function if
* •
$\Phi$ is even, strictly increasing and convex;
* •
$\Phi(t)=0$ if and only if $t=0$;
* •
$\displaystyle\lim_{t\rightarrow 0}\frac{\Phi(t)}{t}=0$ and
$\displaystyle\lim_{t\rightarrow\infty}\frac{\Phi(t)}{t}=\infty$.
We say that an ${\mathcal{N}}$-function $\Phi$ satisfies the
$(\triangle_{2})$-condition if there exists a constant $C_{2}>1$ such that
$\Phi(2t)\leq C_{2}\Phi(t),\;\;\;\mbox{ for all }\;t\in\mathbb{R},$
and it satisfies the $(\nabla_{2})$-condition if there is a constant $c>1$
such that
$\Phi(t)\leq\Phi(ct)/(2c),\;\;\mbox{ for all }\;t\in\mathbb{R}.$
For more details on ${\mathcal{N}}$-functions, we refer to the monograph of
Adams [1, Chapter VIII] (see also [25, Chapter I], [26, Chapter I]).
###### Remark 2.6.
_For an ${\mathcal{N}}$-function $\Phi$, we let $\varphi$ be its left-sided
derivative. Then $\varphi$ is left continuous on $(0,\infty)$ and
nondecreasing. Let $\psi$ be given by_
$\psi\left(s\right):=\inf\left\\{t>0:\varphi\left(t\right)>s\right\\}.$
_Then_
$\Phi(t)=\int_{0}^{|t|}\varphi(s)\;ds;\qquad\Psi(t):=\int_{0}^{|t|}\psi(s)\;ds=\sup\\{|t|s-\Phi(s):s>0\\}.$
_As before for all $s,t\in\mathbb{R}$,_
$st\leq\Phi(t)+\Psi(s).$ (2.6)
_Moreover, if $s=\varphi(t)$ or $t=\psi(s)$ then we have equality, that is,_
$\Psi(\varphi(t))=t\varphi(t)-\Phi(t).$ (2.7)
_The function $\Psi$ is called the complementary ${\mathcal{N}}$-function of
$\Phi$. It is also known that an ${\mathcal{N}}$-function $\Phi$ satisfies the
$(\triangle_{2})$-condition if and only if_
$ct\varphi(t)\leq\Phi(t)\leq t\varphi(t),$ (2.8)
_for some constant $c\in(0,1]$ and for all $t\in\mathbb{R}$, where $\varphi$
is the left-sided derivative of $\Phi$._
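For instance, $\Phi(t)=|t|^{p}/p$ with $1<p<\infty$ is an ${\mathcal{N}}$-function whose complementary ${\mathcal{N}}$-function is $\Psi(s)=|s|^{p^{\prime}}/p^{\prime}$, where $\frac{1}{p}+\frac{1}{p^{\prime}}=1$; it satisfies the $(\triangle_{2})$-condition with $C_{2}=2^{p}$ and the $(\nabla_{2})$-condition with $c=2^{1/(p-1)}$.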
###### Lemma 2.7.
Let $\Phi$ be an ${\mathcal{N}}$-function which satisfies the
$(\triangle_{2})$-condition with the constant $C_{2}>1$ and let $\Psi$ be its
complementary ${\mathcal{N}}$-function. Then $\Psi$ satisfies the
$(\nabla_{2})$-condition with the constant $c:=2^{C_{2}-1}$.
###### Proof.
We have
$t\varphi(t)\leq\int_{t}^{2t}\varphi(s)\;ds\leq\int_{0}^{2t}\varphi(s)\;ds=\Phi(2t)\leq
C_{2}\Phi(t).$
Since $\varphi(\psi(s))\geq s$ for all $s\geq 0$ and $s/\Psi(s)$ and $s/(s-1)$
are decreasing, we get for $t:=\psi(s),$ that
$\frac{s\psi(s)}{\Psi(s)}\geq\frac{\varphi(\psi(s))\psi(s)}{\Psi(\varphi(\psi(s)))}=\frac{t\varphi(t)}{\Psi(\varphi(t))}=\frac{t\varphi(t)}{t\varphi(t)-\Phi(t)}\geq\frac{C_{2}}{C_{2}-1}.$
Now let $c:=2^{C_{2}-1}$. Then for $t\geq 0,$
$\displaystyle\ln\left(\frac{\Psi(ct)}{\Psi(t)}\right)$
$\displaystyle=\int_{t}^{ct}\frac{\psi(s)}{\Psi(s)}\;ds\geq\int_{t}^{ct}\frac{C_{2}}{s(C_{2}-1)}\;ds$
$\displaystyle=\frac{C_{2}}{C_{2}-1}\ln(c)=C_{2}\ln(2)=\ln(2\cdot 2^{C_{2}-1}).$
Hence, $2c\,\Psi(t)\leq\Psi(ct)$, that is, $\Psi$ satisfies the $(\nabla_{2})$-condition.
###### Corollary 2.8.
Let $B$ be a Musielak-Orlicz function such that $B(x,\cdot)$ is an
${\mathcal{N}}$-function for $\nu$-a.e. $x$. If $B$ satisfies the
$(\triangle_{2}^{0})$-condition, then $\widetilde{B}$ satisfies the
$(\nabla_{2}^{0})$-condition.
###### Definition 2.9.
Let $B$ be a Musielak-Orlicz function. Then the Musielak-Orlicz space
$L^{B}(X)$ associated with $B$ is defined by
$L^{B}(X):=\\{u:X\rightarrow\mathbb{R}\text{ measurable
}:\rho_{B}(u/\alpha)<\infty\text{ for some }\alpha>0\\},$
where
$\rho_{B}(v):=\int_{X}B(x,v(x))\;d\nu(x).$
On this space we consider the Luxemburg norm $\|\cdot\|_{X,B}$ defined by
$\|u\|_{X,B}:=\inf\\{\alpha>0:\rho_{B}(u/\alpha)\leq 1\\}.$
###### Proposition 2.10.
Let $B$ be a Musielak-Orlicz function which satisfies the $(\nabla_{2}^{0})$
-condition. Then
$\lim_{\|u\|_{X,B}\rightarrow+\infty}\frac{\rho_{B}(u)}{\|u\|_{X,B}}=+\infty.$
###### Proof.
If $B$ satisfies the $(\nabla_{2}^{0})$-condition, then there exists a set
$X_{0}\subset X$ of measure zero such that for every $\varepsilon>0$ there
exists $\alpha=\alpha(\varepsilon)>0$ such that
$B(x,\alpha t)\leq\alpha\varepsilon B(x,t),$ (2.9)
for all $t\in\mathbb{R}$ and all $x\in X\backslash X_{0}$. Let
$\lambda\in(0,\infty)$ be fixed. For $\varepsilon:=1/\lambda$ there exists
$\alpha>0$ satisfying the above inequality. We will show that
$\rho_{B}(u)\geq\lambda\|u\|_{X,B}$ whenever $\|u\|_{X,B}>1/\alpha$. Assume
that $\|u\|_{X,B}>1/\alpha$ and let $\delta>0$ be such that
$\alpha=(1+\delta)/\|u\|_{X,B}$. Then
$\displaystyle\rho_{B}(\alpha u)$
$\displaystyle=\int_{X}B(x,u(1+\delta)/\|u\|_{X,B})\;d\nu$
$\displaystyle\geq(1+\delta)^{1-1/n}\int_{X}B(x,u(1+\delta)^{1/n}/\|u\|_{X,B})\;d\nu\geq(1+\delta)^{1-1/n},$
for all $n\in\mathbb{N}$. If we assume that the last inequality does not hold,
then
$\|u\|_{X,B}/(1+\delta)\in\\{\alpha>0:\rho_{B}(u/\alpha)\leq 1\\},$
and this clearly contradicts the definition of $\|u\|_{X,B}$. Therefore, we
must have
$\rho_{B}(\alpha u)\geq 1+\delta=\alpha\|u\|_{X,B}.$ (2.10)
From (2.9), (2.10), we obtain
$\rho_{B}(u)=\int_{X}B(x,u(x))\;d\nu\geq\frac{\lambda}{\alpha}\int_{X}B(x,\alpha
u(x))\;d\nu=\frac{\lambda}{\alpha}\rho_{B}(\alpha u)\geq\lambda\|u\|_{X,B}.$
The proof is finished.
###### Corollary 2.11.
Let $B$ be a Musielak-Orlicz function such that $B(x,\cdot)$ is an
${\mathcal{N}}$-function for $\nu$-a.e. $x$. If its complementary
${\mathcal{N}}$-function $\widetilde{B}$ satisfies the
$(\triangle_{2}^{0})$-condition, then $B$ satisfies the
$(\nabla_{2}^{0})$-condition and
$\lim_{\|u\|_{X,B}\to+\infty}\frac{\rho_{B}(u)}{\|u\|_{X,B}}=+\infty.$
### 2.4. Some tools
For the reader’s convenience, we report here below some useful inequalities
which will be needed in the course of investigation.
###### Lemma 2.12.
Let $a,b\in\mathbb{R}^{N}$ and $p\in(1,\infty)$. Then, there exists a constant
$C_{p}>0$ such that
$\left(|a|^{p-2}a-|b|^{p-2}b\right)(a-b)\geq
C_{p}\left(|a|+|b|\right)^{p-2}|a-b|^{2}\geq 0.$ (2.11)
If $p\in[2,\infty)$, then there exists a constant $c_{p}\in(0,1]$ such that
$\left(|a|^{p-2}a-|b|^{p-2}b\right)(a-b)\geq c_{p}|a-b|^{p}.$ (2.12)
###### Proof.
The proof of (2.12) is included in [10, Lemma I.4.4]. In order to show (2.11),
one only needs to show that the left hand side is non-negative, which follows
easily.
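Although (2.11) and (2.12) are purely analytic statements, they are easy to probe numerically; the following small Python sketch (purely illustrative, not a proof, with an arbitrarily chosen exponent $p=3$) samples random vectors and reports an empirical lower bound for the ratio appearing in (2.12):

```python
# Illustrative numerical check of the monotonicity inequality (2.12) for p = 3.
import numpy as np

rng = np.random.default_rng(0)
p = 3.0
ratios = []
for _ in range(10_000):
    a, b = rng.normal(size=3), rng.normal(size=3)
    lhs = np.dot(np.linalg.norm(a)**(p - 2) * a - np.linalg.norm(b)**(p - 2) * b, a - b)
    ratios.append(lhs / np.linalg.norm(a - b)**p)
print(min(ratios) > 0.0)   # monotonicity: the left-hand side stays nonnegative
print(min(ratios))         # empirical lower bound for the constant c_p at p = 3
```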
The following result which is of analytic nature and whose proof can be found
in [22, Lemma 3.11] will be useful in deriving some a priori estimates of weak
solutions of elliptic equations.
###### Lemma 2.13.
Let $\psi:[k_{0},\infty)\rightarrow\mathbb{R}$ be a non-negative, non-
increasing function such that there are positive constants $c,\alpha$ and
$\delta$ ($\delta>1$) such that
$\psi(h)\leq c(h-k)^{-\alpha}\psi(k)^{\delta},\qquad\forall\;h>k\geq k_{0}.$
Then $\psi(k_{0}+d)=0$ with
$d=c^{1/\alpha}\psi(k_{0})^{(\delta-1)/\alpha}2^{\delta(\delta-1)}$.
## 3\. The Fredholm alternative
In what follows, we assume that $\Omega\subset\mathbb{R}^{N}$ is a bounded
domain with Lipschitz boundary $\partial\Omega$. Let $b\in
L^{\infty}(\partial\Omega)$ satisfy $b(x)\geq b_{0}>0$ for some constant
$b_{0}$. Let $\mathbb{X}_{2}$ be the real Hilbert space
$L^{2}\left(\Omega,dx\right)\oplus
L^{2}\left(\partial\Omega,\frac{d\sigma}{b}\right)$. Then, it is clear that
$\mathbb{X}_{2}$ is isomorphic to
$X^{2,2}(\overline{\Omega},\lambda_{N}\oplus\sigma)$ with equivalent norms.
Next, let $\rho\in\\{0,1\\}$ and $p,q\in(1,+\infty)$ be fixed. We define the
functional $\mathcal{J}_{\rho}:\;\mathbb{X}_{2}\rightarrow[0,+\infty]$ by
setting
$\mathcal{J}_{\rho}\left(U\right)=\begin{cases}\displaystyle\frac{1}{p}\int_{\Omega}\left|\nabla
u\right|^{p}dx+\frac{1}{q}\int_{\partial\Omega}\rho\left|\nabla_{\Gamma}u\right|^{q}\;d\sigma,&\text{
if }U=\left(u,u_{\mid\partial\Omega}\right)\in
D\left(\mathcal{J}_{\rho}\right),\\\ +\infty,&\text{ if
}U\in\mathbb{X}_{2}\diagdown D\left(\mathcal{J}_{\rho}\right),\end{cases}$
(3.1)
where the effective domain is given by
$D(\mathcal{J}_{\rho})=\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$.
Throughout the remainder of this section, we let
$\displaystyle\mu:=\lambda_{N}\oplus\frac{d\sigma}{b}$. The following result
can be obtained easily.
###### Proposition 3.1.
The functional $\mathcal{J}_{\rho}$ defined by (3.1) is proper, convex and
lower semicontinuous on $\mathbb{X}_{2}=X^{2,2}(\overline{\Omega},\mu)$.
The following result contains a computation of the subdifferential
$\partial\mathcal{J}_{\rho}$ for the functional $\mathcal{J}_{\rho}$.
###### Remark 3.2.
_Let $U=(u,u|_{\partial\Omega})\in D(\mathcal{J}_{\rho})$ and let
$F:=(f,g)\in\partial\mathcal{J}_{\rho}(U)$. Then, by definition,
$F\in\mathbb{X}_{2}$ and for all $V=(v,v|_{\partial\Omega})\in
D(\mathcal{J}_{\rho})$, we have_
$\int_{\overline{\Omega}}F(V-U)\;d\mu\leq\frac{1}{p}\int_{\Omega}\bigg{(}|\nabla
v|^{p}-|\nabla
u|^{p}\bigg{)}\;dx+\frac{\rho}{q}\int_{\partial\Omega}\bigg{(}|\nabla_{\Gamma}v|^{q}-|\nabla_{\Gamma}u|^{q}\bigg{)}\;d\sigma.$
_Let $W=(w,w|_{\partial\Omega})\in D(\mathcal{J}_{\rho})$, $0<t\leq 1$ and set
$V:=tW+U$ above. Dividing by $t$ and taking the limit as $t\downarrow 0$, we
obtain that_
$\int_{\overline{\Omega}}FW\;d\mu\leq\int_{\Omega}|\nabla u|^{p-2}\nabla
u\cdot\nabla
w\;dx+\rho\int_{\partial\Omega}|\nabla_{\Gamma}u|^{q-2}\nabla_{\Gamma}u\cdot\nabla_{\Gamma}w\,d\sigma,$
(3.2)
_where we recall that_
$\int_{\overline{\Omega}}F\;d\mu=\int_{\Omega}f\;dx+\int_{\partial\Omega}g\;\frac{d\sigma}{b}.$
_Choosing $w=\pm\psi$ with $\psi\in\mathcal{D}(\Omega)$ (the space of test
functions) and integrating by parts in (3.2), we obtain_
$-\Delta_{p}u=f\;\;\mbox{ in }\;\mathcal{D}^{\prime}(\Omega)$
_and_
$g=b(x)\left|\nabla u\right|^{p-2}\partial_{n}u-\rho
b\left(x\right)\Delta_{q,\Gamma}u\;\mbox{ weakly on }\;\partial\Omega.$
_Therefore, the single valued operator $\partial\mathcal{J}_{\rho}$ is given
by_
$D(\partial\mathcal{J}_{\rho})=\\{U=(u,u_{\mid\partial\Omega})\in
D(\mathcal{J}_{\rho}),\;\left(-\Delta_{p}u,b(x)\left|\nabla
u\right|^{p-2}\partial_{n}u-\rho
b\left(x\right)\Delta_{q,\Gamma}u\right)\in\mathbb{X}_{2}\\},$
_and_
$\partial\mathcal{J}_{\rho}(U)=\left(-\Delta_{p}u,b(x)\left|\nabla
u\right|^{p-2}\partial_{n}u-\rho b\left(x\right)\Delta_{q,\Gamma}u\right).$
(3.3)
Since the functional $\mathcal{J}_{\rho}$ is proper, convex and lower
semicontinuous, it follows that its subdifferential
$\partial\mathcal{J}_{\rho}$ is a maximal monotone operator.
In the following two lemmas, we establish a relation between the null space of
the operator $A_{\rho}:=\partial\mathcal{J}_{\rho}$ and its range.
###### Lemma 3.3.
Let $\mathcal{N}\left(A_{\rho}\right)$ denote the null space of the operator
$A_{\rho}$. Then
$\mathcal{N}\left(A_{\rho}\right)=C\mathbf{1}=\left\\{C=(c,c):\;c\in\mathbb{R}\right\\},$
that is, $\mathcal{N}\left(A_{\rho}\right)$ consists of all the real constant
functions on $\overline{\Omega}$.
###### Proof.
We say that $U\in\mathcal{N}\left(A_{\rho}\right)$ if and only if (by
definition) $U=(u,u|_{\partial\Omega})$ is a weak solution of
$\begin{cases}-\Delta_{p}u=0,&\text{ in }\Omega,\\\
b\left(x\right)\left|\nabla u\right|^{p-2}\partial_{n}u-\rho
b\left(x\right)\Delta_{q,\Gamma}u=0,&\text{ on }\partial\Omega.\end{cases}$
(3.4)
A function $U=(u,u|_{\partial\Omega})\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$
is said to be a weak solution of (3.4), if for every
$V=(v,v|_{\partial\Omega})\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$, there
holds
$\mathcal{A}_{\rho}(U,V):=\int_{\Omega}\left|\nabla u\right|^{p-2}\nabla
u\cdot\nabla
v\;dx+\rho\int_{\partial\Omega}\left|\nabla_{\Gamma}u\right|^{q-2}\nabla_{\Gamma}u\cdot\nabla_{\Gamma}v\;d\sigma=0.$
(3.5)
Let $C:=(c,c)$ with $c\in\mathbb{R}$. Then it is clear that
$C\in\mathcal{N}\left(A_{\rho}\right)$.
Conversely, let
$U=(u,u|_{\partial\Omega})\in\mathcal{N}\left(A_{\rho}\right)$. Then, it
follows from (3.5) that
$\mathcal{A}_{\rho}(U,U):=\int_{\Omega}\left|\nabla
u\right|^{p}\;dx+\rho\int_{\partial\Omega}\left|\nabla_{\Gamma}u\right|^{q}\;d\sigma=0.$
(3.6)
Since (3.6) forces $\nabla u=0$ a.e. in $\Omega$, and $\Omega$ is bounded and connected, $u$ is
equal to a constant. Therefore, $U=C\mathbf{1}$ and this completes the proof.
###### Lemma 3.4.
The range of the operator $A_{\rho}$ is given by
$\mathcal{R}(A_{\rho})=\left\\{F:=(f,g)\in\mathbb{X}_{2}:\;\int_{\overline{\Omega}}F\;d\mu:=\int_{\Omega}f\;dx+\int_{\partial\Omega}g\;\frac{d\sigma}{b(x)}=0\right\\}.$
###### Proof.
Let $F\in\mathcal{R}(A_{\rho})\subset\mathbb{X}_{2}$. Then there exists
$U=(u,u|_{\partial\Omega})\in D(A_{\rho})$ such that $A_{\rho}(U)=F$. More
precisely, for every
$V=(v,v|_{\partial\Omega})\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2},$ we have
$\displaystyle\mathcal{A}_{\rho}(U,V)$
$\displaystyle=\int_{\Omega}\left|\nabla u\right|^{p-2}\nabla u\cdot\nabla
v\;dx+\rho\int_{\partial\Omega}\left|\nabla_{\Gamma}u\right|^{q-2}\nabla_{\Gamma}u\cdot\nabla_{\Gamma}v\;d\sigma$
(3.7) $\displaystyle=\int_{\overline{\Omega}}FV\;d\mu.$
Taking $V=(1,1)\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$, we obtain that
$\displaystyle\int_{\overline{\Omega}}F\;d\mu=0$. Hence,
$\displaystyle\mathcal{R}(A_{\rho})\subseteq\left\\{F\in\mathbb{X}_{2}:\;\int_{\overline{\Omega}}F\;d\mu=0\right\\}.$
Let us now prove the converse. To this end, let $F\in\mathbb{X}_{2}$ be such
that $\displaystyle\int_{\overline{\Omega}}F\;d\mu=0$. We have to show that
$F\in\mathcal{R}(A_{\rho})$, that is, there exists
$U\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$ such that (3.7) holds, for every
$V\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$. To this end, consider
$\mathcal{V}_{\rho,0}:=\left\\{U=(u,u|_{\partial\Omega})\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}:\;\int_{\overline{\Omega}}U\;d\mu:=\int_{\Omega}u\;dx+\int_{\partial\Omega}u\frac{d\sigma}{b}=0\right\\}.$
It is clear that $\mathcal{V}_{\rho,0}$ is a closed linear subspace of
$\mathcal{V}_{\rho}\cap\mathbb{X}_{2}\hookrightarrow\mathbb{X}_{2}$, and
therefore is a reflexive Banach space. Using [18, Section 1.1], we have that
the norm
$\|U\|_{\mathcal{V}_{\rho,0}}:=\|\nabla
u\|_{p,\Omega}+\rho\|\nabla_{\Gamma}u\|_{q,\partial\Omega}$
defines an equivalent norm on $\mathcal{V}_{\rho,0}$. Hence, there exists a
constant $C>0$ such that for every $U\in\mathcal{V}_{\rho,0}$,
$\||U\||_{2}\leq C\|U\|_{\mathcal{V}_{\rho,0}}:=\|\nabla
u\|_{p,\Omega}+\rho\|\nabla_{\Gamma}u\|_{q,\partial\Omega}.$ (3.8)
Define the functional
$\mathcal{F}_{\rho}:\;\mathcal{V}_{\rho,0}\rightarrow\mathbb{R}$ by
$\mathcal{F}_{\rho}(U)=\frac{1}{p}\int_{\Omega}|\nabla
u|^{p}\;dx+\frac{\rho}{q}\int_{\partial\Omega}|\nabla_{\Gamma}u|^{q}\;d\sigma-\int_{\overline{\Omega}}FU\;d\mu.$
It is easy to see that $\mathcal{F}_{\rho}$ is convex and lower semicontinuous
on $\mathcal{V}_{\rho,0}$ (see Proposition 3.1). We show now that
$\mathcal{F}_{\rho}$ is coercive. By exploiting a classical Hölder inequality
and using (3.8), we have
$\displaystyle\left|\int_{\overline{\Omega}}FU\;d\mu\right|$
$\displaystyle\leq C\||F\||_{2}\||U\||_{2}\leq
C\||F\||_{2}\|U\|_{\mathcal{V}_{\rho,0}}$
$\displaystyle=C\||F\||_{2}\left(\|\nabla
u\|_{p,\Omega}+\rho\|\nabla_{\Gamma}u\|_{q,\partial\Omega}\right).$
Obviously, this estimate yields
$-\int_{\overline{\Omega}}FU\;d\mu\geq-C\||F\||_{2}\left(\|\nabla
u\|_{p,\Omega}+\rho\|\nabla_{\Gamma}u\|_{q,\partial\Omega}\right).$ (3.9)
Therefore, from (3.9), we immediately get
$\frac{\mathcal{F}_{\rho}(U)}{\|U\|_{\mathcal{V}_{\rho,0}}}\geq\frac{\frac{1}{p}\|\nabla
u\|_{p,\Omega}^{p}+\frac{\rho}{q}\|\nabla_{\Gamma}u\|_{q,\partial\Omega}^{q}}{{\|\nabla
u\|_{p,\Omega}+\rho\|\nabla_{\Gamma}u\|_{q,\partial\Omega}}}-C\||F\||_{2}.$
This inequality implies that
$\lim_{\|U\|_{\mathcal{V}_{\rho,0}}\rightarrow+\infty}\frac{\mathcal{F}_{\rho}(U)}{\|U\|_{\mathcal{V}_{\rho,0}}}=+\infty,$
and this shows that the functional $\mathcal{F}_{\rho}$ is coercive. Since
$\mathcal{F}_{\rho}$ is also convex and lower semicontinuous, it follows from [3,
Theorem 3.3.4] that there exists a function $U^{\ast}\in\mathcal{V}_{\rho,0}$
which minimizes $\mathcal{F}_{\rho}$. More precisely, for all
$V\in\mathcal{V}_{\rho,0}$,
$\mathcal{F}_{\rho}(U^{\ast})\leq\mathcal{F}_{\rho}(V);$ this implies that for
every $0<t\leq 1$ and every $V\in\mathcal{V}_{\rho,0}$,
$\mathcal{F}_{\rho}(U^{\ast}+tV)-\mathcal{F}_{\rho}(U^{\ast})\geq 0.$
Hence,
$\lim_{t\downarrow
0}\frac{\mathcal{F}_{\rho}(U^{\ast}+tV)-\mathcal{F}_{\rho}(U^{\ast})}{t}\geq
0.$
Using the Lebesgue Dominated Convergence Theorem, an easy computation shows that
$\displaystyle 0\leq\lim_{t\downarrow
0}\frac{\mathcal{F}_{\rho}(U^{\ast}+tV)-\mathcal{F}_{\rho}(U^{\ast})}{t}$
$\displaystyle=\int_{\Omega}|\nabla u^{\ast}|^{p-2}\nabla u^{\ast}\cdot\nabla
v\;dx$ (3.10)
$\displaystyle+\rho\int_{\partial\Omega}|\nabla_{\Gamma}u^{\ast}|^{q-2}\nabla_{\Gamma}u^{\ast}\cdot\nabla_{\Gamma}v\;d\sigma-\int_{\overline{\Omega}}FV\;d\mu.$
Changing $V$ to $-V$ in (3.10) gives that
$\int_{\Omega}|\nabla u^{\ast}|^{p-2}\nabla u^{\ast}\cdot\nabla
v\;dx+\rho\int_{\partial\Omega}|\nabla_{\Gamma}u^{\ast}|^{q-2}\nabla_{\Gamma}u^{\ast}\cdot\nabla_{\Gamma}v\;d\sigma=\int_{\overline{\Omega}}FV\;d\mu,$
(3.11)
for every $V\in\mathcal{V}_{\rho,0}$. Now, let
$V\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$. Writing $V=V-C+C$ with $C=(c,c),$
$c:=\frac{1}{\left(\lambda_{1}+\lambda_{2}\right)}\left(\int_{\Omega}v\;dx+\int_{\partial\Omega}v\;\frac{d\sigma}{b}\right),$
and using the fact that $\displaystyle\int_{\overline{\Omega}}F\;d\mu=0$, we
obtain, for every $V\in\mathcal{V}_{\rho}\cap\mathbb{X}_{2}$, that
$\int_{\Omega}|\nabla u^{\ast}|^{p-2}\nabla u^{\ast}\cdot\nabla
v\;dx+\rho\int_{\partial\Omega}|\nabla_{\Gamma}u^{\ast}|^{q-2}\nabla_{\Gamma}u^{\ast}\cdot\nabla_{\Gamma}v\;d\sigma=\int_{\overline{\Omega}}FV\;d\mu.$
Therefore, $A_{\rho}(U)=F$. Hence, $F\in\mathcal{R}(A_{\rho})$ and this
completes the proof of the lemma.
The following result, which is the main result of this section, is a direct
consequence of Lemmas 3.3 and 3.4.
###### Theorem 3.5.
The operator $A_{\rho}=\partial\mathcal{J}_{\rho}$ satisfies the following
type of “quasi-linear” Fredholm alternative:
$\mathcal{R}\left(A_{\rho}\right)=\mathcal{N}\left(A_{\rho}\right)^{\perp}=\left\\{F\in\mathbb{X}_{2}:\left\langle
F,\mathbf{1}\right\rangle_{\mathbb{X}_{2}}=0\right\\}.$
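In concrete terms (this unpacking is included here for the reader's convenience and is not part of the statement), the orthogonality condition above reads
$\left\langle F,\mathbf{1}\right\rangle_{\mathbb{X}_{2}}=\int_{\Omega}f\;dx+\int_{\partial\Omega}g\;\frac{d\sigma}{b}=0,$
so the equation $A_{\rho}U=F$ is solvable precisely when the datum $F=(f,g)$ has zero mean with respect to the measure $\mu$.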
## 4\. Necessary and sufficient conditions for existence of solutions
In this section, we prove the first main result (cf. Theorem 1.1) for problem
(1.1). Before we do so, we will need the following results from maximal
monotone operators theory and convex analysis.
###### Definition 4.1.
Let $\mathcal{H}$ be a real Hilbert space. Two subsets $K_{1}$ and $K_{2}$ of
$\mathcal{H}$ are said to be almost equal, written $K_{1}\simeq K_{2},$ if
$K_{1}$ and $K_{2}$ have the same closure and the same interior, that is,
$\overline{K_{1}}=\overline{K_{2}}$ and
$\mbox{int}\left(K_{1}\right)=\mbox{int}\left(K_{2}\right).$
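As a simple illustration (not part of the original definition), the intervals $K_{1}=[0,1)$ and $K_{2}=(0,1]$ are almost equal in $\mathbb{R}$, since
$\overline{K_{1}}=\overline{K_{2}}=[0,1]\;\mbox{ and }\;\mbox{int}\left(K_{1}\right)=\mbox{int}\left(K_{2}\right)=(0,1),$
although $K_{1}\neq K_{2}$.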
The following abstract result is taken from [8, Theorem 3 and Generalization
in p.173–174].
###### Theorem 4.2 (Brezis-Haraux).
Let $A$ and $B$ be subdifferentials of proper convex lower semicontinuous
functionals $\varphi_{1}$ and $\varphi_{2}$, respectively, on a real Hilbert
space $\mathcal{H}$ with $D(\varphi_{1})\cap D(\varphi_{2})\neq\emptyset$, and
let $C$ be the subdifferential of the proper, convex lower semicontinuous
functional $\varphi_{1}+\varphi_{2}$, that is
$C=\partial(\varphi_{1}+\varphi_{2})$. Then
$\mathcal{R}(A)+\mathcal{R}(B)\subset\overline{\mathcal{R}(C)}\;\;\;\mbox{ and }\;\;\;\mbox{Int}\left(\mathcal{R}(A)+\mathcal{R}(B)\right)\subset\mathcal{R}(C).$
In particular, if the operator $A+B$ is maximal monotone, then
$\mathcal{R}\left(A+B\right)\simeq\mathcal{R}\left(A\right)+\mathcal{R}\left(B\right),$
and this is the case if
$\partial(\varphi_{1}+\varphi_{2})=\partial\varphi_{1}+\partial\varphi_{2}$.
### 4.1. Assumptions and intermediate results
Let us recall that the aim of this section is to establish some necessary and
sufficient conditions for the solvability of the following nonlinear elliptic
problem:
$\begin{cases}-\Delta_{p}u+\alpha_{1}\left(u\right)=f,&\text{ in }\Omega,\\\
&\\\ b\left(x\right)\left|\nabla u\right|^{p-2}\partial_{n}u-\rho
b\left(x\right)\Delta_{q,\Gamma}u+\alpha_{2}\left(u\right)=g,&\text{ on
}\partial\Omega,\end{cases}$ (4.1)
where $p,q\in(1,+\infty)$ are fixed. We also assume that
$\alpha_{j}:\mathbb{R}\rightarrow\mathbb{R}$ ($j=1,2$) satisfy the following
assumptions.
###### Assumption 4.3.
The functions $\alpha_{j}:\;\mathbb{R}\rightarrow\mathbb{R}$ $(j=1,2)$ are
odd, monotone nondecreasing, continuous and satisfy $\alpha_{j}(0)=0$.
Let $\tilde{\alpha}_{j}$ be the inverse of $\alpha_{j}$. We define the
functions
$\Lambda_{j},\;\widetilde{\Lambda}_{j}:\;\mathbb{R}\rightarrow\mathbb{R}_{+}$
($j=1,2$) by
$\Lambda_{j}(t):=\int_{0}^{|t|}\alpha_{j}(s)ds\;\mbox{ and
}\;\widetilde{\Lambda}_{j}(t):=\int_{0}^{|t|}\tilde{\alpha}_{j}(s)ds.$ (4.2)
Then it is clear that $\Lambda_{j}$, $\widetilde{\Lambda}_{j}$ are even,
convex and monotone increasing on $\mathbb{R}_{+},$ with
$\Lambda_{j}(0)=\widetilde{\Lambda}_{j}(0)=0,$ for each $j=1,2$. Moreover, since
$\alpha_{j}$ are odd, we have
$\Lambda_{j}^{{}^{\prime}}\left(t\right)=\alpha_{j}\left(t\right),$ for all
$t\in\mathbb{R}$ and $j=1,2$, with a similar relation holding for
$\widetilde{\Lambda}_{j}$ as well. The following result, whose proof can be
found in [25, Chap. I, Section 1.3, Theorem 3], holds.
###### Lemma 4.4.
The functions $\Lambda_{j}$ and $\widetilde{\Lambda}_{j}$ $(j=1,2)$ satisfy
(2.6) and (2.7). More precisely, for all $s,t\in\mathbb{R}$,
$st\leq\Lambda_{j}(s)+\widetilde{\Lambda}_{j}(t).$
If $t=\alpha_{j}(s)$ or $s=\tilde{\alpha}_{j}(t),$ then we also have equality,
that is,
$\widetilde{\Lambda}_{j}(\alpha_{j}(s))=s\alpha_{j}(s)-\Lambda_{j}(s),\text{
}j=1,2.$
We note that in [25], the statement of Lemma 4.4 assumed that $\Lambda_{j}$,
$\widetilde{\Lambda}_{j}$ are $\mathcal{N}$-functions in the sense of
Definition 2.5. However, the conclusion of that result holds under the weaker
hypotheses of Lemma 4.4.
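For orientation, the following model computation (not contained in the text above) shows how Lemma 4.4 reduces to the classical Young inequality. Take $\alpha_{j}(t)=|t|^{r-1}t$ with $r>0$, so that $\tilde{\alpha}_{j}(t)=|t|^{\frac{1}{r}-1}t$ and, by (4.2),
$\Lambda_{j}(t)=\frac{|t|^{r+1}}{r+1}\;\mbox{ and }\;\widetilde{\Lambda}_{j}(t)=\frac{r}{r+1}|t|^{\frac{r+1}{r}}.$
Writing $m:=r+1$ and $m^{\prime}:=\frac{m}{m-1}=\frac{r+1}{r}$, the inequality of Lemma 4.4 reads
$st\leq\frac{|s|^{m}}{m}+\frac{|t|^{m^{\prime}}}{m^{\prime}},$
which is the classical Young inequality, with equality when $t=\alpha_{j}(s)=|s|^{r-1}s$.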
Define the functional $\mathcal{J}_{2}:\;\mathbb{X}_{2}\rightarrow[0,+\infty]$
by
$\mathcal{J}_{2}(u,v):=\begin{cases}\displaystyle\int_{\Omega}\Lambda_{1}(u)\;dx+\int_{\partial\Omega}\Lambda_{2}(v)\;\frac{d\sigma}{b},\;\;\;&\mbox{
if }\;(u,v)\in D(\mathcal{J}_{2}),\\\ +\infty,&\mbox{ if
}(u,v)\in\mathbb{X}_{2}\setminus D(\mathcal{J}_{2}),\end{cases}$
with the effective domain
$D(\mathcal{J}_{2}):=\left\\{(u,v)\in\mathbb{X}_{2}:\;\int_{\Omega}\Lambda_{1}(u)\;dx+\int_{\partial\Omega}\Lambda_{2}(v)\;\frac{d\sigma}{b}<\infty\right\\}.$
###### Lemma 4.5.
Let $\alpha_{j}$ $(j=1,2)$ satisfy Assumption 4.3. Then the functional
$\mathcal{J}_{2}$ is proper, convex and lower semicontinuous on
$\mathbb{X}_{2}$.
###### Proof.
It is routine to check that $\mathcal{J}_{2}$ is convex and proper. This
follows easily from the convexity of $\Lambda_{j}$ and the fact that
$\Lambda_{j}(0)=0$. To show the lower semicontinuity on $\mathbb{X}_{2}$, let
$U_{n}=(u_{n},v_{n})\in D(\mathcal{J}_{2})$ be such that $U_{n}\rightarrow
U:=(u,v)$ in $\mathbb{X}_{2}$ and $\mathcal{J}_{2}(U_{n})\leq C$ for some
constant $C>0$. Since $U_{n}\rightarrow U$ in $\mathbb{X}_{2}$, then there is
a subsequence, which we also denote by $U_{n}=(u_{n},v_{n}),$ such that
$u_{n}\rightarrow u$ a.e. on $\Omega$ and $v_{n}\rightarrow v$ $\sigma$-a.e.
on $\Gamma$. Since $\Lambda_{j}(\cdot)$ are continuous (thus, lower-
semicontinuous), we have
$\Lambda_{1}(u)\leq\liminf_{n\rightarrow\infty}\Lambda_{1}(u_{n})\;\mbox{ and
}\;\Lambda_{2}(v)\leq\liminf_{n\rightarrow\infty}\Lambda_{2}(v_{n}).$
By Fatou’s Lemma, we obtain
$\int_{\Omega}\Lambda_{1}(u)dx\leq\int_{\Omega}\liminf_{n\rightarrow\infty}\Lambda_{1}(u_{n})dx\leq\liminf_{n\rightarrow\infty}\int_{\Omega}\Lambda_{1}(u_{n})dx$
and
$\int_{\partial\Omega}\Lambda_{2}(v)\frac{d\sigma}{b}\leq\int_{\partial\Omega}\liminf_{n\rightarrow\infty}\Lambda_{2}(v_{n})\frac{d\sigma}{b}\leq\liminf_{n\rightarrow\infty}\int_{\partial\Omega}\Lambda_{2}(v_{n})\frac{d\sigma}{b}.$
Hence, $\mathcal{J}_{2}$ is lower semicontinuous on $\mathbb{X}_{2}$.
We have the following result whose proof is contained in [25, Chap. III,
Section 3.1, Theorem 2].
###### Lemma 4.6.
Let $\alpha_{j}$ $(j=1,2)$ satisfy Assumption 4.3 and assume that there exist
constants $C_{j}>1$ $(j=1,2)$ such that
$\Lambda_{j}(2t)\leq C_{j}\Lambda_{j}(t),\;\mbox{ for all }\;t\in\mathbb{R}.$
(4.3)
Then $D(\mathcal{J}_{2})$ is a vector space.
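For orientation (this one-line justification is not spelled out above), condition (4.3) is exactly what makes $D(\mathcal{J}_{2})$ stable under addition: by convexity of $\Lambda_{j}$ and (4.3),
$\Lambda_{j}(s+t)\leq\frac{1}{2}\Lambda_{j}(2s)+\frac{1}{2}\Lambda_{j}(2t)\leq\frac{C_{j}}{2}\left(\Lambda_{j}(s)+\Lambda_{j}(t)\right),\;\;s,t\in\mathbb{R},$
so the defining integrals remain finite for sums of elements of $D(\mathcal{J}_{2})$.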
Let the operator $B_{2}$ be defined by
$\begin{cases}D\left(B_{2}\right)=\left\\{U:=\left(u,v\right)\in\mathbb{X}_{2}:\left(\alpha_{1}\left(u\right),\alpha_{2}\left(v\right)\right)\in\mathbb{X}_{2}\right\\},\\\
B_{2}(U)=\left(\alpha_{1}\left(u\right),\alpha_{2}\left(v\right)\right).\end{cases}$
(4.4)
We have the following result.
###### Lemma 4.7.
Let the assumptions of Lemma 4.6 be satisfied. Then the subdifferential of
$\mathcal{J}_{2}$ and the operator $B_{2}$ coincide, that is, for all
$(u,v)\in D(B_{2})=D(\partial\mathcal{J}_{2})$,
$\partial\mathcal{J}_{2}(u,v)=B_{2}(u,v).$
###### Proof.
Let $U=(u,v)\in D(\mathcal{J}_{2})$ and
$F=(f,g)\in\partial\mathcal{J}_{2}(u,v)$. Then by definition,
$F\in\mathbb{X}_{2}$ and, for every $V=(u_{1},v_{1})\in D(\mathcal{J}_{2}),$
we get
$\int_{\overline{\Omega}}F(V-U)\;d\mu\leq\mathcal{J}_{2}(V)-\mathcal{J}_{2}(U).$
Let $V=U+tW,$ with $W=(u_{2},v_{2})\in D(\mathcal{J}_{2})$ and $0<t\leq 1$.
Then by Lemma 4.6, $V=U+tW\in D(\mathcal{J}_{2})$. Now, dividing by $t$ and
taking the limit as $t\downarrow 0$, we obtain
$\int_{\overline{\Omega}}FW\;d\mu\leq\int_{\Omega}\alpha_{1}(u)u_{2}dx+\int_{\partial\Omega}\alpha_{2}(v)v_{2}\,\frac{d\sigma}{b}.$
(4.5)
Changing $W$ to $-W$ in (4.5) gives that
$\int_{\overline{\Omega}}FW\;d\mu=\int_{\Omega}\alpha_{1}(u)u_{2}dx+\int_{\partial\Omega}\alpha_{2}(v)v_{2}\,\frac{d\sigma}{b}.$
In particular, if $W=(u_{2},0)$ with $u_{2}\in\mathcal{D}(\Omega)$, we have
$\int_{\Omega}fu_{2}\;dx=\int_{\Omega}\alpha_{1}(u)u_{2}\;dx,$
and this shows that $\alpha_{1}(u)=f$. Similarly, one obtains that
$\alpha_{2}(v)=g$. We have shown that $U\in D(B_{2})$ and
$B_{2}(U):=B_{2}(u,v)=(\alpha_{1}(u),\alpha_{2}(v))=(f,g).$
Conversely, let $U=(u,v)\in D(B_{2})$ and set
$F=(f,g):=B_{2}(u,v)=(\alpha_{1}(u),\alpha_{2}(v))$. Since
$(\alpha_{1}(u),\alpha_{2}(v))\in\mathbb{X}_{2}$, from (4.2) and (4.3), it
follows that
$\int_{\Omega}\Lambda_{1}(u)dx+\int_{\partial\Omega}\Lambda_{2}(v)\frac{d\sigma}{b}<\infty.$
Hence, $U=(u,v)\in D(\mathcal{J}_{2})$. Let $V=(u_{1},v_{1})\in
D(\mathcal{J}_{2})$. Using Lemma 4.4, we obtain
$\displaystyle\alpha_{1}(u)(u_{1}-u)$
$\displaystyle=\alpha_{1}(u)u_{1}-\alpha_{1}(u)u$ (4.6)
$\displaystyle\leq\Lambda_{1}(u_{1})+\widetilde{\Lambda}_{1}(\alpha_{1}(u))-\alpha_{1}(u)u$
$\displaystyle=\Lambda_{1}(u_{1})-\Lambda_{1}(u)$
and similarly,
$\alpha_{2}(v)(v_{1}-v)\leq\Lambda_{2}(v_{1})-\Lambda_{2}(v).$
Therefore,
$\displaystyle\int_{\overline{\Omega}}F(V-U)\;d\mu$
$\displaystyle=\int_{\Omega}\alpha_{1}(u)(u_{1}-u)dx+\int_{\partial\Omega}\alpha_{2}(v)(v_{1}-v)\frac{d\sigma}{b}$
$\displaystyle\leq\mathcal{J}_{2}(V)-\mathcal{J}_{2}(U).$
By definition, this shows that
$F=(\alpha_{1}(u),\alpha_{2}(v))=B_{2}(U)\in\partial\mathcal{J}_{2}(U)$. We
have shown that $U\in D(\partial\mathcal{J}_{2})$ and
$B_{2}(U)\in\partial\mathcal{J}_{2}(U)$. This completes the proof of the
lemma.
Next, we define the functional
$\mathcal{J}_{3,\rho}:\;\mathbb{X}_{2}\rightarrow[0,+\infty]$ by
$\mathcal{J}_{3,\rho}(U)=\begin{cases}\mathcal{J}_{\rho}(U)+\mathcal{J}_{2}(U)\;\;&\mbox{
if }\;U\in D(\mathcal{J}_{3,\rho}):=D(\mathcal{J}_{\rho})\cap
D(\mathcal{J}_{2}),\\\ +\infty&\mbox{ if }\;U\in\mathbb{X}_{2}\backslash
D(\mathcal{J}_{3,\rho}).\end{cases}$ (4.7)
Note that for $\rho=0$,
$D(\mathcal{J}_{3,0})=\\{U=(u,u|_{\partial\Omega})\in D(\mathcal{J}_{2}):u\in
W^{1,p}(\Omega)\cap L^{2}(\Omega),\;u|_{\partial\Omega}\in
L^{2}(\partial\Omega)\\},$ (4.8)
while for $\rho=1$,
$D(\mathcal{J}_{3,1})=\\{U=(u,u|_{\partial\Omega})\in D(\mathcal{J}_{2}):u\in
W^{1,p}(\Omega)\cap L^{2}(\Omega),\;u|_{\partial\Omega}\in
W^{1,q}(\partial\Omega)\cap L^{2}(\partial\Omega)\\}.$ (4.9)
We have the following result.
###### Lemma 4.8.
Let the assumptions of Lemma 4.6 be satisfied. Then the subdifferential of the
functional $\mathcal{J}_{3,\rho}$ is given by
$\displaystyle D(\partial\mathcal{J}_{3,\rho})$
$\displaystyle=\left\\{U=(u,u_{\mid\partial\Omega})\in
D(\mathcal{J}_{3,\rho}):-\Delta_{p}u+\alpha_{1}(u)\in L^{2}(\Omega)\right.$
$\displaystyle\left.\text{and }b(x)|\nabla
u|^{p-2}\partial_{n}u-b(x)\rho\Delta_{q,\Gamma}u+\alpha_{2}(u)\in
L^{2}(\partial\Omega,{d\sigma}/{b})\right\\}$
and
$\partial\mathcal{J}_{3,\rho}(U)=\bigg{(}-\Delta_{p}u+\alpha_{1}(u),b(x)|\nabla
u|^{p-2}\partial_{n}u-b(x)\rho\Delta_{q,\Gamma}u+\alpha_{2}(u)\bigg{)}.$
(4.10)
In particular, if for every $U=(u,u_{\mid\partial\Omega})\in
D(\mathcal{J}_{3,\rho}),$ the function
$\left(\alpha_{1}(u),\alpha_{2}(u)\right)\in\mathbb{X}_{2}$, then
$\partial\mathcal{J}_{3,\rho}:=\partial(\mathcal{J}_{\rho}+\mathcal{J}_{2})=\partial\mathcal{J}_{\rho}+\partial\mathcal{J}_{2}.$
###### Proof.
We calculate the subdifferential $\partial\mathcal{J}_{3,\rho}$. Let
$F=(f,g)\in\partial\mathcal{J}_{3,\rho}(U)$, that is, $F\in\mathbb{X}_{2}$,
$U\in D(\mathcal{J}_{3,\rho})=D(\mathcal{J}_{\rho})\cap D(\mathcal{J}_{2})$
and for every $V\in D(\mathcal{J}_{3,\rho})$, we have
$\int_{\overline{\Omega}}F(V-U)d\mu\leq\mathcal{J}_{3,\rho}(V)-\mathcal{J}_{3,\rho}(U).$
Proceeding as in Remark 3.2 and the proof of Lemma 4.7, we obtain that
$-\Delta_{p}u+\alpha_{1}(u)=f\;\;\mbox{ in }\;\mathcal{D}^{\prime}(\Omega),$
and
$b(x)|\nabla
u|^{p-2}\partial_{n}u-b(x)\rho\Delta_{q,\Gamma}u+\alpha_{2}(u)=g\;\mbox{
weakly on }\;\partial\Omega.$
Noting that $\partial\mathcal{J}_{3,\rho}$ is also a single-valued operator
(which follows from the assumptions on $\alpha_{j}$ and $\Lambda_{j}$), we
easily obtain (4.10), and this completes the proof of the first part.
To show the last part, note that it is clear that
$\partial\mathcal{J}_{\rho}+\partial\mathcal{J}_{2}\subset\partial\mathcal{J}_{3,\rho}$
always holds. To show the converse inclusion, assume that for every
$U=(u,u_{\mid\partial\Omega})\in D(\mathcal{J}_{3,\rho}),$ the function
$(\alpha_{1}(u),\alpha_{2}(u))\in\mathbb{X}_{2}$. Then it follows from (3.3),
(4.4) (since $\partial\mathcal{J}_{2}=B_{2}$) and (4.10), that
$D(\partial\mathcal{J}_{3,\rho})=D(\partial\mathcal{J}_{\rho})\cap
D(\partial\mathcal{J}_{2})$ and
$\displaystyle\partial\mathcal{J}_{3,\rho}(U)$
$\displaystyle=\left(-\Delta_{p}u+\alpha_{1}(u),b(x)|\nabla
u|^{p-2}\partial_{n}u-b(x)\rho\Delta_{q,\Gamma}u+\alpha_{2}(u)\right)$
$\displaystyle=\left(-\Delta_{p}u,b(x)|\nabla
u|^{p-2}\partial_{n}u-b(x)\rho\Delta_{q,\Gamma}u\right)+\left(\alpha_{1}(u),\alpha_{2}(u)\right)$
$\displaystyle=\partial\mathcal{J}_{\rho}(U)+\partial\mathcal{J}_{2}(U).$
This completes the proof.
The following lemma is the main ingredient in the proof of Theorem 4.11 below.
###### Lemma 4.9.
Let $B_{1}:=A_{\rho}$ and set $B_{3}:=\partial\mathcal{J}_{3,\rho}$. Then
$\mathcal{R}\left(B_{1}\right)+\mathcal{R}\left(B_{2}\right)\subset\overline{\mathcal{R}(B_{3})}\;\mbox{
and
}\;\mbox{Int}(\mathcal{R}\left(B_{1}\right)+\mathcal{R}\left(B_{2}\right))\subset\mathcal{R}(B_{3}).$
(4.11)
In particular, if for every $U=(u,u_{\mid\partial\Omega})\in
D(\mathcal{J}_{3,\rho}),$ the function
$(\alpha_{1}(u),\alpha_{2}(u))\in\mathbb{X}_{2}$, then
$\mathcal{R}\left(B_{3}\right):=\mathcal{R}\left(B_{1}+B_{2}\right)\simeq\mathcal{R}\left(B_{1}\right)+\mathcal{R}\left(B_{2}\right).$
(4.12)
###### Proof.
By Remark 3.2 and Lemmas 4.7, 4.8, the operators $B_{1}$, $B_{2}$ and $B_{3}$
are subdifferentials of proper, convex and lower semicontinuous functionals
$\mathcal{J}_{\rho},\mathcal{J}_{2}$ and $\mathcal{J}_{\rho}+\mathcal{J}_{2}$,
respectively, on $\mathbb{X}_{2}$. Hence, $B_{1}$, $B_{2}$ and $B_{3}$ are
maximal monotone operators. In particular, if
$(\alpha_{1}(u),\alpha_{2}(u))\in\mathbb{X}_{2}$, for every
$U=(u,u_{\mid\partial\Omega})\in D(\mathcal{J}_{3,\rho}),$ then by Lemma 4.8,
one has $B_{3}=B_{1}+B_{2}$. Now, the lemma follows from the celebrated
Brezis-Haraux result in Theorem 4.2.
### 4.2. Statement and proof of the main result
Next, let $\mathcal{V}_{\rho}:=D(\mathcal{J}_{3,\rho})$ be given by (4.8) if
$\rho=0$ and by (4.9) if $\rho=1$.
###### Definition 4.10.
Let $F=(f,g)\in\mathbb{X}_{2}$. A function $u\in W^{1,p}(\Omega)$ is said to
be a weak solution of (4.1), if $\alpha_{1}(u)\in L^{1}(\Omega),$
$\alpha_{2}(u)\in L^{1}(\partial\Omega)$, $u|_{\partial\Omega}\in
W^{1,q}(\partial\Omega)$ if $\rho>0$, and
$\displaystyle\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla
vdx+\rho\int_{\partial\Omega}|\nabla_{\Gamma}u|^{q-2}\nabla_{\Gamma}u\cdot\nabla_{\Gamma}vd\sigma$
(4.13)
$\displaystyle+\int_{\Omega}\alpha_{1}(u)vdx+\int_{\partial\Omega}\alpha_{2}(u)v\frac{d\sigma}{b}=\int_{\Omega}fvdx+\int_{\partial\Omega}gv\frac{d\sigma}{b},$
for every $v\in W^{1,p}(\Omega)\cap C(\overline{\Omega})$ with
$v|_{\partial\Omega}\in W^{1,q}(\partial\Omega),$ if $\rho>0$.
Recall that $\lambda_{1}:=\int_{\Omega}dx$ and
$\displaystyle\lambda_{2}:=\int_{\partial\Omega}\frac{d\sigma}{b}$. We also
define the average $\left\langle F\right\rangle_{\overline{\Omega}}$ of
$F=\left(f,g\right)$ with respect to the measure $\mu,$ as follows:
$\left\langle
F\right\rangle_{\overline{\Omega}}:=\frac{1}{\mu\left(\overline{\Omega}\right)}\int_{\overline{\Omega}}Fd\mu=\frac{1}{\mu\left(\overline{\Omega}\right)}\left(\int_{\Omega}fdx+\int_{\partial\Omega}g\frac{d\sigma}{b}\right),$
where $\mu\left(\overline{\Omega}\right)=\lambda_{1}+\lambda_{2}$. Now, we are
ready to state the main result of this section.
###### Theorem 4.11.
Let $\alpha_{j}$ $(j=1,2)$ satisfy Assumption 4.3 and assume that the
functions $\Lambda_{j}$ $(j=1,2)$ satisfy (4.3). Let
$F=\left(f,g\right)\in\mathbb{X}_{2}$. The following hold:
1. (a)
Suppose that the nonlinear elliptic problem (4.1) possesses a weak solution.
Then
$\left\langle
F\right\rangle_{\overline{\Omega}}\in\frac{\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)}{\lambda_{1}+\lambda_{2}}.$
(4.14)
2. (b)
Assume that
$\left\langle
F\right\rangle_{\overline{\Omega}}\in\mbox{int}\left(\frac{\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)}{\lambda_{1}+\lambda_{2}}\right).$
(4.15)
Then the nonlinear elliptic problem (4.1) has at least one weak solution.
###### Proof.
We show that condition (4.14) is necessary. Let $F:=(f,g)\in\mathbb{X}_{2}$
and let $U=\left(u,u_{\mid\partial\Omega}\right)\in
D(B_{3})\subset\mathcal{V}_{\rho}$ be a weak solution of $B_{3}U=F$. Then, by
definition, for every $V=(v,v|_{\partial\Omega})\in\mathcal{V}_{\rho},$ (4.13)
holds. Taking $v\equiv 1$ in (4.13) yields
$\int_{\Omega}f\;dx+\int_{\partial\Omega}g\;\frac{d\sigma}{b}=\int_{\Omega}\alpha_{1}\left(u\right)dx+\int_{\partial\Omega}\alpha_{2}\left(u\right)\frac{d\sigma}{b}.$
Hence,
$\int_{\Omega}f\;dx+\int_{\partial\Omega}g\;\frac{d\sigma}{b}\in\left(\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)\right),$
and so (4.14) holds. This completes the proof of part (a).
We show that the condition (4.15) is sufficient.
(i) First, let $C\in\mathbf{C}$, where
$\mathbf{C:}=\left\\{C=(c_{1},c_{2}):(c_{1},c_{2})\in\mathcal{R}(\alpha_{1})\times\mathcal{R}(\alpha_{2})\right\\}.$
By definition, one has that $\mathbf{C}\subset\mathcal{R}\left(B_{2}\right)$
since $c_{1}=\alpha_{1}\left(d_{1}\right)$ for some constant function $d_{1}$
on $\Omega$ and $c_{2}=\alpha_{2}(d_{2})$ for some constant function $d_{2}$
on $\partial\Omega$. Let $F\in\mathbb{X}_{2}$ be such that (4.15) holds. We
must show $F\in\mathcal{R}\left(B_{3}\right)$. By (4.15), we may choose
$C=\left(c_{1},c_{2}\right)\in\mathbf{C}$ such that
$\left\langle
F\right\rangle_{\overline{\Omega}}=\frac{\lambda_{1}c_{1}+\lambda_{2}c_{2}}{\lambda_{1}+\lambda_{2}}\in\mbox{int}\left(\frac{\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)}{\lambda_{1}+\lambda_{2}}\right).$
Then, for $F\in\mathbb{X}_{2}$, we have $F=F_{1}+F_{2}$ with
$F_{1}:=F-C\mbox{ and }\;F_{2}=C.$
First,
$F_{1}\in\mathcal{R}\left(B_{1}\right)=\mathcal{N}\left(B_{1}\right)^{\perp}=\mathbf{1}^{\perp}$,
since
$\int_{\overline{\Omega}}F_{1}\,d\mu=\int_{\overline{\Omega}}\left(F-C\right)d\mu=\int_{\Omega}f\,dx+\int_{\partial\Omega}g\,\frac{d\sigma}{b}-\left(\lambda_{1}c_{1}+\lambda_{2}c_{2}\right)=\left(\lambda_{1}+\lambda_{2}\right)\left\langle F\right\rangle_{\overline{\Omega}}-\left(\lambda_{1}c_{1}+\lambda_{2}c_{2}\right)=0.$
Obviously, $F_{2}=C\in\mathcal{R}\left(B_{2}\right)$. Hence, it is readily
seen that
$F\in(\mathcal{R}\left(B_{1}\right)+\mathcal{R}\left(B_{2}\right)).$
(ii) Next, denote by $\mathbb{B}_{\mathbb{R}}(x,r)$ the open ball in
$\mathbb{R}$ of center $x$ and radius $r>0$. Since
$\displaystyle\left\langle
F\right\rangle_{\overline{\Omega}}\in\mbox{int}\left(\frac{\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)}{\lambda_{1}+\lambda_{2}}\right),$
there exists $\delta>0$ such that the open ball
$\displaystyle\mathbb{B}_{\mathbb{R}}(\left\langle
F\right\rangle_{\overline{\Omega}},\delta)\subset\left(\frac{\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)}{\lambda_{1}+\lambda_{2}}\right).$
Since the mapping $F\mapsto\left\langle F\right\rangle_{\overline{\Omega}}$
from $\mathbb{X}_{2}$ into $\mathbb{R}$ is continuous, then there exists
$\varepsilon>0$ such that
$\left\langle
G\right\rangle_{\overline{\Omega}}\in\mathbb{B}_{\mathbb{R}}(\left\langle
F\right\rangle_{\overline{\Omega}},\delta)\subset\left(\frac{\lambda_{1}\mathcal{R}\left(\alpha_{1}\right)+\lambda_{2}\mathcal{R}\left(\alpha_{2}\right)}{\lambda_{1}+\lambda_{2}}\right),$
for all $G\in\mathbb{X}_{2}$ satisfying $\||F-G\||_{2}<\varepsilon$. It
finally follows from part (i) above that
$(\mathcal{R}\left(B_{1}\right)+\mathcal{R}\left(B_{2}\right))$ contains an
$\varepsilon$-ball in $\mathbb{X}_{2}$ centered at $F$. Therefore,
$F\in\mbox{int}(\mathcal{R}\left(B_{1}\right)+\mathcal{R}\left(B_{2}\right))\subset\mathcal{R}(B_{3}).$
Consequently, problem (4.1) is (weakly) solvable for every function
$F=(f,g)\in\mathbb{X}_{2},$ if (4.15) holds. This completes the proof of the
theorem.
###### Remark 4.12.
It is important to remark that, in order to prove Theorem 4.11, we do not
require that $(\alpha_{1}(u),\alpha_{2}(u))$ belong to
$\mathbb{X}_{2}$ for every $U=(u,u_{\mid\Gamma})\in D(\mathcal{J}_{3,\rho})$;
only the inclusion (4.11) was needed. If this additional condition does hold,
then we obtain the much stronger conclusion (4.12), but it would require that
the nonlinearities $\alpha_{1},\alpha_{2}$ satisfy growth conditions at
infinity.
We conclude this section with the following corollary and some examples.
###### Corollary 4.13.
Let the assumptions of Theorem 4.11 be satisfied. Let
$F=\left(f,g\right)\in\mathbb{X}_{2}$. Assume that at least one of the sets
$\mathcal{R}\left(\alpha_{1}\right)$, $\mathcal{R}\left(\alpha_{2}\right)$ is
open. Then the nonlinear elliptic problem (4.1) possesses a weak solution if
and only if (4.15) holds.
###### Remark 4.14.
Similar results to Theorem 4.11 and Corollary 4.13 were also obtained in [12,
Theorem 4.4], but only when $p=q=2.$
### 4.3. Examples
We will now give some examples as applications of Theorem 4.11.
Let $p,q\in(1,+\infty)$ be fixed.
###### Example 4.15.
Let $\alpha_{1}\left(s\right)$ or $\alpha_{2}\left(s\right)$ be equal to
$\alpha\left(s\right)=c\left|s\right|^{r-1}s,$ where $c,$ $r>0$. Note that
$\mathcal{R}\left(\alpha\right)=\mathbb{R}$. It is easy to check that $\alpha$
satisfies all the conditions of Assumption 4.3 and that the function
$\Lambda(t)=\int_{0}^{|t|}\alpha(s)ds$ satisfies (4.3). Then, it follows that
problem (4.1) is solvable for any $f\in L^{2}\left(\Omega\right),$ $g\in
L^{2}\left(\partial\Omega\right)$.
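For concreteness, here is the computation behind Example 4.15 (a worked check, not written out above). For $\alpha(s)=c|s|^{r-1}s$ one has
$\Lambda(t)=\int_{0}^{|t|}c\,s^{r}\,ds=\frac{c}{r+1}|t|^{r+1},\;\mbox{ so that }\;\Lambda(2t)=2^{r+1}\Lambda(t),$
and (4.3) holds with the constant $C=2^{r+1}>1$. Moreover, since $\mathcal{R}\left(\alpha\right)=\mathbb{R}$, the right-hand side of (4.15) is all of $\mathbb{R}$, so condition (4.15) is automatically satisfied for every $F=(f,g)\in\mathbb{X}_{2}$.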
###### Example 4.16.
Consider the case when $\rho=\alpha_{2}\equiv 0$ in (4.1), that is, consider
the following boundary value problem:
$\left\\{\begin{array}[]{c}-\Delta_{p}u+\alpha_{1}\left(u\right)=f\text{ in
}\Omega,\\\ b\left(x\right)\left|\nabla u\right|^{p-2}\partial_{n}u=g\text{ on
}\Gamma.\end{array}\right.$
Then, by Theorem 4.11, this problem has a weak solution if
$\int_{\Omega}f\text{ }dx+\int_{\partial\Omega}g\text{
}\frac{d\sigma}{b}\in\lambda_{1}\mbox{int}(\mathcal{R}\left(\alpha_{1}\right)),$
which yields the classical Landesman-Lazer result (see (1.6)) for $g\equiv 0$
and $p=2$.
###### Example 4.17.
Let us now consider the case when $\alpha_{1}\equiv\alpha$ and
$\alpha_{2}\equiv 0,$ where $\alpha$ is a continuous, odd and nondecreasing
function on $\mathbb{R}$ such that $\alpha\left(0\right)=0$. The problem
$\left\\{\begin{array}[]{cc}-\Delta_{p}u+\alpha\left(u\right)=f,&\text{in
}\Omega,\\\ b\left(x\right)\left|\nabla u\right|^{p-2}\partial_{n}u-\rho
b\left(x\right)\Delta_{q,\Gamma}u=g,&\text{on
}\partial\Omega,\end{array}\right.$ (4.16)
has a weak solution if
$\int_{\Omega}f\text{ }dx+\int_{\partial\Omega}g\text{ }\frac{d\sigma}{b}\in\lambda_{1}\,\mbox{int}\bigg{(}\mathcal{R}\left(\alpha\right)\bigg{)}.$
(4.17)
Let us now choose $\alpha\left(s\right)=\arctan\left(s\right)$ in (4.16).
Then, it is easy to check that
$\Lambda(t):=\int_{0}^{|t|}\alpha(s)ds=\left|t\right|\arctan\left(\left|t\right|\right)-\frac{1}{2}\ln\left(1+t^{2}\right),\text{
}t\in\mathbb{R}$
is monotone increasing on $\mathbb{R}_{+}$ and that it satisfies
$\Lambda(2t)\leq C_{2}\Lambda(t),$ $\forall t\in\mathbb{R}$, for some constant
$C_{2}>1$. Therefore, (4.17) becomes the necessary and sufficient condition
$\left|\frac{1}{\lambda_{1}}\left(\int_{\Omega}f\text{ }dx+\int_{\partial\Omega}g\text{ }\frac{d\sigma}{b}\right)\right|<\frac{\pi}{2}.$ (4.18)
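For the reader's convenience (this short derivation is not written out above), here is how (4.17) yields (4.18): the range $\mathcal{R}(\arctan)=\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ is open, so $\mbox{int}\left(\mathcal{R}\left(\alpha\right)\right)=\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, and (4.17) reads
$\int_{\Omega}f\;dx+\int_{\partial\Omega}g\;\frac{d\sigma}{b}\in\lambda_{1}\left(-\frac{\pi}{2},\frac{\pi}{2}\right),$
which is precisely (4.18). Since the range is open, Corollary 4.13 shows that this condition is not only sufficient but also necessary.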
## 5\. A priori estimates
Let $\Omega\subset\mathbb{R}^{N}$ be a bounded Lipschitz domain with boundary
$\partial\Omega$. Recall that $1<p,q<\infty$, $\rho\in\\{0,1\\}$ and $b\in
L^{\infty}(\partial\Omega)$ with $b(x)\geq b_{0}>0,$ for some constant
$b_{0}$. We consider the nonlinear elliptic boundary value problem formally
given by
$\begin{cases}\displaystyle-\Delta_{p}u+\alpha_{1}(x,u)+|u|^{p-2}u=f,&\mbox{
in }\;\Omega\\\ &\\\ \displaystyle-\rho b(x)\Delta_{q,\Gamma}u+\rho
b(x)|u|^{q-2}u+b(x)|\nabla u|^{p-2}\partial_{n}u+\alpha_{2}(x,u)=g,&\mbox{ on
}\;\partial\Omega,\end{cases}$ (5.1)
where $f\in L^{p_{1}}(\Omega)$ and $g\in L^{q_{1}}(\partial\Omega)$ for some
$1\leq p_{1},q_{1}\leq\infty$. If $\rho=0$, then the boundary conditions in
(5.1) are of Robin type. Existence and regularity of weak solutions for this
case have been obtained in [5] for $p=2$ (see also [29] for the linear case)
and for general $p$ in [6]. Therefore, we will concentrate our attention on the
the case $\rho=1$ only; in this case, the boundary condition in (5.1) is a
generalized Wentzell-Robin boundary condition. For the sake of simplicity,
from now on we will also take $b\equiv 1$.
### 5.1. General assumptions
Throughout this section, we assume that the functions
$\alpha_{1}:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ and
$\alpha_{2}:\partial\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ satisfy the
following conditions:
###### Assumption 5.1.
$\begin{cases}\displaystyle\alpha_{j}(x,\cdot)\mbox{ is odd and strictly
increasing},\\\
\displaystyle\alpha_{j}(x,0)=0,\;\;\displaystyle\alpha_{j}(x,\cdot)\mbox{ is
continuous },\\\ \displaystyle\lim_{t\rightarrow
0}\frac{\alpha_{j}(x,t)}{t}=0,\;\;\displaystyle\lim_{t\rightarrow\infty}\frac{\alpha_{j}(x,t)}{t}=\infty,\end{cases}$
for $\lambda_{N}$-a.e. $x\in\Omega$ if $j=1$ and $\sigma$-a.e.
$x\in\partial\Omega$ if $j=2$.
Since $\alpha_{j}(x,\cdot)$ are strictly increasing for $\lambda_{N}$-a.e.
$x\in\Omega$ if $j=1$ and $\sigma$-a.e. $x\in\partial\Omega$ if $j=2$, then
they have inverses which we denote by $\widetilde{\alpha}_{j}(x,\cdot)$ (cf.
also Section 4). We define the functions $\Lambda_{1},$
$\widetilde{\Lambda}_{1}:\Omega\times\mathbb{R}\rightarrow[0,\infty)$ and
$\Lambda_{2},$
$\widetilde{\Lambda}_{2}:\partial\Omega\times\mathbb{R}\rightarrow[0,\infty)$
by
$\Lambda_{j}(x,t):=\int_{0}^{|t|}\alpha_{j}(x,s)\;ds\;\mbox{ and
}\;\widetilde{\Lambda}_{j}(x,t):=\int_{0}^{|t|}\widetilde{\alpha}_{j}(x,s)\;ds.$
Then, it is clear that, for $\lambda_{N}$-a.e. $x\in\Omega$ if $j=1$ and
$\sigma$-a.e. $x\in\partial\Omega$ if $j=2$, $\Lambda_{j}(x,\cdot)$ and
$\widetilde{\Lambda}_{j}(x,\cdot)$ are differentiable, monotone and convex
with $\Lambda_{j}(x,0)=\widetilde{\Lambda}_{j}(x,0)=0.$ Furthermore,
$\Lambda_{j}(x,\cdot)$ is an ${\mathcal{N}}$-function and
$\widetilde{\Lambda}_{j}(x,\cdot)$ is its complementary
${\mathcal{N}}$-function. The function $\widetilde{\Lambda}_{j}$ is then the
complementary Musielak-Orlicz function of $\Lambda_{j}$ in the sense of Young
(see Definition 2.3).
###### Assumption 5.2.
We assume, for $\lambda_{N}$-a.e. $x\in\Omega$ if $j=1$ and $\sigma$-a.e.
$x\in\partial\Omega$ if $j=2$, that $\Lambda_{j}(x,\cdot)$ and
$\widetilde{\Lambda}_{j}(x,\cdot)$ satisfy the ($\triangle_{2}$)-condition in
the sense of Definition 2.5.
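A model case satisfying Assumptions 5.1 and 5.2 (given here only for illustration; it is not part of the assumptions above) is $\alpha_{j}(x,t)=|t|^{r_{j}-1}t$ with $r_{j}>1$: it is odd, strictly increasing and continuous in $t$, $\alpha_{j}(x,t)/t=|t|^{r_{j}-1}$ tends to $0$ as $t\rightarrow 0$ and to $\infty$ as $t\rightarrow\infty$, and
$\Lambda_{j}(x,t)=\frac{|t|^{r_{j}+1}}{r_{j}+1},\;\;\;\widetilde{\Lambda}_{j}(x,t)=\frac{r_{j}}{r_{j}+1}|t|^{\frac{r_{j}+1}{r_{j}}},$
both of which satisfy the $(\triangle_{2})$-condition, since $\Lambda_{j}(x,2t)=2^{r_{j}+1}\Lambda_{j}(x,t)$ and $\widetilde{\Lambda}_{j}(x,2t)=2^{\frac{r_{j}+1}{r_{j}}}\widetilde{\Lambda}_{j}(x,t)$.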
It follows from Assumption 5.2 that there exist two constants
$c_{1},c_{2}\in(0,1]$ such that for $\lambda_{N}$-a.e. $x\in\Omega$ if $j=1$
and $\sigma$-a.e. $x\in\partial\Omega$ if $j=2$ and for all $t\in\mathbb{R}$,
$c_{j}t\alpha_{j}(x,t)\leq\Lambda_{j}(x,t)\leq t\alpha_{j}(x,t).$ (5.2)
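For the reader's convenience, one way to see (5.2) (a short argument, not written out above) is the following. The upper bound follows from monotonicity,
$\Lambda_{j}(x,t)=\int_{0}^{|t|}\alpha_{j}(x,s)\,ds\leq|t|\,\alpha_{j}(x,|t|)=t\,\alpha_{j}(x,t).$
For the lower bound, again by monotonicity,
$t\,\alpha_{j}(x,t)=|t|\,\alpha_{j}(x,|t|)\leq\int_{|t|}^{2|t|}\alpha_{j}(x,s)\,ds\leq\Lambda_{j}(x,2t)\leq C_{j}\Lambda_{j}(x,t),$
where $C_{j}>1$ is the $(\triangle_{2})$-constant of $\Lambda_{j}(x,\cdot)$, so (5.2) holds with $c_{j}:=1/C_{j}\in(0,1]$.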
Next, let
$L_{\Lambda_{1}}(\Omega):=\left\\{u:\Omega\rightarrow\mathbb{R}\;\mbox{
measurable: }\;\int_{\Omega}\Lambda_{1}(x,u)dx<\infty\right\\}$
and
$L_{\Lambda_{2}}(\partial\Omega):=\left\\{u:\partial\Omega\rightarrow\mathbb{R}\;\mbox{
measurable: }\;\int_{\partial\Omega}\Lambda_{2}(x,u)d\sigma<\infty\right\\}.$
Since $\Lambda_{j}(x,\cdot)$ and $\widetilde{\Lambda}_{j}(x,\cdot)$ satisfy
the $(\triangle_{2})$-condition, it follows from [1, Theorem 8.19], that
$L_{\Lambda_{1}}(\Omega)$ and $L_{\Lambda_{2}}(\partial\Omega),$ endowed
respectively with the norms
$\|u\|_{\Lambda_{1},\Omega}:=\inf\left\\{k>0:\int_{\Omega}\Lambda_{1}\left(x,\frac{u(x)}{k}\right)dx\leq
1\right\\},$
and
$\|u\|_{\Lambda_{2},\partial\Omega}:=\inf\left\\{k>0:\int_{\partial\Omega}\Lambda_{2}\left(x,\frac{u(x)}{k}\right)d\sigma\leq
1\right\\},$
are reflexive Banach spaces. Moreover, by [1, Section 8.11, p. 234], the
following generalized versions of Hölder’s inequality will be useful in the
sequel:
$\left|\int_{\Omega}uvdx\right|\leq
2\|u\|_{\Lambda_{1},\Omega}\|v\|_{\widetilde{\Lambda}_{1},\Omega}$ (5.3)
and
$\left|\int_{\partial\Omega}uv\;d\sigma\right|\leq
2\|u\|_{\Lambda_{2},\partial\Omega}\|v\|_{\widetilde{\Lambda}_{2},\partial\Omega}.$
(5.4)
### 5.2. Existence and uniqueness of weak solutions of perturbed equations
Let
$\mathcal{V}:=\\{U:=(u,u|_{\partial\Omega}):\;u\in W^{1,p}(\Omega)\cap
L_{\Lambda_{1}}(\Omega),\;u_{\mid\partial\Omega}\in
W^{1,q}(\partial\Omega)\cap L_{\Lambda_{2}}(\partial\Omega)\\}.$
Then for every $1<p,q<\infty$, $\mathcal{V}$ endowed with the norm
$\|U\|_{\mathcal{V}}=\|u\|_{W^{1,p}(\Omega)}+\|u\|_{\Lambda_{1},\Omega}+\|u\|_{W^{1,q}(\partial\Omega)}+\|u\|_{\Lambda_{2},\partial\Omega}$
is a reflexive Banach space. Recall that $\rho=1$. Throughout the following,
we denote by $\mathcal{V}^{\prime}$ the dual of $\mathcal{V}$.
###### Definition 5.3.
A function $U=(u,u|_{\partial\Omega})\in\mathcal{V}$ is said to be a weak
solution of (5.1), if for every $V=(v,v|_{\partial\Omega})\in\mathcal{V},$
$\mathcal{A}(U,V)=\int_{\Omega}fvdx+\int_{\partial\Omega}gvd\sigma,$ (5.5)
provided that the integrals on the right-hand side exist. Here,
$\displaystyle\mathcal{A}(U,V)$ $\displaystyle:=\int_{\Omega}|\nabla
u|^{p-2}\nabla u\cdot\nabla vdx+\int_{\Omega}|u|^{p-2}uvdx$
$\displaystyle+\int_{\Omega}\alpha_{1}(x,u)vdx+\int_{\partial\Omega}|\nabla_{\Gamma}u|^{q-2}\nabla_{\Gamma}u\cdot\nabla_{\Gamma}vd\sigma$
$\displaystyle+\int_{\partial\Omega}|u|^{q-2}uvd\sigma+\int_{\partial\Omega}\alpha_{2}(x,u)vd\sigma.$
###### Lemma 5.4.
Assume Assumptions 5.1 and 5.2. Let $1<p,q<\infty$ and $U\in\mathcal{V}$ be
fixed. Then the functional $V\mapsto\mathcal{A}(U,V)$ belongs to
$\mathcal{V}^{\prime}$. Moreover, $\mathcal{A}$ is strictly monotone,
hemicontinuous and coercive.
###### Proof.
Let $U=(u,u|_{\partial\Omega})\in\mathcal{V}$ be fixed. It is clear that
$\mathcal{A}(U,\cdot)$ is linear. Let
$V=(v,v|_{\partial\Omega})\in\mathcal{V}$. Then, exploiting (5.3) and (5.4),
we obtain
$\displaystyle\left|\mathcal{A}(U,V)\right|\leq\|u\|_{W^{1,p}(\Omega)}^{p-1}\|v\|_{W^{1,p}(\Omega)}+\|u\|_{W^{1,q}(\partial\Omega)}^{q-1}\|v\|_{W^{1,q}(\partial\Omega)}$
(5.6)
$\displaystyle+2\max\left\\{1,\int_{\Omega}\widetilde{\Lambda}_{1}(x,\alpha_{1}(x,u))\;dx\right\\}\|v\|_{\Lambda_{1},\Omega}$
$\displaystyle+2\max\left\\{1,\int_{\partial\Omega}\widetilde{\Lambda}_{2}(x,\alpha_{2}(x,u))\;d\sigma\right\\}\|v\|_{\Lambda_{2},\partial\Omega}$
$\displaystyle\leq K(U)\|V\|_{\mathcal{V}},$
where
$\displaystyle K(U):=$
$\displaystyle\|u\|_{W^{1,p}(\Omega)}^{p-1}+2\max\left\\{1,\int_{\Omega}\widetilde{\Lambda}_{1}(x,\alpha_{1}(x,u))\;dx\right\\}$
$\displaystyle+\|u\|_{W^{1,q}(\partial\Omega)}^{q-1}+2\max\left\\{1,\int_{\partial\Omega}\widetilde{\Lambda}_{2}(x,\alpha_{2}(x,u))\;d\sigma\right\\}.$
This shows $\mathcal{A}(U,\cdot)\in\mathcal{V}^{\prime},$ for every
$U\in\mathcal{V}$.
Next, let $U,V\in\mathcal{V}$. Then, using (2.11) and the fact that
$\alpha_{j}(x,\cdot)$ are monotone nondecreasing, that is,
$(\alpha_{j}(x,t)-\alpha_{j}(x,s))(t-s)\geq 0,$ for all $t,s\in\mathbb{R},$ we
obtain
$\displaystyle\mathcal{A}(U,U-V)-\mathcal{A}(V,U-V)$ (5.7)
$\displaystyle=\int_{\Omega}\left(|\nabla u|^{p-2}\nabla u-|\nabla
v|^{p-2}\nabla
v\right)\cdot\nabla(u-v)dx+\int_{\Omega}\left(|u|^{p-2}u-|v|^{p-2}v\right)(u-v)dx$
$\displaystyle+\int_{\Omega}\left(\alpha_{1}(x,u)-\alpha_{1}(x,v)\right)(u-v)dx+\int_{\partial\Omega}\left(|u|^{q-2}u-|v|^{q-2}v\right)(u-v)d\sigma$
$\displaystyle+\int_{\partial\Omega}\left(|\nabla_{\Gamma}u|^{q-2}\nabla_{\Gamma}u-|\nabla_{\Gamma}v|^{q-2}\nabla_{\Gamma}v\right)\cdot\nabla_{\Gamma}(u-v)d\sigma$
$\displaystyle+\int_{\partial\Omega}\left(\alpha_{2}(x,u)-\alpha_{2}(x,v)\right)(u-v)d\sigma$
$\displaystyle\geq\int_{\Omega}\left(|\nabla u|+|\nabla v|\right)^{p-2}|\nabla(u-v)|^{2}dx+\int_{\Omega}\left(|u|+|v|\right)^{p-2}|u-v|^{2}dx$
$\displaystyle+\int_{\partial\Omega}\left(|\nabla_{\Gamma}u|+|\nabla_{\Gamma}v|\right)^{q-2}|\nabla_{\Gamma}(u-v)|^{2}d\sigma+\int_{\partial\Omega}\left(|u|+|v|\right)^{q-2}|u-v|^{2}d\sigma$
$\displaystyle\geq 0.$
This shows that $\mathcal{A}$ is monotone. The estimate (5.7) also shows that
$\mathcal{A}(U,U-V)-\mathcal{A}(V,U-V)>0,$
for all $U,V\in\mathcal{V}$ with $U\neq V$, that is, $u\neq v$ or
$u|_{\partial\Omega}\neq v|_{\partial\Omega}$. Thus, $\mathcal{A}$ is strictly
monotone.
The continuity of the norm function and the continuity of
$\alpha_{j}(x,\cdot),$ $j=1,2$ imply that $\mathcal{A}$ is hemicontinuous.
Finally, since $\Lambda_{j}$ and $\widetilde{\Lambda}_{j}$ satisfy the
$(\triangle_{2}^{0})$-condition, from Proposition 2.10 and Corollary 2.11, it
follows
$\lim_{\|u\|_{\Lambda_{1},\Omega}\rightarrow+\infty}\frac{\int_{\Omega}u\alpha_{1}(x,u)\;dx}{\|u\|_{\Lambda_{1},\Omega}}=+\infty\;\mbox{ and }\;\lim_{\|u\|_{\Lambda_{2},\partial\Omega}\rightarrow+\infty}\frac{\int_{\partial\Omega}u\alpha_{2}(x,u)\;d\sigma}{\|u\|_{\Lambda_{2},\partial\Omega}}=+\infty.$
Consequently, we deduce
$\lim_{\|U\|_{\mathcal{V}}\rightarrow+\infty}\frac{\mathcal{A}(U,U)}{\|U\|_{\mathcal{V}}}=+\infty,$
(5.8)
which shows that $\mathcal{A}$ is coercive. The proof of the lemma is
finished.
The following result is concerned with the existence and uniqueness of weak
solutions to problem (5.1).
###### Theorem 5.5.
Assume Assumptions 5.1 and 5.2. Let $1<p,q<\infty$, $p_{1}\geq p\ast$ and
$q_{1}\geq q\ast$, where $p\ast:=p/(p-1)$ and $q\ast:=q/(q-1)$. Then for every
$(f,g)\in X^{p_{1},q_{1}}(\overline{\Omega},\mu)$, there exists a unique
function $U\in\mathcal{V}$ which is a weak solution to (5.1).
###### Proof.
Let $\langle\cdot,\cdot\rangle$ denote the duality between $\mathcal{V}$ and
$\mathcal{V}^{\prime}$. Then, from Lemma 5.4, it follows that for each
$U\in\mathcal{V}$, there exists $A(U)\in\mathcal{V}^{\prime}$ such that
$\mathcal{A}(U,V)=\langle A(U),V\rangle,$
for every $V\in\mathcal{V}$. Hence, this relation defines an operator
$A:\;\mathcal{V}\rightarrow\mathcal{V}^{\prime},$ which is bounded by (5.6).
Exploiting Lemma 5.4 once again, it is easy to see that $A$ is monotone and
coercive. It follows from Browder’s theorem (see, e.g., [11, Theorem 5.3.22]),
that $A(\mathcal{V})=\mathcal{V}^{\prime}$. Therefore, for every
$F\in\mathcal{V}^{\prime}$ there exists $U\in\mathcal{V}$ such that $A(U)=F$,
that is, for every $V\in\mathcal{V}$,
$\langle A(U),V\rangle=\mathcal{A}(U,V)=\langle V,F\rangle.$
Since $W^{1,p}(\Omega)\hookrightarrow L^{p}(\Omega)$ and
$W^{1,q}(\partial\Omega)\hookrightarrow L^{q}(\partial\Omega)$ with dense
injection, by duality, we have
$X^{p\ast,q\ast}(\overline{\Omega},\mu)\hookrightarrow\mathcal{V}^{\prime}$.
Since $\Omega$ is bounded and $\sigma(\partial\Omega)<\infty$, we obtain that
$X^{p_{1},q_{1}}(\overline{\Omega},\mu)\hookrightarrow
X^{p\ast,q\ast}(\overline{\Omega},\mu)\hookrightarrow\mathcal{V}^{\prime}.$
This shows the existence of weak solutions. The uniqueness follows from the
fact that $\mathcal{A}$ is strictly monotone (cf. Lemma 5.4). This completes
the proof of the theorem.
###### Corollary 5.6.
Let the assumptions of Theorem 5.5 be satisfied. Let
$p_{h}:=\frac{Np}{N(p-1)+p},\;\;q_{h}:=\frac{p(N-1)}{N(p-1)},\;\mbox{ and
}q_{k}:=\frac{q(N-1)}{N(q-1)+1}.$ (5.9)
1. (a)
Let $1<p<N$, $1<q<p(N-1)/N$, $p_{1}\geq p_{h}$ and $q_{1}\geq q_{k}$. Then for
every $(f,g)\in X^{p_{1},q_{1}}(\Omega,\mu)$, there exists a function
$U\in\mathcal{V}$ which is the unique weak solution to (5.1).
2. (b)
Let $1<q<N-1$, $1<p<Nq/(N-1)$, $p_{1}\geq p_{h}$ and $q_{1}\geq q_{h}$. Then
for every $(f,g)\in X^{p_{1},q_{1}}(\Omega,\mu)$, there exists a function
$U\in\mathcal{V}$ which is the unique weak solution to (5.1).
###### Proof.
We first prove (a). Let $1<p<N$ and $1<q<p(N-1)/N$ and let $p_{1}\geq p_{h}$
and $q_{1}\geq q_{k},$ where $p_{h}$ and $q_{k}$ are given by (5.9). Let
$p_{s}:=Np/(N-p)$ and $q_{t}:=(N-1)q/(N-1-q)$. Since
$W^{1,p}(\Omega)\hookrightarrow L^{p_{s}}(\Omega)$ and
$W^{1,q}(\partial\Omega)\hookrightarrow L^{q_{t}}(\partial\Omega)$ with dense
injection, then by duality,
$X^{p_{h},q_{k}}(\overline{\Omega},\mu)\hookrightarrow\mathcal{V}^{\prime}$,
where $\displaystyle 1/p_{s}+1/p_{h}=1$ and $\displaystyle 1/q_{t}+1/q_{k}=1$.
Since $\mu(\overline{\Omega})<\infty$, we have that
$X^{p_{1},q_{1}}(\overline{\Omega},\mu)\hookrightarrow
X^{p_{h},q_{k}}(\overline{\Omega},\mu)\hookrightarrow\mathcal{V}^{\prime}.$
Hence, for every $F:=(f,g)\in
X^{p_{1},q_{1}}(\overline{\Omega},\mu)\hookrightarrow\mathcal{V}^{\prime}$,
there exists $U\in\mathcal{V}$ such that for every $V\in\mathcal{V}$,
$\langle
A(U),V\rangle=\mathcal{A}(U,V)=\int_{\Omega}fv\;dx+\int_{\partial\Omega}gv\;d\sigma.$
The uniqueness of the weak solution follows again from the fact that
$\mathcal{A}$ is strictly monotone.
In order to prove the second part, we use the embeddings
$W^{1,p}(\Omega)\hookrightarrow L^{p_{s}}(\Omega)$,
$W^{1,p}(\Omega)\hookrightarrow L^{q_{s}}(\partial\Omega)$ and proceed exactly
as above. We omit the details.
### 5.3. Properties of the solution operator of the perturbed equation
In the sequel, we establish some interesting properties of the solution
operator $A$ to problem (5.1). We begin by assuming the following.
###### Assumption 5.7.
Suppose that $\alpha_{j},$ $j=1,2,$ satisfy the following conditions:
$\begin{cases}\displaystyle\mbox{there are constants }\;c_{j}\in(0,1]\;\mbox{
such that }\\\ \displaystyle
c_{j}\left|\alpha_{j}(x,\xi-\eta)\right|\leq\left|\alpha_{j}(x,\xi)-\alpha_{j}(x,\eta)\right|\;\mbox{
for all }\;\xi,\eta\in\mathbb{R}.\end{cases}$ (5.10)
###### Theorem 5.8.
Assume Assumptions 5.1, 5.2 and 5.7. Let $p,q\geq 2$ and let
$A:\;\mathcal{V}\rightarrow\mathcal{V}^{\prime}$ be the continuous and bounded
operator constructed in the proof of Theorem 5.5. Then $A$ is injective and
hence, invertible and its inverse $A^{-1}$ is also continuous and bounded.
###### Proof.
First, we remark that, since
$\left(\alpha_{j}(x,t)-\alpha_{j}(x,s)\right)(t-s)\geq 0\text{, for all
}t,s\in\mathbb{R},$
for $\lambda_{N}$-a.e.$x\in\Omega$ if $j=1$ and $\sigma$-a.e.
$x\in\partial\Omega$ if $j=2$, it follows from (5.10) that, for all
$t,s\in\mathbb{R}$,
$\left(\alpha_{j}(x,t)-\alpha_{j}(x,s)\right)(t-s)\geq
c_{j}\alpha_{j}(x,t-s)\cdot(t-s).$ (5.11)
Let $U,V\in\mathcal{V}$ and $p,q\in[2,\infty)$. Then, exploiting (2.12),
(5.11) and the ($\triangle_{2}$)-condition, we obtain
$\displaystyle\langle
A(U)-A(V),U-V\rangle=\mathcal{A}(U,U-V)-\mathcal{A}(V,U-V)$ (5.12)
$\displaystyle=\int_{\Omega}\left(|\nabla u|^{p-2}\nabla u-|\nabla
v|^{p-2}\nabla
v\right)\cdot\nabla(u-v)dx+\int_{\Omega}\left(|u|^{p-2}u-|v|^{p-2}v\right)(u-v)dx$
$\displaystyle+\int_{\Omega}\left(\alpha_{1}(x,u)-\alpha_{1}(x,v)\right)(u-v)dx+\int_{\partial\Omega}\left(|\nabla_{\Gamma}u|^{q-2}\nabla_{\Gamma}u-|\nabla_{\Gamma}v|^{q-2}\nabla_{\Gamma}v\right)\cdot\nabla_{\Gamma}(u-v)d\sigma$
$\displaystyle+\int_{\partial\Omega}\left(|u|^{q-2}u-|v|^{q-2}v\right)(u-v)d\sigma+\int_{\partial\Omega}\left(\alpha_{2}(x,u)-\alpha_{2}(x,v)\right)(u-v)d\sigma$
$\displaystyle\geq\|u-v\|_{W^{1,p}(\Omega)}^{p}+c_{1}\int_{\Omega}\Lambda_{1}(x,u-v)dx+\|u-v\|_{W^{1,q}(\partial\Omega)}^{q}+c_{2}\int_{\partial\Omega}\Lambda_{2}(x,u-v)d\sigma.$
This implies that $\langle A(U)-A(V),U-V\rangle>0,$ for all
$U,V\in\mathcal{V}$ with $U\neq V$ (that is, $u\neq v$, or
$u|_{\partial\Omega}\neq v|_{\partial\Omega}$). Therefore, the operator $A$ is
injective and hence, $A^{-1}$ exists. Since for every $U\in\mathcal{V}$,
$\mathcal{A}(U,U)=\langle
A(U),U\rangle\leq\|A(U)\|_{\mathcal{V}^{\prime}}\|U\|_{\mathcal{V}},$
from the coercivity of $\mathcal{A}$ (see (5.8)), it is not difficult to see
that
$\lim_{\|U\|_{\mathcal{V}}\rightarrow+\infty}\|A(U)\|_{\mathcal{V}^{\prime}}=+\infty.$
(5.13)
Thus, $A^{-1}:\;\mathcal{V}^{\prime}\rightarrow\mathcal{V}$ is bounded.
Next, we show that $A^{-1}:\;\mathcal{V}^{\prime}\rightarrow\mathcal{V}$ is
continuous. Assume that $A^{-1}$ is not continuous. Then there is a sequence
$F_{n}\in\mathcal{V}^{\prime}$ with $F_{n}\rightarrow F$ in
$\mathcal{V}^{\prime}$ and a constant $\delta>0$ such that
$\|A^{-1}(F_{n})-A^{-1}(F)\|_{\mathcal{V}}\geq\delta,$ (5.14)
for all $n\in\mathbb{N}$. Let $U_{n}:=A^{-1}(F_{n})$ and $U=A^{-1}(F)$. Since
$\left\\{F_{n}\right\\}$ is a bounded sequence and $A^{-1}$ is bounded, we
have that $\left\\{U_{n}\right\\}$ is bounded in $\mathcal{V}$. Thus, we can
select a subsequence, which we still denote by $\left\\{U_{n}\right\\},$ which
converges weakly to some function $V\in\mathcal{V}$. Since
$A(U_{n})-A(V)\rightarrow F-A(V)$ strongly in $\mathcal{V}^{\prime}$ and $U_{n}-V$
converges weakly to zero in $\mathcal{V}$, we deduce
$\lim_{n\rightarrow\infty}\langle A(U_{n})-A(V),U_{n}-V\rangle=0.$ (5.15)
From (5.12) and (5.15), it follows that
$\lim_{n\rightarrow\infty}\|u_{n}-v\|_{W^{1,p}(\Omega)}=0\;\mbox{and
}\;\lim_{n\rightarrow\infty}\int_{\Omega}\Lambda_{1}(x,u_{n}-v)dx=0,$
while
$\lim_{n\rightarrow\infty}\|u_{n}-v\|_{W^{1,q}(\partial\Omega)}=0\;\mbox{ and
}\;\lim_{n\rightarrow\infty}\int_{\partial\Omega}\Lambda_{2}(x,u_{n}-v)d\sigma=0.$
Therefore, $U_{n}\rightarrow V$ strongly in $\mathcal{V}$. Since $A$ is
continuous and
$F_{n}=A(U_{n})\rightarrow A(V)=F=A(U),$
it follows from the injectivity of $A$ that $U=V$. This shows that
$\lim_{n\rightarrow\infty}\|A^{-1}(F_{n})-A^{-1}(F)\|_{\mathcal{V}}=\lim_{n\rightarrow\infty}\|U_{n}-U\|_{\mathcal{V}}=0,$
which contradicts (5.14). Hence,
$A^{-1}:\;\mathcal{V}^{\prime}\rightarrow\mathcal{V}$ is continuous. The proof
is finished.
###### Corollary 5.9.
Let the assumptions of Theorem 5.8 be satisfied. Let $p_{h},q_{h}$ and $q_{k}$
be as in (5.9) and let $A:\;\mathcal{V}\rightarrow\mathcal{V}^{\prime}$ be the
continuous and bounded operator constructed in the proof of Theorem 5.5.
1. (a)
If $2\leq p<N$, $2\leq q<p(N-1)/N$, $p_{1}\geq p_{h}$ and $q_{1}\geq q_{k}$,
then $A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\rightarrow
X^{p_{s},q_{t}}(\overline{\Omega},\mu)$ is continuous and bounded. Moreover,
$A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\rightarrow\mathcal{V}\cap
X^{r,s}(\overline{\Omega},\mu)$ is compact for every $r\in(1,p_{s})$ and
$s\in(1,q_{s})$.
2. (b)
If $2\leq q<N-1$, $2\leq p<qN/(N-1)$, $p_{1}\geq p_{h}$ and $q_{1}\geq q_{h}$,
then the operator $A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\rightarrow
X^{p_{s},q_{s}}(\overline{\Omega},\mu)$ is continuous and bounded. Moreover,
$A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\rightarrow\mathcal{V}\cap
X^{r,s}(\overline{\Omega},\mu)$ is compact for every $r\in(1,p_{s})$ and
$s\in(1,q_{s})$.
###### Proof.
We only prove the first part. The second part of the proof follows by analogy
and is left to the reader. Let $2\leq p<N$, $2\leq q<p(N-1)/N$, $p_{1}\geq
p_{h}$ and $q_{1}\geq q_{k}$ and let $F\in
X^{p_{1},q_{1}}(\overline{\Omega},\mu)$. Proceeding exactly as in the proof of
Theorem 5.8, we obtain
$\|A^{-1}(F)\|_{p_{s},q_{t}}\leq C_{1}\|A^{-1}(F)\|_{\mathcal{V}}\leq
C\|F\|_{{\mathcal{V}^{\prime}}}\leq C_{2}\|F\|_{p_{1},q_{1}}.$
Hence, the operator
$A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\rightarrow
X^{p_{s},q_{t}}(\overline{\Omega},\mu)$ is bounded. Finally, using the facts
that
$X^{p_{1},q_{1}}(\overline{\Omega},\mu)\hookrightarrow\mathcal{V}^{\prime}$,
$A^{-1}:\;\mathcal{V}^{\prime}\rightarrow\mathcal{V}$ is continuous and
$\mathcal{V}\hookrightarrow X^{p_{s},q_{t}}(\overline{\Omega},\mu)$, we easily
deduce that $A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\rightarrow
X^{p_{s},q_{t}}(\overline{\Omega},\mu)$ is continuous.
Now, let $1<r<p_{s}$ and $1<s<q_{s}$. Since the injection
$\mathcal{V}\hookrightarrow X^{r,s}(\overline{\Omega},\mu)$ is compact, then
by duality, the injection
$X^{r^{\prime},s^{\prime}}(\overline{\Omega},\mu)\hookrightarrow(\mathcal{V})^{*}$
is compact for every $r^{\prime}>p_{s}^{\prime}=p_{h}$ and
$s^{\prime}>q_{s}^{\prime}=q_{h}$. This, together with the fact that
$A^{-1}:\;(\mathcal{V})^{*}\to\mathcal{V}$ is continuous and bounded, imply
that $A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\to\mathcal{V}$ is
compact for every $p_{1}>p_{h}$ and $q_{1}>q_{h}$.
It remains to show that $A^{-1}$ is also compact as a map into
$X^{r,s}(\overline{\Omega},\mu)$ for every $r\in(1,p_{s})$ and
$s\in(1,q_{s})$. Since $A^{-1}$ is bounded, we have to show that the image of
every bounded set $\mathcal{B}\subset X^{p_{1},q_{1}}(\overline{\Omega},\mu)$ is
relatively compact in $X^{r,s}(\overline{\Omega},\mu)$ for every
$r\in(1,p_{s})$ and $s\in(1,q_{s})$. Let $U_{n}$ be a sequence in
$A^{-1}(\mathcal{B})$. Let $F_{n}=A(U_{n})\in\mathcal{B}$. Since $\mathcal{B}$
is bounded, then the sequence $F_{n}$ is bounded. Since $A^{-1}$ is compact as
a map into $\mathcal{V}$, it follows that there is a subsequence $F_{n_{k}}$
such that $A^{-1}(F_{n_{k}})\to U\in\mathcal{V}$. We may assume that
$U_{n}=A^{-1}(F_{n})\to U$ in $\mathcal{V}$ and hence, in
$X^{p,p}(\overline{\Omega},\mu)$. It remains to show that $U_{n}\to U$ in
$X^{r,s}(\overline{\Omega},\mu)$. Let $r\in[p,p_{s})$ and $s\in[p,q_{s})$.
Since $U_{n}:=(u_{n},u_{n}|_{\partial\Omega})$ is bounded in
$X^{p_{s},q_{s}}(\overline{\Omega},\mu)$, a standard interpolation inequality
shows that there exists $\tau\in(0,1)$ such that
$\||U_{n}-U_{m}\||_{r,s}\leq\||U_{n}-U_{m}\||_{p,p}^{\tau}\||U_{n}-U_{m}\||_{p_{s},q_{s}}^{1-\tau}\leq
C\||U_{n}-U_{m}\||_{p,p}^{\tau}.$
As $U_{n}$ converges in $X^{p,p}(\overline{\Omega},\mu)$, it follows from the
preceding inequality that $U_{n}$ is a Cauchy sequence in
$X^{r,s}(\overline{\Omega},\mu)$ and therefore converges in
$X^{r,s}(\overline{\Omega},\mu)$. Hence,
$A^{-1}:\;X^{p_{1},q_{1}}(\overline{\Omega},\mu)\to\mathcal{V}\cap
X^{r,s}(\overline{\Omega},\mu)$ is compact for every $r\in[p,p_{s})$ and
$s\in[p,q_{s})$. The case $r,s\in(1,p)$ follows from the fact that
$X^{p,p}(\overline{\Omega},\mu)\hookrightarrow X^{r,s}(\overline{\Omega},\mu)$
and the proof is finished.
### 5.4. Statement and proof of the main result
We will now establish under what conditions the operator $A^{-1}$ maps
$X^{p_{1},q_{1}}(\overline{\Omega},\mu)$ boundedly and continuously into
$X^{\infty}(\overline{\Omega},\mu)$. The following is the main result of this
section.
###### Theorem 5.10.
Let the assumptions of Theorem 5.8 be satisfied.
1. (a)
Suppose $2\leq p<N$ and $2\leq q<\infty$. Let
$p_{1}>\frac{p_{s}}{p_{s}-p}=\frac{N}{p}\;\mbox{ and
}\;q_{1}>\frac{q_{s}}{q_{s}-p}=\frac{N-1}{p-1}.$
Let $f\in L^{p_{1}}(\Omega),\;g\in L^{q_{1}}(\partial\Omega)$ and
$U,V\in\mathcal{V}$ be such that for every function
$\Phi=(\varphi,\varphi|_{\partial\Omega})\in\mathcal{V}$,
$\mathcal{A}(U,\Phi)-\mathcal{A}(V,\Phi)=\int_{\Omega}f\varphi\;dx+\int_{\partial\Omega}g\varphi\;d\sigma.$
(5.16)
Then there is a constant $C=C(N,p,q,\Omega)>0$ such that
$\||U-V\||_{\infty}^{p-1}\leq
C(\|f\|_{p_{1},\Omega}+\|g\|_{q_{1},\partial\Omega}).$
2. (b)
Suppose $2\leq p=q<N-1$. Let
$p_{1}>\frac{p_{s}}{p_{s}-p}=\frac{N}{p}\;\mbox{ and
}\;q_{1}>\frac{p_{t}}{p_{t}-p}=\frac{N-1}{p}.$
Let $f\in L^{p_{1}}(\Omega),\;g\in L^{q_{1}}(\partial\Omega)$ and
$U,V\in\mathcal{V}$ satisfy (5.16). Then there is a constant
$C=C(N,p,q,\Omega)>0$ such that
$\||U-V\||_{\infty}^{p-1}\leq
C(\|f\|_{p_{1},\Omega}+\|g\|_{q_{1},\partial\Omega}).$
###### Proof.
Let $U,V\in\mathcal{V}$ satisfy (5.16). Let $k\geq 0$ be a real number and set
$w_{k}:=(|u-v|-k)^{+}\,\mathrm{sgn}(u-v),\;\;W_{k}:=(w_{k},w_{k}|_{\partial\Omega})\;\mbox{ and }\;w:=u-v.$
Let $A_{k}:=\\{x\in\overline{\Omega}:|w(x)|\geq k\\}$, and
$A_{k}^{+}:=\\{x\in\overline{\Omega}:w(x)\geq
k\\},\;\;A_{k}^{-}:=\\{x\in\overline{\Omega}:w(x)\leq-k\\}$. Clearly
$W_{k}\in\mathcal{V}$ and $A_{k}=A_{k}^{+}\cup A_{k}^{-}$. We claim that there
exists a constant $C>0$ such that
$C\mathcal{A}(W_{k},W_{k})\leq\mathcal{A}(U,W_{k})-\mathcal{A}(V,W_{k}),$
(5.17)
for all $U,V\in\mathcal{V}$. Using the definition of the form $\mathcal{A}$,
we have
$\displaystyle\mathcal{A}(U,W_{k})-\mathcal{A}(V,W_{k})$ (5.18)
$\displaystyle=\int_{\Omega}(|\nabla u|^{p-2}\nabla u-|\nabla v|^{p-2}\nabla
v)\cdot\nabla w_{k}dx+\int_{\Omega}(|u|^{p-2}u-|v|^{p-2}v)w_{k}dx$
$\displaystyle+\int_{\Omega}(\alpha_{1}(x,u)-\alpha_{1}(x,v))w_{k}dx+\int_{\partial\Omega}(|u|^{q-2}u-|v|^{q-2}v)w_{k}d\sigma$
$\displaystyle+\int_{\partial\Omega}(|\nabla_{\Gamma}u|^{p-2}\nabla_{\Gamma}u-|\nabla_{\Gamma}v|^{p-2}\nabla_{\Gamma}v)\cdot\nabla_{\Gamma}w_{k}d\sigma+\int_{\partial\Omega}(\alpha_{2}(x,u)-\alpha_{2}(x,v))w_{k}d\sigma.$
Since $\nabla w_{k}=\begin{cases}\nabla(u-v)&\mbox{ in }A(k),\\\ 0&\mbox{
otherwise, }\end{cases}$ we can rewrite (5.18) as follows:
$\displaystyle\mathcal{A}(U,W_{k})-\mathcal{A}(V,W_{k})=\int_{A(k)\cap\Omega}(|\nabla
u|^{p-2}\nabla u-|\nabla v|^{p-2}\nabla v)\cdot\nabla(u-v)dx$ (5.19)
$\displaystyle{}+\int_{A(k)\cap\partial\Omega}(|\nabla_{\Gamma}u|^{q-2}\nabla_{\Gamma}u-|\nabla_{\Gamma}v|^{q-2}\nabla_{\Gamma}v)\cdot\nabla_{\Gamma}(u-v)d\sigma$
$\displaystyle{}+\int_{A(k)\cap\Omega}(|u|^{p-2}u-|v|^{p-2}v)w_{k}dx+\int_{A(k)\cap\Omega}(\alpha_{1}(x,u)-\alpha_{1}(x,v))w_{k}dx$
$\displaystyle{}+\int_{A(k)\cap\partial\Omega}(|u|^{q-2}u-|v|^{q-2}v)w_{k}d\sigma+\int_{A(k)\cap\partial\Omega}(\alpha_{2}(x,u)-\alpha_{2}(x,v))w_{k}d\sigma.$
Exploiting inequality (2.12), from (5.19) and (5.11), we deduce
$\displaystyle\mathcal{A}(U,W_{k})-\mathcal{A}(V,W_{k})$ (5.20)
$\displaystyle\geq\int_{A(k)\cap\Omega}\left(|\nabla
w_{k}|^{p}+|w_{k}|^{p}\right)dx+\int_{A(k)\cap\Omega}c_{1}\alpha_{1}(x,w_{k})w_{k}dx$
$\displaystyle+\int_{A(k)\cap\Omega}(|u|^{p-2}uw_{k}-|v|^{p-2}vw_{k}-|w_{k}|^{p})dx$
$\displaystyle+\int_{A(k)\cap\Omega}(\alpha_{1}(x,u)-\alpha_{1}(x,v)-c_{1}\alpha_{1}(x,w_{k}))w_{k}dx$
$\displaystyle+\int_{A(k)\cap\partial\Omega}\left(|\nabla_{\Gamma}w_{k}|^{q}+|w_{k}|^{q}\right)d\sigma+\int_{A(k)\cap\partial\Omega}c_{2}\alpha_{2}(x,w_{k})w_{k}d\sigma$
$\displaystyle+\int_{A(k)\cap\partial\Omega}(|u|^{q-2}uw_{k}-|v|^{q-2}vw_{k}-|w_{k}|^{q})d\sigma$
$\displaystyle+\int_{A(k)\cap\partial\Omega}(\alpha_{2}(x,u)-\alpha_{2}(x,v)-c_{2}\alpha_{2}(x,w_{k}))w_{k}d\sigma$
$\displaystyle\geq
C\mathcal{A}(W_{k},W_{k})+\int_{A(k)\cap\Omega}(|u|^{p-2}uw_{k}-|v|^{p-2}vw_{k}-|w_{k}|^{p})dx$
$\displaystyle{}+\int_{A(k)\cap\Omega}(\alpha_{1}(x,u)-\alpha_{1}(x,v)-c_{1}\alpha_{1}(x,w_{k}))w_{k}dx$
$\displaystyle{}+\int_{A(k)\cap\partial\Omega}(|u|^{q-2}uw_{k}-|v|^{q-2}vw_{k}-|w_{k}|^{q})d\sigma$
$\displaystyle{}+\int_{A(k)\cap\partial\Omega}(\alpha_{2}(x,u)-\alpha_{2}(x,v)-c_{2}\alpha_{2}(x,w_{k}))w_{k}d\sigma,$
where $c_{1},c_{2}$ are the constants from (5.11). Using (5.10) and the fact
that $\alpha_{j}(x,\cdot)$ are strictly increasing, for $x\in A_{k}^{+}$, we
have
$\displaystyle c_{j}\alpha_{j}(x,w_{k}(x))$
$\displaystyle=c_{j}\alpha_{j}(x,u(x)-v(x)-k)\leq
c_{j}\alpha_{j}(x,u(x)-v(x))$
$\displaystyle\leq\alpha_{j}(x,u(x))-\alpha_{j}(x,v(x)).$
Multiplying this inequality by $w_{k}(x)\geq 0,$ $x\in A_{k}^{+},$ yields
$(\alpha_{j}(x,u(x))-\alpha_{j}(x,v(x))-c_{j}\alpha_{j}(x,w_{k}(x)))w_{k}(x)\geq
0.$ (5.21)
Similarly, for $x\in A_{k}^{-},$
$\displaystyle c_{j}\alpha_{j}(x,w_{k}(x))$
$\displaystyle=c_{j}\alpha_{j}(x,u(x)-v(x)+k)\geq
c_{j}\alpha_{j}(x,u(x)-v(x))$
$\displaystyle\geq\alpha_{j}(x,u(x))-\alpha_{j}(x,v(x)).$
Hence, multiplying this inequality by $w_{k}(x)\leq 0$, we get
$(\alpha_{j}(x,u(x))-\alpha_{j}(x,v(x))-c_{j}\alpha_{j}(x,w_{k}(x)))w_{k}(x)\geq
0,$ (5.22)
for all $x\in A_{k}^{-}$. Hence, on account of (5.21) and (5.22), from (5.20)
we obtain the required estimate (5.17).
(a) To prove this part, note that from Definition 5.3 it is clear that,
$\|w_{k}\|_{W^{1,p}(\Omega)}^{p}\leq\mathcal{A}(W_{k},W_{k}).$ (5.23)
Let $f\in L^{p_{1}}(\Omega)$ and $g\in L^{q_{1}}(\partial\Omega)$ with
$p_{1}>\frac{p_{s}}{p_{s}-p}=\frac{N}{p}\;\mbox{ and
}\;q_{1}>\frac{q_{s}}{q_{s}-p}=\frac{N-1}{p-1},$
and let $B\subset\overline{\Omega}$ be any $\mu$-measurable set. We claim that
there exists a constant $C\geq 0$ such that, for every $F\in
X^{p_{1},q_{1}}(\overline{\Omega},\mu)$ and $\varphi\in W^{1,p}(\Omega),$ we
have
$\||F\varphi 1_{B}\||_{1,1}\leq
C\||F\||_{p_{1},q_{1}}\|\varphi\|_{W^{1,p}(\Omega)}\||\chi_{B}\||_{p_{3},q_{3}},$
(5.24)
where $p_{3}$ and $q_{3}$ are such that $1/p_{3}+1/p_{1}+1/p_{s}=1$ and
$1/q_{3}+1/q_{1}+1/q_{s}=1$. In fact, note that if $n\in\mathbb{N}$ and
$p_{i},$ $q_{i}\in[1,\infty],\;(i=1,\dots,n)$ are such that
$\sum_{i=1}^{n}\frac{1}{p_{i}}=\sum_{i=1}^{n}\frac{1}{q_{i}}=1,$
and if $F_{i}\in X^{p_{i},q_{i}}(\overline{\Omega},\mu),\;(i=1,\dots,n)$,
then by Hölder’s inequality,
$\||\prod_{i=1}^{n}F_{i}\||_{1,1}\leq\prod_{i=1}^{n}\||F_{i}\||_{p_{i},q_{i}}.$
(5.25)
Since $W^{1,p}(\Omega)\hookrightarrow X^{p_{s},q_{s}}(\overline{\Omega},\mu)$,
the claim (5.24) follows immediately from (5.25). Next,
it follows from (5.24) that
$\displaystyle\int_{\overline{\Omega}}FW_{k}d\mu=\||FW_{k}\||_{1,1}$
$\displaystyle=$ $\displaystyle\||FW_{k}\chi_{A_{k}}\||_{1,1}$
$\displaystyle\leq$
$\displaystyle\||F\||_{p_{1},q_{1}}\||w_{k}\||_{W^{1,p}(\Omega)}\||\chi_{A_{k}}\||_{p_{3},q_{3}},$
where we recall that
$1/p_{3}=\left(1-1/p_{s}-1/p_{1}\right)>\left(p-1\right)/p_{s}$ and
$q_{3}<q_{s}/(p-1)$. Therefore, for every $k\geq 0$,
$\mathcal{A}(U,W_{k})-\mathcal{A}(V,W_{k})\leq\||F\||_{p_{1},q_{1}}\|w_{k}\|_{W^{1,p}(\Omega)}\||\chi_{A_{k}}\||_{p_{3},q_{3}},$
which together with estimate (5.17) yields the desired inequality
$C\mathcal{A}(W_{k},W_{k})\leq\mathcal{A}(U,W_{k})-\mathcal{A}(V,W_{k})\leq\||F\||_{p_{1},q_{1}}\|w_{k}\|_{W^{1,p}(\Omega)}\||\chi_{A_{k}}\||_{p_{3},q_{3}}.$
(5.26)
It follows from (5.23) and (5.26) that for every $k>0$,
$\displaystyle C\|w_{k}\|_{W^{1,p}(\Omega)}^{p}$
$\displaystyle\leq\mathcal{A}(W_{k},W_{k})\leq\mathcal{A}(U,W_{k})-\mathcal{A}(V,W_{k})$
$\displaystyle\leq\||F\||_{p_{1},q_{1}}\|w_{k}\|_{W^{1,p}(\Omega)}\||\chi_{A_{k}}\||_{p_{3},q_{3}}.$
Hence, for every $k>0$, $\|w_{k}\|_{W^{1,p}(\Omega)}^{p-1}\leq
C_{1}\||F\||_{p_{1},q_{1}}\||\chi_{A_{k}}\||_{p_{3},q_{3}}$. Using the fact that
$W^{1,p}(\Omega)\hookrightarrow X^{p_{s},q_{s}}(\overline{\Omega},\mu)$, we
obtain, for every $k>0$, that
$\||w_{k}\||_{p_{s},q_{s}}^{p-1}\leq
C\||F\||_{p_{1},q_{1}}\||\chi_{A_{k}}\||_{p_{3},q_{3}}.$
Let $h>k$. Then $A_{h}\subset A_{k}$ and on $A_{h}$ the inequality
$|w_{k}|\geq\left(h-k\right)$ holds. Therefore,
$\||(h-k)\chi_{A_{h}}\||_{p_{s},q_{s}}^{p-1}\leq
C\||F\||_{p_{1},q_{1}}\||\chi_{A_{k}}\||_{p_{3},q_{3}},$
which shows that
$\||\chi_{A_{h}}\||_{p_{s},q_{s}}^{p-1}\leq
C\||F\||_{p_{1},q_{1}}(h-k)^{-(p-1)}\||\chi_{A_{k}}\||_{p_{3},q_{3}}.$ (5.27)
Let $C_{3}:=\||1_{\overline{\Omega}}\||_{p_{s},q_{s}}$, and
$\delta:=\min\left\\{\frac{p_{s}}{p_{3}},\frac{q_{s}}{q_{3}}\right\\}>p-1,\;\delta_{0}:=\frac{\delta}{p-1}>1.$
Then
$\displaystyle\|C_{3}^{-p_{s}/p_{3}}\chi_{A_{k}}\|_{\Omega,p_{3}}$
$\displaystyle=\|C_{3}^{-1}\chi_{A_{k}}\|_{\Omega,p_{s}}^{p_{s}/p_{3}}\leq\|C_{3}^{-1}\chi_{A_{k}}\|_{\Omega,p_{s}}^{\delta}$
(5.28)
$\displaystyle\leq\||\chi_{A_{k}}\||_{p_{s},q_{s}}^{\delta}C_{3}^{-\delta}$
and
$\displaystyle\|C_{3}^{-q_{s}/q_{3}}\chi_{A_{k}}\|_{\partial\Omega,q_{3}}$
$\displaystyle=\|C_{3}^{-1}\chi_{A_{k}}\|_{\partial\Omega,q_{s}}^{q_{s}/q_{3}}\leq\|C_{3}^{-1}\chi_{A_{k}}\|_{\partial\Omega,q_{s}}^{\delta}$
(5.29)
$\displaystyle\leq\||\chi_{A_{k}}\||_{p_{s},q_{s}}^{\delta}C_{3}^{-\delta}.$
Choosing $C_{\Omega}:=C_{3}^{p_{s}/p_{3}-\delta}+C_{3}^{q_{s}/q_{3}-\delta}$,
from (5.28)-(5.29) we have
$\||\chi_{A_{k}}\||_{p_{3},q_{3}}\leq
C_{\Omega}\||\chi_{A_{k}}\||_{p_{s},q_{s}}^{\delta}.$ (5.30)
Therefore, combining (5.27) with (5.30), we get
$\displaystyle\||\chi_{A_{h}}\||_{p_{s},q_{s}}^{p-1}$ $\displaystyle\leq
C\||F\||_{p_{1},q_{1}}(h-k)^{-(p-1)}\||\chi_{A_{k}}\||_{p_{s},q_{s}}^{\delta}$
(5.31)
$\displaystyle=C\||F\||_{p_{1},q_{1}}(h-k)^{-(p-1)}\left[\||\chi_{A_{k}}\||_{p_{s},q_{s}}^{p-1}\right]^{\delta_{0}}.$
Setting $\psi(h):=\||\chi_{A_{h}}\||_{p_{s},q_{s}}^{p-1}$ in Lemma 2.13, on
account of (5.31), we can find a constant $C_{2}$ (independent of $F$) such
that
$\||\chi_{A_{K}}\||_{p_{s},q_{s}}^{p-1}=0\;\text{ with
}\;K:=C_{2}\||F\||_{p_{1},q_{1}}^{1/(p-1)}.$
This shows that $\mu(A_{K})=0,$ where
$A_{K}=\\{x\in\overline{\Omega}:|(u-v)(x)|\geq K\\}$. Hence, we have
$|u-v|\leq K,$ $\mu$-a.e. on $\overline{\Omega}$ so that
$\||U-V\||_{\infty}^{p-1}\leq
C_{2}\||F\||_{p_{1},q_{1}}=C_{2}\left(\|f\|_{p_{1},\Omega}+\|g\|_{q_{1},\partial\Omega}\right),$
which completes the proof of part (a).
(b) To prove this part, instead of (5.23) and (5.24), one uses
$\|W_{k}\|_{\mathcal{V}_{1}}^{p}\leq\mathcal{A}(W_{k},W_{k})$ and $\||F\varphi
1_{B}\||_{1,1}\leq
C\||F\||_{p_{1},q_{1}}\|\varphi\|_{W^{1,p}(\Omega)}\||\chi_{B}\||_{p_{3},q_{3}}$,
(where $p_{3}$ and $q_{3}$ are such that $1/p_{3}+1/p_{1}+1/p_{s}=1$ and
$1/q_{3}+1/q_{1}+1/p_{t}=1$) and the embedding
$\mathcal{V}\hookrightarrow\mathcal{V}_{1}\hookrightarrow
X^{p_{s},p_{t}}(\overline{\Omega},\mu)$. The remainder of the proof follows as
in the proof of part (a).
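The conclusion of part (a) rests on feeding the decay inequality (5.31) into the iteration lemma (Lemma 2.13, not restated in this excerpt). The following short numerical sketch, added purely for illustration, assumes the standard Stampacchia-type form of that lemma: if $\psi$ is nonnegative, nonincreasing and $\psi(h)\leq C_{0}(h-k)^{-\gamma}\psi(k)^{\delta_{0}}$ for all $h>k\geq 0$ with $\delta_{0}>1$, then $\psi$ vanishes beyond a finite level $K$. The numbers below are arbitrary placeholders for $C\||F\||_{p_{1},q_{1}}$, $\psi(0)$, $\gamma=p-1$ and $\delta_{0}$.
```python
def vanishing_level(C0, psi0, gamma, delta0):
    """Level K beyond which psi must vanish for a Stampacchia-type bound
    psi(h) <= C0 * (h - k)**(-gamma) * psi(k)**delta0,  delta0 > 1."""
    return (C0 * psi0 ** (delta0 - 1.0)
            * 2.0 ** (gamma * delta0 / (delta0 - 1.0))) ** (1.0 / gamma)

# Illustrative stand-ins for C*|||F|||, psi(0), gamma = p - 1, delta0 = delta/(p - 1).
C0, psi0, gamma, delta0 = 2.0, 1.0, 1.0, 1.5
K = vanishing_level(C0, psi0, gamma, delta0)

# Propagate the recursion on the upper bounds M_j for psi(k_j) along the levels
# k_j = K*(1 - 2**(-j)); with this choice of K the bounds decay geometrically.
M, k_prev = psi0, 0.0
for j in range(1, 31):
    k_next = K * (1.0 - 2.0 ** (-j))
    M = C0 * (k_next - k_prev) ** (-gamma) * M ** delta0
    k_prev = k_next
    if j % 10 == 0:
        print(f"j = {j:2d}:  level k_j = {k_next:.4f},  bound on psi(k_j) = {M:.3e}")
print(f"psi vanishes at or beyond K = {K:.4f}")
```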
We conclude this section with the following example.
###### Example 5.11.
Let $p\in[2,\infty)$, $b:\partial\Omega\rightarrow(0,\infty)$ be a strictly
positive and $\sigma$-measurable function and let
$\beta(x,\xi):=b(x)|\xi|^{p-2}\xi,\quad\xi\in\mathbb{R}.$
Then, it is easy to verify that $\beta$ satisfies Assumptions 5.1, 5.2 and 5.7
(see, e.g., [5, Example 4.17]).
## References
* [1] R. A. Adams. _Sobolev Spaces_. Pure and Applied Mathematics, Vol. 65. Academic Press, New York, 1975.
* [2] S. Agmon, A. Douglis and L. Nirenberg. _Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions I_. Comm. Pure Appl. Math. 12 (1959), 623–727.
* [3] H. Attouch, G. Buttazzo and G. Michaille. _Variational Analysis in Sobolev and BV Spaces_. Applications to PDEs and optimization. MPS/SIAM Series on Optimization, 6. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2006.
* [4] Ph. Bénilan and M.G. Crandall. _Completely accretive operators_. Semigroup theory and evolution equations (Delft, 1989), 41–75, Lecture Notes in Pure and Appl. Math., 135, Dekker, New York, 1991.
* [5] M. Biegert and M. Warma. _The heat equation with nonlinear generalized Robin boundary conditions_. J. Differential Equations 247 (2009), 1949–1979.
* [6] M. Biegert and M. Warma. _Some Quasi-linear elliptic Equations with inhomogeneous generalized Robin boundary conditions on “bad” domains_. Adv. Differential Equations 15 (2010), 893–924.
* [7] H. Brézis. _Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert_. American Elsevier Publishing Co., Inc., New York, 1973.
* [8] H. Brézis and A. Haraux. _Image d’une somme d’opérateurs monotones et applications_. Israel J. Math. 23 (1976), 165–186.
* [9] H. Brézis and L. Nirenberg. _Image d’une somme d’opérateurs non linéaires et applications_. C. R. Acad. Sci. Paris Sér. A-B 284 (1977), no. 21, A1365–A1368.
* [10] E. DiBenedetto. _Degenerate Parabolic Equations_. Springer, New York, 1993.
* [11] P. Drábek and J. Milota. _Methods of Nonlinear Analysis. Applications to Differential Equations_. Birkhäuser Advanced Texts, Birkhäuser, Basel, 2007.
* [12] C.G. Gal, G.R. Goldstein, J.A. Goldstein, S. Romanelli and M. Warma. _Fredholm alternative, semilinear elliptic problems, and Wentzell boundary conditions_. Preprint.
* [13] C.G. Gal, M. Grasselli and A. Miranville. _Nonisothermal Allen-Cahn equations with coupled dynamic boundary conditions_. Nonlinear phenomena with energy dissipation, GAKUTO Internat. Ser. Math. Sci. Appl., 29 (2008), 117–139.
* [14] C.G. Gal and A. Miranville. _Uniform global attractors for non-isothermal viscous and non-viscous Cahn–Hilliard equations with dynamic boundary conditions_. Nonlinear Analysis: Real World Applications 10 (2009), 1738–1766.
* [15] J.A. Goldstein. _Nonlinear Semigroups_. Lecture Notes.
* [16] L. Hörmander. _Linear Partial Differential Operators_. Springer-Verlag, Berlin, 1976.
* [17] E.M. Landesman and A.C. Lazer. _Nonlinear perturbations of linear elliptic boundary value problems at resonance_. J. Math. Mech. 19 (1969/1970), 609–623.
* [18] V. G. Maz’ya. _Sobolev Spaces_. Springer-Verlag, Berlin, 1985.
* [19] V.G. Maz’ya and S.V. Poborchi. _Differentiable Functions on Bad Domains_. World Scientific Publishing, 1997.
* [20] G. J. Minty. _Monotone (nonlinear) operators in Hilbert space_. Duke Math. J. 29 (1962), 341–346.
* [21] G. J. Minty. _On the solvability of nonlinear functional equations of monotonic type_. Pacific J. Math. 14 (1964), 249–255.
* [22] M. K. V. Murthy and G. Stampacchia. _Boundary value problems for some degenerate-elliptic operators_. Ann. Mat. Pura Appl. 80 (1968), 1–122.
* [23] J. Nečas. _Les Méthodes Directes en Théorie des Équations Elliptiques_. Masson et Cie, Éditeurs, Paris; Academia, Éditeurs, Prague, 1967.
* [24] J. Peetre. _Another approach to elliptic boundary value problems_. Comm. Pure Appl. Math. 14 (1961), 711–731.
* [25] M. M. Rao and Z. D. Ren. _Theory of Orlicz Spaces_. Monographs and Textbooks in Pure and Applied Mathematics, 146. Marcel Dekker, Inc., New York, 1991.
* [26] M. M. Rao and Z. D. Ren. _Applications of Orlicz Spaces_. Monographs and Textbooks in Pure and Applied Mathematics, 250. Marcel Dekker, Inc., New York, 2002.
* [27] R. E. Showalter. _Monotone Operators in Banach Space and Nonlinear Partial Differential Equations_. Amer. Math. Soc., Providence, RI, 1997.
* [28] M.I. Vishik. _On general boundary problems for elliptic differential equations_. Trudy Moskow. Math. Obsc. 1 (1952), 187–246.
* [29] M. Warma. _The Robin and Wentzell-Robin Laplacians on Lipschitz domains_. Semigroup Forum 73 (2006), 10–30.
# Tidal dissipation compared to seismic dissipation: in small bodies, in earths, and in superearths
The Astrophysical Journal, 746:150, 2 February 2012
Michael Efroimsky
US Naval Observatory, Washington DC 20392 USA
e-mail: michael.efroimsky @ usno.navy.mil
###### Abstract
While the seismic quality factor and phase lag are defined solely by the bulk
properties of the mantle, their tidal counterparts are determined both by the
bulk properties and the size effect (self-gravitation of a body as a whole).
For a qualitative estimate, we model the body with a homogeneous sphere, and
express the tidal phase lag through the lag in a sample of material. Although
simplistic, our model is sufficient to understand that the lags are not
identical. The difference emerges because self-gravitation pulls the tidal
bulge down. At low frequencies, this reduces strain and the damping rate, and
makes tidal damping less efficient in larger objects. At higher frequencies,
competition between self-gravitation and rheology becomes more complex, though
for sufficiently large superearths the same rule applies: the larger the
planet, the weaker the tidal dissipation in it. Being negligible for small
terrestrial planets and moons, the difference between the seismic and tidal
lagging (and likewise between the seismic and tidal damping) becomes very
considerable for large exoplanets (superearths). In those, tidal damping is
much weaker than what one might expect from using a seismic quality factor.
The tidal damping rate deviates from the seismic damping rate especially in
the zero-frequency limit, and this difference takes place for bodies of any
size. So the tidal torques exerted on one another by the primary and the
secondary, equal in magnitude and opposite in sign, have orbital averages that
go smoothly through zero as the secondary crosses the synchronous orbit.
We describe the mantle rheology with the Andrade model, allowing it to lean
toward the Maxwell model at the lowest frequencies. To implement this
additional flexibility, we reformulate the Andrade model by endowing it with a
free parameter $\,\zeta\,$ which is the ratio of the Andrade timescale to the
viscoelastic Maxwell time of the mantle. Some uncertainty in this parameter’s
frequency dependence does not influence our principal conclusions.
## 1 The goal and the plan
As the research on exoplanetary systems is gaining momentum, more and more
accurate theoretical tools of planetary dynamics come into demand. Among those
tools are the methods of calculation of tidal evolution of both orbital and
rotational motion of planets and their moons. Such calculations involve two
kinds of integral parameters of celestial bodies – the Love numbers and the
tidal quality factors. The values of these parameters depend upon the rheology
of a body, as well as its size, temperature, and the tidal frequency.
It has recently become almost conventional in the literature to assume that
the tidal quality factor of superearths should be of the order of one hundred
to several hundred (Carter et al. 2011, Léger et al. 2009). Although an
acceptable estimate for the seismic $\,Q\,$, this range of numbers turns out
to fall short, sometimes by orders of magnitude, of the tidal $\,Q\,$ of
superearths.
In our paper, the frequency dependence of tidal damping in a near-spherical
homogeneous body is juxtaposed with the frequency dependence of damping in a
sample of the material of which the body consists. For brevity, damping in a
sample will be termed (somewhat broadly) “seismic damping”.
We shall demonstrate that, while the tidal $\,Q\,$ of the solid Earth happens
not to deviate much from the solid-Earth seismic $\,Q\,$, the situation with
larger telluric bodies is considerably different. The difference stems from
the presence of self-gravitation, which suppresses the tidal bulge and thereby
acts as extra rigidity – a long-known circumstance often neglected in
astronomical studies.111 Including the size effect via $\,k_{l}\,$ is common.
Unfortunately, it is commonly assumed sufficient. This treatment however is
inconsistent in that it ignores the inseparable connection between the Love
number and the tidal quality factor (or the tidal phase lag). In reality, both
the Love number and the sine of the tidal lag should be derived from the
rheology and geometry of the celestial body, and cannot be adjusted separately
from one another. Due to self-gravitation (“size effect”), tidal damping in
superearths is much less efficient than in earths, and the difference may come
to orders of magnitude, as will be demonstrated below. Thus, while the seismic
$\,Q\,$ of a superearth may be comparable to the seismic $\,Q\,$ of the solid
Earth, the tidal $\,Q\,$ of a superearth may exceed this superearth’s seismic
$\,Q\,$ greatly. This is the reason why it is inappropriate to approximate
superearths’ tidal quality factors with that of the solid Earth.
We shall show that the difference between the frequency dependence of the
tidal $\,Q\,$ factor and that of the seismic $\,Q\,$ may explain the
“improper” frequency dependence of the tidal dissipation rate measured by the
lunar laser ranging (LLR) method. We also shall point out that the correct
frequency dependence of the tidal dissipation rate, especially at low
frequencies, plays an important role in modeling the process of entrapment
into spin-orbit resonances. In greater detail, the latter circumstance will be
discussed in Efroimsky (2012).
The rate of the “seismic damping” (a term that we employ to denote also
damping in a sample of the material) is defined, at each frequency, by the
material’s rheology only, i.e., by the constitutive equation linking the
strain and stress at this frequency. The rate of the tidal damping however is
determined both by the rheology and by the intensity of self-gravitation of
the body. At a qualitative level, this can be illustrated by the presence of
two terms, $\,1\,$ and $\,19\mu(\infty)/(2\rho\mbox{g}R)\,$, in the
denominator of the expression for the static Love number $\,k_{2}\,$ of a
homogeneous sphere. Here $\,\mu(\infty)\,$ denotes the relaxed shear modulus,
g signifies the surface gravity, while $\rho$ and $\,R\,$ stand for the mean
density and the radius of the body. The first of these terms, $\,1\,$, is
responsible for the size effect (self-gravitation), the second for the bulk
properties of the medium. Within the applicability realm of an important
theorem called elastic-viscoelastic analogy (also referred to as the
correspondence principle), the same expression interconnects the Fourier
component $\,\bar{k}_{2}(\chi)\,$ of the time derivative of the Love number
with the Fourier component $\,\bar{\mu}(\chi)\,$ of the stress-relaxation
function at frequency $\,\chi\,$. This renders the frequency-dependence of the
tangent of the tidal lag, which is the negative ratio of the imaginary and
real parts of $\,\bar{k}_{2}(\chi)\,$.
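To see the competition between the two terms of that denominator in numbers, one can evaluate the familiar static Love number of a homogeneous incompressible sphere, $k_{2}=\frac{3/2}{1+19\mu/(2\rho g R)}$ (only the denominator is quoted above; the $3/2$ in the numerator is the standard value for this idealised model). The sketch below is an added illustration with placeholder material constants, not values taken from this paper.
```python
import math

G = 6.674e-11  # gravitational constant, SI units

def static_k2(R, rho, mu):
    """Static degree-2 Love number of a homogeneous incompressible sphere:
    k2 = (3/2)/(1 + 19*mu/(2*rho*g*R)), with g = (4/3)*pi*G*rho*R."""
    g = 4.0 / 3.0 * math.pi * G * rho * R
    rigidity_term = 19.0 * mu / (2.0 * rho * g * R)   # second term of the denominator
    return 1.5 / (1.0 + rigidity_term), rigidity_term

# Placeholder mantle-like values: same density and unrelaxed rigidity for an
# Earth-sized body and for one twice as large.
rho, mu = 5500.0, 0.8e11        # kg/m^3, Pa (illustrative only)
for label, R in [("Earth-sized ", 6.371e6), ("super-Earth ", 2 * 6.371e6)]:
    k2, A = static_k2(R, rho, mu)
    print(f"{label} R = {R:.3e} m:  19*mu/(2*rho*g*R) = {A:5.2f},  static k2 = {k2:.3f}")
# In the larger body the '1' (self-gravitation) outweighs the rigidity term;
# applied to the complex rigidity, this same balance is what suppresses the
# imaginary part of k2, i.e. the tidal lag, in superearths.
```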
This preliminary consideration illustrates the way rheology enters the
picture. First, the constitutive equation defines the frequency dependence of
the complex compliance, $\,\bar{J}(\chi)\,$, and of the complex rigidity
$\,\bar{\mu}(\chi)=1/\bar{J}(\chi)\,$. The functional form of this dependence
determines the frequency dependence of the complex Love number,
$\,\bar{k}_{\it{l}}(\chi)\,$. The latter furnishes the frequency dependence of
the products $\,|\bar{k}_{l}(\chi)|\;\sin\epsilon_{l}(\chi)\,$ which enter the
tidal theory.
In Section 2, we briefly recall the standard description of stress-strain
relaxation and dissipation in linear media. In Section 3, we describe a
rheological model, which has proven to be adequate to the experimental data on
the mantle minerals and partial melts. The goal of the subsequent sections
will be to build the rheology into the theory of bodily tides, and to compare
a tidal response of a near-spherical body to a seismic response rendered by
the medium. Finally, several examples will be provided. Among these, will be
the case of the Moon, whose “improper” tidal-dissipation frequency-dependence
finds an explanation as soon as the difference between the seismic and tidal
friction is brought to light. In the closing section, we shall compare our
results with those obtained by Goldreich (1963).
## 2 Formalism
Everywhere in this paper we shall take into consideration only the deviatoric
stresses and strains, thus neglecting compressibility.
### 2.1 Compliance and rigidity. The standard linear formalism in the time domain
The value of strain in a material depends only on the present and past values
taken by the stress and not on the current rate of change of the stress. Hence
the compliance operator $\,\hat{J}\,$ mapping the stress
$\,\sigma_{\gamma\nu}\,$ to the strain $\,u_{\gamma\nu}\,$ must be just an
integral operator, linear at small deformations:
$\displaystyle
2\,u_{\gamma\nu}(t)\;=\;\hat{J}(t)\;\sigma_{\gamma\nu}\;=\;\int^{t}_{-\infty}J(t\,-\,t\,^{\prime})\;\stackrel{{\scriptstyle\centerdot}}{{\sigma}}_{\gamma\nu}(t\,^{\prime})\;dt\,^{\prime}\;\;\;,\;\;\;$
(1)
where $\,t\,^{\prime}<t\,$, while overdot denotes $\,d/dt\,^{\prime}\,$. The
kernel $\,J(t-t\,^{\prime})\,$ is termed the compliance function or the creep-
response function.
Integration by parts renders:
$\displaystyle
2\,u_{\gamma\nu}(t)\;=\;\hat{J}(t)~{}\sigma_{\gamma\nu}~{}=~{}J(0)\;\sigma_{\gamma\nu}(t)\;-\;J(\infty)\;\sigma_{\gamma\nu}(-\infty)~{}+~{}\int^{t}_{-\infty}\stackrel{{\scriptstyle\;\centerdot}}{{J}}(t\,-\,t\,^{\prime})~{}{\sigma}_{\gamma\nu}(t\,^{\prime})~{}dt\,^{\prime}~{}~{}\,.~{}~{}~{}~{}\,$
(2)
As the load in the infinite past may be set zero, the term containing the
relaxed compliance $\,J(\infty)\,$ may be dropped. The unrelaxed compliance
$\,J(0)\,$ can be absorbed into the integral if we agree that the elastic
contribution enters the compliance function not as
$\displaystyle J(t-t\,^{\prime})\,=\,J(0)\,+\,\mbox{viscous and hereditary
terms}~{}~{}~{},$ (3)
but as
$\displaystyle
J(t-t\,^{\prime})\,=\,J(0)\,\Theta(t\,-\,t\,^{\prime})\,+\,\mbox{viscous and
hereditary terms}~{}~{}~{}.$ (4)
The Heaviside step-function $\,\Theta(t\,-\,t\,^{\prime})\,$ is set unity for
$\,t-t\,^{\prime}\,\geq\,0\,$, and zero for $\,t-t\,^{\prime}\,<\,0\,$, so its
derivative is the delta-function $\,\delta(t\,-\,t\,^{\prime})\,$. Keeping
this in mind, we reshape (2) into
$\displaystyle
2\,u_{\gamma\nu}(t)\,=\,\hat{J}(t)~{}\sigma_{\gamma\nu}\,=\,\int^{t}_{-\infty}\stackrel{{\scriptstyle\;\centerdot}}{{J}}(t-t\,^{\prime})~{}{\sigma}_{\gamma\nu}(t\,^{\prime})\,dt\,^{\prime}~{}~{},~{}~{}~{}\mbox{with}~{}~{}J(t-t\,^{\prime})~{}~{}\mbox{containing}~{}~{}J(0)\,\Theta(t-t\,^{\prime})~{}~{}.~{}~{}~{}$
(5)
Inverse to the compliance operator
$\displaystyle 2\,u_{\gamma\nu}\;=\;\hat{J}~{}\sigma_{\gamma\nu}~{}~{}~{}.$
(6)
is the rigidity operator
$\displaystyle\sigma_{\gamma\nu}\;=\;2\,\hat{\mu}~{}u_{\gamma\nu}~{}~{}~{}.$
(7)
In the presence of viscosity, operator $\,\hat{\mu}\,$ is not integral but is
integrodifferential, and thus cannot be expressed as
$~{}\sigma_{\gamma\nu}(t)\,=\,2\,\int_{-\infty}^{t}\,\dot{\mu}(t\,-\,t\,^{\prime})\,u_{\gamma\nu}(t\,^{\prime})\,dt\,^{\prime}~{}$.
It can though be written as
$\displaystyle\sigma_{\gamma\nu}(t)\,=\,2\,\int_{-\infty}^{t}\,{\mu}(t\,-\,t\,^{\prime})\,\dot{u}_{\gamma\nu}(t\,^{\prime})\,dt\,^{\prime}\;\;\;,$
(8)
if its kernel, the stress-relaxation function $\,{\mu}(t\,-\,t\,^{\prime})\,$,
is imparted with a term $\,\,2\,\eta\,\delta(t-t\,^{\prime})\,$, integration
whereof renders the viscous portion of stress,
$~{}2\,\eta\,\dot{u}_{\gamma\nu}\,$. The kernel also incorporates an unrelaxed
part $\,\mu(0)\,\Theta(t\,-\,t\,^{\prime})\,$, whose integration furnishes the
elastic portion of the stress. The unrelaxed rigidity $\,\mu(0)\,$ is inverse
to the unrelaxed compliance $\,J(0)\,$.
Each term in $\,\mu(t-t\,^{\prime})\,$, which is neither constant nor
proportional to a delta function, is responsible for hereditary reaction.
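As a quick check of the sign and normalisation conventions in (1) and (5), the snippet below (an added illustration, not part of the paper) discretises the creep integral for a Maxwell-type kernel $J(\tau)=J\,[1+\tau/\tau_{M}]\,\Theta(\tau)$ and a step stress, and compares the result with the closed-form creep $2u(t)=J\sigma_{0}(1+t/\tau_{M})$. The constants are arbitrary.
```python
import numpy as np

# Maxwell-type creep kernel J(tau) = J0*(1 + tau/tau_M), tau >= 0, used only as
# an example; J0, tau_M and sigma0 are arbitrary illustrative constants.
J0, tau_M, sigma0 = 1.0, 10.0, 1.0

# Equation (5): 2u(t) = int dJ/dtau (t - t') sigma(t') dt'.  The derivative of
# the kernel contributes J0*delta(t - t') (instantaneous, elastic part) plus the
# constant J0/tau_M (memory part), so for a step stress sigma(t') = sigma0*Theta(t'):
t = np.linspace(0.0, 50.0, 2001)
dt = t[1] - t[0]
sigma = sigma0 * np.ones_like(t)                              # step stress
two_u = J0 * sigma + (J0 / tau_M) * np.cumsum(sigma) * dt     # delta term + memory term

two_u_exact = J0 * sigma0 * (1.0 + t / tau_M)                 # Maxwell creep
print("max |numerical - exact| =", np.max(np.abs(two_u - two_u_exact)))
# The residual is of the order of the crude rectangle-rule step, J0*sigma0*dt/tau_M.
```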
### 2.2 In the frequency domain
To Fourier-expand a real function, nonnegative frequencies are sufficient.
Thus we write:
$\displaystyle\sigma_{\gamma\nu}(t)~{}=~{}\int_{0}^{\infty}\,\bar{\sigma}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}~{}d\chi\quad\quad\mbox{and}~{}\quad~{}\quad
u_{\gamma\nu}(t)~{}=~{}\int_{0}^{\infty}\,\bar{u}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}~{}d\chi~{}~{}~{},$ (9)
where the complex amplitudes are
$\displaystyle{\bar{{\sigma}}_{\gamma\nu}}(\chi)={{{\sigma}}_{\gamma\nu}}(\chi)\,\;e^{{\it
i}\varphi_{\sigma}(\chi)}~{}~{}~{}~{}~{},~{}~{}~{}~{}~{}~{}{\bar{{u}}_{\gamma\nu}}(\chi)={{{u}}_{\gamma\nu}}(\chi)\,\;e^{{\it
i}\varphi_{u}(\chi)}~{}~{}~{},$ (10)
while the initial phases $\,\varphi_{\sigma}(\chi)\,$ and
$\,\varphi_{u}(\chi)\,$ are set to render the real amplitudes
$\,\sigma_{\gamma\nu}(\chi_{\textstyle{{}_{n}}})\,$ and
$\,u_{\gamma\nu}(\chi_{\textstyle{{}_{n}}})\,$ non-negative. To ensure
convergence, the frequency is, whenever necessary, assumed to approach the
real axis from below: $\,{\cal I}{\it{m}}(\chi)\rightarrow 0-\;\,$.
With the same caveats, the complex compliance $\,\bar{J}(\chi)\,$ is
introduced as the Fourier image of the time derivative of the creep-response
function:
$\displaystyle\int_{0}^{\infty}\bar{J}(\chi)\,e^{{\it
i}\chi\tau}d\chi\,=\,\stackrel{{\scriptstyle\;\centerdot}}{{J}}(\tau)~{}~{}.$
(11)
The inverse expression,
$\displaystyle\bar{J}(\chi)~{}=\,\int_{0}^{\infty}\stackrel{{\scriptstyle\;\centerdot}}{{J}}(\tau)\,e^{-{\it
i}\chi\tau}\,d\tau~{}~{}~{},$ (12)
is often written down as
$\displaystyle\bar{J}(\chi)~{}=\,J(0)\,+\,{\it
i}\,\chi\,\int_{0}^{\infty}\left[\;J(\tau)\,-\,J(0)\,\Theta(\tau)\;\right]\;e^{-{\it
i}\chi\tau}\,d\tau~{}~{}\,.\,~{}~{}~{}~{}$ (13)
For causality reasons, the integration over $\tau$ spans the interval
$\,\left[\right.0,\infty\left.\right)\,$ only. Alternatively, we can accept
the convention that each term in the creep-response function is accompanied
with the Heaviside step function.
Insertion of the Fourier integrals (9)–(11) into (1) leads us to
$\displaystyle
2\,\int_{0}^{\infty}\bar{u}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi
t}}}~{}d\chi\;=\;\int_{0}^{\infty}\bar{\sigma}_{\gamma\nu}(\chi)~{}\bar{J}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}~{}d\chi~{}~{}~{},$ (14)
whence we obtain:
$\displaystyle
2\;\bar{u}_{\gamma\nu}(\chi)\,=\;\bar{J}(\chi)\;\bar{\sigma}_{\gamma\nu}(\chi)\;\;\;.$
(15)
Expressing the complex compliance as
$\displaystyle\bar{J}(\chi)\;=\;|\bar{J}(\chi)|\;\exp\left[\,-\,{\it
i}\,\delta(\chi)\,\right]\;\;\;,\;$ (16)
where
$\displaystyle\tan\delta(\chi)\;\equiv\;-\;\frac{\cal{I}\it{m}\left[\,\bar{J\,}(\chi)\,\right]}{\cal{R}\it{e}\left[\,\bar{J\,}(\chi)\,\right]}\;\;\;,$
(17)
we see that $\;\delta(\chi)\,\;$ is the phase lag of a strain harmonic mode
relative to the appropriate harmonic mode of the stress:
$\displaystyle\varphi_{u}(\chi)\;=\;\varphi_{\sigma}(\chi)\;-\;\delta(\chi)\;\;\;.$
(18)
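The sign convention in (16)–(18) is easy to verify directly: build a single stress harmonic, apply a complex compliance with a prescribed lag through (15), and check that the strain maximum trails the stress maximum by $\delta/\chi$. The sketch below is an added illustration with arbitrary numbers.
```python
import numpy as np

chi, delta = 2.0 * np.pi / 100.0, 0.3        # arbitrary frequency (rad/s) and lag (rad)
Jbar = 0.5 * np.exp(-1j * delta)             # complex compliance, eq. (16)
sigma_bar = 1.0                              # real, non-negative stress amplitude

t = np.linspace(0.0, 100.0, 20001)           # one forcing period
sigma = np.real(sigma_bar * np.exp(1j * chi * t))
u = np.real(0.5 * Jbar * sigma_bar * np.exp(1j * chi * t))   # eq. (15): 2*u_bar = Jbar*sigma_bar

lag_time = t[np.argmax(u)] - t[np.argmax(sigma)]             # strain peak trails stress peak
print("measured lag * chi =", lag_time * chi, "  prescribed delta =", delta)  # eq. (18)
```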
### 2.3 The quality factor(s)
In the linear approximation, at each frequency $\,\chi\,$ the average (per
period) energy dissipation rate
$\langle\stackrel{{\scriptstyle\centerdot}}{{E}}(\chi)\rangle$ is defined by
the deformation at that frequency only, and bears no dependence upon the other
frequencies:
$\displaystyle\langle\,\dot{E}(\chi)\,\rangle\;=\;-\;\frac{\textstyle\chi
E_{peak}(\chi)}{\textstyle Q(\chi)}\;$ (19)
or, the same:
$\displaystyle\Delta
E_{cycle}(\chi)\;=\;-\;\frac{2\;\pi\;E_{peak}(\chi)}{Q(\chi)}\;\;\;,$ (20)
$\Delta E_{cycle}(\chi)\,$ being the one-cycle energy loss, and $\,Q(\chi)\,$
being the quality factor related to the phase lag at the frequency $\,\chi\,$.
It should be clarified right away to which of the lags we are linking the
quality factor. When we are talking about a sample of material, this lag is
simply $\,\delta(\chi)\,$ introduced above as the negative argument of the
appropriate Fourier component of the complex compliance – see formulae
(17)–(18). However, whenever we address tides, the quality factor becomes
linked (via the same formulae) to the tidal phase lag $\,\epsilon(\chi)\,$. Within the
same rheological model, the expression for $\,\epsilon(\chi)\,$ differs from
that for $\,\delta(\chi)\,$, because the tidal lag depends not only upon the
local properties of the material, but also upon self-gravitation of the body
as a whole.
The aforementioned “seismic-or-tidal” ambiguity in definition of $\,Q\,$
is resolved as soon as one specifies to which kind of deformation the
quality factor pertains. More serious is the ambiguity stemming from the
freedom in defining $\,E_{peak}(\chi)\,$.
If $\,E_{peak}(\chi)\,$ in (19)–(20) signifies the peak energy stored at
frequency $\,\chi\,$, the resulting quality factor is related to the lag via
$\displaystyle Q^{-1}_{\textstyle{{}_{energy}}}~{}=~{}\sin|\delta|~{}~{}~{}$
(21)
(not $\,\tan|\delta|~{}$ as commonly believed – see the calculation in the
Appendix to Efroimsky 2012).
If however $\,E_{peak}(\chi)\,$ is introduced as the absolute maximum of work
carried out on the sample at frequency $\,\chi\,$ over a time interval through
which the power stays positive, then the appropriate $Q$ factor is connected
to the lag via
$\displaystyle
Q^{-1}_{\textstyle{{}_{work}}}\;=\;\frac{\tan|\delta|}{1\;-\;\left(\;\frac{\textstyle\pi}{\textstyle
2}\;-\;|\delta|\;\right)\;\tan|\delta|}\;\;\;,~{}~{}~{}~{}~{}$ (22)
as was shown in Ibid.222In Efroimsky & Williams (2009), $\,E_{peak}(\chi)\,$
was miscalled “peak energy”. However the calculation of $Q$ was performed
there for $\,E_{peak}(\chi)\,$ introduced as the peak work.
The third definition of the quality factor (offered by Goldreich 1963) is
$\displaystyle Q_{\textstyle{{}_{Goldreich}}}^{-1}=\,\tan|\delta|~{}~{}~{}.$
(23)
This definition, though, corresponds neither to the peak work nor to the peak
energy.
In the limit of weak lagging, all three definitions entail
$\displaystyle Q^{-1}\,=~{}|\delta|\;+\;O(\delta^{2})\;\;\;.$ (24)
For the lag approaching $\,\pi/2\,$, the quality factor defined as (21)
assumes its minimal value, $\,Q_{\textstyle{{}_{energy}}}=1\,$, while
definition (22) renders $\,Q_{\textstyle{{}_{work}}}=0\,$. The latter is
natural, since in the considered limit the work performed on the system is
negative, its absolute maximum being zero.333 As $\,Q<2\pi\,$ implies
$\,E_{peak}<\Delta E\,$, such small values of $\,Q\,$ are unattainable in the
case of damped free oscillations. Still, $\,Q\,$ can assume such values under
excitation, tides being the case.
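A small added sketch makes the comparison of the three definitions explicit: it tabulates (21), (22) and (23) over a range of lags and confirms the two limits quoted above, namely agreement to $O(\delta)$ at weak lagging, and $Q_{energy}\rightarrow 1$, $Q_{work}\rightarrow 0$ as $|\delta|\rightarrow\pi/2$.
```python
import math

def q_energy(delta):       # eq. (21): 1/Q = sin|delta|
    return 1.0 / math.sin(abs(delta))

def q_work(delta):         # eq. (22): 1/Q = tan|delta| / (1 - (pi/2 - |delta|)*tan|delta|)
    d = abs(delta)
    return (1.0 - (math.pi / 2.0 - d) * math.tan(d)) / math.tan(d)

def q_goldreich(delta):    # eq. (23): 1/Q = tan|delta|
    return 1.0 / math.tan(abs(delta))

for delta in (1e-3, 0.1, 0.5, 1.0, math.pi / 2.0 - 1e-3):
    print(f"delta = {delta:8.4f}  1/delta = {1.0 / delta:10.3f}  "
          f"Q_energy = {q_energy(delta):10.3f}  Q_work = {q_work(delta):10.3e}  "
          f"Q_Goldreich = {q_goldreich(delta):10.3f}")
# For small delta all three approach 1/delta; near pi/2, Q_energy -> 1 and Q_work -> 0.
```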
In seismic studies or in exploration of attenuation in small samples, one’s
choice among the three definitions of $\,Q\,$ is a matter of personal taste,
for the quality factor is large and the definitions virtually coincide.
In the theory of tides, the situation is different, because at times one has
to deal with situations where the definitions of $\,Q\,$ disagree noticeably –
this happens when dissipation is intensive and $\,Q\,$ is of order unity. To
make a choice, recall that the actual quantities entering the Fourier
expansion of tides over the modes $\,\omega_{\textstyle{{}_{lmpq}}}\,$ are the
products444 A historical tradition (originating from Kaula 1964) prescribes to
denote the tidal phase lags with $\,\epsilon_{\textstyle{{}_{lmpq}}}\,$, while
keeping for the dynamical Love numbers the same notation as for their static
predecessors: $\,k_{l}\,$. These conventions are in conflict because the
product $~{}k_{l}\,\sin\epsilon_{\textstyle{{}_{lmpq}}}~{}$ is the negative
imaginary part of the complex Love number $\,\bar{k}_{l}\,$. More logical is
to use the unified notation as in (25). At the same time, it should not be
forgotten that for triaxial bodies the functional form of the dependence of
$\,\bar{k}_{l}\,$ on frequency is defined not only by $\,l\,$ but also by
$\,m,\,p,\,q\,$. In those situations, one has to deal with
$\,k_{\textstyle{{}_{lmpq}}}\,\sin\epsilon_{\textstyle{{}_{lmpq}}}\,$, see
Section 4.
$\displaystyle
k_{l}\,\sin\epsilon_{\textstyle{{}_{l}}}\,=\,k_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})\,\sin\epsilon_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})~{}~{}~{},$
(25)
where $\,k_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})\,$ are the
dynamical analogues to the Love numbers. It is these products that show up in
the $\,lmpq\,$ terms of the expansion for the tidal potential (force, torque).
From this point of view, a definition like (21) would be preferable, though
this time with the tidal lag $\,\epsilon\,$ instead of the seismic lag
$\,\delta~{}$:
$\displaystyle
Q^{-1}_{\textstyle{{}_{l}}}~{}=~{}\sin|\,\epsilon_{\textstyle{{}_{l}}}\,|$
(26a) or, in a more detailed manner: $\displaystyle
Q^{-1}_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})~{}=~{}\sin|\,\epsilon_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})\,|~{}~{}~{}.$
(26b)
Under this definition, one is free to substitute
$\,k_{l}\,\sin\epsilon_{\textstyle{{}_{l}}}\,$ with
$\,k_{l}/Q_{\textstyle{{}_{l}}}\,$. The subscript $\,l\,$ accompanying the
tidal quality factor will then serve as a reminder of the distinction between
the tidal quality factor and its seismic counterpart.
While the notion of the tidal quality factor has some illustrative power and
may be employed for rough estimates, calculations involving bodily tides
should be based not on the knowledge of the quality factor but on the
knowledge of the overall frequency dependence of products
$\,k_{l}\,\sin\epsilon_{\textstyle{{}_{l}}}\,=\,k_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})\,\sin\epsilon_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})\,$.
Relying on these functions would spare one the ambiguity in the definition of
$\,Q\,$ and would also enable one to take into account the frequency
dependence of the dynamical Love numbers.
## 3 The Andrade model and its reparameterisation
In the low-frequency limit, the mantle’s behaviour is unlikely to differ much
from that of the Maxwell body, because over timescales much longer than 1 yr
viscosity dominates (Karato & Spetzler 1990). At the same time, the
accumulated geophysical, seismological, and geodetic observations suggest that
at shorter timescales inelasticity takes over and the mantle is described by
the Andrade model. However, the near-Maxwell behavior expected at low
frequencies can be fit into the Andrade formalism, as we shall explain below.
### 3.1 Experimental data: the power scaling law
Dissipation in solids may be effectively modeled using the empirical scaling
law
$\displaystyle\sin\delta\;=\;\left(\,{\cal{E}}\,\chi\,\right)^{\textstyle{{}^{-p}}}~{}~{},$
(27)
$\cal{E}\,$ being a constant having the dimensions of time. This “constant”
may itself bear a (typically, much slower) dependence upon the frequency
$\,\chi\,$. The dependence of $\,{\cal E}\,$ on the temperature is given by
the Arrhenius law (Karato 2008).
Experiments demonstrate that the power dependence (27) is surprisingly
universal, with the exponent $\,p\,$ robustly taking values within the
interval from $\,0.14\,$ to $\,0.4\,$ (more often, from $\,0.14\,$ to
$\,0.3\,$).
For the first time, dependence (27) was measured on metals in a lab. This was
done by Andrade (1910), who also tried to pick up an expression for the
compliance compatible with this scaling law. Later studies have demonstrated
that this law works equally well, and with similar values of $\,p\,$, both for
silicate rocks (Weertman & Weertman 1975, Tan et al. 1997) and ices (Castillo-
Rogez 2009, McCarthy et al 2007).
Independently from the studies of samples in the lab, the scaling behaviour
(27) was obtained via measurements of dissipation of seismic waves in the
Earth (Mitchell 1995, Stachnik et al. 2004, Shito et al. 2004).
The third source of confirmation of the power scaling law came from geodetic
experiments that included: (a) satellite laser ranging (SLR) of tidal
variations in the $\,J_{2}\,$ component of the gravity field of the Earth; (b)
space-based observations of tidal variations in the Earth’s rotation rate; and
(c) space-based measurements of the Chandler wobble period and damping
(Benjamin et al. 2006, Eanes & Bettadpur 1996, Eanes 1995).555 It should be
noted that in reality the geodetic measurements were confirming the power law
(27) not for the seismic lag $\,\delta\,$ but for the tidal lag
$\,\epsilon~{}$, an important detail to be addressed shortly.
While samples of most minerals furnish the values of $\,p\,$ lying within the
interval $\,0.15\,-\,0.4\,$, the geodetic measurements give
$\,0.14\,-\,0.2\,$. At least a fraction of this difference may be attributed
to the presence of partial melt, which is known to have lower values of
$\,p\,$ (Fontaine et al. 2005).
On all these grounds, it is believed that mantles of terrestrial planets are
adequately described by the Andrade model, at least in the higher frequency
band where inelasticity dominates (Gribb & Cooper 1998, Birger 2007, Efroimsky
& Lainey 2007, Zharkov & Gudkova 2009). Some of the other models were
considered by Henning et al. (2009).
The Andrade model is equally well applicable to celestial bodies with ice
mantles (for application to Iapetus see Castillo-Rogez et al. 2011) and to
bodies with considerable hydration in a silicate mantle.666 Damping mechanisms
in a wet planet will be the same as in a dry one, except that their efficiency
will be increased. So the dissipation rate will have a similar frequency
dependence but higher magnitude. The model can also be employed for modeling
of the tidal response of the solid parts of objects with significant liquid-
water layers.777 In the absence of internal oceans, a rough estimate of the
tidal response can be obtained through modeling the body with a homogeneous
sphere. However the presence of such oceans makes it absolutely necessary to
calculate the overall response through integration over the solid and liquid
layers. As demonstrated by Tyler (2009), tidal dissipation in internal ocean
layers can play a big role in rotational dynamics of the body.
### 3.2 The Andrade model in the time domain
The compliance function of the Andrade body (Cottrell & Aytekin 1947, Duval
1978),
$\displaystyle
J(t-t\,^{\prime})\;=\;\left[\;J\;+\;(t-t\,^{\prime})^{\alpha}~{}\beta+\;\left(t-t\,^{\prime}\right)~{}{\eta}^{-1}\;\right]\,\Theta(t-t\,^{\prime})~{}~{}~{},$
(28)
contains empirical parameters $\alpha\,$ and $\,\beta\,$, the steady-state
viscosity $\eta\,$, and the unrelaxed compliance $\,J\equiv
J(0)=1/\mu(0)\,=\,1/\mu\,$. We endow the right-hand side of (28) with the
Heaviside step-function $\,\Theta(t-t\,^{\prime})\,$, to ensure that insertion
of (28) into (5), with the subsequent differentiation, yields the elastic term
$\,J\,\delta(t-t\,^{\prime})\,$ under the integral. The model allows for
description of dissipation mechanisms over a continuum of frequencies, which
is useful for complex materials with a range of grain sizes.
The Andrade model can be thought of as the Maxwell model equipped with an
extra term $\,(t-t\,^{\prime})^{\alpha}\,\beta\,$ describing hereditary
reaction of strain to stress. The Maxwell model
$\displaystyle
J^{\textstyle{{}^{(Maxwell)}}}(t-t\,^{\prime})\;=\;\left[\;J\;+\;\left(t-t\,^{\prime}\right)~{}{\eta}^{-1}\;\right]\,\Theta(t-t\,^{\prime})~{}~{}~{}$
(29)
is a simple rheology, which has a long history of application to planetary
problems, but generally has too strong a frequency dependence at higher
frequencies where inelasticity becomes more efficient than viscosity (Karato
2008). Insertion of (29) into (5) renders strain consisting of two separate
inputs. The one proportional to $\,J\,$ implements the instantaneous (elastic)
reaction, while the one containing $\,{\eta}^{-1}\,$ is responsible for the
viscous part of the reaction.
Just as the viscous term $~{}\left(t-t\,^{\prime}\right)\,{\eta}^{-1}~{}$
showing up in (28)–(29), so the inelastic term
$~{}(t-t\,^{\prime})^{\alpha}\,\beta~{}$ emerging in the Andrade model (28) is
delayed – both terms reflect how the past stressing is influencing the present
deformation. At the same time, inelastic reaction differs from viscosity both
mathematically and physically, because it is produced by different physical
mechanisms.
A disadvantage of the formulation (28) of the Andrade model is that it
contains a parameter of fractional dimensions, $\,\beta\,$. To avoid
fractional dimensions, we shall express this parameter, following Efroimsky
(2012), as
$\displaystyle\beta\,=\,J~{}\tau_{{}_{A}}^{-\alpha}\,=~{}\mu^{-1}\,\tau_{{}_{A}}^{-\alpha}~{}~{}~{},$
(30a) the new parameter $\,\tau_{{}_{A}}\,$ having dimensions of time. This is
the timescale associated with the Andrade creep, wherefore it may be named as
the “Andrade time” or the “inelastic time”.
Another option is to express $\,\beta\,$ as
$\displaystyle\beta\,=\,{\zeta}^{-\alpha}\,J~{}\tau_{{}_{M}}^{-\alpha}\,=~{}\zeta^{-\alpha}\,\mu^{-1}\,\tau_{{}_{M}}^{-\alpha}~{}~{}~{},$
(30b)
where the dimensionless parameter $\,\zeta\,$ is related through
$\displaystyle\zeta~{}=~{}\frac{\tau_{{}_{A}}}{\tau_{{}_{M}}}~{}~{}~{}$ (31)
to the Andrade timescale $\,\tau_{{}_{A}}\,$ and to the Maxwell time
$\displaystyle\tau_{{}_{M}}\,\equiv\,\frac{\eta}{\mu}\,=\,\eta\,J~{}~{}.$ (32)
In terms of the so-introduced parameters, the compliance assumes the form of
$\displaystyle J(t-t\,^{\prime})$ $\displaystyle=$ $\displaystyle
J~{}\left[\;1\;+\;\left(\,\frac{t-t\,^{\prime}}{\tau_{{}_{A}}}\,\right)^{\alpha}\,+~{}\frac{t-t\,^{\prime}}{\tau_{{}_{M}}}\;\right]\,\Theta(t-t\,^{\prime})\;\;\;$
(33a) $\displaystyle=$ $\displaystyle
J~{}\left[\;1\;+\;\left(\,\frac{t-t\,^{\prime}}{\zeta~{}\tau_{{}_{M}}}\,\right)^{\alpha}\,+~{}\frac{t-t\,^{\prime}}{\tau_{{}_{M}}}\;\right]\,\Theta(t-t\,^{\prime})\;\;\;.$
(33b)
For $\,\tau_{{}_{A}}\ll\,\tau_{{}_{M}}\,$ (or, equivalently, for
$\,\zeta\,\ll\,1\,$), inelasticity plays a more important role than viscosity.
On the other hand, a large $\,\tau_{{}_{A}}\,$ (or large $\,\zeta\,$) would
imply suppression of inelastic creep, compared to viscosity.
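To fix ideas with numbers, the snippet below (added here; the material constants are illustrative mantle-like values, not data from this paper) converts a viscosity and an unrelaxed rigidity into the Maxwell time (32), picks a value of $\,\zeta\,$, and recovers $\,\beta\,$ from the reparameterisation (30), checking (30a) against (30b).
```python
mu   = 0.8e11    # unrelaxed rigidity, Pa        (illustrative value)
eta  = 1.0e21    # steady-state viscosity, Pa*s  (illustrative value)
alpha, zeta = 0.2, 1.0

J = 1.0 / mu                     # unrelaxed compliance J = 1/mu, cf. the text under (28)
tau_M = eta * J                  # Maxwell time, eq. (32)
tau_A = zeta * tau_M             # Andrade time, eq. (31)

beta_30a = J * tau_A ** (-alpha)                       # eq. (30a)
beta_30b = zeta ** (-alpha) * J * tau_M ** (-alpha)    # eq. (30b)
print(f"tau_M = {tau_M:.3e} s,   tau_A = {tau_A:.3e} s")
print(f"beta from (30a) = {beta_30a:.3e},  from (30b) = {beta_30b:.3e}   [Pa^-1 s^-alpha]")
```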
It has been demonstrated by Castillo-Rogez that under low stressing (i.e.,
when the grain-boundary diffusion is the dominant damping mechanism – like in
Iapetus) $\,\beta\,$ obeys the relation
$\displaystyle\beta\,\approx~{}J~{}\tau_{{}_{M}}^{-\alpha}\,=\,J^{1-\alpha}\,\eta^{-\alpha}\,=\,\mu^{\alpha-1}\,\eta^{-\alpha}~{}~{}~{},$
(34a) (see, e.g., Castillo-Rogez et al. 2011). Comparing this to (30a), we can
say that the Andrade and Maxwell timescales are close to one another:
$\displaystyle\tau_{{}_{A}}\,\approx~{}\tau_{{}_{M}}~{}~{}~{}$ (34b) or,
equivalently, that the dimensionless parameter $\,\zeta\,$ is close to unity:
$\displaystyle\zeta~{}\approx~{}1~{}~{}~{}.$ (34c)
Generally, we have no reason to expect the Andrade and Maxwell timescales to
coincide, nor even to be comparable under all possible circumstances. While
(34) may work when inelastic friction is determined mainly by the grain-
boundary diffusion, we are also aware of a case when the timescales
$\,\tau_{{}_{A}}\,$ and $\,\tau_{{}_{M}}\,$ differ considerably. This is a
situation when stressing is stronger, and the inelastic part of dissipation is
defined mainly by dislocations unpinning (i.e., by the Andrade creep). This is
what happens in mantles of earths and superearths.
On theoretical grounds, Karato & Spetzler (1990) point out that the
dislocation-unpinning mechanism remains effective in the Earth’s mantle down
to the frequency threshold $\,\chi_{\textstyle{{}_{0}}}\sim
1~{}\mbox{yr}^{-1}$. At lower frequencies, this mechanism becomes less
efficient, giving way to viscosity. Thus at low frequencies the mantle’s
behaviour becomes closer to that of the Maxwell body.888 Using the Andrade
model as a fit to the experimentally observed scaling law (27), we see that
the exponent $\,p\,$ coincides with the Andrade parameter $\,\alpha<1\,$ at
frequencies above the said threshold, and that $\,p\,$ becomes closer to unity
below the threshold – see subsection 3.4 below. This important example tells
us that the Andrade time $\,\tau_{{}_{A}}\,$ and the dimensionless parameter
$\,\zeta\,$ may, at times, be more sensitive to the frequency than the Maxwell
time would be. Whether $\,\tau_{{}_{A}}\,$ and $\,\zeta\,$ demonstrate this
sensitivity or not – may, in its turn, depend upon the intensity of loading,
i.e., upon the damping mechanisms involved.
### 3.3 The Andrade model in the frequency domain
Through (12), it can be demonstrated (Findley et al. 1976) that in the
frequency domain the compliance of an Andrade material reads as
$\displaystyle{\bar{\mathit{J\,}}}(\chi)$ $\displaystyle=$ $\displaystyle
J\,+\,\beta\,(i\chi)^{-\alpha}\;\Gamma\,(1+\alpha)\,-\,\frac{i}{\eta\chi}$
(35a) $\displaystyle=$ $\displaystyle
J\,\left[\,1\,+\,(i\,\chi\,\tau_{{}_{A}})^{-\alpha}\;\Gamma\,(1+\alpha)~{}-~{}i~{}(\chi\,\tau_{{}_{M}})^{-1}\right]\;\;\;,$
(35b) $\displaystyle=$ $\displaystyle
J\,\left[\,1\,+\,(i\,\chi\,\zeta\,\tau_{{}_{M}})^{-\alpha}\;\Gamma\,(1+\alpha)~{}-~{}i~{}(\chi\,\tau_{{}_{M}})^{-1}\right]\;\;\;,$
(35c)
$\chi\,$ being the frequency, and $\,\Gamma\,$ denoting the Gamma function.
The imaginary and real parts of the complex compliance are:
$\displaystyle{\cal I}{\it m}[\bar{J}(\chi)]$ $\displaystyle=$
$\displaystyle-\;\frac{1}{\eta\,\chi}\;-\;\chi^{-\alpha}\,\beta\;\sin\left(\,\frac{\alpha\,\pi}{2}\,\right)\;\Gamma(\alpha\,+\,1)~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\quad\quad\quad$
(36a) $\displaystyle=$
$\displaystyle-\;J\,(\chi\,\tau_{{}_{M}})^{-1}\;-\;J\,(\chi\,\tau_{{}_{A}})^{-\alpha}\;\sin\left(\,\frac{\alpha\,\pi}{2}\,\right)\;\Gamma(\alpha\,+\,1)~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\quad\quad\quad$
(36b) $\displaystyle=$
$\displaystyle-\;J\,(\chi\,\tau_{{}_{M}})^{-1}\;-\;J\,(\chi\,\zeta\,\tau_{{}_{M}})^{-\alpha}\;\sin\left(\,\frac{\alpha\,\pi}{2}\,\right)\;\Gamma(\alpha\,+\,1)~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\quad\quad\quad$
(36c)
and
$\displaystyle{\cal R}{\it e}[\bar{J}(\chi)]$ $\displaystyle=$ $\displaystyle
J\;+\;\chi^{-\alpha}\,\beta\;\cos\left(\,\frac{\alpha\,\pi}{2}\,\right)\;\Gamma(\alpha\,+\,1)~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(37a) $\displaystyle=$ $\displaystyle
J\;+\;J\,(\chi\tau_{{}_{A}})^{-\alpha}\;\cos\left(\,\frac{\alpha\,\pi}{2}\,\right)\;\Gamma(\alpha\,+\,1)~{}~{}~{}\quad\quad\quad\quad\quad~{}\quad\quad\quad\quad\quad\quad$
(37b) $\displaystyle=$ $\displaystyle
J\;+\;J\,(\chi\,\zeta\,\tau_{{}_{M}})^{-\alpha}\;\cos\left(\,\frac{\alpha\,\pi}{2}\,\right)\;\Gamma(\alpha\,+\,1)~{}~{}~{}.\quad\quad\quad\quad\quad~{}\quad\quad\quad\quad\quad\quad$
(37c)
The ensuing frequency dependence of the phase lag will look as follows:999 In some
publications (e.g., Nimmo 2008), formula (38a) is given as an expression for
the inverse quality factor. This is legitimate when the latter is defined
through (23).
$\displaystyle\tan\delta(\chi)$ $\displaystyle=$
$\displaystyle-~{}\frac{{{\cal{I}}\textit{m}\left[\bar{J}(\chi)\right]}}{{\cal{R}}{\textit{e}\left[\bar{J}(\chi)\right]}}~{}=~{}\frac{(\eta\;\chi)^{\textstyle{{}^{\,-1}}}\,+~{}\chi^{\textstyle{{}^{\,-\alpha}}}\;\beta\;\sin\left(\frac{\textstyle\alpha~{}\pi}{\textstyle
2}\right)\Gamma\left(\alpha\,+\,1\right)}{\mu^{\textstyle{{}^{~{}-1}}}\,+\;\chi^{\textstyle{{}^{\,-\alpha}}}\;\beta\;\cos\left(\frac{\textstyle\alpha\;\pi}{\textstyle
2}\right)\;\Gamma\left(\textstyle\alpha\,+\,1\right)}~{}$ (38a)
$\displaystyle=$ $\displaystyle\frac{\frac{\textstyle
1}{\textstyle\,\chi~{}\tau_{{}_{M}}\,}\,+\,\left(\frac{\textstyle
1}{\textstyle\,\chi~{}\tau_{{}_{A}}\,}\right)^{\alpha}\,~{}\Gamma\left(\alpha\,+\,1\right)~{}\,\sin\left(\frac{\textstyle\alpha~{}\pi}{\textstyle
2}\right)}{1\,+~{}\left(\,\frac{\textstyle
1}{\textstyle\,\chi~{}\tau_{{}_{A}}\,}\,\right)^{\alpha}~{}\,\Gamma\left(\textstyle\alpha\,+\,1\right)\,~{}\cos\left(\frac{\textstyle\alpha\;\pi}{\textstyle
2}\right)}~{}=~{}\frac{z^{-1}\,\zeta\,+\,z^{-\alpha}\,\sin\left(\frac{\textstyle\alpha~{}\pi}{\textstyle
2}\right)\Gamma\left(\alpha\,+\,1\right)}{1\,+~{}z^{-\alpha}\,\cos\left(\frac{\textstyle\alpha\;\pi}{\textstyle
2}\right)~{}\Gamma\left(\textstyle\alpha\,+\,1\right)}~{}~{}~{},\quad\quad\quad$
(38b)
with $z\,$ being the dimensionless frequency defined through
$\displaystyle
z~{}\equiv~{}\chi~{}\tau_{{}_{A}}~{}=~{}\chi~{}\tau_{{}_{M}}~{}\zeta~{}~{}~{}.$
(39)
Evidently, for $\,\beta\rightarrow 0\,$ (that is, for
$\,\zeta\rightarrow\infty\,$ or $\,\tau_{{}_{A}}\rightarrow\infty\,$),
expression (38) approaches
$\displaystyle\tan\delta(\chi)\,=\,\left(\tau_{{}_{M}}\,\chi\right)^{\textstyle{{}^{\,-1}}}~{}~{}~{},$
(40)
which is the frequency dependence appropriate to the Maxwell body.
### 3.4 Low frequencies: from Andrade toward Maxwell
The Andrade body demonstrates the so-called “elbow dependence” of the
dissipation rate upon frequency. At high frequencies, the lag $\,\delta\,$
satisfies the power law
$\displaystyle\tan\delta~{}\sim~{}\chi^{-p}~{}~{}~{},$ (41)
the exponent being expressed via an empirical parameter $\,\alpha\,$, where
$\,0\,<\,\alpha\,<\,1\,$ for most materials. It follows from (38) that at
higher frequencies $\,p=\alpha\,$, while at low frequencies
$\,p={1-\alpha}\,$.
The statement by Karato & Spetzler (1990), that the mantle’s behaviour at low
frequencies should lean toward that of the Maxwell body, can be fit into the
Andrade formalism, if we agree that at low frequencies either $\,\alpha\,$
approaches zero (so $p$ approaches unity) or $\,\zeta\,$ becomes large (so
$\,\tau_{{}_{A}}\,$ becomes much larger than $\,\tau_{{}_{M}}\,$). The latter
option is more physical, because the increase of $\,\tau_{{}_{A}}\,$ would
reflect the slowing-down of the unpinning mechanism studied in Ibid.
One way or another, the so-parameterised Andrade model is flexible enough to
accommodate the result from Ibid. Hence our treatment will permit us to describe both the
high-frequency range where the traditional Andrade model is applicable, and
the low-frequency band where the behaviour of the mantle deviates from the
Andrade model toward the Maxwell body. Comparison of the two models in the
frequency domain is presented in Figure 1.
Figure 1: Andrade and Maxwell models in the frequency domain. The plot shows
the decadic logarithm of $\,\tan\delta\,$, as a function of the decadic
logarithm of the forcing frequency $\,\chi\,$ (in cycles s-1). For the Andrade
body, the tangent of phase lag is given by (38b), with $\,\alpha=0.2\,$ and
$\,\tau_{{}_{A}}=\tau_{{}_{M}}=10^{10}\,$s. For the Maxwell body, the tangent
is rendered by (40), with $\,\tau_{{}_{M}}=10^{10}\,$s.
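The curves of Figure 1 are straightforward to regenerate. The added sketch below uses the same parameters as the caption, evaluates $\,\tan\delta\,$ both from the complex compliance (35b) and from the closed form (38b), and estimates the local log-log slope, which should approach $-\alpha$ at high frequencies and $-(1-\alpha)$ at low frequencies, against the Maxwell slope $-1$ of (40).
```python
import numpy as np
from math import gamma, pi

alpha, tau_A, tau_M = 0.2, 1.0e10, 1.0e10     # values from the caption of Figure 1 (zeta = 1)
zeta = tau_A / tau_M

def tan_delta_andrade(chi):
    # Complex compliance (35b); the lag follows from tan(delta) = -Im/Re, eq. (38a).
    Jbar_over_J = 1.0 + (1j * chi * tau_A) ** (-alpha) * gamma(1.0 + alpha) - 1j / (chi * tau_M)
    return -Jbar_over_J.imag / Jbar_over_J.real

def tan_delta_38b(chi):
    z = chi * tau_A                            # dimensionless frequency, eq. (39)
    num = zeta / z + z ** (-alpha) * np.sin(alpha * pi / 2.0) * gamma(1.0 + alpha)
    den = 1.0 + z ** (-alpha) * np.cos(alpha * pi / 2.0) * gamma(1.0 + alpha)
    return num / den

def tan_delta_maxwell(chi):                    # eq. (40)
    return 1.0 / (chi * tau_M)

def local_slope(f, chi, h=1.01):               # d log10(tan delta) / d log10(chi)
    return (np.log10(f(chi * h)) - np.log10(f(chi / h))) / (2.0 * np.log10(h))

for chi in (1.0e-16, 1.0e-4):                  # frequencies well below / above 1/tau_A
    print(f"chi = {chi:.1e}:  tan(delta) = {tan_delta_andrade(chi):.3e} "
          f"(38b: {tan_delta_38b(chi):.3e}),  Andrade slope = {local_slope(tan_delta_andrade, chi):+.2f},  "
          f"Maxwell slope = {local_slope(tan_delta_maxwell, chi):+.2f}")
```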
## 4 Expanding a tidal potential or torque – over the tidal modes or over the
forcing frequencies?
Consider a binary system with mean motion $\,n\,$, and suppose that tidal
dissipation in one of the bodies much exceeds that in its companion. Then the
former body may be treated as the tidally perturbed primary, the latter being
its tide-raising secondary. The sidereal angle and the spin rate of the
primary will be expressed with $\,\theta\,$ and
$\,\stackrel{{\scriptstyle\bf\centerdot}}{{\theta\,}}\,$, while the node,
pericentre, and mean anomaly of the secondary, as seen from the primary,101010
When the role of the primary is played by a planet and the role of the
perturbing secondary is played by the host star, the argument of the
pericentre of the star as seen from the planet, $\,\omega\,$, differs by
$\,\pi\,$ from the argument of the pericentre of the planet as seen from the
star. Also mind that in equation (42) the letter $\,\omega\,$ with the
subscript $\,lmpq\,$ denotes, as ever, a tidal mode, while the same letter
without a subscript stands for the periapse. The latter use of this letter in
equation (42) is exceptional, in that elsewhere in the paper the letter
$\,\omega\,$, with or without a subscript, always denotes a tidal mode. will
be denoted by $\,\Omega\,$, $\omega\,$, and ${\cal M}$.
In the Darwin-Kaula theory, bodily tides are expanded over the modes
$\displaystyle\omega_{lmpq}\;\equiv\;(l-2p)\;\dot{\omega}\,+\,(l-2p+q)\;\dot{\cal{M}}\,+\,m\;(\dot{\Omega}\,-\,\dot{\theta})\,\approx\,(l-2p+q)\;n\,-\,m\;\dot{\theta}~{}~{}~{},~{}~{}~{}$
(42)
with $l,\,m,\,p,\,q\,$ being integers. Dependent upon the values of the mean
motion, spin rate, and the indices, the tidal modes $\,\omega_{{\it l}mpq}\,$
may be positive or negative or zero.
In the expansion of the tidal potential or torque or force, summation over the
integer indices goes as
$~{}\sum_{l=2}^{\infty}\sum_{m=0}^{l}\sum_{p=0}^{\infty}\sum_{q=\,-\,\infty}^{\infty}~{}.$
For example, the secular polar component of the tidal torque will read as:
$\displaystyle{\cal
T}~{}=~{}\sum_{l=2}^{\infty}\sum_{m=0}^{l}\sum_{p=0}^{\infty}\sum_{q=\,-\,\infty}^{\infty}.~{}.~{}.~{}\,k_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})~{}\sin\epsilon_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})~{}~{}~{},$
(43)
where the ellipsis denotes a function of the primary’s radius and the
secondary’s orbital elements. The functions
$\,k_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})\,$ are the dynamical
analogues of the static Love numbers, while the phase lags corresponding to
the tidal modes $\,\omega_{\textstyle{{}_{lmpq}}}\,$ are given by
$\displaystyle\epsilon_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})~{}=~{}\omega_{\textstyle{{}_{lmpq}}}~{}\Delta
t_{\textstyle{{}_{l}}}(|\omega_{\textstyle{{}_{lmpq}}}|)~{}=~{}|\,\omega_{\textstyle{{}_{lmpq}}}\,|~{}\Delta
t_{\textstyle{{}_{l}}}(|\omega_{\textstyle{{}_{lmpq}}}|)~{}\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}=~{}\chi_{\textstyle{{}_{lmpq}}}~{}\Delta
t_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})~{}\,\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}~{}~{}.~{}~{}~{}~{}$
(44)
Here the positively defined quantities
$\displaystyle\chi_{\textstyle{{}_{lmpq}}}\,\equiv~{}|\,\omega_{\textstyle{{}_{lmpq}}}\,|~{}~{}~{}$
(45)
are the forcing frequencies in the material, while the positively defined time
lags $\,\Delta t_{\textstyle{{}_{lmpq}}}\,$ are their functions.
Following Kaula (1964), the phase and time lags are often denoted with
$\,\epsilon_{lmpq}\,$ and $\,\Delta t_{lmpq}\,$. For near-spherical bodies,
though, the notations $~{}\epsilon_{{\it l}}(\chi_{{\it l}mpq})~{}$ and
$\,\Delta t_{{\it l}}(\chi_{{\it l}mpq})\,$ would be preferable, because for
such bodies the functional form of the dependency $\,\epsilon_{lmpq}(\chi)\,$
is defined by $\,l\,$ solely, but is ignorant of the values of the other three
indices.111111 Within the applicability realm of the elastic-viscoelastic
analogy employed in subsection 5.2 below, the functional form of the complex
Love number $\,\bar{k}_{l}(\chi)\,$ of a near-spherical object is determined
by index $\,\it l\,$ solely, while the integers $\,m,\,p,\,q\,$ show up
through the value of the frequency: $~{}\bar{k}_{{\it
l}}(\chi)\,=\,\bar{k}_{{\it l}}(\chi_{{\it l}mpq})~{}$. This applies to the
lag too, since the latter is related to $\,\bar{k}_{\it l}\,$ via (62). For
triaxial bodies, the functional forms of the frequency dependencies of the
Love numbers and phase lags do depend upon $\,m,\,p,\,q\,$, because of
coupling between spherical harmonics. In those situations, notations
$\,\bar{k}_{{\it l}mpq}\,$ and $\,\epsilon_{{\it l}mpq}\,$ become necessary
(Dehant 1987a,b; Smith 1974). The Love numbers of a slightly non-spherical
primary differ from the Love numbers of the spherical reference body by a term
of the order of the flattening, so a small non-sphericity can usually be
ignored. The same therefore applies to the time lag.
The forcing frequencies in the material of the primary,
$\,\chi_{\textstyle{{}_{lmpq}}}\,$, are positively defined. While the general
formula for a Fourier expansion of a field includes integration (or summation)
over both positive and negative frequencies, it is easy to demonstrate that in
the case of real fields it is sufficient to expand over positive frequencies
only. The condition of the field being real requires that the real part of a
Fourier term at a negative frequency is equal to the real part of the term at
an opposite, positive frequency. Hence one can get rid of the terms with
negative frequencies, at the cost of doubling the appropriate terms with
positive frequencies. (The convention is that the field is the real part of a
complex expression.)
The tidal theory is a rare exception to this rule: here, a contribution of a
Fourier mode into the potential is not completely equivalent to the
contribution of the mode of an opposite sign. The reason for this is that the
tidal theory is developed to render expressions for tidal forces and torques,
and the sign of the tidal mode $\,\omega_{\textstyle{{}_{lmpq}}}\,$ shows up
explicitly in those expressions. This happens because the phase lag in (43) is
the product (44) of the tidal mode $\,\omega_{\textstyle{{}_{lmpq}}}\,$ and
the positively defined time lag $\,\Delta t_{\textstyle{{}_{lmpq}}}\,$.
This way, if we choose to expand the tide over the positively defined frequencies
$\,\chi\,$ only, we shall have to insert “by hand” the multipliers
$\displaystyle\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}=~{}\mbox{sgn}\left[~{}({\it
l}-2p+q)\,n~{}-~{}m\,\dot{\theta}~{}\right]$ (46)
into the expressions for the tidal torque and force, a result to be employed
below in formula (65). The topic is explained in greater detail in Efroimsky
(2012).
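For the reader's convenience, here is a minimal numerical sketch (in Python) of how the mode (46) and the lag (44) are evaluated in practice. All numerical values below are purely illustrative placeholders.

```python
import numpy as np

def tidal_mode(l, m, p, q, n, theta_dot):
    """omega_lmpq = (l - 2p + q) n - m theta_dot; its sign is the multiplier (46)."""
    return (l - 2 * p + q) * n - m * theta_dot

def phase_lag(omega_lmpq, delta_t):
    """Phase lag (44): eps = omega_lmpq * Delta_t(|omega_lmpq|)."""
    return omega_lmpq * delta_t

# The semidiurnal mode (lmpq = 2200) -- and with it its phase lag -- changes sign
# when the secondary crosses the synchronous orbit (n = theta_dot).
n, theta_dot = 1.0e-5, 7.3e-5                    # mean motion and spin rate [rad/s], illustrative
w_2200 = tidal_mode(2, 2, 0, 0, n, theta_dot)
print(np.sign(w_2200), phase_lag(w_2200, delta_t=600.0))   # negative below synchronism
```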
## 5 Complex Love numbers and the elastic-viscoelastic analogy
Let us recall briefly the switch from the stationary Love numbers to their
dynamical counterparts, the Love operators. The method was pioneered,
probably, by Zahn (1966) who applied it to a purely viscous planet. The method
works likewise for an arbitrary linear rheological model, insofar as the
elastic-viscoelastic analogy (also referred to as the correspondence
principle) remains in force.
### 5.1 From the Love numbers to the Love operators
A homogeneous near-spherical incompressible primary alters its shape and
potential, when influenced by a static point-like secondary. At a point
$\mbox{{\boldmath$\vec{R}$}}=(R,\lambda,\phi)$, the potential due to a tide-
generating secondary of mass $M^{*}_{sec}\;$, located at
$\,{\mbox{{\boldmath$\vec{r}$}}}^{~{}*}=(r^{*},\,\lambda^{*},\,\phi^{*})\,$
with $\,r^{*}\geq R\,$, can be expressed through the Legendre polynomials
$\,P_{\it l}(\cos\gamma)\,$ or the Legendre functions
$\,P_{{\it{l}}m}(\sin\phi)\,$:
$\displaystyle
W(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{~{}*})$
$\displaystyle=$
$\displaystyle\sum_{{\it{l}}=2}^{\infty}~{}W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,~{}\mbox{{\boldmath$\vec{r}$}}^{~{}*})~{}=~{}-~{}\frac{G\;M^{*}_{sec}}{r^{\,*}}~{}\sum_{{\it{l}}=2}^{\infty}\,\left(\,\frac{R}{r^{~{}*}}\,\right)^{\textstyle{{}^{\it{l}}}}\,P_{\it{l}}(\cos\gamma)~{}~{}~{}~{}$
(47) $\displaystyle=$
$\displaystyle-\,\frac{G~{}M^{*}_{sec}}{r^{\,*}}\sum_{{\it{l}}=2}^{\infty}\left(\frac{R}{r^{~{}*}}\right)^{\textstyle{{}^{\it{l}}}}\sum_{m=0}^{\it
l}\frac{({\it l}-m)!}{({\it
l}+m)!}(2-\delta_{0m})P_{{\it{l}}m}(\sin\phi)P_{{\it{l}}m}(\sin\phi^{*})~{}\cos
m(\lambda-\lambda^{*})~{}~{},\,\quad~{}~{}\quad$
$G\,$ being Newton’s gravitational constant, and $\gamma\,$ being the angle
between the vectors $\,{\mbox{{\boldmath$\vec{r}$}}}^{\;*}\,$ and $\vec{R}$
originating from the primary’s centre. The latitudes $\phi,\,\phi^{*}$ are
reckoned from the primary’s equator, while the longitudes
$\lambda,\,\lambda^{*}$ are reckoned from a fixed meridian on the primary.
The $\,{\it{l}}^{th}$ spherical harmonic
$\,U_{l}(\mbox{{\boldmath$\vec{r}$}})\,$ of the resulting change of the
primary’s potential at an exterior point $\vec{r}$ is connected to the
$\,{\it{l}}^{th}$ spherical harmonic
$\,W_{l}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*})\,$
of the perturbing exterior potential via $~{}U_{\it
l}(\mbox{{\boldmath$\vec{r}$}})=\left({R}/{r}\right)^{{\it l}+1}{k}_{\it
l}\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*})~{}$,
so the total change in the exterior potential of the primary becomes:
$\displaystyle U(\mbox{{\boldmath$\vec{r}$}})~{}=~{}\sum_{{\it
l}=2}^{\infty}~{}U_{\it{l}}(\mbox{{\boldmath$\vec{r}$}})~{}=~{}\sum_{{\it
l}=2}^{\infty}\,\left(\,\frac{R}{r}\,\right)^{{\it
l}+1}k_{l}\;\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*})~{}~{}~{}.~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(48)
While in (47) $\vec{R}$ could lie either outside or on the surface of the
primary, in (48) it would be both convenient and conventional to make
$\vec{R}$ a surface point. Both in (47) and (48), the vector $\vec{r}$ denotes
an exterior point located above the surface point $\vec{R}$ at a radius
$\,r\,\geq\,R\,$ (with the same latitude and longitude), while
$\,\mbox{{\boldmath$\vec{r}$}}^{\;*}\,$ signifies the position of the tide-
raising secondary. The quantities $\,k_{l}\,$ are the static Love numbers.
Under dynamical stressing, the Love numbers turn into operators:
$\displaystyle U_{\it
l}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\,\frac{R}{r}\,\right)^{{\it
l}+1}\hat{k}_{\it
l}(t)\;W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})~{}~{}~{},$
(49)
where integration over the semi-interval
$\,t\,^{\prime}\in\left(\right.-\infty,\,t\,\left.\right]\,$ is implied:
$\displaystyle U_{\it l}(\mbox{{\boldmath$\vec{r}$}},\,t)$ $\displaystyle=$
$\displaystyle\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{t\,^{\prime}=-\infty}^{t\,^{\prime}=t}k_{\it
l}(t-t\,^{\prime})\stackrel{{\scriptstyle\bf\centerdot}}{{W}}_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})\,dt\,^{\prime}~{}~{}~{}~{}$
(50a) $\displaystyle=$ $\displaystyle\left(\frac{R}{r}\right)^{{\it
l}+1}\,\left[k_{l}(0)W_{l}(t)\,-\,k_{l}(\infty)W_{l}(-\infty)\right]\,+\,\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{-\infty}^{t}{\bf\dot{\it{k}}}_{\textstyle{{}_{l}}}(t-t\,^{\prime})\,~{}W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})\,dt\,^{\prime}~{}~{}.\quad\quad\quad$
(50b) Like in the compliance operator (1 \- 2), here we too obtain the
boundary terms: one corresponding to the instantaneous elastic reaction,
$~{}k_{l}(0)W(t)~{}$, another caused by the perturbation in the infinite past,
$~{}-k_{l}(\infty)W(-\infty)~{}$. The latter term can be dropped by setting
$\,W(-\infty)\,$ to zero, while the former term may be included into the kernel
in the same manner as in (3 \- 5):
$\displaystyle\left(\frac{R}{r}\right)^{{\it
l}+1}\,k_{l}(0)W_{l}(t)\,+\,\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{-\infty}^{t}{\bf\dot{\it{k}}}_{\textstyle{{}_{l}}}(t-t\,^{\prime})\,~{}W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})\,dt\,^{\prime}\quad\quad\quad\quad$
$\displaystyle=\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{-\infty}^{t}\frac{d}{dt}\left[~{}{k}_{\it
l}(t\,-\,t\,^{\prime})~{}-~{}{k}_{\it l}(0)~{}+~{}{k}_{\it
l}(0)\Theta(t\,-\,t\,^{\prime})~{}\right]~{}W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}},\mbox{{\boldmath$\vec{r}$}}^{\;*},t\,^{\prime})\,dt\,^{\prime}\quad.\quad\quad\quad\quad$
(50c)
All in all, neglecting the unphysical term with $\,W_{l}(-\infty)~{}$, and
inserting the elastic term into the Love number not as $\,k_{l}(0)\,$ but as
$\,{k}_{l}(0)\,\Theta(t-t\,^{\prime})\,$, we arrive at
$\displaystyle U_{\it
l}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{-\infty}^{t}{\bf\dot{\it{k}}}_{\textstyle{{}_{l}}}(t-t\,^{\prime})~{}W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\;t\,^{\prime})\,dt\,^{\prime}~{},$
(51)
with $\,{k}_{\it l}(t-t\,^{\prime})\,$ now incorporating the elastic reaction
as $\,{k}_{l}(0)\,\Theta(t-t\,^{\prime})\,$ instead of $\,{k}_{l}(0)\,$. For a
perfectly elastic primary, the elastic reaction would be the only term present
in the expression for $\,{k}_{\it l}(t-t\,^{\prime})\,$. Then the time
derivative of $\,{k}_{l}\,$ would be
$\,{\bf{\dot{\it{k}}}}_{\textstyle{{}_{\it{l}}}}(t-t\,^{\prime})\,=\,k_{\it
l}\,\delta(t-t\,^{\prime})\,$, with $\,k_{\it l}\,\equiv\,k_{\it l}(0)\,$
being the static Love number, and (51) would reduce to $~{}U_{\it
l}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\frac{\textstyle R}{\textstyle
r}\right)^{{\it
l}+1}k_{l}\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\;t)~{}$,
like in the static case.
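The Love operator (51) is straightforward to discretise numerically. The sketch below is illustrative only: the exponential kernel and all numbers are placeholders, not a rheology advocated in the text.

```python
import numpy as np

def induced_harmonic(t_grid, W_l, k_dot_l, R_over_r, l=2):
    """Discretised form of (51): U_l(t) = (R/r)^(l+1) * integral of k_dot_l(t-t') W_l(t') dt'.
    The perturbation W_l is taken to vanish before t_grid[0]."""
    dt = t_grid[1] - t_grid[0]
    U = np.zeros_like(W_l)
    for i, t in enumerate(t_grid):
        tau = t - t_grid[: i + 1]                                  # tau = t - t' >= 0
        U[i] = R_over_r**(l + 1) * np.trapz(k_dot_l(tau) * W_l[: i + 1], dx=dt)
    return U

# Toy example: an exponentially fading Love function and a sinusoidal forcing.
tau_r, k_static = 1.0e4, 0.3                                       # placeholder values
k_dot = lambda tau: (k_static / tau_r) * np.exp(-tau / tau_r)
t = np.linspace(0.0, 1.0e5, 2000)
U = induced_harmonic(t, np.cos(1.0e-4 * t), k_dot, R_over_r=0.5)
```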
Similarly to (11), the complex Love numbers are defined as the Fourier images
of
$\,\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)~{}$:
$\displaystyle\int_{0}^{\infty}\bar{k}_{\textstyle{{}_{l}}}(\chi)e^{{\it
i}\chi\tau}d\chi\;=\;\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)~{}~{}~{},$
(52)
overdot denoting $\,d/d\tau\,$. Following Churkin (1998), the time derivatives
$\,\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{\,l}}}(t)\,$
can be named _Love functions_.121212 Churkin (1998) used functions, which he
denoted $\,k_{\it l}(t)\,$ and which were, due to a difference in notations,
the same as our
$\,\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)\,$.
Inversion of (52) renders:
$\displaystyle\bar{k}_{\textstyle{{}_{l}}}(\chi)~{}=~{}\int_{0}^{\infty}{\bf\dot{\mbox{\it{k}}}}_{\textstyle{{}_{l}}}(\tau)\;\,e^{-{\it
i}\chi\tau}\,d\tau~{}=~{}k_{\textstyle{{}_{l}}}(0)\;+\;{\it
i}~{}\chi~{}\int_{0}^{\infty}\left[\,k_{\textstyle{{}_{l}}}(\tau)\,-\,k_{\textstyle{{}_{l}}}(0)\,\Theta(\tau)\,\right]\;e^{-{\it
i}\chi\tau}\,d\tau~{}~{}~{},~{}~{}~{}~{}$ (53)
where integration from $\;{0}\,$ is sufficient, as the future disturbance
contributes nothing to the present distortion, wherefore $\,k_{\it l}(\tau)\,$
vanishes at $\,\tau<0\,$. Recall that the time $\,\tau\,$ denotes the
difference $t-t\,^{\prime}$ and thus is reckoned from the present moment $t$
backward into the past.
In the frequency domain, (50) will take the shape of
$\displaystyle\bar{U}_{\textstyle{{}_{l}}}(\chi)\;=\;\left(\,\frac{R}{r}\,\right)^{l+1}\bar{k}_{\textstyle{{}_{l}}}(\chi)\;\,\bar{W}_{\textstyle{{}_{l}}}(\chi)\;\;\;,$
(54)
$\chi\,$ being the frequency, while $\,\bar{U}_{\textstyle{{}_{l}}}(\chi)\,$
and $\,\bar{W}_{\textstyle{{}_{l}}}(\chi)\,$ being the Fourier or Laplace
components of the potentials $\,{U}_{\textstyle{{}_{l}}}(t)\,$ and
$\,{W}_{\textstyle{{}_{l}}}(t)\,$. The frequency-dependencies
$\,\bar{k}_{\textstyle{{}_{l}}}(\chi)\,$ should be derived from the expression
for $\,\bar{J}(\chi)\,$ or $\,\bar{\mu}(\chi)=1/\bar{J}(\chi)\,$. These
expressions follow from the rheological model of the medium.
Rigorously speaking, we ought to assume in expressions (52 \- 54) that the
spectral components are functions of the tidal mode $\,\omega\,$ and not of
the forcing frequency $\,\chi\,$. However, as explained in the end of Section
4, employment of the positively defined forcing frequencies is legitimate,
insofar as we do not forget to attach the sign multipliers (46) to the terms
of the Darwin-Kaula expansion for the tidal torque. Therefore here and
hereafter we shall expand over $\,\chi\,$, with the said caveat kept in mind.
### 5.2 Complex Love numbers as functions of the complex compliance. The
elastic-viscoelastic analogy
The dependence of the static Love numbers on the static rigidity modulus
$\,\mu(\infty)\,$ looks as
$\displaystyle k^{\textstyle{{}^{(static)}}}_{\it l}\,=\;\frac{3}{2\,({\it
l}\,-\,1)}\;\,\frac{1}{1\;+\;A^{\textstyle{{}^{(static)}}}_{\it l}}~{}~{}~{},$
(55)
where
$\displaystyle A^{\textstyle{{}^{(static)}}}_{\it
l}\,\equiv~{}\frac{\textstyle{(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)}}{\textstyle{{\it{l}}\,\mbox{g}\,\rho\,R}}~{}\,\mu(\infty)~{}=\;\frac{\textstyle{3\;(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)}}{\textstyle{4\;{\it{l}}\,\pi\,G\,\rho^{2}\,R^{2}}}~{}\,\mu(\infty)~{}=~{}\frac{\textstyle{3\;(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)}}{\textstyle{4\;{\it{l}}\,\pi\,G\,\rho^{2}\,R^{2}~{}\,J(\infty)}~{}}~{}\,~{}~{}~{},\quad~{}~{}$
(56)
with $\,\rho\,$, g, and $R$ being the density, surface gravity, and radius of
the body, and $G$ being the Newton gravitational constant. The static rigidity
modulus and its inverse, the static compliance, are denoted here with
$\,\mu(\infty)\,$ and $\,J(\infty)\,$, correspondingly. These notations imply
that we identify static with relaxed.
Specifically, the static quadrupole Love number will read:
$\displaystyle
k^{\textstyle{{}^{(static)}}}_{2}\,=\;\frac{3}{2}\;\,\frac{1}{1\;+\;A^{\textstyle{{}^{(static)}}}_{2}}~{}~{}~{},$
(57)
where the quantity
$\displaystyle A^{\textstyle{{}^{(static)}}}_{2}=\,\frac{\textstyle
57}{\textstyle 8\,\pi}~{}\frac{\textstyle\mu(\infty)}{\textstyle
G\,\rho^{2}\,R^{2}}$ (58)
is sometimes referred to as $\,\tilde{\mu}\,$. Clearly,
$\,A^{\textstyle{{}^{(static)}}}_{2}\,$ in (57), as well as
$\,A^{\textstyle{{}^{(static)}}}_{l}\,$ in (55), is a dimensionless measure of
strength-dominated versus gravity-dominated behaviour.
It is not immediately clear whether the same expression interconnects also
$\,\bar{k}_{\it l}(\chi)\,$ with $\,\bar{\mu}(\chi)\,$. Fortunately, though, a
wonderful theorem called elastic-viscoelastic analogy, also known as the
correspondence principle, ensures that the viscoelastic operational moduli
$\,\bar{\mu}(\chi)\,$ or $\,\bar{J}(\chi)\,$ obey the same algebraic relations
as the elastic parameters $\,\mu\,$ or $\,J\,$ (see, e.g., Efroimsky 2012 and
references therein). For this reason, the Fourier or Laplace images of the
viscoelastic equation of motion 131313 In the equation of motion, we should
neglect the acceleration term and the nonconservative inertial forces. Both
neglects are justified at realistic frequencies (for details see the Appendix
to Efroimsky 2012). and of the constitutive equation look as their static
counterparts, except that the stress, strain, and potentials get replaced with
their Fourier or Laplace images, while $\,k_{l}\,$, $\,\mu\,$, and $\,J\,$ get
replaced with the Fourier or Laplace images of
$\,\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(t-t\,^{\prime})~{}$,
$\,\stackrel{{\scriptstyle\,\bf\centerdot}}{{\mu}}(t-t\,^{\prime})~{}$, and
$\,\stackrel{{\scriptstyle~{}\bf\centerdot}}{{J}}(t-t\,^{\prime})~{}$. For
example, the constitutive equation will look like:
$~{}\bar{\sigma}_{\textstyle{{}_{\gamma\nu}}}~{}=~{}2~{}\bar{\mu}~{}\bar{u}_{\textstyle{{}_{\gamma\nu}}}~{}$.
Therefore the solution to the problem will retain the mathematical form of
$\,\bar{U}_{\it l}=\bar{k}_{\it l}\,\bar{W}_{l}\,$, with $\,\bar{k}_{\it l}\,$
keeping the same functional dependence on $\,\rho\,$, $\,R\,$, and
$\,\bar{\mu}\,$ (or $\,\bar{J}\,$) as in (56), except that now $\,\mu\,$ and
$\,{J}\,$ are equipped with overbars:
$\displaystyle\bar{k}_{\it l}(\chi)$ $\displaystyle=$
$\displaystyle\frac{3}{2\,({\it l}\,-\,1)}\;\,\frac{\textstyle 1}{\textstyle
1\;+\;A_{\it l}\;\bar{\mu}(\chi)/\mu}$ (59a) $\displaystyle=$
$\displaystyle\frac{3}{2\,({\it l}\,-\,1)}\;\,\frac{\textstyle 1}{\textstyle
1\;+\;A_{\it l}\;J/\bar{J}(\chi)}~{}=~{}\frac{3}{2\,({\it
l}\,-\,1)}\;\,\frac{\textstyle\bar{J}(\chi)}{\textstyle\bar{J}(\chi)\;+\;A_{\it
l}\;J}~{}~{}~{},~{}\quad~{}$ (59b)
where
$\displaystyle A_{\it
l}\,\equiv~{}\frac{\textstyle{(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)\,\mu}}{\textstyle{{\it{l}}\,\mbox{g}\,\rho\,R}}\;=\;\frac{\textstyle{3\;(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)\,\mu}}{\textstyle{4\;{\it{l}}\,\pi\,G\,\rho^{2}\,R^{2}}}~{}=~{}\frac{\textstyle{3~{}(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)~{}~{}J^{-1}}}{\textstyle{4\;{\it{l}}\,\pi\,G\,\rho^{2}\,R^{2}}}~{}~{}~{},~{}~{}~{}$
(60)
Although expression (60) for factors $\,A_{l}\,$ is very similar to expression
(56) for their static counterparts, an important difference between (56) and
(60) should be pointed out. While in (56) we had the static (relaxed) rigidity
and compliance, $\,\mu(\infty)\,$ and $\,J(\infty)=1/\mu(\infty)\,$, in (60)
the letters $\,\mu\,$ and $\,J\,$ may stand for any benchmark values
satisfying $\,J=1/\mu\,$. This freedom stems from the fact that the products
$\,A_{l}\,J\,$ entering (59b) bear no dependence upon $\,J\,$ or $\,\mu\,$.
The second term in the denominator of (59a) contains $\,\bar{\mu}\,$. For
convenience, we multiply and then divide $\,\bar{\mu}(\chi)\,$ by some
$\,\mu\,$, and make the multiplier $\,\mu\,$ a part of $\,A_{l}\,$ as in (60).
This makes it easier for us to compare (60) with its static predecessor (56).
However, the constant $\,\mu\,$ in equations (59) and (60) is essentially
arbitrary, and is not obliged to coincide with, say, unrelaxed or relaxed
rigidity. Accordingly, $\,J\,=\,1/\mu\,$ is not obliged to be the unrelaxed or
relaxed compliance.
The above caveat is important, because in certain rheological models some of
the unrelaxed or relaxed moduli may be zero or infinite. This will happen, for
example, if we start with the Maxwell or Kelvin-Voigt body and perform a
transition to a purely viscous medium. Fortunately, in realistic rheologies
such things do not happen. Hence it will be convenient (and possible) to
identify the $\,J\,$ from (60) with the unrelaxed compliance $\,J=J(0)\,$
emerging in the rheological model (33). Accordingly, the rigidity
$\,\mu=1/J\,$ from (60) will be identified with the unrelaxed rigidity
$\,\mu(0)=1/J(0)\,$. This convention will play a crucial role down the road,
when we derive formula (63).
Writing the $~{}l\,$th complex Love number as
$\displaystyle\bar{k}_{\it{l}}(\chi)\;=\;{\cal{R}}{\it{e}}\left[\bar{k}_{\it{l}}(\chi)\right]\;+\;{\it
i}\;{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]\;=\;|\bar{k}_{\it{l}}(\chi)|\;e^{\textstyle{{}^{-{\it
i}\epsilon_{\it l}(\chi)}}}$ (61)
we express the phase lag $\,\epsilon_{\it l}(\chi)\,$ as:
$\displaystyle|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon_{\it
l}(\chi)\;=\;-\;{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]\;\;\;.$
(62)
The importance of the products $\;|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon_{\it
l}(\chi)\;$ lies in the fact that they show up in the terms of the Darwin-
Kaula expansion of the tidal potential. As a result, it is these products, and
not $\;k_{\it l}/Q\;$ as some think, which emerge in the expansions for tidal
forces and torques, and for the dissipation rate.
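To make the chain $\,\bar{J}(\chi)\rightarrow\bar{k}_{l}(\chi)\rightarrow|\bar{k}_{l}|\,\sin\epsilon_{l}\,$ concrete, here is a minimal numerical sketch. For illustration only, a Maxwell complex compliance $\,\bar{J}(\chi)=J\,[\,1-{\it i}/(\chi\,\tau_{{}_{M}})\,]\,$ is assumed; any other linear rheology can be substituted for it, and all numbers are placeholders.

```python
import numpy as np

def k_bar_l(J_bar, A_l, J, l=2):
    """Complex Love number (59b); J is the benchmark (unrelaxed) compliance."""
    return 1.5 / (l - 1) * J_bar / (J_bar + A_l * J)

def tidal_factor(J_bar, A_l, J, l=2):
    """|k_l(chi)| sin eps_l(chi) = -Im[ k_bar_l(chi) ], cf. (62) and (64)."""
    return -np.imag(k_bar_l(J_bar, A_l, J, l))

chi, tau_M, J, A_2 = 1.0e-7, 1.0e10, 1.0, 2.2    # illustrative values (A_2 is Earth-like)
J_bar = J * (1.0 - 1j / (chi * tau_M))           # Maxwell compliance (assumption)
print(tidal_factor(J_bar, A_2, J))               # ~ 3e-4 for these numbers
```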
In an attempt to preserve the popular notation $\;k_{\it l}/Q\;$, one may
define the inverse quality factor as the sine of the lag – see the discussion
in subsection 2.3. In this case, though, one would have to employ the tidal
lag $\,\epsilon_{l}\,$, and not the lag $\,\delta\,$ in the material (which we
call the “seismic” lag). Accordingly, one will have to write not $\;k_{\it
l}/Q\;$ but $\;k_{\it l}/Q_{l}\;$, where
$\,1/Q_{l}\,\equiv\,\sin\epsilon_{l}\,$.
Importantly, the functional form of the frequency-dependence
$\,\sin\epsilon_{l}(\chi)\,$ is different for different $\,l\,$. Thus an
attempt of naming $\,\sin\epsilon_{l}\,$ as $\,1/Q\,$ would give birth to a
whole array of different functions $\,Q_{l}(\chi)\,$. For a triaxial body,
things will become even more complicated – see footnote 11. To conclude, it is
not advisable to denote $\,\sin\epsilon_{l}\,$ with $\,1/Q\,$.
It should be mentioned that the Darwin-Kaula theory of tides is equally
applicable to tides in despinning and librating bodies. In all cases, the
phase angle $\,\epsilon_{l}\,=\,\epsilon_{l}(\chi_{\textstyle{{}_{lmpq}}})\,$
parameterises the lag of the appropriate component of the bulge, while the
absolute value of the complex Love number
$\,|\bar{k}_{l}|\,=\,|\,\bar{k}_{l}(\chi_{\textstyle{{}_{lmpq}}})\,|\,$
determines the magnitude of this component. The overall bulge being a
superposition of these components, its height may vary in time.
### 5.3 The tangent of the tidal lag
In the denominator of (59a) the term $\,1\,$ emerges due to self-gravitation,
while $\,A_{\it l}\,J/\bar{J}(\chi)\,=\,A_{\it l}\,|\bar{\mu}(\chi)|/\mu\,$
describes how the bulk properties of the medium contribute to deformation and
damping. So for a vanishing $\,A_{\it l}\,J/|\bar{J}(\chi)|\,$ we end up with
the hydrostatic Love numbers $\,k_{l}=\,\frac{\textstyle
3}{\textstyle{2\,({\it l}\,-\,1)}}\,$, while the lag becomes nil, as will be
seen shortly from (64). On the contrary, for very large $\,A_{\it
l}\,J/\bar{J}(\chi)\,$, we expect to obtain the Love numbers and lags ignorant
of the shape of the body.
To see how this works out, combine formulae (35) and (59b), to arrive at
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,=~{}-~{}\frac{{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]}{{\cal{R}}{\it{e}}\left[\bar{k}_{\it{l}}(\chi)\right]}~{}=~{}-~{}\frac{\,A_{l}~{}J~{}\,{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]}{\,\left\\{\,{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\,\right\\}^{2}\,+~{}\left\\{\,{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]\,\right\\}^{2}\,+~{}A_{l}~{}J~{}{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\,}~{}=\quad\quad\quad\quad$
(63a)
$\displaystyle\frac{A_{l}~{}\left[\,\zeta~{}z^{-1}\,+~{}z^{-\alpha}~{}\sin\left(\,\frac{\textstyle\alpha\,\pi}{\textstyle
2}\,\right)~{}\Gamma(1+\alpha)\,\right]}{\left[1+z^{-\alpha}\cos\left(\frac{\textstyle\alpha\pi}{\textstyle
2}\right)\Gamma(1+\alpha)\right]^{2}+\left[\zeta
z^{-1}+z^{-\alpha}\sin\left(\frac{\textstyle\alpha\pi}{\textstyle
2}\right)\Gamma(1+\alpha)\right]^{2}+A_{l}\left[1+z^{-\alpha}\cos\left(\frac{\textstyle\alpha\pi}{\textstyle
2}\right)\Gamma(1+\alpha)\right]}\quad,\quad$ (63b)
$\,z\,$ being the dimensionless frequency defined by (39).
Comparing this expression with expression (38), over different frequency
bands, we shall be able to explore how the tidal lag
$\,\epsilon_{\textstyle{{}_{l}}}\,$ relates to the lag in the material
$\,\delta\,$ (the “seismic lag”).
While expression (63b) is written for the Andrade model, the preceding formula
(63a) is general and works for an arbitrary linear rheology.
### 5.4 The negative imaginary part of the complex Love number
As we already mentioned above, rheology influences the tidal behaviour of a
planet through the following sequence of steps. A rheological model postulates
the form of $\,\bar{J}(\chi)\,$. This function, in its turn, determines the
form of $\,\bar{k}_{\it{l}}(\chi)\,$, while the latter defines the frequency
dependence of the products $\,|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon(\chi)\,$
which enter the tidal expansions.
To implement this concatenation, one has to express
$\,|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon(\chi)\,$ via $\,\bar{J}(\chi)\,$.
This can be done by combining (59) with (62). It renders:
$\displaystyle|\bar{k}_{\it l}(\chi)|\;\sin\epsilon_{\it
l}(\chi)\;=\;-\;{\cal{I}}{\it{m}}\left[\bar{k}_{\it
l}(\chi)\right]\;=\;\frac{3}{2\,({\it
l}\,-\,1)}\;\,\frac{-\;A_{l}\;J\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]}{\left(\;{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\;+\;A_{l}\;J\;\right)^{2}\;+\;\left(\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]\;\right)^{2}}~{}~{}~{},~{}~{}~{}~{}~{}$
(64)
a quantity often mis-denoted141414 One can write the left-hand side of (64)
with $\,k_{l}/Q\,$ only if the quality factor is defined through (26) and
endowed with the subscript $\,l\,$. as $\,k_{l}/Q\,$. Together, formulae (35)
and (64), give us the frequency dependencies for the factors $\,|\bar{k}_{\it
l}(\chi)|~{}\sin\epsilon_{\it l}(\chi)\,$ entering the theory of bodily tides.
For detailed derivation of those dependencies, see Efroimsky (2012).
As explained in Section 4, employment of expressions (62 \- 64) needs some
care. Since both $\,\bar{U}\,$ and $\,\bar{k}_{l}\,$ are in fact functions not
of the forcing frequency $\,\chi\,$ but of the tidal mode $\,\omega\,$,
formulae (62 \- 64) should be equipped with multipliers
sgn$\,\omega_{\textstyle{{}_{lmpq}}}\,$, when plugged into the expression for
the $lmpq$ component of the tidal torque. With this important caveat in mind,
and with the subscripts $\,lmpq\,$ reinstalled, the complete expression will
read:
$\displaystyle|\bar{k}_{\it
l}(\chi_{\textstyle{{}_{lmpq}}})|\;\sin\epsilon_{\it
l}(\chi_{\textstyle{{}_{lmpq}}})\,=\,\frac{3}{2\,({\it
l}-1)}\;\,\frac{-\;A_{l}\;J\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi_{\textstyle{{}_{lmpq}}})\right]}{\left(\,{\cal{R}}{\it{e}}\left[\bar{J}(\chi_{\textstyle{{}_{lmpq}}})\right]\,+\,A_{l}\,J\,\right)^{2}+\,\left(\,{\cal{I}}{\it{m}}\left[\bar{J}(\chi_{\textstyle{{}_{lmpq}}})\right]\,\right)^{2}}~{}~{}\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}~{},~{}~{}~{}~{}$
(65)
a general formula valid for an arbitrary linear rheological model.
To make use of this and other formulae, it would be instructive to estimate
the values of $\,A_{l}\,$ for terrestrial objects of different size. In Table
1, we present estimates of $\,A_{2}\,$ for Iapetus, Mars, solid Earth, a
hypothetical solid superearth having a density and rigidity of the solid Earth
and a radius equal to 2 terrestrial radii
($\,R=2R_{\textstyle{{}_{\bigoplus}}}\,$), and also a hypothetical superearth twice as large ($\,R=4R_{\textstyle{{}_{\bigoplus}}}\,$), of the same rheology.
Table 1: Estimates of $\,A^{\textstyle{{}^{(static)}}}_{2}\,$ for rigid celestial bodies. The values of $\,A^{\textstyle{{}^{(static)}}}_{2}\,$ are calculated using equation (58) and are rounded to the second figure.

| | radius $\,R\,$ | mean density $\,\rho\,$ | mean relaxed shear rigidity $\,\mu(\infty)\,$ | the resulting estimate for $\,A_{2}\,$ |
|---|---|---|---|---|
| Iapetus | $7.4\times 10^{5}$ m | $1.1\times 10^{3}$ kg/m${}^{3}$ | $4.0\times 10^{9}\,$ Pa | $200$ |
| Mars | $3.4\times 10^{6}$ m | $3.9\times 10^{3}$ kg/m${}^{3}$ | $1.0\times 10^{11}\,$ Pa | $19$ |
| The Earth | $6.4\times 10^{6}$ m | $5.5\times 10^{3}$ kg/m${}^{3}$ | $0.8\times 10^{11}\,$ Pa | $2.2$ |
| A hypothetical superearth with $\,R\,=\,2\,R_{\textstyle{{}_{\bigoplus}}}\,$ and the same rheology as the Earth | $1.3\times 10^{7}$ m | $5.5\times 10^{3}$ kg/m${}^{3}$ | $0.8\times 10^{11}\,$ Pa | $0.55$ |
| A hypothetical superearth with $\,R\,=\,4\,R_{\textstyle{{}_{\bigoplus}}}\,$ and the same rheology as the Earth | $2.6\times 10^{7}$ m | $5.5\times 10^{3}$ kg/m${}^{3}$ | $0.8\times 10^{11}\,$ Pa | $0.14$ |
Given the uncertainty of the interior structure and the roughness of our estimates, all quantities in the table have been rounded to the second significant figure. The values of
Iapetus’ and Mars’ rigidity were borrowed from Castillo-Rogez et al. (2011)
and Johnson et al. (2000), correspondingly.
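The first three entries of Table 1 are easy to reproduce from (58); a minimal sketch (the inputs are the values listed in the table):

```python
import math

G = 6.674e-11                                    # gravitational constant [m^3 kg^-1 s^-2]

def A2_static(mu_relaxed, rho, R):
    """Equation (58): A_2 = 57 mu(inf) / (8 pi G rho^2 R^2)."""
    return 57.0 * mu_relaxed / (8.0 * math.pi * G * rho**2 * R**2)

bodies = {                                       # (mu(inf) [Pa], rho [kg/m^3], R [m]) from Table 1
    "Iapetus": (4.0e9,  1.1e3, 7.4e5),
    "Mars":    (1.0e11, 3.9e3, 3.4e6),
    "Earth":   (0.8e11, 5.5e3, 6.4e6),
}
for name, (mu, rho, R) in bodies.items():
    print(name, A2_static(mu, rho, R))           # ~ 2e2, 19, and 2.2, as in Table 1
```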
In Figure 2, we compare the behaviour of
$\,k_{2}\,\sin\epsilon_{2}=|\bar{k}_{2}(\chi)|\,\sin\epsilon_{2}(\chi)\,$ for
the values of $\,A_{2}\,$ appropriate to Iapetus, Mars, solid Earth, and
hypothetical superearths with $~{}R\,=\,2\,R_{\textstyle{{}_{\bigoplus}}}~{}$
and $~{}R\,=\,4\,R_{\textstyle{{}_{\bigoplus}}}~{}$ , as given in Table 1.
Self-gravitation pulls the tides down, mitigating their magnitude and the
value of the tidal torque. Hence, the heavier the body the lower the
appropriate curve. This rule is observed well at low frequencies (the
viscosity-dominated range). In the intermediate zone and in the high-frequency
band (where inelastic creep dominates friction), this rule starts working only
for bodies larger than about twice the Earth size. If we fix the tidal
frequency at a sufficiently high value, we shall see that the increase of the
size from that of Iapetus to that of Mars and further to that of the Earth
results in an increase of the intensity of the tidal interaction. For a
$~{}R\,=\,2\,R_{\textstyle{{}_{\bigoplus}}}~{}$ superearth, the tidal factor
$\,k_{2}\,\sin\epsilon_{2}\,$ is about the same as that for the solid Earth,
and begins to decrease for larger radii (so the green curve for the larger
superearth is located fully below the cyan curve for a smaller superearth).
Figure 2: Negative imaginary part of the complex quadrupole Love number,
$\,k_{2}\,\sin\epsilon_{2}=\,-\,{\cal{I}}{\it{m}}\left[\bar{k}_{2}(\chi)\right]\,$,
as a function of the tidal frequency $\chi\,$. The black, red, and blue curves
refer, correspondingly, to Iapetus, Mars, and the solid Earth. The cyan and
green curves refer to the two hypothetical superearths described in Table 1.
These superearths have the same rheology as the solid Earth, but have sizes
$~{}R\,=\,2\,R_{\textstyle{{}_{\bigoplus}}}~{}$ and
$~{}R\,=\,4\,R_{\textstyle{{}_{\bigoplus}}}~{}$. Each of these five objects is
modeled with a homogeneous near-spherical self-gravitating Andrade body with
$\,\alpha=0.2\,$ and $\,\tau_{{}_{A}}=\tau_{{}_{M}}=10^{10}\,$s. In the limit
of vanishing tidal frequency $\chi$, the factors $\,k_{2}\,\sin\epsilon_{2}\,$
approach zero, which is natural from the physical point of view. Indeed, an
$\,lmpq\,$ term in the expansion for tidal torque contains the factor
$~{}k_{l}(\chi_{{}_{lmpq}})\,\sin\epsilon_{l}(\chi_{{}_{lmpq}})\,$. On
crossing the $\,lmpq\,$ resonance, where the frequency $\chi_{{}_{lmpq}}$ goes
through zero, the factor
$\,k_{l}(\chi_{{}_{lmpq}})\,\sin\epsilon_{l}(\chi_{{}_{lmpq}})\,$ must vanish,
so that the $\,lmpq\,$ term of the torque could change its sign.
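Readers wishing to reproduce curves of the kind shown in Figure 2 may start from the following minimal sketch. The complex compliance written below is the Andrade form whose real and negative imaginary parts are exactly the bracketed combinations appearing in (38) and (63b), with $\,z=\chi\,\tau_{{}_{A}}\,$ and $\,\tau_{{}_{A}}=\zeta\,\tau_{{}_{M}}\,$; the parameter values ($\alpha=0.2$, $\tau_{{}_{A}}=\tau_{{}_{M}}=10^{10}\,$s) are those of the figure, everything else being illustrative.

```python
import numpy as np
from math import gamma, pi

def J_bar_andrade(chi, J, tau_M, zeta=1.0, alpha=0.2):
    """Andrade complex compliance whose Re and -Im parts are the combinations in (38), (63b);
    the dimensionless frequency is z = chi * tau_A, with tau_A = zeta * tau_M."""
    z = chi * zeta * tau_M
    creep = z**(-alpha) * gamma(1.0 + alpha)
    re = 1.0 + creep * np.cos(alpha * pi / 2.0)
    im = -(1.0 / (chi * tau_M) + creep * np.sin(alpha * pi / 2.0))
    return J * (re + 1j * im)

def k2_sin_eps2(chi, A_2, tau_M, zeta=1.0, alpha=0.2):
    """Quadrupole tidal factor |k_2| sin eps_2 = -Im[k_2], via (59b) and (64).
    The benchmark compliance J cancels out of the ratio and is set to unity."""
    J_bar = J_bar_andrade(chi, 1.0, tau_M, zeta, alpha)
    return -np.imag(1.5 * J_bar / (J_bar + A_2))

chi = np.logspace(-14, -4, 400)                          # tidal frequency [s^-1]
curves = {A_2: k2_sin_eps2(chi, A_2, tau_M=1.0e10)
          for A_2 in (200.0, 19.0, 2.2, 0.55, 0.14)}     # the five bodies of Table 1
```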
## 6 Tidal dissipation versus seismic dissipation, in the inelasticity-dominated band
In this section, we shall address only the higher-frequency band of the
spectrum, i.e., the range where inelasticity dominates viscosity and the
Andrade model is applicable safely. Mind though that the Andrade model can
embrace also the near-Maxwell behaviour, and thus can be applied to the low
frequencies, provided we “tune” the dimensionless parameter $\,\zeta\,$
appropriately – see subsection 3.4 above.
### 6.1 Response of a sample of material
At frequencies higher than some threshold value $\,\chi_{0}\,$, dissipation in
minerals is mainly due to inelasticity rather than to viscosity.151515 For the
solid Earth, this threshold is about 1 yr${}^{-1}$ (Karato & Spetzler 1990). Being
temperature sensitive, the threshold may assume different values for other
terrestrial planets. Also mind that the transition is not sharp and can extend
over a decade or more. Hence at these frequencies $\,\zeta\,$ should be of
order unity or smaller, as can be seen from (33b). This entails two
consequences. First, the condition $\,\chi\gg{\textstyle
1}/{(\textstyle\zeta\,\tau_{{}_{M}})}~{}$, i.e., $\,z\gg 1\,$ is obeyed
reliably, for which reason the first term dominates the denominator in (38).
Second, either the condition $\,z\,\gg\,1\,$ is stronger than
$\,z\,\gg\,\zeta^{\textstyle{{}^{\textstyle{\frac{1}{1-\alpha}}}}}\,$ or the
two conditions are about equivalent. Hence the inelastic term dominates the
numerator in (38): $\,~{}z^{-\alpha}\,\gg\,z^{-1}\,\zeta~{}$.
Altogether, over the said frequency range, (38) simplifies to:
$\displaystyle\tan\delta~{}\approx~{}(\chi\,\tau_{{}_{A}})^{\textstyle{{}^{-\alpha}}}\sin\left(\frac{\alpha\,\pi}{2}\right)~{}\Gamma(\alpha+1)~{}=~{}\left(\chi\,\zeta\,\tau_{{}_{M}}\right)^{\textstyle{{}^{-\alpha}}}\sin\left(\frac{\alpha\,\pi}{2}\right)~{}\Gamma(\alpha+1)~{}~{}~{}.~{}~{}\,~{}$
(66)
Clearly $\,\tan\delta\ll 1\,$, wherefore
$\,\tan\delta\approx\sin\delta\approx\delta\,$. For the seismic quality
factor, we then have:
$\displaystyle{}^{(seismic)}Q^{-1}~{}\approx~{}(\chi\,\tau_{{}_{A}})^{\textstyle{{}^{-\alpha}}}\sin\left(\frac{\alpha\,\pi}{2}\right)~{}\Gamma(\alpha+1)~{}~{}~{},$
(67)
no matter which of the three definitions (21 \- 23) we accept. Be mindful that here we use the term seismic broadly, applying it also to a sample in a
lab.
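In code, approximation (66 \- 67) is a one-liner (a sketch, with the same illustrative Andrade parameters as in the previous listings):

```python
from math import gamma, pi, sin

def Q_seismic_inv(chi, tau_A, alpha=0.2):
    """Inverse seismic quality factor (67), inelasticity-dominated band."""
    return (chi * tau_A)**(-alpha) * sin(alpha * pi / 2.0) * gamma(alpha + 1.0)

print(Q_seismic_inv(chi=1.0e-5, tau_A=1.0e10))   # ~ 0.03 for these illustrative values
```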
### 6.2 Tidal response of a homogeneous near-spherical body
Recall that defect unpinning stays effective at frequencies above some
threshold $\,\chi_{0}\,$, which is likely to be above or, at least, not much
lower than the inverse Maxwell time.161616 Dislocations may break away from
the pinning agents (impurities, nodes, or jogs), or the pinning agents
themselves may move along with dislocations. These two processes are called
“unpinning”, and they go easier at low frequencies, as the energy barriers
become lower (Karato & Spetzler 1990, section 5.2.3). E.g., for the solid
Earth, $\,\chi_{0}\sim 1\,$yr${}^{-1}\,$ while
$\,\tau_{{}_{M}}\,\sim\,500\,$yr. Over this frequency band, the free parameter
$\,\zeta\,$ may be of order unity or slightly less than that. (This parameter
grows as the frequencies become short of $\,\chi_{0}\,$.) Under these
circumstances, in equation (63) we have: $~{}\zeta z^{-1}\ll z^{-\alpha}\ll
1~{}$, whence equation (63) becomes:
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,\approx~{}\frac{A_{l}}{1\,+\,A_{l}}~{}z^{-\alpha}~{}\sin\left(\frac{\textstyle\alpha\pi}{\textstyle
2}\right)~{}\Gamma(1+\alpha)~{}~{}~{}.$ (68)
In combination with (66), this renders:
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}~{}=~{}\frac{A_{l}}{1\,+\,A_{l}}~{}\tan\delta~{}~{}~{}.$
(69)
Had we defined the quality factors as cotangents, like in (23), then we would
have to conclude from (69) that the tidal and seismic quality factors coincide
for small objects (with $\,A_{l}\gg 1\,$) and differ very considerably for
superearths (i.e., for $\,A_{l}\ll 1\,$). Specifically, the so-defined quality
factor $\,Q_{\textstyle{{}_{l}}}\,$ of a superearth would be larger than its
seismic counterpart $\,Q\,$ by a factor of about $\,A_{l}^{-1}\,$.
In reality, the quality factors should be used for illustrative purposes only,
because practical calculations involve the factor
$\,|\bar{k}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})\,$
rendered by (65). It is this factor which enters the $\,lmpq\,$ term of the
Fourier expansion of tides. Insertion of (36 \- 37) into (65) furnishes the
following expression valid in the inelasticity-dominated band:
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})\,\approx\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$
$\displaystyle\quad\quad\frac{3}{2\,(l-1)}\;\frac{A_{\textstyle{{}_{l}}}}{(A_{\textstyle{{}_{l}}}+\,1)^{2}}~{}\sin\left(\frac{\alpha\pi}{2}\right)~{}\Gamma(\alpha+1)~{}\,\zeta^{-\alpha}\,\left(\tau_{{}_{M}}~{}\chi_{\textstyle{{}_{lmpq}}}\right)^{-\alpha}\,\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}\,\quad,\quad\mbox{for}\quad\chi_{\textstyle{{}_{lmpq}}}\,\gg\,\chi_{{}_{HI}}~{}~{},\quad$
(70)
$\chi_{{}_{HI}}\,$ being the boundary between the high and intermediate
frequencies, i.e., between the inelasticity-dominated band and the
transitional zone. Expression (70) resembles the frequency dependency for
$\,|\bar{J}(\chi)|\,\sin\delta(\chi)\,=\,-\,{\cal I}{\it m}[\bar{J}(\chi)]~{}$
at high frequencies (see equation 36). In Figure 2, dependency (70)
corresponds to the slowly descending slope on the far right.
A detailed derivation of (70) from formulae (36 \- 37) and (65) is presented
in the Appendix to Efroimsky (2012). For terrestrial objects several times
smaller than the Earth (so $\,A_{\textstyle{{}_{l}}}\gg 1\,$), the threshold
turns out to be
$\displaystyle\chi_{{{}_{HI}}}\,=\,\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}~{}~{}~{}.$
(71)
For superearths (i.e., for $\,A_{\textstyle{{}_{l}}}\ll 1\,$), the threshold
becomes
$\displaystyle\chi_{{{}_{HI}}}\,=\,\tau_{{}_{A}}^{-1}\,=\,\tau_{{}_{M}}^{-1}\,\zeta^{-1}~{}~{}~{}.$
(72)
Near the borderline between the inelasticity-dominated band and the
transitional zone, the parameter $\,\zeta\,$ could be of order unity. It may
as well be lower than unity, though not much (hardly by an order of
magnitude), because too low a value of $\,\zeta\,$ would exclude viscosity
from the play completely. We however expect viscosity to be noticeable near
the transitional zone.
Finally, it should be reiterated that at frequencies lower than some
$\,\chi_{0}\,$ the defect-unpinning process becomes less effective, so
inelasticity becomes less effective than viscosity, and the free parameter
$\,\zeta\,$ begins to grow. Hence, if the above estimates for
$\,\chi_{{{}_{HI}}}\,$ turn out to be lower than $\,\chi_{0}\,$, we should set
$\,\chi_{{{}_{HI}}}\,=\,\chi_{0}\,$ “by hand”.
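The asymptote (70) and the thresholds (71 \- 72) translate directly into code. A minimal sketch (illustrative parameter values; the result may be compared against the exact factor from the Andrade listing above):

```python
from math import gamma, pi, sin

def k_l_sin_eps_l_highfreq(chi, A_l, tau_M, zeta=1.0, alpha=0.2, l=2):
    """High-frequency asymptote (70), valid for chi >> chi_HI."""
    return (1.5 / (l - 1) * A_l / (A_l + 1.0)**2
            * sin(alpha * pi / 2.0) * gamma(alpha + 1.0)
            * (zeta * tau_M * chi)**(-alpha))

def chi_HI(A_l, tau_M, zeta=1.0, alpha=0.2):
    """Boundary between the inelasticity-dominated band and the transitional zone:
    (71) for A_l >> 1, (72) for A_l << 1."""
    return zeta**(alpha / (1.0 - alpha)) / tau_M if A_l > 1.0 else 1.0 / (zeta * tau_M)

print(k_l_sin_eps_l_highfreq(chi=1.0e-5, A_l=2.2, tau_M=1.0e10))   # ~ 0.009
print(chi_HI(A_l=2.2, tau_M=1.0e10))                               # 1e-10 s^-1 for zeta = 1
```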
## 7 Tidal dissipation versus seismic dissipation, in the viscosity-dominated band
When frequency $\,\chi\,$ becomes short of some
$\,\chi_{\textstyle{{}_{0}}}\,$, the rate of defect-unpinning-caused inelastic
dissipation decreases and viscosity begins to take over inelasticity.
If we simply assume the free parameter $\,\zeta\,$ to be of order unity
everywhere, i.e., assume that the Maxwell and Andrade timescales are
everywhere comparable, then application of the Andrade model will set
$\,\chi_{\textstyle{{}_{0}}}\,$ to be of order $\,\tau^{-1}_{{}_{M}}\,$.
Inelasticity will dominate at frequencies above that threshold, while below it
the role of viscosity will be higher. This approach however would be
simplistic, because the actual location of the threshold should be derived
from microphysics and may turn out to differ from $\,\tau^{-1}_{{}_{M}}\,$
noticeably. For example, in the terrestrial mantle the transition takes place
at frequencies as high as 1 yr${}^{-1}$ (Karato & Spetzler 1990) and may be
spread over a decade or more into lower frequencies, as we shall see from
equation (73).
Another somewhat simplistic option would be to assume that $\,\zeta\sim 1\,$
at frequencies above $\,\chi_{\textstyle{{}_{0}}}\,$, and to set
$\,\zeta=\infty\,$ at the frequencies below $\,\chi_{\textstyle{{}_{0}}}\,$.
The latter would be equivalent to claiming that below this threshold the
mantle is described by the Maxwell model. In reality, here we are just
entering a transition zone, where $\,\zeta\,$ increases with the decrease of
the frequency. While it is clear that in the denominator of (38) the first
term dominates, the situation with the numerator is less certain. Only after
the condition
$\displaystyle\zeta\,\gg\,(\chi\,\tau_{{}_{M}})^{\textstyle{{}^{\textstyle\frac{1-\alpha}{\alpha}}}}\,\approx~{}(\chi\,\tau_{{}_{M}})^{4}$
(73)
is obeyed, the viscous term $\,1/(\chi\,\tau_{{}_{M}})\,$ becomes leading.
This way, although $\,\zeta\,$ begins to grow as the frequency decreases below
$\,\chi_{0}\,$, the frequency may need to decrease by another decade or more
before threshold (73) is reached.
### 7.1 Response of a sample of material
Accepting the approximation that the transition zone is narrow 171717 For a
broader transition zone, the rheology will approach that of Maxwell at lower
frequencies. This though will not influence our main conclusions. and that the
predominantly viscous regime is reached already at $\,\chi_{0}\,$ or shortly
below, we approximate the tangent of the lag with
$\displaystyle\tan\delta~{}\approx~{}(\chi\,\tau_{{}_{M}})^{-1}~{}~{}~{},$
(74)
whence
$\displaystyle\sin\delta~{}\approx~{}\left\\{\begin{array}[]{c}\quad(\chi\,\tau_{{}_{M}})^{-1}\quad~{}\mbox{for}\quad\tau_{{}_{M}}^{-1}\,\ll\,\chi\,\ll\,\chi_{\textstyle{{}_{0}}}~{}~{}~{},\\\
{}\hfil\\\ \quad\quad 1\quad\quad\,\quad\mbox{for}\quad
0\,\leq\,\chi\,\ll\,\tau_{{}_{M}}^{-1}~{}~{}~{}.\end{array}{}\right.$ (78)
### 7.2 Tidal response of a homogeneous near-spherical body
When viscosity dominates inelasticity, expression (63) gets reduced to the
following form:
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,\approx~{}\frac{A_{l}}{1\,+\,A_{l}\,+\,(\zeta\,z^{-1})^{\textstyle{{}^{2}\,}}}~{}\,\zeta\,z^{-1}~{}=~{}~{}\frac{A_{l}}{1\,+\,A_{l}\,+\,\left(\textstyle{\chi\,\tau_{{}_{M}}}\right)^{{{-\,2}}}\,}~{}\,\frac{1}{\chi\,\tau_{{}_{M}}}~{}~{}~{},$
(79)
comparison whereof with (74) renders:
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,\approx~{}\frac{A_{l}}{1\,+\,A_{l}\,+\,\left(\textstyle{\chi\,\tau_{{}_{M}}}\right)^{-2}\,}~{}\,\tan\delta~{}=~{}\frac{A_{l}}{1\,+\,A_{l}\,+\,\tan^{2}\delta\,}~{}\,\tan\delta~{}~{}~{}.$
(80)
Now two special cases should be considered separately.
#### 7.2.1 Small bodies and small terrestrial planets
As illustrated by Table 1, small bodies and small terrestrial planets have
$\,A_{l}\gg 1\,$. So formulae (79) and (80) take the form of
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,~{}\approx~{}\left\\{\begin{array}[]{c}\quad\quad\frac{\textstyle
1}{\textstyle\chi\,\tau_{{}_{M}}}\quad~{}\quad~{}\quad\quad\mbox{for}\quad\quad\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{l}+1}}\,\ll\,\chi\ll\chi_{\textstyle{{}_{0}}}~{}~{}~{},\\\
{}\hfil\\\ \quad\quad~{}\textstyle
A_{l}\,\chi\,\tau_{{}_{M}}~{}\quad\quad\quad\quad~{}\mbox{for}\quad\quad
0\,\leq\,\chi\ll\,\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{l}+1}}~{}\approx~{}\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{l}}}~{}~{}~{},{}\end{array}\right.$
(84)
and
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,~{}\approx~{}\left\\{\begin{array}[]{c}\frac{\textstyle
A_{l}}{\textstyle{1\,+\,A_{l}}}~{}\tan\delta~{}\approx~{}\tan\delta\quad\quad\quad\mbox{for}\quad\quad\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{l}+1}}\ll\chi\ll\chi_{\textstyle{{}_{0}}}~{}~{}~{},\\\
{}\hfil\\\ \quad\quad\quad\quad\quad\quad~{}\frac{\textstyle
A_{l}}{\,\textstyle\tan\delta\,}~{}\quad\quad\quad\quad\quad\mbox{for}\quad\quad
0\,\leq\,\chi\ll\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{l}+1}}~{}\approx~{}\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{l}}}~{}~{}~{}.{}\end{array}\right.$
(88)
Had we defined the quality factors as cotangents of
$\,\epsilon_{\textstyle{{}_{l}}}\,$ and $\,\delta\,$, we would be faced with a
situation that may at first glance appear embarrassing: in the zero-frequency
limit, the so-defined tidal $\,Q_{\textstyle{{}_{l}}}\,$ would become
inversely proportional to the so-defined seismic $\,Q\,$ factor. This would
however correspond well to an obvious physical fact: when the satellite
crosses the $\,lmpq\,$ commensurability, the $\,lmpq\,$ term of the average
tidal torque acting on a satellite must smoothly pass through nil, together
with the $\,lmpq\,$ tidal mode. (For example, the orbital average of the
principal tidal torque $\,lmpq\,=\,2200\,$ must vanish when the satellite
crosses the synchronous orbit.) For a more accurate explanation in terms of
the $\,|\bar{k}_{l}(\chi)|\,\sin\epsilon_{l}(\chi)\,$ factors see subsection
7.3 below.
#### 7.2.2 Superearths
For superearths, we have $\,A_{l}\ll 1\,$, so (80) becomes
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,~{}\approx~{}A_{l}\,\chi\,\tau_{{}_{M}}~{}=~{}\frac{\textstyle
A_{l}}{\,\textstyle\tan\delta\,}~{}\quad\quad\quad\quad\mbox{for}\quad\quad
0\leq\chi\ll\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{\textstyle{{}_{l}}}+1}}~{}\approx~{}\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}}~{}~{}.\,~{}~{}~{}$ (89)
Here we encounter the same apparent paradox: had we defined the quality
factors as cotangents of $\,\epsilon_{\textstyle{{}_{l}}}\,$ and $\,\delta\,$,
we would end up with a tidal $\,Q_{\textstyle{{}_{l}}}\,$ inversely
proportional to its seismic counterpart $\,Q\,$. A qualitative explanation to
this “paradox” is the same as in the subsection above, a more accurate
elucidation to be given shortly in subsection 7.3.
Another seemingly strange feature is that in this case (i.e., for $\,A_{l}\ll
1\,$) the tangent of the tidal lag skips the range of inverse-frequency
behaviour and becomes linear in the frequency right below the inverse Maxwell
time. This however should not surprise us, because the physically meaningful
products $\,k_{l}\,\sin\epsilon_{l}\,$ still retain a short range over which
they demonstrate the inverse-frequency behaviour. This can be understood from
Figure 2. There, on each plot, a short segment to the right of the maximum
corresponds to the situation when $\,k_{l}\,\sin\epsilon_{l}\,$ scales as
inverse frequency – see formula (90) below.
Thus we once again see that the illustrative capacity of the quality factor is
limited. To spare ourselves surprises and “paradoxes”, we should always
keep in mind that the actual calculations are based on the frequency
dependence of
$\,|\bar{k}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})\,$.
### 7.3 Tidal response in terms of
$\,|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,$
Combining (36 \- 37) with (65), one can demonstrate that in the intermediate-
frequency zone the tidal factors scale as
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,\approx\,\frac{3}{2\,(l-1)}\;\frac{A_{\textstyle{{}_{l}}}}{(A_{\textstyle{{}_{l}}}+1)^{2}}\,~{}\left(\,\tau_{{}_{M}}\,\chi\,\right)^{-1}\quad,\quad~{}\quad\mbox{for}\quad\quad\tau_{{}_{M}}^{-1}\gg\chi\gg\tau_{{}_{M}}^{-1}\,(A_{\textstyle{{}_{l}}}+1)^{-1}~{}\,~{},\quad\quad$
(90)
which corresponds to the short segment on the right of the maximum on Figure
2.
From the same formulae (36 \- 37) and (65), it ensues that the low-frequency
behaviour looks as
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|~{}\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,\approx\,\frac{3}{2\,(l-1)\textbf{}}~{}{A_{\textstyle{{}_{l}}}}~{}\,\tau_{{}_{M}}~{}\chi\quad\quad,\quad\quad\quad\quad~{}\quad\quad\,\mbox{for}~{}\quad~{}\quad\,\tau_{{}_{M}}^{-1}\,(A_{\textstyle{{}_{l}}}+1)^{-1}\,\gg\,\chi~{}~{}~{},\quad\quad~{}\quad$
(91)
a regime illustrated by the slope located on the left of the maximum on Figure
2.
Details of derivation of (90) and (91) can be found in the Appendix to
Efroimsky (2012).
Just as expression (70) resembled the frequency dependency (36) for
$\,|\bar{J}(\chi)|\,\sin\delta(\chi)~{}$ at high frequencies, so (90)
resembles the behaviour of $\,|\bar{J}(\chi)|\,\sin\delta(\chi)~{}$ at low
frequencies. At the same time, (91) demonstrates a feature inherent only in
tides, and not in the behaviour of a sample of material: at
$\,\chi<\tau_{{}_{M}}^{-1}(A_{\textstyle{{}_{l}}}+1)^{-1}\,=\,\frac{\textstyle\mu}{\textstyle\eta}\,(A_{\textstyle{{}_{l}}}+1)^{-1}$,
the factor $\;|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon_{\it{l}}(\chi)\;$ becomes
linear in $\,\chi\,$. This is not surprising, as the $\,lmpq\,$ component of
the average tidal torque or force must pass smoothly through zero and change
its sign when the $\,lmpq\,$ commensurability is crossed (and the $lmpq$ tidal
mode goes through zero and changes sign).
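In code, the two regimes (90) and (91) read as follows (a sketch; note that the two branches join continuously at the peak frequency $\,\tau_{{}_{M}}^{-1}\,(A_{l}+1)^{-1}\,$):

```python
def k_l_sin_eps_l_lowfreq(chi, A_l, tau_M, l=2):
    """Piecewise approximations (90)-(91) for the viscosity-dominated band."""
    chi_peak = 1.0 / (tau_M * (A_l + 1.0))                 # location of the maximum in Figure 2
    if chi > chi_peak:                                     # intermediate zone, eq. (90)
        return 1.5 / (l - 1) * A_l / (A_l + 1.0)**2 / (tau_M * chi)
    return 1.5 / (l - 1) * A_l * tau_M * chi               # closest vicinity of resonance, eq. (91)

# Both branches give 1.5/(l-1) * A_l/(A_l+1) at chi = chi_peak, so the curve is continuous there.
print(k_l_sin_eps_l_lowfreq(1.0e-12, A_l=2.2, tau_M=1.0e10))
print(k_l_sin_eps_l_lowfreq(1.0e-9,  A_l=2.2, tau_M=1.0e10))
```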
## 8 Why the $\,lmpq\,$ component of the tidal torque does not scale as
$\,R^{{{\,2l+1}}}$
A Fourier component $\,{\cal{T}}_{\textstyle{{}_{lmpq}}}\,$ of the tidal
torque acting on a perturbed primary is proportional to
$\,R^{\textstyle{{}^{\,2l+1}}}\,k_{l}\,\sin\epsilon_{l}\,$, where $\,R\,$ is
the primary’s mean equatorial radius. Neglect of the $\,R$-dependence of the tidal factors $\,k_{l}\,\sin\epsilon_{l}\,$ has long been a source of misunderstanding about how the torque scales with the radius.
From formulae (70) and (90), we see that everywhere except in the closest
vicinity of the resonance the tidal factors are proportional to
$~{}A_{l}/(1+A_{l})^{2}~{}$ where $\,A_{l}\sim R^{-2}\,$ according to (60).
Thence the overall dependence of the tidal torque upon the radius becomes:
$\displaystyle\mbox{Over the frequency
band}\quad\chi\,\gg\,\tau^{-1}_{{}_{M}}\,\left(1\,+\,A_{l}\right)^{-1}~{},~{}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad~{}$
$\displaystyle{\cal{T}}_{\textstyle{{}_{lmpq}}}\sim
R^{\,2l+1}k_{l}\,\sin\epsilon_{l}\sim\frac{R^{\,2l+1}\,A_{l}}{\left(1+A_{l}\right)^{2}}\sim\left\\{\begin{array}[]{c}R^{\,2l-1}~{},~{}~{}\mbox{for}~{}~{}A_{l}\ll
1~{}~{}\mbox{(superearths)}~{},{}\\\ {}\hfil\\\
R^{\,2l+3}~{},~{}~{}\mbox{for}~{}~{}A_{l}\gg 1~{}~{}\mbox{(small bodies,
~{}small terrestrial planets)}~{}.\end{array}\right.$ (95)
In the closest vicinity of the $\,lmpq\,$ commensurability, i.e., when the
tidal frequency $\,\chi_{\textstyle{{}_{lmpq}}}\,$ approaches zero, the tidal
factors’ behaviour is described by (91). This furnishes a different scaling
law for the torque, and the form of this law is the same for telluric bodies
of all sizes:
$\displaystyle\mbox{Over the frequency
band}\quad\chi\,\ll\,\tau^{-1}_{{}_{M}}\,\left(1\,+\,A_{l}\right)^{-1}~{},~{}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad~{}\quad\quad\quad\quad\quad\quad\quad\quad~{}\quad\quad~{}$
$\displaystyle\quad\quad{\cal{T}}_{\textstyle{{}_{lmpq}}}\sim
R^{\,2l+1}k_{l}\,\sin\epsilon_{l}\sim
R^{\,2l+1}\,A_{l}\sim\,R^{\,2l-1}~{}~{}~{}.\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$
(96)
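The limiting exponents in (95) and the behaviour (96) can be verified numerically by differentiating $\,\log{\cal{T}}_{\textstyle{{}_{lmpq}}}\,$ with respect to $\,\log R\,$, keeping $\,A_{l}\propto R^{-2}\,$ as dictated by (60). A minimal sketch, valid over the frequency band of (95); the chosen values of $\,A_{l}\,$ are merely illustrative end-members:

```python
import math

def torque_scaling_exponent(A_l, l=2, dlogR=1.0e-4):
    """Numerical d(log T)/d(log R) for T ~ R^(2l+1) A_l/(1+A_l)^2, with A_l ~ R^(-2);
    cf. the limiting exponents in (95)."""
    def logT(logR):
        A = A_l * math.exp(-2.0 * logR)                    # A_l(R) ~ R^(-2), normalised at logR = 0
        return (2 * l + 1) * logR + math.log(A / (1.0 + A)**2)
    return (logT(dlogR) - logT(-dlogR)) / (2.0 * dlogR)

print(torque_scaling_exponent(A_l=200.0))   # ~ 7, i.e. R^(2l+3): small-body limit
print(torque_scaling_exponent(A_l=0.01))    # ~ 3, i.e. R^(2l-1): deep superearth limit
```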
## 9 Conclusions and examples
Within the inelasticity-dominated band, the phase lags in a homogeneous near-
spherical body and in a sample of material interrelate as
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,=\,\frac{A_{l}}{1\,+\,A_{l}}~{}\tan\delta\,~{}\approx~{}\left\\{\begin{array}[]{c}A_{l}~{}\tan\delta\quad\quad\mbox{for}\quad
A_{l}\ll 1~{}~{}\mbox{(superearths)}~{}~{},\\\ {}\hfil\\\
\quad\quad\tan\delta\quad~{}~{}~{}\mbox{for}\quad A_{l}\gg 1~{}~{}\mbox{(small
bodies, ~{}small terrestrial planets)}~{}\,.\end{array}\right.$ (100)
However within the transitional zone, the link between the seismic and tidal
dissipation rates becomes more complicated.
The interrelation between the tidal and seismic damping becomes apparently
paradoxical at low frequencies, where viscosity dominates. As can be seen from
(84 \- 89), in the zero-frequency limit the tidal and seismic $\,Q\,$ factors (if
defined as cotangents of the appropriate lags) become inversely proportional
to one another:
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}\,~{}\approx~{}A_{l}\,\chi\,\tau_{{}_{M}}~{}=~{}\frac{\textstyle
A_{l}}{\,\textstyle\tan\delta\,}~{}\quad\quad\quad\quad\mbox{for}\quad\quad
0\leq\chi\ll\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}}\,\sqrt{A_{\textstyle{{}_{l}}}+1}}~{}~{}~{}.\,~{}~{}~{}$
(101)
This behaviour however has a good qualitative explanation – the $\,lmpq\,$ component of the average tidal torque should vanish on crossing the $\,lmpq\,$ resonance.
While in qualitative discussions it is easier to deal with the quality factors
$\,Q_{l}\,$, in practical calculations we should rely on the factors
$\,k_{l}\,\sin\epsilon_{l}\,$, which show up in the Darwin-Kaula expansion of
tides. Just as $\,\tan\epsilon_{l}\,$, so the quantity
$\,k_{l}\,\sin\epsilon_{l}\,$ too becomes linear in $\,\chi\,$ for low values
of $\,\chi\,$. As we saw in subsection 7.3, this happens at frequencies $\,\chi\ll\frac{\textstyle 1}{\textstyle\tau_{{}_{M}}\,{(A_{l}+1)\,}}~{}$. The slight difference between
this threshold and the one shown in (101) stems from the fact that not only
the lag but also the Love number is frequency dependent.
The factors $\,k_{l}\,\sin\epsilon_{l}\,$ bear dependence upon the radius
$\,R\,$ of a tidally disturbed primary, and the form of this dependence is not
always trivial. At low frequencies, this dependence follows the intuitively
obvious rule that the heavier the body the stronger it mitigates tides (and
thence the smaller the value of $\,k_{l}\,\sin\epsilon_{l}\,$). However at
high frequencies the calculated frequency dependence obeys this rule only for bodies about twice the size of the Earth or larger, i.e.,
when self-gravitation clearly plays a larger role in tidal friction than the
rheology does – see the discussion at the end of subsection 5.4.
The dependence of $\,k_{l}\,\sin\epsilon_{l}\,$ upon $\,R\,$ helps one to
write down the overall $\,R$-dependence of the tidal torque. Contrary to the
common belief, the $\,lmpq\,$ component of the torque does not scale as
$\,R^{\,2l+1}\,$, see formulae (95) and (96).
Here follow some examples illustrating how our machinery applies to various
celestial bodies.
* 1\.
For small bodies and small terrestrial planets, the effect of self-gravitation
is negligible, except in the closest vicinity of the zero frequency.
Accordingly, for these bodies there is no difference between the tidal and
seismic dissipations. 181818 This can be understood also through the following
line of reasoning. For small objects, we have $\,A_{\textstyle{{}_{l}}}\gg
1\,$; so the complex Love numbers (59b) may be approximated with
$\displaystyle\bar{k}_{\textstyle{{}_{l}}}(\chi)\,=\;-\;\frac{3}{2}\;\frac{\bar{J}(\chi)}{\bar{J}(\chi)\;+\;A_{l}\;{J}}\;=\;-\;\frac{3}{2}\;\frac{\bar{J}(\chi)}{A_{l}\;{J}}~{}+~{}O\left(\,|\,\bar{J}(\chi)/(A_{l}\,J)\,|^{2}\,\right)\;\;\;.~{}~{}~{}~{}~{}$
The latter entails:
$\displaystyle\tan\epsilon_{\textstyle{{}_{l}}}(\chi)\;\equiv\;-\;\frac{{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]}{{\cal{R}}{\it{e}}\left[\bar{k}_{\it{l}}(\chi)\right]}\;\approx\;-\;\frac{{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]}{{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]}\;=\;\tan\delta(\chi)\;\;\;,$
which is, in fact, correct up to a sign – see the closing paragraph of
subsection 5.1.
Things change in the closest vicinity of the zero frequency. As can be
observed from the second line of (84), for small bodies and small planets the
tangent of the tidal lag becomes linear in the tidal frequency $\,\chi\,$ when
the frequency $\,\chi\,$ becomes short of a certain threshold:191919 Recall
that for small objects $\,A_{l}\gg 1\,$. $~{}\,\chi\ll\frac{\textstyle
1}{\textstyle\tau_{{}_{M}}\,\sqrt{A_{l}+1\,}}\,\approx\,\frac{\textstyle
1}{\textstyle\tau_{{}_{M}}\,\sqrt{A_{l}}}~{}$. As can be seen from (91), the
tidal factor
$\,k_{l}\,\sin\epsilon_{l}\equiv|\bar{k}_{l}(\chi)|\,\sin\epsilon_{l}(\chi)\,$
becomes linear in $\,\chi\,$ for
$~{}\,\chi\ll\textstyle\tau_{{}_{M}}^{-1}\,(A_{l}+1)^{-1}\,\approx\,\textstyle\tau_{{}_{M}}^{-1}\,A_{l}^{-1}~{}$.
* 2\.
Tidal dissipation in superearths is much less efficient than in smaller
terrestrial planets or moons – a circumstance that should reduce considerably
the rates of orbit circularisation. This cautionary point has ramifications also for the other tidal-dynamic timescales (e.g., despinning, migration).
In simple words, self-gravity reduces tidal dissipation because gravitational
attraction pulls the tidal bulge back down, and thus reduces strain in a way
similar to material strength.
As can be seen from (89), at tidal frequencies $\,\chi\,$ lower than the
inverse Maxwell time,202020 For superearths, $\,A_{l}\ll 1\,$. the tangent of
the tidal lag changes its behaviour considerably, thereby avoiding divergence
at the zero frequency. According to (91), the same pertains to the factor
$\,k_{l}\,\sin\epsilon_{l}\,$.
* 3\.
While the role of self-gravity is negligible for small planets and is dominant
for superearths, the case of the Earth is intermediate. For our mother planet,
the contribution of self-gravitation into the Love numbers and phase lags is
noticeable, though probably not leading. Indeed, for $\,\mu\approx 0.8\times
10^{11}~{}$Pa, one arrives at:
$\displaystyle A_{2}\,\approx~{}2.2~{}~{}~{},$ (102)
so formula (69) tells us that the Earth’s tidal quality factor is a bit larger
than its seismic counterpart, taken at the same frequency: 212121 When
Benjamin et al. (2006) say that, according to their data, the tidal quality
factor is slightly lower than the seismic one, these authors compare the two
$\,Q\,$ factors measured at different frequencies. Hence their statement is in
no contradiction to our conclusions.
$\displaystyle{}^{(tidal)}Q_{\textstyle{{}_{2}}}^{(solid~{}Earth)}\,\approx~{}1.5~{}\times~{}\,{}^{(seismic)}Q^{(solid~{}Earth)}{\left.~{}~{}\right.}_{\textstyle{{}_{\textstyle{.}}}}$
(103)
The geodetic measurements of semidiurnal tides, carried out by Ray et al.
(2001), yield
$\,{}^{(tidal)}Q_{\textstyle{{}_{2}}}^{(solid~{}Earth)}\,\approx\,280\,$. The
seismic quality factor $\,{}^{(seismic)}Q^{(solid~{}Earth)}\,$ varies over the
mantle, assuming values from $100$ through $300$. Accepting $200$ as a plausible average, we see that (103) furnishes a satisfactory qualitative estimate (a short numerical cross-check is sketched after this list).
This close hit should not of course be accepted too literally, given the
Earth’s complex structure and the uncertainty in our knowledge of the Earth’s
rigidity. Still, on a qualitative level, we may enjoy this proximity with
cautious optimism.
* 4\.
The case of the Moon deserves special attention. Fitting of the LLR data to
the power scaling law $\,Q\sim\chi^{p}\,$ has rendered a small negative value
of the exponent: $\,p\,=\,-\,0.19~{}$ (Williams et al. 2001). Further
attempts by the JPL team to reprocess the data have led to
$\,p\,=\,-\,0.07~{}$. According to Williams & Boggs (2009),
“There is a weak dependence of tidal specific dissipation $\,Q\,$ on period.
The $\,Q\,$ increases from $\,\sim 30\,$ at a month to $\,\sim 35\,$ at one
year. $~{}Q\,$ for rock is expected to have a weak dependence on tidal period,
but it is expected to decrease with period rather than increase. The frequency
dependence of $\,Q\,$ deserves further attention and should be improved.”
To understand the origin of the small negative value of the power, recall that
it emerged through fitting of the tidal $Q_{2}$ and not of the seismic $Q$. If
the future laser ranging confirms these data, this will mean that the
principal tide in the Moon is located close to the maximum of the inverse
tidal quality factor, i.e., close to the maximum taken by
$\,\tan\epsilon_{2}\,$ in (84) at the frequency inverse to
$\,\tau_{{}_{M}}\,\sqrt{A_{l}}\,$. Rigorously speaking, it was of course the
factor $\,k_{2}\,\sin\epsilon_{2}\,$ which was actually observed. The maximum
of this factor is attained at the frequency
$\,\tau_{{}_{M}}^{-1}\,(A_{\textstyle{{}_{l}}}+1)^{-1}\,$, as can be seen from (90) – (91). It then follows from the LLR data that the corresponding timescale
$\,\tau_{{}_{M}}\,(A_{\textstyle{{}_{l}}}+1)\,$ should be of order $\,0.1\,$
year. As explained in Efroimsky (2012), this would set the mean viscosity of
the Moon as low as
$\displaystyle\eta_{{}_{Moon}}\,=~{}3\,\times\,10^{16}~{}\mbox{Pa~{}s}~{}~{}~{},$
(104)
which would in turn imply a very high concentration of partial melt in the lower mantle – quite in accordance with the existing models (Nakamura et al. 1974, Weber et al. 2011).
The future LLR programs may be instrumental in resolving this difficult issue.
The value of the exponent $\,p\,$ will have ramifications for the current models of the lunar mantle.
## 10 Comparison of our result with that by Goldreich (1963)
A formula coinciding with our (69) was obtained, through remarkably economic
and elegant semi-qualitative reasoning, by Peter Goldreich (1963).
The starting point in Ibid. was the observation that the peak work performed
by the second-harmonic disturbing potential should be proportional to this
potential taken at the primary’s surface, multiplied by the maximal surface
inequality:
$\displaystyle E_{peak}\,\sim~{}R^{5}\,~{}\frac{R}{\,\frac{\textstyle 19\,\mu}{\textstyle 2\,\mbox{g}\,\rho\,R}\,+\,1\,}~{}\sim~{}\frac{R^{7}}{19\,\mu\,+\,2\,\mbox{g}\,\rho\,R}~{}~{}~{},$
(105)
$R\,$ being the primary’s radius.
In the static theory of Love, the surface strain is proportional to
$\,R^{2}/\left(\,19\,\mu\,+\,2\,\mbox{g}\,\rho\,R\,\right)\,$. The energy loss
over a cycle must be proportional to the square of the surface strain.
Integration over the volume will give an extra multiplier of $\,R^{3}\,$, up
to a numerical factor:
$\displaystyle\Delta E_{cycle}\,\sim~{}-~{}\frac{R^{7}}{\left(\,19\,\mu\,+\,2\,\mbox{g}\,\rho\,R\,\right)^{2}}~{}~{}~{}.$
(106)
Comparison of (105) and (106) rendered
$\displaystyle Q~{}=~{}-~{}\frac{2\pi~{}E_{peak}}{\Delta E_{cycle}}~{}\sim~{}\left(\,19\,\mu\,+\,2\,\mbox{g}\,\rho\,R\,\right)~{}~{}~{},$
wherefrom Goldreich (1963) deduced that
$\displaystyle\frac{Q}{Q_{0}}~{}=~{}1~{}+~{}\frac{2\,\mbox{g}\,\rho\,R}{19\,\mu}~{}~{}~{},$
$Q_{0}\,$ being the value of $\,Q\,$ for a body where self-gravitation is
negligible. This coincides with our formula (69).
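To get a feel for the size of the self-gravity correction in this formula, one may evaluate it numerically. The following sketch is only an illustration (it is not part of either derivation): it computes $\,A_{2}=19\,\mu/(2\,\mbox{g}\,\rho\,R)\,$ and the factor $\,Q/Q_{0}=1+2\,\mbox{g}\,\rho\,R/(19\,\mu)=1+1/A_{2}\,$ for a homogeneous body, using the rigidity $\,\mu\approx 0.8\times 10^{11}~{}$Pa quoted above for the Earth; the densities, radii, and small-body/superearth parameters are assumed values chosen for the example.

```python
import math

G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2

def self_gravity_factor(mu, rho, R):
    """Return (A_2, Q/Q_0) for a homogeneous body of rigidity mu, density rho and
    radius R, with g = (4/3) pi G rho R and Q/Q_0 = 1 + 1/A_2 (formula (69))."""
    g = 4.0 / 3.0 * math.pi * G * rho * R
    A2 = 19.0 * mu / (2.0 * g * rho * R)
    return A2, 1.0 + 1.0 / A2

# Earth-like values (mu = 0.8e11 Pa as in the text; rho and R assumed standard):
print(self_gravity_factor(0.8e11, 5500.0, 6.371e6))   # A_2 ~ 2.2, Q/Q_0 ~ 1.45
# A small body (R ~ 10 km): A_2 is enormous, Q/Q_0 ~ 1, self-gravity negligible.
print(self_gravity_factor(0.8e11, 2000.0, 1.0e4))
# A superearth-like body (rho ~ 8000 kg/m^3, R ~ 1.2e7 m): A_2 < 1, self-gravity dominant.
print(self_gravity_factor(0.8e11, 8000.0, 1.2e7))
```

With the Earth-like numbers this reproduces $\,A_{2}\approx 2.2\,$ and a tidal-to-seismic ratio of about $1.45$, consistent with (102)–(103) and with the geodetic and seismic values quoted above; the other two calls merely illustrate the small-body and superearth limits discussed earlier.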
In reality, the coincidence of our results is only partial, for two reasons:
* •
First, our derivation of the right-hand side of (63) was based on the prior
convention that the quantity $\,J\,$ entering expression (60) is the unrelaxed
compliance $\,J(0)\,$ of the mantle. Accordingly, the quantity $\,\mu=1/J\,$
entering the expression for $\,A_{l}\,$ should be the unrelaxed rigidity
$\,\mu(0)=1/J(0)\,$. In Goldreich (1963) however, the static, i.e., relaxed
moduli were implied.
In Ibid., this mismatch was tolerable, because the paper was devoted to small
bodies. For these objects, $\,A_{l}\,$ is large, no matter whether we plug the
relaxed or unrelaxed $\,\mu\,$ into (56). Thence the difference between the
tidal and seismic $\,Q\,$ factors is small, as can be seen from the second
line of (100).
For earths and superearths, however, the distinction between the unrelaxed and
relaxed (static) moduli is critical. As can be seen from the first line of
(100), the tidal $\,Q\,$ factor is inversely proportional to $\,A_{l}\,$ and,
thereby, is inversely proportional to the mantle rigidity $\,\mu\,$. As is well known (e.g., Ricard et al. 2009, Figure 3), the unrelaxed $\,\mu\,$ of the mantle exceeds the relaxed $\,\mu\,$ by about two orders of magnitude.
* •
Second, as our calculation demonstrates, the simple interrelation given by
(69) and (100) works only in the inelasticity-dominated band. In the
transition zone (which begins, in the solid Earth, at timescales longer than
$\sim$ 1 yr) and in the viscosity-dominated band of lower frequencies, the
interrelation between the tidal and seismic lagging is more complicated, and
it deviates from Goldreich’s formula in a fundamental way. In the zero-frequency limit, the cleavage between the tidal and seismic dissipation laws gets even larger: the tidal and seismic $\,Q\,$ factors become not proportional but inversely proportional to one another. Description of tidal lagging in all these low-frequency bands requires a rheological model and the subsequent mathematics, and cannot be obtained through the simple arguments used by
Goldreich (1963).
Despite these differences, the estimate by Goldreich (1963) provided as close
a hit as was possible without resorting to heavy mathematics. The elegance of
Peter Goldreich’s arguments and the depth of his insight are especially impressive, given the complexity of the problem and the volume of calculations required to obtain the exact answer.
## Acknowledgements
To a large extent, my understanding of the theory of bodily tides was
developing through the enlightening conversations which I had on numerous
occasions with Bruce Bills, Julie Castillo-Rogez, Véronique Dehant, Sylvio
Ferraz-Mello, Valéry Lainey, Valeri Makarov, Francis Nimmo, Stan Peale, Tim
Van Hoolst, and James G. Williams. It is a great pleasure for me to thank
deeply all these colleagues for their time and advice. Needless to say, none of them shares the responsibility for my possible omissions.
I also wish to pay tribute to the late Vladimir Churkin, whose tragic death
prevented him from publishing his preprint cited in this paper. Written with great pedagogical mastery, the preprint helped me to understand how the Love-number formalism should be combined with rheology.
My special gratitude is due to Shun-ichiro Karato for the help he so kindly
provided to me, when I was just opening for myself this intriguing area, and
for the stimulating exchanges, which we have had for years since then.
Last, and by no means least, I sincerely appreciate the support from my
colleagues at the US Naval Observatory, especially from John Bangert.
## Appendix: Symbol Key
$A_{l}$ | Dimensionless product emerging in the denominator of the expression for the Love number $\,k_{l}\,$
---|---
$E$ | Energy
${\cal{E}}$ | Empirical constant having the dimensions of time, in the generic rheological law (27)
g | Surface gravity
$G$ | Newton’s gravitational constant
$l$ | Degree (spherical harmonics, Legendre polynomials)
$m$ | Order (spherical harmonics, associated Legendre polynomials)
$J,~{}J(0)$ | Unrelaxed compliance
$J(\infty)$ | Relaxed compliance
$J(t\,-\,t\,^{\prime})$ | Creep-response function (compliance function, kernel of the compliance operator)
$\hat{J}$ | Compliance operator
$k_{l}$ | Tidal Love number of degree $l$
$k_{l}(t-t\,^{\prime})$ | Kernel of the Love operator of degree $l$
$\hat{k}_{l}$ | The Love operator of degree $l$
$\bar{k}_{l}(\chi)$ | Fourier component, at frequency $\chi$, of the time derivative of the kernel $k_{l}(t-t\,^{\prime})$
${\cal{M}}$ | Mean anomaly
$n$ | Mean motion
$p$ | Exponential in the generic rheological law (27)
$P_{l}$ | Legendre polynomials of degree $l$
$P_{lm}$ | Legendre associated functions (associated Legendre polynomials) of degree $l$ and order $m$
$Q$ | Dissipation quality factor
$r$ | Distance
$\vec{r}$ | Vector connecting the centre of the tidally-perturbed body (interpreted as the primary) with a point exterior to this body
${\mbox{{\boldmath$\vec{r}$}}}^{\,*}$ | Vector connecting the centre of the tidally-perturbed body (the primary) with a point-like tide-raising secondary
$R$ | Primary’s mean radius
$t$ | Time
$u_{\gamma\nu}$ | Shear strain tensor
$\bar{u}_{\gamma\nu}(\chi)$ | Fourier component, at frequency $\,\chi\,$, of the shear strain tensor
$U$ | Change in the potential of the tidally-perturbed body (interpreted as the primary)
$W$ | Disturbing potential generated by the tide-raising body (interpreted as the secondary)
$\alpha,\beta$ | Parameters of the Andrade model
$\gamma,\nu$ | Tensor indices
$\Gamma$ | the Gamma function
$\delta$ | Material phase lag
$\Delta t$ | Time lag
$\epsilon$ | Tidal phase lag
$\epsilon_{\textstyle{{}_{lmpq}}}$ | Tidal phase lag of the mode $lmpq$ in the Darwin-Kaula expansion
$\lambda$ | Longitude
$\zeta$ | Parameter of the reformulated Andrade model (ratio of the Andrade timescale $\tau_{{}_{A}}$ to the Maxwell time $\tau_{{}_{M}}$)
$\eta$ | Viscosity
$\mu,~{}\mu(0)$ | Unrelaxed shear modulus (unrelaxed rigidity)
$\mu(\infty)$ | Relaxed shear modulus (relaxed rigidity)
$\mu(t-t\,^{\prime})$ | Stress-relaxation function (kernel of the rigidity operator)
$\hat{\mu}$ | Rigidity operator
$\phi$ | Latitude
$\rho$ | Mass density
$\sigma_{\gamma\nu}$ | Shear stress tensor
$\bar{\sigma}_{\gamma\nu}(\chi)$ | Fourier component, at frequency $\,\chi\,$, of the shear stress tensor
$\tau$ | Time
$\tau_{{}_{M}}$ | Maxwell time (viscoelastic timescale)
$\tau_{{}_{A}}$ | Andrade time (inelastic timescale)
${\cal{T}}$ | Tidal torque
$\Theta(t-t\,^{\prime})$ | the Heaviside function
$\theta$ | Sidereal angle of the tidally-disturbed body (interpreted as the primary)
$\stackrel{{\scriptstyle\bf\centerdot}}{{\theta\,}}$ | Spin rate of the primary
$\varphi_{\sigma}\,,~{}\varphi_{u}$ | Initial phases of the stress and strain
$\chi$ | Frequency
$\chi_{{}_{0}}$ | Frequency threshold marking the boundary between the inelasticity- and viscosity-dominated frequency bands
$\chi_{\textstyle{{}_{lmpq}}}$ | Physical frequencies of deformation emerging in the tidal theory (absolute values of the tidal modes $\omega_{\textstyle{{}_{lmpq}}}$ )
$\omega_{\textstyle{{}_{lmpq}}}$ | Tidal modes in the Darwin-Kaula expansion of tides
$\omega$ | Argument of the pericentre
$\Omega$ | Longitude of the node
## References
* [1] Andrade, E. N. da C. 1910. “On the Viscous Flow in Metals, and Allied Phenomena.” _Proceedings of the Royal Society of London. Series A._ Vol. 84, pp. 1 - 12
* [2] Benjamin, D.; Wahr, J. ; Ray, R. D.; Egbert, G. D.; and Desai, S. D. 2006. “Constraints on mantle anelasticity from geodetic observations, and implications for the $\,J_{2}\,$ anomaly.” Geophysical Journal International, Vol. 165, pp. 3 - 16
* [3] Birger, B. I. 2007. “Attenuation of Seismic Waves and the Universal Rheological Model of the Earth’s Mantle.” Izvestiya. Physics of the Solid Earth. Vol. 49, pp. 635 - 641
* [4] Carter, J. A.; Winn, J. N.; Holman, M. J.; Fabrycky, D.; Berta, Z. K.; Burke, C. J.; and Nutzman, P. 2011. “The Transit Light Curve Project. XIII. Sixteen Transits of the Super-Earth GJ 1214b.” The Astrophysical Journal, Vol. 730, p. 82
* [5] Castillo-Rogez, J. 2009. “New Approach to Icy Satellite Tidal Response Modeling.” American Astronomical Society, DPS meeting 41, Abstract 61.07.
* [6] Castillo-Rogez, J. C.; Efroimsky, M., and Lainey, V. 2011. “The tidal history of Iapetus. Dissipative spin dynamics in the light of a refined geophysical model”. Journal of Geophysical Research – Planets, Vol. 116, p. E09008
doi:10.1029/2010JE003664
* [7] Churkin, V. A. 1998. “The Love numbers for the models of inelastic Earth.” Preprint No 121. Institute of Applied Astronomy. St.Petersburg, Russia. /in Russian/
* [8] Cottrell, A. H., and Aytekin, V. 1947. “Andrade’s creep law and the flow of zinc crystalls.” Nature, Vol. 160, pp. 328 - 329
* [9] Dehant V. 1987a. “Tidal parameters for an inelastic Earth.” Physics of the Earth and Planetary Interiors, Vol. 49, pp. 97 - 116
* [10] Dehant V. 1987b. “Integration of the gravitational motion equations for an elliptical uniformly rotating Earth with an inelastic mantle.” Physics of the Earth and Planetary Interiors, Vol. 49, pp. 242 - 258
* [11] Duval, P. 1978. “Anelastic behaviour of polycrystalline ice.” Journal of Glaciology, Vol. 21, pp. 621 - 628
* [12] Eanes, R. J. 1995. A study of temporal variations in Earth’s gravitational field using LAGEOS-1 laser ranging observations. PhD thesis, University of Texas at Austin
* [13] Eanes, R. J., and Bettadpur, S. V. 1996. “Temporal variability of Earth’s gravitational field from laser ranging.” In: Rapp, R. H., Cazenave, A. A., and Nerem, R. S. (Eds.) Global gravity field and its variations. Proceedings of the International Association of Geodesy Symposium No 116 held in Boulder CO in July 1995. IAG Symposium Series, Vol. 116, pp. 30 - 41. Springer, NY 1997
ISBN: 978-3-540-60882-0
* [14] Efroimsky, M., and V. Lainey. 2007. “The Physics of Bodily Tides in Terrestrial Planets, and the Appropriate Scales of Dynamical Evolution.” _Journal of Geophysical Research – Planets_ , Vol. 112, p. E12003. doi:10.1029/2007JE002908
* [15] Efroimsky, M., and Williams, J. G. 2009. “Tidal torques. A critical review of some techniques.” Celestial mechanics and Dynamical Astronomy, Vol. 104, pp. 257 - 289
arXiv:0803.3299
* [16] Efroimsky, M. 2012. “Bodily tides near spin-orbit resonances.” Celestial mechanics and Dynamical Astronomy, Vol. 112, pp. 283 - 330.
Extended version available at: arXiv:1105.6086
* [17] Findley, W. N.; Lai, J. S.; & Onaran, K. 1976. Creep and relaxation of nonlinear viscoelastic materials. Dover Publications, 384 pp.
* [18] Fontaine, F. R.; Ildefonse, B.; and Bagdassarov, N. 2005. “Temperature dependence of shear wave attenuation in partially molten gabbronorite at seismic frequencies.” Geophysical Journal International, Vol. 163, pp. 1025 - 1038
* [19] Goldreich, P. 1963. “On the eccentricity of the satellite orbits in the Solar System.” Monthly Notices of the Royal Astronomical Society of London, Vol. 126, pp. 257 - 268
* [20] Gribb, T.T., & Cooper, R.F. 1998. “Low-frequency shear attenuation in polycrystalline olivine: Grain boundary diffusion and the physical significance of the Andrade model for viscoelastic rheology.” Journal of Geophysical Research – Solid Earth, Vol. 103, pp. B27267 - B27279
* [21] Henning, W. G.; O’Connell, R.; and Sasselov, D. 2009. “Tidally Heated Terrestrial Exoplanets: Viscoelastic Response Models.” The Astrophysical Journal, Vol 707, pp 1000-1015
* [22] Johnson, C. L.; Solomon, S. C.; Head, J. W.; Phillips, R. J.; Smith, D. E.; and Zuber, M. T. 2000. “Lithospheric Loading by the Northern Polar Cap on Mars.” Icarus, Vol. 144, pp. 313 - 328
* [23] Karato, S.-i. 2008. Deformation of Earth Materials. An Introduction to the Rheology of Solid Earth. Cambridge University Press, UK.
* [24] Karato, S.-i., and Spetzler, H. A. 1990. “Defect Microdynamics in Minerals and Solid-State Mechanisms of Seismic Wave Attenuation and Velocity Dispersion in the Mantle.” Reviews of Geophysics, Vol. 28, pp. 399 - 423
* [25] Kaula, W. M. 1964. “Tidal Dissipation by Solid Friction and the Resulting Orbital Evolution.” Reviews of Geophysics, Vol. 2, pp. 661 - 684
* [26] Léger, A.; Rouan, D.; Schneider, J.; Barge, P.; Fridlund, M.; Samuel, B.; Ollivier, M.; Guenther, E.; Deleuil, M.; Deeg, H. J.; and 151 coauthors. 2009. “Transiting exoplanets from the CoRoT space mission. VIII. CoRoT-7b: the first super-Earth with measured radius.” Astronomy and Astrophysics, Vol. 506 pp. 287 - 302
* [27] McCarthy, C.; Goldsby, D. L.; and Cooper, R. F. 2007. “Transient and Steady-State Creep Responses of Ice-I/Magnesium Sulfate Hydrate Eutectic Aggregates.” 38th Lunar and Planetary Science Conference XXXVIII, held on 12 - 16 March 2007 in League City, TX. LPI Contribution No 1338, p. 2429
* [28] Mitchell, B. 1995. “Anelastic structure and evolution of the continental crust and upper mantle from seismic surface wave attenuation.” Reviews of Geophysics, Vol 33, pp 441 - 462
* [29] Nakamura, Y.; Latham, G.; Lammlein, D.; Ewing, M.; Duennebier, F.; and Dorman, J. 1974. “Deep lunar interior inferred from recent seismic data.” Geophysical Research Letters, Vol. 1, pp. 137 - 140
* [30] Nimmo, F. 2008. “Tidal Dissipation and Faulting.” Paper presented at the Science of Solar System Ices Workshop. Lunar and Planetary Institute, Oxnard CA
* [31] Ray, R. D.; Eanes, R. J.; and Lemoine, F. G. 2001. “Constraints on energy dissipation in the Earth’s body tide from satellite tracking and altimetry.” Geophysical Journal International, Vol. 144, pp. 471 - 480
* [32] Ricard, Y.; Matas, J.; and Chambat, F. 2009. “Seismic attenuation in a phase change coexistence loop.” Physics of the Earth and Planetary Interiors., Vol. 176, pp. 124 - 131
* [33] Shito, A.; Karato, S.-I.; and Park, J. 2004. “Frequency dependence of Q in Earth’s upper mantle inferred from continuous spectra of body waves.” Geophysical Research Letters, Vol. 31, p. L12603, doi:10.1029/2004GL019582
* [34] Smith M. 1974. “The scalar equations of infinitesimal elastic-gravitational motion for a rotating, slightly elliptical Earth.” The Geophysical Journal of the Royal Astronomical Society, Vol. 37, pp. 491 - 526
* [35] Stachnik, J. C.; Abers, G. A.; and Christensen, D. H. 2004. “Seismic attenuation and mantle wedge temperatures in the Alaska subduction zone.” Journal of Geophysical Research – Solid Earth, Vol. 109, No B10, p. B10304, doi:10.1029/2004JB003018
* [36] Tan, B. H.; Jackson, I.; and Fitz Gerald J. D. 1997. “Shear wave dispersion and attenuation in fine-grained synthetic olivine aggregates: preliminary results.” Geophysical Research Letters, Vol. 24, No 9, pp. 1055 - 1058, doi:10.1029/97GL00860
* [37] Tyler, R. H. 2009. “Ocean tides heat Enceladus.” Geophysical Research Letters, Vol. 36, p. L15205, doi:10.1029/2009GL038.300
* [38] Weber, R. C.; Lin, Pei-Ying; Garnero, E.; Williams, Q.; and Lognonné, P. 2011. “Seismic Detection of the Lunar Core.” Science, Vol. 331, Issue 6015, pp. 309 - 312
* [39] Weertman, J., and Weertman, J. R. 1975. “High Temperature Creep of Rock and Mantle Viscosity.” Annual Review of Earth and Planetary Sciences, Vol. 3, pp. 293 - 315
* [40] Williams, J. G., Boggs, D. H., Yoder, C. F., Ratcliff, J. T., and Dickey, J. O. 2001. “Lunar rotational dissipation in solid-body and molten core.” The Journal of Geophysical Research – Planets, Vol. 106, No E11, pp. 27933 - 27968. doi:10.1029/2000JE001396
* [41] Williams, J. G., and Boggs, D. H. 2009. “Lunar Core and Mantle. What Does LLR See?”
Proceedings of the 16th International Workshop on Laser Ranging held on 12-17
October 2008 in Poznan, Poland. Edited by S. Schilliak. Published by: Space
Research Centre, Polish Academy of Sciences, Warsaw. Vol. 101, pp. 101 - 120
http://cddis.gsfc.nasa.gov/lw16/docs/papers/sci_1_Williams_p.pdf
http://cddis.gsfc.nasa.gov/lw16/docs/papers/proceedings_vol2.pdf
* [42] Zahn, J.-P. 1966. “Les marées dans une étoile double serrée.” Annales d’Astrophysique, Vol. 29, pp. 313 - 330
* [43] Zharkov, V.N., and Gudkova, T.V. 2009. “The period and $Q$ of the Chandler wobble of Mars.” Planetary and Space Science, Vol. 57, pp. 288 - 295
# On the three-body Schrödinger equation with decaying potentials
Rytis Juršėnas
Institute of Theoretical Physics and Astronomy
of Vilnius University, A. Goštauto 12, LT-01108
###### Abstract.
The three-body Schrödinger operator in the space of square integrable
functions is found to be a certain extension of operators which generate the
exponential unitary group containing a subgroup with nilpotent Lie algebra of
length $\kappa+1$, $\kappa=0,1,\ldots$ As a result, the solutions to the
three-body Schrödinger equation with decaying potentials are shown to exist in
the commutator subalgebras. For the Coulomb three-body system, it turns out
that the task is to solve - in these subalgebras - the radial Schrödinger
equation in three dimensions with the inverse power potential of the form
$r^{-\kappa-1}$. As an application to the Coulombic system, analytic solutions for some lower bound states are presented. Under conditions pertinent to the three-unit-charge system, the obtained solutions, with $\kappa=0$, reduce to the well-known eigenvalues of bound states at threshold. PACS: 03.65.Ge, 03.65.Db, 03.65.Fd
## 1\. Introduction
The goal of the present paper is to demonstrate an analytical approach for
solving the three-body Schrödinger equation with translation invariant
decaying potentials. The three-body Schrödinger operator $H$ (the Hamiltonian
operator, henceforth) is represented by the closure of operator sum $T+V$. The
kinetic energy operator $T$ is defined so that its closure, denoted by $T$ as
well, is a self-adjoint operator on the domain $D(T)$ and acting in
$L^{2}(\bm{R}^{9})$ by
$T=-\sum_{1\leq i\leq 3}(2m_{i})^{-1}\Delta_{i}.$ (1.1)
The constants $m_{i}>0$ ($i=1,2,3$) are referred to as masses, and the Laplacian $\Delta_{i}$ acts on the three-dimensional vector $\bm{r}_{i}=(x_{i},y_{i},z_{i})\in\bm{R}^{3}$, with absolute value $r_{i}\in[0,\infty)$. The potential energy operator $V$ is a scalar translation invariant operator of multiplication by $V$, where the real function $V$ fulfills several assumptions: (A1)
$V=\sum_{1\leq i<j\leq 3}V_{ij}(\bm{r}_{i}-\bm{r}_{j})\quad\text{and}\quad
V_{ij}\to 0\quad\text{as}\quad|\bm{r}_{i}-\bm{r}_{j}|\to\infty.$ (1.2)
(A2) $V_{ij}$ is of the differentiability class $C^{\infty}(\bm{R}^{3})$ and
it is analytic everywhere except, possibly, at $\bm{r}_{i}=\bm{r}_{j}$ for
$i\neq j$. (A3) The operator $V$ is assumed to be a symmetric $T$-bounded
operator in the sense of Kato [Kat51] (see also [Sim00] and the citation
therein), with its domain satisfying $D(V)\supset D(T)$. This assumption
ensures the self-adjointness of $H$ on $D(T)$.
Let $\nabla_{ij}$ be the gradient in vectors
$\bm{r}_{ij}=\bm{r}_{i}-\bm{r}_{j}$ with the absolute value $r_{ij}$. If one
defines the sum $\sum_{i<j}\nabla_{ij}$ by $G$ (Lemma 2), with $G_{z}$ its
$z$-component, and $V_{n}$ by $G_{z}^{n}V$ ($n=0,1,\ldots$), then there exists
a subset $\bm{I}_{\kappa}\subset\bm{R}^{6}$ (§2.2, eq. (2.3)) such that
operators $G_{z}$, $V_{0}\equiv V$, $V_{1}$, $\ldots$, $V_{\kappa}$ form a
$(\kappa+2)$-dimensional nilpotent Lie algebra $\mathcal{A}$ in
$\bm{I}_{\kappa}$ (see Theorem 1), whereas $V_{\kappa+p}$ is the operator of
multiplication by zero for all integers $p=1,2,\ldots$ For a particular
Coulomb three-body system, the latter leads to a well-known observation that
bound states exist whenever one of the three charges has a different sign
(see, for example, [BD87, FB92, MRW92]), though this is not a sufficient
condition of boundedness (see also §3.1 or, in particular, eq. (3.1)).
After establishing the nilpotency of $\mathcal{A}$ and thereby the existence
of its commutator subalgebras $\mathcal{A}^{c}\subseteq\mathcal{A}$
($c=0,1,\ldots,\kappa+1$) we come to an important conclusion that the
eigenvalue $E$ of $H$ in $L^{2}(\bm{R}^{9})$ is the sum of $E_{\kappa}$, the
eigenvalues of $H_{\kappa}$ in $L^{2}(\bm{I}_{\kappa})$, where $\kappa$ runs,
in general, from $0$ to $\infty$ (eq. (2.5)), whereas the eigenvalue equation
for $H_{\kappa}$ can be decomposed, to some extent, into two equations in
distinct subspaces so that their common solutions with respect to the
eigenvalues are equal to $E_{\kappa}^{0}$, eq. (3.5). Then the eigenvalue
$E_{\kappa}$ equals $E_{\kappa}^{0}$ plus the correction due to the
Hughes–Eckart term, eq. (3.3). In case that $V$ represents the Coulomb
potential, these two equations are nothing but the separated radial
Schrödinger equations in three dimensions with the inverse power potential of
$r^{-\kappa-1}$ type. Namely, one of our main goals is to demonstrate that
solutions to the three-body Schrödinger equation in a Coulomb potential are
approximated by solving a one-dimensional second order ordinary differential
equation in $L^{2}(0,\infty;dr)$,
$u_{\kappa,l}^{\prime\prime}(r)+\bigl{(}B_{\kappa}-[l(l+1)/r^{2}]-[A_{\kappa}/r^{\kappa+1}]\bigr{)}u_{\kappa,l}(r)=0,\quad
l=0,1,\ldots,$ (1.3)
with some real constants $A_{\kappa}$ and $B_{\kappa}$, the latter being
proportional to $E_{\kappa}^{0}$. For $\kappa=0$, the subspace $\bm{I}_{0}$ yields $V_{p}=0$ for all integers $p\geq 1$; in particular, $V_{1}=0$ is a familiar relation known as the Wannier saddle point (see e.g. [SG87]).
Under appropriate boundary conditions functions $u_{0,l}(r)$ are expressed in
terms of the confluent Whittaker functions, and the associated eigenvalues are
proportional to $E_{0}^{0}\propto-A_{0}^{2}/(4n^{2})$, where integers $n\geq
l+1$. These eigenvalues represent the energies of bound states at threshold,
§3.2. Based on perturbative arguments one deduces that the Hughes–Eckart term
does not influence the ground state of $H_{\kappa}$, Proposition 1; see also
[Sim70].
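As a quick numerical cross-check of the $\kappa=0$ statement above, eq. (1.3) with the potential $A_{0}/r$ can be discretised by second-order finite differences and diagonalised; the bound-state values of $B_{0}$ should then approach $-A_{0}^{2}/(4n^{2})$. The sketch below is a minimal illustration under assumed parameters (the coupling $A_{0}=-2$, the grid, and the use of SciPy are choices made only for this example, not part of the paper's formalism).

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Radial eq. (1.3) with kappa = 0:  -u'' + [ l(l+1)/r^2 + A0/r ] u = B u
A0, l = -2.0, 0            # assumed attractive coupling and angular momentum
h, N = 0.01, 8000          # grid step and number of interior points (r_max = 80)
r = h * np.arange(1, N + 1)

diag = 2.0 / h**2 + l * (l + 1) / r**2 + A0 / r   # tridiagonal Hamiltonian: diagonal
off = -np.ones(N - 1) / h**2                      # and off-diagonal entries
B = eigh_tridiagonal(diag, off, eigvals_only=True)[:4]

n = l + np.arange(1, 5)
print(B)                        # lowest numerical eigenvalues
print(-A0**2 / (4.0 * n**2))    # expected -A_0^2/(4 n^2): -1, -1/4, -1/9, -1/16
```

The printed values should agree closely, illustrating the Coulomb-like spectrum quoted in the text; the check says nothing, of course, about the Whittaker-function form of the eigenfunctions.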
Unlike the previous case, for $\kappa=1$ we are dealing with operators in the subspace $\mathcal{A}^{1}=[\mathcal{A},\mathcal{A}]$, where $V_{1}=A_{1}r^{-2}$ is the transition potential in the sense of [FL71], while $V_{p}=0$ for all integers $p\geq 2$; $V_{1}\neq 0$ ensures a higher accuracy in obtaining the eigenvalues $E_{0}^{0}+E_{1}^{0}$ from a series expansion. According to Case [Cas50], for potentials as singular as $r^{-2}$ or more so, there exists a phase factor – proportional to the cut-off radius $r_{0}$ – that describes the breakdown of the power law at small distances $r$. Hence $E_{\kappa}=E_{\kappa}(r_{0})$ for $\kappa=1,2,\ldots$ On the other hand, $A_{\kappa}$ is proportional to $(-1)^{\kappa}$ (Corollary 4), which separates attractive potentials from repulsive ones, thus bringing in different characteristic aspects of the eigenstates.
The task to solve the radial Schrödinger equation entailing singular
potentials has been of a particular interest for years ([Cas50, FL71, Spe64,
Yaf74]) and still it draws the attention of many authors ([Gao08, GOK+01,
IS11, MEF01, Rob00]). Following [Cas50] or [Spe64], there is no ground state for these singular potentials, and there are only a finite number of bound states, as summarized e.g. in [Gao98, Gao99a, Gao99b]. So one should expect a more valuable contribution to the eigenvalue expansion at higher energy levels; for more details, see [RS78, §XIII]. Even so, several attempts have been made to find the ground state energy for a particular class of singular potentials (see e.g. [BBC80, NCU94]).
The computational problems posed by singular potentials are beyond the scope of the present paper, though the results on the subject are highly relevant for several reasons (see §3). We shall not concern ourselves with the task of finding a general solution to eq. (1.3), but rather demonstrate a method for obtaining it in the Coulomb case; only the eigenvalues $E_{0}$ and, in some respects, $E_{1}$ will be examined for illustrative purposes (§3.2.1–3.2.3).
## 2\. Similarity for the three-body Hamiltonian operator
Throughout the whole exposition, we shall exploit several Hilbert spaces of
square integrable functions. Typical among them are: $L^{2}(\bm{R}^{9})$ (as
the base space), $L^{2}(\bm{R}^{6})$ (as a subspace of translation invariant
functions), $L^{2}(\bm{I}_{\kappa})$ (as a space over vector space
$\bm{I}_{\kappa}\subset\bm{R}^{6}$), and $\bm{H}_{\kappa}$ (so that
$\bm{H}_{\kappa}\otimes\bm{H}_{\kappa}\cong L^{2}(\bm{I}_{\kappa})$). The norm
and the scalar (inner) product in a given Hilbert space will be denoted by
$\|\cdot\|$ and $(\cdot,\cdot)$; whenever necessary, the subscripts
identifying the space will be written as well.
### 2.1. Translation invariance
This section summarizes some requisite results obtained from the translation
invariance of the potential energy operator $V$.
We say $e(\bm{t})$ is a representation of the group of translations in
$\bm{R}^{3}$, denoted E(3), in the space of $C^{\infty}(\bm{R}^{3})$-functions
$f(\bm{r}_{i})$ if it fulfills $e(\bm{t})f(\bm{r}_{i})=f(\bm{r}_{i}+\bm{t})$,
where vector $\bm{t}=(t_{x},t_{y},t_{z})$. A Taylor series expansion of
$f(\bm{r}_{i}+\bm{t})$ yields
$e(\bm{t})f(\bm{r}_{i})=\exp(\mathrm{i}\bm{t}\cdot\bm{p}_{i})f(\bm{r}_{i})$,
where the $\bm{p}_{i}=-\mathrm{i}\nabla_{i}$ ($i=1,2,3$) are the generators of
E(3). Their sum over all $i$ is denoted by $K$.
###### Lemma 1.
Define the operator sum $T+V$ by $H$, with $T$ and $V$ as in eqs. (1.1)–(1.2)
and assumptions (A1)–(A3). Let $\psi\in\mathrm{Ker}(E-H)$ with
$E\in\sigma(H)$, where $\psi=\psi(E;\bm{r}_{1},\bm{r}_{2},\bm{r}_{3})$. If $V$
is invariant under the action of E(3), then (a) functions $\psi$ are
translation invariant, one writes
$\psi=\varphi(E;\bm{r}_{12},\bm{r}_{23},\bm{r}_{13})$, (b) functions $\varphi$
solve the eigenvalue equation for $T_{0}+V^{\prime}$ with the same $E$,
namely,
$\displaystyle(T_{0}+V^{\prime})\varphi(E)=E\varphi(E),$ (2.1a) $\displaystyle
T_{0}=-\sum_{1\leq i\leq 3}(2m_{i})^{-1}\partial_{i}^{2},\quad\text{on}\quad
D(T_{0})$ (2.1b)
where
$\partial_{1}=\nabla_{12}+\nabla_{13},\quad\partial_{2}=\nabla_{23}-\nabla_{12},\quad\partial_{3}=-\nabla_{23}-\nabla_{13},$
and (c) $D(T_{0})=D(T)$.
###### Remark 1.
Here and elsewhere, we distinguish potentials $V$ and $V^{\prime}$ by writing
the equation $V_{ij}(\bm{r}_{i}-\bm{r}_{j})=V_{ij}^{\prime}(\bm{r}_{ij})$.
Thus $V\varphi=V^{\prime}\varphi$ but $\nabla_{i}V=\partial_{i}V^{\prime}$.
###### Remark 2.
Vectors $\bm{r}_{12}$, $\bm{r}_{23}$, $\bm{r}_{13}$ are linearly dependent,
$\bm{r}_{12}+\bm{r}_{23}=\bm{r}_{13}$, so the above given parametrization of
$\varphi$ is rather formal, yet a convenient one for our considerations. Hence
$\varphi\in D(T)\subset L^{2}(\bm{R}^{6})$. We shall regard this aspect once
more in §3.
###### Remark 3.
It appears that $(T+V)\psi=(T_{0}+V^{\prime})\varphi$ holds identically. For this reason, we shall denote $T_{0}+V^{\prime}$ by the same symbol $H$; there will be no possibility of confusion.
###### Proof of Lemma 1.
The translation invariance of $V$ infers $KV=0$. On the other hand,
$[H,\bm{p}_{i}]=-\bm{p}_{i}V$. Hence $[H,K]=0$. The three components $K_{x}$,
$K_{y}$, $K_{z}$ of $K$ commute with each other and thus for our purposes, it
suffices to choose one of them, say $K_{z}$.
The commutator $[H,K_{z}]=0$ yields $\psi\in\mathrm{Ker}(\lambda-
K_{z})\cap\mathrm{Ker}(E-H)\neq\varnothing$, with $\lambda\in\sigma(K_{z})$.
Subsequently, functions $\psi$ associated with $E$ are labeled by $\lambda$ as
well. One writes $\psi=\psi(E\lambda)$. These functions solve the following
equation
$\frac{\partial\psi}{\partial z_{1}}+\frac{\partial\psi}{\partial z_{2}}+\frac{\partial\psi}{\partial z_{3}}=\mathrm{i}\lambda\psi.$
The partial differential equation is satisfied whenever functions $\psi$ take
one of the following forms: $\exp(\mathrm{i}\lambda z_{1})\varphi$,
$\exp(\mathrm{i}\lambda z_{2})\varphi$ or $\exp(\mathrm{i}\lambda
z_{3})\varphi$, where translation invariant functions $\varphi$ (with
$K_{z}\varphi=0$) are labeled by $E$ only,
$\varphi=\varphi(E;\bm{r}_{12},\bm{r}_{23},\bm{r}_{13})$. Since all three
forms are equivalent (with their appropriate functions $\varphi$), we choose
the first one. Note that it suffices to choose functions $\varphi$ being
invariant under translations along the $z$ axis only. By applying the same
procedure for the remaining components, $K_{x}$, $K_{y}$, we would deduce that
functions $\varphi$ are invariant under translations along the $x$, $y$ axes
as well. Bearing this in mind, we deduce that functions $\varphi$ are
invariant under translations along all three axes associated with each
$\bm{r}_{ij}$, that is, in $\bm{R}^{3}\times\bm{R}^{3}\times\bm{R}^{3}$.
The application of $e(t_{z})$ to $\psi=\exp(\mathrm{i}\lambda z_{1})\varphi$
yields $e(t_{z})\psi=\exp(\mathrm{i}\lambda t_{z})\psi$. This means that
functions $\psi$ labeled by a particular E(3)-scalar representation,
$\lambda=0$, are translation invariant, $\psi(E0)=\varphi(E)$.
We wish to find the operator $T_{\lambda}$ whose range for all $\varphi$ in
$D(T_{\lambda})$ is the same as that of $T$ for all $\psi$ in $D(T)$, namely,
$T_{\lambda}\varphi=T\psi$. First, we calculate the gradients of
$\exp(\mathrm{i}\lambda z_{1})\varphi$. Second, we calculate the corresponding
Laplacians. Third, we substitute obtained expressions in eq. (1.1). The result
reads
$T_{\lambda}\varphi=e^{\mathrm{i}\lambda z_{1}}(T_{0}+\lambda t_{\lambda})\varphi\quad\text{with}\quad t_{\lambda}\varphi=-(2m_{1})^{-1}\biggl{[}-\lambda+2\mathrm{i}\biggl{(}\frac{\partial}{\partial z_{12}}+\frac{\partial}{\partial z_{13}}\biggr{)}\biggr{]}\varphi.$
For a particular $\lambda=0$, we get the tautology $T_{0}=T_{0}$. [Note:
$\partial_{i}\varphi=\nabla_{i}\psi$.] For arbitrary $\lambda$, substitute
$T_{\lambda}$ in $(T+V)\psi(E\lambda)=E\psi(E\lambda)$ and get the equation
$(T_{0}+V^{\prime}+\lambda t_{\lambda})\varphi(E)=E\varphi(E)$. The number of
eigenvalues $\lambda$ is infinite, and the latter must hold for all of them.
It follows from $\lambda t_{\lambda}\varphi(E)=0$ that $\lambda=0$ or
$t_{\lambda}\varphi(E)=0$. But $t_{\lambda}\varphi(E)=0$ is improper since
$\varphi=\varphi(E)$ is $\lambda$ independent. Therefore, $\lambda=0$ and
functions $\psi$ are translation invariant, $\psi=\varphi(E)$, and they
satisfy eq. (2.1a). This gives items (a)–(b). Item (c) follows immediately due
to $\psi=\varphi$. The proof is complete. ∎
###### Lemma 2.
Define $\sum_{i<j}\nabla_{ij}$ by $G$, where the sum runs over all $1\leq
i<j\leq 3$. Then there exist domains $D$, $D^{\prime}\subset D(T_{0})$ such
that $D=\\{\varphi\in D(T_{0})\\!\colon\thinspace G\varphi=0\\}$, $D\cup
D^{\prime}\subseteq D(T_{0})$ and $D\cap D^{\prime}=\varnothing$.
###### Proof.
As above, let us choose the component $G_{z}$. Then the commutators
$[G_{z},K_{z}]=0$, $[H,K_{z}]=0$, but $[G_{z},H]\neq 0$, in general. This
indicates that the (only one) eigenvalue $\lambda=0$ (proof of Lemma 1) of
$K_{z}$ is degenerate, where degenerate eigenfunctions are $\varphi\in
D(T_{0})$ (Remark 3). Therefore, if given $\varphi_{1}\in D_{1}$, $\phi\in
D^{\prime}$, $D_{1}\cup D^{\prime}=D(T_{0})$ and $D_{1}\cap
D^{\prime}=\varnothing$, then $G_{z}\varphi_{1}=\mu\varphi_{1}$ and
$H\phi=E\phi$, with some real numbers $\mu$ for all $\varphi_{1}\in D_{1}$ and
$\phi\in D^{\prime}$. Solutions to $G_{z}\varphi_{1}=\mu\varphi_{1}$ are
translation invariant functions $\varphi_{1}$. In turn, $\varphi_{1}$ is
represented by a certain translation invariant function $\tilde{\varphi}_{1}$
multiplied by either $\exp(\mu z_{12})$ or $\exp(\mu z_{23})$ or $\exp(\mu
z_{13})$. Subsequently, solutions satisfying $G_{z}\varphi_{1}=\mu\varphi_{1}$
with $\mu=0$ are in $D_{1}$ as well, by the proof of Lemma 1. The nonempty
subset $D\subseteq D_{1}$ of these solutions is exactly what we were looking
for. ∎
###### Remark 4.
It follows from the above lemma that $H\varphi=E\varphi$, with
$H\\!\colon\thinspace D(T_{0})\to L^{2}(\bm{R}^{6})$, actually means
$H\phi=E\phi$, with $H\\!\colon\thinspace D^{\prime}\to L^{2}(\bm{R}^{6})$
(see also Remarks 2–3).
###### Corollary 1.
The operator $T_{0}\\!\colon\thinspace D\to L^{2}(\bm{R}^{6})$ is represented
by the following equivalent forms
$\displaystyle T_{0}=$
$\displaystyle-\alpha\Delta_{12}-\beta\Delta_{23}+\gamma(\nabla_{12}\cdot\nabla_{23}),$
(2.2a) $\displaystyle=$
$\displaystyle-\xi\Delta_{23}-\alpha\Delta_{13}+\zeta(\nabla_{13}\cdot\nabla_{23}),$
(2.2b) $\displaystyle=$
$\displaystyle-\xi\Delta_{12}-\beta\Delta_{13}+\eta(\nabla_{12}\cdot\nabla_{13}).$
(2.2c)
The real numbers $\alpha$, $\beta$, $\gamma$, $\xi$, $\zeta$, $\eta$ are equal
to
$\displaystyle\alpha=\frac{1}{2}\biggl{(}\frac{1}{m_{2}}+\frac{1}{m_{3}}\biggr{)},\quad\beta=\frac{1}{2}\biggl{(}\frac{1}{m_{1}}+\frac{1}{m_{2}}\biggr{)},\quad\gamma=\frac{1}{m_{2}},$
$\displaystyle\xi=\frac{1}{2}\biggl{(}\frac{1}{m_{1}}+\frac{4}{m_{2}}+\frac{1}{m_{3}}\biggr{)},\quad\zeta=-\biggl{(}\frac{2}{m_{2}}+\frac{1}{m_{3}}\biggr{)},\quad\eta=-\biggl{(}\frac{1}{m_{1}}+\frac{2}{m_{2}}\biggr{)}.$
###### Proof.
Combining Lemmas 1–2, simply substitute $G\theta=0$ in eq. (2.1b) for
$\theta\in D$, and the result follows. ∎
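The substitution made in this proof is easy to check symbolically. In the sketch below (an illustration only, with SymPy assumed available), the gradients $\nabla_{12}$, $\nabla_{23}$, $\nabla_{13}$ are modelled by commuting symbols, the constraint $G\theta=0$ is imposed by eliminating the symbol standing for $\nabla_{13}$, and the resulting coefficients are compared with $\alpha$, $\beta$, $\gamma$ of Corollary 1.

```python
import sympy as sp

m1, m2, m3 = sp.symbols('m1 m2 m3', positive=True)
g12, g23 = sp.symbols('g12 g23')      # commuting stand-ins for nabla_12, nabla_23
g13 = -(g12 + g23)                    # constraint G = nabla_12 + nabla_23 + nabla_13 = 0

# T_0 = -sum_i (2 m_i)^{-1} d_i^2, with d_1, d_2, d_3 as defined in Lemma 1
d1, d2, d3 = g12 + g13, g23 - g12, -g23 - g13
T0 = sp.expand(-(d1**2 / (2 * m1) + d2**2 / (2 * m2) + d3**2 / (2 * m3)))

P = sp.Poly(T0, g12, g23)
alpha = -P.coeff_monomial(g12**2)     # coefficient of -Delta_12
beta  = -P.coeff_monomial(g23**2)     # coefficient of -Delta_23
gamma =  P.coeff_monomial(g12 * g23)  # coefficient of (nabla_12 . nabla_23)

assert sp.simplify(alpha - (1/m2 + 1/m3) / 2) == 0
assert sp.simplify(beta  - (1/m1 + 1/m2) / 2) == 0
assert sp.simplify(gamma - 1/m2) == 0
print(alpha, beta, gamma)
```

This confirms the coefficients of eq. (2.2a); the other two forms, eqs. (2.2b)–(2.2c), can be checked in the same way by eliminating the symbol for $\nabla_{12}$ or $\nabla_{23}$ instead.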
###### Remark 5.
Since all three $T_{0}$-forms are equivalent, we choose the first one, eq. (2.2a). Note that, in general, Corollary 1 does not apply to $T_{0}$,
eq. (2.1b), representing the map from $D^{\prime}$ to $L^{2}(\bm{R}^{6})$
(Remark 4).
### 2.2. Unitary equivalence
Let $V_{n}$, for every integer $n=0,1,\ldots$, denote $G_{z}^{n}V^{\prime}$,
where $G_{z}^{n}=G_{z}G_{z}\ldots G_{z}$ ($n$ times); clearly, $G_{z}^{0}$ is
the identity operator and $V_{0}\equiv V^{\prime}$. Let $\bm{I}_{\kappa}$
($\kappa=0,1,\ldots$),
$\bm{I}_{\kappa}=\\{(\bm{r}_{12},\bm{r}_{23},\bm{r}_{13})\in\bm{R}^{9}\\!\colon\thinspace\bm{r}_{12}+\bm{r}_{23}=\bm{r}_{13},V_{\kappa+p}=0\;\forall
p=1,2,\ldots\\},$ (2.3)
be a nonempty subset in $\bm{R}^{6}$ such that for some arbitrary integer $\kappa\geq 0$, the $\kappa_{1}^{\text{th}}$ derivative of the smooth function $V^{\prime}$ is equal to zero for all $\kappa_{1}>\kappa$ (the derivative under consideration is defined in Lemma 2). In what follows, we shall identify
the spaces endowed with vectors in $\bm{I}_{\kappa}$ by $\kappa$; hence
$D_{\kappa}$, $D_{\kappa}^{\prime}$ and $D_{0,\kappa}$. Here, domains
$D_{\kappa}$ and $D_{\kappa}^{\prime}$ are considered in a similar way as $D$
and $D^{\prime}$ in Lemma 2 with $D(T_{0})$ replaced by $D_{0,\kappa}$,
whereas $D_{0,\kappa}$ is the set of functions from $L^{2}(\bm{I}_{\kappa})$
such that: (1) $\|T_{0}\varphi\|_{\kappa}<\infty$ for all $\varphi\in
D_{0,\kappa}$; (2) $T_{0}$ is self-adjoint on $D_{0,\kappa}$; (3)
$D(V_{n})\supset D_{0,\kappa}$ for all $n=0,1,\ldots$ Here and elsewhere
$\|\cdot\|_{\kappa}$ is the $L^{2}(\bm{I}_{\kappa})$-norm. By items (1) to
(3), the operator sum $T_{0}+V_{n}$, denoted $H_{n}$, is a self-adjoint
operator in $L^{2}(\bm{I}_{\kappa})$ with domain $D_{0,\kappa}$. In
particular, $H_{n}=T_{0}$ for all $n=\kappa+1,\kappa+2,\ldots$
###### Remark 6.
We shall clarify the meaning of the $L^{2}(\bm{I}_{\kappa})$-norm. As is clear from the definition, eq. (2.3), $L^{2}(\bm{I}_{\kappa})$ is nothing but $L^{2}(\bm{R}^{6})$, with the (Lebesgue) measure $d\mu_{\kappa}$ whose exact
form depends on the form of $V^{\prime}$. One writes
$L^{2}(\bm{I}_{\kappa})\equiv L^{2}(\bm{R}^{6},d\mu_{\kappa})$ and
$L^{2}(\bm{R}^{6})\equiv L^{2}(\bm{R}^{6},d\mu)$, where
$\mu_{\kappa}=\mu_{\kappa}(\bm{I}_{\kappa})$ and
$\mu_{\kappa}(\bm{I}_{\kappa})\subset\mu(\bm{R}^{6})$. Moreover, given
$\kappa=0,1,\ldots$, the measures $\\{\mu_{\kappa}\\}$ are mutually singular
since they satisfy
$\mu_{\kappa}(\bm{R}^{6})=\mu_{\kappa}(\bm{I}_{\kappa})+\mu_{\kappa}(\bm{R}^{6}\backslash\bm{I}_{\kappa})=\mu_{\kappa}(\bm{I}_{\kappa})$
provided $\mu_{\kappa}(X\backslash\bm{I}_{\kappa})=0$ by eq. (2.3) for any
subset $\bm{I}_{\kappa}\subset X\subset\bm{R}^{6}$. Indeed, consider $\kappa=0$ and $1$, and define $\bm{I}_{*}=\bm{I}_{0}\cup\bm{I}_{1}$. Then
$\mu_{0}(\bm{I}_{*})+\mu_{1}(\bm{I}_{*})=\mu_{0}(\bm{I}_{0})+\mu_{1}(\bm{I}_{*}\backslash\bm{I}_{0})$.
But $\mu_{1}(\bm{I}_{*}\backslash\bm{I}_{0})=\mu_{1}(\bm{I}_{1})$. Therefore,
for $\mu_{*}=\mu_{0}+\mu_{1}$, $L^{2}(\bm{R}^{6},d\mu_{*})$ is isomorphic to
$L^{2}(\bm{R}^{6},d\mu_{0})\oplus L^{2}(\bm{R}^{6},d\mu_{1})$, which can be
naturally applied to arbitrary $\kappa$. Hence for
$\mu^{\prime}=\mu_{0}+\mu_{1}+\ldots$, $L^{2}(\bm{R}^{6},d\mu^{\prime})$ is
isomorphic to $L^{2}(\bm{R}^{6},d\mu_{0})\oplus
L^{2}(\bm{R}^{6},d\mu_{1})\oplus\ldots$. Below we shall demonstrate that
$\mu^{\prime}=\mu$.
Finally, we are in a position to define the $L^{2}(\bm{I}_{\kappa})$-norm. By
assumption $\mu^{\prime}=\mu$, one may write
$\int_{\bm{R}^{6}}d\mu=\sum_{\kappa}\int_{\bm{I}_{\kappa}}d\mu_{\kappa}$,
where the sum runs from $0$ to $\infty$. Equivalently,
$\int_{\bm{I}_{\kappa}}d\mu_{\kappa}=\lambda_{\kappa}\int_{\bm{R}^{6}}d\mu$
for some nonzero $\lambda_{\kappa}$ such that
$\sum_{\kappa}\lambda_{\kappa}=1$. With this definition, the statement that the operator $H_{n}$ is in $L^{2}(\bm{I}_{\kappa})$ actually means that the operator $\lambda_{\kappa}^{-1}H_{n}$ is in $L^{2}(\bm{R}^{6},\lambda_{\kappa}d\mu)$,
and therefore there is unitary $U$ such that
$U(\lambda_{0}^{-1}H_{0}\oplus\lambda_{1}^{-1}H_{1}\oplus\ldots)U^{-1}$ is in
$L^{2}(\bm{R}^{6})$.
The reason for defining a subset $\bm{I}_{\kappa}$ is to divide a noncommutative relation $[G_{z},H]$ into commutative ones, as will be shown below. The properties of $\bm{I}_{\kappa}$ in the case of Coulomb potentials will be assembled in §3.1.
The main goal of the present paragraph is to demonstrate that the eigenvalues
of $H$ in $L^{2}(\bm{R}^{6})$ can be established from the eigenvalues of
$H_{\kappa}$ in $L^{2}(\bm{I}_{\kappa})$.
###### Theorem 1.
Given the operator $[u,v](\varphi)=w(\varphi)$ in $\bm{I}_{\kappa}$ for
$\kappa=0,1,\ldots$ and $\varphi\in D_{0,\kappa}$. The elements $u$, $v$, $w$
denote any operator from $G_{z}$, $V_{0}$, $V_{1}$, $\ldots$, $V_{\kappa}$.
Then (1)
$[G_{z},V_{n}]=V_{n+1},\quad[V_{n},V_{m}]=0\quad\text{for all}\quad
n,m=0,1,2,\ldots,\kappa.$ (2.4)
The element $V_{n}$ denotes the operator of multiplication by $V_{n}$. (2) The
commutation relations in eq. (2.4) define the Lie algebra, denoted
$\mathcal{A}=\mathcal{A}(\bm{I}_{\kappa})$, with an operation
$\bm{I}_{\kappa}\times\bm{I}_{\kappa}\to\bm{I}_{\kappa}$.
###### Proof.
First, we shall prove eq. (2.4). Second, we shall demonstrate that
$\mathcal{A}$ is a Lie algebra indeed. Note that eventually the commutator
$[G_{z},V_{n}]$ terminates at $n=\kappa$ due to the definition of
$\bm{I}_{\kappa}$, eq. (2.3).
(1) Let us calculate the first commutator in eq. (2.4), namely,
$\displaystyle[G_{z},V_{n}](\varphi)=$ $\displaystyle
G_{z}V_{n}(\varphi)-V_{n}G_{z}(\varphi)=G_{z}(V_{n}\varphi)-V_{n}(G_{z}\varphi)$
$\displaystyle=$
$\displaystyle(G_{z}V_{n})(\varphi)=(G_{z}^{n+1}V^{\prime})(\varphi).$
But $(G_{z}^{n+1}V^{\prime})(\varphi)=V_{n+1}(\varphi)$, by definition.
Therefore, the identity is immediate. The second commutator,
$[V_{n},V_{m}](\varphi)=V_{n}V_{m}(\varphi)-V_{m}V_{n}(\varphi)=0$, is evident
provided $V_{n}$ and $V_{m}$ represent numerical functions.
(2) Elements $u=G_{z}$, $V_{0}$, $V_{1}$, $\ldots$, $V_{\kappa}$ form the
basis of a linear space $\mathcal{A}(\bm{I}_{\kappa})$ of dimension
$\kappa+2$. If endowed with the binary operation
$\mathcal{A}\times\mathcal{A}\to\mathcal{A}$, denoted $(u,v)\mapsto[u,v]$ for
all $u$, $v$ in $\mathcal{A}$, the linear space $\mathcal{A}$ must fulfill the
bilinearity, anticommutativity and Jacobi identity.
Bilinearity: $[au+bv,w]=a[u,w]+b[v,w]$ for any scalars $a$, $b$. Due to the
commutativity $au+bv=bv+au$ and distributivity $au+bu=(a+b)u$, it suffices to
consider two cases: (a) $u=G_{z}$, $v=V_{n}$, $w=V_{m}$; (b) $u=G_{z}$,
$v=V_{n}$, $w=G_{z}$; in all cases $n$, $m=0,1,\ldots,\kappa$.
(a):
$\displaystyle[aG_{z}+bV_{n},V_{m}](\varphi)=(aG_{z}+bV_{n})V_{m}(\varphi)-V_{m}(aG_{z}+bV_{n})(\varphi)$
$\displaystyle=aG_{z}V_{m}(\varphi)+bV_{n}V_{m}(\varphi)-aV_{m}G_{z}(\varphi)-bV_{m}V_{n}(\varphi)$
$\displaystyle=a[G_{z},V_{m}](\varphi)+b[V_{n},V_{m}](\varphi)$ (b):
$\displaystyle[aG_{z}+bV_{n},G_{z}](\varphi)=(aG_{z}+bV_{n})G_{z}(\varphi)-G_{z}(aG_{z}+bV_{n})(\varphi)$
$\displaystyle=aG_{z}G_{z}(\varphi)+bV_{n}G_{z}(\varphi)-aG_{z}G_{z}(\varphi)-bG_{z}V_{n}(\varphi)$
$\displaystyle=a[G_{z},G_{z}](\varphi)+b[V_{n},G_{z}](\varphi)$
Hence $[\cdot,\cdot]$ is bilinear.
Anticommutativity: $[u,v]=-[v,u]$; in particular, $[u,u]=0$. This property is
easy to verify by using the distributivity of the addition operation. Two
cases are considered: (a) $u=G_{z}$, $v=V_{n}$; (b) $u=V_{n}$, $v=V_{m}$; in
all cases $n$, $m=0,1,\ldots,\kappa$.
(a):
$\displaystyle[G_{z},V_{n}](\varphi)=G_{z}V_{n}(\varphi)-V_{n}G_{z}(\varphi)=-(V_{n}G_{z}-G_{z}V_{n})(\varphi)=-[V_{n},G_{z}](\varphi)$
(b):
$\displaystyle[V_{n},V_{m}](\varphi)=V_{n}V_{m}(\varphi)-V_{m}V_{n}(\varphi)=-(V_{m}V_{n}-V_{n}V_{m})(\varphi)=-[V_{m},V_{n}](\varphi)$
Hence $[\cdot,\cdot]$ is anticommutative.
Jacobi identity: $[[u,v],w]+[[v,w],u]+[[w,u],v]=0$. The identity is
antisymmetric with respect to the permutation of any two elements. Thus it
suffices to choose $u=G_{z}$, $v=V_{n}$, $w=V_{m}$ ($n$,
$m=0,1,\ldots,\kappa$). Then applying eq. (2.4) and anticommutativity of
$[\cdot,\cdot]$ one finds that
$\displaystyle[[G_{z},V_{n}],V_{m}](\varphi)+[[V_{n},V_{m}],G_{z}](\varphi)+[[V_{m},G_{z}],V_{n}](\varphi)=[G_{z},V_{n}]V_{m}(\varphi)$
$\displaystyle-
V_{m}V_{n+1}(\varphi)+[V_{n},V_{m}]G_{z}(\varphi)-[G_{z},V_{m}]V_{n}(\varphi)+V_{n}V_{m+1}(\varphi)$
$\displaystyle=[V_{n+1},V_{m}](\varphi)+[V_{n},V_{m+1}](\varphi)=0.$
This completes the proof. ∎
###### Corollary 2.
The Lie algebra $\mathcal{A}(\bm{I}_{\kappa})$ is nilpotent with the
nilpotency class $\kappa+1$.
###### Proof.
We need only to compute the length of the lower central series containing
ideals $\mathcal{A}^{c}=[\mathcal{A},\mathcal{A}^{c-1}]$, where
$\mathcal{A}^{0}=\mathcal{A}$, $c\geq 1$. Assume that $c=1$. Then
$\mathcal{A}^{1}=[\mathcal{A},\mathcal{A}]$ is spanned by the elements
$[u,v]$, where $u$, $v$ are in $\mathcal{A}$. Thus by eq. (2.4), the first
commutator subalgebra $\mathcal{A}^{1}$ contains elements
$\\{V_{n}\\}_{n=1}^{\kappa}$. Similarly, the second subalgebra ($c=2$)
$\mathcal{A}^{2}=[\mathcal{A},\mathcal{A}^{1}]$ is spanned by the elements
$[u,v]$, where $u\in\mathcal{A}$ and $v\in\mathcal{A}^{1}$, hence
$\\{V_{n}\\}_{n=2}^{\kappa}$ etc. Finally, $\mathcal{A}^{\kappa+1}=0$. This
proves that $\mathcal{A}$ is nilpotent with the central series of length
$\kappa+1$. ∎
The nilpotency of $\mathcal{A}(\bm{I}_{\kappa})$ implies the existence of an
isomorphism from $\mathcal{A}$ to the Lie algebra of strictly upper-triangular
matrices ([Hum72, §I.3]; [ZS83, §3.1.6]). Equivalently, there exists a
representation
$\varrho\\!\colon\thinspace\mathcal{A}\to\mathfrak{gl}(\bm{I}_{\kappa})$ given
by $\varrho(e)=G_{z}$, $\varrho(f_{n})=V_{n}$ ($n=0,1,\ldots,\kappa$), where
elements $g=e,f_{n}\in\mathcal{A}$ denote the strictly upper-triangular
matrices; in particular, $\mathcal{A}(\bm{I}_{1})$ is isomorphic to the
Heisenberg algebra, whereas $\mathcal{A}(\bm{I}_{0})$ is commutative. Clearly,
$[e,f_{n}]=f_{n+1}$, $[f_{n},f_{m}]=0$ for all $n,m=0,1,\ldots,\kappa$, and
$f_{\kappa+p}=0$ for all $p\geq 1$.
Assume now that $\mathcal{L}$ is a matrix Lie group with Lie algebra
$\mathcal{A}$ and $\exp\\!\colon\thinspace\mathcal{A}\to\mathcal{L}$ is the
exponential mapping for $\mathcal{L}$. Then the $\exp(\mathrm{i}tg)$ are in
$\mathcal{L}$ for all $g\in\mathcal{A}$ and for all $t\in\bm{R}$. Provided
$\Pi\\!\colon\thinspace\mathcal{L}\to\mathtt{GL}(\bm{I}_{\kappa})$ is a
representation of $\mathcal{L}$ on $\bm{I}_{\kappa}$, we get that
$\Pi\bigl{(}\exp(\mathrm{i}g)\bigr{)}=\exp\bigl{(}\mathrm{i}\varrho(g)\bigr{)}$.
But $\varrho(e)=G_{z}$, and thus
$\Pi\bigl{(}\exp(\mathrm{i}e)\bigr{)}(\theta)=I(\theta)$, the identity for all
$\theta\in D_{\kappa}$, by Lemma 2. On the other hand, $\varrho(f_{n})=V_{n}$
($n=0,1,\ldots,\kappa$) and
$\Pi\bigl{(}\exp(\mathrm{i}f_{n})\bigr{)}(\theta)=\exp(\mathrm{i}V_{n})(\theta)$.
The elements $I$, $\\{\exp(\mathrm{i}tV_{n})\\}_{n=0}^{\kappa}$ therefore form
a (bounded) group of unitary operators given by the map $D_{\kappa}\to
L^{2}(\bm{I}_{\kappa})$ for all $t\in\bm{R}$. In turn, it is a subgroup of the
group generated by $\mathrm{i}H_{n}$, where $H_{n}=T_{0}+V_{n}$ and
$T_{0}\\!\colon\thinspace D_{0,\kappa}\to L^{2}(\bm{I}_{\kappa})$ is self-
adjoint. Due to the filtering
$\mathcal{A}\supset\mathcal{A}^{1}\supset\ldots\supset\mathcal{A}^{\kappa}$,
the elements of a set $\\{H_{n}\\}_{n=0}^{\kappa}$ converge to a single
$H_{\kappa}$ [recall that the kernel of a Lie algebra homomorphism
$\mathcal{A}^{c}\to\mathcal{A}^{c+1}$ is $\\{f_{c}\\}$], which in turn
commutes with $G_{z}$, namely, $[G_{z},H_{\kappa}]=0$, by Corollary 2. As a
result, the eigenfunctions of operator $H_{\kappa}\\!\colon\thinspace
D_{\kappa}\to L^{2}(\bm{I}_{\kappa})$ are those of $G_{z}$, and thus
$\theta\in\mathrm{Ker}(E_{\kappa}-H_{\kappa})\cap\mathrm{Ker}(0-G_{z})\neq\varnothing$
for $E_{\kappa}\in\sigma(H_{\kappa})$, by Lemmas 1–2. In particular, whenever
$\bm{I}_{0}$ is nonempty, one should expect that $E_{0}=E$ due to the formal
coincidence of $H_{0}$ with $H$ (Remark 4). However, $H$ is in
$L^{2}(\bm{R}^{6})$ and it is defined on $D^{\prime}$ whereas $H_{0}$ in
$L^{2}(\bm{I}_{\kappa})$ is defined on $D_{\kappa}$ at a particular value
$\kappa=0$ (see also Remark 5). This means $E_{0}\neq E$, in general (that is,
for smooth $V^{\prime}$). On the other hand, provided $\bm{I}_{\kappa}$ is
nonempty for arbitrary $\kappa$, one finds from the above considered Lie
algebra filtering that $D^{\prime}$ is the space decomposition
$\oplus_{\kappa}D_{\kappa}$, where $\kappa$ goes from $0$ to $\infty$. But
$D^{\prime}$ is dense in $L^{2}(\bm{R}^{6})$ and $D_{\kappa}$ in
$L^{2}(\bm{I}_{\kappa})$. Thus by Remark 6, $U$ is the unitary transformation
from $\oplus_{\kappa}L^{2}(\bm{I}_{\kappa})$ to $L^{2}(\bm{R}^{6})$ so that
$U(\lambda_{0}^{-1}H_{0}\oplus\lambda_{1}^{-1}H_{1}\oplus\ldots)U^{-1}=H$.
Subsequently, for $E\in\sigma(H)$ and $E_{\kappa}\in\sigma(H_{\kappa})$, one
finds that $E=\sum_{\kappa}c_{\kappa}E_{\kappa}$, where
$c_{\kappa}=\lambda_{\kappa}^{-1}(\|\theta\|_{\kappa}/\|\phi\|)^{2}$. But
$\sum_{\kappa}\lambda_{\kappa}=1$ and
$\|\phi\|^{2}=\sum_{\kappa}\|\theta\|_{\kappa}^{2}$, with
$\theta=\theta(E_{\kappa})$. One thus derives
$\lambda_{\kappa}=(\|\theta\|_{\kappa}/\|\phi\|)^{2}$ and
$E=\sum_{\kappa=0}^{\infty}E_{\kappa}.$ (2.5)
As a result, we have established that solutions to the initially admitted
eigenvalue equation $H\phi=E\phi$ in $L^{2}(\bm{R}^{6})$ are obtained by
solving $H_{\kappa}\theta=E_{\kappa}\theta$ in $L^{2}(\bm{I}_{\kappa})$, where
$H_{\kappa}=T_{0}+V_{\kappa}$ with $T_{0}$ given in eq. (2.2a) and $\theta\in
D_{\kappa}$.
In the next section, we shall be concerned with the Coulomb potentials, though
one can easily enough apply the method to be presented to other spherically
symmetric potentials imposed under (A1)–(A3).
## 3\. Solutions for the three-body Hamiltonian operator with Coulomb
potentials
The Coulomb potential $V^{\prime}$ is a spherically symmetric translation
invariant function represented as a sum of functions
$V_{ij}^{\prime}=Z_{ij}/r_{ij}$, eq. (1.2); for the notations exploited here,
recall Remark 1. The scalar $Z_{ij}=Z_{ji}=Z_{i}Z_{j}$, where $Z_{i}$
($i=1,2,3$) denotes a nonzero integer (the charge of the $i^{\text{th}}$
particle). The spherical symmetry in $\bm{R}^{3}$ preserves rotation
invariance under SO(3) thus simplifying the Laplacian
$\Delta_{ij}=d^{2}/dr_{ij}^{2}+(2/r_{ij})d/dr_{ij}-l(l+1)/r_{ij}^{2}$, where
$l$ labels the SO(3)-irreducible representation. Bearing in mind Remarks 2–5,
we shall use $l_{1}$ to label representations for $ij=12$, and $l_{2}$ for
$ij=23$; the associated basis indices will be identified by
$\pi_{1}=-l_{1},-l_{1}+1,\ldots,0,1,\ldots,l_{1}$ and by
$\pi_{2}=-l_{2},-l_{2}+1,\ldots,0,1,\ldots,l_{2}$, respectively.
### 3.1. The stability criterion
Let us first study the properties of a subset $\bm{I}_{\kappa}$ introduced in
§2.2. Proceeding from the definition, eq. (2.3), we deduce that the nilpotency
of $\mathcal{A}$ is ensured whenever (see also the proof of Corollary 4)
$\sum_{1\leq i<j\leq 3}\frac{Z_{ij}}{r_{ij}^{k}}=0\quad\text{for all}\quad
k=\kappa+p+1=2,3,\ldots\quad\text{for all}\quad p=1,2,\ldots$ (3.1)
provided the $z$ axis is suitably oriented. Equation (3.1) suggests that at
least one integer $Z_{i}$ from $Z_{1}$, $Z_{2}$, $Z_{3}$ must be of opposite
sign – this is what we call the stability criterion for the Coulomb three-body
system (see also Remark 10). There is the classical picture to it: If the
three particles are all negatively (positively) charged, they move off from
each other to infinity due to the Coulomb repulsion. A well-known observation
follows therefore from the requirement that the Lie algebra $\mathcal{A}$ were
nilpotent. Henceforth, we accept the criterion validity.
Linearly dependent vectors $\\{\bm{r}_{ij}\\}$ (Remark 2) form a triangle
embedded in $\bm{R}^{3}$. Based on the present condition, we can prove the
following result.
###### Lemma 3.
Let $\omega_{k}$, $\sigma_{k}$, $\tau_{k}$ denote the angles between the pairs
of vectors $(\bm{r}_{12}$, $\bm{r}_{13})$, $(\bm{r}_{13}$, $\bm{r}_{23})$,
$(\bm{r}_{12}$, $\bm{r}_{23})$, respectively. If a given three-body system is
stable, then for any integer $k\geq 2$ such that (i)
$(Z_{2}/Z_{3})+(Z_{2}/Z_{1})\wp^{k}<0$ and (ii)
$\displaystyle C_{1}(\wp)\leq\Biggl{(}-\frac{Z_{13}}{Z_{12}+Z_{23}\wp^{k}}\Biggr{)}^{1/k}\leq\frac{1+\wp}{\wp},\qquad C_{1}(\wp)=\begin{cases}\frac{1-\wp}{\wp},&0<\wp\leq 1,\\ 0,&\wp>1,\end{cases}$ ($\sigma_{k}$ is acute)
$\displaystyle 0\leq\Biggl{(}-\frac{Z_{13}}{Z_{12}+Z_{23}\wp^{k}}\Biggr{)}^{1/k}\leq C_{2}(\wp),\qquad C_{2}(\wp)=\begin{cases}0,&0<\wp\leq 1,\\ \frac{\wp-1}{\wp},&\wp>1,\end{cases}$ ($\sigma_{k}$ is obtuse),
there exists a multiplier $\wp\geq 0$ satisfying
$\sin\sigma_{k}=\wp\sin\omega_{k}$ so that eq. (3.1) holds for all
$0\leq\omega_{k},\sigma_{k}\leq\pi$ such that: (1) if
$0\leq\omega_{k},\sigma_{k}$ $\leq\pi/2$, then $0\leq\wp\leq
1/\sin\omega_{k}$; (2) if $0\leq\omega_{k}\leq\pi/2$ and
$\pi/2<\sigma_{k}\leq\pi$, then $1<\wp\leq 1/\sin\omega_{k}$; (3) if
$\pi/2<\omega_{k}\leq\pi$ and $0\leq\sigma_{k}\leq\pi/2$, then $0\leq\wp<1$;
(4) if $\pi/2<\omega_{k},\sigma_{k}\leq\pi$, then $\wp$ does not exist. In
case that $Z_{1}/Z_{3}<0$, the multiplier $\wp\neq(-Z_{1}/Z_{3})^{1/k}$ for
all suitable $0\leq\omega_{k},\sigma_{k}\leq\pi$.
###### Remark 7.
In particular, the lemma states that for a certain integer $k\geq 2$, if such exists at all, one can find a multiplier $\wp\geq 0$ such that the angles
$\omega_{k}$, $\sigma_{k}$, $\tau_{k}$ obtained from relations
$\sin\sigma_{k}=\wp\sin\omega_{k}$ and $\sin\tau_{k}=c_{k}\sin\omega_{k}$
(where $\tau_{k}=\omega_{k}+\sigma_{k}$) solve eq. (3.1). The multiplier
$c_{k}=\bigl{(}-Z_{13}/(Z_{12}+Z_{23}\wp^{k})\bigr{)}^{1/k}\wp$. Clearly, one
should bring to mind the sine law relating angles with the associated sides of
a triangle $\bm{r}_{12}+\bm{r}_{23}=\bm{r}_{13}$.
###### Corollary 3.
Let
$D_{k}(\wp)=\bigl{\\{}0\leq\omega_{k},\sigma_{k},\tau_{k}\leq\pi\\!\colon\thinspace\omega_{k}+\sigma_{k}-\tau_{k}=0,\sin\sigma_{k}=\wp\sin\omega_{k},\sin\tau_{k}=c_{k}\sin\omega_{k}\bigr{\\}}.$
The set $D_{k}(\wp)$ is nonempty if $k\geq 2$, the triplet ($1$, $\wp$,
$c_{k}$) fulfills the triangle validity, and (1) $Z_{1},Z_{2}<0$, $Z_{3}>0$ or
$Z_{1},Z_{2}>0$, $Z_{3}<0$ and $\wp^{k}<-Z_{1}/Z_{3}$ or (2) $Z_{1}>0$,
$Z_{2},Z_{3}<0$ or $Z_{1}<0$, $Z_{2},Z_{3}>0$ and $\wp^{k}>-Z_{1}/Z_{3}$ or
(3) $Z_{1},Z_{3}>0$, $Z_{2}<0$ or $Z_{1},Z_{3}<0$, $Z_{2}>0$ and $\wp>0$.
Otherwise, $D_{k}(\wp)=\varnothing$.
###### Proof of Lemma 3.
Although the proof to be produced fits any positive integer $k$, we shall impose the stronger restriction $k\geq 2$, due to eq. (3.1).
The combination of the sine law,
$r_{23}^{-1}\sin\omega_{k}=r_{12}^{-1}\sin\sigma_{k}=r_{13}^{-1}\sin\tau_{k}$,
and eq. (3.1) points to the following equation
$Z_{12}+Z_{23}\frac{\sin^{k}\sigma_{k}}{\sin^{k}\omega_{k}}+Z_{13}\frac{\sin^{k}\sigma_{k}}{\sin^{k}\tau_{k}}=0.$
[Note that the values $\omega_{k},\tau_{k}=0,\pi$ are allowed as well by
implying $\sigma_{k}=0,\pi$.] Then the expression for $c_{k}$ (refer to Remark
7) follows immediately if $\sin\sigma_{k}=\wp\sin\omega_{k}$ ($\wp\geq 0$).
The quantity in parentheses $()^{1/k}$ in $c_{k}$ is positive definite and
thus item (i) follows as well. Clearly, the denominator is nonzero; otherwise
$Z_{1}/Z_{3}<0$ and $\wp\neq(-Z_{1}/Z_{3})^{1/k}$ must hold.
By noting that
$0<r_{13}/r_{12}=\cos\omega_{k}\pm(1/\wp^{2}-\sin^{2}\omega_{k})^{1/2}$, we
find from eq. (3.1)
$Z_{12}+Z_{23}\wp^{k}+\frac{Z_{13}}{\bigl{(}\cos\omega_{k}\pm(1/\wp^{2}-\sin^{2}\omega_{k})^{1/2}\bigr{)}^{k}}=0,$
where "$+$" is for $0\leq\sigma_{k}\leq\pi/2$, and "$-$" for
$\pi/2<\sigma_{k}\leq\pi$. Items (1)–(4) follow directly from the above
equation: e.g., let $\omega_{k}$, $\sigma_{k}>\pi/2$. The denominator is of the form $(-x)^{1/k}$, $x>0$, hence improper (item (4) of the lemma).
Substitute $\tau_{k}=\omega_{k}+\sigma_{k}$ and
$\sin\sigma_{k}=\wp\sin\omega_{k}$ in $\sin\tau_{k}=c_{k}\sin\omega_{k}$ and
get
$c_{k}\sin\omega_{k}=\sin\bigl{(}\pm\omega_{k}+\arcsin(\wp\sin\omega_{k})\bigr{)}$
[here, again, "$+$" is for $0\leq\sigma_{k}\leq\pi/2$, and "$-$" for
$\pi/2<\sigma_{k}\leq\pi$] yielding the estimates $1-\wp\leq c_{k}\leq 1+\wp$
for acute $\sigma_{k}$, and $0\leq c_{k}\leq\wp-1$ for obtuse $\sigma_{k}$.
Provided $c_{k}\geq 0$, substitute the definition for $c_{k}$ in obtained
inequalities and get item (ii). This completes the proof. ∎
###### Proof of Corollary 3.
Items (1)–(3) are obvious due to item (i) of Lemma 3. It remains to
demonstrate the triangle validity for $1$, $\wp$, $c_{k}$. This is done by
solving the equation
$c_{k}=\wp\biggl{(}\cos\omega_{k}\pm\sqrt{1/\wp^{2}-\sin^{2}\omega_{k}}\biggr{)}=\wp\Biggl{(}-\frac{Z_{13}}{Z_{12}+Z_{23}\wp^{k}}\Biggr{)}^{1/k}$
which yields
$\sin\omega_{k}=\frac{\bigl((1+c_{k}+\wp)(1+c_{k}-\wp)(1-c_{k}+\wp)(c_{k}+\wp-1)\bigr)^{1/2}}{2\wp c_{k}}$ (3.2)
and hence the triangle validity for the triplet $(1,\wp,c_{k})$ must hold due
to the inequality $0\leq\sin\omega_{k}\leq 1$. ∎
###### Corollary 4.
The spherically symmetric functions $V_{\kappa}$ can be represented in three
equivalent forms, two of which are
$V_{1}^{\kappa}=Z_{12}(\wp;\kappa,k)/r_{12}^{\kappa+1}$,
$V_{2}^{\kappa}=Z_{23}(\wp;\kappa,k)/r_{23}^{\kappa+1}$ with
$Z_{12}(\wp;\kappa,k)=(-1)^{\kappa}\kappa!\biggl(Z_{12}+Z_{23}\wp^{\kappa+1}+Z_{13}\Bigl(\frac{\wp}{c_{k}}\Bigr)^{\kappa+1}\biggr),$
and $Z_{23}(\wp;\kappa,k)=Z_{12}(\wp;\kappa,k)/\wp^{\kappa+1}$.
###### Proof.
Differentiate $V_{ij}^{\prime}$ $\kappa$ times with respect to $r_{ij}$,
$\frac{d^{\kappa}}{dr_{ij}^{\kappa}}\frac{Z_{ij}}{r_{ij}}=\frac{(-1)^{\kappa}\kappa!Z_{ij}}{r_{ij}^{\kappa+1}}.$
Then
$V_{\kappa}=\sum_{1\leq i<j\leq 3}\frac{d^{\kappa}}{dr_{ij}^{\kappa}}V_{ij}^{\prime}=(-1)^{\kappa}\kappa!\sum_{1\leq i<j\leq 3}\frac{Z_{ij}}{r_{ij}^{\kappa+1}}.$
Using the law of sines and Lemma 3,
$\frac{r_{12}}{r_{23}}=\wp,\quad\frac{r_{12}}{r_{13}}=\frac{\wp}{c_{k}}.$
Substituting the above relations into $V_{\kappa}$ gives the result. ∎
###### Remark 8.
Lemma 3 and Corollary 3 provide sufficient information to find nonempty
subsets $\bm{I}_{\kappa}$. Indeed, consider given nonzero integers $Z_{i}$
($i=1,2,3$) and a real, as yet unspecified, multiplier $\wp\geq 0$. First,
establish the possible integers $k\geq 2$ by Lemma 3. Second, substitute the
determined values of $k$ into $c_{k}$. Third, substitute the obtained coefficients
$c_{k}$ into eq. (3.2) to get the possible angles $\omega_{k}\in D_{k}(\wp)$
(alternatively, simply apply Corollary 3); the subset $\bm{I}_{\kappa}$ is
nonempty whenever $D_{k}(\wp)$ is. Applications to some physical systems will
be displayed in §3.2.3.
###### Remark 9.
Although the conditions in Lemma 3 and Corollary 3 are invariant under the
interchange of the integers $Z_{i}$, a particular arrangement may fail to
satisfy Lemma 3. If this is the case, one should select another one. For
example, $Z_{1}=Z_{2}=-1$, $Z_{3}=+1$, $\wp=1$ yields $Z_{1}/Z_{3}<0$ and
$\wp\neq 1$ (see Lemma 3), which contradicts the initially defined $\wp=1$. On
the other hand, $Z_{1}=Z_{3}=-1$, $Z_{2}=+1$, $\wp=1$ yields
$c_{k}=2^{-1/k}$, and all conditions in Lemma 3 as well as in Corollary 3 are
fulfilled. However, if none of the arrangements of $Z_{i}$ fulfills the lemma, one
should conclude that the three-body system is unstable.
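The procedure of Remark 8 can also be carried out numerically. The following minimal Python sketch (ours, for illustration only) computes $c_{k}$, checks the triangle validity of the triplet $(1,\wp,c_{k})$, and recovers $\sin\omega_{k}$ from eq. (3.2) for the second arrangement of this remark; we read the pair coefficients as $Z_{ij}=Z_{i}Z_{j}$, which is consistent with the expressions for $c_{k}$ quoted in §3.2.3.

```python
import numpy as np

def c_k(Z, p, k):
    """Multiplier c_k of Remark 7 for charges Z = (Z1, Z2, Z3), multiplier p = wp, integer k >= 2.
    Returns None when the quantity under the k-th root is not positive (Lemma 3, item (i))."""
    Z1, Z2, Z3 = Z
    Z12, Z23, Z13 = Z1 * Z2, Z2 * Z3, Z1 * Z3      # assumption: Z_ij = Z_i * Z_j
    base = -Z13 / (Z12 + Z23 * p**k)
    return p * base**(1.0 / k) if base > 0 else None

def triangle_valid(a, b, c):
    """Triangle validity for side lengths (a, b, c)."""
    return a + b > c and b + c > a and c + a > b

def sin_omega(p, ck):
    """sin(omega_k) from eq. (3.2) for the triplet (1, wp, c_k)."""
    prod = (1 + ck + p) * (1 + ck - p) * (1 - ck + p) * (ck + p - 1)
    return np.sqrt(prod) / (2 * p * ck)

# Arrangement Z1 = Z3 = -1, Z2 = +1 with wp = 1 (Remark 9): c_k should equal 2**(-1/k)
Z, p = (-1, +1, -1), 1.0
for k in range(2, 7):
    ck = c_k(Z, p, k)
    if ck is not None and triangle_valid(1.0, p, ck):
        print(k, ck, sin_omega(p, ck))
```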
###### Remark 10.
We point out that in the present discussion the definition of stability
differs from that exploited in [CS90, FB92, Hil77]. Here we do not study
stability against dissociation (see also [MRW92, RFGS93]); we assume these
conditions are fulfilled whenever bound states are considered. On the
other hand, provided the three-body system is subjected to the stability
criterion, eq. (3.1), one deduces from Lemma 3 that at least
$\bm{I}_{0}\neq\varnothing$. As an important example of an unbound three-body
system, consider the positron-hydrogen system. Our calculated first excited
energy [substitute, in atomic units, $Z_{1}=Z_{3}=+1$, $Z_{2}=-1$,
$m_{1}=m_{2}=1$, $m_{3}=1836.1527$, $n_{1}=2$, $n_{2}=1$ in eq. (3.8b) and
then convert the result into Rydberg units] equals $E_{0}\simeq-0.25$ Ry,
which is close to the $S$-wave resonance energy (around $-0.257$ Ry)
associated with the hydrogen $n=2$ threshold derived by Kar and Ho [KH05]
(see also [DNW78]). In the positron-hydrogen system, $E_{1}$ does not
affect the total energy $E$ when the first excited states are considered,
which means that some higher eigenstates $E_{\kappa}$ ($\kappa\geq 2$), if such
exist, should be included in order to obtain more accurate energies.
### 3.2. Eigenstates
We wish to evaluate $E\in\sigma_{\textit{disc}}(H)$. By eq. (2.5), $E$ is the
sum of $E_{\kappa}\in\sigma_{\textit{disc}}(H_{\kappa})$, where
$H_{\kappa}=H_{\kappa}^{0}+\gamma(\nabla_{12}\cdot\nabla_{23}),\quad H_{\kappa}^{0}=T_{1}+T_{2}+V^{\prime},\quad\text{and}\quad T_{1}=-\alpha\Delta_{12},\quad T_{2}=-\beta\Delta_{23}$ (3.3)
($\alpha$, $\beta$ and $\gamma$ are as in Corollary 1). We first consider the
Hughes–Eckart term. Following [Sim70, Appendix 2], we demonstrate that:
###### Proposition 1.
$\inf\sigma(H_{\kappa})=\inf\sigma(H_{\kappa}^{0})$.
###### Proof.
In agreement with Corollary 4 consider $H_{\kappa}$ in $\bm{p}$-space
$H_{\kappa}=h_{\kappa}+\beta\bm{p}_{23}^{2}-\gamma(\bm{p}_{12}\cdot\bm{p}_{23}),\quad
h_{\kappa}=\alpha\bm{p}_{12}^{2}+Z_{12}(\wp;\kappa,k)r_{12}^{-\kappa-1}.$
[Note that $Z_{12}(\wp;\kappa,k)r_{12}^{-\kappa-1}$ can be replaced by
$Z_{23}(\wp;\kappa,k)r_{23}^{-\kappa-1}$; see Corollary 4. Subsequently,
$\alpha\bm{p}_{12}^{2}$ is replaced by $\beta\bm{p}_{23}^{2}$ in
$h_{\kappa}$]. Since $\beta>0$, and $h_{\kappa}$ and $\beta\bm{p}_{23}^{2}$
involve independent coordinates, we see
$\inf\sigma(h_{\kappa})=\inf\sigma(H_{\kappa}^{0})$.
Let $\bm{p}=a\bm{p}_{12}+b\bm{p}_{23}$ with $b>a>0$, and set
$H_{\kappa}^{\prime}=H_{\kappa}+\mu\bm{p}^{2}$ with $\mu>0$. Then
$H_{\kappa}^{\prime}=H_{\kappa}^{\prime\prime}+(\mu ab-\gamma/2)\bm{q}^{2}$,
where $\bm{q}=\bm{p}_{12}+\bm{p}_{23}$ and
$H_{\kappa}^{\prime\prime}=h_{\kappa}+[\mu a(a-b)+\gamma/2]\bm{p}_{12}^{2}+[\beta+\mu b(b-a)+\gamma/2]\bm{p}_{23}^{2}.$
We choose $\mu=\gamma/[2a(b-a)]>0$ for $b>a>0$. Then $\mu ab-\gamma/2$ equals
$\gamma a/[2(b-a)]>0$ and
$H_{\kappa}^{\prime\prime}=h_{\kappa}+[\beta+\gamma(1+b/a)/2]\bm{p}_{23}^{2}.$
But then $\inf\sigma(H_{\kappa}^{\prime\prime})=\inf\sigma(h_{\kappa})$ since
$h_{\kappa}$ and $\bm{p}_{23}^{2}$ involve independent coordinates.
Subsequently,
$\inf\sigma(H_{\kappa}^{\prime})=\inf\sigma(H_{\kappa}^{\prime\prime})$, for
$\mu ab-\gamma/2>0$, and finally,
$\inf\sigma(H_{\kappa}^{\prime})=\inf\sigma(H_{\kappa})$. Hence
$\inf\sigma(H_{\kappa}^{0})=\inf\sigma(H_{\kappa})$ as desired. ∎
###### Remark 11.
Proposition 1 tells us that the ground state of $H_{\kappa}$ is that of
$H_{\kappa}^{0}$.
Second, consider $H_{\kappa}^{0}$. It is a self-adjoint operator on
$D_{0,\kappa}$ whose eigenfunctions are in $D_{\kappa}$. By Remark 6,
$L^{2}(\bm{I}_{\kappa})\equiv L^{2}(\bm{R}^{6},d\mu_{\kappa})$. But, on the
other hand, $L^{2}(\bm{I}_{\kappa})$ is isomorphic to
$L^{2}(\bm{R}^{3},d\mu_{\kappa,1})\otimes L^{2}(\bm{R}^{3},d\mu_{\kappa,2})$
provided $d\mu_{\kappa}=d\mu_{\kappa,1}\otimes d\mu_{\kappa,2}$ [RS80, Theorem
II.10]. We denote $L^{2}(\bm{R}^{3},d\mu_{\kappa,i})$ by $\bm{H}_{\kappa}$ for
$i=1,2$. Thus there exist unitary operators $U_{1}$ and $U_{2}$ such that
$U_{1}H_{\kappa}^{0}U_{1}^{-1}=H_{\kappa,1}^{0}\otimes I+I\otimes T_{2},\quad U_{2}H_{\kappa}^{0}U_{2}^{-1}=T_{1}\otimes I+I\otimes H_{\kappa,2}^{0},\quad\text{with}\quad H_{\kappa,i}^{0}=T_{i}+V_{i}^{\kappa}\quad(i=1,2)$ (3.4)
and $V_{i}^{\kappa}$ is as in Corollary 4 for $i=1,2$. But then
$\sigma_{\textit{disc}}(H_{\kappa}^{0})=\sigma_{\textit{disc}}(H_{\kappa,1}^{0})=\sigma_{\textit{disc}}(H_{\kappa,2}^{0})$
(3.5)
since
$\sigma_{\textit{disc}}(T_{1})=\sigma_{\textit{disc}}(T_{2})=\varnothing$.
Equation (3.5) allows one to determine $\wp$ (Lemma 3) for a given $\kappa$ as
well as $E_{\kappa}^{0}\in\sigma_{\textit{disc}}(H_{\kappa}^{0})$.
Indeed, the ordinary decomposition of the product $L^{2}(0,\infty;r^{2}dr)\otimes
L^{2}(S^{2})$, with $r=r_{12}$ for $i=1$ and $r=r_{23}$ for $i=2$ ($S^{2}$ is
the unit sphere), into an infinite sum of SO(3)-irreducible subspaces yields the
eigenfunctions $\theta_{\kappa,l}$ of $H_{\kappa,i}^{0}$, which are of the form
$C_{\kappa}r^{-1}u_{\kappa,l}(r)Y_{l\pi}(\Omega)$: $C_{\kappa}$ is the
normalization constant, $u_{\kappa,l}$ is as in eq. (1.3), and $Y_{l\pi}$ are the
spherical harmonics normalized to $1$ (recall that $\pi=\pi_{1}$ for $i=1$ and
$\pi=\pi_{2}$ for $i=2$; the same holds for $l$ and the spherical angles $\Omega$).
In eq. (1.3), the parameters $A_{\kappa}=Z_{12}(\wp;\kappa,k)/\alpha$ and
$B_{\kappa}=E_{\kappa}^{0}/\alpha$ for $i=1$, and
$A_{\kappa}=Z_{23}(\wp;\kappa,k)/\beta$ and $B_{\kappa}=E_{\kappa}^{0}/\beta$
for $i=2$. Here $\alpha$ and $\beta$ (as well as $\gamma$ in eq. (3.3)) are as
in Corollary 1, and $Z_{12}(\wp;\kappa,k)$ and $Z_{23}(\wp;\kappa,k)$ are as
in Corollary 4. Therefore, if one solves eq. (1.3) with respect to
$B_{\kappa}$ for both $i=1$ and $2$, then the eigenvalues $E_{\kappa}^{0}$ are
found from eq. (3.5): $E_{\kappa}^{0}\propto B_{\kappa}$, where the
coefficient of proportionality is either $\alpha$ ($i=1$) or $\beta$ ($i=2$).
Since $B_{\kappa}$ depends on $A_{\kappa}$ and $A_{\kappa}$ is a function of
$\wp$, eq. (3.5) allows one to establish $\wp$ as well.
Below we shall calculate $\sigma_{\textit{disc}}(H_{\kappa}^{0})$ and in
particular $\inf\sigma_{\textit{disc}}(H_{\kappa})$ for integers $\kappa=0$
and $1$.
#### 3.2.1. Bound states for $\kappa=0$.
Allowing $A_{0}<0$, solutions $u_{0,l}(r)$ to eq. (1.3) appear as a linear
combination of the Whittaker [Whi03] function $W_{n,l+1/2}(2r\sqrt{-B_{0}})$
and its linearly independent, in general, companion solution
$M_{n,l+1/2}(2r\sqrt{-B_{0}})$, with $B_{0}=-A_{0}^{2}/(4n^{2})$ and
$n=l+1,l+2,\ldots$ For $n>l$,
$W_{n,l+1/2}(z)=\bigl{[}(-1)^{n+l+1}(n+l)!/(2l+1)!\bigr{]}M_{n,l+1/2}(z)$ and
thus $M_{n,l+1/2}$ and $W_{n,l+1/2}$ are linearly dependent. It suffices
therefore to select one of them, say $M_{n,l+1/2}(z)$. The boundary conditions
for $u_{0,l}(r)$ in $L^{2}(0,\infty;dr)$ as well as for $u_{0,l}(r)/r$ in
$L^{2}(0,\infty;r^{2}dr)$ are fulfilled: $M_{n,l+1/2}(z)\to 0$ and
$M_{n,l+1/2}(z)/z\to\delta_{l0}$ as $z\to 0$, and $M_{n,l+1/2}(z)\to 0$ and
$M_{n,l+1/2}(z)/z\to 0$ as $z\to\infty$.
On the other hand, if $A_{0}/r$ is a repulsive potential, $A_{0}>0$, then
$u_{0,l}(r)$ is represented by a linear combination of functions
$W_{-n,l+1/2}(2r\sqrt{-B_{0}})$ and $M_{-n,l+1/2}(2r\sqrt{-B_{0}})$. But
$M_{-n,l+1/2}(z)\to\infty$ as $z\to\infty$ and $W_{-n,l+1/2}(z)/z\to\infty$ as
$z\to 0$. Hence no bound states are observed. However, as pointed out by
Albeverio et al. [AGHKH04, Theorem 2.1.3] (see also [EFG12]), a single bound
state exists even if $A_{0}\geq 0$, provided that the Hamiltonian operator
with $l=0$ is defined on a domain of one-parameter self-adjoint extensions.
For an attractive potential, eq. (3.5) yields
$\wp=\frac{n_{1}}{n_{2}}\sqrt{\frac{\alpha}{\beta}}$ (3.6)
(refer to Lemma 3 for the definition of $\wp$) with integers
$n_{1}=l_{1}+1,l_{1}+2,\ldots$ and $n_{2}=l_{2}+1,l_{2}+2,\ldots$ Equation
(3.6) indicates that eigenvalues $E_{0}^{0}$ are labeled by integers
$n_{1},n_{2}=1,2,\ldots$ and $k=2,3,\ldots$, namely,
$E_{0}^{0}=E(n_{1},n_{2},k)$, and
$\sigma_{\textit{disc}}(H_{0}^{0})=\inf_{D_{k}(\wp)\neq\varnothing}\Biggl\{-\frac{1}{4}\biggl(\frac{Z_{12}}{n_{1}\sqrt{\alpha}}+\frac{Z_{3}}{n_{2}\sqrt{\beta}}\Bigl(Z_{2}+\frac{Z_{1}}{c_{k}}\Bigr)\biggr)^{2}\colon\ c_{k}=\biggl(-\frac{Z_{13}}{Z_{12}+Z_{23}\wp^{k}}\biggr)^{1/k}\wp;\ n_{i}=l_{i}+1,l_{i}+2,\ldots;\ l_{i}=0,1,\ldots;\ i=1,2\Biggr\}.$ (3.7)
The procedure to find the appropriate $k=p+1$ is described in §3.1 (see Remarks
7–8). In particular, if $D_{\infty}(\wp)$ is nonempty (Corollary 3), that is,
for $p=\infty$ in eq. (3.1), one has $E(1,1,\infty)\leq
E(n_{1},n_{2},\infty)\leq E(n_{1},n_{2},k)$, and eq. (3.7) simplifies to
$\sigma_{\textit{disc}}(H_{0}^{0})=\Biggl\{-\frac{1}{4}\biggl(\frac{Z_{1}(Z_{2}+Z_{3})}{n_{1}\sqrt{\alpha}}+\frac{Z_{23}}{n_{2}\sqrt{\beta}}\biggr)^{2}\colon\ 0<\wp\leq 1,\ n_{i}=l_{i}+1,l_{i}+2,\ldots;\ l_{i}=0,1,\ldots;\ i=1,2\Biggr\},$ (3.8a)
$\sigma_{\textit{disc}}(H_{0}^{0})=\Biggl\{-\frac{1}{4}\biggl(\frac{Z_{12}}{n_{1}\sqrt{\alpha}}+\frac{Z_{3}(Z_{1}+Z_{2})}{n_{2}\sqrt{\beta}}\biggr)^{2}\colon\ \wp>1,\ n_{i}=l_{i}+1,l_{i}+2,\ldots;\ l_{i}=0,1,\ldots;\ i=1,2\Biggr\}.$ (3.8b)
###### Remark 12.
Assume that $Z_{1}=Z_{2}=+1$, $Z_{3}=-1$ and $m_{1}\leq m_{3}$ are given. Then
$0<\wp\leq n_{1}/n_{2}$ and $k=2,3,\ldots$ for all $n_{1}\leq n_{2}$. By eq.
(3.8a), the lower bound of $\sigma_{\textit{disc}}(H_{0}^{0})$ equals
$E(1,1,\infty)=-(4\beta)^{-1}$, which is, under the same conditions, in exact
agreement with that given by Martin et al. [MRW92, eq. (13)] (see also [CS90,
§II A], [FB92, eq. (3a)]). That is to say, for the bound three-unit-charge
system, $\inf\sigma_{\textit{disc}}(H_{0}^{0})$ is the lowest bound state
energy at threshold.
#### 3.2.2. Bound states for $\kappa=1$.
We deduce from Corollary 4 that $A_{1}>0$ in case $A_{0}<0$. Following the
method developed by Nicholson [Nic62], for a repulsive potential
$A_{1}/r^{2}$, we specify the eigenstates which result when this potential is
cut off by an infinite repulsive core at $r=0$. Namely, the only solutions
$u_{1,l}(r)$ in $L^{2}(0,\infty;dr)$ which vanish at $r=\infty$ are
$\sqrt{r}K_{\nu}(r\sqrt{-B_{1}})$, $\nu^{2}=A_{1}+(l+1/2)^{2}$, where
$K_{\nu}(z)$ denotes the modified Bessel function of the second kind [we
specify positive values of $\nu$ due to $K_{\nu}(z)=K_{-\nu}(z)$]. On the
other hand, $u_{1,l}(r)$ is infinite at $r=0$. As demonstrated in [Nic62], the
solutions $\sqrt{r}K_{\nu}(r\sqrt{-B_{1}})$ exist if $r>r_{0}$, provided that
variables $\nu$ and $B_{1}$ satisfy $K_{\nu}(r_{0}\sqrt{-B_{1}})=0$; $r_{0}$
is known as the cut-off radius. This agrees with Case [Cas50] who was the
first to establish that for the potentials as singular as $r^{-2}$ or greater,
bound states are determined up to the phase factor associated with $r_{0}$.
The solutions to $u_{1,l}(r_{0})=0$ are found by expanding $K_{\nu}(z)$ in
terms of $I_{\nu}(z)$, the modified Bessel function of the first kind. The
result is $I_{\nu}(r_{0}\sqrt{-B_{1}})=I_{-\nu}(r_{0}\sqrt{-B_{1}})$ or
equivalently,
$\sum_{n=1}^{\infty}\frac{[(r_{0}/2)\sqrt{-B_{1}}]^{2n-2+\nu}}{(n-1)!\Gamma(n+\nu)}=\sum_{n=1}^{\infty}\frac{[(r_{0}/2)\sqrt{-B_{1}}]^{2n-2-\nu}}{(n-1)!\Gamma(n-\nu)}.$
Explicitly,
$B_{1}=-\biggl{(}\frac{2}{r_{0}}\biggr{)}^{2}\biggl{(}\frac{\Gamma(n+\nu)}{\Gamma(n-\nu)}\biggr{)}^{1/\nu}\quad\text{($\Gamma$
the Gamma function)}$ (3.9)
for integers $n>\nu$, and $B_{1}=0$ (no bound states) for integers $1\leq
n\leq\nu$. Substituting $\nu^{2}=A_{1}+(l+1/2)^{2}$ in $n>\nu$ and proceeding as in the
case $\kappa=0$, one arrives at $n=l+1,l+2,\ldots$ Consequently, the coupling
constant is bounded by $0<A_{1}<n^{2}-(l+1/2)^{2}\leq n^{2}-1/4$. It appears
that $B_{1}<0$ is unbounded from below, while the upper bound is attained as
$\nu\to n$.
As in the case for $\kappa=0$, the eigenvalues
$E_{1}^{0}\in\sigma_{\textit{disc}}(H_{1}^{0})$ are found from eq. (3.5), or
in particular, from eq. (3.9). The result reads
$\sigma_{\textit{disc}}(H_{1}^{0})=\inf_{D_{k}(\wp)\neq\varnothing}\Biggl\{E_{1}^{0}=\varkappa B_{1}\colon\ \varkappa=\varkappa(i)=\begin{cases}\alpha,&i=1\\ \beta,&i=2\end{cases};\ B_{1}=B_{1}(i)=-\biggl(\frac{2}{r_{0}}\biggr)^{2}\biggl(\frac{\Gamma(n_{i}+\nu_{i})}{\Gamma(n_{i}-\nu_{i})}\biggr)^{1/\nu_{i}};$ (3.12)
$\nu_{i}^{2}=A_{1}+(l_{i}+1/2)^{2};\ 0<\nu_{i}<n_{i};\ n_{i}=l_{i}+1,l_{i}+2,\ldots;\ l_{i}=0,1,\ldots;\ r_{0}>0;\ \alpha B_{1}(1)=\beta B_{1}(2)\Biggr\}.$ (3.13)
###### Remark 13.
The fact that $B_{1}$, eq. (3.9), is not bounded from below (for $n\to\infty$)
does not necessarily mean that $\sigma_{\textit{disc}}(H_{1}^{0})$ is unbounded
as well, as this is still to be verified by solving $\alpha B_{1}(1)=\beta
B_{1}(2)$ as in eq. (3.13). In agreement with Lemma 3, it is apparent that
solutions $\wp$ and $k\geq 3$ exist only for appropriate integers $n_{1}$ and
$n_{2}$, whose range strictly depends on the masses (or equivalently, on the
multipliers $\alpha$, $\beta$). In those cases when no common solutions
are obtained, one should deduce that $E_{1}^{0}$ does not affect the total
energy $E$, and higher eigenstates $E_{\kappa}$ ($\kappa\geq 2$), if such exist
at all, should be added to the series for $E$ in order to obtain more accurate
energies (see also Remark 10 for an analogous discussion in the case
$\kappa=0$). Numerical confirmation is given below.
#### 3.2.3. Some numerical results
To illustrate the application of the approach presented in this paper, let us
consider the helium atom (He) and the positronium negative ion (Ps-). Although
numerical methods to calculate bound states of these physical systems are
known in great detail, for the most part due to Hylleraas [Hyl29], our goal is
to comment on the results following from the analytic solutions obtained in the paper.
The reason for choosing these atomic systems is the different
characteristics of the particles they are composed of. We shall calculate some
lower bound states associated with the scalar SO(3) representation ($l=0$). On
that account, the issue of possible antisymmetrization of the wave function is left out
of further consideration, and Proposition 1 holds. All calculations
are performed in atomic units.
The helium atom contains two electrons ($Z_{1}=Z_{2}=-1$, $m_{1}=m_{2}=1$) and
a nucleus ($Z_{3}=+2$, $m_{3}=7294.299536$). Here and elsewhere below,
$n_{1}=n_{2}=1$ ($l_{1}=l_{2}=0$). First, consider the case $\kappa=0$. Our
task is to find integers $k\geq 2$ such that $D_{k}(\wp)\neq\varnothing$. By
eq. (3.6), $\wp=0.707155<1$, thus $c_{k}=\wp(1/2-\wp^{k})^{-1/k}$ exists for
all $k\geq 3$. The variables $\wp$, $c_{k}$ satisfy all necessary conditions
in Corollary 3 for all $k\geq 3$. Subsequently,
$\inf\sigma_{\textit{disc}}(H_{0})$ equals $E(1,1,\infty)=-2.914048$, by eq.
(3.8a). Note that the result is invariant under the change of charges and
corresponding masses (see Remark 9) with $Z_{1}=+2$, $Z_{2}=Z_{3}=-1$ and
$m_{1}=7294.299536$, $m_{2}=m_{3}=1$. In this case, $\wp=1.41412>1$ and
$c_{k}=\wp(\wp^{k}/2-1)^{-1/k}$ exists for all $k\geq 3$. Again, the
conditions in Corollary 3 are fulfilled, and the minimal eigenvalue (refer to
eq. (3.8b)) is that obtained just above. In comparison, assume that
$Z_{1}=Z_{3}=-1$, $Z_{2}=+2$. Then $\wp=1$, $k\geq 2$ and the lower bound
$E(1,1,\infty)=-(9/4)\alpha^{-1}=-4.499383$. As seen, the present eigenvalue
is considerably lower than the one given above. A similar tendency is
observed in all helium-like ions (Li+, Be2+, B3+, etc.): while two out of
the three arrangements provide the same eigenvalue, the third one differs and
is much lower than the other two. A distinctive feature of this particular
arrangement is that the multiplier $\wp=1$, and the vectors
$\{\bm{r}_{ij}\}_{1\leq i<j\leq 3}$ form an equilateral triangle at
$k=\infty$, $\omega_{\infty}=\sigma_{\infty}=\pi-\tau_{\infty}=\pi/3$. To
explain the appearance of the solution $-(9/4)\alpha^{-1}\simeq-9/2$, we refer to
zeroth-order perturbation theory, which gives the energy $-Z_{2}^{2}=-4$
provided that the interaction potential $r_{13}^{-1}$ between the two electrons
$Z_{1}=-1$ and $Z_{3}=-1$ is neglected. This is not the case, as demonstrated
above, for the remaining two arrangements, because both interactions
$r_{12}^{-1}$ and $r_{23}^{-1}$ enter $H_{\kappa,i}^{0}$ explicitly;
see eq. (3.4).
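As a consistency check on the first arrangement, the minimal sketch below (with the value $\wp=0.707155$ quoted above, and again reading $Z_{ij}=Z_{i}Z_{j}$) confirms that $c_{k}=\wp(1/2-\wp^{k})^{-1/k}$ does not exist for $k=2$ and that the triplet $(1,\wp,c_{k})$ passes the triangle validity for $k\geq 3$.

```python
import numpy as np

p = 0.707155                                  # wp for the arrangement Z1 = Z2 = -1, Z3 = +2
for k in range(2, 8):
    denom = 0.5 - p**k                        # c_k = wp (1/2 - wp^k)^(-1/k) requires 1/2 - wp^k > 0
    if denom <= 0:
        print(k, "c_k does not exist")
        continue
    ck = p * denom**(-1.0 / k)
    ok = (1 + p > ck) and (1 + ck > p) and (p + ck > 1)
    print(k, round(ck, 5), "triangle valid" if ok else "triangle violated")
```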
Figure 1. (Color online) Solution to $\alpha B_{1}(1)=\beta B_{1}(2)$ with
respect to $k$ and $\wp$; see eq. (3.13). Solutions are found at the points
where the plane curves intersect the dashed curves of the same color (the same
$k$). (a) The contribution of $E_{1}$ does not affect the ground state energy of the
helium atom, which is $E_{0}=-2.914048$ a.u.; the curves are plotted for $k=3$.
(b) The contribution of $E_{1}$ to the ground state energy of the positronium
negative ion is well-defined with the inclusion of the cut-off radius.
Second, consider the case $\kappa=1$. For the arrangement $Z_{1}=Z_{2}=-1$ and
$Z_{3}=+2$, the coefficient $c_{k}=\wp(1/2-\wp^{k})^{-1/k}$ requires
$1/2-\wp^{k}>0$ for all $k\geq 3$. In this case, the estimate
$1/2<\nu_{1},\nu_{2}<1$ yields $k=3$. However, no $\wp$ in the range $0<\wp<2^{-1/3}$
satisfies $\alpha B_{1}(1)=\beta B_{1}(2)$ in eq. (3.13), as is clear from
Fig. 1(a). No common solutions are obtained for the remaining two
arrangements either. Following [Cas50], we therefore deduce that for the
helium atom the lowest (ground) eigenvalue of $H_{0}$ is obtained by the
first expansion term $E_{0}^{0}$ in eq. (2.5).
The positronium negative ion contains two electrons ($Z_{1}=Z_{3}=-1$ and
$m_{1}=m_{3}=1$) and a positron ($Z_{2}=+1$, $m_{2}=1$); for $\kappa=0$, the
other two arrangements are improper due to Lemma 3:
$\wp=(-Z_{1}/Z_{3})^{1/k}=1$ $\forall k\geq 2$. The lowest state
$\inf\sigma_{\textit{disc}}(H_{0})$ is found for $k=\infty$ ($\wp=1$,
$\omega_{\infty}=\pi/3$), and it equals $E(1,1,\infty)=-1/4$. For $\kappa=1$,
the bound $1/2<\nu_{1},\nu_{2}<1$ yields $\wp=1$, $k\geq 3$. Then
$\nu_{1}=\nu_{2}=(9/4-4^{1/k})^{1/2}$, hence $k=3,4,5,6$ (see Fig. 1(b)). By
eq. (3.13), $\inf\sigma_{\textit{disc}}(H_{1})$ is at $k=3$ and it equals
$E_{1}\simeq-0.515488/r_{0}^{2}$. Given that the ground state of Ps- is
$-0.261995$ [MRW92], we find that the cut-off radius is $r_{0}\simeq 6.56$
(e.g., $r_{0}=4$ in [MP02]). Therefore, for the positronium negative ion, only
the cut-off radius $r_{0}$ is needed to calculate the ground state energy.
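The numbers quoted for Ps- can be reproduced with a few lines of Python; the sketch below (ours) checks the admissible $k$ from $\nu=(9/4-4^{1/k})^{1/2}$ with $1/2<\nu<1$, evaluates $B_{1}r_{0}^{2}$ from eq. (3.9) for $k=3$, $n=1$, and recovers $r_{0}$ from the quoted ground-state energy. Identifying $E_{1}$ with $B_{1}$ (i.e., taking the multiplier $\varkappa$ of eq. (3.13) equal to $1$ for this equal-mass system) is our assumption here.

```python
import numpy as np
from scipy.special import gamma

# Admissible k: nu = sqrt(9/4 - 4**(1/k)) must lie in (1/2, 1); true only for k = 3, ..., 6
for k in range(3, 9):
    nu = np.sqrt(9.0 / 4.0 - 4.0**(1.0 / k))
    print(k, round(nu, 4), 0.5 < nu < 1.0)

# kappa = 1 estimate for the lowest case k = 3, n = 1 (eq. (3.9)); B1 * r0**2 is r0-independent
nu = np.sqrt(9.0 / 4.0 - 4.0**(1.0 / 3.0))
B1_r0sq = -4.0 * (gamma(1 + nu) / gamma(1 - nu))**(1.0 / nu)
print(B1_r0sq)                                # about -0.5155, cf. E1 ~ -0.515488 / r0**2

# Cut-off radius from the quoted Ps- ground state -0.261995 a.u. with E0 = -1/4 a.u.
E1_needed = -0.261995 - (-0.25)
print(np.sqrt(B1_r0sq / E1_needed))           # about 6.56
```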
## Acknowledgement
The author is very grateful to Dr. G. Merkelis for very
instructive and stimulating discussions. It is a pleasure to thank Prof. R.
Karazija for comments on an earlier version of the paper and Dr. A. Bernotas
for attentive revision of the present manuscript and for valuable remarks.
## References
* [AGHKH04] S. Albeverio, F. Gesztesy, R. Hoegh-Krohn, and H. Holden. Solvable Models in Quantum Mechanics. AMS Chelsea Publishing (Providence, Rhode Island), 2 edition, 2004.
* [BBC80] A. O. Barut, M. Berrondo, and G. G. Calderón. J. Math. Phys., 21:1851, 1980.
* [BD87] A. K. Bhatia and R. J. Drachman. Phys. Rev. A, 35(10):4051, 1987.
* [Cas50] K. M. Case. Phys. Rev., 80(5):797, 1950.
* [CS90] Z. Chen and L. Spruch. Phys. Rev. A, 42(1):133, 1990.
* [DNW78] G. D. Doolen, J. Nuttall, and C. J. Wherry. Phys. Rev. Lett., 40(5):313, 1978.
* [EFG12] J. G. Esteve, F. Falceto, and P. R. Giri. Phys. Rev. A, 85:022104, 2012.
* [FB92] A. M. Frolov and D. M. Bishop. Phys. Rev. A, 45(9):6236, 1992.
* [FL71] W. M. Frank and D. J. Land. Rev. Mod. Phys., 43(1):36, 1971.
* [Gao98] B. Gao. Phys. Rev. A, 58(5):4222, 1998.
* [Gao99a] B. Gao. Phys. Rev. A, 59(4):2778, 1999.
* [Gao99b] B. Gao. Phys. Rev. Lett., 83(21):4225, 1999.
* [Gao08] B. Gao. Phys. Rev. A, 78:012702, 2008.
* [GOK+01] B. Gönül, O. Özer, M. Kocak, D. Tutcu, and Y. Cancelik. J. Phys. A: Math. Gen., 34:8271, 2001.
* [Hil77] R. N. Hill. J. Math. Phys., 18:2316, 1977.
* [Hum72] J. E. Humphreys. Introduction to Lie Algebras and Representation Theory. Springer, 1972.
* [Hyl29] E. A. Hylleraas. Z. Phys., 54:347, 1929.
* [IS11] S. Iqbal and F. Saif. J. Math. Phys., 52(082105), 2011.
* [Kat51] T. Kato. Trans. Amer. Math. Soc., 70:195, 1951.
* [KH05] S. Kar and Y. K. Ho. J. Phys. B: At. Mol. Opt. Phys., 38:3299, 2005.
* [MEF01] M. J. Moritz, C. Eltschka, and H. Friedrich. Phys. Rev. A, 63:042102, 2001.
* [MP02] A. P. Mills and P. M. Platzman. New experiments with bright positron and positronium beams. In C. M. Surko and F. A. Gianturco, editors, New Directions in Antimatter Chemistry and Physics, pages 115–126. Springer Netherlands, 2002.
* [MRW92] A. Martin, J-M. Richard, and T. T. Wu. Phys. Rev. A, 46(7):3697, 1992.
* [NCU94] V. C. A. Navarro, A. L. Coelho, and N. Ullah. Phys. Rev. A, 49(2):1477, 1994.
* [Nic62] A. F. Nicholson. Austr. J. Phys., 15:174, 1962.
* [RFGS93] J.-M. Richard, J. Fröhlich, G.-M. Graf, and M. Seifert. Phys. Rev. Lett., 71(9):1332, 1993.
* [Rob00] R. W. Robinett. J. Math. Phys., 41(4):1801, 2000.
* [RS78] M. Reed and B. Simon. Methods of Modern Mathematical Physics. IV: Analysis of Operators, volume 4. Academic Press, Inc. (London) LTD., 1978.
* [RS80] M. Reed and B. Simon. Methods of Modern Mathematical Physics I: Functional Analysis, volume 1. Academic Press, Inc. (London) LTD., 1980.
* [SG87] N. Simonović and P. Grujić. J. Phys. B: At. Mol. Opt. Phys., 20:3427, 1987.
* [Sim70] B. Simon. Helv. Phys. Acta, 43(6):607, 1970.
* [Sim00] B. Simon. J. Math. Phys., 41(6):3523, 2000.
* [Spe64] R. M. Spector. J. Math. Phys., 5(9):1185, 1964.
* [Whi03] E. T. Whittaker. Bull. Amer. Math. Soc., 10(3):125, 1903.
* [Yaf74] D. R. Yafaev. Mat. Sb. [in Russian], 94(136)(4(8)):567, 1974. English translation in Math. USSR Sb. 23:535, 1974.
* [ZS83] D. P. Zhelobenko and A. I. Shtern. Representations of Lie groups [in Russian]. Nauka, Moscow, 1983.
# Human Identity Verification based on Heart Sounds: Recent Advances and Future Directions
Biometrics / Book 1, Italy
## 1 Introduction
Identity verification is an increasingly important process in our daily lives.
Whether we need to use our own equipment or to prove our identity to third
parties in order to use services or gain access to physical places, we are
constantly required to declare our identity and prove our claim.
Traditional authentication methods fall into two categories: proving that you
know something (i.e., password-based authentication) and proving that you own
something (i.e., token-based authentication).
These methods connect the identity with an alternate and less rich
representation, for instance a password, that can be lost, stolen, or shared.
A solution to these problems comes from biometric recognition systems.
Biometrics offers a natural solution to the authentication problem, as it
contributes to the construction of systems that can recognize people by the
analysis of their anatomical and/or behavioral characteristics. With biometric
systems, the representation of the identity is something that is directly
derived from the subject, therefore it has properties that a surrogate
representation, like a password or a token, simply cannot have (Prabhakar et
al. (2003); Jain et al. (2004, 2006)).
The strength of a biometric system is determined mainly by the trait that is
used to verify the identity. Plenty of biometric traits have been studied and
some of them, like fingerprint, iris and face, are nowadays used in widely
deployed systems.
Today, one of the most important research directions in the field of
biometrics is the characterization of novel biometric traits that can be used
in conjunction with other traits, to limit their shortcomings or to enhance
their performance.
The aim of this chapter is to introduce the reader to the usage of heart
sounds for biometric recognition, describing the strengths and the weaknesses
of this novel trait and analyzing in detail the methods developed so far and
their performance.
The usage of heart sounds as physiological biometric traits was first
introduced in Beritelli and Serrano (2007), in which the authors proposed and
started exploring this idea. Their system is based on the frequency analysis,
by means of the Chirp $z$-Transform (CZT), of the sounds produced by the heart
during the closure of the mitral and tricuspid valves and during the closure of the
aortic and pulmonary valves. These sounds, called S1 and S2, are extracted from the
input signal using a segmentation algorithm. The authors build the identity
templates using feature vectors and test if the identity claim is true by
computing the Euclidean distance between the stored template and the features
extracted during the identity verification phase.
In Phua et al. (2008), the authors describe a different approach to heart-
sounds biometry. Instead of doing a structural analysis of the input signal,
they use the whole sequences, feeding them to two recognizers built using
Vector Quantization and Gaussian Mixture Models; the latter proves to be the
most performant system.
In Beritelli and Spadaccini (2009b, a), the authors further develop the system
described in Beritelli and Serrano (2007), evaluating its performance on a
larger database, choosing a more suitable feature set (Linear Frequency
Cepstrum Coefficients, LFCC), adding a time-domain feature specific for heart
sounds, called First-to-Second Ratio (FSR) and adding a quality-based data
selection algorithm.
In Beritelli and Spadaccini (2010b, a), the authors take an alternative
approach to the problem, building a system that leverages statistical
modelling using Gaussian Mixture Models. This technique is different from Phua
et al. (2008) in many ways, most notably the segmentation of the heart sounds,
the database, the usage of features specific to heart sounds and the
statistical engine. This system proved to yield good performance in spite of a
larger database, and the final Equal Error Rate (EER) obtained using this
technique is 13.70 % over a database of 165 people, containing two heart
sequences per person, each lasting from 20 to 70 seconds.
This chapter is structured as follows: in Section 2, we describe in detail the
usage of heart sounds for biometric identification, comparing them to other
biometric traits, briefly explaining how the human cardio-circulatory system
works and produces heart sounds and how they can be processed; in Section 3 we
present a survey of recent works on heart-sounds biometry by other research
groups; in Section 4 we describe in detail the structural approach; in Section
5 we describe the statistical approach; in Section 6 we compare the
performance of the two methods on a common database, describing both the
performance metrics and the heart sounds database used for the evaluation;
finally, in Section 7 we present our conclusions, highlight current issues
of this method and suggest directions for future research.
## 2 Biometric recognition using heart sounds
Biometric recognition is the process of inferring the identity of a person via
quantitative analysis of one or more traits, that can be derived either
directly from a person’s body (physiological traits) or from one’s behaviour
(behavioural traits).
Speaking of physiological traits, almost all the parts of the body can already
be used for the identification process (Jain et al. (2008)): eyes (iris and
retina), face, hand (shape, veins, palmprint, fingerprints), ears, teeth etc.
In this chapter, we will focus on an organ that is of fundamental importance
for our life: the heart.
The heart is involved in the production of two biological signals, the
Electrocardiograph (ECG) and the Phonocardiogram (PCG). The first is a signal
derived from the electrical activity of the organ, while the latter is a
recording of the sounds that are produced during its activity (heart sounds).
While both signals have been used as biometric traits (see Biel et al. (2001)
for ECG-based biometry), this chapter will focus on heart-sounds biometry.
### 2.1 Comparison to other biometric traits
The paper Jain et al. (2004) presents a classification of available biometric
traits with respect to 7 qualities that, according to the authors, a trait
should possess:
* •
Universality: each person should possess it;
* •
Distinctiveness: it should be helpful in the distinction between any two
people;
* •
Permanence: it should not change over time;
* •
Collectability: it should be quantitatively measurable;
* •
Performance: biometric systems that use it should be reasonably performant,
with respect to speed, accuracy and computational requirements;
* •
Acceptability: the users of the biometric system should see the usage of the
trait as a natural and trustable thing to do in order to authenticate;
* •
Circumvention: the system should be robust to malicious identification
attempts.
Each trait is evaluated with respect to each of these qualities using 3
possible qualifiers: H (high), M (medium), L (low).
We added to the original table a row with our subjective evaluation of heart-
sounds biometry with respect to the qualities described above, in order to
compare this new technique with other more established traits. The updated
table is reproduced in Table 1.
The reasoning behind each of our subjective evaluations of the qualities of
heart sounds is as follows:
* •
High Universality: a working heart is a conditio sine qua non for human life;
* •
Medium Distinctiveness: the current systems’ performance is still far from the
most discriminating traits, and the tests are conducted using small databases;
the discriminative power of heart sounds still must be demonstrated;
* •
Low Permanence: although to the best of our knowledge no studies have been
conducted in this field, we perceive that heart sounds can change their
properties over time, so their accuracy over extended time spans must be
evaluated;
* •
Low Collectability: the collection of heart sounds is not an immediate
process, and electronic stethoscopes must be placed in well-defined positions
on the chest to get a high-quality signal;
* •
Low Performance: most of the techniques used for heart-sounds biometry are
computationally intensive and, as said before, the accuracy still needs to be
improved;
* •
Medium Acceptability: heart sounds are probably identified as unique and
trustable by people, but they might be unwilling to use them in daily
authentication tasks;
* •
Low Circumvention: it is very difficult to reproduce the heart sound of
another person, and it is also difficult to record it covertly in order to
reproduce it later.
Biometric identifier | Universality | Distinctiveness | Permanence | Collectability | Performance | Acceptability | Circumvention
---|---|---|---|---|---|---|---
DNA | H | H | H | L | H | L | L
Ear | M | M | H | M | M | H | M
Face | H | L | M | H | L | H | H
Facial thermogram | H | H | L | H | M | H | L
Fingerprint | M | H | H | M | H | M | M
Gait | M | L | L | H | L | H | M
Hand geometry | M | M | M | H | M | M | M
Hand vein | M | M | M | M | M | M | L
Iris | H | H | H | M | H | L | L
Keystroke | L | L | L | M | L | M | M
Odor | H | H | H | L | L | M | L
Palmprint | M | H | H | M | H | M | M
Retina | H | H | M | L | H | L | L
Signature | L | L | L | H | L | H | H
Voice | M | L | M | L | L | M | H
Heart sounds | H | M | L | L | L | M | L
Table 1: Comparison between biometric traits as in Jain et al. (2004) and
heart sounds
Of course, heart-sounds biometry is a new technique, and some of its drawbacks
probably will be addressed and resolved in future research work.
### 2.2 Physiology and structure of heart sounds
The heart sound signal is a complex, non-stationary and quasi-periodic signal
that is produced by the heart during its continuous pumping work (Sabarimalai
Manikandan and Soman (2010)). It is composed of several smaller sounds, each
associated with a specific event in the working cycle of the heart.
Heart sounds fall in two categories:
* •
primary sounds, produced by the closure of the heart valves;
* •
other sounds, produced by the blood flowing in the heart or by pathologies;
The primary sounds are S1 and S2. The first sound, S1, is caused by the
closure of the tricuspid and mitral valves, while the second sound, S2, is
caused by the closure of the aortic and pulmonary valves.
Among the other sounds, there are the S3 and S4 sounds, that are quieter and
rarer than S1 and S2, and murmurs, that are high-frequency noises.
In our systems, we only use the primary sounds because they are the two
loudest sounds and they are the only ones that a heart always produces, even
in pathological conditions. We separate them from the rest of the heart sound
signal using the algorithm described in Section 2.3.1.
### 2.3 Processing heart sounds
Heart sounds are monodimensional signals, and can be processed, to some
extent, with techniques known to work on other monodimensional signals, like
audio signals. Those techniques then need to be refined taking into account
the peculiarities of the signal, its structure and components.
In this section we will describe an algorithm used to separate the S1 and S2
sounds from the rest of the heart sound signal (2.3.1) and three algorithms
used for feature extraction (2.3.2, 2.3.3, 2.3.4), that is the process of
transforming the original heart sound signal into a more compact, and possibly
more meaningful, representation. We will briefly discuss two algorithms that
work in the frequency domain, and one in the time domain.
#### 2.3.1 Segmentation
In this section we describe a variation of the algorithm that was employed in
(Beritelli and Serrano (2007)) to separate the S1 and S2 tones from the rest
of the heart sound signal, improved to deal with long heart sounds.
Such a separation is done because we believe that the S1 and S2 tones are as
important to heart sounds as the vowels are to the voice signal. They are
stationary in the short term and they convey significant biometric
information, that is then processed by feature extraction algorithms.
A simple energy-based approach can not be used because the signal can contain
impulsive noise that could be mistaken for a significant sound.
The first step of the algorithm is searching the frame with the highest
energy, that is called SX1. At this stage, we do not know if we found an S1 or
an S2 sound.
Then, in order to estimate the frequency of the heart beat, and therefore the
period $P$ of the signal, the maximum value of the autocorrelation function is
computed. Low-frequency components are ignored by searching only over the
portion of autocorrelation after the first minimum.
The algorithm then searches other maxima to the left and to the right of SX1,
moving by a number $P$ of frames in each direction and searching for local
maxima in a window of the energy signal in order to take into account small
fluctuations of the heart rate. After each maximum is selected, a constant-
width window is applied to select a portion of the signal.
After having completed the search that starts from SX1, all the corresponding
frames in the original signal are zeroed out, and the procedure is repeated to
find a new maximum-energy frame, called SX2, and the other peaks are found in
the same way.
Finally, the positions of SX1 and SX2 are compared, and the algorithm then
decides if SX1, and all the frames found starting from it, must be classified
as S1 or S2; the remaining identified frames are classified accordingly.
The nature of this algorithm requires that it work on short sequences, 4 to 6
seconds long, because as the sequence gets longer the periodicity of the
sequence fades away due to noise and variations of the heart rate.
To overcome this problem, the signal is split into 4-second-wide windows and
the algorithm is applied to each window. The resulting sets of heart sound
endpoints are then joined into a single set.
Figure 1: Example of S1 and S2 detection
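A highly simplified NumPy sketch of this segmentation strategy is shown below (frame-level short-time energy, an autocorrelation-based period estimate, and a local-maximum search every $P$ frames); all parameter values, function names and simplifications are ours and are not taken from the original implementation.

```python
import numpy as np

def frame_energy(x, frame_len):
    """Short-time energy of non-overlapping frames."""
    n = len(x) // frame_len
    frames = x[:n * frame_len].reshape(n, frame_len)
    return (frames ** 2).sum(axis=1)

def estimate_period(energy):
    """Heart-beat period (in frames) from the autocorrelation of the energy signal,
    ignoring low-lag components before the first minimum."""
    e = energy - energy.mean()
    ac = np.correlate(e, e, mode="full")[len(e) - 1:]
    first_min = int(np.argmax(np.diff(ac) > 0))       # first lag where ac starts rising again
    return max(int(first_min + np.argmax(ac[first_min:])), 1)

def find_peaks_from(energy, start, period, half_win):
    """Collect local energy maxima roughly every `period` frames around `start`."""
    peaks = []
    for direction in (-1, +1):
        pos = start
        while 0 <= pos < len(energy):
            lo, hi = max(0, pos - half_win), min(len(energy), pos + half_win + 1)
            peaks.append(lo + int(np.argmax(energy[lo:hi])))
            pos += direction * period
    return sorted(set(peaks))

def segment(x, fs, frame_ms=20, half_win=3):
    """Very rough sketch: return the frame indices of the two alternating peak families."""
    frame_len = int(fs * frame_ms / 1000)
    energy = frame_energy(x, frame_len)
    period = estimate_period(energy)
    sx1 = int(np.argmax(energy))
    fam1 = find_peaks_from(energy, sx1, period, half_win)
    masked = energy.copy()
    masked[fam1] = 0.0                                 # zero out the first family, then repeat
    sx2 = int(np.argmax(masked))
    fam2 = find_peaks_from(masked, sx2, period, half_win)
    # Deciding which family is S1 and which is S2 requires comparing the positions of
    # SX1 and SX2, as described in the text; that step is omitted here.
    return fam1, fam2
```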
#### 2.3.2 The chirp $z$-transform
The Chirp $z$-Transform (CZT) is an algorithm for the computation of the
$z$-Transform of sampled signals that offers some additional flexibility
compared to the Fast Fourier Transform (FFT) algorithm.
The main advantage of the CZT exploited in the analysis of heart sounds is the
fact that it allows high-resolution analysis of narrow frequency bands,
offering higher resolution than the FFT.
For more details on the CZT, please refer to Rabiner et al. (1969).
#### 2.3.3 Cepstral analysis
Mel-Frequency Cepstrum Coefficients (MFCC) are one of the most widespread
parametric representation of audio signals (Davis and Mermelstein (1980)).
The basic idea of MFCC is the extraction of cepstrum coefficients using a
non-linearly spaced filterbank; the filterbank is spaced according to the
Mel scale: filters are linearly spaced up to 1 kHz and logarithmically spaced
above, so that detail decreases as the frequency increases.
This scale is useful because it takes into account the way we perceive sounds.
The relation between the Mel frequency $\hat{f}_{mel}$ and the linear
frequency $f_{lin}$ is the following:
$\hat{f}_{mel}=2595\cdot\log_{10}\left(1+\frac{f_{lin}}{700}\right)$ (1)
Some heart-sound biometry systems use MFCC, while others use a linearly-spaced
filterbank.
The first step of the algorithm is to compute the FFT of the input signal; the
spectrum is then fed to the filterbank, and the $i$-th cepstrum coefficient
is computed using the following formula:
$C_{i}=\sum_{k=1}^{K}X_{k}\cdot\cos\left(i\cdot\left(k-\frac{1}{2}\right)\cdot\frac{\pi}{K}\right),\quad i=0,\ldots,M$
(2)
where $K$ is the number of filters in the filterbank, $X_{k}$ is the log-
energy output of the $k$-th filter and $M$ is the number of coefficients that
must be computed.
Many parameters have to be chosen when computing cepstrum coefficients. Among
them: the bandwidth and the scale of the filterbank (Mel vs. linear), the
number and spectral width of filters, the number of coefficients.
In addition to this, differential cepstrum coefficients, typically denoted
using a $\Delta$ (first order) or $\Delta\Delta$ (second order), can be
computed and used.
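As an illustration, here is a minimal NumPy sketch of this feature extraction chain (FFT, triangular Mel filterbank, log-energies, and the cosine transform of eq. (2)); the filterbank construction details (number of filters, frequency range, frame size) are our own assumptions and are not taken from the systems described in this chapter.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)        # eq. (1)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs, f_lo=0.0, f_hi=None):
    """Triangular filters with centres equally spaced on the Mel scale."""
    f_hi = f_hi or fs / 2.0
    mel_pts = np.linspace(hz_to_mel(f_lo), hz_to_mel(f_hi), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def cepstrum(frame, fb, n_coef):
    """Cepstrum coefficients C_0..C_M of a single frame, following eq. (2)."""
    spectrum = np.abs(np.fft.rfft(frame, n=2 * (fb.shape[1] - 1))) ** 2
    X = np.log(fb @ spectrum + 1e-10)                # log-energy output of each filter
    K = len(X)
    k = np.arange(1, K + 1)
    return np.array([np.sum(X * np.cos(i * (k - 0.5) * np.pi / K)) for i in range(n_coef + 1)])

# Example on a synthetic frame (sampling rate and sizes are arbitrary demo choices)
fs, n_fft = 8000, 512
frame = np.random.randn(n_fft)
fb = mel_filterbank(n_filters=24, n_fft=n_fft, fs=fs)
coeffs = cepstrum(frame, fb, n_coef=12)              # 13 values: C_0 (log-energy-like) plus C_1..C_12
```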
Figure 2 shows an example of three S1 sounds and the corresponding MFCC
spectrograms; the first two (a, b) belong to the same person, while the third
(c) belongs to a different person.
Figure 2: Example of waveforms and MFCC spectrograms of S1 sounds
#### 2.3.4 The First-to-Second Ratio (FSR)
In addition to standard feature extraction techniques, it would be desirable
to develop ad-hoc features for the heart sound, as it is not a simple audio
sequence but has specific properties that could be exploited to develop
features with additional discriminative power.
This is why we propose a time-domain feature called First-to-Second Ratio
(FSR). Intuitively, the FSR represents the power ratio of the first heart
sound (S1) to the second heart sound (S2). During our work, we observed that
some people tend to have an S1 sound that is louder than S2, while in others
this balance is inverted. We try to represent this diversity using our new
feature.
The implementation of the feature is different in the two biometric systems
that we describe in this chapter, and a discussion of the two algorithms can
be found in Sections 4.4 and 5.4.
## 3 Review of related works
In the last years, different research groups have been studying the
possibility of using heart sounds for biometric recognition. In this section,
we will briefly describe their methods.
In Table 2 we summarized the main characteristics of the works that will be
analyzed in this section, using the following criteria:
* •
Database \- the number of people involved in the study and the amount of heart
sounds recorded from each of them;
* •
Features \- which features were extracted from the signal, at frame level or
from the whole sequence;
* •
Classification \- how features were used to make a decision.
We chose not to represent performance in this table for two reasons: first,
most papers do not adopt the same performance metric, so it would be difficult
to compare them; second, the database and the approach used are quite
different one from another, so it would not be a fair comparison.
Paper | Database | Features | Classification
---|---|---|---
Phua et al. (2008) | 10 people, 100 HS each | MFCC, LFBC | GMM, VQ
Tran et al. (2010) | 52 people, 100m each | Multiple | SVM
Jasper and Othman (2010) | 10 people, 20 HS each | Energy peaks | Euclidean distance
Fatemian et al. (2010) | 21 people, 6 HS each, 8 seconds per HS | MFCC, LDA, energy peaks | Euclidean distance
El-Bendary et al. (2010) | 40 people, 10 HS, 10 seconds per HS | Autocorrelation, cross-correlation, complex cepstrum | MSE, kNN
Table 2: Comparison of recent works about heart-sound biometrics
In the rest of the section, we will briefly review each of these papers.
Phua et al. (2008) was one of the first works in the field of heart-sounds
biometry. In this paper, the authors first do a quick exploration of the
feasibility of using heart sounds as a biometric trait, by recording a test
database composed of 128 people, using 1-minute heart sounds and splitting the
same signal into a training and a testing sequence. Having obtained good
recognition performance using the HTK Speech Recognition toolkit, they do a
deeper test using a database recorded from 10 people and containing 100 sounds
for each person, investigating the performance of the system using different
feature extraction algorithms (MFCC, Linear Frequency Band Cepstra (LFBC)),
different classification schemes (Vector Quantization (VQ) and Gaussian
Mixture Models (GMM)) and investigating the impact of the frame size and of
the training/test length. After testing many combinations of those parameters,
they conclude that, on their database, the most performing system is composed
of LFBC features (60 cepstra + log-energy + 256ms frames with no overlap),
GMM-4 classification, 30s of training/test length.
The authors of Tran et al. (2010), one of which worked on Phua et al. (2008),
take the idea of finding a good and representative feature set for heart
sounds even further, exploring 7 sets of features: temporal shape, spectral
shape, cepstral coefficients, harmonic features, rhythmic features, cardiac
features and the GMM supervector. They then feed all those features to a
feature selection method called RFE-SVM and use two feature selection
strategies (optimal and sub-optimal) to find the best set of features among
the ones they considered. The tests were conducted on a database of 52 people
and the results, expressed in terms of Equal Error Rate (EER), are better for
the automatically selected feature sets with respect to the EERs computed over
each individual feature set.
In Jasper and Othman (2010), the authors describe an experimental system where
the signal is first downsampled from 11025 Hz to 2205 Hz; then it is processed
using the Discrete Wavelet Transform, using the Daubechies-6 wavelet, and the
D4 and D5 subbands (34 to 138 Hz) are then selected for further processing.
After a normalization and framing step, the authors then extract from the
signal some energy parameters, and they find that, among the ones considered,
the Shannon energy envelogram is the feature that gives the best performance
on their database of 10 people.
The authors of Fatemian et al. (2010) do not propose a pure-PCG approach, but
they rather investigate the usage of both the ECG and PCG for biometric
recognition. In this short summary, we will focus only on the part of their
work that is related to PCG. The heart sounds are processed using the
Daubechies-5 wavelet, up to the 5th scale, and retaining only coefficients
from the 3rd, 4th and 5th scales. They then use two energy thresholds (low and
high), to select which coefficients should be used for further stages. The
remaining frames are then processed using the Short-Term Fourier Transform
(STFT), the Mel-Frequency filterbank and Linear Discriminant Analysis (LDA)
for dimensionality reduction. The decision is made using the Euclidean
distance from the feature vector obtained in this way and the template stored
in the database. They test the PCG-based system on a database of 21 people,
and their combined PCG-ECG system has better performance.
The authors of El-Bendary et al. (2010) filter the signal using the DWT; then
they extract different kinds of features: auto-correlation, cross-correlation
and cepstra. They then test the identities of people in their database, that
is composed by 40 people, using two classifiers: Mean Square Error (MSE) and
k-Nearest Neighbor (kNN). On their database, the kNN classifier performs
better than the MSE one.
## 4 The structural approach to heart-sounds biometry
The first system that we describe in depth was introduced in Beritelli and
Serrano (2007); it was designed to work with short heart sounds, 4 to 6
seconds long and thus containing at least four cardiac cycles (S1-S2).
The restriction on the length of the heart sound was removed in Beritelli and
Spadaccini (2009a), that introduced the quality-based best subsequence
selection algorithm, described in 4.1.
We call this system “structural” because the identity templates are stored as
feature vectors, in opposition to the “statistical” approach, that does not
directly keep the feature vectors but instead it represents identities via
statistical parameters inferred in the learning phase.
Figure 3 contains the block diagram of the system. Each of the steps will be
described in the following sections.
[Block diagram: the input signal $x(n)$ is low-pass filtered into $\hat{x}(n)$, passed through the best subsequence detector and the S1/S2 endpoint detector; MFCC and FSR features are extracted from the S1/S2 sounds and compared by the matcher against the stored template to produce the yes/no decision.]
Figure 3: Block diagram of the proposed cardiac biometry system
### 4.1 The best subsequence selection algorithm
The fact that the segmentation and matching algorithms of the original system
were designed to work on short sequences was a strong constraint for the
system. It was required that a human operator selected a portion of the input
signal based on some subjective assumptions. It was clearly a flaw that needed
to be addressed in further versions of the system.
To resolve this issue, the authors developed a quality-based subsequence
selection algorithm, based on the definition of a quality index $DHS_{QI}(i)$
for each contiguous subsequence $i$ of the input signal.
The quality index is based on a cepstral similarity criterion: the selected
subsequence is the one for which the cepstral distance of the tones is the
lowest possible. So, for a given subsequence $i$, the quality index is defined
as:
$DHS_{QI}(i)=\frac{1}{\displaystyle\sum_{k=1}^{4}\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{4}d_{S1}(j,k)+\sum_{k=1}^{4}\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{4}d_{S2}(j,k)}$ (3)
where $d_{S1}$ and $d_{S2}$ are the cepstral distances defined in Section 4.5.
The subsequence $\overline{i}$ with the maximum value of
$DHS_{QI}(\overline{i})$ is then selected as the best one and retained for
further processing, while the rest of the input signal is discarded.
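A minimal NumPy sketch of this selection criterion is given below; it assumes that, for each candidate subsequence, the cepstral feature vectors of its four S1 and four S2 sounds are already available (the cepstral distance $d_2$ used here is the Euclidean one, as in Section 4.5).

```python
import numpy as np

def pairwise_distance_sum(vectors):
    """Sum of Euclidean distances over all ordered pairs (j, k), j != k."""
    total = 0.0
    for j in range(len(vectors)):
        for k in range(len(vectors)):
            if j != k:
                total += np.linalg.norm(vectors[j] - vectors[k])
    return total

def dhs_quality(s1_feats, s2_feats):
    """Quality index DHS_QI of eq. (3) for one candidate subsequence,
    given the cepstral vectors of its four S1 and four S2 sounds."""
    return 1.0 / (pairwise_distance_sum(s1_feats) + pairwise_distance_sum(s2_feats))

def best_subsequence(candidates):
    """Return the index of the candidate subsequence with the highest quality index.
    `candidates` is a list of (s1_feats, s2_feats) tuples, one per subsequence."""
    scores = [dhs_quality(s1, s2) for s1, s2 in candidates]
    return int(np.argmax(scores))
```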
### 4.2 Filtering and segmentation
After the best subsequence selection, the signal is then given in input to the
heart sound endpoint detection algorithm described in 2.3.1.
The endpoints that it finds are then used to extract the relevant portions
from a version of the heart sound signal that was previously low-pass
filtered to remove high-frequency extraneous components.
### 4.3 Feature extraction
The heart sounds are then passed to the feature extraction module, which
computes the cepstral features according to the algorithm described in Section 2.3.
This system uses $M=12$ MFCC coefficients, with the addition of a 13th
coefficient computed with $i=0$ in Equation 2, that is, the log-energy of the
analyzed sound.
### 4.4 Computation of the First-to-Second Ratio
For each input signal, the system computes the FSR according to the following
algorithm.
Let $N$ be the number of complete S1-S2 cardiac cycles in the signal. Let
$P_{{S1}_{i}}$ (resp. $P_{{S2}_{i}}$) be the power of the $i$-th S1 (resp. S2)
sound.
We can then define $\overline{P_{S1}}$ and $\overline{P_{S2}}$, the average
powers of S1 and S2 heart sounds:
$\overline{P_{S1}}=\frac{1}{N}\sum_{i=1}^{N}P_{S1_{i}}$ (4)
$\overline{P_{S2}}=\frac{1}{N}\sum_{i=1}^{N}P_{S2_{i}}$ (5)
Using these definitions, we can then define the First-to-Second Ratio of a
given heart sound signal as:
$FSR=\frac{\overline{P_{S1}}}{\overline{P_{S2}}}$ (6)
For two given DHS sequences $x_{1}$ and $x_{2}$, we define the FSR distance
as:
$d_{FSR}\left(x_{1},x_{2}\right)=\left|FSR_{dB}\left(x_{1}\right)-FSR_{dB}\left(x_{2}\right)\right|$
(7)
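A minimal NumPy sketch of the FSR computation of eqs. (4)–(7) follows; the per-cycle powers are assumed to be given (for example, the mean squared amplitude over each detected S1/S2 segment), and the conversion to dB as $10\log_{10}$ of the power ratio is our assumption.

```python
import numpy as np

def fsr(p_s1, p_s2):
    """First-to-Second Ratio, eq. (6): average S1 power over average S2 power."""
    return np.mean(p_s1) / np.mean(p_s2)

def fsr_db(p_s1, p_s2):
    """FSR expressed in dB, as used by the distance of eq. (7)."""
    return 10.0 * np.log10(fsr(p_s1, p_s2))

def fsr_distance(sig1, sig2):
    """FSR distance of eq. (7) between two signals, each given as (S1 powers, S2 powers)."""
    return abs(fsr_db(*sig1) - fsr_db(*sig2))

# Example with made-up per-cycle powers for two recordings
x = (np.array([2.0, 2.2, 1.9]), np.array([1.0, 1.1, 0.9]))   # S1 louder than S2
y = (np.array([0.8, 0.9, 0.85]), np.array([1.6, 1.5, 1.7]))  # S2 louder than S1
print(fsr_distance(x, y))
```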
### 4.5 Matching and identity verification
The crucial point of identity verification is the computation of the distance
between the feature set that represents the input signal and the template
associated with the identity claimed in the acquisition phase by the person
that is trying to be authenticated by the system.
This system employs two kinds of distance: the first in the cepstral domain
and the second using the FSR.
MFCC are compared using the Euclidean metric ($d_{2}$). Given two heart sound
signals $X$ and $Y$, let $X_{S1}(i)$ (resp. $X_{S2}(i)$) be the feature vector
for the $i$-th S1 (resp. S2) sound of the $X$ signal and $Y_{S1}$ and $Y_{S2}$
the analogous vectors for the $Y$ signal. Then the cepstral distances between
$X$ and $Y$ can be defined as follows:
$d_{S1}(X,Y)=\frac{1}{N^{2}}\sum_{i,j=1}^{N}d_{2}(X_{S1}(i),Y_{S1}(j))$ (8)
$d_{S2}(X,Y)=\frac{1}{N^{2}}\sum_{i,j=1}^{N}d_{2}(X_{S2}(i),Y_{S2}(j))$ (9)
Now let us take the FSR into account. Starting from $d_{FSR}$ as defined
in Equation 7, we wanted this distance to act like an amplifying factor for
the cepstral distance, making the distance bigger when it has a high value
while not changing the distance for low values.
We then normalized the values of $d_{FSR}$ between 0 and 1
($d_{{FSR}_{norm}}$), chose a threshold of activation of the FSR
($th_{FSR}$) and defined $k_{FSR}$, an amplifying factor used in
the matching phase, as follows:
$k_{FSR}=\max\left(1,\frac{d_{{FSR}_{norm}}}{th_{FSR}}\right)$ (10)
In this way, if the normalized FSR distance is lower than $th_{FSR}$ it has no
effect on the final score, but if it is larger, it will increase the cepstral
distance.
Finally, the distance between $X$ and $Y$ can be computed as follows:
$d(X,Y)=k_{FSR}\cdot\sqrt{d_{S1}(X,Y)^{2}+d_{S2}(X,Y)^{2}}$ (11)
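The matching stage can be sketched as follows; this is a hedged NumPy illustration of eqs. (8)–(11), where the normalization of $d_{FSR}$ to $[0,1]$ is assumed to be performed beforehand and the threshold value is an arbitrary placeholder, not the one used by the authors.

```python
import numpy as np

def mean_pairwise_d2(feats_x, feats_y):
    """Average Euclidean distance between all pairs of feature vectors, eqs. (8)-(9)."""
    total = sum(np.linalg.norm(fx - fy) for fx in feats_x for fy in feats_y)
    return total / (len(feats_x) * len(feats_y))

def match_score(x_s1, x_s2, y_s1, y_s2, d_fsr_norm, th_fsr=0.5):
    """Distance between two recordings, eq. (11), amplified by the FSR factor of eq. (10)."""
    d_s1 = mean_pairwise_d2(x_s1, y_s1)
    d_s2 = mean_pairwise_d2(x_s2, y_s2)
    k_fsr = max(1.0, d_fsr_norm / th_fsr)    # eq. (10): no effect below th_fsr, amplification above
    return k_fsr * np.sqrt(d_s1 ** 2 + d_s2 ** 2)
```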
## 5 The statistical approach to heart-sounds biometry
In opposition to the system analyzed in Section 4, the one that will be
described in this section is based on a learning process that does not
directly take advantage of the features extracted from the heart sounds, but
instead uses them to infer a statistical model of the identity and makes a
decision computing the probability that the input signal belongs to the person
whose identity was claimed in the identity verification process.
### 5.1 Gaussian Mixture Models
Gaussian Mixture Models (GMM) are a powerful statistical tool used for the
estimation of multidimensional probability density representation and
estimation (Reynolds and Rose (1995)).
A GMM $\lambda$ is a weighted sum of $N$ Gaussian probability densities:
$p(\bm{x}|\lambda)=\sum_{i=1}^{N}w_{i}p_{i}(\bm{x})$ (12)
where $\bm{x}$ is a $D$-dimensional data vector, whose probability is being
estimated, and $w_{i}$ is the weight of the $i$-th probability density
$p_{i}(\bm{x})$ (the weights satisfy $\sum_{i=1}^{N}w_{i}=1$), which is defined as:
$p_{i}(\bm{x})=\frac{1}{\sqrt{(2\pi)^{D}\left|\Sigma_{i}\right|}}e^{-\frac{1}{2}(\bm{x}-\mu_{i})^{\prime}\Sigma_{i}^{-1}(\bm{x}-\mu_{i})}$
The parameters of $p_{i}$ are $\mu_{i}$ ($\in\mathbb{R}^{D}$) and $\Sigma_{i}$
($\in\mathbb{R}^{D\times D}$), that together with $w_{i}$
($\in\mathbb{R}^{N}$) form the set of values that represent the GMM:
$\lambda=\left\\{w_{i},\mu_{i},\Sigma_{i}\right\\}$ (13)
Those parameters of the model are learned in the training phase using the
Expectation-Maximization algorithm (McLachlan and Krishnan (1997)), using as
input data the feature vectors extracted from the heart sounds.
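For concreteness, eq. (12) can be evaluated directly; the short NumPy sketch below computes the GMM likelihood of a feature vector from given weights, means and (full) covariance matrices. The parameter values are arbitrary illustrations, not trained values.

```python
import numpy as np

def gmm_likelihood(x, weights, means, covs):
    """p(x | lambda) of eq. (12) with Gaussian components as in eq. (13)."""
    D = x.shape[0]
    p = 0.0
    for w, mu, sigma in zip(weights, means, covs):
        diff = x - mu
        norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(sigma))
        expo = -0.5 * diff @ np.linalg.solve(sigma, diff)   # -(x-mu)' Sigma^{-1} (x-mu) / 2
        p += w * np.exp(expo) / norm
    return p

# Toy two-component mixture in D = 2
weights = np.array([0.6, 0.4])
means = [np.zeros(2), np.array([3.0, 0.0])]
covs = [np.eye(2), 0.5 * np.eye(2)]
print(gmm_likelihood(np.array([0.5, -0.2]), weights, means, covs))
```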
### 5.2 The GMM/UBM method
The problem of verifying whether an input heart sound signal $s$ belongs to a
stated identity $I$ is equivalent to a hypothesis test between two hypotheses:
$H_{0}:\ s\text{ belongs to }I,\qquad H_{1}:\ s\text{ does not belong to }I$
This decision can be taken using a likelihood test:
$S(s,I)=\frac{p(s|H_{0})}{p(s|H_{1})}\;\left\{\begin{aligned}&\geq\theta&&\text{accept }H_{0}\\ &<\theta&&\text{reject }H_{0}\end{aligned}\right.$ (14)
where $\theta$ is the decision threshold, a fundamental system parameter that
is chosen in the design phase.
The probability $p(s|H_{0})$ is, in our system, computed using Gaussian Mixture
Models.
The input signal is converted by the front-end algorithms to a set of $K$
feature vectors, each of dimension $D$, so:
$p(s|H_{0})=\prod_{j=1}^{K}p(x_{j}|\lambda_{I})$ (15)
In Equation 14, the $p(s|H_{1})$ is still missing. In the GMM/UBM framework
(Reynolds et al. (2000)), this probability is modelled by building a model
trained with a set of identities that represent the demographic variability of
the people that might use the system. This model is called Universal
Background Model (UBM).
The UBM is created during the system design, and is subsequently used every
time the system must compute a matching score.
The final score of the identity verification process, expressed in terms of
log-likelihood ratio, is
$\Lambda(s)=\log S(s,I)=\log p(s|\lambda_{I})-\log p(s|\lambda_{W})$ (16)
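In practice the two likelihoods in eq. (16) are computed frame by frame over the feature vectors of eq. (15) and summed in the log domain. A hedged sketch using scikit-learn's GaussianMixture (which is not the ALIZE/SpkDet toolkit actually used, see Section 5.5, and trains the identity model directly rather than by MAP adaptation) is:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm(features, n_components=256):
    """Fit a diagonal-covariance GMM on a (num_frames, D) feature matrix.
    256 components corresponds to the best configuration reported in Section 5.6."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(features)
    return gmm

def llr_score(features, identity_gmm, ubm):
    """Log-likelihood ratio of eq. (16): sum over frames of log p(x|lambda_I) - log p(x|lambda_W)."""
    return np.sum(identity_gmm.score_samples(features) - ubm.score_samples(features))

def verify(features, identity_gmm, ubm, theta=0.0):
    """Accept the identity claim if the score exceeds the threshold theta, as in eq. (14)."""
    return llr_score(features, identity_gmm, ubm) >= theta
```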
### 5.3 Front-end processing
Each time the system gets an input file, whether for training a model or for
identity verification, it goes through some common steps.
First, heart sound segmentation is carried out, using the algorithm described
in Section 2.3.1.
Then, cepstral features are extracted using a tool called sfbcep, part of the
SPro suite (Gravier (2003)). Finally, the FSR, computed as described in
Section 5.4, is appended to each feature vector.
### 5.4 Application of the First-to-Second Ratio
The FSR, as first defined in Section 4.4, is a sequence-wise feature, i.e., it
is defined for the whole input signal. It is then used in the matching phase
to modify the resulting score.
In the context of the statistical approach, it seemed more appropriate to just
append the FSR to the feature vector computed from each frame in the feature
extraction phase, and then let the GMM algorithms generalize this knowledge.
To do this, we split the input heart sound signal in 5-second windows and we
compute an average FSR ($\overline{FSR}$) for each signal window. It is then
appended to each feature vector computed from frames inside the window.
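The windowed averaging can be sketched as follows; this assumes that a per-frame FSR estimate is already available, which is a simplification of the procedure of Section 4.4, and all names are illustrative.

```python
import numpy as np

def append_window_fsr(frame_features, frame_times, fsr_per_frame, window_s=5.0):
    """Append an average FSR to each frame-level feature vector (Section 5.4).

    frame_features: (K, D) cepstral vectors
    frame_times:    (K,) frame centre times in seconds
    fsr_per_frame:  (K,) FSR estimate associated with each frame (assumed given)
    """
    window_idx = np.floor(frame_times / window_s).astype(int)
    augmented = np.empty((frame_features.shape[0], frame_features.shape[1] + 1))
    for w in np.unique(window_idx):
        in_w = window_idx == w
        mean_fsr = fsr_per_frame[in_w].mean()      # \overline{FSR} of the 5 s window
        augmented[in_w, :-1] = frame_features[in_w]
        augmented[in_w, -1] = mean_fsr
    return augmented
```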
### 5.5 The experimental framework
The experimental set-up created for the evaluation of this technique was
implemented using some tools provided by ALIZE/SpkDet, an open source toolkit
for speaker recognition developed by the ELISA consortium between 2004 and
2008 (Bonastre et al. (n.d.)).
The adaptation of parts of a system designed for speaker recognition to a
different problem was possible because the toolkit is sufficiently general and
flexible, and because the features used for heart-sounds biometry are similar
to the ones used for speaker recognition, as outlined in Section 2.3.
During the world training phase, the system estimates the parameters of the
world model $\lambda_{W}$ using a randomly selected subset of the input
signals.
The identity models $\lambda_{i}$ are then derived from the world model $W$
using the Maximum A-Posteriori (MAP) algorithm.
During identity verification, the matching score is computed using Equation
16, and the final decision is taken by comparing the score to a threshold
($\theta$), as described in Equation 14.
### 5.6 Optimization of the method
During the development of the system, some parameters have been tuned in order
to get the best performance. Namely, three different cepstral feature sets
have been considered in (Beritelli and Spadaccini (2010b)):
* •
16 + 16 $\Delta$ \+ $E$ \+ $\Delta E$
* •
16 + 16 $\Delta$ \+ 16 $\Delta\Delta$
* •
19 + 19 $\Delta$ \+ $E$ \+ $\Delta E$
However, the first of these sets proved to be the most effective.
In (Beritelli and Spadaccini (2010a)) the impact of the FSR and of the number
of Gaussian densities in the mixtures was studied. Four different model sizes
(128, 256, 512, 1024) were tested, with and without FSR, and the best
combination of those parameters, on our database, is 256 Gaussians with FSR.
## 6 Performance evaluation
In this section, we will compare the performance of the two systems described
in Section 4 and 5 using a common heart sounds database, that will be further
described in Section 6.1.
### 6.1 Heart sounds database
One of the drawbacks of this biometric trait is the absence of large enough
heart sound databases, which are needed for the validation of biometric
systems. To overcome this problem, we are building a heart sounds database
suitable for identity verification performance evaluation.
Currently, there are 206 people in the database, 157 male and 49 female; for
each person, there are two separate recordings, each lasting from 20 to 70
seconds; the average length of the recordings is 45 seconds. The heart sounds
have been acquired using a Thinklabs Rhythm Digital Electronic Stethoscope,
connected to a computer via an audio card. The sounds have been converted to
the Wave audio format, using 16 bits per sample at a sampling rate of 11025 Hz.
One of the two recordings available for each person is used to build the models,
while the other is used for the computation of matching scores.
### 6.2 Metrics for performance evaluation
A biometric identity verification system can be seen as a binary classifier.
Binary classification systems work by comparing matching scores to a
threshold; their accuracy is closely linked with the choice of the threshold,
which must be selected according to the context of the system.
There are two possible errors that a binary classifier can make:
* •
False Match (Type I Error): accept an identity claim even if the template does
not match with the model;
* •
False Non-Match (Type II Error): reject an identity claim even if the template
matches with the model.
The importance of errors depends on the context in which the biometric system
operates; for instance, in a high-security environment, a Type I error can be
critical, while Type II errors could be tolerated.
When evaluating the performance of a biometric system, however, we need to
take a threshold-independent approach, because we cannot know its applications
in advance. A common performance measure is the Equal Error Rate (EER) (Jain
et al. (2008)), defined as the error rate at which the False Match Rate (FMR)
is equal to the False Non-Match Rate (FNMR).
A finer evaluation of biometric systems can be done by plotting the Detection
Error Tradeoff (DET) curve, i.e., the plot of FMR against FNMR. This makes it
possible to study their performance when a low FNMR or FMR is imposed on the system.
The DET curve represents the trade-off between security and usability. A
system with low FMR is a highly secure one but will lead to more non-matches,
and may require the user to repeat the authentication step; a system
with low FNMR will be more tolerant and permissive, but will make more false
match errors, thus letting more unauthorized users obtain a positive match.
The choice between the two setups, and between all the intermediate security
levels, is strictly application-dependent.
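For reference, a simple approximate way to estimate the EER from genuine and impostor score lists is to sweep the threshold over all observed scores, as in the following sketch (Python/NumPy assumed; this is an illustration, not the tool actually used for the evaluation):

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep the decision threshold over all observed scores and return the
    operating point where FMR and FNMR are (approximately) equal."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best = (float("inf"), None)
    for th in thresholds:
        fnmr = np.mean(genuine_scores < th)    # genuine trials rejected
        fmr = np.mean(impostor_scores >= th)   # impostor trials accepted
        gap = abs(fmr - fnmr)
        if gap < best[0]:
            best = (gap, (fmr + fnmr) / 2.0)
    return best[1]
```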
### 6.3 Results
The performance of our two systems has been computed over the heart sounds
database, and the results are reported in Table 3.
System | EER (%)
---|---
Structural | 36.86
Statistical | 13.66
Table 3: Performance evaluation of the two heart-sounds biometry systems
The huge difference in the performance of the two systems reflects the fact
that the first one has not been actively developed since 2009 and was
designed to work on small databases, while the second has already proved to
work well on larger databases.
It is important to highlight that, in spite of a 25% increase in the size of
the database, the error rate remained almost constant with respect to the
previous evaluation of the system, in which a test over a 165-person database
yielded a 13.70% EER.
Figure 4: Detection Error Tradeoff (DET) curves of the two systems
Figure 4 shows the Detection Error Trade-off (DET) curves of the two systems.
As stated before, a DET curve shows how the analyzed system performs in terms
of false matches/false non-matches as the system threshold is changed.
In both cases, fixing a false match (resp. false non-match) rate, the system
that performs better is the one with the lowest false non-match (resp. false
match) rate.
Looking at Figure 4, it is easy to understand that the statistical system
performs better in both high-security (e.g., FMR = 1-2%) and low-security
(e.g., FNMR = 1-2%) setups.
We can therefore conclude that the statistical approach is definitely more
promising than the structural one, at least with the current algorithms and
using the database described in Section 6.1.
## 7 Conclusions
In this chapter, we presented a novel biometric identification technique that
is based on heart sounds.
After introducing the advantages and shortcomings of this biometric trait with
respect to other traits, we explained how our body produces heart sounds, and
the algorithms used to process them.
A survey of recent work in this field by other research groups has been
presented, showing a recent increase of interest in this novel trait within
the research community.
Then, we described the two systems that we built for biometric identification
based on heart sounds, one using a structural approach and another leveraging
Gaussian Mixture Models. We compared their performance over a database
containing more than 200 people, concluding that the statistical system
performs better.
### 7.1 Future directions
As this chapter has shown, heart sounds biometry is a promising research topic
in the field of novel biometric traits.
So far, the academic community has produced several works on this topic, but
most of them share the problem that the evaluation is carried out on small
databases, making the results difficult to generalize.
We feel that the community should start a joint effort for the development of
systems and algorithms for heart-sounds biometry, starting with a common
database for the evaluation of different research systems. A shared dataset
would make it possible to compare their performance, refine them and, over
time, develop techniques that might be deployed in real-world scenarios.
As larger databases of heart sounds become available to the scientific
community, there are some issues that need to be addressed in future research.
First of all, the identification error rates should be kept low even for
larger databases. This means that the matching algorithms will need to be
fine-tuned and a suitable feature set identified, probably containing
elements from both the frequency domain and the time domain.
Next, the mid-term and long-term reliability of heart sounds will be assessed,
analyzing how their biometric properties change as time goes by. Additionally,
the impact of cardiac diseases on the identification performance will be
assessed.
Finally, when the algorithms are more mature and several independent
scientific evaluations have given positive feedback on the idea, practical
issues like computational efficiency will be tackled, and ad-hoc sensors with
embedded matching algorithms may be developed, making heart-sounds biometry a
suitable alternative to mainstream biometric traits.
## References
* Beritelli and Serrano (2007) Beritelli, F. and Serrano, S. (2007). Biometric Identification based on Frequency Analysis of Cardiac Sounds, IEEE Transactions on Information Forensics and Security 2(3): 596–604.
* Beritelli and Spadaccini (2009a) Beritelli, F. and Spadaccini, A. (2009a). Heart sounds quality analysis for automatic cardiac biometry applications, Proceedings of the 1st IEEE International Workshop on Information Forensics and Security.
* Beritelli and Spadaccini (2009b) Beritelli, F. and Spadaccini, A. (2009b). Human Identity Verification based on Mel Frequency Analysis of Digital Heart Sounds, Proceedings of the 16th International Conference on Digital Signal Processing.
* Beritelli and Spadaccini (2010a) Beritelli, F. and Spadaccini, A. (2010a). An improved biometric identification system based on heart sounds and gaussian mixture models, Proceedings of the 2010 IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications, IEEE, pp. 31–35.
* Beritelli and Spadaccini (2010b) Beritelli, F. and Spadaccini, A. (2010b). A statistical approach to biometric identity verification based on heart sounds, Proceedings of the Fourth International Conference on Emerging Security Information, Systems and Technologies (SECURWARE2010), IEEE, pp. 93–96.
http://dx.medra.org/10.1109/SECURWARE.2010.23
* Biel et al. (2001) Biel, L. and Pettersson, O. and Philipson, L. and Wide, P. (2001). ECG Analysis: A New Approach in Human Identification, IEEE Transactions on Instrumentation and Measurement 50(3): 808–812.
* Bonastre et al. (n.d.) Bonastre, J.-F., Scheffer, N., Matrouf, D., Fredouille, C., Larcher, A., Preti, R., Pouchoulin, G., Evans, N., Fauve, B. and Mason, J. (n.d.). Alize/Spkdet: a state-of-the-art open source software for speaker recognition.
* Davis and Mermelstein (1980) Davis, S. and Mermelstein, P. (1980). Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences, IEEE Transactions on Acoustics, Speech and Signal Processing 28(4): 357–366.
* El-Bendary et al. (2010) El-Bendary, N., Al-Qaheri, H., Zawbaa, H. M., Hamed, M., Hassanien, A. E., Zhao, Q. and Abraham, A. (2010). Hsas: Heart sound authentication system, Nature and Biologically Inspired Computing (NaBIC), 2010 Second World Congress on, pp. 351 –356.
* Fatemian et al. (2010) Fatemian, S., Agrafioti, F. and Hatzinakos, D. (2010). Heartid: Cardiac biometric recognition, Biometrics: Theory Applications and Systems (BTAS), 2010 Fourth IEEE International Conference on, pp. 1 –5.
* Gravier (2003) Gravier, G. (2003). SPro: speech signal processing toolkit.
http://gforge.inria.fr/projects/spro
* Jain et al. (2008) Jain, A. K., Flynn, P. and Ross, A. A. (2008). Handbook of Biometrics, Springer.
* Jain et al. (2006) Jain, A. K., Ross, A. A. and Pankanti, S. (2006). Biometrics: A tool for information security, IEEE Transactions on Information Forensics and Security 1(2): 125–143.
* Jain et al. (2004) Jain, A. K., Ross, A. A. and Prabhakar, S. (2004). An introduction to biometric recognition, IEEE Transactions on Circuits and Systems for Video Technology 14(2): 4–20.
* Jasper and Othman (2010) Jasper, J. and Othman, K. (2010). Feature extraction for human identification based on envelogram signal analysis of cardiac sounds in time-frequency domain, Electronics and Information Engineering (ICEIE), 2010 International Conference On, Vol. 2, pp. V2–228 –V2–233.
* McLachlan and Krishnan (1997) McLachlan, G. J. and Krishnan, T. (1997). The EM Algorithm and Extensions, Wiley.
* Phua et al. (2008) Phua, K., Chen, J., Dat, T. H. and Shue, L. (2008). Heart sound as a biometric, Pattern Recognition 41(3): 906–919.
* Prabhakar et al. (2003) Prabhakar, S., Pankanti, S. and Jain, A. K. (2003). Biometric recognition: Security & privacy concerns, IEEE Security and Privacy Magazine 1(2): 33–42.
* Rabiner et al. (1969) Rabiner, L., Schafer, R. and Rader, C. (1969). The chirp z-transform algorithm, Audio and Electroacoustics, IEEE Transactions on 17(2): 86 – 92.
* Reynolds et al. (2000) Reynolds, D. A., Quatieri, T. F. and Dunn, R. B. (2000). Speaker verification using adapted gaussian mixture models, Digital Signal Processing, p. 2000.
* Reynolds and Rose (1995) Reynolds, D. A. and Rose, R. C. (1995). Robust text-independent speaker identification using gaussian mixture speaker models, IEEE Transactions on Speech and Audio Processing 3: 72–83.
* Sabarimalai Manikandan and Soman (2010) Sabarimalai Manikandan, M. and Soman, K. (2010). Robust heart sound activity detection in noisy environments, Electronics Letters 46(16): 1100 –1102.
* Tran et al. (2010) Tran, D. H., Leng, Y. R. and Li, H. (2010). Feature integration for heart sound biometrics, Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pp. 1714 –1717.
|
arxiv-papers
| 2011-05-20T11:08:48 |
2024-09-04T02:49:18.994792
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Francesco Beritelli and Andrea Spadaccini",
"submitter": "Andrea Spadaccini",
"url": "https://arxiv.org/abs/1105.4058"
}
|
1105.4219
|
#
Length contraction in Very Special Relativity
Biplab Dutta†, Kaushik Bhattacharya‡
Department of Physics, Indian Institute of Technology, Kanpur
Kanpur 208016, India email: †bdutta@iitk.ac.in, ‡kaushikb@iitk.ac.in
###### Abstract
Glashow and Cohen claim that many results of the special theory of relativity (SR),
such as time dilation and relativistic velocity addition, can be explained by
using certain proper subgroups of the Lorentz group, which collectively form
the main body of very special relativity (VSR). They did not, however, address
length contraction, which has so far not been studied in VSR at all. In this
article we calculate how the length of a moving rod contracts in VSR,
particularly in the $HOM(2)$ version. The results are interesting in the sense
that the length contraction formulas in VSR generally differ from those of SR,
although in many cases the two theories predict the same contraction of moving rods.
## 1 Introduction
In the recent past Glashow and Cohen [1] proposed the interesting idea of a
very special relativity. By very special relativity (VSR) these authors meant
a theory constituted by subgroups of the Lorentz group; remarkably, these
subgroup transformations keep the velocity of light invariant in inertial
frames, and time dilation remains the same as in special
relativity (SR). Velocity addition has been studied in VSR theories [2] and
there has been an attempt to utilize the VSR theory as the theory of space-
time transformations of dark matter candidates [3]. In a parallel development
some authors have attempted to incorporate the framework of VSR in non-
commutative space-times [4].
The specialty of VSR is that it can produce the constancy of light velocity
and time-dilation with much smaller subgroups of the Lorentz group. Going by
standard convention, where ${\bf K}$ denotes the boost generators and ${\bf J}$
the angular momentum generators of the full Lorentz group, there are
four subgroups of the Lorentz group which exhaust all the candidates for VSR.
One of the four possible versions of a theory of VSR has just two generators,
namely $T_{1}=K_{x}+J_{y}$ and $T_{2}=K_{y}-J_{x}$. This group is called
$T(2)$. If in addition to the above generators of $T(2)$ one adds another
generator $J_{z}$ then the resulting group is called $E(2)$. Instead of adding
$J_{z}$ if one includes $K_{z}$ as the third generator in addition to the two
generators of $T(2)$ one attains another subgroup of the Lorentz group which
is called $HOM(2)$. Lastly, if someone includes both $J_{z}$ and $K_{z}$ in
addition to the two generators of $T(2)$ then one obtains another subgroup of
the Lorentz group which is called $SIM(2)$. These four subgroups of
the Lorentz group, which admit local energy-momentum conservation,
collectively form the main body of VSR transformations.
The topic which still remains untouched is length contraction in VSR. Till now
none of the papers on VSR has clearly stated how the lengths of moving rods
differ from their proper lengths. In this article we discuss length
transformations in VSR. To do so we use the subgroup $HOM(2)$, which preserves
similarity transformations, or homotheties. It is seen that the length
transformation formulas in VSR are dramatically different from the ones we
have in SR. In VSR we observe length contraction, but this contraction is not
equivalent to the one found in SR. Moreover, length contraction in VSR is
direction dependent. If a rod is placed along the $z$-direction of the fixed
frame $S$ and the moving frame, $S^{\prime}$, moves with a uniform velocity
along the $z$ - $z^{\prime}$ axes, then the length transformation formula in
VSR is exactly the same as in SR. But if the rod is placed along the $x$-axis
($y$-axis) in the $S$ frame and the $S^{\prime}$ frame moves with a uniform
velocity along the $x$ - $x^{\prime}$ axes ($y$ - $y^{\prime}$ axes), then the
length transformation relation is not the same as in SR. Moreover, for very
high velocities there is no length contraction for motion along the $x$ or $y$ axes.
The other important finding in this article is that the phenomenon of length
contraction is not symmetrical in the frames $S$ and $S^{\prime}$. By
symmetrical we mean that if the rod is kept at rest in the $S^{\prime}$ frame,
which is moving with respect to frame $S$ with velocity ${\bf u}$, and the
observer is in the $S$ frame, then the length contraction result does not in
general match the case where the rod is at rest in the $S$ frame and the
observer is in the $S^{\prime}$ frame. This phenomenon arises because the VSR
transformation which links the coordinates of the primed frame to the unprimed
frame is not the same as the inverse transformation with the sign of the
velocity changed. The results presented in the article can be experimentally
tested in heavy ion collisions and future experiments at the LHC. Such
experiments could conclusively state whether VSR can actually replace SR in
describing the subtleties of nature.
The material in this article is presented in the following format. The next
section explains the VSR transformation, particularly the $HOM(2)$ version.
The notations and conventions are introduced and using them the expressions of
the $HOM(2)$ transformation and its inverse transformations are deduced.
Section 3 deals with the particular question of length contraction in VSR. The
final section 4 presents the conclusion with a brief discussion of the
results derived in this article. For the sake of completeness we attach two
appendices as appendix A and appendix B where the $HOM(2)$ transformation
matrix and its inverse are derived explicitly.
## 2 Space-time transformations in the $HOM(2)$ group
The $HOM(2)$ subgroup of the Lorentz group consists of the three generators
$T_{1}=K_{x}+J_{y}$, $T_{2}=K_{y}-J_{x}$ and $K_{z}$, where the $K_{i}$'s and
$J_{i}$'s are the generators of Lorentz boosts and 3-space rotations
respectively. These generators satisfy the following commutation relations [2],
$[T_{1},T_{2}]=0,\hskip 10.0pt[T_{1},K_{z}]=iT_{1},\hskip
10.0pt[T_{2},K_{z}]=iT_{2}\,.$ (1)
In VSR, if one transforms from the rest frame of a particle to a frame moving
with a velocity ${\bf u}$ with respect to it, the 4-velocity of the particle
gets transformed. If the 4-velocity of the particle in the rest frame is
$u_{0}$ and its 4-velocity in the moving frame is $u$, then the 4-vectors are
$\displaystyle u_{0}=\left(\begin{array}[]{c}1\\\ 0\\\ 0\\\
0\end{array}\right)\,,\,\,\,\,u=\left(\begin{array}[]{c}\gamma_{u}\\\ -\gamma_{u}
u_{x}\\\ -\gamma_{u} u_{y}\\\ -\gamma_{u} u_{z}\end{array}\right)\,,$ (10)
where $\gamma_{u}={1}/{\sqrt{1-{\bf u}^{2}}}$ and ${\bf
u}^{2}=u_{x}^{2}+u_{y}^{2}+u_{z}^{2}$. The $HOM(2)$ transformation acts as:
$L(u)u_{0}=u\,,$ (11)
where the VSR transformation matrix, $L(u)$, is given by the following
equation
$L(u)=e^{i\alpha T_{1}}e^{i\beta T_{2}}e^{i\phi K_{z}}.$ (12)
The negative sign of the 3-vector part of the 4-vector given in Eq. (10) is
chosen such that the sign matches to the corresponding 3-vector in SR. This
sign convention is different from the sign convention used in Ref. [2, 1]. The
appropriateness of our sign convention will be discussed once we write $L(u)$
in the matrix form. The parameters $\alpha$, $\beta$ and $\phi$ are the
parameters specifying the transformation and they are given as
$\displaystyle\alpha$ $\displaystyle=$
$\displaystyle-\frac{u_{x}}{1+u_{z}}\,,$ (13) $\displaystyle\beta$
$\displaystyle=$ $\displaystyle-\frac{u_{y}}{1+u_{z}}\,,$ (14)
$\displaystyle\phi$ $\displaystyle=$
$\displaystyle-\ln(\gamma_{u}(1+u_{z}))\,,$ (15)
as specified in Ref. [2, 1]. The parameters specified above can be found out
by using the form of the 4-vectors in Eq. (10) and the VSR transformation
equation in Eq. (11). An explicit derivation of the above parameters is given
in appendix A.
The form of the matrices corresponding to the three transformations
$e^{i\alpha T_{1}}$, $e^{i\beta T_{2}}$ and $e^{i\phi K_{z}}$ can be
calculated by using the standard representations of ${\bf J}$ and ${\bf K}$.
The following matrices encapsulate all the properties of the VSR
transformations:
$\displaystyle e^{i\alpha T_{1}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{rrrr}1+\frac{\alpha^{2}}{2!}&\alpha&0&-\frac{\alpha^{2}}{2!}\\\
\alpha&1&0&-\alpha\\\ 0&0&1&0\\\
\frac{\alpha^{2}}{2!}&\alpha&0&1-\frac{\alpha^{2}}{2!}\end{array}\right)\,,$
(20) $\displaystyle e^{i\beta T_{2}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{rrrr}1+\frac{\beta^{2}}{2!}&0&\beta&-\frac{\beta^{2}}{2!}\\\
0&1&0&0\\\ \beta&0&1&-\beta\\\
\frac{\beta^{2}}{2!}&0&\beta&1-\frac{\beta^{2}}{2!}\end{array}\right)\,,$ (25)
$\displaystyle e^{i\phi K_{z}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{rrrr}\cosh\phi&0&0&\sinh\phi\\\ 0&1&0&0\\\
0&0&1&0\\\ \sinh\phi&0&0&\cosh\phi\end{array}\right)\,.$ (30)
From the form of the three transformations as listed above a general $HOM(2)$
transformation, $L(u)$ as given in Eq. (12), written in terms of the velocity
components $u_{x}$, $u_{y}$ and $u_{z}$, can be written as,
$L(u)=\left(\begin{array}[]{cccc}\gamma_{u}&-\frac{u_{x}}{1+u_{z}}&-\frac{u_{y}}{1+u_{z}}&-\frac{\gamma_{u}\left(u_{z}+{\bf
u}^{2}\right)}{\left(1+u_{z}\right)}\\\
-\gamma_{u}u_{x}&1&0&\gamma_{u}u_{x}\\\
-\gamma_{u}u_{y}&0&1&\gamma_{u}u_{y}\\\
-\gamma_{u}u_{z}&-\frac{u_{x}}{1+u_{z}}&-\frac{u_{y}}{1+u_{z}}&\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}{}^{2}\right)}{\left(1+u_{z}\right)}\end{array}\right)\,.$
(31)
In the above expression of $L(u)$ if we put $u_{x}=0$ and $u_{y}=0$ then we
get a resultant $L(u_{z})$ which is equivalent to the Lorentz transformation
matrix in SR. The signs of the resulting SR transformation matrix match the
signs of our $L(u_{z})$. This matching of $L(u_{z})$ with the corresponding
SR Lorentz transformation matrix dictates the sign convention of the 3-vector
part of $u$ in Eq. (10). The convention of the SR Lorentz transformation followed in
this article matches the convention of Landau and Lifshitz as explained in Ref. [5].
In a similar way one can also calculate the inverse of the $HOM(2)$
transformation, $L^{-1}(u)$, where the inverse transformation is defined as
$L^{-1}(u)u=u_{0}\,.$ (32)
In the present case,
$L^{-1}(u)=e^{-i\phi K_{z}}e^{-i\beta T_{2}}e^{-i\alpha T_{1}}\,.$ (33)
The individual transformation matrices now are given as:
$\displaystyle e^{-i\phi K_{z}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{rrrr}\cosh\phi&0&0&-\sinh\phi\\\
0&1&0&0\\\ 0&0&1&0\\\ -\sinh\phi&0&0&\cosh\phi\end{array}\right)\,,$ (38)
$\displaystyle e^{-i\beta T_{2}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{rrrr}1+\frac{\beta^{2}}{2!}&0&-\beta&-\frac{\beta^{2}}{2!}\\\
0&1&0&0\\\ -\beta&0&1&\beta\\\
\frac{\beta^{2}}{2!}&0&-\beta&1-\frac{\beta^{2}}{2!}\end{array}\right)\,,$
(43) $\displaystyle e^{-i\alpha T_{1}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{rrrr}1+\frac{\alpha^{2}}{2!}&-\alpha&0&-\frac{\alpha^{2}}{2!}\\\
-\alpha&1&0&\alpha\\\ 0&0&1&0\\\
\frac{\alpha^{2}}{2!}&-\alpha&0&1-\frac{\alpha^{2}}{2!}\end{array}\right)\,,$
(48)
where the parameters $\alpha,\beta,\phi$ are given by (13), (14), and (15).
The inverse transformation matrix of the ${HOM}(2)$ group is given as,
$L^{-1}(u)=\left(\begin{array}[]{cccc}\gamma_{u}&\gamma_{u}u_{x}&\gamma_{u}u_{y}&\gamma_{u}u_{z}\\\
\frac{u_{x}}{1+u_{z}}&1&0&-\frac{u_{x}}{1+u_{z}}\\\
\frac{u_{y}}{1+u_{z}}&0&1&-\frac{u_{y}}{1+u_{z}}\\\
\frac{\gamma_{u}\left(u_{z}+{\bf
u}^{2}\right)}{\left(1+u_{z}\right)}&\gamma_{u}u_{x}&\gamma_{u}u_{y}&\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}{}^{2}\right)}{\left(1+u_{z}\right)}\end{array}\right)\,.$
(49)
It can be shown that with these forms of $L(u)$ and $L^{-1}(u)$ one obtains
$L(u)L^{-1}(u)=L^{-1}(u)L(u)=1$. From the expressions in Eq. (31) and Eq. (49)
it is clear that the inverse transformation in VSR is not obtained by altering
the signs of the velocity components in the $L(u)$ matrix. This property of
the VSR transformations differs from the corresponding property of SR
transformations. Putting $u_{x}=0$ and $u_{y}=0$ in the expression of
$L^{-1}(u)$ we get a resultant $L^{-1}(u_{z})$ which is equivalent to the
corresponding inverse Lorentz transformation matrix in SR.
Let us consider two inertial frames $S^{\prime}$ and $S$ which coincide with
each other at $t=t^{\prime}=0$. Suppose the $S^{\prime}$ frame is moving with
velocity ${\bf u}$ with respect to $S$ frame. The coordinates of the two
frames are related by
$\mathbf{x}=L^{-1}(u)\,\mathbf{x^{\prime}}\,,$ (50)
where $\mathbf{x}=(t,x,y,z)$ and
$\mathbf{x^{\prime}}=(t^{\prime},x^{\prime},y^{\prime},z^{\prime})$. Using Eq.
(49) the coordinate transformation equations can be explicitly written as,
$\displaystyle t$ $\displaystyle=$
$\displaystyle\gamma_{u}t^{\prime}+\gamma_{u}u_{x}x^{\prime}+\gamma_{u}u_{y}y^{\prime}+\gamma_{u}u_{z}z^{\prime}$
(51) $\displaystyle x$ $\displaystyle=$
$\displaystyle\frac{u_{x}}{1+u_{z}}t^{\prime}+x^{\prime}-\frac{u_{x}}{1+u_{z}}z^{\prime}$
(52) $\displaystyle y$ $\displaystyle=$
$\displaystyle\frac{u_{y}}{1+u_{z}}t^{\prime}+y^{\prime}-\frac{u_{y}}{1+u_{z}}z^{\prime}$
(53) $\displaystyle z$ $\displaystyle=$
$\displaystyle\frac{\gamma_{u}\left(u_{z}+{\bf
u}^{2}\right)}{\left(1+u_{z}\right)}t^{\prime}+\gamma_{u}u_{x}x^{\prime}+\gamma_{u}u_{y}y^{\prime}+\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}^{2}\right)}{\left(1+u_{z}\right)}z^{\prime}$ (54)
From the above equations one can explicitly verify that
$ds^{2}={ds^{\prime}}^{2}$, where the invariant line-element squared is
${ds}^{2}=-{dt}^{2}+{dx}^{2}+{dy}^{2}+{dz}^{2}$. As the square of the line-
element remains invariant under $HOM(2)$ transformations one can derive a time
dilation formula in this case. The result matches exactly with that of SR.
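These properties are easy to verify numerically. The following sketch (Python/NumPy, illustrative only and not part of the original derivation) builds $L(u)$ from Eq. (31) and $L^{-1}(u)$ from Eq. (49) and checks that $L(u)L^{-1}(u)=1$, that the line element is preserved, and that $L^{-1}(u)$ is not $L(-u)$:

```python
import numpy as np

def hom2_L(u):
    """HOM(2) transformation of Eq. (31), metric signature (-,+,+,+), c = 1."""
    ux, uy, uz = u
    u2 = np.dot(u, u)
    g = 1.0 / np.sqrt(1.0 - u2)
    return np.array([
        [g,     -ux/(1+uz), -uy/(1+uz), -g*(uz+u2)/(1+uz)],
        [-g*ux,  1,          0,          g*ux],
        [-g*uy,  0,          1,          g*uy],
        [-g*uz, -ux/(1+uz), -uy/(1+uz),  g*(1-u2+uz+uz**2)/(1+uz)],
    ])

def hom2_Linv(u):
    """Inverse transformation of Eq. (49)."""
    ux, uy, uz = u
    u2 = np.dot(u, u)
    g = 1.0 / np.sqrt(1.0 - u2)
    return np.array([
        [g,                 g*ux, g*uy, g*uz],
        [ux/(1+uz),         1,    0,   -ux/(1+uz)],
        [uy/(1+uz),         0,    1,   -uy/(1+uz)],
        [g*(uz+u2)/(1+uz),  g*ux, g*uy, g*(1-u2+uz+uz**2)/(1+uz)],
    ])

u = np.array([0.2, 0.3, 0.4])
L, Linv = hom2_L(u), hom2_Linv(u)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
assert np.allclose(L @ Linv, np.eye(4))         # L(u) L^{-1}(u) = 1
assert np.allclose(Linv.T @ eta @ Linv, eta)    # ds^2 = ds'^2 under Eq. (50)
assert not np.allclose(Linv, hom2_L(-u))        # inverse is not L(-u)
```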
## 3 Length of a moving rod in VSR
In this section we will discuss the length contraction formulas in VSR. As VSR
has a preferred direction, along the $z$-axis of the $S$ frame, one cannot
arbitrarily rotate the coordinate systems to suit one's needs as in SR. In this
case one has to rely more on a mathematical description of the physical
problem, and the concept of isotropy has to be set aside. The most general
treatment of the length contraction formula requires the moving rod to be
arbitrarily placed in the moving frame, which can have any arbitrary velocity
(although the magnitude of the velocity must be smaller than one). The general
setting of the length contraction problem is complicated and cumbersome in VSR
because the transformation equations are themselves more complicated than in
SR. But a meaningful approach and some interesting results can be obtained
from some specific cases, and in this section we will try to elucidate these
points explicitly.
### 3.1 The rod is at rest in the $S$ frame
In this section we discuss the issue of length transformations in VSR. We
will focus our attention particularly on the ${HOM}(2)$ transformations. For
the first case we suppose that a rod is at rest along the $x$-axis in the $S$
frame. The length of the rod is $\Delta x=x_{2}-x_{1}\equiv l_{0}$. An
observer in the $S^{\prime}$ frame, which is moving with a velocity ${\bf u}$
with respect to the $S$ frame, can measure the length of the rod in his/her
frame. For the measurement of the length of the rod in motion one has to know
the coordinates of the two ends of the rod
(($x^{\prime}_{1},y^{\prime}_{1},z^{\prime}_{1}$) and
($x^{\prime}_{2},y^{\prime}_{2},z^{\prime}_{2}$)) simultaneously (at
$t^{\prime}$). From the form of the coordinate transformations given in the
last section we can write that
$\displaystyle x_{1}$ $\displaystyle=$
$\displaystyle\frac{u_{x}}{1+u_{z}}t^{\prime}+x^{\prime}_{1}-\frac{u_{x}}{1+u_{z}}z^{\prime}_{1}\,,$
$\displaystyle y_{1}$ $\displaystyle=$
$\displaystyle\frac{u_{y}}{1+u_{z}}t^{\prime}+y^{\prime}_{1}-\frac{u_{y}}{1+u_{z}}z^{\prime}_{1}\,,$
$\displaystyle z_{1}$ $\displaystyle=$
$\displaystyle\frac{\gamma_{u}\left(u_{z}+{\bf
u}^{2}\right)}{\left(1+u_{z}\right)}t^{\prime}+\gamma_{u}u_{x}x^{\prime}_{1}+\gamma_{u}u_{y}y^{\prime}_{1}+\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}^{2}\right)}{\left(1+u_{z}\right)}z^{\prime}_{1}\,,$
which connects the coordinates of one end of the rod in $S$ frame to its
corresponding coordinates in the $S^{\prime}$ frame. A similar set of
relations can be written for $x_{2}$, $y_{1}$ and $z_{1}$ (coordinates of the
other end of the rod in $S$ frame) as
$\displaystyle x_{2}$ $\displaystyle=$
$\displaystyle\frac{u_{x}}{1+u_{z}}t^{\prime}+x^{\prime}_{2}-\frac{u_{x}}{1+u_{z}}z^{\prime}_{2}\,,$
$\displaystyle y_{1}$ $\displaystyle=$
$\displaystyle\frac{u_{y}}{1+u_{z}}t^{\prime}+y^{\prime}_{2}-\frac{u_{y}}{1+u_{z}}z^{\prime}_{2}\,,$
$\displaystyle z_{1}$ $\displaystyle=$
$\displaystyle\frac{\gamma_{u}\left(u_{z}+{\bf
u}^{2}\right)}{\left(1+u_{z}\right)}t^{\prime}+\gamma_{u}u_{x}x^{\prime}_{2}+\gamma_{u}u_{y}y^{\prime}_{2}+\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}^{2}\right)}{\left(1+u_{z}\right)}z^{\prime}_{2}\,.$
It is interesting to note that although the $y$ and $z$ coordinates remain the
same for the two ends of the rod in the $S$ frame, in the $S^{\prime}$ frame
they do not. Subtracting the first triplet of equations from the second
triplet we have
$\displaystyle l_{0}$ $\displaystyle=$ $\displaystyle\Delta
x^{\prime}-\frac{u_{x}}{1+u_{z}}\Delta z^{\prime}\,,$ (55) $\displaystyle 0$
$\displaystyle=$ $\displaystyle\Delta y^{\prime}-\frac{u_{y}}{1+u_{z}}\Delta
z^{\prime}\,,$ (56) $\displaystyle 0$ $\displaystyle=$
$\displaystyle\gamma_{u}u_{x}\Delta x^{\prime}+\gamma_{u}u_{y}\Delta
y^{\prime}+\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}^{2}\right)}{\left(1+u_{z}\right)}\Delta z^{\prime}\,,$ (57)
where $\Delta x^{\prime}\equiv x^{\prime}_{2}-x^{\prime}_{1}$, $\Delta
y^{\prime}\equiv y^{\prime}_{2}-y^{\prime}_{1}$ and $\Delta z^{\prime}\equiv
z^{\prime}_{2}-z^{\prime}_{1}$. Using the first two equations, in the above
set of equations, one can deduce from Eq. (57) that
$\displaystyle\Delta z^{\prime}=-u_{x}l_{0}\,.$ (58)
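In detail, substituting Eq. (55) and Eq. (56) into Eq. (57) and using ${\bf u}^{2}=u_{x}^{2}+u_{y}^{2}+u_{z}^{2}$ gives
$\displaystyle 0=\gamma_{u}u_{x}l_{0}+\frac{\gamma_{u}\left(u_{x}^{2}+u_{y}^{2}+1-{\bf u}^{2}+u_{z}+u_{z}^{2}\right)}{1+u_{z}}\,\Delta z^{\prime}=\gamma_{u}u_{x}l_{0}+\gamma_{u}\,\Delta z^{\prime}\,,$
since $u_{x}^{2}+u_{y}^{2}+1-{\bf u}^{2}+u_{z}+u_{z}^{2}=1+u_{z}$, which is the content of Eq. (58).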
Using Eq. (55), Eq. (56) and Eq. (58), one can show that
$\displaystyle
l=l_{0}\sqrt{\left(1-\frac{u_{x}^{2}}{1+u_{z}}\right)^{2}+\frac{u_{x}^{2}u_{y}^{2}}{\left(1+u_{z}\right)^{2}}+u_{x}^{2}}\,,$
(59)
where
$\displaystyle l\equiv\sqrt{(\Delta x^{\prime})^{2}+(\Delta
y^{\prime})^{2}+(\Delta z^{\prime})^{2}}\,,$ (60)
is the length of the rod in the $S^{\prime}$ frame. It may happen that the
rod, at rest, is placed along the $x$ axis of the $S$ frame while some of the
velocity components of ${\bf u}$ are zero. In this case Eq. (59) gets
simplified. If $u_{y}\neq 0$ and $u_{z}=u_{x}=0$, or $u_{z}\neq 0$ and
$u_{x}=u_{y}=0$, in both cases the length of the rod remains the same. On the
other hand if $u_{x}\neq 0$ and $u_{y}=u_{z}=0$ we will have length
contraction and Eq. (59) becomes
$\displaystyle l=l_{0}\sqrt{(1-u_{x}^{2})^{2}+u_{x}^{2}}\,,$ (61)
which shows that in general there will be a length contraction but the amount
of contraction depends on $u_{x}$. For $u_{x}\to 1$ length contraction
disappears.
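As a purely numerical illustration of Eq. (59) and Eq. (61) (not part of the original analysis; Python/NumPy assumed), one can tabulate the VSR and SR lengths of a unit rod for motion along the $x$-axis:

```python
import numpy as np

def moving_rod_length_vsr(l0, u):
    """Eq. (59): length measured in S' of a rod of rest length l0 lying along
    the x-axis of S, for S' moving with 3-velocity u = (ux, uy, uz), c = 1."""
    ux, uy, uz = u
    return l0 * np.sqrt((1 - ux**2/(1+uz))**2
                        + (ux*uy/(1+uz))**2
                        + ux**2)

# Special case of Eq. (61): motion purely along x
for ux in (0.1, 0.5, 0.9, 0.99):
    l_vsr = moving_rod_length_vsr(1.0, (ux, 0.0, 0.0))
    l_sr = np.sqrt(1 - ux**2)
    print(f"ux={ux:.2f}  VSR: {l_vsr:.4f}  SR: {l_sr:.4f}")
```

The printed values show that the VSR contraction of Eq. (61) is much weaker than the SR one at high speed and disappears as $u_{x}\to 1$.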
If the rod at rest in the $S$ frame was kept along the $y$ axis with
coordinates $(x_{1},y_{1},z_{1})$ and $(x_{1},y_{2},z_{1})$ then analyzing in
a similar way one can write
$\displaystyle
l=l_{0}\sqrt{\left(1-\frac{u_{y}^{2}}{1+u_{z}}\right)^{2}+\frac{u_{y}^{2}u_{x}^{2}}{\left(1+u_{z}\right)^{2}}+u_{y}^{2}}\,,$
(62)
where now $l_{0}=y_{2}-y_{1}$ and $l$ is as given in Eq. (60). If $u_{x}\neq
0$ and $u_{y}=u_{z}=0$, or $u_{z}\neq 0$ and $u_{x}=u_{y}=0$, in both cases
the length of the rod remains the same. On the other hand if $u_{y}\neq 0$ and
$u_{z}=u_{x}=0$ we will have a length contraction formula equivalent to the
one in Eq. (61) where $u_{x}$ is replaced by $u_{y}$. The $HOM(2)$
transformations have a preferred axis which is along the $z$-axis and
consequently the length transformation formulas along these two directions are
similar.
Lastly we come to the case where the rod at rest in the $S$ frame is along the
$z$ axis with the coordinates of its ends given by $(x_{1},y_{1},z_{1})$ and
$(x_{1},y_{1},z_{2})$. In this case if the length of the rod at rest is given
by $l_{0}=z_{2}-z_{1}$ then, using Eq.(55), Eq. (56) and Eq. (57), it can be
shown that
$\displaystyle l=l_{0}\sqrt{\frac{u_{x}^{2}(1-{\bf
u}^{2})}{(1+u_{z})^{2}}+\frac{u_{y}^{2}(1-{\bf u}^{2})}{(1+u_{z})^{2}}+(1-{\bf
u}^{2})}\,,$ (63)
where $l$ is as given in Eq. (60). It is immediately observed that the length
transformation formula for the third case is different from the previous two
cases. This has to do with the special status of the $z$-axis in $HOM(2)$ and
in general in VSR. If $u_{x}\neq 0$ and $u_{y}=u_{z}=0$, or $u_{y}\neq 0$ and
$u_{z}=u_{x}=0$, in both cases the length of the rod remains the same. On the
other hand if $u_{z}\neq 0$ and $u_{x}=u_{y}=0$ we will have a length
contraction formula equivalent to the one in SR as
$\displaystyle l=l_{0}\sqrt{1-u_{z}^{2}}\,.$ (64)
This formula, corresponding to relative motion along $z$-axis, again shows the
special status of the preferred axis. If fractional length contraction is
defined by $\Delta l/l_{0}$ where $\Delta l\equiv l_{0}-l$, for the case of
VSR and SR then the contents of Eq. (61) and Eq. (64) can be plotted to show
the difference of VSR and SR. Such a plot is given in Fig. 1.
Figure 1: The plot of the $\Delta l/l_{0}$, where $\Delta l\equiv l_{0}-l$,
for the case of VSR and SR when the rod is at rest in $S$ frame. In the
abscissa we have either $u_{x}$ (when $u_{y}=u_{z}=0$) or $u_{y}$ (when
$u_{x}=u_{z}=0$).
It can be checked, using the fact that $|{\bf u}|^{2}<1$, that the expressions
in Eq. (59), Eq. (62) and Eq. (63) give length contractions. Although VSR
transformations constitute only a small subgroup of the full Lorentz group,
the length of a moving rod never expands.
### 3.2 The rod is at rest in the $S^{\prime}$ frame
In this case we suppose that a rod is at rest along the $x^{\prime}$-axis in
the $S^{\prime}$ frame. The length of the rod is $\Delta
x^{\prime}=x^{\prime}_{2}-x^{\prime}_{1}\equiv l_{0}$. An observer in the $S$
frame, which is moving with a velocity $-{\bf u}$ with respect to the
$S^{\prime}$ frame, can measure the length of the rod in his/her frame. For the measurement
of the length of the rod in motion one has to know the coordinates of the two
ends of the rod (($x_{1},y_{1},z_{1}$) and ($x_{2},y_{2},z_{2}$))
simultaneously (at $t$). From the form of $L(u)$, in Eq. (31), we can write
$\displaystyle x^{\prime}_{1}$ $\displaystyle=$
$\displaystyle-\gamma_{u}u_{x}t+x_{1}+\gamma_{u}u_{x}z_{1}\,,$ $\displaystyle
y^{\prime}_{1}$ $\displaystyle=$
$\displaystyle-\gamma_{u}u_{y}t+y_{1}+\gamma_{u}u_{y}z_{1}\,,$ $\displaystyle
z^{\prime}_{1}$ $\displaystyle=$
$\displaystyle-\gamma_{u}u_{z}t-\frac{u_{x}}{1+u_{z}}x_{1}-\frac{u_{y}}{1+u_{z}}y_{1}+\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}^{2}\right)}{\left(1+u_{z}\right)}z_{1}\,,$
which connects the spatial coordinates of one end of the rod in the two
frames. For the other end we must have
$\displaystyle x^{\prime}_{2}$ $\displaystyle=$
$\displaystyle-\gamma_{u}u_{x}t+x_{2}+\gamma_{u}u_{x}z_{2}\,,$ $\displaystyle
y^{\prime}_{1}$ $\displaystyle=$
$\displaystyle-\gamma_{u}u_{y}t+y_{2}+\gamma_{u}u_{y}z_{2}\,,$ $\displaystyle
z^{\prime}_{1}$ $\displaystyle=$
$\displaystyle-\gamma_{u}u_{z}t-\frac{u_{x}}{1+u_{z}}x_{2}-\frac{u_{y}}{1+u_{z}}y_{2}+\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}^{2}\right)}{\left(1+u_{z}\right)}z_{2}\,.$
Subtracting the first triplet of equations from the second triplet we have
$\displaystyle l_{0}$ $\displaystyle=$ $\displaystyle\Delta
x+\gamma_{u}u_{x}\Delta z\,,$ (65) $\displaystyle 0$ $\displaystyle=$
$\displaystyle\Delta y+\gamma_{u}u_{y}\Delta z\,,$ (66) $\displaystyle 0$
$\displaystyle=$ $\displaystyle-\frac{u_{x}}{1+u_{z}}\Delta
x-\frac{u_{y}}{1+u_{z}}\Delta y+\frac{\gamma_{u}\left(1-{\bf
u}^{2}+u_{z}+u_{z}^{2}\right)}{\left(1+u_{z}\right)}\Delta z\,,$ (67)
where $\Delta x\equiv x_{2}-x_{1}$, $\Delta y\equiv y_{2}-y_{1}$ and $\Delta
z\equiv z_{2}-z_{1}$. From the last set of equations one obtains
$\displaystyle\Delta z=\frac{l_{0}u_{x}}{\gamma_{u}(1+u_{z})}\,.$ (68)
If the length of the moving rod be
$\displaystyle l\equiv\sqrt{(\Delta x)^{2}+(\Delta y)^{2}+(\Delta z)^{2}}\,,$
(69)
then in this case we will have
$\displaystyle
l=l_{0}\sqrt{\left(1-\frac{u_{x}^{2}}{1+u_{z}}\right)^{2}+\frac{u_{x}^{2}u_{y}^{2}}{\left(1+u_{z}\right)^{2}}+u_{x}^{2}\frac{(1-{\bf
u}^{2})}{(1+u_{z})^{2}}}\,.$ (70)
This equation is not equivalent to Eq. (59), showing that in VSR length
contraction depends upon the frame. If the rod were kept along the $y^{\prime}$
direction in the $S^{\prime}$ frame, with
$l_{0}=y^{\prime}_{2}-y^{\prime}_{1}$, we would expect the length contraction
formula to be exactly the same as the one given above with $u_{x}$ and $u_{y}$
interchanged. If $u_{x}=0$ then there is no length transformation. If
on the other hand $u_{x}\neq 0$ but $u_{y}=u_{z}=0$ then the above formula
becomes
$\displaystyle l=l_{0}\sqrt{1-u_{x}^{2}}\,,$ (71)
which matches exactly with the corresponding result from SR. If on the other
hand the rod is placed along the $y^{\prime}$ axis and $u_{y}\neq 0$ but
$u_{z}=u_{x}=0$ then also we get a length contraction formula exactly similar
to Eq. (71) except that there $u_{x}$ is replaced by $u_{y}$. In both these
cases it is observed that we get the same results as we get from SR.
If the rod in the $S^{\prime}$ frame is placed along the $z^{\prime}$ axis it
can easily be found that in this case
$\displaystyle l=l_{0}\sqrt{u_{x}^{2}+u_{y}^{2}+(1-{\bf u}^{2})}\,,$ (72)
where $l$ is as given in Eq. (69) and $l_{0}=z^{\prime}_{2}-z^{\prime}_{1}$.
If in this case $u_{z}\neq 0$ but $u_{x}=u_{y}=0$ we again recover the SR
formula $l=l_{0}\sqrt{1-u_{z}^{2}}$. In this case also one can check, using
the fact that $|{\bf u}|^{2}<1$, that Eq. (70) and Eq. (72) give length
contractions.
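The frame dependence can again be illustrated numerically (Python, illustrative only): for a generic velocity, Eq. (59) and Eq. (70) give different lengths for a rod of the same rest length oriented along the respective $x$ and $x^{\prime}$ axes.

```python
import numpy as np

u = (0.3, 0.2, 0.4)
ux, uy, uz = u
u2 = ux**2 + uy**2 + uz**2
l0 = 1.0

# Rod along x at rest in S, observed from S' -- Eq. (59)
l_rest_in_S = l0 * np.sqrt((1 - ux**2/(1+uz))**2 + (ux*uy/(1+uz))**2 + ux**2)

# Rod along x' at rest in S', observed from S -- Eq. (70)
l_rest_in_Sp = l0 * np.sqrt((1 - ux**2/(1+uz))**2 + (ux*uy/(1+uz))**2
                            + ux**2 * (1 - u2)/(1+uz)**2)

print(l_rest_in_S, l_rest_in_Sp)   # the two measurements differ for generic u
```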
## 4 Discussion and conclusion
From the above analysis of length transformations in VSR we see that in
general the VSR and SR results do not match. But there are remarkable
similarities which may hinder one from discovering the difference in the
results predicted by the two theories. In our convention there are two frames
$S$ and $S^{\prime}$ which coincide with each other at $t=t^{\prime}=0$. As
time evolves the $S^{\prime}$ frame moves relative to the $S$ frame with a
uniform 3-velocity ${\bf u}$. If the rod is kept along any axis in the
$S^{\prime}$ frame and one measures its length from the $S$ frame then the
length transformations do not coincide with the SR results. But interestingly,
if the rod is sliding along a common axis of the $S$ and $S^{\prime}$ frames
then we get the exact SR length contraction results. Consequently, if the
observer is in the $S$ frame and the rod is oriented along one of the three
axes, with that axis along the direction of the relative velocity ${\bf u}$,
one will never discover whether the theory of relativity is SR or VSR. This
difference between SR and VSR becomes more blurred because in VSR, as in SR,
the velocity of light is independent of the reference frame and the
time-dilation formula is exactly the same.
On the other hand if the rod is at rest in the $S$ frame itself and the
observer is in the $S^{\prime}$ frame then the length transformation formulas
are different from SR if the orientation of the rod and the relative velocity
is along the $x$ or $y$ axes. But if the rod is placed along the $z$ axis in
the $S$ frame and the frame $S^{\prime}$ also moves along the $z$ axis of the
$S$ frame with velocity $u_{z}$ then the length of the rod measured in the
$S^{\prime}$ frame is contracted in the same way as one expects from SR.
From the forms of $L(u)$ and $L^{-1}(u)$ as given in Eq. (31) and Eq. (49) it
is observed that $L^{-1}(u)$ is not obtained from $L(u)$ by putting a minus
sign in front of all the velocity components appearing in the expression of
$L(u)$. This is the cause of the different length contraction formulas in the
two frames, as shown in the last section. The interesting thing about VSR
is that in spite of this asymmetric nature of the transformations, the square
of the line element remains invariant as in SR, and consequently the
time-dilation formula remains exactly the same as in SR. The length
contraction formulas in VSR do depend upon the sign of $u_{z}$, which shows
that the amount of contraction of a moving rod depends upon its direction of
motion along the $z$ or $z^{\prime}$ axes.
If VSR is really the theory which nature follows, perhaps in the very high
energy sector or near the electro-weak symmetry breaking scale, then one may
hope to see the effects of VSR length contractions at the LHC or in future
colliders. At present there is no confirmation of any deviation from the
length contraction results obtained from SR. In the near future, heavy ion
collision experiments and other related experiments could be performed to look
for any discrepancy between the length contraction formulas and those of SR.
In conclusion, in this article we have studied how a moving rod's length
changes from its rest length in VSR, specifically in the $HOM(2)$ version.
Length contraction is observed in all cases, but there are some variations in
the transformation equations in contrast to those of SR, although the SR
results are reproduced in many special cases of VSR. The other important
conclusion is that in general the phenomenon of length contraction of a rod in
VSR does depend upon the frame from which one observes, a fact which is very
difficult to accept in any relativistic theory.
## Appendix
## Appendix A $HOM(2)$ transformations
The VSR generators are given by $T_{1}=K_{x}+J_{y},T_{2}=K_{y}-J_{x}$ and
$K_{z}$ where $K_{i}$’s and $J_{i}$’s are the generators of Lorentz boosts and
3-space rotations of the full Lorentz group. In this article we choose the
following form of the generators of ${\bf J}$ and ${\bf K}$ as given in the
book by L. H. Ryder [6]:
$\displaystyle J_{x}=-i\left(\begin{array}[]{cccc}0&0&0&0\\\ 0&0&0&0\\\
0&0&0&1\\\
0&0&-1&0\end{array}\right)\,,\,\,J_{y}=-i\left(\begin{array}[]{cccc}0&0&0&0\\\
0&0&0&-1\\\ 0&0&0&0\\\
0&1&0&0\end{array}\right)\,,\,\,J_{z}=-i\left(\begin{array}[]{cccc}0&0&0&0\\\
0&0&1&0\\\ 0&-1&0&0\\\ 0&0&0&0\end{array}\right)\,,$ (85)
and
$\displaystyle K_{x}=-i\left(\begin{array}[]{cccc}0&1&0&0\\\ 1&0&0&0\\\
0&0&0&0\\\
0&0&0&0\end{array}\right)\,,\,\,K_{y}=-i\left(\begin{array}[]{cccc}0&0&1&0\\\
0&0&0&0\\\ 1&0&0&0\\\
0&0&0&0\end{array}\right)\,,\,\,K_{z}=-i\left(\begin{array}[]{cccc}0&0&0&1\\\
0&0&0&0\\\ 0&0&0&0\\\ 1&0&0&0\end{array}\right)\,.$ (98)
Consequently one must have
$\displaystyle T_{1}=-i\left(\begin{array}[]{cccc}0&1&0&0\\\ 1&0&0&-1\\\
0&0&0&0\\\
0&1&0&0\end{array}\right)\,,\,\,\,T_{2}=-i\left(\begin{array}[]{cccc}0&0&1&0\\\
0&0&0&0\\\ 1&0&0&-1\\\ 0&0&1&0\end{array}\right)\,.$ (107)
Noting that
$\displaystyle T^{2}_{1}=-\left(\begin{array}[]{cccc}1&0&0&-1\\\ 0&0&0&0\\\
0&0&0&0\\\
1&0&0&-1\end{array}\right)\,,\,\,\,T^{2}_{2}=-\left(\begin{array}[]{cccc}1&0&0&-1\\\
0&0&0&0\\\ 0&0&0&0\\\ 1&0&0&-1\end{array}\right)\,,$ (116)
and $T_{1}^{3}=T_{2}^{3}=0$ we have
$\displaystyle e^{i\alpha
T_{1}}=\left(\begin{array}[]{cccc}1+\frac{\alpha^{2}}{2!}&\alpha&0&-\frac{\alpha^{2}}{2!}\\\
\alpha&1&0&-\alpha\\\ 0&0&1&0\\\
\frac{\alpha^{2}}{2!}&\alpha&0&1-\frac{\alpha^{2}}{2!}\end{array}\right)\,,\,\,\,e^{i\beta
T_{2}}=\left(\begin{array}[]{cccc}1+\frac{\beta^{2}}{2!}&0&\beta&-\frac{\beta^{2}}{2!}\\\
0&1&0&0\\\ \beta&0&1&-\beta\\\
\frac{\beta^{2}}{2!}&0&\beta&1-\frac{\beta^{2}}{2!}\end{array}\right)\,.$
(125)
The square of $K_{z}$ is given by
$\displaystyle K_{z}^{2}=-\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&0&0&0\\\
0&0&0&0\\\ 0&0&0&1\end{array}\right)\,,$ (130)
and it comes out trivially that $K_{z}^{3}=-K_{z}$. From these facts one can
write
$\displaystyle e^{i\phi
K_{z}}=\left(\begin{array}[]{cccc}\cosh\phi&0&0&\sinh\phi\\\ 0&1&0&0\\\
0&0&1&0\\\ \sinh\phi&0&0&\cosh\phi\end{array}\right)\,.$ (135)
Using the above forms of the matrices we can now calculate $L(u)=e^{i\alpha
T_{1}}e^{i\beta T_{2}}e^{i\phi K_{z}}$ which comes out as
$\displaystyle
L(u)=\left(\begin{array}[]{cccc}\cosh\phi+\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}&\alpha&\beta&\sinh\phi-\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}\\\
\alpha e^{-\phi}&1&0&-\alpha e^{-\phi}\\\ \beta e^{-\phi}&0&1&-\beta
e^{-\phi}\\\
\sinh\phi+\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}&\alpha&\beta&\cosh\phi-\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}\end{array}\right)\,,$
(140)
where we have used the relation
$e^{-\phi}=\cosh\phi-\sinh\phi\,.$
As $L(u)u_{0}=u$ where $u_{0}$ and $u$ are as given in Eq. (10), we get the
following set of equations
$\displaystyle\gamma_{u}$ $\displaystyle=$
$\displaystyle\cosh\phi+\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}\,,$
(141) $\displaystyle\gamma_{u}u_{x}$ $\displaystyle=$ $\displaystyle-\alpha
e^{-\phi}\,,$ (142) $\displaystyle\gamma_{u}u_{y}$ $\displaystyle=$
$\displaystyle-\beta e^{-\phi}\,,$ (143) $\displaystyle\gamma_{u}u_{z}$
$\displaystyle=$
$\displaystyle-\sinh\phi-\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}\,.$
(144)
Adding Eq. (144) and Eq. (141) we get
$e^{-\phi}=\gamma_{u}(1+u_{z})\,.$
Taking logarithm of both sides we get
$\displaystyle\phi=-\ln[\gamma_{u}(1+u_{z})]\,.$ (145)
Using the expression of $e^{-\phi}$ in Eq. (142) and Eq. (143) we get
$\displaystyle\alpha=-\frac{u_{x}}{1+u_{z}}\,,\,\,\,\,\beta=-\frac{u_{y}}{1+u_{z}}\,.$
(146)
Remembering that $L(u)$ is a $4\times 4$ matrix, its individual matrix
elements can be written as $L_{n,m}$. As in SR, we want to specify all the
matrix elements $L_{n,m}$ in terms of the velocity components $u_{i}$ where
$i=x,\,y,\,z$. There remain two matrix elements of $L(u)$, $L_{1,4}$ and
$L_{4,4}$, in Eq. (140) which require some dressing up before they can be
expressed in terms of the velocity components. Adding Eq. (141) to
$L_{1,4}=\sinh\phi-\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}$
we get $L_{1,4}+\gamma_{u}=e^{\phi}$. Some trivial manipulation then yields
$\displaystyle L_{1,4}=-\frac{\gamma_{u}(u^{2}+u_{z})}{1+u_{z}}\,.$ (147)
In a similar fashion subtracting Eq. (144) from the expression of
$L_{4,4}=\cosh\phi-\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}$
we get
$\displaystyle L_{4,4}=\frac{\gamma_{u}(1-u^{2}+u_{z}+u_{z}^{2})}{1+u_{z}}\,.$
(148)
Ultimately using Eq. (141) to Eq. (144) and Eq. (147) and Eq. (148) one can
write $L(u)$ purely in terms of the velocity components as written in Eq.
(31).
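As a cross-check (not part of the derivation itself), the matrix algebra above can be verified numerically with the explicit generator matrices of Eqs. (98) and (107); the sketch below assumes Python with NumPy and SciPy.

```python
import numpy as np
from scipy.linalg import expm

# i*T1, i*T2, i*Kz are the real matrices quoted in Eqs. (107) and (98)
# (the generators themselves carry a factor -i), so e^{i a T1} = expm(a * iT1).
iT1 = np.array([[0,1,0,0],[1,0,0,-1],[0,0,0,0],[0,1,0,0]], dtype=float)
iT2 = np.array([[0,0,1,0],[0,0,0,0],[1,0,0,-1],[0,0,1,0]], dtype=float)
iKz = np.array([[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]], dtype=float)

assert np.allclose(np.linalg.matrix_power(iT1, 3), 0)   # T1^3 = 0, cf. Eq. (116)
assert np.allclose(np.linalg.matrix_power(iT2, 3), 0)   # T2^3 = 0

def L_from_params(alpha, beta, phi):
    """Eq. (12): L(u) = e^{i alpha T1} e^{i beta T2} e^{i phi Kz}."""
    return expm(alpha * iT1) @ expm(beta * iT2) @ expm(phi * iKz)

# Reproduce Eq. (11) for a sample velocity, with the parameters of Eqs. (13)-(15)
u = np.array([0.3, 0.2, 0.4]); g = 1.0/np.sqrt(1 - u @ u)
alpha, beta = -u[0]/(1+u[2]), -u[1]/(1+u[2])
phi = -np.log(g*(1+u[2]))
u0 = np.array([1.0, 0.0, 0.0, 0.0])
u4 = np.array([g, -g*u[0], -g*u[1], -g*u[2]])
assert np.allclose(L_from_params(alpha, beta, phi) @ u0, u4)
```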
## Appendix B $HOM(2)$ inverse transformations
The $HOM(2)$ inverse transformation is
$L^{-1}(u)=e^{-i\phi K_{z}}e^{-i\beta T_{2}}e^{-i\alpha T_{1}}\,.$
The form of the matrices $e^{-i\phi K_{z}}$, $e^{-i\beta T_{2}}$ and
$e^{-i\alpha T_{1}}$ can be obtained from Eq. (125) and Eq. (135) by the
following replacements: $\alpha\to-\alpha$, $\beta\to-\beta$ and
$\phi\to-\phi$. Multiplying $e^{-i\phi K_{z}}$, $e^{-i\beta T_{2}}$ and
$e^{-i\alpha T_{1}}$ in the specific order, as required for the inverse
transformation, we get
$\displaystyle
L^{-1}(u)=\left(\begin{array}[]{cccc}\cosh\phi+\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}&-\alpha
e^{-\phi}&-\beta
e^{-\phi}&-\sinh\phi-\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}\\\
-\alpha&1&0&\alpha\\\ -\beta&0&1&\beta\\\
-\sinh\phi+\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}&-\alpha
e^{-\phi}&-\beta
e^{-\phi}&\cosh\phi-\left(\frac{\alpha^{2}}{2}+\frac{\beta^{2}}{2}\right)e^{-\phi}\end{array}\right).$
(153)
Again using Eq. (141) to Eq. (144) and Eq. (147) and Eq. (148) one can write
$L^{-1}(u)$ purely in terms of the velocity components as written in Eq. (49).
It can easily be checked that the resulting inverse $HOM(2)$ transformations
satisfy
$L^{-1}(u)u=u_{0}\,,$
where $u_{0}$ and $u$ are as given in Eq. (10).
## References
* [1] A. G. Cohen and S. L. Glashow, Phys. Rev. Lett. 97, 021601 (2006) [arXiv:hep-ph/0601236].
* [2] S. Das and S. Mohanty, Mod. Phys. Lett. A 26, 139 (2011) [arXiv:0902.4549 [hep-ph]].
* [3] D. V. Ahluwalia and S. P. Horvath, JHEP 1011, 078 (2010) [arXiv:1008.0436 [hep-ph]].
* [4] M. M. Sheikh-Jabbari and A. Tureanu, Phys. Rev. Lett. 101, 261601 (2008) [arXiv:0806.3699 [hep-th]].
* [5] L. D. Landau, E. M. Lifshitz, “The classical theory of fields”, $4^{\rm th}$ edition, Elsevier, (2008)
* [6] Lewis H. Ryder, “Quantum field theory”, $2^{\rm nd}$ Edition, Cambridge University Press (1996).
|
arxiv-papers
| 2011-05-21T06:59:06 |
2024-09-04T02:49:19.008454
|
{
"license": "Public Domain",
"authors": "Biplab Dutta, Kaushik Bhattacharya",
"submitter": "Kaushik Bhattacharya",
"url": "https://arxiv.org/abs/1105.4219"
}
|
1105.4228
|
# Simulation of Chemical Reaction Dynamics on an NMR Quantum Computer
Dawei Lu,1 Nanyang Xu,1 Ruixue Xu,1 Hongwei Chen,1
Jiangbin Gong,2,3 Xinhua Peng,1 Jiangfeng Du,1∗
1Hefei National Laboratory for Physical Sciences at Microscale and Department
of Modern Physics,
University of Science and Technology of China, Hefei, Anhui 230026, People’s
Republic of China
2 Department of Physics and Centre for Computational Science and Engineering,
National University of Singapore, 117542, Republic of Singapore
3 NUS Graduate School for Integrative Sciences and Engineering, 117597,
Republic of Singapore
∗To whom correspondence should be addressed; E-mail: djf@ustc.edu.cn
> Quantum simulation can beat current classical computers with minimally a few
> tens of qubits and will likely become the first practical use of a quantum
> computer. One promising application of quantum simulation is to attack
> challenging quantum chemistry problems. Here we report an experimental
> demonstration that a small nuclear-magnetic-resonance (NMR) quantum computer
> is already able to simulate the dynamics of a prototype chemical reaction.
> The experimental results agree well with classical simulations. We conclude
> that the quantum simulation of chemical reaction dynamics not computable on
> current classical computers is feasible in the near future.
##### Introduction.
In addition to offering general-purpose quantum algorithms with substantial
speed-ups over classical algorithms (?) [e.g., Shor’s quantum factorizing
algorithm (?)], a quantum computer can be used to simulate specific quantum
systems with high efficiency (?). This quantum simulation idea was first
conceived by Feynman (?). Lloyd proved that with quantum computation
architecture, the required resource for quantum simulation scales polynomially
with the size of the simulated system (?), as compared with the exponential
scaling on classical computers. During the past years several quantum
simulation algorithms designed for individual problems were proposed (?, ?, ?,
?, ?) and a part of them have been realized using physical systems such as NMR
(?, ?, ?) or trapped-ions (?). For quantum chemistry problems, Aspuru-Guzik et
al. and Kassal et al. proposed quantum simulation algorithms to calculate
stationary molecular properties (?) as well as chemical reaction rates (?),
with the quantum simulation of the former experimentally implemented on both
NMR (?) and photonic quantum computers (?). In this work we aim at the quantum
simulation of the more challenging side of quantum chemistry problems –
chemical reaction dynamics, presenting an experimental NMR implementation for
the first time.
Theoretical calculations of chemical reaction dynamics play an important role
in understanding reaction mechanisms and in guiding the control of chemical
reactions (?, ?). On classical computers the computational cost for
propagating the Schrödinger equation increases exponentially with the system
size. Indeed, standard methods in studies of chemical reaction dynamics so far
have dealt with up to 9 degrees of freedom (DOF) (?). Some highly
sophisticated approaches, such as the multi-configurational time-dependent
Hartree (MCTDH) method (?), can treat dozens of DOF but various approximations
are necessary. So generally speaking, classical computers are unable to
perform dynamical simulations for large molecules. For example, for a 10-DOF
system, if only 8 grid points are needed for the coordinate representation of
each DOF, a classical computation has to store and operate on $8^{10}$ data
points, already a formidable task for current classical computers. By
contrast, such a system size is manageable by a quantum computer with only 30
qubits. Furthermore, the whole set of data can be processed in parallel in
quantum simulation.
In this report we demonstrate that the quantum dynamics of a laser-driven
hydrogen transfer model reaction can be captured by a small NMR quantum
simulator. Given the limited number of qubits, the potential energy curve is
modeled by 8 grid points. The continuous reactant-to-product transformation
observed in our quantum simulator is in remarkable agreement with a classical
computation based also upon an 8-dimensional Hilbert space. To our knowledge,
this is the first explicit implementation of the quantum simulation of a
chemical reaction process. Theoretical methods and general experimental
techniques described in this work should motivate next-generation simulations
of chemical reaction dynamics with a larger number of qubits.
##### Theory.
Previously we were able to simulate the ground-state energy of the hydrogen
molecule (?). Here, to simulate chemical reaction dynamics, we consider a one-
dimensional model of a laser-driven chemical reaction (?), namely, the
isomerization reaction of nonsymmetric substituted malonaldehydes, depicted in
(Fig. 1A). The system Hamiltonian in the presence of an external laser field
is given by
$H(t)=T+V+E(t){\quad\rm with\quad}E(t)=-\mu\varepsilon(t).$ (1)
In (Eq. 1), $E(t)$ is the laser-molecule interaction Hamiltonian, $\mu=eq$ is
the dipole moment operator, $\varepsilon(t)$ represents the driving electric
field, ${T}={p}^{2}/2m$ is the kinetic energy operator, and
${V}=\frac{\Delta}{2q_{0}}(q-q_{0})+\frac{V^{\ddagger}-\Delta/2}{q_{0}^{4}}(q-q_{0})^{2}(q+q_{0})^{2}$
(2)
is a double-well potential of the system along the reaction coordinate. In
(Eq. 2) $V^{\ddagger}$ is the barrier height, $\Delta$ gives the asymmetry of
the two wells, and $\pm q_{0}$ give the locations of the potential well
minima. See the figure caption of (Fig. 1B) for more details of this model.
We first employ the split-operator method (?, ?) to obtain the propagator
${U}(t+\delta t,t)$ associated with the time interval from $t$ to $t+\delta
t$. We then have
${U}(t+\delta t,t)\approx e^{-\frac{i}{\hbar}{V}\delta t/2}\,e^{-\frac{i}{\hbar}{E}(t+\delta t/2)\delta t/2}\,e^{-\frac{i}{\hbar}{T}\delta t}\,e^{-\frac{i}{\hbar}{E}(t+\delta t/2)\delta t/2}\,e^{-\frac{i}{\hbar}{V}\delta t/2}.$ (3)
The unitary operator $e^{-i{T}\delta t/\hbar}$ in (Eq. 3) is diagonal in the
momentum representation whereas all the other operators are unitary and
diagonal in the coordinate representation. Such ${U}(t+\delta t,t)$ can be
simulated in a rather simple fashion if we work with both representations and
make transformations between them by quantum Fourier transform (QFT)
operations. To take snapshots of the dynamics we divide the reaction process
into 25 small time steps, with $\delta t=1.5\,$fs and the total duration
$t_{f}=37.5\,$fs. The electric field of an ultrashort strong laser pulse is
chosen as
$\varepsilon(t)=\left\{\begin{array}{ll}\varepsilon_{0}\sin^{2}(\frac{\pi t}{2s_{1}}); & 0\leq t\leq s_{1}\\ \varepsilon_{0}; & s_{1}<t<s_{2}\\ \varepsilon_{0}\sin^{2}[\frac{\pi(t_{f}-t)}{2(t_{f}-s_{2})}]; & s_{2}\leq t\leq t_{f}\end{array}\right.$ (4)
with $s_{1}=5\,$fs and $s_{2}=32.5\,$fs. More details, including an error
analysis of the split-operator technique, are given in the supplementary
material. The reactant state at $t=0$ is assumed to be the ground-state
$\left|\phi_{0}\right\rangle$ of the bare Hamiltonian $T+V$, which is mainly
localized in the left potential well. The wavefunction of the reacting system
at later times is denoted by $\left|\psi({t})\right\rangle$. The product state
of the reaction is taken as the first excited state
$\left|\phi_{1}\right\rangle$ of $T+V$, which is mainly localized in the right
potential well.
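Before turning to the qubit encoding, it is worth noting that the classical reference dynamics against which the quantum simulator is compared can be generated with a few lines of NumPy. The sketch below implements Eqs. 1-4 on the same 8-point grid in atomic units; the reduced mass and the peak field strength are assumptions chosen for illustration only (they are not quoted in the text), so the resulting populations will differ somewhat from the reported curves.

```python
import numpy as np

# Model parameters in atomic units; the reduced mass and peak field strength
# below are illustrative assumptions, not values quoted in the text.
Vb, Delta, q0 = 0.00625, 0.000257, 1.0     # barrier height, asymmetry, well positions
mass = 1836.0                              # assumed reduced mass (~ proton mass)
eps0 = 1.0e-3                              # assumed peak field strength (cf. Eq. 26)
dt, steps = 62.02, 25                      # 1.5 fs time step, 25 snapshots
fs = 41.341                                # 1 fs in atomic units of time
s1, s2, tf = 5*fs, 32.5*fs, 37.5*fs

n = 8
q = np.linspace(-1.51, 1.51, n)            # 8 grid points spanning about +/- 0.8 Angstrom
dq = q[1] - q[0]
V = Delta/(2*q0)*(q - q0) + (Vb - Delta/2)/q0**4 * (q - q0)**2 * (q + q0)**2

# kinetic energy operator in the position basis via the discrete Fourier transform
p = 2*np.pi*np.fft.fftfreq(n, d=dq)
F = np.exp(-2j*np.pi*np.outer(np.arange(n), np.arange(n))/n) / np.sqrt(n)
T = F.conj().T @ np.diag(p**2/(2*mass)) @ F

# reactant |phi0> and product |phi1>: lowest two eigenstates of the bare T + V
_, vecs = np.linalg.eigh(T + np.diag(V))
phi0, phi1 = vecs[:, 0], vecs[:, 1]

def field(t):
    """Trapezoidal sin^2 pulse envelope of Eq. 4."""
    if t <= s1:
        return eps0*np.sin(np.pi*t/(2*s1))**2
    if t < s2:
        return eps0
    return eps0*np.sin(np.pi*(tf - t)/(2*(tf - s2)))**2

# split-operator propagation, Eq. 3, with hbar = 1 and mu = e*q (e = 1)
expT = F.conj().T @ np.diag(np.exp(-1j*p**2/(2*mass)*dt)) @ F
psi = phi0.astype(complex)
for mstep in range(steps):
    t_mid = mstep*dt + dt/2
    expV = np.exp(-1j*V*dt/2)
    expE = np.exp(1j*q*field(t_mid)*dt/2)  # exp(-i(-q*eps)*dt/2)
    psi = expV * (expE * (expT @ (expE * (expV * psi))))
print(abs(np.vdot(phi0, psi))**2, abs(np.vdot(phi1, psi))**2)  # P0, P1 at t_f
```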
With the system Hamiltonian, the initial reactant state, the product state,
and the propagation method outlined above, the next step is to encode the
time-evolving wavefunction $\left|\psi({t})\right\rangle$ and the $T$, $V$,
$E(t)$ operators by _n_ qubits. To that end we first obtain the expressions of
these operators in the representation of a set of $N=2^{n}$ discretized position
basis states. The evolving state can then be encoded as
$\left|\psi(t)\right\rangle=\sum_{q=0}^{2^{n}-1}m_{q}(t)\left|q\right\rangle=m_{0}(t)\left|0\cdots 00\right\rangle+\cdots+m_{2^{n}-1}(t)\left|1\cdots 11\right\rangle,$ (5)
and as a result the system operators become
${T}=\sum_{k_{1},\cdots,k_{n}=z,i}\alpha_{k_{1}\cdots k_{n}}\sigma_{k_{1}}^{1}\sigma_{k_{2}}^{2}\cdots\sigma_{k_{n}}^{n},$ (6)
${V}=\sum_{k_{1},\cdots,k_{n}=z,i}\beta_{k_{1}\cdots k_{n}}\sigma_{k_{1}}^{1}\sigma_{k_{2}}^{2}\cdots\sigma_{k_{n}}^{n},$ (7)
$q=\sum_{j=1}^{n}\gamma_{j}\sigma_{z}^{j},$ (8)
where $\sigma_{z}^{j}$ $(j=1,2,\cdots,n)$ is the Pauli $z$ matrix acting on the
$j$th qubit and $\sigma_{i}^{j}$ is the corresponding identity matrix. Because our current
quantum computing platform can only offer a limited number of qubits and the
focus of this work is on an implementation of the necessary gate operations
under the above encoding, we have employed a rather aggressive 8-point
discretization using $n=3$ qubits. The associated diagonal forms of the $T$,
$V$, and $q$ matrices are given in the supplementary material. In particular,
the end grid points are at $q=\pm 0.8$ Å and the locations of the other 6 grid
points are shown in (Fig. 1B). The energies of the ground and first excited
states of the bare Hamiltonian, computed in the 8-dimensional encoding Hilbert
space, are close to the exact values. The associated eigenfunctions are
somewhat deformed relative to exact calculations using, e.g., 64 grid points.
Nonetheless, their unbalanced probability distribution in the two potential
wells is maintained. For example, the probability for the first excited state
being found in the right potential well is about $80\%$.
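To make the encoding of Eqs. 6-8 concrete, the following minimal NumPy sketch (our own illustration, not the authors' code) expands a diagonal operator on $n=3$ qubits in the $\{I,\sigma_{z}\}$ tensor basis; the coefficients follow from the character sums $(1/2^{n})\sum_{q}D_{qq}(-1)^{k\cdot q}$. The same expansion applies to $T$ in the momentum representation, where it is diagonal.

```python
import numpy as np
from itertools import product

def pauli_z_coeffs(diag):
    """Expand a diagonal operator on n qubits in the {I, sigma_z} tensor basis.
    Returns a dict mapping labels like 'ziz' to real coefficients."""
    n = int(np.log2(len(diag)))
    coeffs = {}
    for k in product((0, 1), repeat=n):            # 0 -> identity, 1 -> sigma_z
        # eigenvalue (-1)^{k . q} of the Pauli string on each basis state q
        signs = [(-1)**sum(ki*((q >> (n-1-j)) & 1) for j, ki in enumerate(k))
                 for q in range(len(diag))]
        label = ''.join('z' if ki else 'i' for ki in k)
        coeffs[label] = float(np.dot(signs, diag)) / len(diag)
    return coeffs

# example: the 8-point diagonal of V quoted in the supplementary material
V_diag = np.array([293.78, -0.10, 1.85, 5.41, 5.46, 2.02, 0.18, 305.44]) * 1e-3
beta = pauli_z_coeffs(V_diag)

# reconstruct V from the coefficients to verify the expansion
Z, I = np.diag([1.0, -1.0]), np.eye(2)
rebuilt = sum(c * np.kron(np.kron(Z if l[0] == 'z' else I,
                                  Z if l[1] == 'z' else I),
                          Z if l[2] == 'z' else I)
              for l, c in beta.items())
assert np.allclose(np.diag(rebuilt), V_diag)
```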
##### Experiment.
In our experiment qubits 1, 2, and 3 are realized by the 19F, 13C, and 1H
nuclear spins of Diethyl-fluoromalonate. The structure of Diethyl-
fluoromalonate is shown in (Fig. 2A), where the three nuclei used as qubits
are marked by ovals. The internal Hamiltonian of this system is given by
$\mathcal{H}_{int}=\sum\limits_{j=1}^{3}2\pi\nu_{j}I_{z}^{j}+\sum\limits_{1\leq j<k\leq 3}2\pi J_{jk}I_{z}^{j}I_{z}^{k},$ (9)
where $\nu_{j}$ is the resonance frequency of the _j_ th spin and
$\emph{J}_{jk}$ is the scalar coupling strength between spins _j_ and _k_ ,
with $\emph{J}_{12}=-194.4$ Hz, $\emph{J}_{13}=47.6$ Hz, and
$\emph{J}_{23}=160.7$ Hz. The relaxation time $T_{1}$ and dephasing time
$T_{2}$ for each of the three nuclear spins are tabulated in (Fig. 2B). The
experiment is conducted on a Bruker Avance 400 MHz spectrometer at room
temperature.
The experiment consists of three parts: (A) Initial state preparation. In this
part we prepare the ground state $\left|\phi_{0}\right\rangle$ of the bare
Hamiltonian $T+V$ as the reactant state; (B) Dynamical evolution, that is, the
explicit implementation of the system evolution such that the continuous
chemical reaction dynamics can be simulated; (C) Measurement. In this third
part the probabilities of the reactant and product states associated with each
of the 25 snapshots of the dynamical evolution are recorded. For the $j$th
snapshot at $t_{j}\equiv j\delta t$, we measure the overlaps
$C(\left|\psi(t_{j})\right\rangle,\left|\phi_{0}\right\rangle)=|\langle\phi_{0}|\psi(t_{j})\rangle|^{2}$
and
$C(\left|\psi(t_{j})\right\rangle,\left|\phi_{1}\right\rangle)=|\langle\phi_{1}|\psi(t_{j})\rangle|^{2}$,
through which the continuous reactant-to-product transformation can be
displayed. The main experimental details are as follows. Readers may again
refer to the supplementary material for more technical explanations.
(A) Initial State Preparation. Starting from the thermal equilibrium state,
firstly we create the pseudo-pure state (PPS)
$\rho_{000}=(1-\epsilon)\mathbb{{I}}/8+\epsilon\left|000\right\rangle\left\langle
000\right|$ using the spatial average technique (?), where $\epsilon\approx
10^{-5}$ represents the polarization of the system and ${\mathbb{{I}}}$ is the
$8\times 8$ identity matrix. The initial state $\left|\phi_{0}\right\rangle$
was prepared from $\rho_{000}$ by applying one shaped radio-frequency (RF)
pulse characterized by 1000 frequency segments and determined by the GRadient
Ascent Pulse Engineering (GRAPE) algorithm (?, ?, ?). The preparation pulse
thus obtained is shown in (Fig. 2C), with a pulse width of 10 ms and a
theoretical fidelity of 0.995. Because the central resonance frequencies of the
nuclear spins are different, (Fig. 2C) shows the RF field amplitudes vs time
in three panels. To confirm the successful preparation of the state
$|\phi_{0}\rangle$, we carry out a full state tomography and examine the
fidelity between the target density matrix
$\rho_{0}=|\phi_{0}\rangle\langle\phi_{0}|$ and the experimental one
$\rho_{exp}(0)$. Using the fidelity definition
$F(\rho_{1},\rho_{2})\equiv\texttt{Tr}(\rho_{1}\rho_{2})/\sqrt{\texttt{Tr}(\rho_{1}^{2})\,\texttt{Tr}(\rho_{2}^{2})}$,
we obtain $F[\rho_{0},\rho_{exp}(0)]=0.950$. Indeed, their real parts shown in
(Fig. 4A) are seen to be in agreement.
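For reference, the fidelity used above is a normalized Hilbert-Schmidt overlap; a minimal sketch with purely illustrative density matrices is:

```python
import numpy as np

def fidelity(rho1, rho2):
    """State fidelity used in the text: Tr(rho1 rho2) / sqrt(Tr(rho1^2) Tr(rho2^2))."""
    num = np.real(np.trace(rho1 @ rho2))
    den = np.sqrt(np.real(np.trace(rho1 @ rho1)) * np.real(np.trace(rho2 @ rho2)))
    return num / den

# example: a pure target state versus a slightly depolarized "experimental" state
phi = np.zeros(8); phi[0] = 1.0
rho_target = np.outer(phi, phi)
rho_exp = 0.95 * rho_target + 0.05 * np.eye(8) / 8
print(fidelity(rho_target, rho_exp))
```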
(B) Dynamical Evolution. The reaction process was divided into $M=25$ discrete
time intervals of the same duration $\delta t$. Associated with the $m$th time
interval, the unitary evolution operator is given by
$U_{m}\approx V_{\delta t/2}{E}_{\delta t/2}(t_{m})U_{QFT}T_{\delta
t}U_{QFT}^{\dagger}{E}_{\delta t/2}(t_{m}){V}_{\delta t/2},$ (10)
where $U_{QFT}$ represents a QFT operation, and other operators are defined by
${V}_{\delta t/2}\equiv e^{-\frac{i}{\hbar}{V}\frac{\delta t}{2}}$,
${T}_{\delta t}\equiv e^{-\frac{i}{\hbar}{T}\delta t}$, and ${E}_{\delta
t/2}(t_{m})\equiv e^{\frac{i}{\hbar}\varepsilon(t_{m-1}+\delta
t/2)eq\frac{\delta t}{2}}$, with $V$, $T$, and $q$ all in their diagonal
representations. Such a loop of operations is $m$-dependent because the
simulated system is subject to a time-dependent laser field. The numerical
values of the diagonal operators $T_{\delta t}$, $V_{\delta t/2}$ and
$E_{\delta t/2}$ are elaborated in the supplementary material. A circuit to
realize $U_{QFT}$ and a computational network to realize the $U_{m}$ operator
are shown in (Fig. 3).
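A numerical sketch of one loop of Eq. 10 is given below, assuming that the diagonal values $V_{diag}$, $T_{diag}$, and $q_{diag}$ quoted in the supplementary material are the entries of $V$, $T$, and $q$ (in hartree and bohr), and using the discretized field of Eq. 26; the composition of the 25 loops is only checked for unitarity here.

```python
import numpy as np

# diagonal operators quoted in the supplementary material (atomic units, hbar = 1)
V_diag = np.array([293.78, -0.10, 1.85, 5.41, 5.46, 2.02, 0.18, 305.44]) * 1e-3
T_diag = np.array([0, 0.91, 3.63, 8.16, 14.51, 8.16, 3.63, -0.91]) * 1e-3
q_diag = np.array([-1.51, -1.08, -0.65, -0.22, 0.22, 0.65, 1.08, 1.51])
dt = 62.02

N = 8
QFT = np.exp(2j*np.pi*np.outer(np.arange(N), np.arange(N))/N) / np.sqrt(N)

def U_m(eps_m):
    """One loop of Eq. 10 for the field value eps_m at the midpoint of the step."""
    Vh = np.diag(np.exp(-1j * V_diag * dt/2))
    Eh = np.diag(np.exp(+1j * q_diag * eps_m * dt/2))   # e*q*eps with e = 1
    Tf = np.diag(np.exp(-1j * T_diag * dt))
    return Vh @ Eh @ QFT @ Tf @ QFT.conj().T @ Eh @ Vh

# composing the 25 loops with the discretized field of Eq. 26 gives U(t_f, 0)
eps = np.array([0.05, 0.42, 0.85] + [1.0]*19 + [0.85, 0.42, 0.05]) * 1e-3
U = np.eye(N, dtype=complex)
for e in eps:
    U = U_m(e) @ U
print(np.allclose(U.conj().T @ U, np.eye(N)))           # unitarity check
```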
Each individual operation in the $U_{m}$ loop can be implemented by a
particular RF pulse sequence applied to our system. However, in the experiment
such a direct decomposition of $U_{m}$ requires a very long gate operation
time and highly complicated RF pulse sequences. This bottom-up approach hence
accumulates considerable experimental errors and also invites serious
decoherence effects. To circumvent this technical problem we find a better
experimental approach, which further exploits the GRAPE technique to
synthesize $U_{m}$ or their products with one single engineered RF pulse only.
That is, the quantum evolution operator $U(t_{j},0)$, which is simulated by
$\prod_{m=1}^{j}U_{m}$, is implemented by one GRAPE coherent control pulse
altogether, with a preset fidelity and a typical pulse length ranging from 10
ms to 15 ms. For the 25 snapshots of the dynamics, 25 GRAPE pulses in total are
computed, each with a fidelity set to exceed 0.99. As a
result, the technical complexity of the experiment decreases dramatically but
the fidelity is maintained at a high level. The task of finding a GRAPE pulse
itself may be fulfilled via feedback learning control (?) that can exploit the
quantum evolution of our NMR system itself. However, this procedure is not
necessary in our experiment because the GRAPE pulses for a 3-qubit system can
be found rather easily.
(C) Measurement. To take the snapshots of the reaction process at
$t_{j}=j\delta t$ we need to measure the overlaps of
C($\left|\psi(t_{j})\right\rangle$,$\left|\phi_{0}\right\rangle$) and
C($\left|\psi({t_{j}})\right\rangle$,$\left|\phi_{1}\right\rangle$). A full
state tomography at $t_{j}$ will do, but this will produce much more
information than needed. Indeed, assisted by a simple diagonalization
technique, population measurements alone suffice to observe the
reactant-to-product transformation.
Specifically, in order to obtain
$C(\left|\psi({t_{j}})\right\rangle,\left|\phi_{0}\right\rangle)=\texttt{Tr}[\rho(t_{j})\rho_{0}]$
with
$\rho(t_{j})=\left|\psi({t_{j}})\right\rangle\left\langle\psi(t_{j})\right|$,
we first find a transformation matrix _R_ to diagonalize $\rho_{0}$, that is,
$\rho_{0}^{\prime}=R\rho_{0}R^{\dagger}$, where $\rho_{0}^{\prime}$ is a
diagonal density matrix. Letting
$\rho^{\prime}(t_{j})=R\rho(t_{j})R^{\dagger}$ and using the identity
$\texttt{Tr}[\rho(t_{j})\rho_{0}]=\texttt{Tr}[\rho^{\prime}(t_{j})\rho_{0}^{\prime}]$,
it becomes clear that only the diagonal elements or the populations of
$\rho^{\prime}(t_{j})$ are required to measure
$\texttt{Tr}[\rho^{\prime}(t_{j})\rho_{0}^{\prime}]$, namely, the overlap
$C(\left|\psi(t_{j})\right\rangle,\left|\phi_{0}\right\rangle)$. To obtain
$\rho^{\prime}(t_{j})$ from $\rho(t_{j})$, we simply add the extra $R$
operation to the quantum gate network. The actual implementation of the $R$
operation can be again mingled with all other gate operations using one GRAPE
pulse. A similar procedure is used to measure
C($\left|\psi(t_{j})\right\rangle$,$\left|\phi_{1}\right\rangle$).
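The diagonalization trick amounts to a few lines of linear algebra: with $R$ built from the eigenvectors of $\rho_{0}$, the populations of $R\rho(t_{j})R^{\dagger}$ weighted by the eigenvalues of $\rho_{0}$ reproduce the overlap. A minimal NumPy sketch with random illustrative states:

```python
import numpy as np

def overlap_from_populations(psi_t, phi0):
    """Tr[rho(t) rho0] computed from the populations of R rho(t) R^dagger,
    where R diagonalizes rho0 = |phi0><phi0|."""
    rho0 = np.outer(phi0, phi0.conj())
    rho_t = np.outer(psi_t, psi_t.conj())
    evals, evecs = np.linalg.eigh(rho0)
    R = evecs.conj().T                        # R rho0 R^dagger is diagonal
    pops = np.real(np.diag(R @ rho_t @ R.conj().T))
    return float(np.dot(pops, evals))         # only populations are needed

# consistency check against the direct overlap |<phi0|psi>|^2
rng = np.random.default_rng(0)
phi0 = rng.normal(size=8) + 1j*rng.normal(size=8); phi0 /= np.linalg.norm(phi0)
psi  = rng.normal(size=8) + 1j*rng.normal(size=8); psi  /= np.linalg.norm(psi)
print(overlap_from_populations(psi, phi0), abs(np.vdot(phi0, psi))**2)
```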
The populations of $\rho{{}^{\prime}}(t_{j})$ can be measured by applying
$[\pi/2]_{y}$ pulses to the three qubits and then read the ensuing free
induction decay signal. In our sample of natural abundance, only $\sim 1\%$ of
all the molecules contain a 13C nuclear spin. The signals from the 1H and 19F
nuclear spins are hence dominated by those molecules with the 12C isotope. To
overcome this problem we apply SWAP gates to transmit the information of the
1H and 19F channels to the 13C channel and then measure the 13C qubit.
To assess the difference between theory and experiment, we carry out one full
state tomography for the final state density matrix at $t=t_{f}$. Because the
GRAPE pulse is made to reach a fidelity larger than 0.995, the experimental
density matrix $\rho_{exp}(t_{f})$ is indeed very close to the theoretical
density matrix $\rho_{theory}(t_{f})$ obtained in an 8-dimensional Hilbert
space, with a fidelity $F[\rho_{theory}(t_{f}),\rho_{exp}(t_{f})]=0.957$. The
experimental density matrix elements of the final state shown in (Fig. 4B)
match the theoretical results to a high degree. With confidence in the
experimental results on the full density matrix level, we can now examine the
simulated reaction dynamics, reporting only the probabilities of the reactant
and product states. (Fig. 4C) shows the time-dependence of the probabilities
of both the reactant and product states obtained from our quantum simulator.
It is seen that the product-to-reactant ratio increases continuously with
time, with the probability of the product state reaching 77% at the end of the
simulated reaction. At all times, the experimental observations of the
reaction process are in impressive agreement with the smooth curves calculated
theoretically on a classical computer. Further, the experimental results are
also in qualitative agreement with the exact classical calculation using 64
grid points (see Fig. 1B). A prototype laser-driven reaction is thus
successfully simulated by our 3-qubit system. We emphasize that due to the use
of GRAPE pulses in synthesizing the gate operations, our simulation experiment
lasts only about 30 ms, which is much shorter than the spin decoherence time
of our system. The slight difference between theory and experiment can be
attributed to imperfect GRAPE pulses, as well as inhomogeneity in RF pulses
and in the static magnetic field.
##### Conclusion.
Quantum simulation with only tens of qubits can already exceed the capacity of
a classical computer. Before general-purpose quantum algorithms, which
typically require thousands of qubits, become a reality, a quantum simulator
attacking problems not solvable on current classical computers is a conceivable
milestone in the near future. The realization of quantum simulations will
tremendously change the way we explore quantum chemistry in both stationary
and dynamical problems (?, ?). Our work reported here establishes the first
experimental study of the quantum simulation of a prototype laser-driven
chemical reaction. The feasibility of simulating chemical reaction processes
on a rather small quantum computer is hence demonstrated. Our proof-of-
principle experiment also realizes a promising map from laser-driven chemical
reactions to the dynamics of interacting spin systems under shaped RF fields.
This map itself is of significance because it bridges two research subjects
whose characteristic time scales differ by many orders of magnitude.
## References and Notes
* 1. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge Univ. Press, Cambridge, U.K., 2000).
* 2. P. Shor, in Proceedings of the 35th Annual Symposium on Foundations of Computer Science (IEEE Computer Society Press, Santa Fe, NM, 1994), p. 124.
* 3. I. Buluta and F. Nori, Science 326, 108 (2009).
* 4. R. P. Feynman, Int. J. Theor. Phys. 21, 467 (1982).
* 5. S. Lloyd, Science 273, 1073 (1996).
* 6. C. Zalka, in ITP Conference on Quantum Coherence and Decoherence (Royal Soc. London, Santa Barbara, California, 1996), pp. 313-322.
* 7. D. S. Abrams and S. Lloyd, Phys. Rev. Lett. 79, 2586 (1997).
* 8. L. A. Wu, M. S. Byrd, and D. A. Lidar, Phys. Rev. Lett. 89, 057904 (2002).
* 9. A. Y. Smirnov, S. Savel'ev, L. G. Mourokh, and F. Nori, Europhys. Lett. 80, 67008 (2007).
* 10. D. A. Lidar and H. Wang, Phys. Rev. E 59, 2429 (1999).
* 11. X. H. Peng, J. F. Du, and D. Suter, Phys. Rev. A 71, 012307 (2005).
* 12. S. Somaroo, C. H. Tseng, T. F. Havel, R. Laflamme, and D. G. Cory, Phys. Rev. Lett. 82, 5381 (1999).
* 13. C. Negrevergne, R. Somma, G. Ortiz, E. Knill, and R. Laflamme, Phys. Rev. A 71, 032344 (2005).
* 14. A. Friedenauer, H. Schmitz, J. T. Glueckert, D. Porras, and T. Schaetz, Nature Phys. 4, 757 (2008).
* 15. A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon, Science 309, 1704 (2005).
* 16. I. Kassal, S. P. Jordan, P. J. Love, M. Mohseni, and A. Aspuru-Guzik, Proc. Natl. Acad. Sci. USA 105, 18681-18686 (2008).
* 17. J. F. Du, N. Y. Xu, X. H. Peng, P. F. Wang, S. F. Wu, and D. W. Lu, Phys. Rev. Lett. 104, 030502 (2010).
* 18. B. P. Lanyon, J. D. Whitfield, G. G. Gillett, M. E. Goggin, M. P. Almeida, I. Kassal, J. D. Biamonte, M. Mohseni, B. J. Powell, M. Barbieri, A. Aspuru-Guzik, and A. G. White, Nature Chem. 2, 106 (2010).
* 19. H. Rabitz, R. de Vivie-Riedle, M. Motzkus, and K. Kompa, Science 288, 824 (2000); W. S. Warren, H. Rabitz, and M. Dahleh, Science 259, 1581 (1993).
* 20. S. A. Rice and M. Zhao, Optical Control of Molecular Dynamics (John Wiley, New York, 2000); M. Shapiro and P. Brumer, Principles of the Quantum Control of Molecular Processes (John Wiley, New York, 2003).
* 21. D. Wang, J. Chem. Phys. 124, 201105 (2006).
* 22. H. -D. Meyer and G. A. Worth, Theor. Chem. Acc. 109, 251 (2003).
* 23. N. Došlić, O. Kühn, J. Manz, and K. Sundermann, J. Phys. Chem. A 102, 9645 (1998).
* 24. M. D. Feit, J. A. Fleck, and A. Steiger, J. Comput. Phys. 47, 412 (1982).
* 25. J. Z. H. Zhang, Theory and Application of Quantum Molecular Dynamics (World Scientific, Singapore, 1999).
* 26. D. G. Cory, A. F. Fahmy, and T. F. Havel, Proc. Natl. Acad. Sci. USA. 94, 1634 (1997).
* 27. N. Khaneja, T. Reiss, C. Kehlet, T. S. Herbrüggen, and S. J. Glaser, J. Magn. Reson. 172, 296 (2005).
* 28. J. Baugh, J. Chamilliard, C. M. Chandrashekar, M. Ditty, A. Hubbard, R. Laflamme, M. Laforest, D. Maslov, O. Moussa, C. Negrevergne, M. Silva, S. Simmons, C. A. Ryan, D. G. Cory, J. S. Hodges, and C. Ramanathan, Phys. in Can. 63, No.4 (2007).
* 29. C. A. Ryan, C. Negrevergne, M. Laforest, E. Knill, and R. Laflamme, Phys. Rev. A 78, 012328 (2008).
* 30. Helpful discussions with J. L. Yang are gratefully acknowledged. This work was supported by National Nature Science Foundation of China, the CAS, and the National Fundamental Research Program 2007CB925200.
Fig. 1. Prototype chemical reaction and potential energy curve. (A)
Isomerization reaction of nonsymmetric substituted malonaldehydes. (B) Upper
panel: Potential energy curve, together with the eigenfunctions of the ground
(red) and the first excited (blue) states. The main system parameters [(Eq.
2)] are taken from Ref. (?), with $V^{\ddagger}=0.00625\ E_{\rm h}$,
$\Delta=0.000257\ E_{\rm h}$, and $q_{0}=1\ a_{0}$. As a modification, the
potential values for $q$ approaching the left and right ends are increased
sharply to ensure rapid decay of the wavefunction amplitudes. In particular,
this procedure increases the $V$ value at $q=\pm 0.8$ Å by a factor of 30. The
six discrete squares shown on the potential curve and the two end points at
$q=\pm 0.8$ Å constitute the 8 grid points for our 3-qubit encoding. Lower
panel: Numerically exact time-dependence of populations of the ground state
(reactant state, denoted P0) and the first excited state (product state,
denoted P1).
Fig. 2. (A) Molecular structure of Diethyl-fluoromalonate. The 1H, 13C and 19F
nuclear spins marked by ovals are used as the three qubits. (B) System parameters
and important time scales of Diethyl-fluoromalonate. Diagonal elements are the
Larmor frequencies (Hz) and off-diagonal elements are the scalar coupling
strengths (Hz) between pairs of nuclear spins. Relaxation and dephasing time
scales (seconds) $T_{1}$ and $T_{2}$ for each nuclear spin are listed on the right. (C) The
GRAPE pulse that realizes the initial state $\left|\phi_{0}\right\rangle$ from
the PPS $\left|000\right\rangle$, with a pulse width 10 ms and a fidelity over
0.995. The (blue) solid line represents the pulse power in $x$-direction, and
the (red) dotted line represents the pulse power in $y$-direction. The three
panels from top to bottom represent the RF features at three central
frequencies associated with the 19F, 13C and 1H spins, respectively.
Fig. 3. Upper panel: The network of quantum operations to simulate the
chemical reaction dynamics, starting from the reactant state
$\left|\phi_{0}\right\rangle$. The whole process is divided into 25 loops. The
operators $T_{\delta t}$, $V_{\delta t}$ and $E_{\delta t/2}$ are assumed to
be in their diagonal representations. Lower panel: H is the Hadamard gate and
S, T are phase gates as specified on the right. Vertical lines ending with a
solid dot represent controlled phase gates and the vertical line between two
crosses represents a SWAP gate.
Fig. 4. Experimental tomography results and the reaction dynamics obtained
both theoretically and experimentally. (A)-(B) Real part of the density matrix
of the initial and final states of the simulated reaction. Upper panels show
the theoretical results based on an 8-dimensional Hilbert space, and lower
panels show the experimental results. (C) The measured probabilities of the
reactant and product states to give 25 snapshots of the reaction dynamics. The
(red) plus symbols represent measured results of
C($\left|\psi(t_{j})\right\rangle$,$\left|\phi_{0}\right\rangle$) and the
(blue) circles represent measured results of
C($\left|\psi(t_{j})\right\rangle$,$\left|\phi_{1}\right\rangle$), both in
agreement with the theoretical smooth curves. Results here also agree
qualitatively with the numerically exact dynamics shown in (Fig. 1B).
## Supporting Online Material
### BACKGROUND ON QUANTUM DYNAMICS SIMULATION
Let us start with the Schrödinger equation:
$i\hbar\dot{\psi}(t)=H(t)\psi(t).$ (11)
Its formal solution can be written as
$\psi(t)=U(t,t_{0})\psi(t_{0}),$ (12)
where the quantum propagator $U(t,t_{0})$ is a unitary operator and is given
by
$U(t,t_{0})={\cal T}\exp\left[-\frac{i}{\hbar}\int_{t_{0}}^{t}d\tau\,H(\tau)\right],$ (13)
with ${\cal T}$ being the time ordering operator. There are a number of
established numerical methods for propagating the Schrödinger equation, such
as Feynman’s path integral formalism (?), the split-operator method (?) and
the Chebychev polynomial method (?, ?), etc. For our purpose here we adopt the
split-operator method.
The propagator $U(t,t_{0})$ satisfies
$U(t,t_{0})=U(t,t_{N-1})U(t_{N-1},t_{N-2})\cdots U(t_{1},t_{0})\,,$ (14)
where, for example, the intermediate time points can be equally spaced with
$t_{m}=m\delta t+t_{0}$. For one such small time interval, e.g., from
$t_{m-1}$ to $t_{m}=t_{m-1}+\delta t$, we have
$U(t_{m},t_{m-1})={\cal T}\exp\left[-\frac{i}{\hbar}\int_{t_{m-1}}^{t_{m}}d\tau\,H(\tau)\right]\approx\exp\left[-\frac{i}{\hbar}\int_{t_{m-1}}^{t_{m}}d\tau\,H(\tau)\right],$ (15)
where terms of the order $(\delta t)^{3}$ or higher are neglected. For
sufficiently small $\delta t$, the integral in the above equation can be
further carried out by a midpoint rule, leading to
$U(t_{m},t_{m-1})\approx\exp\left[-\frac{i}{\hbar}H(t_{m-1}+\delta t/2)\,\delta t\right].$ (16)
This integration step has an error of the order of $(\delta t)^{2}$, which is
still acceptable if the total evolution time is not large. Next we separate
the total Hamiltonian into two parts:
$H(t)=H_{0}(t)+H^{\prime}(t).$ (17)
For example, $H_{0}(t)$ is the kinetic energy part of the total Hamiltonian
and $H^{\prime}(t)$ represents the potential energy part. In general
$H_{0}(t)$ and $H^{\prime}(t)$ do not commute with each other. The split-
operator scheme (?) applied to (Eq. 16) then leads to
${U}(t_{m},t_{m-1})\approx e^{-\frac{i}{\hbar}H^{\prime}(t_{m-1}+\delta
t/2)\delta t/2}e^{-\frac{i}{\hbar}H_{0}(t_{m-1}+\delta t/2)\delta
t}e^{-\frac{i}{\hbar}H^{\prime}(t_{m-1}+\delta t/2)\delta t/2}.$ (18)
The small error of this operator splitting step arises from the nonzero
commutator between $H_{0}(t)$ and $H^{\prime}(t)$, which is at least of the
order of $(\delta t)^{3}$. The advantage of the split-operator method is that
each step represents a unitary evolution and each exponential in (Eq. 18) can
take a diagonal form in either the position or the momentum representation.
The error of this operator splitting step is in general smaller than that
induced by the aforementioned midpoint rule integration in (Eq. 16). Because
in our work the total duration of the simulated chemical reaction is short,
the above low-order approach already performs well. If long-time
simulations with preferably larger time steps are needed in a quantum
simulation, then one may use even higher-order split-operator techniques for
explicitly time-dependent Hamiltonians (?, ?).
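The stated error orders are easy to verify numerically: for random Hermitian stand-ins for $H_{0}$ and $H^{\prime}$, the deviation between the exact propagator and the symmetric split form of Eq. 18 shrinks roughly by a factor of 8 when $\delta t$ is halved. A minimal sketch (illustrative matrices only):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)); H0 = (A + A.T)/2       # stands in for T
B = rng.normal(size=(8, 8)); Hp = (B + B.T)/2       # stands in for V + E(t)

for dt in (0.1, 0.05, 0.025):
    exact = expm(-1j*(H0 + Hp)*dt)
    split = expm(-1j*Hp*dt/2) @ expm(-1j*H0*dt) @ expm(-1j*Hp*dt/2)
    print(dt, np.linalg.norm(exact - split, 2))      # per-step error shrinks ~ dt**3
```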
### EXPERIMENTAL IMPLEMENTATION
The experiment consists of three steps: (a) Initial state preparation, which
is to prepare the ground state $\left|\phi_{0}\right\rangle$ of the bare
Hamiltonian $T+{V}$ (representing the reactant state); (b) 25 discrete steps
of dynamical evolution to simulate the actual continuous chemical reaction
dynamics; (c) Measurement of the overlaps
$C(\left|\psi(t_{j})\right\rangle,\left|\phi_{0}\right\rangle)=|\langle\phi_{0}|\psi(t_{j})\rangle|^{2}$
and
$C(\left|\psi(t_{j})\right\rangle,\left|\phi_{1}\right\rangle)=|\langle\phi_{1}|\psi(t_{j})\rangle|^{2}$
at $t_{j}=j\delta t$, which is to show the transformation between the reactant
and product states.
#### A. Initial State Preparation
To prepare the ground state $\left|\phi_{0}\right\rangle$, firstly we need to
create a pseudo-pure state (PPS) from the thermal equilibrium state, a mixed
state which is not yet ready for quantum computation purposes. The thermal
equilibrium state of our sample can be written as
$\rho_{ther}=\sum\limits_{i=1}^{3}\gamma_{i}I_{z}^{i}$, where $\gamma_{i}$ is
the gyromagnetic ratio of the nuclear spins. In relative units,
$\gamma_{\texttt{C}}=1$, $\gamma_{\texttt{H}}=4$ and
$\gamma_{\texttt{F}}=3.7$, with a common constant factor ignored. We then use the
spatial average technique (?) to prepare the PPS
$\rho_{000}=\frac{1-\epsilon}{8}\mathbb{{I}}+\epsilon\left|000\right\rangle\left\langle
000\right|,$ (19)
where $\epsilon\approx 10^{-5}$ represents the polarization of the system and
${\mathbb{{I}}}$ is the $8\times 8$ identity matrix. The identity part has no
influence on our experimental measurements and hence can be dropped. The pulse
sequence to prepare the PPS from the thermal equilibrium state is shown in
(Fig. S1A). In particular, the gradient pulses (represented by
G${}_{\text{Z}}$) destroy the coherence induced by the rotation pulses and
free evolutions. After obtaining the PPS, we apply one shaped pulse calculated
by the GRadient Ascent Pulse Engineering (GRAPE) algorithm (?, ?, ?) to obtain
the initial state $\left|\phi_{0}\right\rangle$, with a pulse width of 10 ms
and a fidelity of 0.995. In order to assess the accuracy of the experimental
preparation of the initial state, a full state tomography (?) is implemented.
The fidelity (?) between the target density matrix $\rho_{target}$ and the
experimental density matrix $\rho_{exp}$ is found to be
$F(\rho_{target},\rho_{exp})\equiv\texttt{Tr}(\rho_{target}\rho_{exp})/\sqrt{\texttt{Tr}(\rho_{target}^{2})\,\texttt{Tr}(\rho_{exp}^{2})}\approx 0.95.$ (20)
A detailed comparison between $\rho_{target}$ and $\rho_{exp}$ is displayed in
(Fig. S1B).
#### B. Dynamical Evolution
To observe the continuous reactant-to-product transformation, we divide the
whole time evolution into 25 discrete steps. For convenience all variables
here are expressed in terms of atomic units. For example, in atomic units
$e=1$, $\hbar=1$, and $\delta t=62.02$. To exploit the split-operator scheme
in (Eq. 18), we let $H_{0}=T$, and $H^{\prime}(t)=V-eq\epsilon(t)$. The
kinetic energy operator $T$ is diagonal in the momentum representation,
whereas the $V$ operator and the dipole-field interaction $-eq\epsilon(t)$
operator are both diagonal in the position representation. We then obtain from
(Eq. 18)
$U(t_{m},t_{m-1})\approx V_{\frac{\delta t}{2}}E_{\frac{\delta t}{2}}U_{QFT}T_{\delta t}U_{QFT}^{\dagger}E_{\frac{\delta t}{2}}V_{\frac{\delta t}{2}},$ (21)
where the operators
$V_{\frac{\delta t}{2}}\equiv e^{-i{V}\frac{\delta t}{2}},$ (22)
$T_{\delta t}\equiv e^{-i{T}\delta t},$ (23)
$E_{\frac{\delta t}{2}}\equiv e^{i{q}\varepsilon(t_{m-1}+\delta t/2)\frac{\delta t}{2}}$ (24)
are assumed to be in their diagonal representations.
To map $U(t_{m},t_{m-1})$ to our 3-qubit NMR quantum computer, we discretize
the potential energy curve using 8 grid points. Upon this discretization,
operators $V_{\frac{\delta t}{2}}$, $T_{\delta t}$, and $q$ become $8\times 8$
diagonal matrices. Numerically, their diagonal elements (denoted $V_{diag}$,
$T_{diag}$, and $q_{diag}$, respectively) are found to be
${V}_{diag}=(293.78,-0.10,1.85,5.41,5.46,2.02,0.18,305.44)\times 10^{-3};$
${T}_{diag}=(0,0.91,3.63,8.16,14.51,8.16,3.63,-0.91)\times 10^{-3};$
${q}_{diag}=(-1.51,-1.08,-0.65,-0.22,0.22,0.65,1.08,1.51).$ (25)
To evaluate the $E_{\frac{\delta t}{2}}$ operator, we also need to discretize
the time-dependence of the electric field associated with the ultrashort laser
pulse. For 25 snapshots of the reaction dynamics, we discretize the trapezoid-
type electric field by 25 points, i.e.,
$\varepsilon(t)=[0.05,0.42,0.85,1,...1,0.85,0.42,0.05]\times 10^{-3}.$ (26)
The quantum gate network for the QFT operation that transforms the momentum
representation to the coordinate representation is already shown in (Fig. 3).
It consists of three Hadamard gates (H), three controlled-phase gates (S and
T) and one SWAP gate (vertical line linking crosses). The Hadamard gate H is
represented by the matrix
$H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right),$ (29)
which maps the basis state $\left|0\right\rangle$ to
$\frac{1}{\sqrt{2}}(\left|0\right\rangle+\left|1\right\rangle)$ and
$\left|1\right\rangle$ to
$\frac{1}{\sqrt{2}}(\left|0\right\rangle-\left|1\right\rangle)$. The phase
gates S and T are given by
$\text{S}=\left(\begin{array}{cc}1&0\\ 0&i\end{array}\right)$ (32)
and
$\text{T}=\left(\begin{array}{cc}1&0\\ 0&e^{i\pi/4}\end{array}\right).$ (35)
The matrix form of the SWAP gate is
$\text{SWAP}=\left(\begin{array}{cccc}1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1\end{array}\right),$ (40)
which exchanges the state of the 19F qubit with that of the 1H qubit.
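The gate network in the lower panel of Fig. 3 can be verified directly: composing the Hadamard, controlled-S, controlled-T, and SWAP matrices in the standard 3-qubit QFT order reproduces the $8\times 8$ Fourier matrix. A minimal NumPy check (our own, with qubit 1 taken as the most significant bit):

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def on(gate, qubit):
    """Embed a single-qubit gate on one of 3 qubits (qubit 0 = most significant)."""
    ops = [I2, I2, I2]; ops[qubit] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cphase(theta, a, b, n=3):
    """Controlled-phase gate: phase e^{i theta} when qubits a and b are both 1."""
    d = np.ones(2**n, dtype=complex)
    for q in range(2**n):
        if (q >> (n-1-a)) & 1 and (q >> (n-1-b)) & 1:
            d[q] = np.exp(1j*theta)
    return np.diag(d)

SWAP02 = np.zeros((8, 8))
for q in range(8):
    swapped = ((q & 1) << 2) | (q & 2) | (q >> 2)   # exchange qubits 0 and 2
    SWAP02[swapped, q] = 1

# standard 3-qubit QFT circuit: H and controlled S/T phases, then bit reversal
U = SWAP02 @ on(H, 2) @ cphase(np.pi/2, 1, 2) @ on(H, 1) \
    @ cphase(np.pi/4, 0, 2) @ cphase(np.pi/2, 0, 1) @ on(H, 0)

F = np.exp(2j*np.pi*np.outer(np.arange(8), np.arange(8))/8) / np.sqrt(8)
print(np.allclose(U, F))                 # True: the gate network reproduces the QFT
```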
GRAPE Pulses. Since there are hundreds of logical gates in the required
network of quantum operations, a direct implementation of the gate operation
network will need a large number of single-qubit rotations as well as many
free evolutions during the single-qubit operations. This bottom-up approach
will then accumulate the errors in every single-qubit operation. Considerable
decoherence effects will also emerge during the long process. For example, we
have attempted to directly decompose the network into a sequence of RF pulses,
finding that the required free evolution time for the 25 loops of evolution is
more than 1 s, which is comparable to the $T_{2}$ time of our system. To
overcome these problems and to reach a high-fidelity quantum coherent control
over the three interacting qubits, the unitary operators used in our
experiment are realized by shaped quantum control pulses found by the GRadient
Ascent Pulse Engineering (GRAPE) technique (?, ?, ?). To maximize the fidelity
of the experimental propagator as compared with the ideal gate operations, we
use a mean gate fidelity by averaging over a weighted distribution of RF field
strengths to minimize the inhomogeneity effect of the RF pulses applied to the
sample.
For a known or desired unitary operator $U_{target}$, the goal of the GRAPE
algorithm is to find a shaped pulse within a given duration $t_{total}$ to
maximize the fidelity
$F=|\texttt{Tr}(U_{target}^{\dagger}U_{cal})/2^{n}|^{2},$ (41)
where $U_{cal}$ is the unitary operator actually realized by the shaped pulse
and $2^{n}$ is the dimension of the Hilbert space. We discretize the evolution
time $t_{total}$ into $N$ segments of equal duration $\Delta t=t_{total}/N$,
such that $U_{cal}=U_{N}\cdots U_{2}U_{1}$, with the evolution operator
associated with the _j_ th time interval given by
$U_{j}=\exp\{-i\Delta t(\mathcal{H}_{int}+\sum_{k=1}^{m}u_{k}(j)\mathcal{H}_{k})\}.$ (42)
Here $\mathcal{H}_{int}$ is the three-qubit self-Hamiltonian in the absence of
any control field, $\mathcal{H}_{k}$ represents the interaction Hamiltonian
due to the applied RF field, and $u_{k}(j)$ are the control amplitudes associated
with $\mathcal{H}_{k}$. Specifically, in our experiment $u_{k}(j)$ are the
time-dependent amplitudes of the RF field along the _x_ and _y_ directions,
for the F-channel, the C-channel and the H-channel. With an initial guess for
the pulse shape, we use the GRAPE algorithm to optimize $u_{k}(j)$ iteratively
until $U_{cal}$ becomes very close to $U_{target}$. More details can be found
from Ref. (?). The GRAPE technique dramatically decreases the duration and
complexity of our experiment and at the same time increases the quantum
control fidelity. In our proof-of-principle demonstration of the feasibility
of the quantum simulation of a chemical reaction, the task of searching for
the GRAPE pulses is carried out on a classical computer in a rather
straightforward manner. It is important to note that this technique can be
scaled up for many-qubit systems, because the quantum evolution of the system
itself can be exploited in finding the high-fidelity coherent control pulses.
As an example, (Fig. S2) shows the details of one 15 ms GRAPE pulse to realize
the quantum evolution from $t=0$ to $t_{7}=7\delta t$ (also combining the
operations for initial state preparation and the extra operation $R$ that is
useful for the measurement stage). The shown GRAPE pulse is found by
optimizing the frequency spectrum divided into 750 segments, and it has a
fidelity over 0.99.
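As a toy illustration of the search defined by Eqs. 41-42 (a single qubit with an assumed drift, not the actual three-qubit pulse design), the sketch below performs a crude finite-difference gradient ascent on the fidelity of Eq. 41 for piecewise-constant $x$ and $y$ controls.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H_int = np.pi * sz                      # assumed drift Hamiltonian (illustrative)
U_target = expm(-1j*np.pi/4*sx)         # target gate: a pi/2 rotation about x
Nseg, dt = 10, 0.1                      # piecewise-constant control segments

def U_cal(u):
    """Propagator for control amplitudes u of shape (Nseg, 2), cf. Eq. 42."""
    U = np.eye(2, dtype=complex)
    for ux, uy in u:
        U = expm(-1j*dt*(H_int + ux*sx + uy*sy)) @ U
    return U

def fidelity(u):                        # Eq. 41 with n = 1
    return abs(np.trace(U_target.conj().T @ U_cal(u)) / 2)**2

rng = np.random.default_rng(0)
u = 0.1*rng.normal(size=(Nseg, 2))
step, eps = 1.0, 1e-6
for _ in range(300):                    # crude finite-difference gradient ascent
    grad = np.zeros_like(u)
    for idx in np.ndindex(*u.shape):
        du = np.zeros_like(u); du[idx] = eps
        grad[idx] = (fidelity(u + du) - fidelity(u - du)) / (2*eps)
    u += step*grad
print(fidelity(u))                      # should approach 1 for this controllable toy
```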
#### C. Measurement
To simulate the process of a chemical reaction, it is necessary to measure the
simulated reactant-to-product transformation at different times. To that end
we measure the overlaps of
$C(\left|\psi({t_{m}})\right\rangle,\left|\phi_{0}\right\rangle)$ and
$C(\left|\psi(t_{m})\right\rangle,\left|\phi_{1}\right\rangle)$ at
$t_{m}=m\delta t$. Here we first provide more explanations about how a
diagonalization method can reduce the measurement of the overlaps to
population measurements.
Without loss of generality, we consider the measurement of
$C(\left|\psi(t_{7})\right\rangle,\left|\phi_{0}\right\rangle)$. We have
$C(\left|\psi(t_{7})\right\rangle,\left|\phi_{0}\right\rangle)=|\langle\phi_{0}|\psi(t_{7})\rangle|^{2}=\texttt{Tr}[\rho(t_{7})\rho_{0}],$ (43)
where
$\rho(t_{7})=\left|\psi(t_{7})\right\rangle\left\langle\psi(t_{7})\right|$ and
$\rho_{0}=\left|\phi_{0}\right\rangle\left\langle\phi_{0}\right|$. Let _R_ be
a transformation matrix which diagonalizes $\rho_{0}$ to a diagonal density
matrix $\rho_{0}^{\prime}=R\rho_{0}R^{\dagger}$. Then
$\displaystyle\texttt{Tr}[\rho(t_{7})\rho_{0}]=\texttt{Tr}[R\rho(t_{7})R^{\dagger}R\rho_{0}R^{\dagger}]=\texttt{Tr}[\rho^{\prime}(t_{7})\rho_{0}^{\prime}],$
(44)
where $\rho^{\prime}(t_{7})=R\rho(t_{7})R^{\dagger}$. Clearly then, only the
diagonal terms (populations) of $\rho^{\prime}(t_{7})$ are relevant when
calculating $\texttt{Tr}[\rho^{\prime}(t_{7})\rho_{0}^{\prime}]$, namely, the
overlap $C(\left|\psi(t_{7})\right\rangle,\left|\phi_{0}\right\rangle)$. Hence
only population measurement of the density matrix $\rho^{\prime}(t_{7})$ is
needed to obtain the overlap between $|\psi(t_{7})\rangle$ and the initial
state. The GRAPE pulse that combines the operations for initial state
preparation, for the quantum evolution, as well as for the extra operation _R_
is shown in (Fig. S2).
The three population-readout spectra after applying the GRAPE pulse are shown
in (Fig. S3B-S3D), together with the 13C spectrum for the PPS $|000\rangle$.
The populations to be measured are converted to the observable coherent terms
by applying a $[\pi/2]_{y}$ pulse to each of the three qubits. For the 13C
spectrum shown in (Fig. S3C), four peaks from left to right are seen, with
their respective integration results representing $P(5)-P(7)$, $P(6)-P(8)$,
$P(1)-P(3)$, and $P(2)-P(4)$, where $P(i)$ is the _i_ th diagonal element of
$\rho^{\prime}(t_{7})$. Experimentally the four integrals associated with the
four peaks in (Fig. S3C) are found to be $-0.098$, $-0.482$, $-0.089$ and
$-0.071$, which are close to the theoretical values $-0.047$, $-0.501$,
$-0.114$ and $-0.041$. Further using other readouts from the 19F (see Fig.
S3B) and 1H (see Fig. S3D) spectra as well as the normalization condition
$\sum_{i=1}^{8}{P}({i})=1$, we obtain all the 8 populations and hence the
overlap $C(\left|\psi(t_{7})\right\rangle,\left|\phi_{0}\right\rangle)$. The
theoretical and experimental results for this overlap are 0.535 and 0.529,
which are in good agreement. A similar procedure is used to obtain
$C(\left|\psi(t_{m})\right\rangle,\left|\phi_{1}\right\rangle)$.
The spectra of the 1H and 19F channels are obtained by first transmitting the
signals of the 19F and 1H qubits to the 13C qubit using SWAP gates. With this
procedure all the spectra shown in (Fig. S3) are exhibited on the 13C channel.
Indeed, because in our sample of natural abundance, only $\approx 1\%$ of all
the molecules contain a 13C nuclear spin, the signals from the 1H and 19F
nuclear spins without applying SWAP gates would be dominated by those
molecules with the 12C isotope.
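Recovering the eight populations from the measured peak integrals amounts to solving a small linear system: each integral fixes a difference $P(i)-P(j)$ and the normalization supplies the final constraint. In the sketch below the 13C pairings are those quoted above, while the pairings assumed for the 19F and 1H readouts are illustrative placeholders only.

```python
import numpy as np

# Each peak integral equals a difference of two populations P(i) - P(j).
# The 13C pairs are quoted in the text; the 19F and 1H pairs below are
# illustrative assumptions, not the actual spectral assignment.
pairs = [
    (5, 7), (6, 8), (1, 3), (2, 4),        # 13C spectrum (quoted in the text)
    (1, 5), (2, 6), (3, 7), (4, 8),        # 19F spectrum (assumed pairing)
    (1, 2), (3, 4), (5, 6), (7, 8),        # 1H spectrum (assumed pairing)
]

def populations_from_integrals(integrals, pairs):
    """Least-squares recovery of 8 populations from difference measurements
    plus the normalization constraint sum_i P(i) = 1."""
    A = np.zeros((len(pairs) + 1, 8))
    b = np.zeros(len(pairs) + 1)
    for row, ((i, j), val) in enumerate(zip(pairs, integrals)):
        A[row, i-1], A[row, j-1] = 1.0, -1.0
        b[row] = val
    A[-1, :] = 1.0                          # normalization row
    b[-1] = 1.0
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P

# synthetic check: regenerate the integrals from a known population vector
P_true = np.array([0.53, 0.02, 0.05, 0.03, 0.25, 0.04, 0.05, 0.03])
integrals = [P_true[i-1] - P_true[j-1] for i, j in pairs]
print(np.allclose(populations_from_integrals(integrals, pairs), P_true))
```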
References
* 31. R. P. Feynman, Rev. Mod. Phys. 20, 367 (1948).
* 32. M. D. Feit, J. A. Fleck, and A. Steiger, J. Comput. Phys. 47, 412 (1982).
* 33. C. Leforestier, R. H. Bisseling, C. Cerjan, M. D. Feit, R. Friesner, A. Guldberg, A. Hammerich, G. Jolicard, W. Karrlein, H. -D. Meyer, N. Lipkin, O. Roncero, and R. Kosloff, J. Comput. Phys. 94, 59 (1991).
* 34. J. Z. H. Zhang, Theory and Application of Quantum Molecular Dynamics (World Scientific, Singapore, 1999).
* 35. A. D. Bandrauk and H. Chen, Can. J. Chem. 70, 555 (1992).
* 36. W. S. Zhu and X. S. Zhao, J. Chem. Phys. 105, 9536 (1996).
* 37. D. G. Cory, A. F. Fahmy, and T. F. Havel, Proc. Natl. Acad. Sci. USA 94, 1634 (1997).
* 38. N. Khaneja, T. Reiss, C. Kehlet, T. S. Herbrüggen, and S. J. Glaser, J. Magn. Reson. 172, 296 (2005).
* 39. J. Baugh, J. Chamilliard, C. M. Chandrashekar, M. Ditty, A. Hubbard, R. Laflamme, M. Laforest, D. Maslov, O. Moussa, C. Negrevergne, M. Silva, S. Simmons, C. A. Ryan, D. G. Cory, J. S. Hodges, and C. Ramanathan, Phys. in Can. 63, No.4 (2007).
* 40. C. A. Ryan, C. Negrevergne, M. Laforest, E. Knill, and R. Laflamme, Phys. Rev. A 78, 012328 (2008).
* 41. J. S. Lee, Phys. Lett. A 305, 349 (2002).
* 42. N. Boulant, E. M. Fortunato, M. A. Pravia, G. Teklemariam, D. G. Cory, and T. F. Havel, Phys. Rev. A 65, 024302 (2002).
Fig. S1. Pulse sequence for the preparation of the PPS and a comparison
between experimental and theoretical density matrix elements for the reactant
state $\left|\phi_{0}\right\rangle$. (A) Pulse sequence that implements the
PPS preparation, with $\theta=0.64\pi$ and
$\texttt{X}(\overline{\texttt{X}},\texttt{Y},\overline{\texttt{Y}})$
representing rotations around the $x$($-x$, $y$, $-y$) direction.
G${}_{\text{Z}}$ represents a gradient pulse to destroy the coherence induced
by the rotating pulses and free evolutions. $\frac{1}{4\text{J}_{\text{HF}}}$
and $\frac{1}{4\text{J}_{\text{CF}}}$ represent the free evolution of the
system under $\mathcal{H}_{int}$ for 5.252 ms and 1.286 ms, respectively. (B)
Comparison between measured density matrix elements of the initial state
$\left|\phi_{0}\right\rangle$ and the theoretical target density matrix
elements based on the 8-point encoding. Both the real part and the imaginary
part of the density matrix elements are shown.
Fig. S2. GRAPE pulse to simulate the quantum evolution of the reacting system
from $t=0$ to $t_{7}=7\delta t$. The top, middle and bottom panels depict the
time-dependence of the RF pulses applied to the F-channel, C-channel and
H-channel, respectively. The (blue) solid line represents the pulse power
applied in the ${x}$-direction, and the (red) dotted line represents the pulse
power applied in the ${y}$-direction.
Fig. S3. Measured spectra to extract the populations of the system density
matrix before or after applying the GRAPE pulse shown in (Fig. S2). (A) 13C spectrum
of the PPS $\left|000\right\rangle$ as a result of a $[\pi/2]_{y}$ pulse
applied to the 13C qubit. The area (integral) of the absorption peak can be
regarded as one benchmark in NMR realizations of quantum computation. (B)-(D)
Signals from the 19F, 13C and 1H qubits after applying the GRAPE pulse and a
$[\pi/2]_{y}$ pulse to each of the three qubits. All the spectra are exhibited
on the 13C channel through SWAP gates. The integration of each spectral peak
gives the difference of two particular diagonal elements of the density matrix
$\rho^{\prime}(t_{7})$.
|
arxiv-papers
| 2011-05-21T08:17:06 |
2024-09-04T02:49:19.015269
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Dawei Lu, Nanyang Xu, Ruixue Xu, Hongwei Chen, Jiangbin Gong, Xinhua\n Peng, and Jiangfeng Du",
"submitter": "Jiangfeng Du",
"url": "https://arxiv.org/abs/1105.4228"
}
|
1105.4279
|
# Frame Coherence and Sparse Signal Processing
Dustin G. Mixon1, Waheed U. Bajwa2, Robert Calderbank2 This work was
supported by the Office of Naval Research under Grant N00014-08-1-1110, by the
Air Force Office of Scientific Research under Grants FA9550-09-1-0551 and
FA9550-09-1-0643, and by NSF under Grant DMS-0914892. Mixon was supported by the
A.B. Krongard Fellowship. The views expressed in this article are those of the
authors and do not reflect the official policy or position of the United
States Air Force, Department of Defense, or the U.S. Government. 1Program in
Applied and Computational Mathematics, Princeton University, Princeton, New
Jersey 08544 2Department of Electrical and Computer Engineering, Duke
University, Durham, North Carolina 27708
###### Abstract
The sparse signal processing literature often uses random sensing matrices to
obtain performance guarantees. Unfortunately, in the real world, sensing
matrices do not always come from random processes. It is therefore desirable
to evaluate whether an arbitrary matrix, or frame, is suitable for sensing
sparse signals. To this end, the present paper investigates two parameters
that measure the coherence of a frame: worst-case and average coherence. We
first provide several examples of frames that have small spectral norm, worst-
case coherence, and average coherence. Next, we present a new lower bound on
worst-case coherence and compare it to the Welch bound. Later, we propose an
algorithm that decreases the average coherence of a frame without changing its
spectral norm or worst-case coherence. Finally, we use worst-case and average
coherence, as opposed to the Restricted Isometry Property, to garner near-
optimal probabilistic guarantees on both sparse signal detection and
reconstruction in the presence of noise. This contrasts with recent results
that only guarantee noiseless signal recovery from arbitrary frames, and which
further assume independence across the nonzero entries of the signal—in a
sense, requiring small average coherence replaces the need for such an
assumption.
## I Introduction
Many classical applications, such as radar and error-correcting codes, make
use of over-complete spanning systems [1]. Oftentimes, we may view an over-
complete spanning system as a _frame_. Take $F=\{f_{i}\}_{i\in\mathcal{I}}$
to be a collection of vectors in some separable Hilbert space $\mathcal{H}$.
Then $F$ is a frame if there exist _frame bounds_ $A$ and $B$ with $0<A\leq
B<\infty$ such that $A\|x\|^{2}\leq\sum_{i\in\mathcal{I}}|\langle
x,f_{i}\rangle|^{2}\leq B\|x\|^{2}$ for every $x\in\mathcal{H}$. When $A=B$,
$F$ is called a _tight frame_. For finite-dimensional unit norm frames, where
$\mathcal{I}=\{1,\ldots,N\}$, the _worst-case coherence_ is a useful
parameter:
$\mu_{F}:=\max_{\begin{subarray}{c}i,j\in\{1,\ldots,N\}\\ i\neq j\end{subarray}}|\langle f_{i},f_{j}\rangle|.$ (1)
Note that orthonormal bases are tight frames with $A=B=1$ and have zero worst-
case coherence. In both respects, frames form a natural generalization of
orthonormal bases.
In this paper, we only consider finite-dimensional frames. Those not familiar
with frame theory can simply view a finite-dimensional frame as an $M\times N$
matrix of rank $M$ whose columns are the frame elements. With this view, the
tightness condition is equivalent to having the spectral norm be as small as
possible; for an $M\times N$ unit norm frame $F$, this equivalently means
$\|F\|_{2}^{2}=\frac{N}{M}$.
Throughout the literature, applications require finite-dimensional frames that
are nearly tight and have small worst-case coherence [4, 5, 2, 3, 1, 6, 7, 8].
Among these, a foremost application is sparse signal processing, where frames
of small spectral norm and/or small worst-case coherence are commonly used to
analyze sparse signals [4, 5, 6, 7, 8]. Recently, [9] introduced another
notion of frame coherence called _average coherence_ :
$\nu_{F}:=\tfrac{1}{N-1}\max_{i\in\{1,\ldots,N\}}\bigg{|}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\langle f_{i},f_{j}\rangle\bigg{|}.$ (2)
Note that, in addition to having zero worst-case coherence, orthonormal bases
also have zero average coherence. It was established in [9] that when
$\nu_{F}$ is sufficiently smaller than $\mu_{F}$, a number of guarantees can
be provided for sparse signal processing. It is therefore evident from [9, 4,
5, 2, 3, 1, 6, 7, 8] that there is a pressing need for nearly tight frames
with small worst-case and average coherence, especially in the area of sparse
signal processing.
This paper offers four main contributions in this regard. First, we discuss
three types of frames that exhibit small spectral norm, worst-case coherence,
and average coherence: normalized Gaussian, random harmonic, and code-based
frames. With all three frame parameters provably small, these frames are
guaranteed to perform well in relevant applications. Second, performance in
many applications is dictated by worst-case coherence [4, 5, 2, 3, 1, 6, 7,
8]. It is therefore particularly important to understand which worst-case
coherence values are achievable. To this end, the Welch bound [1] is commonly
used in the literature. However, the Welch bound is only tight when the number
of frame elements $N$ is less than the square of the spatial dimension $M$
[1]. Another lower bound, given in [10] and [11], beats the Welch bound when
there are more frame elements, but it is known to be loose for real frames
[12]. Given this context, our next contribution is a new lower bound on the
worst-case coherence of real frames. Our bound beats both the Welch bound and
the bound in [10] and [11] when the number of frame elements far exceeds the
spatial dimension. Third, since average coherence is new to the frame theory
literature, we investigate how it relates to worst-case coherence and spectral
norm. In particular, we want average coherence to satisfy the following
property, which is used in [9] to provide various guarantees for sparse signal
processing:
###### Definition 1.
We say an $M\times N$ unit norm frame $F$ satisfies the _Strong Coherence
Property_ if
$\mbox{(SCP-1)}~{}~{}~{}\mu_{F}\leq\tfrac{1}{164\log
N}\qquad\mbox{and}\qquad\mbox{(SCP-2)}~{}~{}~{}\nu_{F}\leq\tfrac{\mu_{F}}{\sqrt{M}},$
where $\mu_{F}$ and $\nu_{F}$ are given by (1) and (2), respectively.
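Both coherence parameters and the spectral norm are inexpensive to evaluate from the Gram matrix, so the Strong Coherence Property can be checked directly for any given frame. A minimal NumPy sketch, using a normalized Gaussian frame as illustrative input:

```python
import numpy as np

def frame_geometry(F):
    """Worst-case coherence (Eq. 1), average coherence (Eq. 2), and spectral norm
    of an M x N frame F with unit-norm columns."""
    G = F.conj().T @ F                        # Gram matrix
    off = G - np.diag(np.diag(G))             # zero out the unit diagonal
    mu = np.max(np.abs(off))
    nu = np.max(np.abs(off.sum(axis=1))) / (F.shape[1] - 1)
    return mu, nu, np.linalg.norm(F, 2)

# example: a normalized Gaussian frame (columns scaled to the unit sphere)
M, N = 16, 256
rng = np.random.default_rng(0)
F = rng.normal(size=(M, N))
F /= np.linalg.norm(F, axis=0)
mu, nu, s = frame_geometry(F)
print(mu, nu, s, nu <= mu/np.sqrt(M))         # last value: does (SCP-2) hold?
```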
Since average coherence is so new, there is currently no intuition as to when
(SCP-2) is satisfied. As a third contribution, this paper shows how to
transform a frame that satisfies (SCP-1) into another frame with the same
spectral norm and worst-case coherence that additionally satisfies (SCP-2).
Finally, this paper uses the Strong Coherence Property to provide new
guarantees on both sparse signal detection and reconstruction in the presence
of noise. These guarantees are related to those in [4, 5, 7], and we elaborate
on this relationship in Section V. In the interest of space, the proofs have
been omitted throughout, but they can be found in [13].
## II Frame constructions
Many applications require nearly tight frames with small worst-case and
average coherence. In this section, we give three types of frames that satisfy
these conditions.
### II-A Normalized Gaussian frames
Construct a matrix with independent, Gaussian distributed entries that have
zero mean and unit variance. By normalizing the columns, we get a matrix
called a _normalized Gaussian frame_. This is perhaps the most widely studied
type of frame in the signal processing and statistics literature.
To be clear, the term “normalized” is intended to distinguish the results
presented here from results reported in earlier works, such as [9, 14, 15,
16], which only ensure that the frame elements of Gaussian frames have unit
norm in expectation. In other words, normalized Gaussian frames are frames
with individual frame elements independently and uniformly distributed on the
unit hypersphere in $\mathbb{R}^{M}$.
That said, the following theorem characterizes the spectral norm and the
worst-case and average coherence of normalized Gaussian frames.
###### Theorem 2 (Geometry of normalized Gaussian frames).
Build a real $M\times N$ frame $G$ by drawing entries independently at random
from a Gaussian distribution of zero mean and unit variance. Next, construct a
normalized Gaussian frame $F$ by taking
$\smash{f_{n}:=\frac{g_{n}}{\|g_{n}\|}}$ for every $n=1,\ldots,N$. Provided
$\smash{60\log{N}\leq M\leq\frac{N-1}{4\log{N}}}$, the following
inequalities simultaneously hold with probability exceeding $1-11N^{-1}$:
1. (i)
$\mu_{F}\leq\frac{\sqrt{15\log{N}}}{\sqrt{M}-\sqrt{12\log{N}}}$,
2. (ii)
$\nu_{F}\leq\frac{\sqrt{15\log{N}}}{M-\sqrt{12M\log{N}}}$,
3. (iii)
$\|F\|_{2}\leq\frac{\sqrt{M}+\sqrt{N}+\sqrt{2\log{N}}}{\sqrt{M-\sqrt{8M\log{N}}}}$.
### II-B Random harmonic frames
Random harmonic frames, constructed by randomly selecting rows of a discrete
Fourier transform (DFT) matrix and normalizing the resulting columns, have
received considerable attention lately in the compressed sensing literature
[17, 18, 19]. However, to the best of our knowledge, there is no result in the
literature that shows that random harmonic frames have small worst-case
coherence. To fill this gap, the following theorem characterizes the spectral
norm and the worst-case and average coherence of random harmonic frames.
###### Theorem 3 (Geometry of random harmonic frames).
Let $U$ be an $N\times N$ non-normalized discrete Fourier transform matrix,
explicitly, $U_{k\ell}:=\mathrm{e}^{2\pi\mathrm{i}k\ell/N}$ for each
$k,\ell=0,\ldots,N-1$. Next, let $\{B_{i}\}_{i=1}^{N}$ be a collection of
independent Bernoulli random variables with mean $\smash{\frac{M}{N}}$, and
take $\mathcal{M}:=\{i:B_{i}=1\}$. Finally, construct an
$|\mathcal{M}|\times N$ harmonic frame $F$ by collecting the rows of $U$ which
correspond to indices in $\mathcal{M}$ and normalizing the columns. Then $F$ is
a unit norm tight frame: $\smash{\|F\|_{2}^{2}=\frac{N}{|\mathcal{M}|}}$.
Furthermore, provided $\smash{16\log{N}\leq M\leq\frac{N}{3}}$, the following
inequalities simultaneously hold with probability exceeding
$1-4N^{-1}-N^{-2}$:
1. (i)
$\frac{1}{2}M\leq|\mathcal{M}|\leq\frac{3}{2}M$,
2. (ii)
$\nu_{F}\leq\frac{\mu_{F}}{\sqrt{|\mathcal{M}|}}$,
3. (iii)
$\mu_{F}\leq\sqrt{\frac{118(N-M)\log{N}}{MN}}$.
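A random harmonic frame as in Theorem 3 is straightforward to generate; the short sketch below (illustrative parameters, our own code) also verifies the tightness identity $\|F\|_{2}^{2}=N/|\mathcal{M}|$.

```python
import numpy as np

def random_harmonic_frame(N, M, seed=0):
    """Select rows of the N x N DFT matrix with i.i.d. Bernoulli(M/N) coin flips
    and normalize the resulting columns, as in Theorem 3."""
    rng = np.random.default_rng(seed)
    rows = np.flatnonzero(rng.random(N) < M/N)
    k, l = np.meshgrid(rows, np.arange(N), indexing='ij')
    F = np.exp(2j*np.pi*k*l/N)
    return F / np.linalg.norm(F, axis=0)       # unit-norm columns

F = random_harmonic_frame(N=512, M=64)
Msel, N = F.shape
print(Msel, np.allclose(np.linalg.norm(F, 2)**2, N/Msel))   # tightness check
```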
### II-C Code-based frames
Many structures in coding theory are also useful for constructing frames.
Here, we build frames from a code that originally emerged with Berlekamp in
[20], and found recent reincarnation with [21]. We build a $2^{m}\times
2^{(t+1)m}$ frame, indexing rows by elements of $\mathbb{F}_{2^{m}}$ and
indexing columns by $(t+1)$-tuples of elements from $\mathbb{F}_{2^{m}}$. For
$x\in\mathbb{F}_{2^{m}}$ and $\alpha\in\mathbb{F}_{2^{m}}^{t+1}$, the
corresponding entry of the matrix $F$ is
$F_{x\alpha}=\tfrac{1}{\sqrt{2^{m}}}(-1)^{\mathrm{Tr}\big{[}\alpha_{0}x+\sum_{i=1}^{t}\alpha_{i}x^{2^{i}+1}\big{]}},$
(3)
where $\mathrm{Tr}:\mathbb{F}_{2^{m}}\rightarrow\mathbb{F}_{2}$ denotes the
trace map, defined by $\mathrm{Tr}(z)=\sum_{i=0}^{m-1}z^{2^{i}}$. The
following theorem gives the spectral norm and the worst-case and average
coherence of this frame.
###### Theorem 4 (Geometry of code-based frames).
The $2^{m}\times 2^{(t+1)m}$ frame defined by (3) is unit norm and tight,
i.e., $\|F\|_{2}^{2}=2^{tm}$, with worst-case coherence
$\mu_{F}\leq\frac{1}{\sqrt{2^{m-2t-1}}}$ and average coherence
$\smash{\nu_{F}\leq\frac{\mu_{F}}{\sqrt{2^{m}}}}$.
## III Fundamental limits on worst-case coherence
In many applications of frames, performance is dictated by worst-case
coherence. It is therefore particularly important to understand which worst-
case coherence values are achievable. To this end, the following bound is
commonly used in the literature:
###### Theorem 5 (Welch bound [1]).
Every $M\times N$ unit norm frame $F$ has worst-case coherence
$\mu_{F}\geq\sqrt{\tfrac{N-M}{M(N-1)}}$.
The Welch bound is not tight whenever $N>M^{2}$ [1]. For this region, the
following gives a better bound:
###### Theorem 6 ([10],[11]).
Every $M\times N$ unit norm frame $F$ has worst-case coherence $\mu_{F}\geq
1-2N^{-1/(M-1)}$. Taking $N=\Theta(a^{M})$, this lower bound goes to
$1-\frac{2}{a}$ as $M\rightarrow\infty$.
For many applications, it does not make sense to use a complex frame, but the
bound in Theorem 6 is known to be loose for real frames [12]. We therefore
improve Theorem 6 for the case of real unit norm frames:
###### Theorem 7.
Every real $M\times N$ unit norm frame $F$ has worst-case coherence
$\mu_{F}\geq\cos\bigg{[}\pi\Big{(}\tfrac{M-1}{N\pi^{1/2}}~{}\tfrac{\Gamma(\frac{M-1}{2})}{\Gamma(\frac{M}{2})}\Big{)}^{\frac{1}{M-1}}\bigg{]}.$
(4)
Furthermore, taking $N=\Theta(a^{M})$, this lower bound goes to
$\cos(\frac{\pi}{a})$ as $M\rightarrow\infty$.
In [12], numerical results are given for $M=3$, and we compare these results
to Theorems 6 and 7 in Figure 1. Considering this figure, we note that the
bound in Theorem 6 is inferior to the maximum of the Welch bound and the bound
in Theorem 7, at least when $M=3$. This illustrates the degree to which
Theorem 7 improves the bound in Theorem 6 for real frames. In fact, since
$\cos(\frac{\pi}{a})\geq 1-\frac{2}{a}$ for all $a\geq 2$, the bound for real
frames in Theorem 7 is asymptotically better than the bound for complex frames
in Theorem 6. Moreover, for $M=2$, Theorem 7 says
$\mu\geq\cos(\frac{\pi}{N})$, and [22] proved this bound to be tight for every
$N\geq 2$. For $M=3$, Theorem 7 can be further improved as follows:
###### Theorem 8.
Every real $3\times N$ unit norm frame $F$ has worst-case coherence
$\mu_{F}\geq 1-\frac{4}{N}+\frac{2}{N^{2}}$.
Figure 1: Different bounds on worst-case coherence for $M=3$, $N=3,\ldots,55$.
Stars give numerically determined optimal worst-case coherence of $N$ real
unit vectors, found in [12]. Dotted curve gives Welch bound, dash-dotted curve
gives bound from Theorem 6, dashed curve gives general bound from Theorem 7,
and solid curve gives bound from Theorem 8.
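Since the bounds compared in Figure 1 are given by the explicit formulas in Theorems 5–8, they are easy to tabulate directly. The short Python sketch below is our own illustration (not part of the paper); it simply evaluates the four expressions for $M=3$ and a few values of $N$.

```python
import math

def welch(M, N):                       # Theorem 5
    return math.sqrt((N - M) / (M * (N - 1)))

def bound_thm6(M, N):                  # Theorem 6
    return 1 - 2 * N ** (-1 / (M - 1))

def bound_thm7(M, N):                  # Theorem 7, equation (4)
    inner = (M - 1) / (N * math.sqrt(math.pi)) * math.gamma((M - 1) / 2) / math.gamma(M / 2)
    return math.cos(math.pi * inner ** (1 / (M - 1)))

def bound_thm8(N):                     # Theorem 8, real frames with M = 3
    return 1 - 4 / N + 2 / N ** 2

M = 3
for N in (5, 10, 20, 40, 55):
    print(N, round(welch(M, N), 4), round(bound_thm6(M, N), 4),
          round(bound_thm7(M, N), 4), round(bound_thm8(N), 4))
```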
## IV Reducing average coherence
In [9], average coherence is used to garner a number of guarantees on sparse
signal processing. Since average coherence is so new to the frame theory
literature, this section will investigate how average coherence relates to
worst-case coherence and the spectral norm. We start with a definition:
###### Definition 9 (Wiggling and flipping equivalent frames).
We say the frames $F$ and $G$ are _wiggling equivalent_ if there exists a
diagonal matrix $D$ of unimodular entries such that $G=FD$. Furthermore, they
are _flipping equivalent_ if $D$ is real, having only $\pm 1$’s on the
diagonal.
The terms “wiggling” and “flipping” are inspired by the fact that individual
frame elements of such equivalent frames are related by simple unitary
operations. Note that every frame with $N$ nonzero frame elements belongs to a
flipping equivalence class of size $2^{N}$, while being wiggling equivalent to
uncountably many frames. The importance of this type of frame equivalence is,
in part, due to the following lemma, which characterizes the shared geometry
of wiggling equivalent frames:
###### Lemma 10 (Geometry of wiggling equivalent frames).
Wiggling equivalence preserves the norms of frame elements, the worst-case
coherence, and the spectral norm.
Now that we understand wiggling and flipping equivalence, we are ready for the
main idea behind this section. Suppose we are given a unit norm frame with
acceptable spectral norm and worst-case coherence, but we also want the
average coherence to satisfy (SCP-2). Then by Lemma 10, all of the wiggling
equivalent frames will also have acceptable spectral norm and worst-case
coherence, and so it is reasonable to check these frames for good average
coherence. In fact, the following theorem guarantees that at least one of the
flipping equivalent frames will have good average coherence, with only modest
requirements on the original frame’s redundancy.
###### Theorem 11 (Frames with low average coherence).
Let $F$ be an $M\times N$ unit norm frame with $\smash{M<\frac{N-1}{4\log
4N}}$. Then there exists a frame $G$ that is flipping equivalent to $F$ and
satisfies $\smash{\nu_{G}\leq\frac{\mu_{G}}{\sqrt{M}}}$.
While Theorem 11 guarantees the existence of a flipping equivalent frame with
good average coherence, the result does not describe how to find it.
Certainly, one could check all $2^{N}$ frames in the flipping equivalence
class, but such a procedure is computationally slow. As an alternative, we
propose a linear-time flipping algorithm (Algorithm 1). The following theorem
guarantees that linear-time flipping will produce a frame with good average
coherence, but it requires the original frame’s redundancy to be higher than
what suffices in Theorem 11.
Algorithm 1 Linear-time flipping
Input: An $M\times N$ unit norm frame $F$
Output: An $M\times N$ unit norm frame $G$ that is flipping equivalent to $F$
$g_{1}\leftarrow f_{1}$ {Keep first frame element}
for $n=2$ to $N$ do
if $\|\sum_{i=1}^{n-1}g_{i}+f_{n}\|\leq\|\sum_{i=1}^{n-1}g_{i}-f_{n}\|$ then
$g_{n}\leftarrow f_{n}$ {Keep frame element for shorter sum}
else
$g_{n}\leftarrow-f_{n}$ {Flip frame element for shorter sum}
end if
end for
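A direct transcription of Algorithm 1 into code keeps a running sum of the frame elements accepted so far and flips each new element whenever the flipped version yields the shorter sum, exactly as in the pseudocode. The Python sketch below is our own illustration (the function name is hypothetical); columns of the input array are the frame elements.

```python
import numpy as np

def linear_time_flipping(F):
    """Algorithm 1: return G = F D with a +/-1 diagonal D chosen by a single greedy pass."""
    G = 1.0 * np.array(F)                     # copy; columns of G are the frame elements
    running = G[:, 0].copy()                  # keep the first frame element
    for n in range(1, G.shape[1]):
        # flip f_n exactly when the flipped element gives the shorter running sum
        if np.linalg.norm(running + G[:, n]) > np.linalg.norm(running - G[:, n]):
            G[:, n] = -G[:, n]
        running += G[:, n]
    return G
```

Each frame element is visited once and only vector additions and norms are involved, which is what makes the procedure linear in $N$.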
###### Theorem 12.
Suppose $N\geq M^{2}+3M+3$. Then Algorithm 1 outputs an $M\times N$ frame $G$
that is flipping equivalent to $F$ and satisfies
$\nu_{G}\leq\frac{\mu_{G}}{\sqrt{M}}$.
As an example of how linear-time flipping improves average coherence, consider
the following matrix:
$F:=\frac{1}{\sqrt{5}}\left[\begin{array}{cccccccccc}+&+&+&+&-&+&+&+&+&-\\
+&-&+&+&+&-&-&-&+&-\\
+&+&+&+&+&+&+&+&-&+\\
-&-&-&+&-&+&+&-&-&-\\
-&+&+&-&-&+&-&-&-&-\end{array}\right].$
Here, $\smash{\nu_{F}\approx 0.3778>0.2683\approx\frac{\mu_{F}}{\sqrt{M}}}$.
Even though $N<M^{2}+3M+3$, we can run linear-time flipping to get the
flipping pattern $D:=\mathrm{diag}(+-+--++-++)$. Then $FD$ has average
coherence $\smash{\nu_{FD}\approx
0.1556<\frac{\mu_{F}}{\sqrt{M}}=\frac{\mu_{FD}}{\sqrt{M}}}$. This example
illustrates that the condition $N\geq M^{2}+3M+3$ in Theorem 12 is sufficient
but not necessary.
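The quantities quoted in this example are easy to reproduce. The following self-contained Python snippet is our own check; it assumes the average coherence is $\nu_{F}=\frac{1}{N-1}\max_{i}\big|\sum_{j\neq i}\langle f_{i},f_{j}\rangle\big|$, as defined earlier in the paper, builds the sign matrix above, applies the stated flipping pattern $D$, and prints $\nu_{F}$, $\mu_{F}/\sqrt{M}$, and $\nu_{FD}$.

```python
import numpy as np

rows = ["++++-++++-",
        "+-+++---+-",
        "++++++++-+",
        "---+-++---",
        "-++--+----"]
F = np.array([[1.0 if c == "+" else -1.0 for c in r] for r in rows]) / np.sqrt(5)
D = np.diag([1.0 if c == "+" else -1.0 for c in "+-+--++-++"])

def avg_coherence(A):
    G = A.T @ A                       # Gram matrix of the unit-norm columns
    np.fill_diagonal(G, 0.0)
    return np.abs(G.sum(axis=1)).max() / (A.shape[1] - 1)

def worst_coherence(A):
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

print(avg_coherence(F))                            # about 0.3778
print(worst_coherence(F) / np.sqrt(F.shape[0]))    # about 0.2683
print(avg_coherence(F @ D))                        # about 0.1556
```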
## V Near-optimal sparse signal processing without the Restricted Isometry
Property
Frames with small spectral norm, worst-case coherence, and/or average
coherence have found use in recent years with applications involving sparse
signals. Donoho et al. used the worst-case coherence in [5] to provide uniform
bounds on the signal and support recovery performance of combinatorial and
convex optimization methods and greedy algorithms. Later, Tropp [7] and Candès
and Plan [4] used both the spectral norm and worst-case coherence to provide
tighter bounds on the signal and support recovery performance of convex
optimization methods for most support sets under the additional assumption
that the sparse signals have independent nonzero entries with zero median.
Recently, Bajwa et al. [9] made use of the spectral norm and both coherence
parameters to report tighter bounds on the noisy model selection and noiseless
signal recovery performance of an incredibly fast greedy algorithm called
_one-step thresholding (OST)_ for most support sets and _arbitrary_ nonzero
entries. In this section, we discuss further implications of the spectral norm
and worst-case and average coherence of frames in applications involving
sparse signals.
### V-A The Weak Restricted Isometry Property
A common task in signal processing applications is to test whether a
collection of measurements corresponds to mere noise [23]. For applications
involving sparse signals, one can test measurements $y\in\mathbb{C}^{M}$
against the null hypothesis $H_{0}:y=e$ and the alternative hypothesis
$H_{1}:y=Fx+e$, where the entries of the noise vector $e\in\mathbb{C}^{M}$ are
independent, identically distributed zero-mean complex-Gaussian random variables and the
signal $x\in\mathbb{C}^{N}$ is $K$-sparse. The performance of such signal
detection problems is directly proportional to the energy in $Fx$ [24, 25,
23]. In particular, existing literature on the detection of sparse signals
[24, 25] leverages the fact that $\|Fx\|^{2}\approx\|x\|^{2}$ when $F$
satisfies the Restricted Isometry Property (RIP) of order $K$. In contrast, we
now show that the Strong Coherence Property also guarantees
$\|Fx\|^{2}\approx\|x\|^{2}$ for most $K$-sparse vectors. We start with a
definition:
###### Definition 13.
We say an $M\times N$ frame $F$ satisfies the _$(K,\delta,p)$ -Weak Restricted
Isometry Property (Weak RIP)_ if for every $K$-sparse vector
$x\in\mathbb{C}^{N}$, a random permutation $y$ of $x$’s entries satisfies
$(1-\delta)\|y\|^{2}\leq\|Fy\|^{2}\leq(1+\delta)\|y\|^{2}$
with probability exceeding $1-p$.
We note the distinction between RIP and Weak RIP—Weak RIP requires that $F$
preserves the energy of _most_ sparse vectors. Moreover, the manner in which
we quantify “most” is important. For each sparse vector, $F$ preserves the
energy of most permutations of that vector, but for different sparse vectors,
$F$ might not preserve the energy of permutations with the same support. That
is, unlike RIP, Weak RIP is _not_ a statement about the singular values of
submatrices of $F$. Certainly, matrices for which most submatrices are well-
conditioned, such as those discussed in [7], will satisfy Weak RIP, but Weak
RIP does not require this. That said, the following theorem shows, in part,
the significance of the Strong Coherence Property.
###### Theorem 14.
Any $M\times N$ unit norm frame $F$ that satisfies the Strong Coherence
Property also satisfies the $(K,\delta,\frac{4K}{N^{2}})$-Weak Restricted
Isometry Property provided $N\geq 128$ and
$2K\log{N}\leq\min\{\frac{\delta^{2}}{100\mu_{F}^{2}},M\}$.
### V-B Reconstruction of sparse signals from noisy measurements
Another common task in signal processing applications is to reconstruct a
$K$-sparse signal $x\in\mathbb{C}^{N}$ from a small collection of linear
measurements $y\in\mathbb{C}^{M}$. Recently, Tropp [7] used both the worst-
case coherence and spectral norm of frames to find bounds on the
reconstruction performance of _basis pursuit (BP)_ [26] for most support sets
under the assumption that the nonzero entries of $x$ are independent with zero
median. In contrast, [9] used the spectral norm and worst-case and average
coherence of frames to find bounds on the reconstruction performance of OST
for most support sets and _arbitrary_ nonzero entries. However, both [7] and
[9] limit themselves to recovering $x$ in the absence of noise, corresponding
to $y=Fx$, a rather ideal scenario.
Our goal in this section is to provide guarantees for the reconstruction of
sparse signals from noisy measurements $y=Fx+e$, where the entries of the
noise vector $e\in\mathbb{C}^{M}$ are independent, identically distributed complex-Gaussian
random variables with mean zero and variance $\sigma^{2}$. In particular, and
in contrast with [5], our guarantees will hold for arbitrary frames $F$
without requiring the signal’s sparsity level to satisfy $K=O(\mu_{F}^{-1})$.
The reconstruction algorithm that we analyze here is the OST algorithm of [9],
which is described in Algorithm 2. The following theorem extends the analysis
of [9] and shows that the OST algorithm leads to near-optimal reconstruction
error for large classes of sparse signals.
Before proceeding further, we first define some notation. We use
$\textsf{{snr}}:=\|x\|^{2}/\mathbb{E}[\|e\|^{2}]$ to denote the _signal-to-
noise ratio_ associated with the signal reconstruction problem. Also, we use
$\smash{\mathcal{T}_{\sigma}(t):=\{n:|x_{n}|>\frac{2\sqrt{2}}{1-t}\sqrt{2\sigma^{2}\log{N}}\}}$
for any $t\in(0,1)$ to denote the locations of all the entries of $x$ that,
roughly speaking, lie above the _noise floor_ $\sigma$. Finally, we use
$\smash{\mathcal{T}_{\mu}(t):=\{n:|x_{n}|>\frac{20}{t}\mu_{F}\|x\|\sqrt{2\log{N}}\}}$
to denote the locations of entries of $x$ that, roughly speaking, lie above
the _self-interference floor_ $\mu_{F}\|x\|$.
Algorithm 2 One-Step Thresholding (OST) [9]
Input: An $M\times N$ unit norm frame $F$, a vector $y=Fx+e$, and a threshold
$\lambda>0$
Output: An estimate $\hat{\mathcal{K}}\subseteq\{1,\ldots,N\}$ of the
support of $x$ and an estimate $\hat{x}\in\mathbb{C}^{N}$ of $x$
$\hat{x}\leftarrow 0$ {Initialize}
$z\leftarrow F^{*}y$ {Form signal proxy}
$\hat{\mathcal{K}}\leftarrow\{n:|z_{n}|>\lambda\}$ {Select indices via OST}
$\hat{x}_{\hat{\mathcal{K}}}\leftarrow(F_{\hat{\mathcal{K}}})^{\dagger}y$
{Reconstruct signal via least-squares}
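Algorithm 2 is equally short in code: form the proxy $z=F^{*}y$, keep the indices whose proxy magnitude exceeds the threshold, and solve a least-squares problem on the selected columns. The Python sketch below is our own illustration; the random Gaussian test frame, the sparsity level, the noise level, and the simplified threshold are arbitrary demo choices and are not the quantities prescribed by Theorem 15.

```python
import numpy as np

def ost(F, y, lam):
    """One-Step Thresholding: return (estimated support, estimated signal)."""
    z = F.conj().T @ y                               # form the signal proxy
    support = np.flatnonzero(np.abs(z) > lam)        # select indices via OST
    x_hat = np.zeros(F.shape[1], dtype=y.dtype)
    if support.size:
        sol, *_ = np.linalg.lstsq(F[:, support], y, rcond=None)
        x_hat[support] = sol                         # reconstruct via least-squares
    return support, x_hat

# Small synthetic demo (illustrative parameters only).
rng = np.random.default_rng(0)
M, N, K, sigma = 64, 256, 5, 0.05
F = rng.standard_normal((M, N))
F /= np.linalg.norm(F, axis=0)                       # unit-norm columns
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = 1.0
y = F @ x + sigma * rng.standard_normal(M)
lam = 2.0 * np.sqrt(2 * sigma ** 2 * np.log(N))      # crude threshold for the demo
support, x_hat = ost(F, y, lam)
print("true support:", np.flatnonzero(x))
print("selected:", support, " error:", np.linalg.norm(x - x_hat))
```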
###### Theorem 15 (Reconstruction of sparse signals).
Take an $M\times N$ unit norm frame $F$ which satisfies the Strong Coherence
Property, pick $t\in(0,1)$, and choose
$\smash{\lambda=\sqrt{2\sigma^{2}\log{N}}~{}\max\\{\frac{10}{t}\mu_{F}\sqrt{M~{}\textsf{{snr}}},\frac{\sqrt{2}}{1-t}\\}}$.
Further, suppose $x\in\mathbb{C}^{N}$ has support $\mathcal{K}$ drawn
uniformly at random from all possible $K$-subsets of $\{1,\ldots,N\}$. Then
provided
$K\leq\tfrac{N}{c_{1}^{2}\|F\|_{2}^{2}\log{N}},$ (5)
Algorithm 2 produces $\hat{\mathcal{K}}$ such that
$\mathcal{T}_{\sigma}(t)\cap\mathcal{T}_{\mu}(t)\subseteq\hat{\mathcal{K}}\subseteq\mathcal{K}$
and $\hat{x}$ such that
$\|x-\hat{x}\|\leq
c_{2}\sqrt{\sigma^{2}|\hat{\mathcal{K}}|\log{N}}+c_{3}\|x_{\mathcal{K}\setminus\hat{\mathcal{K}}}\|$
(6)
with probability exceeding $1-10N^{-1}$. Finally, defining
$T:=|\mathcal{T}_{\sigma}(t)\cap\mathcal{T}_{\mu}(t)|$, we further have
$\|x-\hat{x}\|\leq c_{2}\sqrt{\sigma^{2}K\log{N}}+c_{3}\|x-x_{T}\|$ (7)
in the same probability event. Here, $c_{1}=37\mathrm{e}$,
$c_{2}=\frac{2}{1-\mathrm{e}^{-1/2}}$, and
$\smash{c_{3}=1+\frac{\mathrm{e}^{-1/2}}{1-\mathrm{e}^{-1/2}}}$ are numerical
constants.
A few remarks are in order now for Theorem 15. First, if $F$ satisfies the
Strong Coherence Property _and_ $F$ is nearly tight, then OST handles sparsity
that is almost linear in $M$: $K=O(M/\log{N})$ from (5). Second, the
$\ell_{2}$ error associated with the OST algorithm is the near-optimal (modulo
the $\log$ factor) error of $\sqrt{\sigma^{2}K\log{N}}$ _plus_ the best
$T$-term approximation error caused by the inability of the OST algorithm to
recover signal entries that are smaller than $O(\mu_{F}\|x\|\sqrt{2\log{N}})$.
Nevertheless, it is easy to convince oneself that such error is still near-
optimal for large classes of sparse signals. Consider, for example, the case
where $\mu_{F}=O(1/\sqrt{M})$, the magnitudes of $K/2$ nonzero entries of $x$
are some $\alpha=\Omega(\sqrt{\sigma^{2}\log{N}})$, while the magnitudes of
the other $K/2$ nonzero entries are not necessarily the same but scale as
$O(\sqrt{\sigma^{2}\log{N}})$. Then we have from Theorem 15 that
$\|x-x_{T}\|=O(\sqrt{\sigma^{2}K\log{N}})$, which leads to near-optimal
$\ell_{2}$ error of $\|x-\hat{x}\|=O(\sqrt{\sigma^{2}K\log{N}})$. To the best
of our knowledge, this is the first result in the sparse signal processing
literature that does not require RIP and still provides near-optimal
reconstruction guarantees for such signals in the presence of noise, while
using either random or deterministic frames, even when $K=O(M/\log{N})$.
## References
* [1] T. Strohmer, R.W. Heath, Grassmannian frames with applications to coding and communication, Appl. Comput. Harmon. Anal. 14 (2003) 257–275.
* [2] R.B. Holmes, V.I. Paulsen, Optimal frames for erasures, Linear Algebra Appl. 377 (2004) 31–51.
* [3] D.G. Mixon, C. Quinn, N. Kiyavash, M. Fickus, Equiangular tight frame fingerprinting codes, to appear in: Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (2011) 4 pages.
* [4] E.J. Candès, Y. Plan, Near-ideal model selection by $\ell_{1}$ minimization, Ann. Statist. 37 (2009) 2145–2177.
* [5] D.L. Donoho, M. Elad, V.N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Trans. Inform. Theory 52 (2006) 6–18.
* [6] J.A. Tropp, Greed is good: Algorithmic results for sparse approximation, IEEE Trans. Inform. Theory 50 (2004) 2231–2242.
* [7] J.A. Tropp, On the conditioning of random subdictionaries, Appl. Comput. Harmon. Anal. 25 (2008) 1–24.
* [8] R. Zahedi, A. Pezeshki, E.K.P. Chong, Robust measurement design for detecting sparse signals: Equiangular uniform tight frames and Grassmannian packings, American Control Conference (2010) 6 pages.
* [9] W.U. Bajwa, R. Calderbank, S. Jafarpour, Why Gabor frames? Two fundamental measures of coherence and their role in model selection, J. Commun. Netw. 12 (2010) 289–307.
* [10] K. Mukkavilli, A. Sabharwal, E. Erkip, B.A. Aazhang, On beam-forming with finite rate feedback in multiple antenna systems, IEEE Trans. Inform. Theory 49 (2003) 2562–2579.
* [11] P. Xia, S. Zhou, G.B. Giannakis, Achieving the Welch bound with difference sets, IEEE Trans. Inform. Theory 51 (2005) 1900–1907.
* [12] J.H. Conway, R.H. Hardin, N.J.A. Sloane, Packing lines, planes, etc.: Packings in Grassmannian spaces, Experiment. Math. 5 (1996) 139–159.
* [13] W.U. Bajwa, R. Calderbank, D.G. Mixon, Two are better than one: Fundamental parameters of frame coherence, arXiv:1103.0435v1.
* [14] R. Baraniuk, M. Davenport, R.A. DeVore, M.B. Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approximation 28 (2008) 253–263.
* [15] E.J. Candès, T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (2005) 4203–4215.
* [16] M.J. Wainwright, Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_{1}$-constrained quadratic programming (lasso), IEEE Trans. Inform. Theory 55 (2009) 2183–2202.
* [17] E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006) 489–509.
* [18] E.J. Candès, T. Tao, Near-optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inform. Theory 52 (2006) 5406–5425.
* [19] M. Rudelson, R. Vershynin, On sparse reconstruction from Fourier and Gaussian measurements, Commun. Pure Appl. Math. 61 (2008) 1025–1045.
* [20] E.R. Berlekamp, The weight enumerators for certain subcodes of the second order binary Reed-Muller codes, Inform. Control 17 (1970) 485–500.
* [21] N.Y. Yu, G. Gong, A new binary sequence family with low correlation and large size, IEEE Trans. Inform. Theory 52 (2006) 1624–1636.
* [22] J.J. Benedetto, J.D. Kolesar, Geometric properties of Grassmannian frames for $\mathbb{R}^{2}$ and $\mathbb{R}^{3}$, EURASIP J. on Appl. Signal Processing (2006) 17 pages.
* [23] S.M. Kay, Fundamentals of Statistical Signal Processing: Detection Theory, Upper Saddle River, Prentice Hall, 1998.
* [24] M.A. Davenport, P.T. Boufounos, M.B. Wakin, R.G. Baraniuk, Signal processing with compressive measurements, IEEE J. Select. Topics Signal Processing 4 (2010) 445–460.
* [25] J. Haupt, R. Nowak, Compressive sampling for signal detection, Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (2007) 1509–1512.
* [26] S.S. Chen, D.L. Donoho, M.A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Scientific Comput. 20 (1998) 33–61.
# A Sharp inequality for holomorphic functions on the polydisc
Marijan Marković University of Montenegro, Faculty of Natural Sciences and
Mathematics, Cetinjski put b.b. 81000, Podgorica, Montenegro
marijanmmarkovic@gmail.com
###### Abstract.
In this paper we prove an isoperimetric inequality for holomorphic functions
in the unit polydisc $\mathbf{U}^{n}$. As a corollary we derive an inclusion
relation between weighted Bergman and Hardy spaces of holomorphic functions in
the polydisc which generalizes the classical Hardy–Littlewood relation
$H^{p}\subseteq A^{2p}$. Also, we extend some results due to Burbea.
###### Key words and phrases:
Weighted Bergman spaces; Hardy spaces; Isoperimetric inequality; Polydisc
## 1\. Introduction and statement of the result
### 1.1. Notations
For an integer $n\geq 1$ we consider the $n$-dimensional complex vector space
$\mathbf{C}^{n}$ with the usual inner product
$\left<z,\zeta\right>=\sum_{j=1}^{n}z_{j}\overline{\zeta}_{j},\quad z,\
\zeta\in\mathbf{C}^{n}$
and norm
$\|z\|=\left<z,z\right>^{\frac{1}{2}},$
where $z=(z_{1},\dots,z_{n})$, $\zeta=(\zeta_{1},\dots,\zeta_{n})$,
$\mathbf{U}$ is the open unit disc in the complex plane $\mathbf{C}$,
$\mathbf{T}$ is its boundary, and $\mathbf{U}^{n}$ and $\mathbf{T}^{n}$ stand
for the unit polydisc and its distinguished boundary, respectively.
Following the classical book of Rudin [31], let us recall some basic facts
from the theory of Hardy spaces $H^{p}(\mathbf{U}^{n})$ on the unit polydisc.
Let $p>0$ be an arbitrary real (in the sequel the letter $p$, with or without
index, will stand for a positive real number, if we do not make restrictions).
By $dm_{n}$ we denote the Haar measure on the distinguished boundary
$\mathbf{T}^{n}$, i.e.,
$dm_{n}(\omega)=\frac{1}{(2\pi)^{n}}d\theta_{1}\dots
d\theta_{n},\quad\omega=(e^{i\theta_{1}},\dots,e^{i\theta_{n}})\in\mathbf{T}^{n}.$
A holomorphic function $f$ in the polydisc $\mathbf{U}^{n}$ belongs to the
Hardy space $H^{p}(\mathbf{U}^{n})$ if it satisfies the growth condition
(1.1) $\|f\|_{H^{p}(\mathbf{U}^{n})}:=\left(\sup_{0\leq
r<1}\int_{\mathbf{T}^{n}}|f(r\omega)|^{p}dm_{n}(\omega)\right)^{\frac{1}{p}}<\infty.$
It turns out that if $f\in H^{p}(\mathbf{U}^{n})$, then there exists
$\lim_{r\to 1}f(r\omega)=f(\omega)\ \text{ a.e. on }\ \mathbf{T}^{n}$
and the boundary function belongs to $L^{p}(\mathbf{T}^{n},m_{n})$, the
Lebesgue space of all $p$-integrable functions on $\mathbf{T}^{n}$ (with
respect to the measure $m_{n}$). Moreover
$\int_{\mathbf{T}^{n}}|f(\omega)|^{p}dm_{n}(\omega)=\sup_{0\leq
r<1}\int_{\mathbf{T}^{n}}|f(r\omega)|^{p}dm_{n}(\omega).$
For $q>-1$ let
$d\mu_{q}(z)=\frac{(q+1)}{\pi}(1-|z|^{2})^{q}dxdy\quad(z=x+iy\in\mathbf{U}),$
stand for a weighted normalized measure on the disc $\mathbf{U}$. We will
consider also the corresponding measure on the polydisc $\mathbf{U}^{n}$,
$d\mu_{\mathbf{q}}(z)=\prod_{k=1}^{n}d\mu_{q_{k}}(z_{k}),\quad
z\in\mathbf{U}^{n},$
where $\mathbf{q}>\mathbf{-1}$ is an $n$-multiindex; the inequality
$\mathbf{q}_{1}>\mathbf{q}_{2}$ between two $n$-multiindices means
$q_{1,k}>q_{2,k},\ k=1,\dots,n$; we denote the $n$-multiindex $(m,\dots,m)$ by
$\mathbf{m}$. For a real number $m,\ m>1$, we have
$d\mu_{\mathbf{m-2}}(z)=\frac{(m-1)^{n}}{\pi^{n}}\prod_{k=1}^{n}(1-|z_{k}|^{2})^{m-2}dx_{k}dy_{k}\quad(z_{k}=x_{k}+iy_{k}).$
The weighted Bergman spaces $A^{p}_{\mathbf{q}}(\mathbf{U}^{n}),\ p>0,\
\mathbf{q}>\mathbf{-1}$ contain the holomorphic functions $f$ in the polydisc
$\mathbf{U}^{n}$ such that
$\|f\|_{A^{p}_{\mathbf{q}}(\mathbf{U}^{n})}:=\left(\int_{\mathbf{U}^{n}}|f(z)|^{p}d\mu_{\mathbf{q}}(z)\right)^{\frac{1}{p}}<\infty.$
Since $d\mu_{0}$ is the area measure on the complex plane normalized on the
unit disc, $A^{p}(\mathbf{U}^{n}):=A^{p}_{\mathbf{0}}(\mathbf{U}^{n})$ are the
ordinary (unweighted) Bergman spaces on $\mathbf{U}^{n}$.
It is well known that $\|\cdot\|_{H^{p}(\mathbf{U}^{n})}$ and
$\|\cdot\|_{A^{p}_{\mathbf{q}}(\mathbf{U}^{n})}$ are norms on
$H^{p}(\mathbf{U}^{n})$ and $A^{p}_{\mathbf{q}}(\mathbf{U}^{n})$,
respectively, if $p\geq 1$, and quasi-norms for $0<p<1$; for simplicity, we
sometimes write $\|\cdot\|_{p}$ and $\|\cdot\|_{p,\mathbf{q}}$. As usual,
$H^{p}(\mathbf{U})$ and $A^{p}_{q}(\mathbf{U})$ are denoted by $H^{p}$ and
$A^{p}_{q}$. Obviously, $H^{p}(\mathbf{U}^{n})\subseteq
A^{p}_{\mathbf{q}}(\mathbf{U}^{n})$.
Let us point out that the Hardy space $H^{2}(\mathbf{U}^{n})$ is a Hilbert
space with the reproducing kernel
(1.2)
$K_{n}(z,\zeta)=\prod_{j=1}^{n}\frac{1}{1-z_{j}\overline{\zeta}_{j}},\quad z,\
\zeta\in\mathbf{U}^{n}.$
For the theory of reproducing kernels we refer to [1].
### 1.2. Short background
The solution to the isoperimetric problem is usually expressed in the form of
an inequality that relates the length $L$ of a closed curve and the area $A$
of the planar region that it encloses. The isoperimetric inequality states
that
$4\pi A\leq L^{2},$
and that equality holds if and only if the curve is a circle. Dozens of proofs
of the isoperimetric inequality have been proposed. More than one approach can
be found in the expository papers by Osserman [26], Gamelin and Khavinson [10]
and Bläsjö [6] along with a brief history of the problem. For a survey of some
known generalizations to higher dimensions and the list of some open problems,
we refer to the paper by Bénéteau and
Khavinson [3].
In [7], Carleman gave a beautiful proof of the isoperimetric inequality in the
plane, reducing it to an inequality for holomorphic functions in the unit
disc. Following Carleman’s result, Aronszajn in [1] showed that if $f_{1}$ and
$f_{2}$ are holomorphic functions in a simply connected domain $\Omega$ with
analytic boundary $\partial\Omega$, such that $f_{1},\ f_{2}\in
H^{2}(\Omega)$, then
(1.3)
$\int_{\Omega}|f_{1}|^{2}|f_{2}|^{2}dxdy\leq\frac{1}{4\pi}\int_{\partial\Omega}|f_{1}|^{2}|dz|\int_{\partial\Omega}|f_{2}|^{2}|dz|\quad(z=x+iy).$
In [14], Jacobs considered domains that are not necessarily simply connected
(see also the work of Saitoh [32]).
Mateljević and Pavlović in [23] generalized (1.3) in the following sense: if
$f_{j}\in H^{p_{j}}(\Omega),\ j=1,\ 2$, where $\Omega$ is a simply connected
domain with analytic boundary $\partial\Omega$, then
(1.4)
$\frac{1}{\pi}\int_{\Omega}|f_{1}|^{p_{1}}|f_{2}|^{p_{2}}dxdy\leq\frac{1}{4\pi^{2}}\int_{\partial\Omega}|f_{1}|^{p_{1}}|dz|\int_{\partial\Omega}|f_{2}|^{p_{2}}|dz|,$
with equality if and only if either $f_{1}f_{2}\equiv 0$ or if for some
$C_{j}\neq 0$,
$f_{j}=C_{j}(\psi^{\prime})^{\frac{1}{p_{j}}},\quad j=1,\ 2,$
where $\psi$ is a conformal mapping of the domain $\Omega$ onto the disc
$\mathbf{U}$.
By using a similar approach as Carleman, Strebel in his book [33, Theorem
19.9, pp. 96–98] (see also the papers [21] and [34]) proved that if $f\in
H^{p}$ then
(1.5)
$\int_{\mathbf{U}}|f(z)|^{2p}dxdy\leq\frac{1}{4\pi}\left(\int_{0}^{2\pi}|f(e^{i\theta})|^{p}d\theta\right)^{2}\quad(\|f\|_{A^{2p}}\leq\|f\|_{H^{p}}),$
with equality if and only if for some constants $\zeta,\ |\zeta|<1$ and
$\lambda$,
$f(z)=\frac{\lambda}{(1-z\overline{\zeta})^{\frac{2}{p}}}.$
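Inequality (1.5) and its equality case are easy to probe numerically. The Python sketch below is our own illustration (the quadrature grid and the choices $p=1$, $\zeta=0.4$, $\lambda=1$ are arbitrary): it approximates both sides of (1.5) for the extremal function above, where the two values should nearly coincide, and for a non-extremal holomorphic perturbation of it, where the left side should be strictly smaller.

```python
import numpy as np

p, zeta = 1.0, 0.4 + 0.0j

def f_ext(z):                        # the extremal function with lambda = 1
    return 1.0 / (1.0 - z * np.conj(zeta)) ** (2.0 / p)

def f_pert(z):                       # a non-extremal holomorphic perturbation
    return f_ext(z) * (1.0 + 0.3 * z)

def both_sides(f, nr=800, nt=800):
    r = (np.arange(nr) + 0.5) / nr                   # midpoint rule in the radius
    t = (np.arange(nt) + 0.5) * 2 * np.pi / nt       # and in the angle
    R, Theta = np.meshgrid(r, t, indexing="ij")
    Z = R * np.exp(1j * Theta)
    # left side of (1.5): area integral of |f|^{2p}, with dx dy = r dr dtheta
    lhs = np.sum(np.abs(f(Z)) ** (2 * p) * R) * (1.0 / nr) * (2 * np.pi / nt)
    # right side of (1.5): (1/(4 pi)) * (boundary integral of |f|^p dtheta)^2
    boundary = np.sum(np.abs(f(np.exp(1j * t))) ** p) * (2 * np.pi / nt)
    return lhs, boundary ** 2 / (4 * np.pi)

print(both_sides(f_ext))     # the two values should agree up to quadrature error
print(both_sides(f_pert))    # here the inequality should be visibly strict
```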
Further, Burbea in [5] generalized (1.5) to
(1.6)
$\frac{m-1}{\pi}\int_{\mathbf{U}}|f(z)|^{mp}(1-|z|^{2})^{m-2}dxdy\leq\left(\frac{1}{2\pi}\int_{0}^{2\pi}|f(e^{i\theta})|^{p}d\theta\right)^{m},$
where $m\geq 2$ is an integer. The equality is attained in the same case as in
the relation (1.5). The inequality (1.6) can be rewritten as
$\|f\|_{A^{mp}_{m-1}}\leq\|f\|_{H^{p}},\quad f\in H^{p},$
which is a generalization of the inclusion $H^{p}\subseteq A^{2p}$, proved by
Hardy and Littlewood in [12].
On the other hand, Pavlović and Dostanić showed in [28] that if
$\mathbf{B}_{n}$ is the unit ball in $\mathbf{C}^{n},\ \partial\mathbf{B}_{n}$
its boundary, and $\sigma_{n}$ is the normalized surface area measure on the
sphere $\partial\mathbf{B}_{n}$, then
$\int_{\partial\mathbf{B}_{n}}|f|^{2n}d\sigma_{n}\leq\left(\int_{\mathbf{T}^{n}}|f|^{2}dm_{n}\right)^{n}$
holds for $f\in H^{2}(\mathbf{U}^{n})$. They pointed out that this inequality
coincides with (1.6) for $m=n,\ p=2$ and $f(z)=f(z_{1},\dots,z_{n})=f(z_{1})$
that is, if $f$ actually depends on only one complex variable.
For an isoperimetric inequality for harmonic functions we refer to [16].
### 1.3. Statement of the result
In the sequel, $m$ stands for an integer $\geq 2$. The starting point of this
paper is the work of Burbea [5] who obtained the following isoperimetric
inequalities concerning the unit disc and the unit polydisc.
###### Proposition 1.1.
Let $f_{j}\in H^{p_{j}},\ j=1,\dots,m$. Then
(1.7)
$\int_{\mathbf{U}}\prod_{j=1}^{m}|f_{j}|^{p_{j}}d\mu_{m-2}\leq\prod_{j=1}^{m}\int_{\mathbf{T}}|f_{j}|^{p_{j}}dm_{1}.$
Equality holds if and only if either some of the functions are identically
equal to zero or if for some point $\zeta\in\mathbf{U}$ and constants
$C_{j}\neq 0$,
$f_{j}=C_{j}K_{1}^{\frac{2}{p_{j}}}(\cdot,\zeta),\quad j=1,\dots,m,$
where $K_{1}$ is the reproducing kernel (1.2) for the Hardy space $H^{2}$.
###### Proposition 1.2.
Let $f_{j}\in H^{2}(\mathbf{U}^{n}),\ j=1,\dots,m$. Then
(1.8)
$\int_{\mathbf{U}^{n}}\prod_{j=1}^{m}|f_{j}|^{2}d\mu_{\mathbf{m-2}}\leq\prod_{j=1}^{m}\int_{\mathbf{T}^{n}}|f_{j}|^{2}dm_{n}.$
Equality holds if and only if either some of the functions are identically
equal to zero or if for some point $\zeta\in\mathbf{U}^{n}$ and constants
$C_{j}\neq 0$,
$f_{j}=C_{j}K_{n}(\cdot,\zeta),\quad j=1,\dots,m,$
where $K_{n}$ is the reproducing kernel (1.2).
Proposition 1.2 is a particular case of Theorem 4.1 in the Burbea paper [5, p.
257]. That theorem was derived from more general considerations involving the
theory of reproducing kernels (see also [4]). The inequality in that theorem
is between Bergman type norms, while Proposition 1.2 is the case with the
Hardy norm on the right side (in that case, we have an isoperimetric
inequality). In the next main theorem we extend (1.7) for holomorphic
functions which belong to general Hardy spaces on the polydisc
$\mathbf{U}^{n}$.
###### Theorem 1.3.
Let $f_{j}\in H^{p_{j}}(\mathbf{U}^{n}),\ j=1,\dots,m$. Then
(1.9)
$\int_{\mathbf{U}^{n}}\prod_{j=1}^{m}|f_{j}|^{p_{j}}d\mu_{\mathbf{m-2}}\leq\prod_{j=1}^{m}\int_{\mathbf{T}^{n}}|f_{j}|^{p_{j}}dm_{n}.$
Equality occurs if and only if either some of the functions are identically
equal to zero or if for some point $\zeta\in\mathbf{U}^{n}$ and constants
$C_{j}\neq 0$,
$f_{j}=C_{j}K_{n}^{\frac{2}{p_{j}}}(\cdot,\zeta),\quad j=1,\dots,m.$
Notice that in higher complex dimensions there is no analog of the Blaschke
product so we cannot deduce Theorem 1.3 directly from Proposition 1.2 as we
can for $n=1$ (this is a usual approach in the theory of $H^{p}$ spaces; see
also [5]). We will prove the main theorem in the case $n=2$ since for $n>2$
our method needs only a technical adaptation. As immediate consequences of
Theorem 1.3, we have the next two corollaries.
###### Corollary 1.4.
Let $p\geq 1$. The (polylinear) operator
$\beta:\bigotimes_{j=1}^{m}H^{p}(\mathbf{U}^{n})\to
A^{p}_{\mathbf{m-2}}(\mathbf{U}^{n})$, defined by
$\beta(f_{1},\dots,f_{m})=\prod_{j=1}^{m}f_{j}$ has norm one.
###### Corollary 1.5.
Let $f\in H^{p}(\mathbf{U}^{n})$. Then
$\int_{\mathbf{U}^{n}}|f|^{mp}d\mu_{\mathbf{m-2}}\leq\left(\int_{\mathbf{T}^{n}}|f|^{p}dm_{n}\right)^{m}.$
Equality occurs if and only if for some point $\zeta\in\mathbf{U}^{n}$ and
constant $\lambda$,
$f=\lambda K_{n}^{\frac{2}{p}}(\cdot,\zeta).$
In other words we have the sharp inequality
$\|f\|_{A^{mp}_{\mathbf{m-2}}(\mathbf{U}^{n})}\leq\|f\|_{H^{p}(\mathbf{U}^{n})},\quad
f\in H^{p}(\mathbf{U}^{n})$
and the inclusion
$H^{p}(\mathbf{U}^{n})\subseteq A^{mp}_{\mathbf{m-2}}(\mathbf{U}^{n}).$
Thus, when $p\geq 1$, the inclusion map $I_{p,m}:H^{p}(\mathbf{U}^{n})\to
A^{mp}_{\mathbf{m-2}}(\mathbf{U}^{n}),\ I_{p,m}(f):=f$, has norm one.
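For holomorphic polynomials on the bidisc, both sides of the inequality in Corollary 1.5 can be computed exactly from the Taylor coefficients, since the monomials $z^{j}w^{k}$ are orthogonal on $\mathbf{T}^{2}$ and in $A^{2}_{\mathbf{0}}(\mathbf{U}^{2})$, with $\int_{\mathbf{U}}|z^{j}|^{2}d\mu_{0}=\frac{1}{j+1}$. The Python sketch below is our own check of the case $n=2$, $m=2$, $p=2$ for a polynomial with random coefficients; the degree and the coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
deg = 4
a = rng.standard_normal((deg, deg)) + 1j * rng.standard_normal((deg, deg))  # a_{jk} of f(z, w)

# ||f||_{H^2(U^2)}^2 = sum_{j,k} |a_{jk}|^2  (orthonormality of z^j w^k on T^2)
hardy_sq = np.sum(np.abs(a) ** 2)

# Coefficients c_{jk} of f^2, via the 2-D convolution of a with itself
c = np.zeros((2 * deg - 1, 2 * deg - 1), dtype=complex)
for j in range(deg):
    for k in range(deg):
        c[j:j + deg, k:k + deg] += a[j, k] * a

# int_{U^2} |f|^4 dmu_{(0,0)} = sum_{j,k} |c_{jk}|^2 / ((j + 1)(k + 1))
jj = np.arange(c.shape[0])[:, None]
kk = np.arange(c.shape[1])[None, :]
bergman = np.sum(np.abs(c) ** 2 / ((jj + 1) * (kk + 1)))

print(bergman, "<=", hardy_sq ** 2)   # Corollary 1.5 with n = 2, m = 2, p = 2
```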
## 2\. Proof of the main theorem
Following Beckenbach and Radó [2], we say that a non-
negative function $u$ is logarithmically subharmonic in a plane domain
$\Omega$ if $u\equiv 0$ or if $\log u$ is subharmonic in $\Omega$.
Our first step is to extend the Burbea inequality (1.7) to functions which
belong to the spaces $h^{p}_{PL}$ defined in the following sense: $u\in
h^{p}_{PL}$ if it is logarithmically subharmonic and satisfies the growth
property
(2.1) $\sup_{0\leq
r<1}\int_{0}^{2\pi}|u(re^{i\theta})|^{p}\frac{d\theta}{2\pi}<\infty.$
It is known that a function from these spaces has a radial limit at
$e^{i\theta}\in\mathbf{T}$ for almost all $\theta\in[0,2\pi]$. Let us denote
this limit (when it exists) by $u(e^{i\theta})$. Then $\lim_{r\to
1}\int_{0}^{2\pi}|u(re^{i\theta})|^{p}\frac{d\theta}{2\pi}=\int_{0}^{2\pi}|u(e^{i\theta})|^{p}\frac{d\theta}{2\pi}$
and $\lim_{r\to
1}\int_{0}^{2\pi}|u(re^{i\theta})-u(e^{i\theta})|^{p}\frac{d\theta}{2\pi}=0$.
For an exposition of the topic of spaces of logarithmically subharmonic
functions which satisfy (2.1), we refer to the book of Privalov [29].
###### Lemma 2.1.
Let $u_{j}\in h^{p_{j}}_{PL},\ j=1,\dots,m$, be logarithmically subharmonic
functions in the unit disc. Then
(2.2)
$\int_{\mathbf{U}}\prod_{j=1}^{m}u_{j}^{p_{j}}d\mu_{m-2}\leq\prod_{j=1}^{m}\int_{\mathbf{T}}u_{j}^{p_{j}}dm_{1}.$
For continuous functions, equality holds if and only if either some of the
functions are identically equal to zero or if for some point
$\zeta\in\mathbf{U}$ and constants $\lambda_{j}>0$,
$u_{j}=\lambda_{j}\left|K_{1}(\cdot,\zeta)\right|^{\frac{2}{p_{j}}},\quad
j=1,\dots,m.$
###### Proof.
Suppose that none of the functions $u_{j},\ j=1,\dots,m$, is identically
equal to zero. Then $\log u_{j}(e^{i\theta})$ is integrable on the segment
$[0,2\pi]$ and there exist $f_{j}\in H^{p_{j}}$ such that
$u_{j}(z)\leq|f_{j}(z)|,\ z\in\mathbf{U}$ and
$u_{j}(e^{i\theta})=|f_{j}(e^{i\theta})|,\ \theta\in[0,2\pi]$. Namely, for
$f_{j}$ we can choose
$f_{j}(z)=\exp\left(\int_{0}^{2\pi}\frac{e^{i\theta}+z}{e^{i\theta}-z}\log
u_{j}(e^{i\theta})\frac{d\theta}{2\pi}\right),\quad j=1,\dots,m.$
Since $\log u_{j}$ is subharmonic we have
$\log u_{j}(z)\leq\int_{0}^{2\pi}P(z,e^{i\theta})\log
u_{j}(e^{i\theta})\frac{d\theta}{2\pi},$
where $P(z,e^{i\theta})=\frac{1-|z|^{2}}{|z-e^{i\theta}|^{2}}$ is the Poisson
kernel for the disc $\mathbf{U}$. From this it follows that
$u_{j}(z)\leq|f_{j}(z)|,\ z\in\mathbf{U}$. Moreover, using the Jensen
inequality (for the concave function $\log$) we obtain
$\begin{split}\log|f_{j}(z)|^{p_{j}}&=\int_{0}^{2\pi}P(z,e^{i\theta})\log|u_{j}(e^{i\theta})|^{p_{j}}\frac{d\theta}{2\pi}\\
&\leq\log\int_{0}^{2\pi}P(z,e^{i\theta})u^{p_{j}}_{j}(e^{i\theta})\frac{d\theta}{2\pi},\end{split}$
implying $f_{j}\in H^{p_{j}}$.
Now, in (1.7) we can take $f_{j},\ j=1,\dots,m$, and use the previous
relations, $u_{j}(z)\leq|f_{j}(z)|,\ z\in\mathbf{U}$ and
$u_{j}(e^{i\theta})=|f_{j}(e^{i\theta})|,\ \theta\in[0,2\pi]$, to derive the
inequality (2.2).
If all of the functions $u_{j},\ j=1,\dots,m$ are continuous (not equal to
zero identically) and if the equality in (2.2) occurs, then
$u_{j}(z)=|f_{j}(z)|,\ z\in\mathbf{U},\ j=1,\dots,m$. According to the
equality case in Proposition 1.1, we must have
$f_{j}=C_{j}K_{1}^{\frac{2}{p_{j}}}(\cdot,\zeta),\ j=1,\dots,m$, for some
point $\zeta\in\mathbf{U}$ and constants $C_{j}\neq 0$. Thus, for continuous
functions (not equal to zero identically) equality holds if and only if
$u_{j}=\lambda_{j}|K_{1}(\cdot,\zeta)|^{\frac{2}{p_{j}}},\ j=1,\dots,m\
(\lambda_{j}>0)$. ∎
We need the next two propositions concerning (logarithmically) subharmonic
functions. For the proofs of these propositions see the first paragraph of the
book of Ronkin [30].
###### Proposition 2.2.
Let $f$ be an upper semi-continuous function on a product $\Omega\times\Delta$
of domains $\Omega\subseteq\mathbf{R}^{n}$ and
$\Delta\subseteq\mathbf{R}^{k}$. Let $\mu$ be a positive measure on $\Delta$
and $E\subseteq\Delta$ be such that $\mu(E)<\infty$. Then
$\varphi(x):=\int_{E}f(x,y)d\mu(y),\quad x\in\Omega$
is (logarithmically) subharmonic if $f(\cdot,y)$ is (logarithmically)
subharmonic for all (almost all with respect to the measure $\mu$)
$y\in E$.
###### Proposition 2.3.
Let $A$ be an index set and $\{u_{\alpha},\ \alpha\in A\}$ a family of
(logarithmically) subharmonic functions in a domain
$\Omega\subseteq\mathbf{R}^{n}$. Then
$u(x):=\sup_{\alpha\in A}u_{\alpha}(x),\quad x\in\Omega$
is (logarithmically) subharmonic if it is upper semi-continuous in the domain
$\Omega$.
Also, we need the next theorem due to Vitali (see [11]).
###### Theorem 2.4 (Vitali).
Let $X$ be a measurable space with finite measure $\mu$, and let
$h_{n}:X\to\mathbf{C}$ be a sequence of functions that are uniformly
integrable, i.e., such that for every $\epsilon>0$ there exists $\delta>0$,
independent of $n$, satisfying
$\mu(E)<\delta\Rightarrow\int_{E}|h_{n}|d\mu<\epsilon.$
If $h_{n}(x)\to h(x)$ a.e., then
$\lim_{n\to\infty}\int_{X}|h_{n}|d\mu=\int_{X}|h|d\mu.$
In particular, if $\sup_{n}\int_{X}|h_{n}|d\mu<\infty$, then the previous
condition holds.
###### Lemma 2.5.
Let $f\in H^{p}(\mathbf{U}^{2})$. Then
$\phi(z):=\int_{0}^{2\pi}|f(z,e^{i\eta})|^{p}\frac{d\eta}{2\pi}$
is continuous. Moreover, $\phi$ is logarithmically subharmonic and belongs to
the space $h^{1}_{PL}$.
###### Proof.
For $0\leq r<1$, let us denote
$\phi_{r}(z):=\int_{0}^{2\pi}|f(z,re^{i\eta})|^{p}\frac{d\eta}{2\pi},\quad
z\in\mathbf{U}.$
According to Proposition 2.2, $\phi_{r}$ is logarithmically subharmonic in the
unit disc, since $z\to|f(z,re^{i\eta})|^{p}$ are logarithmically subharmonic
for $\eta\in[0,2\pi]$. Since for all $z\in\mathbf{U}$ we have
$\phi_{r}(z)\to\phi(z)$, monotone as $r\to 1$, it follows that
$\phi(z)=\sup_{0\leq r<1}\phi_{r}(z)$. Thus, we have only to prove (by
Proposition 2.3) that $\phi$ is continuous.
First of all we have
(2.3) $\phi(z)=\|f(z,\cdot)\|_{p}\leq
C_{p}(1-|z|)^{-\frac{1}{p}}\|f\|_{p},\quad z\in\mathbf{U},$
for some positive constant $C_{p}$. Namely, according to the theorem of Hardy
and Littlewood (see [12], Theorem 27 or [8], Theorem 5.9) applied to the one
variable function $f(\cdot,w)$ with $w$ fixed, we obtain
$|f(z,w)|\leq
C_{p}(1-|z|)^{-\frac{1}{p}}\|f(\cdot,w)\|_{p},\quad(z,w)\in\mathbf{U}^{2},$
for some $C_{p}>0$. Using the above inequality and the monotone convergence
theorem, we derive
$\begin{split}\|f(z,\cdot)\|_{p}^{p}&=\lim_{s\to 1}\int_{0}^{2\pi}|f(z,se^{i\eta})|^{p}\frac{d\eta}{2\pi}\\
&\leq C_{p}^{p}(1-|z|)^{-1}\lim_{s\to 1}\int_{0}^{2\pi}\|f(\cdot,se^{i\eta})\|_{p}^{p}\frac{d\eta}{2\pi}\\
&=C_{p}^{p}(1-|z|)^{-1}\lim_{s\to 1}\int_{0}^{2\pi}\frac{d\theta}{2\pi}\lim_{r\to 1}\int_{0}^{2\pi}|f(re^{i\theta},se^{i\eta})|^{p}\frac{d\eta}{2\pi}\\
&=C_{p}^{p}(1-|z|)^{-1}\lim_{(r,s)\to(1,1)}\int_{0}^{2\pi}\int_{0}^{2\pi}|f(re^{i\theta},se^{i\eta})|^{p}\frac{d\theta}{2\pi}\frac{d\eta}{2\pi}\\
&=C_{p}^{p}(1-|z|)^{-1}\|f\|_{p}^{p},\end{split}$
and (2.3) follows.
The inequality (2.3) implies that the family of integrals
$\left\{\phi(z)=\int_{0}^{2\pi}|f(z,e^{i\eta})|^{p}\frac{d\eta}{2\pi}:z\in\mathbf{U}\right\}$
is uniformly bounded on compact subsets of the unit disc. Since
$z\to|f(z,e^{i\eta})|^{p}$ is continuous for almost all $\eta\in[0,2\pi]$, as
a power of the modulus of a holomorphic function (according to [36], Theorem XVII 5.16), it
follows that $\phi(z),\ z\in\mathbf{U}$, is continuous. Indeed, let
$z_{0}\in\mathbf{U}$ and let $(z_{k})_{k\geq 1}$ be a sequence in the unit
disc such that $z_{k}\to z_{0},\ k\to\infty$. According to the Vitali theorem
we have
$\lim_{k\to\infty}\phi(z_{k})=\lim_{k\to\infty}\int_{0}^{2\pi}|f(z_{k},e^{i\eta})|^{p}\frac{d\eta}{2\pi}=\int_{0}^{2\pi}|f(z_{0},e^{i\eta})|^{p}\frac{d\eta}{2\pi}=\phi(z_{0}).$
∎
We now prove the main Theorem 1.3.
###### Proof.
Let $f_{j}\in H^{p_{j}}(\mathbf{U}^{2}),\ j=1,\dots,m$ be holomorphic
functions in the polydisc $\mathbf{U}^{2}$. Using the Fubini theorem,
Proposition 1.1, and Lemma 2.1, we obtain
$\begin{split}\int_{\mathbf{U}^{2}}\prod_{j=1}^{m}|f_{j}|^{p_{j}}d\mu_{(m-2,m-2)}&=\int_{\mathbf{U}}d\mu_{m-2}(z)\int_{\mathbf{U}}\prod_{j=1}^{m}|f_{j}(z,w)|^{p_{j}}d\mu_{m-2}(w)\\
&\leq\int_{\mathbf{U}}d\mu_{m-2}(z)\prod_{j=1}^{m}\int_{0}^{2\pi}|f_{j}(z,e^{i\eta})|^{p_{j}}\frac{d\eta}{2\pi}\\
&\leq\prod_{j=1}^{m}\int_{0}^{2\pi}\frac{d\theta}{2\pi}\int_{0}^{2\pi}|f_{j}(e^{i\theta},e^{i\eta})|^{p_{j}}\frac{d\eta}{2\pi}\\
&=\prod_{j=1}^{m}\int_{\mathbf{T}^{2}}|f_{j}|^{p_{j}}dm_{2},\end{split}$
since the functions
$\phi_{j}(z):=\int_{0}^{2\pi}|f_{j}(z,e^{i\eta})|^{p_{j}}\frac{d\eta}{2\pi}$
are logarithmically subharmonic in the disc $\mathbf{U}$ and since
$\phi_{j}\in h^{1}_{PL},\ j=1,\dots,m$, by Lemma 2.5.
We now determine when the equalities hold in the above inequalities.
Obviously, if some of the functions $f_{j},\ j=1,\dots,m$ are identically equal to
zero, we have equalities everywhere. Suppose this is not the case. We will
first prove that $f_{j},\ j=1,\dots,m$, do not vanish in the polydisc
$\mathbf{U}^{2}$.
Since for $j=1,\dots,m$ we have $\phi_{j}\not\equiv 0$, equality holds
in the second inequality if and only if for some point
$\zeta^{\prime\prime}\in\mathbf{U}$ and $\lambda_{j}>0$ we have
$\phi_{j}=\lambda_{j}|K_{1}(\cdot,\zeta^{\prime\prime})|^{2},\ j=1,\dots,m$.
Thus, $\phi_{j}$ is free of zeroes in the unit disc. Let
$\psi(z):=\int_{\mathbf{U}}\prod_{j=1}^{m}|f_{j}(z,w)|^{p_{j}}d\mu_{m-2}(w),\quad
z\in\mathbf{U}.$
The function $\psi$ is continuous; we can prove the continuity of $\psi$ in a
similar fashion as for $\phi_{j}$, observing that $\psi(z),\ z\in\mathbf{U}$
is uniformly bounded on compact subsets of the unit disc, which follows from
the inequality $\psi(z)\leq\prod_{j=1}^{m}\phi_{j}(z)$ since the $\phi_{j},\
j=1,\dots,m$ satisfy this property. Because of continuity, the equality in the
first inequality, that is,
$\int_{\mathbf{U}}\psi(z)d\mu_{m-2}(z)\leq\int_{\mathbf{U}}\prod_{j=1}^{m}\phi_{j}(z)d\mu_{m-2}(z),$
holds (by Proposition 1.1) only if for all $z\in\mathbf{U}$ and some
$\zeta^{\prime}(z)\in\mathbf{U}$ and $C_{j}(z)\neq 0$,
$f_{j}(z,\cdot)=C_{j}(z)K_{1}^{\frac{2}{p_{j}}}(\cdot,\zeta^{\prime}(z)),\quad
j=1,\dots,m.$
Since $\phi_{j}(z)\neq 0,\ z\in\mathbf{U}$, it is not possible that
$f_{j}(z,\cdot)\equiv 0$ for some $j$ and $z$.
Thus, if equality holds in (1.9), then $f_{j}$ does not vanish,
$f_{j}(z,w)\neq 0,\ (z,w)\in\mathbf{U}^{2}$, and we can obtain some branches
$f_{j}^{\frac{p_{j}}{2}}$. Applying Proposition 1.2 for
$f_{j}^{\frac{p_{j}}{2}},\ j=1,\dots,m$, we conclude that there must hold
$f_{j}^{\frac{p_{j}}{2}}=C^{\prime}_{j}K_{2}(\cdot,\zeta),\quad j=1,\dots,m$
for some point $\zeta\in\mathbf{U}^{2}$ and constants $C^{\prime}_{j}\neq 0$.
The equality statement of Theorem 1.3 follows. ∎
###### Remark 2.6.
The generalized polydisc is a product
$\Omega^{n}=\prod_{k=1}^{n}\Omega_{k}\subset\mathbf{C}^{n}$, where
$\Omega_{k},\ k=1,\dots,n$ are simply connected domains in the complex plane
with rectifiable boundaries. Let
$\partial\Omega^{n}:=\prod_{k=1}^{n}\partial\Omega_{k}$ be its distinguished
boundary and let $\phi_{k}:\Omega_{k}\to\mathbf{U},\ k=1,\dots,n$ be conformal
mappings. Then
$\Phi(z):=(\phi_{1}(z_{1}),\dots,\phi_{n}(z_{n})),\quad z=(z_{1},\dots,z_{n})$
is a bi-holomorphic mapping of $\Omega^{n}$ onto $\mathbf{U}^{n}$.
There are two standard generalizations of Hardy spaces on a hyperbolic simply
connected plane domain $\Omega$. One is immediate, by using harmonic
majorants, denoted by $H^{p}(\Omega)$. The second is due to Smirnov, usually
denoted by $E^{p}(\Omega)$. The definitions can be found in the tenth chapter
of the book of Duren [8]. These generalizations coincide if and only if the
conformal mapping of $\Omega$ onto the unit disc is a bi-Lipschitz mapping (by
[8, Theorem 10.2]); for example this occurs if the boundary is $C^{1}$ with
Dini-continuous normal (Warschawski’s theorem, see [35]). The above can be
adapted for generalized polydiscs (see the paper of Kalaj [15]). In
particular, $H^{p}(\Omega^{n})=E^{p}(\Omega^{n})$ and
$\|\cdot\|_{H^{p}}=\|\cdot\|_{E^{p}}$, if the distinguished boundary
$\partial\Omega^{n}$ is sufficiently smooth, which means $\partial\Omega_{k},\
k=1,\dots,n$ are sufficiently smooth. Thus, in the case of sufficiently smooth
boundary, we may write
$\|f\|_{H^{p}(\Omega^{n})}=\left(\frac{1}{(2\pi)^{n}}\int_{\partial\Omega^{n}}|f(z)|^{p}|dz_{1}|\dots|dz_{n}|\right)^{\frac{1}{p}},$
where the integration is carried over the non-tangential (distinguished)
boundary values of $f\in H^{p}(\Omega^{n})$.
By Bremerman’s theorem (see [9, Theorem 4.8, pp. 91–93]), $E^{2}(\Omega^{n})$
is a Hilbert space with the reproducing kernel given by
(2.4)
$K_{\Omega^{n}}(z,\zeta):=K_{n}(\Phi(z),\Phi(\zeta))\left(\prod_{k=1}^{n}\phi_{k}^{\prime}(z_{k})\overline{\phi_{k}^{\prime}(\zeta_{k})}\right)^{\frac{1}{2}},\quad
z,\ \zeta\in\Omega^{n}$
where $K_{n}$ is the reproducing kernel for $H^{2}(\mathbf{U}^{n})$;
$K_{\Omega^{n}}$ does not depend on the particular $\Phi$.
For the next theorem we need the following assertion. The sum
$\varphi_{1}+\varphi_{2}$ is a logarithmically subharmonic function in
$\Omega$ provided $\varphi_{1}$ and $\varphi_{2}$ are logarithmically
subharmonic in $\Omega$ (see e.g. [13, Corollary 1.6.8], or just apply
Proposition 2.2 for a discrete measure $\mu$). By applying this assertion to
the logarithmically subharmonic functions $\varphi_{k}(z)=|f_{k}(z)|^{2},\
z\in\Omega,\ k=1,\dots,l$, where $f=(f_{1},\dots,f_{l})$ is a
$\mathbb{C}^{l}$-valued holomorphic function, and the principle of
mathematical induction, we obtain that the function $\varphi$ defined by
$\varphi(z):=\|f(z)\|=\left(\sum_{k=1}^{l}|f_{k}(z)|^{2}\right)^{\frac{1}{2}},\quad
z\in\Omega$
is logarithmically subharmonic in $\Omega$ (obviously, the positive exponent
of a logarithmically subharmonic function is also logarithmically
subharmonic).
Theorem 1.3 in combination with the same approach as in [5] and [15] leads to
the following sharp inequality for vector-valued holomorphic functions which
generalizes Theorem 3.5 in [5, p. 256]; by vector-valued we mean
$\mathbb{C}^{l}$-valued for some integer $l$. We allow vector-valued
holomorphic functions to belong to the spaces $H^{p}(\Omega^{n})$ if they
satisfy the growth condition (1.1) with $\|\cdot\|$ instead of $|\cdot|$.
Let $V_{n}$ be the volume measure in the space $\mathbf{C}^{n}$ and
$\lambda_{\Omega^{n}}(z)=K_{n}(\Phi(z),\Phi(z))\prod_{k=1}^{n}|\phi^{\prime}_{k}(z_{k})|,\quad
z\in\Omega^{n}$
be the Poincaré metric on the generalized polydisc $\Omega^{n}$ (the right
side does not depend on the mapping $\Phi$).
###### Theorem 2.7.
Let $f_{j}\in H^{p_{j}}(\Omega^{n}),\ j=1,\dots,m$ be holomorphic vector-
valued functions on a generalized polydisc $\Omega^{n}$ with sufficiently
smooth boundary. The next isoperimetric inequality holds:
$\frac{(m-1)^{n}}{\pi^{n}}\int_{\Omega^{n}}\prod_{j=1}^{m}\|f_{j}(z)\|^{p_{j}}\lambda^{2-m}_{\Omega^{n}}(z)dV_{n}(z)\leq\prod_{j=1}^{m}\|f_{j}\|^{p_{j}}_{H^{p_{j}}(\Omega^{n})}.$
For complex-valued functions, the equality in the above inequality occurs if
and only if either some of the $f_{j},\ j=1,\dots,m$ are identically equal to
zero or if for some point $\zeta\in\Omega^{n}$ and constants $C_{j}\neq 0$, or
$C_{j}^{\prime}\neq 0$, the functions have the following form
$f_{j}=C_{j}K^{\frac{2}{p_{j}}}_{\Omega^{n}}(\cdot,\zeta)=C^{\prime}_{j}\left(\prod_{k=1}^{n}\psi^{\prime}_{k}\right)^{\frac{1}{p_{j}}},\quad
j=1,\dots,m,$
where $K_{\Omega^{n}}$ is the reproducing kernel for the domain $\Omega^{n}$
and $\psi_{k}:\Omega_{k}\to\mathbf{U},\ k=1,\dots,n$ are conformal mappings.
In particular, for $n=1$ and $m=2$ and in the case of complex-valued
functions, the above inequality reduces to the result of Mateljević and
Pavlović mentioned in the Introduction.
### Acknowledgments
I wish to thank Professors Miodrag Mateljević and David Kalaj for very useful
comments. Also, I am grateful to Professor Darko Mitrović for comments on the
exposition and language corrections.
## References
* [1] N. Aronszajn, Theory of reproducing kernels, Trans. Amer. Math. Soc. 68 (1950), no. 3, 337–404.
* [2] E. F. Beckenbach and T. Radó, Subharmonic functions and surfaces of negative curvature, Trans. Amer. Math. Soc. 35 (1933), no. 3, 662–674.
* [3] C. Bénéteau and D. Khavinson, The isoperimetric inequality via approximation theory and free boundary problems, Comput. Methods Funct. Theory 6 (2006), no. 2, 253–274.
* [4] J. Burbea, Inequalities for holomorphic functions of several complex variables, Trans. Amer. Math. Soc. 276 (1983), no. 1, 247–266.
* [5] J. Burbea, Sharp inequalities for holomorphic functions, Illinois J. Math. 31 (1987), no. 2, 248–264.
* [6] V. Bläsjö, The isoperimetric problem, Amer. Math. Monthly 112 (2005), no. 6, 526–566.
* [7] T. Carleman, Zur Theorie der Minimalflächen, Math. Z. 9 (1921), no. 1–2, 154–160.
* [8] P. L. Duren, Theory of HP spaces, Academic Press, New York, 1970.
* [9] B. A. Fuks, Special Chapters in the Theory of Analytic Functions of Several Complex Variables, (Russian) Gosudarstv. Izdat. Fiz.-Mat. Lit., Moscow, 1963.
* [10] T. W. Gamelin and D. Khavinson, The isoperimetric inequality and rational approximation, Amer. Math. Monthly 96 (1989), no 1., 18–30.
* [11] P. R. Halmos, Measure Theory, Van Nostrand, New York, 1950.
* [12] G. H. Hardy and J. E. Littlewood, Some properties of fractional integrals II, Math. Z. 34 (1932), 403–439.
* [13] L. Hörmander, An Introduction to Complex Analysis in Several Variables, North-Holland, Amsterdam, 1973.
* [14] S. Jacobs An Isoperimetric Inequality for Functions Analytic in Multiply Connected Domains, Mittag-Leffler Institute Report 5, (1972).
* [15] D. Kalaj, Isoperimetric inequality for the polydisk, Ann. Mat. Pura Appl. 190 (2011), no. 2, 355–369.
* [16] D. Kalaj and R. Meštrović, An isoperimetric type inequality for harmonic functions, J. Math. Anal. Appl. 373 (2011), no. 2, 439-448
* [17] M. Keldysh and M. Lavrentiev, Sur la représentation conforme des domaines limités par des courbes rectifiables, Ann. Sci. École Norm. Sup. 54 (1937), no. 3, 1–38.
* [18] H. O. Kim, On a theorem of Hardy and Littlewood on the polydisc, Proc. Amer. Math. Soc. 97 (1986), no. 3, 403–409
* [19] C. J. Kolaski, Isometries of Bergman spaces over bounded Runge domains, Canad. J. Math. 33 (1981), no. 5, 1157–1164.
* [20] S. G. Krantz, Function Theory of Several Complex Variables, Wiley, New York, 1982.
* [21] M. Mateljević, The isoperimetric inequality and some extremal problems in $H^{1}$, Lect. Notes Math. 798 (1980), 364–369.
* [22] M. Mateljević, The isoperimetric inequality in the Hardy class II, Mat. vesnik 31 (1979), 169–178.
* [23] M. Mateljević and M. Pavlović, New proofs of the isoperimetric inequality and some generalizations, J. Math. Anal. Appl. 98 (1984), no. 1, 25–30.
* [24] M. Mateljević and M. Pavlović, Some inequalities of isoperimetric type concerning analytic and subharmonic functions, Publ. Inst. Math. (Belgrade) (N.S.) 50 (64) (1991), 123–130.
* [25] M. Mateljević and M. Pavlović, Some inequalities of isoperimetric type for the integral means of analytic functions, Mat. vesnik 37 (1985), 78–80.
* [26] R. Osserman, The isoperimetric inequality, Bull. Amer. Math. Soc. 84 (1978), no. 6, 1182–1238.
* [27] M. Pavlović Introduction to Function Spaces on the Disk, Matematički Institut SANU, Belgrade, 2004.
* [28] M. Pavlović and M. Dostanić, On the inclusion $H^{2}(\mathbb{U}^{n})\subset H^{2n}(\mathbb{B}_{n})$ and the isoperimetric inequality, J. Math. Anal. Appl. 226 (1998), no. 1, 143–149.
* [29] I. I. Privalov, Boundary Properties of Analytic Functions, (Russian) Gostekhizdat, Moscow, 1950.
* [30] L. I. Ronkin, Introduction to the Theory of Entire Functions of Several Variables, Translations of Mathematical Monographs, Amer. Math. Soc., Providence, RI, 1974.
* [31] W. Rudin, Function Theory in Polydiscs, Benjamin, New York, 1969.
* [32] S. Saitoh, The Bergman norm and the Szegö norm, Trans. Amer. Math. Soc. 249 (1979), no. 2, 261–279.
* [33] K. Strebel, Quadratic Differentials, Springer-Verlag, Berlin, 1984.
* [34] D. Vukotić, The isoperimetric inequality and a theorem of Hardy and Littlewood, Amer. Math. Monthly 110 (2003), no. 6, 532–536.
* [35] S. E. Warschawski, On differentiability at the boundary in conformal mapping, Proc. Amer. Math. Soc. 12 (1961), no. 4, 614–620
* [36] A. Zygmund, Trigonometric Series, Vol. II, 2nd ed., Cambridge University Press, 1959.
# Topics on nonlinear generalized functions
J. F. Colombeau,
Institut Fourier, Université de Grenoble 1, France
jf.colombeau@wanadoo.fr
###### Abstract
The aim of this paper is to give the text of a recent introduction to
nonlinear generalized functions given in my talk at the congress GF2011,
which was requested by several participants. Three representative topics were
presented: two recalls, "Nonlinear generalized functions and their connections
with distribution theory" and "Examples of applications", and a recent
development, "Locally convex topologies and compactness: a functional analysis
of nonlinear generalized functions".
AMS classification: 46F30
1\. Locally convex topologies and compactness.
We start with this topic because of its complete novelty and its somewhat
unexpected character which makes the audience more interested in it than in
the recalls. Let $\Omega$ be an open set in $\mathbb{R}^{n}$ and let
$\mathcal{G}(\Omega)$ denote the special algebra of generalized functions on
$\Omega$, i.e. there is no canonical inclusion of the vector space of
distributions into $\mathcal{G}(\Omega)$: we can obtain such an inclusion
after choice of a mollifier. A topology on $\mathcal{G}(\Omega)$ was defined
in [2,1]. This topology is not a vector space topology. Later this topology
was rediscovered by D. Scarpalezos [16,17,13], who gave it the name of "sharp
topology", and it was subsequently improved by Scarpalezos, Aragona, Garetto, Verwaerde, and others.
Later I found a proof of nonexistence of a Hausdorff locally convex topology
on $\mathcal{G}(\Omega)$ having some natural needed properties. At this point
$\mathcal{G}(\Omega)$ permits concrete applications such as in nonlinear
elasticity and acoustics in heterogeneous media. Since these methods are based
on topology, complete metric spaces (contractive mappings), or Hilbert spaces,
or compactness,$\dots$ a topology is needed and one could strongly wish a
classical topology so as to be able to use the well developed theory of
locally convex spaces, Banach spaces, Hilbert spaces,$\dots$, as this has been
done in the various spaces of the theory of distributions: locally convex
spaces in the Schwartz theory, Sobolev spaces, $\dots$. Hence the following
question arises naturally:
Does there exist a convenient subalgebra of $\mathcal{G}(\Omega)$ having the
requested topological properties? We shall show here that the answer is yes:
indeed, there are many of them.
Theorem. There exist subalgebras of $\mathcal{G}(\Omega)$ with the following
properties:
i) they are Hausdorff locally convex algebras in which all bounded sets are
relatively compact,
ii) they contain "most" distributions on $\Omega$, and all bounded sets of
distributions are relatively compact,
iii) all partial derivatives are linear continuous from any such algebra into
itself. Many nonlinearities are also continuous and internal.
The topological algebras we have constructed are nonmetrizable, complete,
Schwartz, and some of them are nuclear.
The situation can be presented as follows: distribution theory provides a
synthesis between differentiation and irregular functions: any irregular
function is a distribution and any partial derivative of a distribution is
still a distribution. $\mathcal{G}(\Omega)$ has added nonlinearities into this
context: in particular any product of two elements of $\mathcal{G}(\Omega)$ is
still an element of $\mathcal{G}(\Omega)$. The theory of distributions has
also provided to mathematics a very rich variety of topologies having optimal
properties (but limited to vector spaces due to the impossibility to multiply
conveniently the distributions). Now the above subalgebras bring algebra
topologies that are as rich as the topologies of the spaces of distributions,
with the basic difference that these topologies are compatible with
nonlinearities, as could be expected in the context of nonlinear
generalized functions. In other words, in these algebras one has
$u_{n}\rightarrow u,v_{n}\rightarrow v\Rightarrow u_{n}.v_{n}\rightarrow u.v$
exactly as in the classical spaces of $\mathcal{C}^{\infty}$ functions.
Further in natural cases ii) means that
$u_{n}\rightharpoonup u\Rightarrow u_{n}\rightarrow u$
where the half arrow on the left denotes the classical (weak) convergence in
spaces of distributions and the full arrow on the right denotes the topological
convergence in these algebras (there is no mistake in this implication,
contrary to what one might expect at first sight). Often one has a bounded family of
approximate solutions of an equation; if one considers the strong topologies
in some well chosen Banach spaces then the product $u_{n}v_{n}$ tends to $uv$
if $u_{n}\rightarrow u$ and $v_{n}\rightarrow v$, but from the Riesz theorem
such bounded families are not a priori relatively compact so that one cannot
extract convergent subsequences. To have the needed compactness tool one is
usually forced to work with weak (or weak-*) topologies. Then it is well known
that these topologies are incompatible with a passage to the limit in
nonlinear terms:
$u_{n}\rightharpoonup u,v_{n}\rightharpoonup v\not\Rightarrow
u_{n}v_{n}\rightharpoonup uv$
Therefore we can hope that this context could be useful in nonlinear problems.
As presented above, the context announced in the theorem therefore looks like
the Schwartz presentation of distributions, in which the locally convex vector
spaces would be replaced by locally convex topological algebras. Since the
constructions of these topological algebras are completely different from the
definition of the spaces in Schwartz theory, there is work here for those
mathematicians who love topological vector spaces. This setting can also be
described using only the concepts of Banach and Hilbert spaces.
For instance:
There is an infinite increasing sequence $(H_{n})$ of separable Hilbert spaces
with nuclear inclusion maps $H_{n}\rightarrow H_{n+1}$. The union of the
$H_{n}$’s is strictly smaller than $\mathcal{G}(\Omega)$ but it contains
significant spaces of distributions, such that any bounded set in these
spaces of distributions is contained and bounded in one $H_{n}$. One can
choose these Hilbert spaces such that $H_{n}.H_{n}\subset H_{n+1}$ where
$H_{n}.H_{n}:=\{x.y:x\in H_{n},\ y\in H_{n}\}$, and such that
$\frac{\partial}{\partial x_{i}}H_{n}\subset H_{n+1}.$
Numerous questions were raised after the talk. The applications to PDEs have
not yet been investigated. Other questions concerned the construction of these
algebras: it is quite delicate and could not be given in full in such a short
time. The idea of the construction I gave is that in these subalgebras of
$\mathcal{G}(\Omega)$, whose elements are of course equivalence classes (from
the definition of $\mathcal{G}(\Omega)$), each equivalence class contains a
privileged representative, so that there is an algebraic isomorphism between
the algebra of equivalence classes and a classical algebra (i.e. without
quotient) made of these privileged representatives. The absence of a quotient
in the algebra of privileged representatives makes it possible to define there
natural Hausdorff locally convex topologies, which it then suffices to transport to
the subalgebra of $\mathcal{G}(\Omega)$ under consideration. One uses deeply the
theory of topological vector spaces [3, or other books] and the theory of
nuclear spaces [12,15].
In conclusion it is likely that the above opens the possibility of an
original functional analysis of nonlinear generalized functions as rich as the
classical linear functional analysis, but significantly different.
2\. Nonlinear generalized functions as a natural continuation of Schwartz
presentation of distributions.
The purpose of this section is to show to mathematicians who do not know the
nonlinear generalized functions that their construction is indeed naturally
connected to the classical linear theory as presented by Schwartz [18]. For
this we explain the construction of the nonlinear generalized functions for
mathematicians in a pedagogical way to stress the basic facts and hide the
(slightly) technical points that are minor but could be a drawback for a clear
understanding of the situation. We adopt the Schwartz notation:
$\mathcal{D}(\Omega)=\mathcal{C}_{c}^{\infty}(\Omega)=$ the space of all
$\mathcal{C}^{\infty}$ functions on $\Omega$ with compact support,
$\mathcal{D}^{\prime}(\Omega)=$ the vector space of all distributions on
$\Omega$, $\mathcal{E}^{\prime}(\Omega)=$ the vector space of all
distributions on $\Omega$ with compact support, $\delta_{x}=$ the Dirac
measure centered at the point $x\in\Omega,\ \mathbb{K}=\mathbb{R}$ or
$\mathbb{C}$.
In the algebra $\mathcal{C}^{\infty}(\mathcal{D}(\Omega))$ of all
$\mathcal{C}^{\infty}$ maps $\Phi:\mathcal{D}(\Omega)\rightarrow\mathbb{K}$
one considers an equivalence relation $\mathcal{R}$, and -modulo some
technical details- the algebra $\mathcal{G}(\Omega)$ is the set of all
equivalence classes. To understand this equivalence relation -whose definition
is somewhat technical- we describe it on subspaces of
$\mathcal{C}^{\infty}(\mathcal{D}(\Omega))$ where it takes a particularly
simple form.
$\mathcal{C}^{\infty}(\mathcal{E}^{\prime}(\Omega))\subset\mathcal{C}^{\infty}(\mathcal{D}(\Omega))$
through the restriction map $\Phi\rightarrow\Phi_{|\mathcal{D}(\Omega)}$ which
is injective since $\mathcal{D}(\Omega)$ is dense in
$\mathcal{E}^{\prime}(\Omega)$. In
$\mathcal{C}^{\infty}(\mathcal{E}^{\prime}(\Omega))$ the equivalence relation
$\mathcal{R}$ takes a particularly simple form:
$\Phi_{1}\mathcal{R}\Phi_{2}\Leftrightarrow\Phi_{1}(\delta_{x})=\Phi_{2}(\delta_{x})\ \ \forall x\in\Omega$
In the subspace $\mathcal{D}^{\prime}(\Omega)$ of
$\mathcal{C}^{\infty}(\mathcal{D}(\Omega))$ then the equivalence relation
$\mathcal{R}$ reduces to the identity. That is why
$\mathcal{D}^{\prime}(\Omega)\subset\mathcal{G}(\Omega)$ and why this
equivalence relation was not perceived by specialists of distribution theory.
The statement of the equivalence relation in
$\mathcal{C}^{\infty}(\mathcal{D}(\Omega))$ is merely an extension of the
above in which $\delta_{x}$ is replaced by a net
$\\{\rho_{\epsilon,x}\\}_{\epsilon}$ of elements of $\mathcal{D}(\Omega)$ that
approximate $\delta_{x}$ in a standard way. On such a net, the equality
$\Phi(\delta_{x})=0$ (with $\Phi=\Phi_{1}-\Phi_{2}$) is replaced by a suitably fast decrease to 0 of the
values $\Phi(\rho_{\epsilon,x})$ as $\epsilon\rightarrow 0$.
The Schwartz impossibility result states that "multiplication of distributions
is impossible in any mathematical context possibly different from distribution
theory". This appears to be in contradiction with the existence of the algebra
$\mathcal{G}(\Omega)$. Here is a simplified version of what appears to be the
"core" of this result. Let $H$ denote the Heaviside function.
$\int_{-\infty}^{+\infty}(H^{2}-H)H^{\prime}dx=0$ because $H^{2}=H$. On the
other hand
$\int_{-\infty}^{+\infty}(H^{2}-H)H^{\prime}dx=[\frac{H^{3}}{3}-\frac{H^{2}}{2}]_{-\infty}^{+\infty}=\frac{1}{3}-\frac{1}{2}=-\frac{1}{6}.$
What is the correct result? If one admits that $H^{2}=H$ as adopted in
classical mathematics from Lebesgue integration theory then the second line
proves that multiplication of $H$ and $H^{\prime}$ is impossible. In the
theory of nonlinear generalized functions the second line is true, which shows
that necessarily in this theory $H^{2}\not=H$. This is in perfect agreement
with physical intuition: indeed $H$ can be viewed as an idealization of a
continuous phenomenon that "jumps" on a very small interval, say of length
$\epsilon$ around $x=0$. Then $H^{2}-H$ has nonzero values
located on this small interval, and $H^{\prime}$ has large values
located on this interval. When $\epsilon\rightarrow 0$ the first line of the
above calculations has the form $0\times\infty$, whose indeterminacy here is
resolved by the second line. Therefore the classical algebra of step functions
is not a faithful subalgebra of $\mathcal{G}(\mathbb{R})$. The Schwartz
impossibility result extends this fact to the algebra
$\mathcal{C}(\mathbb{R})$ of all continuous functions on $\mathbb{R}$. This
fact calls for an explanation, since one now has two different products of
continuous functions, which is a priori unacceptable. Indeed the situation is
resolved in a satisfactory way simply by noticing that the difference between
the two products is always insignificant as long as one performs calculations
that make sense within distribution theory. For instance it is true that
$H^{2}$ and $H$ can be identified as long as one considers integration with a
test function: $\forall\psi\in\mathcal{D}(\Omega)$ then $\int H\psi dx=\int
H^{2}\psi dx$. In $\mathcal{G}(\Omega)$ we say that $H^{2}$ and $H$ are
associated, i.e. their integration against any test function in
$\mathcal{D}(\Omega)$ gives the same result. The association is another
equivalence relation in $\mathcal{C}^{\infty}(\mathcal{D}(\Omega))$, less
restrictive than the relation $\mathcal{R}$ considered above. For all
calculations valid within distribution theory, associated objects give the same
result, and the restriction of the association to the space of distributions
reduces again to the equality of distributions. Therefore the fact noticed by
Schwartz, which he interpreted as the impossibility of multiplying distributions,
does not cause a problem: the calculations that make sense within the
distributions always give the same result when reproduced in
$\mathcal{G}(\Omega)$. This is developed in detail in numerous pedagogical
texts and talks such as [4-8,11,13,14].
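To make the $0\times\infty$ resolution above concrete, here is a minimal numerical sketch in Python, using a hypothetical smoothed Heaviside $H_{\epsilon}(x)=\frac{1}{2}(1+\tanh(x/\epsilon))$ (this particular regularization is an assumption of the illustration, not part of the text): whatever the width $\epsilon$ of the regularized jump, the integral $\int(H_{\epsilon}^{2}-H_{\epsilon})H_{\epsilon}^{\prime}\,dx$ equals $\int_{0}^{1}(u^{2}-u)\,du=-1/6$, in agreement with the second line of the calculation.

```python
import numpy as np

def smoothed_heaviside(x, eps):
    # A smooth idealization of the Heaviside step, jumping on an interval of width ~eps.
    # (Hypothetical regularization; any smooth monotone 0-to-1 profile gives the same result.)
    return 0.5 * (1.0 + np.tanh(x / eps))

def integral_H2_minus_H_times_Hprime(eps, xmax=20.0, n=400_001):
    x = np.linspace(-xmax, xmax, n)
    H = smoothed_heaviside(x, eps)
    dH = np.gradient(H, x)                             # approximates H'
    f = (H**2 - H) * dH
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoidal rule

for eps in (1.0, 0.1, 0.01):
    val = integral_H2_minus_H_times_Hprime(eps)
    print(f"eps = {eps:5.2f}:  integral of (H^2 - H) H' dx = {val:+.6f}   (expected -1/6 = {-1/6:+.6f})")
```

The value is independent of $\epsilon$ because the substitution $u=H_{\epsilon}(x)$ turns the integral into $\int_{0}^{1}(u^{2}-u)\,du$ for any smooth monotone transition profile.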
3\. Examples of applications.
This has already been developed in expository texts such as [6,9-11,14].
## References
* [1] H.A. Biagioni. A Nonlinear Theory of Generalized Functions. Lecture Notes in Math. 1421. Springer. 1990.
* [2] H.A. Biagioni, J.F. Colombeau. New Generalized Functions and $\mathcal{C}^{\infty}$ functions with values in generalized complex numbers. J. London Math. Soc. 2,33, 1986, p. 169-179.
* [3] N. Bourbaki. Topological Vector Spaces. Hermann, Paris.
* [4] J.F. Colombeau. New Generalized Functions and Multiplication of Distributions. North-Holland. 1984.
* [5] J.F. Colombeau. Elementary Introduction to Nonlinear Generalized Functions. North-Holland. 1985.
* [6] J.F. Colombeau. Multiplication of Distributions . Lecture Notes in Math. 1532. Springer. 1992.
* [7] J.F. Colombeau. Generalized functions and infinitesimals. ArXiv 0610264.
* [8] J.F. Colombeau. Generalized functions as a tool for nonsmooth nonlinear problems. ArXiv 061071.
* [9] J.F. Colombeau, A. Gsponer, B. Perrot. Generalized functions and the Heisenberg-Pauli formalism of QFT. ArXiv 07052396.
* [10] J.F. Colombeau, A. Gsponer. The Heisenberg-Pauli canonical formalism of QFT in the rigorous setting of nonlinear generalized functions. ArXiv 07083425.
* [11] M. Grosser, M. Kunzinger, M. Oberguggenberger, R. Steinbauer. Geometric Theory of Generalized Functions with Applications to General Relativity. Kluwer 2001.
* [12] A. Grothendieck. Produits tensoriels topologiques; Memoirs of the AMS.
* [13] M. Nedeljkov, S. Pilipovic, D. Scarpalezos. The Linear Theory of Colombeau Generalized Functions. Pitman Research Notes in Math. 1998.
* [14] M. Oberguggenberger. Multiplication of Distributions and Applications to Partial Differential Equations. Pitman Research Notes in Math. 1992.
* [15] Pietsch. Nuclear locally convex spaces. Erg. der Math. 66 , Springer Verlag, 1972.
* [16] D. Scarpalezos. Colombeau generalized functions, topological structures, microlocal properties. A simplified point of view.
* [17] D. Scarpalezos. Private communication, about 1989.
* [18] L. Schwartz. Theorie des distributions. Hermann, Paris, numerous editions.
|
arxiv-papers
| 2011-05-23T10:16:30 |
2024-09-04T02:49:19.037308
|
{
"license": "Public Domain",
"authors": "J.F. Colombeau",
"submitter": "J.F. Colombeau",
"url": "https://arxiv.org/abs/1105.4457"
}
|
1105.4503
|
# Two body hadronic $D$ decays
Yu Fusheng, Cai-Dian Lü, Xiao-Xia Wang
We analyze the decay modes of $D/D_{s}\to PP,PV$ on the basis of a hybrid
method with the generalized factorization approach for emission diagrams and
the pole dominance model for the annihilation type contributions. Our results
of PV final states are better than the previous method, while the results of
PP final states are comparable with previous diagrammatic approach.
## 1 Introduction
The CLEO-c experiment and the two B factories have already given more measurements of charmed
meson decays than ever. BESIII and the super B factories are going to provide
even more data soon. Therefore, it is a good opportunity to further study the
nonleptonic two-body $D$ decays. The theoretical situation, however, is unsatisfactory:
first-principles calculations, such as QCD sum rules or lattice QCD, are
the ultimate tools but remain formidable tasks. In $B$ physics, there are QCD-inspired
approaches for hadronic decays, such as the perturbative QCD approach
(pQCD),$\\!{}^{{\bf?}}$ the QCD factorization approach (QCDF),$\\!{}^{{\bf?}}$
and the soft-collinear effective theory (SCET).$\\!{}^{{\bf?}}$ But it doesn’t
make much sense to apply these approaches to charm decays, since the mass of
charm quark, of order 1.5 GeV, is neither heavy enough for a sensible
$1/m_{c}$ expansion, nor light enough for the application of chiral
perturbation theory.
After decades of studies, the factorization approach is still an effective way
to investigate the hadronic $D$ decays $\\!{}^{{\bf?}}$. However, the naive
factorization encounters well-known problems: the Wilson coefficients are
renormalization scale and $\gamma_{5}$-scheme dependent, and the color-
suppressed processes are not well predicted due to the smallness of $a_{2}$.
The generalized factorization approaches were proposed to solve these
problems, considering the significant nonfactorizable contributions in the
effective Wilson coefficients $\\!{}^{{\bf?}}$. Besides, in the naive or
generalized factorization approaches, there are no strong phases between
different amplitudes, whose existence is demonstrated by experiments.
On the other hand, the hadronic picture description of non-leptonic weak
decays has a longer history, because of their non-perturbative feature. Based
on the idea of vector dominance, which was first discussed for strange particle
decays,$\\!{}^{{\bf?}}$ the pole-dominance model of two-body hadronic decays
was proposed.$\\!{}^{{\bf?}}$ This model has already been applied to the two-
body nonleptonic decays of charmed and bottom mesons $\\!{}^{{\bf?},{\bf?}}$.
In this work, the two-body hadronic charm decays are analyzed based on a
hybrid method with the generalized factorization approach for emission
diagrams and the pole dominance model for the annihilation type contributions
$\\!{}^{{\bf?}}$.
## 2 The hybrid method
In charm decays, we start with the weak effective Hamiltonian for the $\Delta
C=1$ transition
$\mathcal{H}_{eff}=\frac{G_{F}}{\sqrt{2}}V_{CKM}(C_{1}O_{1}+C_{2}O_{2})+h.c.,$
(1)
with the current-current operators
$\displaystyle
O_{1}=\bar{u}_{\alpha}\gamma_{\mu}(1-\gamma_{5})q_{2\beta}\cdot\bar{q}_{3\beta}\gamma^{\mu}(1-\gamma_{5})c_{\alpha},$
$\displaystyle
O_{2}=\bar{u}_{\alpha}\gamma_{\mu}(1-\gamma_{5})q_{2\alpha}\cdot\bar{q}_{3\beta}\gamma^{\mu}(1-\gamma_{5})c_{\beta}.$
(2)
In the generalized factorization method, the amplitudes are separated into two
parts
$\langle
M_{1}M_{2}|\mathcal{H}_{eff}|D\rangle=\frac{G_{F}}{\sqrt{2}}V_{CKM}a_{1,2}\langle
M_{1}|\bar{q}_{1}\gamma_{\mu}(1-\gamma_{5})q_{2}|0\rangle\langle
M_{2}|\bar{q}_{3}\gamma^{\mu}(1-\gamma_{5})c|D\rangle,$ (3)
where $a_{1}$ and $a_{2}$ correspond to the color-favored tree diagram
($\mathcal{T}$) and the color-suppressed diagram ($\mathcal{C}$) respectively.
To include the significant non-factorizable contributions, we take $a_{1,2}$
as scale- and process-independent parameters fitted from experimental data.
Besides, a large relative strong phase between $a_{1}$ and $a_{2}$ is
demonstrated by experiments. Theoretically, the existence of a large phase is
reasonable given the importance of inelastic final-state interactions, with
on-shell intermediate states, in charmed meson decays. Therefore, we take
$a_{1}=|a_{1}|,~{}~{}a_{2}=|a_{2}|e^{i\delta},$ (4)
where $a_{1}$ is set to be real for convenience.
On the other hand, annihilation type contributions are neglected in the
factorization approach. However, the weak annihilation ($W$-exchange and
$W$-annihilation) contributions are sizable, of order $1/m_{c}$, and have to
be considered. Their importance is also demonstrated by the lifetime difference
between $D^{0}$ and $D^{+}$. The pole-dominance model is a useful
tool to calculate the considerable resonant effects of annihilation diagrams.
For simplicity, only the lowest-lying pole is considered in the single-pole
model. Taking $D^{0}\to PP,PV$ as an example, the annihilation-type diagram in
the pole model is shown in Fig. 1(a). $D^{0}$ goes into the intermediate state
$M$ via the effective weak Hamiltonian in Eq. (1), shown by the quark line in
Fig. 1(b), and then decays into $PP(PV)$ through strong interactions.
Angular momentum must be conserved at the weak vertex, and all conservation
laws must be preserved at the strong vertex. Therefore, the intermediate particles
are scalar mesons for $PP$ modes and pseudoscalar mesons for $PV$ modes. In
$D^{0}$ decays these are $W$-exchange diagrams, while in the $D^{+}_{(s)}$ decay
modes they are $W$-annihilation amplitudes.
Figure 1: Annihilation diagram in the pole-dominance model
The weak matrix elements are evaluated in the vacuum insertion
approximation$\\!{}^{{\bf?}}$,
$\displaystyle\langle M|\mathcal{H}|D\rangle$ $\displaystyle=$
$\displaystyle\frac{G_{F}}{\sqrt{2}}V_{CKM}a_{A,E}\langle
M|\bar{q}_{1}\gamma_{\mu}(1-\gamma_{5})q_{2}|0\rangle\langle
0|\bar{q}_{3}\gamma^{\mu}(1-\gamma_{5})c|D\rangle$ (5) $\displaystyle=$
$\displaystyle\frac{G_{F}}{\sqrt{2}}V_{CKM}a_{A,E}f_{M}f_{D}m_{D}^{2},$
where the effective coefficients $a_{A}$ and $a_{E}$ correspond to
$W$-annihilation and $W$-exchange amplitudes respectively. Strong phases
relative to the emission diagrams are also considered in these coefficients.
For the $PV$ modes, the effective strong coupling constants are defined
through the Lagrangian
$\mathcal{L}_{VPP}=ig_{VPP}V^{\mu}(P_{1}{\partial}_{\mu}P_{2}-P_{2}{\partial}_{\mu}P_{1}),$
(6)
where $g_{VPP}$ is dimensionless and obtained from experiments. By inserting
the propagator of the intermediate state $M$, the annihilation amplitudes are
$\langle
PV|\mathcal{H}_{eff}|D\rangle=\frac{G_{F}}{\sqrt{2}}V_{CKM}a_{A,E}f_{M}f_{D}m_{D}^{2}\frac{1}{m_{D}^{2}-m_{M}^{2}}g_{VPM}2(\varepsilon^{*}\cdot
p_{D}).$ (7)
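As a rough illustration of how Eq. (7) is assembled, the following Python sketch packages the pole-model $PV$ amplitude as a function of its inputs. The coefficient $a_{E}^{PV}$ is taken from Eq. (11) below; all the other numerical values passed in at the bottom (CKM factor, decay constants, masses, strong coupling and the polarization factor $2(\varepsilon^{*}\cdot p_{D})$) are rough placeholders for illustration only, not fitted or measured inputs of this work.

```python
import cmath, math

G_F = 1.166e-5   # Fermi constant, GeV^-2

def pole_model_PV_amplitude(V_ckm, a_eff, f_M, f_D, m_D, m_M, g_VPM, eps_dot_pD):
    """Annihilation-type amplitude of Eq. (7):
    (G_F/sqrt(2)) V_CKM a_{A,E} f_M f_D m_D^2 * 1/(m_D^2 - m_M^2) * g_VPM * 2 (eps* . p_D)."""
    prefactor  = (G_F / math.sqrt(2.0)) * V_ckm * a_eff * f_M * f_D * m_D**2
    propagator = 1.0 / (m_D**2 - m_M**2)          # single lowest-lying pole
    return prefactor * propagator * g_VPM * 2.0 * eps_dot_pD

# a_E^{PV} central value from Eq. (11); every other number below is a placeholder.
a_E_PV = 0.62 * cmath.exp(1j * math.radians(238))
amp = pole_model_PV_amplitude(V_ckm=0.22, a_eff=a_E_PV, f_M=0.13, f_D=0.21,
                              m_D=1.87, m_M=0.49, g_VPM=4.5, eps_dot_pD=1.0)
print(abs(amp))   # magnitude of the (unnormalized) amplitude in these units
```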
As for the $PP$ modes, the intermediate mesons are scalar particles. The
effective strong coupling constants are described by
$\mathcal{L}_{SPP}=-g_{SPP}m_{S}SPP.$ (8)
However, the decay constants of scalar mesons are very small, as shown by the
following relation
$\displaystyle\frac{f_{S}}{\bar{f}_{S}}=\frac{m_{2}(\mu)-m_{1}(\mu)}{m_{S}},$
(9)
where $f_{S}$ is the vector decay constant used in the pole model,
$\bar{f}_{S}$ is the scale-independent scalar decay constant, $m_{1,2}$ are
the running current quark masses, and $m_{S}$ is the mass of the scalar meson.
Since the light-quark mass difference in the numerator is at most of order 0.1 GeV
while $m_{S}\sim 1$ GeV, the ratio is small.
Therefore, the scalar pole contribution is very small, resulting in little
resonant effect of annihilation-type contributions in the $PP$ modes. On the
contrary, large annihilation contributions arise in the $PV$ modes from the
relatively large decay constants of the intermediate pseudoscalar mesons.
## 3 Numerical results and discussions
In this method, only the effective Wilson coefficients with relative strong
phases are free parameters, which are chosen to obtain results consistent
with the experimental data. For $PP$ modes,
$\displaystyle a_{1}$ $\displaystyle=$ $\displaystyle 0.94\pm
0.10,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}a_{2}=(0.65\pm
0.10)e^{i(142\pm 10)^{\circ}},$ $\displaystyle a_{A}$ $\displaystyle=$
$\displaystyle(0.20\pm 0.10)e^{i(300\pm 10)^{\circ}},~{}~{}a_{E}=(1.7\pm
0.1)e^{i(90\pm 10)^{\circ}}.$ (10)
For $PV$ modes,
$\displaystyle a_{1}^{PV}$ $\displaystyle=$ $\displaystyle 1.32\pm
0.10,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}a_{2}^{PV}=(0.75\pm
0.10)e^{i(160\pm 10)^{\circ}},$ $\displaystyle a_{A}^{PV}$ $\displaystyle=$
$\displaystyle(0.12\pm 0.10)e^{i(345\pm 10)^{\circ}},~{}~{}a_{E}^{PV}=(0.62\pm
0.10)e^{i(238\pm 10)^{\circ}}.$ (11)
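Because the fitted coefficients carry large relative strong phases, the interference between the color-favored and color-suppressed topologies is sizeable. The short Python sketch below builds the central values of Eq. (10) as complex numbers and compares a purely illustrative equal-weight coherent sum $|a_{1}+a_{2}|$ with the incoherent combination; it is not the amplitude of any particular channel, just a numerical reminder of what a $142^{\circ}$ phase does.

```python
import cmath, math

def coeff(magnitude, phase_deg=0.0):
    # effective Wilson coefficient written as |a| e^{i delta}
    return magnitude * cmath.exp(1j * math.radians(phase_deg))

# Central values of Eq. (10) for the PP modes (uncertainties dropped);
# aA and aE are included for completeness.
a1 = coeff(0.94)
a2 = coeff(0.65, 142)
aA = coeff(0.20, 300)
aE = coeff(1.7, 90)

coherent   = abs(a1 + a2)                        # illustrative equal-weight coherent sum
incoherent = (abs(a1)**2 + abs(a2)**2) ** 0.5
print(f"|a1 + a2| = {coherent:.3f}  vs  sqrt(|a1|^2 + |a2|^2) = {incoherent:.3f}")
```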
All the predictions for the 100 channels are shown in the tables of
ref.$\\!{}^{{\bf?}}$. The predicted branching ratio of the pure
annihilation process $D_{s}^{+}\to\pi^{+}\pi^{0}$ vanishes in the pole model
within isospin symmetry. It is also zero in the diagrammatic approach with
flavor SU(3) symmetry. Simply put, two pions can form a state of isospin 0, 1 or 2,
but isospin 0 is ruled out by the charged final state, and isospin 2 is forbidden
for the leading-order $\Delta C=1$ weak decay. The only remaining s-wave state,
with isospin 1, is forbidden by Bose-Einstein statistics. In the pole-model
language, $G$ parity is violated in the isospin-1 case. Therefore, no annihilation
amplitude contributes to this mode.
The theoretical analysis in the $\eta-\eta^{\prime}$ sector is somewhat more
complicated. The predictions with $\eta^{\prime}$ in the final state are
always smaller in this hybrid method than those with $\eta$, due to the
smaller phase space. However, experiment shows the opposite pattern in some modes,
such as $D_{s}^{+}\to\pi^{+}\eta(\eta^{\prime})$ and
$D^{0}\to\bar{K}^{0}\eta(\eta^{\prime})$. This may be an effect of SU(3)
flavor symmetry breaking for $\eta_{q}$ and $\eta_{s}$, of an error in the mixing
angle between $\eta$ and $\eta^{\prime}$ (the theoretical and phenomenological
estimates for the mixing angle $\phi$ are $42.2^{\circ}$ and $(39.3\pm
1.0)^{\circ}$, respectively$\\!{}^{{\bf?}}$), of inelastic final state
interactions, or of the two-gluon anomaly mostly associated with the
$\eta^{\prime}$, etc. The mode $D_{s}^{+}\to\rho^{+}\eta(\eta^{\prime})$
is similar to the above two cases, with the opposite ratio of $\eta$ over
$\eta^{\prime}$ between the theoretical prediction and the data. But here the
experimental measurement itself, taken more than ten years ago
$\\!{}^{{\bf?}}$, is a puzzle. As questioned by the PDG $\\!{}^{{\bf?}}$, this branching
ratio of $(12.5\pm 2.2)\%$ considerably exceeds the recent inclusive
$\eta^{\prime}$ fraction of $(11.7\pm 1.8)\%$.
Recently, a model-independent diagrammatic approach has been used to analyze the charm
decays $\\!{}^{{\bf?}}$. All two-body hadronic decays of $D$ mesons can be
expressed in terms of some distinct topological diagrams within the SU(3)
flavor symmetry, by extracting the topological amplitudes from the data
$\\!{}^{{\bf?}}$. Since the recent measurements of
$D_{s}^{+}\to\pi^{+}\rho^{0}$ $\\!{}^{{\bf?}}$ and $D_{s}^{+}\to\pi^{+}\omega$
$\\!{}^{{\bf?}}$ give a strong constraint on the $W-$annihilation amplitudes,
one cannot find a nice fit for $A_{P}$ and $A_{V}$ in the diagrammatic
approach to the data with $D_{s}^{+}\to\bar{K}^{*0}K^{+},\bar{K}^{0}K^{*+}$
simultaneously. Compared to the calculations in the model-independent
diagrammatic approach $\\!{}^{{\bf?}}$, our hybrid method gives more
predictions for the $PV$ modes that are consistent with
the experimental data. It has been questioned whether the measurement of
$Br(D_{s}^{+}\to\bar{K}^{0}K^{*+})=(5.4\pm 1.2)\%$,$\\!{}^{{\bf?}}$ which was
taken two decades ago, was overestimated. Since $|C_{V}|<|C_{P}|$ and
$A_{V}\approx A_{P}$ as a consequence of very small rate of
$D_{s}^{+}\to\pi^{+}\rho^{0}$, it is expected that
$Br(D_{s}^{+}\to\bar{K}^{0}K^{*+})<Br(D_{s}^{+}\to\bar{K}^{*0}K^{+})=(3.90\pm
0.23)\%$. Our result in the hybrid method also agrees with this argument.
As an application of the diagrammatic approach, the mixing parameters
$x=(m_{1}-m_{2})/\Gamma$ and $y=(\Gamma_{1}-\Gamma_{2})/\Gamma$ in the
$D^{0}-\bar{D}^{0}$ mixing are evaluated from the long distance contributions
of the $PP$ and $VP$ modes $\\!{}^{{\bf?}}$. The global fit and predictions in
the diagrammatic approach are done in the SU(3) symmetry limit. However, as we
know, the nonzero values of $x$ and $y$ come from the SU(3) breaking effect.
Part of the flavor SU(3) breaking effects are taken into account in the factorization
method and in the pole model. Therefore, our hybrid method has an advantage
in the analysis of $D^{0}-\bar{D}^{0}$ mixing.
## Acknowledgments
This work is partially supported by National Science Foundation of China under
the Grant No. 10735080, 11075168; and National Basic Research Program of China
(973) No. 2010CB833000.
## References
* [1] Y. -Y. Keum, H. -n. Li, A. I. Sanda, Phys. Lett. B504, 6-14 (2001) [hep-ph/0004004]; Y. Y. Keum, H. -N. Li, A. I. Sanda, Phys. Rev. D63, 054008 (2001) [hep-ph/0004173]; C. -D. Lu, K. Ukai, M. -Z. Yang, Phys. Rev. D63, 074009 (2001) [hep-ph/0004213]; C. -D. Lu, M. -Z. Yang, Eur. Phys. J. C23, 275-287 (2002) [hep-ph/0011238].
* [2] M. Beneke, G. Buchalla, M. Neubert, C. T. Sachrajda, Phys. Rev. Lett. 83, 1914-1917 (1999) [hep-ph/9905312]; M. Beneke, G. Buchalla, M. Neubert, C. T. Sachrajda, Nucl. Phys. B591, 313-418 (2000) [hep-ph/0006124].
* [3] C. W. Bauer, D. Pirjol, I. W. Stewart, Phys. Rev. Lett. 87, 201806 (2001) [hep-ph/0107002]; C. W. Bauer, D. Pirjol, I. W. Stewart, Phys. Rev. D65, 054022 (2002) [hep-ph/0109045].
* [4] M. Wirbel, B. Stech, M. Bauer, Z. Phys. C29, 637 (1985); M. Bauer, B. Stech, M. Wirbel, Z. Phys. C34, 103 (1987).
* [5] H. -Y. Cheng, Phys. Lett. B335, 428-435 (1994) [hep-ph/9406262]; H. -Y. Cheng, Z. Phys. C69, 647-654 (1996) [hep-ph/9503219].
* [6] J. J. Sakurai, Phys. Rev. 156, 1508 (1967).
* [7] A. K. Das, V. S. Mathur, Mod. Phys. Lett. A8, 2079-2086 (1993) [hep-ph/9301279]; P. F. Bedaque, A. K. Das, V. S. Mathur, Phys. Rev. D49, 269-274 (1994) [hep-ph/9307296];
* [8] G. Kramer, C. -D. Lu, Int. J. Mod. Phys. A13, 3361-3384 (1998) [hep-ph/9707304].
* [9] Y. Fusheng, X. -X. Wang, C. -D. Lu, [arXiv:1101.4714].
* [10] T. Feldmann, P. Kroll, B. Stech, Phys. Rev. D58, 114006 (1998). [hep-ph/9802409].
* [11] C. P. Jessop et al. [CLEO Collaboration], Phys. Rev. D58, 052002 (1998) [hep-ex/9801010].
* [12] K. Nakamura et al. [ Particle Data Group Collaboration ], J. Phys. G G37, 075021 (2010).
* [13] J. L. Rosner, Phys. Rev. D60, 114026 (1999) [hep-ph/9905366].
* [14] H. -Y. Cheng, C. -W. Chiang, Phys. Rev. D81, 074021 (2010) [arXiv:1001.0987 [hep-ph]].
* [15] B. Aubert et al. [BABAR Collaboration], Phys. Rev. D79, 032003 (2009), arXiv:0808.0971 [hep-ex].
* [16] J. Y. Ge et al. [CLEO Collaboration], Phys. Rev. D80, 051102 (2009). [arXiv:0906.2138 [hep-ex]].
* [17] W. Y. Chen et al. [CLEO Collaboration ], Phys. Lett. B226, 192 (1989).
* [18] H. Y. Cheng and C. W. Chiang, Phys. Rev. D 81 (2010) 114020 arXiv:1005.1106 [hep-ph].
|
arxiv-papers
| 2011-05-23T13:51:08 |
2024-09-04T02:49:19.042789
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yu Fusheng, Cai-Dian L\\\"u, Xiao-Xia Wang",
"submitter": "Yu Fusheng",
"url": "https://arxiv.org/abs/1105.4503"
}
|
1105.4799
|
# Simple physical applications of a groupoid structure
Giuseppe Iurato
###### Abstract
Motivated by Quantum Mechanics considerations, we present some cross product
constructions on a groupoid structure. Furthermore, critical remarks are made
on some basic formal aspects of the Hopf algebra structure.
2010 Mathematics Subject Classification: 16T05, 16T99, 20L05, 20L99, 20M99.
Keywords and phrases: groupoid, Hopf algebra, group algebra, cross product.
$\normalsize\bf 1.\ Introduction$
In [11], the notion of E-groupoid [EBB-groupoid] was recovered; its groupoid
algebra led to the so-called E-groupoid algebra [EBB-groupoid algebra].
Following a suggestion of A. Connes (see [6, § I.1]), some elementary
algebraic structures of Matrix Quantum Mechanics can arise from certain
representations of such groupoids and related group algebras. For instance,
the map $(i,j)\rightarrow\nu_{ij}$ of [11, § 5] provides the kinematical time
evolution of the observables $\mathfrak{q}$ and $\mathfrak{p}$, given by the
Hermitian matrices [11, § 5, $(\maltese)$], which, by means of the Kuhn-Thomas
relation, satisfy the celebrated Heisenberg canonical commutation relations
(see [23, § 14.4] and [3, § 2.3]), whereas the HBJ EBB-groupoid algebra
constructed in [11, § 6], is the algebra of physical observables according to
W. Heisenberg.
The groupoid structure has many considerable applications in both pure and
applied mathematics (see [4, 25]): here, let us consider the following
particular example, drawn from Quantum Field Theory (QFT).
$\bullet$ A toy model of QFT. In the renormalization of Quantum Field Theory (see
[26] and references therein), a Feynman graph $\Gamma$ may be analytically
represented as a sum of iterated divergent integrals defined as follows
$\Gamma^{1}(t)=\int_{t}^{\infty}\frac{dp_{1}}{p_{1}^{1+\varepsilon}},\ \ \
\Gamma^{n}(t)=\int_{t}^{\infty}\frac{dp_{1}}{p_{1}^{1+\varepsilon}}\int_{p_{1}}^{\infty}\frac{dp_{2}}{p_{2}^{1+\varepsilon}}\
...\int_{p_{n-1}}^{\infty}\frac{dp_{n}}{p_{n}^{1+\varepsilon}}\ \ \ n\geq 2$
for $\varepsilon\in\mathbb{R}^{+}$; such integrals diverge logarithmically
as $\varepsilon\rightarrow 0^{+}$.
It can be proved that these iterated integrals form a Hopf algebra of rooted
trees (see [7, 26]), say $\mathcal{H}_{R}$, called the Connes-Kreimer Hopf
algebra (see [7]). The renormalization of these integrals requires a
regularization, for instance through a linear multiplicative functional
$\phi_{a}$ (bare Green function, defining a Feynman rule) defined on them,
which represents a certain way of evaluation of the Feynman graphs, at the
energy scale $a$, of the type
$\phi_{a}\Big{(}\prod_{i\in I}\Gamma^{i}(t)\Big{)}=\prod_{i\in
I}\Gamma^{i}(a),$
where $I$ denotes an arbitrary finite ordered subset of $\mathbb{N}$, and
$\Gamma_{a}=\Gamma^{0}(a)$ are the normalized coupling constants at the energy
scale (or renormalization point) $a$; if $\Gamma$ is a Feynman graph, then
$\phi_{a}(\Gamma)$ is the corresponding regularized Feynman amplitude,
according to the renormalization scheme parametrized by $a$.
So, every Feynman rule is a character
$\phi_{a}:\mathcal{H}_{R}\rightarrow\mathbb{C}$ of the Hopf algebra
$\mathcal{H}_{R}$, and their set is a (renormalization) group
$\mathcal{G}_{R}$ under the group law given by the usual convolution law
$\hat{\ast}$ of $\mathcal{H}_{R}$. Thus, the coalgebra structure of
$\mathcal{H}_{R}$ endows $\mathcal{G}_{R}$ with a well-defined group
structure.
If $S$ denotes the antipode of $\mathcal{H}_{R}$, then let us consider the
following deformed antipode $S_{a}\doteq\phi_{a}\circ S$; in [26], a particular
modification of the usual antipode axiom
$S\hat{\ast}\mbox{\rm id}=\mbox{\rm id}\hat{\ast}S=\eta\circ\varepsilon$ (see
[11, § 8]) is considered, precisely
$\varepsilon_{a,b}=S_{a}\hat{\ast}\mbox{\rm id}_{b}\doteq(\phi_{a}\circ
S)\hat{\ast}\phi_{b}=m\circ(S_{a}\otimes\phi_{b})\circ\Delta\quad\mbox{\rm(renormalized
Green functions)}.$
Hence, in [26, § 5], the following pair groupoid law
$\varepsilon_{a,b}\hat{\ast}\varepsilon_{b,c}=\varepsilon_{a,c}$ is proved, deduced from
the Hopf algebra properties of $\mathcal{H}_{R}$. Moreover, if we consider the
renormalized quantities $\varepsilon_{a,b}(\Gamma^{n}(t))=\Gamma_{a,b}^{n}$,
then we have
$\Gamma_{a,b}^{1}=\int_{b}^{a}\frac{dp}{p^{1+\varepsilon}},\ \
\Gamma_{a,b}^{2}=\int_{b}^{a}\frac{dp_{1}}{p_{1}^{1+\varepsilon}}\int_{p_{1}}^{a}\frac{dp_{2}}{p_{2}^{1+\varepsilon}},\
\ ....\ ,$
with every $\Gamma_{a,b}^{n}$ finite for $\varepsilon\rightarrow 0^{+},$ and
zero for $a=b$. $\varepsilon_{a,b}$ is said to be a renormalized character of
$\mathcal{H}_{R}$ at the energy scales $a,b$. The correspondence
(between renormalization schemes) $\phi_{a},\phi_{b}\rightarrow\varepsilon_{a,b}$ is
what renormalization typically achieves.
Finally, from the relation
$\varepsilon_{a,b}\hat{\ast}\varepsilon_{b,c}=\varepsilon_{a,c}$ and the
coproduct rule of $\mathcal{H}_{R}$, it is possible to obtain the following
relation
$\Gamma^{i}_{a,c}=\Gamma^{i}_{a,b}+\Gamma_{b,c}^{i}+\sum_{j=1}^{i-1}\Gamma_{a,b}^{j}\Gamma_{b,c}^{i-j}\
\ \ \ i\geq 2,$
that is a generalization of the so-called Chen’s Lemma (see [5, 13]); this
relation describes what happens if we change the renormalization point.
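As a concrete check of this composition law, note that in the $\epsilon\rightarrow 0^{+}$ limit the renormalized quantities defined above reduce to $\Gamma^{1}_{a,b}=\ln(a/b)$ and $\Gamma^{2}_{a,b}=\frac{1}{2}\ln^{2}(a/b)$; the following minimal Python sketch verifies the $i=1$ and $i=2$ cases of the relation for arbitrarily chosen scales (the particular numerical values of $a,b,c$ are illustrative only).

```python
import math

def gamma1(a, b):
    # epsilon -> 0+ limit of the renormalized one-fold integral Gamma^1_{a,b}
    return math.log(a / b)

def gamma2(a, b):
    # epsilon -> 0+ limit of the renormalized two-fold iterated integral Gamma^2_{a,b}
    return 0.5 * math.log(a / b) ** 2

a, b, c = 10.0, 3.0, 1.5      # three arbitrary energy scales (illustrative only)

lhs = gamma2(a, c)
rhs = gamma2(a, b) + gamma2(b, c) + gamma1(a, b) * gamma1(b, c)
print(lhs, rhs)                                     # i = 2 case of the relation
print(gamma1(a, c), gamma1(a, b) + gamma1(b, c))    # i = 1 case is simple additivity
```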
If $\Gamma\in\mathcal{H}_{R}$ is a Feynman graph, then we have the following
asymptotic expansion $\Gamma=\Gamma^{0}+\Gamma^{1}+\Gamma^{2}+\Gamma^{3}+...$;
in general, such a series may be divergent, and, in this case, it can be
renormalized to a finite, but undetermined, value. We have that
$\varepsilon_{a,b}(\Gamma)=\Gamma_{a,b}$ is the result of the regularization
of $\Gamma$ at the energy scales $a,b$, and, with respect to the scale change
$\phi_{a}\rightarrow\phi_{b}$ (which allows us to renormalize in a non-
trivial manner), we have the following rule for the shift of the normalized
coupling constants,
$\Gamma_{b}=\Gamma_{a}+\sum_{i\in\mathbb{N}}\Gamma_{a,b}^{i}$, depending on
the running coupling constants $\Gamma_{a,b}^{i},\ \ i\geq 1$.
In short, the comparison among different renormalization schemes (via the
variation of the renormalization point) is regulated by the fundamental pair
groupoid law $\varepsilon_{a,b}\hat{\ast}\varepsilon_{b,c}=\varepsilon_{a,c}$.
Furthermore, we point out that this groupoid combination law, connected with a
variation of the renormalization points, leads us to further formal properties
of renormalization, such as, for instance, the cohomological ones or the
Callan-Symanzik type equations. (Moreover, it would be interesting to go into the
question of possible further roles that groupoid structures may play in
renormalization; see also the conclusions of § 7 of the present paper.)
In [11, § 5], we have defined a specific EBB-groupoid, called the Heisenberg-
Born-Jordan EBB-groupoid (or HBJ EBB-groupoid), whose groupoid algebra, called the HBJ
EBB-groupoid algebra, may be endowed with a (albeit trivial) Hopf algebra
structure, obtaining the so-called HBJ EBB-Hopf algebra (or HBJ EBBH-algebra),
which is a first possible example of a generalization of the Hopf algebra
structure: indeed, it is a particular weak Hopf algebra, or quantum groupoid
(see [19, § 2.5], [20, § 2.1.4] and [24, § 2.2]), in the finite-dimensional
case.
We recall that group algebras were basic examples of Hopf algebras (see,
for instance, the formalization of the quantum mechanics motivations adduced
by V. G. Drinfeld in [9, § 1]), so that groupoid algebras may be considered as
basic examples of a class of structures generalizing the ordinary Hopf algebra
structure; this class contains the so-called weak Hopf algebras, Lu's and
Xu's Hopf algebroids, and so on (see [1, 2, 14, 27]).
In this paper, starting from the E-groupoid structure presented in [11], we want
to introduce another possible generalization of the ordinary Hopf algebra
structure, following the notions of commutative Hopf algebroid (see [22]) and
of quantum semigroup (see [9, § 1, p. 800]).
Moreover, at the end of [11, § 8], both the triviality of the Hopf algebra
structure introduced there (on the HBJ EBB-algebra) and some non-trivial duality
questions, related to the possibly non-finite generation of the HBJ EBB-algebra,
were mentioned.
In § 5 of the present paper (which must be considered as a necessary
continuation of [1]), we will try to settle these questions by means of some
fundamental works of S. Majid.
Indeed, in [16] and [17], Majid has constructed non-trivial examples of non-
commutative and non-cocommutative Hopf algebras (hence, non-trivial examples
of quantum groups), via his notion of bicrossproduct.
This type of structure involves group algebras and their duals; furthermore,
these structures have an interesting physical meaning, since they are an
algebraic representation of some quantum mechanics problems (see also [18,
Chap. 6]).
Finally, we will recall some other cross product constructions, among which the
group Weyl algebra and the (Drinfeld) quantum double, which provide further
non-trivial examples of quantum groups having a real physical meaning.
Moreover, if we consider such structures applied to the HBJ EBBH-algebra (of
[11, § 8]), then it is possible to get structures that represent an algebraic
formalization of some possible quantum mechanics problems on a groupoid, so as
to obtain new examples of elementary structures of a Quantum Mechanics on
groupoids.
$\bf 2.\ The\ Notion\ of\ \mbox{\bf E-semigroupoid}$
For the notions of E-groupoid and EBB-groupoid, with relative notations, we
refer to [11, § 1].
An E-semigroupoid is an algebraic system of the type
$(G,G^{(0)},G^{(1)},r,s,i,\star)$, where $G,G^{(0)},G^{(1)}$ are non-void sets
such that $G^{(0)},G^{(1)}\subseteq G$, $r,s:G\rightarrow G^{(0)}$,
$i:G^{(1)}\rightarrow G^{(1)}$ and $G^{(2)}=\\{(g_{1},g_{2})\in G\times
G;s(g_{1})=r(g_{2})\\}$, satisfying the following conditions (whenever the
relative $\star$-products are well-defined):
* •
1 $s(g_{1}\star g_{2})=s(g_{2}),r(g_{1}\star g_{2})=r(g_{1}),\ \
\forall(g_{1},g_{2})\in G^{(2)}$;
* •
2 $s(g)=r(g)=g,\ \ \forall g\in G^{(0)}$;
* •
3 $g\star\alpha(s(g))=\alpha(r(g))\star g=g,\ \ \forall g\in G$;
* •
4 $(g_{1}\star g_{2})\star g_{3}=g_{1}\star(g_{2}\star g_{3}),\ \ \forall
g_{1},g_{2},g_{3}\in G$;
* •
5 $\forall g\in G^{(1)},\exists g^{-1}\in G^{(1)}:g\star
g^{-1}=\alpha(r(g)),g^{-1}\star g=\alpha(s(g)),$
where $\alpha:G^{(0)}\hookrightarrow G$ is the immersion of $G^{(0)}$ into $G$,
and $i:g\rightarrow g^{-1}$. The maps $r,s$ are called, respectively, the range
and source, $G$ is the support, $G^{(0)}$ is the set of units, and $G^{(1)}$
is the set of inverses, of the given E-semigroupoid.
For simplicity, we write $r(g),s(g)$ instead of $\alpha(r(g)),\alpha(s(g))$.
We obtain an E-groupoid when $G^{(1)}=G$ (see [11, § 1]), whereas we obtain a
monoid when $G^{(0)}=\\{e\\}$. Moreover, if an E-semigroupoid also verifies the
condition of [11, § 1, $\bullet_{6}$], then we have an EBB-semigroupoid.
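To fix ideas, here is a minimal Python sketch of the simplest concrete instance, the pair groupoid on a finite set (in this case $G^{(1)}=G$, so it is in fact an E-groupoid): elements are ordered pairs, the units $G^{(0)}$ are the diagonal pairs, and the partial product concatenates composable pairs. The base set $X$ is an arbitrary illustrative choice; the script checks conditions $\bullet_{1}$-$\bullet_{5}$ exhaustively, writing $r(g),s(g)$ directly for the corresponding units as in the convention above.

```python
from itertools import product

X = ["a", "b", "c"]                      # illustrative base set
G = [(x, y) for x in X for y in X]       # support: the pair groupoid X x X
G0 = [(x, x) for x in X]                 # units
r = lambda g: (g[0], g[0])               # range
s = lambda g: (g[1], g[1])               # source
inv = lambda g: (g[1], g[0])             # inversion (here G^(1) = G)

def star(g1, g2):
    """Partial product: defined only when s(g1) = r(g2)."""
    if s(g1) != r(g2):
        return None
    return (g1[0], g2[1])

# bullet 1: range and source of a product
assert all(s(star(g1, g2)) == s(g2) and r(star(g1, g2)) == r(g1)
           for g1, g2 in product(G, G) if s(g1) == r(g2))
# bullet 2: units are their own range and source
assert all(r(e) == s(e) == e for e in G0)
# bullet 3: units act as partial identities
assert all(star(g, s(g)) == g and star(r(g), g) == g for g in G)
# bullet 4: associativity whenever all products are defined
assert all(star(star(g1, g2), g3) == star(g1, star(g2, g3))
           for g1, g2, g3 in product(G, G, G)
           if s(g1) == r(g2) and s(g2) == r(g3))
# bullet 5: every element has an inverse returning to the units
assert all(star(g, inv(g)) == r(g) and star(inv(g), g) == s(g) for g in G)
print("pair groupoid on", X, "satisfies bullet_1 .. bullet_5")
```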
$\bf 3.\ The\ Notion\ of\ Linear\ \mbox{\bf$\mathbb{K}$-algebroid}$
A linear algebra (over a commutative scalar field $\mathbb{K}$) is an
algebraic system of the type $(V_{\mathbb{K}},+,\cdot,m,\eta)$, where
$(V_{\mathbb{K}},+,\cdot)$ is a $\mathbb{K}$-linear space and $(V,+,m)$ is a
unital ring, satisfying certain compatibility conditions; in particular,
$(V,m)$ is a unital semigroup (that is, a monoid).
Following, in part, [21] (where the notion of vector groupoid is introduced),
if we are given an E-semigroupoid $(G,G^{(0)},G^{(1)},r,s,i,\star)$
such that
i) $G_{\mathbb{K}}=(G,+,\cdot)$ is a $\mathbb{K}$-linear space, and
$G^{(0)},G^{(1)}$ are its linear subspaces;
ii) $r,s$ and $i$, are linear maps;
iii) $g_{1}\star(\lambda g_{2}+\mu g_{3}-s(g_{1}))=\lambda(g_{1}\star
g_{2})+\mu(g_{1}\star g_{3})-g_{1}$, $(\lambda g_{1}+\mu g_{2}-r(g_{3}))\star
g_{3}=\lambda(g_{1}\star g_{3})+\mu(g_{2}\star g_{3})-g_{3},$ for every
$g_{1},g_{2},g_{3}\in G$ and $\lambda,\mu\in\mathbb{K}$ for which the relative
$\star$-products exist,
then we say that $(G_{\mathbb{K}},G^{(0)},G^{(1)},r,s,i,\star)$ is a linear
$\mathbb{K}$-algebroid.
$\bf 4.\ The\ Notion\ of\ \mbox{\bf E-Hopf Algebroid}$
Let $\mathfrak{G}_{\mathbb{K}}=(G_{\mathbb{K}},G^{(0)},G^{(1)},r,s,i,\star)$
be a linear $\mathbb{K}$-algebroid. If we set
$G^{(2)}=\\{g_{1}\otimes_{\star}g_{2}\in G\times G;s(g_{1})=r(g_{2})\\}\doteq
G\otimes_{\star}G,$ $m_{\star}(g_{1}\otimes_{\star}g_{2})=g_{1}\star g_{2},$
then, more specifically, with
$(\mathfrak{G}_{\mathbb{K}},m_{\star},\\{\eta_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\eta_{s}^{(e)}\\}_{e\in G^{(0)}})$, we’ll denote such a linear
$\mathbb{K}$-algebroid where, for each $e\in G^{(0)}$, we put
$\eta_{r}^{(e)},\eta_{s}^{(e)}:\mathbb{K}\rightarrow G$ in such a way that
$\eta_{r}^{(e)}(k)=e$ and $\eta_{s}^{(e)}(k)=e,\ \ \forall
k\in\mathbb{K}$. Hence, the unit and associativity properties $\bullet_{3}$
and $\bullet_{4}$ of the given linear $\mathbb{K}$-algebroid read as
follows (henceforth, every partial $\star$-operation, such as $\otimes_{\star}$,
that we consider is assumed to be defined):
1\.
$m_{\star}\circ(\eta_{r}^{(e)}\otimes_{\star}\mbox{id})=m_{\star}\circ(\mbox{id}\otimes_{\star}\eta_{s}^{(e^{\prime})}),\quad\forall
e,e^{\prime}\in G^{(0)},$
2\.
$m_{\star}\circ(\eta_{r}^{(e)}\otimes_{\star}m_{\star})=m_{\star}\circ(m_{\star}\otimes_{\star}\eta_{s}^{(e^{\prime})}),\quad\forall
e,e^{\prime}\in G^{(0)},$
where id is the identity of $G$.
Let us introduce, now, a cosemigroupoid structure as follows.
We define a partial comultiplication by a map $\Delta_{\star}:G\rightarrow
G\otimes_{\star}G$, in such a way that, when the following condition holds:
3\.
$(\mbox{id}\otimes_{\star}\Delta_{\star})\circ\Delta_{\star}=(\Delta_{\star}\otimes_{\star}\mbox{id})\circ\Delta_{\star}$,
then we say that $(\mathfrak{G}_{\mathbb{K}},\Delta_{\star})$ is a
cosemigroupoid.
If we require suitable homomorphism conditions to hold for the maps
$m_{\star},\Delta_{\star},\\{\eta_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\eta_{s}^{(e)}\\}_{e\in G^{(0)}}$, then we may establish a
certain quantum semigroupoid structure (in analogy to the quantum semigroup
structure - see [9, § 1, p. 800]) on
$(\mathfrak{G}_{\mathbb{K}},m_{\star},\\{\eta_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\eta_{s}^{(e)}\\}_{e\in G^{(0)}})$.
Following, in part, the notion of commutative Hopf algebroid given in [22,
Appendix 1], if we define certain counits by maps
$\varepsilon_{r}^{(e)},\varepsilon_{s}^{(e)}:G\rightarrow\mathbb{K}$ for each
$e\in G^{(0)}$, then we may require that further counit properties hold,
chosen among the following
$4_{1}.\ \
(\mbox{id}\otimes_{\star}\varepsilon_{r}^{(e)})\circ\Delta_{\star}=(\varepsilon_{r}^{(e^{\prime})}\otimes_{\star}\mbox{id})\circ\Delta_{\star}=\mbox{id},\quad\forall
e,e^{\prime}\in G^{(0)}$,
$4_{2}.\ \
(\mbox{id}\otimes_{\star}\varepsilon_{s}^{(e)})\circ\Delta_{\star}=(\varepsilon_{s}^{(e^{\prime})}\otimes_{\star}\mbox{id})\circ\Delta_{\star}=\mbox{id},\quad\forall
e,e^{\prime}\in G^{(0)}$,
$4_{3}.\ \
(\mbox{id}\otimes_{\star}\varepsilon_{r}^{(e)})\circ\Delta_{\star}=(\varepsilon_{s}^{(e^{\prime})}\otimes_{\star}\mbox{id})\circ\Delta_{\star}=\mbox{id},\quad\forall
e,e^{\prime}\in G^{(0)}$,
$4_{4}.\ \
(\mbox{id}\otimes_{\star}\varepsilon_{s}^{(e)})\circ\Delta_{\star}=(\varepsilon_{r}^{(e^{\prime})}\otimes_{\star}\mbox{id})\circ\Delta_{\star}=\mbox{id},\quad\forall
e,e^{\prime}\in G^{(0)}$,
with a set of compatibility conditions chosen among the following (or a
suitable combination of them)
$5_{1}.\ \
\varepsilon_{r}^{(e)}\circ\eta_{r}^{(e^{\prime})}=\varepsilon_{r}^{(e)}\circ\eta_{r}^{(e)}=\mbox{id},\
\
\eta_{r}^{(e)}\circ\varepsilon_{r}^{(e^{\prime})}=\eta_{r}^{(e)}\circ\varepsilon_{r}^{(e^{\prime})}=\mbox{id}_{G^{(0)}},\
\ \forall e,e^{\prime}\in G^{(0)}$,
$5_{2}.\ \
\varepsilon_{s}^{(e)}\circ\eta_{s}^{(e^{\prime})}=\varepsilon_{s}^{(e)}\circ\eta_{s}^{(e)}=\mbox{id},\
\
\eta_{s}^{(e)}\circ\varepsilon_{s}^{(e^{\prime})}=\eta_{s}^{(e)}\circ\varepsilon_{s}^{(e^{\prime})}=\mbox{id}_{G^{(0)}},\
\ \forall e,e^{\prime}\in G^{(0)}$,
$5_{3}.\ \
\varepsilon_{r}^{(e)}\circ\eta_{s}^{(e^{\prime})}=\varepsilon_{r}^{(e)}\circ\eta_{s}^{(e)}=\mbox{id},\
\
\eta_{r}^{(e)}\circ\varepsilon_{s}^{(e^{\prime})}=\eta_{r}^{(e)}\circ\varepsilon_{s}^{(e^{\prime})}=\mbox{id}_{G^{(0)}},\
\ \forall e,e^{\prime}\in G^{(0)}$,
$5_{4}.\ \
\varepsilon_{s}^{(e)}\circ\eta_{r}^{(e^{\prime})}=\varepsilon_{s}^{(e)}\circ\eta_{r}^{(e)}=\mbox{id},\
\
\eta_{s}^{(e)}\circ\varepsilon_{r}^{(e^{\prime})}=\eta_{s}^{(e)}\circ\varepsilon_{r}^{(e^{\prime})}=\mbox{id}_{G^{(0)}},\
\ \forall e,e^{\prime}\in G^{(0)};$
in such a case, we may define a suitable linear $\mathbb{K}$-coalgebroid
structure of the type
$(\mathfrak{G}_{\mathbb{K}},\Delta_{\star},\\{\varepsilon_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\varepsilon_{s}^{(e)}\\}_{e\in G^{(0)}})$, whence a linear
$\mathbb{K}$-coalgebroid structure of the type
$(\mathfrak{G}_{\mathbb{K}},m_{\star},\Delta_{\star},\\{\eta_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\eta_{s}^{(e)}\\}_{e\in G^{(0)}},\\{\varepsilon_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\varepsilon_{s}^{(e)}\\}_{e\in G^{(0)}})$.
If, when it is possible, we impose certain $\mathbb{K}$-algebroid homomorphism
conditions for the maps $\Delta_{\star},\\{\varepsilon_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\varepsilon_{s}^{(e)}\\}_{e\in G^{(0)}}$, and/or certain
$\mathbb{K}$-coalgebroid homomorphism conditions for the maps
$m_{\star},\\{\eta_{r}^{(e)}\\}_{e\in G^{(0)}},\\{\eta_{s}^{(e)}\\}_{e\in
G^{(0)}}$, then we may establish a certain linear $\mathbb{K}$-bialgebroid
structure on $\mathfrak{B}_{\mathfrak{G}_{\mathbb{K}}}$, having set
$\mathfrak{B}_{\mathfrak{G}_{\mathbb{K}}}=(\mathfrak{G}_{\mathbb{K}},m_{\star},\Delta_{\star},\\{\eta_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\eta_{s}^{(e)}\\}_{e\in G^{(0)}},\\{\varepsilon_{r}^{(e)}\\}_{e\in
G^{(0)}},\\{\varepsilon_{s}^{(e)}\\}_{e\in G^{(0)}})$.
Finally, if, for each $f,g\in End\ (G_{\mathbb{K}})$ such that
$f\otimes_{\star}g$ exists, we put
$G\stackrel{{\scriptstyle\Delta_{\star}}}{{\longrightarrow}}G\otimes_{\star}G\stackrel{{\scriptstyle
f\otimes_{\star}g}}{{\longrightarrow}}G\otimes_{\star}G\stackrel{{\scriptstyle
m_{\star}}}{{\longrightarrow}}G,$
then it is possible to consider the following (partial) convolution product
$f\tilde{\ast}_{\star}g\doteq
m_{\star}\circ(f\otimes_{\star}g)\circ\Delta_{\star}\in End\
(G_{\mathbb{K}}).$
Thus, an element $a\in End\ (G_{\mathbb{K}})$ may be said to be an antipode of a
$\mathbb{K}$-bialgebroid structure $\mathfrak{B}_{\mathfrak{G}_{\mathbb{K}}}$
when $a\tilde{\ast}_{\star}\mbox{id}$ and $\mbox{id}\tilde{\ast}_{\star}a$ exist and
$a{\tilde{\ast}}_{\star}\mbox{id}=\mbox{id}\tilde{\ast}_{\star}a=\mbox{\rm the $2^{\rm nd}$
condition of}\ 5_{i},$
if $\mathfrak{B}_{\mathfrak{G}_{\mathbb{K}}}$ has the property $5_{i},\ \
i=1,2,3,4$.
A $\mathbb{K}$-bialgebroid structure with at least one antipode is said to
have an E-Hopf algebroid structure. If $G^{(0)}=\\{e\\}$, then we obtain an
ordinary Hopf algebra structure.
$\bf 5.\ The\ Majid^{\prime}s\ quantum\ gravity\ model$
S. Majid, in [16] and [17], has introduced a particular noncommutative and
noncocommutative bicrossproduct Hopf algebra that should be viewed as a toy
model of a physical system in which both quantum effects (the
noncommutativity) and gravitational curvature effects (the noncocommutativity)
are unified. The Majid construction, being a noncommutative noncocommutative
Hopf algebra, may be viewed as a non-trivial example of a quantum group having
an important physical meaning. We will apply this model to the HBJ EBB-groupoid
algebra (possibly equipped with a Riemannian structure).
Following [15, Chap. III, § 1], an E-groupoid $(G,G^{(0)},r,s,\star)$ is said
to be a differentiable E-groupoid (or an E-groupoid manifold) if $G,G^{(0)}$
are differentiable manifolds, the maps $r,s:G\rightarrow G^{(0)}$ are
surjective submersions, the inclusion $\alpha:G^{(0)}\hookrightarrow G$ is
smooth, and the partial multiplication $\star:G^{(2)}\rightarrow G$ is smooth
(if we understand $G^{(2)}$ as submanifold of $G\times G$).
A locally trivial (for the notion of local triviality of a topological
groupoid, see [15, Chap. II, § 2]) differentiable E-groupoid is said to be a
Lie E-groupoid.
Following [10], a differentiable E-groupoid $(G,G^{(0)},r,s,\star)$ is said to
be a Riemannian E-groupoid if there exists a metric $g$ over $G$ and a metric
$g_{0}$ over $G^{(0)}$ in such a way that the inversion map $i:G\rightarrow G$
is an isometry, and $r,s$ are Riemannian submersions of $(G,g)$ onto
$(G^{(0)},g_{0})$.
Let
$\mathcal{A}_{HBJ}(=\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ}(\mathcal{F}_{I}))\cong\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{Br}(I)))$
be the HBJ EBB-algebra of [11, § 6]: as seen there, it represents the algebra
of physical observables according to Matrix Quantum Mechanics.
The question of which metric to choose, for instance over $\mathcal{A}_{HBJ}$,
or over
$\mathcal{G}_{HBJ}(=\mathcal{G}_{HBJ}(\mathcal{F}_{I})\cong\mathcal{G}_{Br}(I))$
(for this last groupoid isomorphism, see [11, § 5]), is not trivial and not a
priori dictated (see [12, II] for a discussion of a similar question, related
to an attempt at metrization of the symplectic phase-space manifold of a
dynamical system so that a classical Brownian motion can be defined on it).
Let us introduce Majid's toy model of quantum mechanics combined with
gravity, following [16, 17, 18].
S. Majid ([16]) follows the abstract quantization formulation of I. Segal,
whereby any abstract $\mathbb{C}^{\ast}$-algebra can be considered as the
algebra of observables of a quantum system, and the positive linear
functionals on it as the states.
He first considers a purely algebraic formulation of the classical mechanics of
geodesic motion on a Riemannian spacetime manifold, precisely on a homogeneous
spacetime, following the well-known Mackey quantization procedure on
homogeneous spacetimes (see [18, Chap. 6], and references therein).
The basis of Majid's physical picture lies in a new interpretation of the
semidirect product algebra as quantum mechanics on a homogeneous spacetime
(according to [8]). Namely, he considers the semidirect product
$\mathbb{K}[G_{1}]\ltimes_{\alpha}\mathbb{K}(G_{2})$ where $G_{1}$ is a finite
group that acts, through $\alpha$, on a set $G_{2}$; here, $\mathbb{K}[G_{1}]$
denotes the group algebra over $G_{1}$, whereas $\mathbb{K}(G_{2})$ denotes
the algebra of $\mathbb{K}$-valued functions on $G_{2}$.
[17, section 1.1] motivates the search for self-dual algebraic structures in
general, and Hopf algebras in particular, so that it is natural to look for a
self-dual structure on $\mathbb{K}[G_{1}]\ltimes_{\alpha}\mathbb{K}(G_{2})$,
as follows.
To this end, we assume that $G_{2}$ is also a group that acts back by an
action $\beta$ on $G_{1}$ as a set; so, one can equivalently view $\beta$ as
inducing a coaction of $\mathbb{K}[G_{2}]$ on $\mathbb{K}(G_{1})$, and define
the corresponding semidirect coproduct coalgebra, which we denote
$\mathbb{K}[G_{1}]^{\beta}\rtimes\mathbb{K}(G_{2})$. Such a bicrossproduct
structure will be denoted
$\mathbb{K}[G_{1}]^{\beta}\bowtie_{\alpha}\mathbb{K}(G_{2})$.
Majid's model fits together the semidirect product by $\alpha$ with the
semidirect coproduct by $\beta$, to form a Hopf algebra; in this way, we obtain
certain (compatibility) constraints on $(\alpha,\beta)$ that give a
bicrossproduct Hopf algebra structure to
$\mathbb{K}[G_{1}]\otimes\mathbb{K}(G_{2})$, which is of self-dual type;
this structure is non-commutative [non-cocommutative] when $\alpha$ [$\beta$]
is non-trivial.
Majid’s model of quantum gravity starts from the physical meaning of a
bicrossproduct structure relative to the case $\mathbb{K}=\mathbb{C}$,
$G_{1}=G_{2}=\mathbb{R}$ and $\alpha_{\rm left\ u}(s)=\hbar u+s$, with $\hbar$
a dimensionful parameter (Planck’s constant), achieved as a particular case of
the classical self-dual $\ast$-Hopf algebra of observables (according to I.
Segal) $\mathbb{C}^{\ast}(G_{1})\otimes\mathbb{C}(G_{2})$, where $G_{1},G_{2}$
have some group structure and $\mathbb{C}^{\ast}(G_{1})$ is the convolution
$\mathbb{C}^{\ast}$-algebra on $G_{1}$. Further, with suitable compatibility
conditions (see [16, (9)] or [18, (6.15)]) for $(\alpha,\beta)$ $-$ which can
be viewed as certain (Einstein) "second-order gravitational field equations"
for $\alpha$ (that induces metric properties on $G_{2}$), with back-reaction
$\beta$ playing the role of an auxiliary physical field $-$ we obtain a
bicrossproduct Hopf algebra
$\mathbb{C}^{\ast}(G_{1})^{\beta}\bowtie_{\alpha}\mathbb{C}(G_{2})$, with the
following self-duality
$(\mathbb{C}^{\ast}(G_{1})^{\beta}\bowtie_{\alpha}\mathbb{C}(G_{2}))^{\ast}\cong\mathbb{C}^{\ast}(G_{2})^{\alpha}\bowtie_{\beta}\mathbb{C}(G_{1})$.
Moreover (see [16]), in the Lie group setting, the non-commutativity of
$G_{2}$ (whence the non-cocommutativity of the coalgebra structure) means
that the intrinsic torsion-free connection on $G_{2}$ has curvature
(cogravity); that is to say, the non-cocommutativity plays the role of a
Riemannian curvature on $G_{2}$ (in the sense of non-commutative geometry).
We now consider a simple quantization problem (see [17, § 1.1.2]). Let
$G_{1}=G_{2}$ be a group and $\alpha$ the left action; hence, the algebra
$\mathcal{W}(G)\doteq\mathbb{K}^{\ast}[G]\ltimes_{\rm left}\mathbb{K}(G)$ will
be called the group Weyl algebra of $G$, and it represents the algebraic
quantization of a particle moving on $G$ by translations.
Finally, if we want to apply these considerations to the case
$G_{1}=G_{2}=\mathcal{G}_{HBJ}$, then we must consider both the finite-
dimensional and the infinite-dimensional case, so that it is possible
to determine the dual (or the restricted dual) of
$\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$.
Taking into account the physical meaning of $\mathcal{G}_{HBJ}$, it follows
that it is possible to consider the above-mentioned algebraic structures (with
their physical meaning) in relation to the case study
$G_{1}=G_{2}=\mathcal{G}_{HBJ}$ (such a groupoid may possibly be endowed with a
further topological and/or metric structure, such as a Riemannian one), with a
consequent physical interpretation (where possible), so as to get non-trivial
examples of quantum groupoids (if we regard a non-commutative non-cocommutative
Hopf algebra as a model of a quantum group) having a possible quantum-mechanical
meaning.
$\bf 6.\ A\ particular\ Weyl\ Algebra\ (and\ other\ structures)$
The cross and bicross (or double cross) constructions provides the basic
algebraic structures on which to build up non-trivial examples of quantum
groups, even in the infinite-dimensional case.
In this paragraph, we present some examples of such constructions.
Let
$\mathcal{F}_{HBJ}\doteq\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ}(\mathcal{F}_{I}))$
be the linear $\mathbb{K}$-algebra of $\mathbb{K}$-valued functions defined on
$\mathcal{G}_{HBJ}$.
Let $(G,\cdot)$ be a finite group. If $\mathcal{F}_{\mathbb{K}}(G)$ is the
linear $\mathbb{K}$-algebra of $\mathbb{K}$-valued functions on $G$, then,
since
$\mathcal{F}_{\mathbb{K}}(G)\otimes\mathcal{F}_{\mathbb{K}}(G)\cong\mathcal{F}_{\mathbb{K}}(G\times
G)$, it follows that such an algebra can be endowed with a natural structure
of Hopf algebra by the following data
1\. coproduct
$\Delta:\mathcal{F}_{\mathbb{K}}(G)\rightarrow\mathcal{F}_{\mathbb{K}}(G\times
G)$ given by $\Delta(f)(g_{1},g_{2})=f(g_{1}\cdot g_{2})$ for all
$g_{1},g_{2}\in G$;
2\. counit $\varepsilon:\mathcal{F}_{\mathbb{K}}(G)\rightarrow\mathbb{K}$,
with $\varepsilon(f)=f(e)$ for each $f\in\mathcal{F}_{\mathbb{K}}(G)$, where $e$ is the unit of $G$;
3\. antipode
$S:\mathcal{F}_{\mathbb{K}}(G)\rightarrow\mathcal{F}_{\mathbb{K}}(G)$, defined
by $S(f)(g)=f(g^{-1})$ for all $g\in G$.
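A minimal Python sketch of this standard structure on the basis of delta functions, for the illustrative choice $G=\mathbb{Z}_{3}$ (written additively): the counit is evaluation at the unit $e$, as in item 2 above, and the script verifies the counit and antipode axioms numerically. The implementation details (dictionaries indexed by group elements) are assumptions of this illustration only.

```python
n = 3                                   # illustrative: the cyclic group Z_3 under addition mod n
G = list(range(n))
e = 0
mul = lambda g, h: (g + h) % n
inv = lambda g: (-g) % n

def delta(k):
    # the delta function at k, an element of F_K(G) stored as {g: value}
    return {g: (1.0 if g == k else 0.0) for g in G}

def coproduct(f):
    # Delta(f)(g1, g2) = f(g1 . g2)
    return {(g1, g2): f[mul(g1, g2)] for g1 in G for g2 in G}

def counit(f):
    # epsilon(f) = f(e), evaluation at the group unit
    return f[e]

def antipode(f):
    # S(f)(g) = f(g^{-1})
    return {g: f[inv(g)] for g in G}

for k in G:
    f = delta(k)
    Df = coproduct(f)
    # counit axiom: (epsilon tensor id) o Delta = id
    assert all(abs(sum(Df[(g1, g2)] for g1 in G if g1 == e) - f[g2]) < 1e-12 for g2 in G)
    # antipode axiom: m o (S tensor id) o Delta = eta o epsilon, with m the pointwise product
    assert all(abs(sum(antipode(delta(g1))[g] * delta(g2)[g] * Df[(g1, g2)]
                       for g1 in G for g2 in G) - counit(f)) < 1e-12 for g in G)
print("counit and antipode axioms verified for F_K(Z_%d)" % n)
```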
If we consider a finite groupoid instead of a finite group, then 3. still holds,
but 1. and 2. are no longer valid as stated, because of the partial nature of the
groupoid product.
If $\mathcal{G}=(G,G^{(0)},r,s,\star)$ is a finite groupoid, then, following
[24, § 2.2], the most natural coalgebra structure on
$\mathcal{F}_{\mathbb{K}}(\mathcal{G})$ is given by the following data:
1'. coproduct $\Delta(f)(g_{1},g_{2})=f(g_{1}\star g_{2})$ if $(g_{1},g_{2})\in
G^{(2)}$, and $=0$ otherwise;
2’. counit $\varepsilon(f)=\sum_{e\in G^{(0)}}f(e)$.
In such a way, via 1’, 2’ and 3, it is possible to consider a Hopf algebra
structure on $\mathcal{F}_{\mathbb{K}}(\mathcal{G})$.
In the finite-dimensional case, we have
$\mathcal{A}^{\ast}_{\mathbb{K}}(\mathcal{G})\cong\mathcal{F}_{\mathbb{K}}(\mathcal{G})$
(see [18, Example 1.5.4]) as regards the dual Hopf algebra
$\mathcal{A}^{\ast}_{\mathbb{K}}(\mathcal{G})$ of
$\mathcal{A}_{\mathbb{K}}(\mathcal{G})$, whereas, in the infinite-dimensional
case, we have the restricted dual
$\mathcal{A}^{\ast}_{\mathbb{K}}(\mathcal{G})\cong\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G})\subseteq\mathcal{F}_{\mathbb{K}}(\mathcal{G})$;
hence, in our case $\mathcal{G}=\mathcal{G}_{HBJ}$, following [18, Chap. 6] it
is possible to consider a non-degenerate dual pairing, say $\langle\ ,\
\rangle$, between $\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ and
$\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ (in the finite-dimensional
case, it is
$\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G}_{HBJ})=\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})$).
Hence, if we define the action
$\alpha:(b,a)\rightarrow b\rhd a=\langle b,a_{(1)}\rangle a_{(2)}\qquad\forall
a\in\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ}),\ \forall
b\in\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G}_{HBJ}),$
then it is possible to define the left cross product algebra
$\mathcal{H}(\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ}))\doteq\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})\ltimes_{\alpha}\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G}_{HBJ}),$
called the Heisenberg double of $\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$.
If $V$ is an $A$-module algebra, with $A$ a Hopf algebra, then let $V\ltimes
A$ be the corresponding left cross product, and $(v\otimes a)\rhd w=v(a\rhd
w)$ the corresponding Schrödinger representation (see [18, § 1.6]) of $V$ on
itself.
If $V$ has a Hopf algebra structure and $V^{(o)}$ is its restricted dual via
the pairing $\langle\ ,\ \rangle$, then (see [18, Chap. 6]) the action
$\sigma$ given by $\phi\rhd v=v_{(1)}\langle\phi,v_{(2)}\rangle$ for all $v\in
V,\phi\in V^{(o)}$, makes $V$ into a $V^{(o)}$-module algebra and $V\otimes
V^{(o)}$ into an algebra with product given by
$(v\otimes\phi)(w\otimes\psi)=vw_{(1)}\otimes\langle
w_{(2)},\phi_{(1)}\rangle\phi_{(2)}\psi,$
we then let $\mathcal{W}(V)\doteq V\ltimes_{\sigma}V^{(o)}$ be the
corresponding left cross product algebra. Hence, it is possible to prove (see
[18, Chap. 6]) that the related Schrödinger representation (see [18, § 6.1, p.
222]) gives rise to an algebra isomorphism
$\chi:V\ltimes_{\sigma}V^{(o)}\rightarrow Lin(V)$ (= the algebra of
$\mathbb{K}$-endomorphisms of $V$), given by
$\chi(v\otimes\psi)w=vw_{(1)}\langle\psi,w_{(2)}\rangle$;
$\mathcal{W}(V)\doteq V\ltimes_{\sigma}V^{(o)}$ is said to be the (restricted)
group Weyl algebra of the Hopf algebra $V$.
This last construction is an algebraic generalization of the usual Weyl
algebra of Quantum Mechanics on a group (see [17, § 1.1.2]), whose finite-
dimensional prototype is as follows.
Let $G$ be a finite group, and let us consider the strict dual pair given by
the $\mathbb{K}$-valued functions on $G$, say $\mathbb{K}(G)$, and the group
algebra of $G$, say $\mathbb{K}G$. Hence, the right action of $G$ on itself
given by $\psi_{u}(s)=su$, establishes a left cross product algebra structure,
say $\mathbb{K}(G)\ltimes\mathbb{K}G$, on $\mathbb{K}(G)\otimes\mathbb{K}G$;
such an action also induces (see [18, Chap. 6]) a left regular
representation of $G$ into $\mathbb{K}G$, so that we can consider the related
Schrödinger representation generated by it and by the action of $\mathbb{K}(G)$
on itself by pointwise product. Thus, if $V=\mathbb{K}(G)$, with
$\mathbb{K}(G)$ endowed with the usual Hopf algebra structure, then we have
that the Weyl algebra $\mathbb{K}(G)\ltimes\mathbb{K}G$ (with
$V^{(o)}=\mathbb{K}(G)^{\ast}=\mathbb{K}G$ since $G$ is finite) is isomorphic
to $Lin(\mathbb{K}(G))$ via the Schrödinger representation. As already said in
the previous paragraph, such a Weyl algebra formalizes the algebraic
quantization of a particle moving on $G$ by translations.
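A small Python check of this isomorphism in the finite-dimensional prototype, for the illustrative choice $G=\mathbb{Z}_{4}$: delta functions in $\mathbb{K}(G)$ act on $\mathbb{K}(G)$ by pointwise multiplication, group elements act by translation, and the $|G|^{2}$ products of these two kinds of operators span all of $Lin(\mathbb{K}(G))$, as the stated isomorphism requires. The translation convention used below is one natural choice and, like the particular group, is an assumption of this sketch.

```python
import numpy as np

n = 4                                    # illustrative: G = Z_n under addition mod n
I = np.eye(n)

def mult_op(g):
    # pointwise multiplication by the delta function at g (an element of K(G))
    return np.diag(I[g])

def trans_op(u):
    # translation of functions by u, (T_u f)(s) = f(s + u)  -- one natural convention
    return np.array([[1.0 if (s + u) % n == t else 0.0 for t in range(n)] for s in range(n)])

# The cross product algebra is generated by products mult_op(g) @ trans_op(u);
# there are n*n of them, and together they span all of Lin(K(G)), i.e. all n x n matrices.
products = np.array([(mult_op(g) @ trans_op(u)).ravel() for g in range(n) for u in range(n)])
print("rank =", np.linalg.matrix_rank(products), " (dim Lin(K(G)) =", n * n, ")")
```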
If we apply what has been said above to $\mathcal{G}_{HBJ}$ in the finite-
dimensional case (that is, when card $I<\infty$, which corresponds to a finite
number of energy levels $-$ see [11, § 6]), since
$V^{(o)}=\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G}_{HBJ})=\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$,
then we have that
$\mathcal{W}(\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ}))=\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})\ltimes\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$
represents the algebraic quantization of a particle moving on the groupoid $\mathcal{G}_{HBJ}$ by translations (from here on, we may speak of a Quantum Mechanics on a groupoid), remembering the quantum meaning (according to I. Segal) of $\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ (as the set of states) and $\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ (as the set of observables).
Instead, in the infinite-dimensional case, we have that
$\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ is isomorphic to a sub-
Hopf algebra of $\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ (this is the HBJ
EBBH-algebra $-$ see [11, § 8]), so that
$\mathcal{W}(\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ}))=\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})\ltimes\mathcal{F}^{(o)}_{\mathbb{K}}(\mathcal{G}_{HBJ})\subseteq\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})\ltimes\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ}).$
Finally, the right adjoint action (see [18, § 1.6]) of $\mathcal{G}_{HBJ}$ on
itself given by $\psi_{g}(h)=g^{-1}\star h\star g$ if it exists, and $0$ otherwise, makes $\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ into an
$\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$-module algebra. In the finite-
dimensional case, we have
$\mathcal{A}^{\ast}_{\mathbb{K}}(\mathcal{G}_{HBJ})\cong\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})$,
so that
$\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})=\mathcal{A}^{\ast\ast}_{\mathbb{K}}(\mathcal{G}_{HBJ})\cong\mathcal{F}^{\ast}_{\mathbb{K}}(\mathcal{G}_{HBJ}),$
whence $\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})$ is a
$\mathcal{F}^{\ast}_{\mathbb{K}}(\mathcal{G}_{HBJ})$-module algebra too.
Therefore, we may consider the following left cross product algebra
$\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})\ltimes\mathcal{F}^{\ast}_{\mathbb{K}}(\mathcal{G}_{HBJ})\cong\mathcal{F}_{\mathbb{K}}(\mathcal{G}_{HBJ})\ltimes\mathcal{A}_{\mathbb{K}}(\mathcal{G}_{HBJ})$
that the tensor product coalgebra makes into a Hopf algebra, called the
(Drinfeld) quantum double of $\mathcal{G}_{HBJ}$, and denoted
$\mathcal{D}(\mathcal{G}_{HBJ})$; even in the finite-dimensional case, it
represents the algebraic quantization of a particle constrained to move on
conjugacy classes of $\mathcal{G}_{HBJ}$ (quantization on homogeneous space
over a groupoid).
Besides, it has been proved, for a finite group $G$, that the (Drinfeld) quantum double $\mathcal{D}(G)$ has a quasitriangular structure (see [18, Chap. 6]) given by
$(\delta_{s}\otimes u)(\delta_{t}\otimes v)=\delta_{u^{-1}su,t}\,\delta_{s}\otimes uv,\qquad\Delta(\delta_{s}\otimes u)=\sum_{ab=s}\delta_{a}\otimes u\otimes\delta_{b}\otimes u,$
$\varepsilon(\delta_{s}\otimes u)=\delta_{s,e},\qquad S(\delta_{s}\otimes u)=\delta_{u^{-1}s^{-1}u}\otimes u^{-1},\qquad R=\sum_{u\in G}\delta_{u}\otimes e\otimes 1\otimes u,$
where we have identified the dual of $\mathbb{K}G$ with $\mathbb{K}(G)$ via
the idempotents $p_{g},g\in G$ such that $p_{g}p_{h}=\delta_{g,h}p_{g}$ (see
[19, § 2.5] and [18, § 1.5.4]).
Such a quantum double represents the algebra of quantum observables of a
certain physical system with symmetry group $G$.
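For a small concrete case, the following Python sketch (an illustration added here, under the structure maps written above, with the surviving delta function taken to be $\delta_{s}$ and with $G=S_{3}$) checks associativity of the quantum double product on all basis elements $\delta_{s}\otimes u$.

```python
from itertools import permutations, product

# S3 as permutation tuples; composition (a*b)(i) = a(b(i)).
S3 = list(permutations(range(3)))

def mul(a, b):
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    return tuple(sorted(range(3), key=lambda i: a[i]))

def dmul(X, Y):
    """Bilinear extension of (delta_s (x) u)(delta_t (x) v) = delta_{u^{-1}su, t} (delta_s (x) uv)."""
    out = {}
    for (s, u), x in X.items():
        for (t, v), y in Y.items():
            if mul(mul(inv(u), s), u) == t:
                key = (s, mul(u, v))
                out[key] = out.get(key, 0) + x * y
    return out

# Associativity check of the product on all triples of basis elements.
basis = [{(s, u): 1} for s, u in product(S3, S3)]
assert all(dmul(dmul(a, b), c) == dmul(a, dmul(b, c))
           for a in basis for b in basis for c in basis)
print("D(S3) product is associative on all", len(basis), "basis elements")
```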
Hence, even in the finite-dimensional case (the infinite-dimensional case is not so immediate), we may consider, with suitable modifications, an
analogous quasitriangular structure on $\mathcal{G}_{HBJ}$, obtaining the
(Drinfeld) quantum double $\mathcal{D}(\mathcal{G}_{HBJ})$ on
$\mathcal{G}_{HBJ}$; thus, if we consider a quasitriangular Hopf algebra as a
model of quantum group, the (Drinfeld) quantum double
$\mathcal{D}(\mathcal{G}_{HBJ})$ provides a non-trivial example of quantum
group having a (possible) quantic meaning (related to a quantum mechanics on a
groupoid).
## 7. Conclusions
From what has been said above, and in [11], a possible role emerges for groupoid structures in Quantum Mechanics and Quantum Field Theory.
For instance, such groupoid structures (possibly equipped with further, more specific structures, such as topological or metric ones) might play a prominent role in Renormalization, as well as in Majid's model of quantum gravity. Indeed, a central problem in quantum gravity concerns its nonrenormalizability due to the existence of UV divergences; in turn, the UV divergences arise from the assumption that the classical configurations being summed over are defined on a continuum.
So, the discreteness given by groupoid structures may turn out to be of some usefulness in such (renormalization) QFT problems.
## References
* [1] G. Böhm, Hopf algebroids, in: Handbook of Algebra, edited by M. Hazewinkel, North-Holland, Amsterdam, 2009.
* [2] G. Böhm, F. Nill, K. Szlachányi, Weak Hopf Algebras I. Integral Theory and C∗-structure, Journal of Algebra, 221 (1999) 385-438.
* [3] M. Born, P. Jordan, Zur Quantenmechanik, Zeitschrift für Physik, 34 (1925) 858-888.
* [4] R. Brown, From groups to groupoids: a brief survey, Bull. London Math. Soc., 19 (1987) 113-134.
* [5] T.K. Chen, Iterated Path-Integrals, Bull. AMS, 83 (1977) 831-879.
* [6] A. Connes, Noncommutative Geometry, Academic Press, New York, 1994.
* [7] A. Connes, D. Kreimer, Hopf algebras, renormalization and noncommutative geometry, Comm. Math. Phys., 199 (1998) 203-242.
* [8] H.D. Doebner, J. Tolar, Quantum mechanics on homogeneous spaces, Journal of Mathematical Physics, 16 (1975) 975-984.
* [9] V.G. Drinfeld, Quantum Groups, in: Proceedings of the International Congress of Mathematicians, Berkeley 1986, vol. I, ed. by A. Gleason, Amer. Math. Soc., Providence, RI, 1987.
* [10] E. Gallego, L. Gualandri, G. Hector, A. Reventós, Groupoides riemanniens, Publ. Mat., 33 (3) (1989) 417-422.
* [11] G. Iurato, A possible quantic motivation of the structure of quantum group, JP Journal of Algebra, Number Theory and Applications, 20 (1) (2011) 77-99.
* [12] J.R. Klauder, Quantization is Geometry, after all, Annals of Physics, 188 (1988) 120-141.
* [13] D. Kreimer, Chen’s iterated integral represents the operator product expansion, Adv. Theor. Math. Phys., 3 (1999) 627-670.
* [14] J-H. Lu, Hopf algebroids and quantum groupoids, Intern. Jour. of Math., 7 (1996) 47-70.
* [15] K. Mackenzie, Lie groupoids and Lie algebroids in Differential Geometry, Cambridge University Press, Cambridge, 1987.
* [16] S. Majid, Hopf algebras for physics at the Planck scale, Class. Quantum Grav., 5 (1987) 1587-1606.
* [17] S. Majid, Physics for Algebraists: Non-commutative and Non-cocommutative Hopf Algebras by a Bicrossproduct Construction, J. Algebra, 130 (1990) 17-64.
* [18] S. Majid, Foundations of Quantum Group Theory, Cambridge University Press, Cambridge, 1995.
* [19] D. Nikshych, L. Vainerman, Finite Quantum Groupoid and their applications, in: New Directions in Hopf Algebras, edited by S. Montgomery and H.J. Schneider, MSRI Publications, vol. 43, 2002.
* [20] D. Nikshych, L. Vainerman, Algebraic versions of a finite dimensional quantum groupoid, Lect. Notes in Pure and Appl. Math., 209 (2000) 189-221.
* [21] V. Popuţa, G. Ivan, Vector groupoids, Analele Stiintifice ale Universitati Ovidius Constanta, Serie Matematica, (to appear).
* [22] D. Ravenel, Complex cobordism and stable homotopy groups of spheres, Academic Press, New York, 1986.
* [23] J.C. Slater, Quantum Theory of Matter, McGraw-Hill Book Company, New York, 1968.
* [24] J.M. Vallin, Relative matched pairs of finite groups from depth two inclusion of Von Neumann algebras to quantum groupoid, Journal of Functional Analysis, 254 (2) (2008) 2040-2068.
* [25] A. Weinstein, Groupoids: Unifying Internal and External Symmetry, Notices of AMS, 43 (1996) 744-752.
* [26] R. Wulkenhaar, Hopf Algebras in Renormalization and Noncommutative Geometry, in: Noncommutative Geometry and the Standard Model of Elementary Particle Physics, edited by F. Scheck, H. Upmeier and W. Werner, LNP 596, Springer-Verlag, Berlin, 2002.
* [27] P. Xu, Quantum groupoids, Comm. Math. Phys., 216 (2001) 539-581.
# Rectification at Graphene-Semiconductor Interfaces: Zero-Gap Semiconductor
Based Diodes
S. Tongay1,2,3, M. Lemaitre1, X. Miao2, B. Gila2,3, B. R. Appleton2,3, and A.
F. Hebard2 1 Materials Science and Engineering, University of Florida,
Gainesville, Florida 32611 2 Department of Physics, University of Florida,
Gainesville, FL 32611 3 Nanoscience Institute for Medical and Engineering
Technology, University of Florida, Gainesville, FL 32611
###### Abstract
Using current-voltage ($I$-$V$), capacitance-voltage ($C$-$V$) and electric
field modulated Raman measurements, we report on the unique physics and
promising technical applications associated with the formation of Schottky
barriers at the interface of a one-atom-thick zero-gap semiconductor
(graphene) and conventional semiconductors. When chemical vapor deposited
graphene is transferred onto n-type Si, GaAs, 4H-SiC and GaN semiconductor
substrates, there is a strong van der Waals attraction that is accompanied by
charge transfer across the interface and the formation of a rectifying
(Schottky) barrier. Thermionic emission theory in conjunction with the
Schottky-Mott model within the context of bond-polarization theory provides a
surprisingly good description of the electrical properties. Applications, such as sensors, where in forward bias there is exponential sensitivity to changes in the Schottky barrier height due to the presence of adsorbates on the graphene, or analogue devices for which Schottky barriers are integral components, are promising because of graphene’s mechanical stability, its
resistance to diffusion, its robustness at high temperatures and its
demonstrated capability to embrace multiple functionalities.
###### pacs:
72.80.Vp, 81.05.Ue, 73.30.+y, 73.40.Ei
## I. INTRODUCTION
Single atom layers of carbon (graphene) have been studied intensively after
becoming experimentally accessible with techniques such as mechanical
exfoliationNovoselov et al. (2004), thermal decomposition on SiC
substratesBerger et al. (2004) and chemical vapor deposition (CVD) Li et al.
(2009); Kim et al. (2009). Graphene is a zero-gap semiconductor (ZGS) with an
exotic linearly dispersing electronic structure, high optical transparency,
exceptional mechanical stability, resilience to high temperatures and an in-
plane conductivity with unusually high mobilityCastroNeto et al. (2009).
Accordingly, graphene has been proposed as a novel material for incorporation
into devices ranging from Schottky light emitting diodes (LEDs) Li et al.
(2010a); Fan et al. (2011); Chen et al. (2011) to field effect transistors
(FETs) Chung et al. (2010); Lin et al. (2009). Although integration of
graphene into semiconductor devices is appealing, there is still very little
known about the interface physics at graphene-semiconductor junctions. To this
end, graphene/Si junctions showing successful solar cell operation have been
produced by transferring either CVD-prepared Li et al. (2010a) or exfoliated
Chen et al. (2011) graphene sheets onto Si substrates. The resulting diodes
have shown ideality factors (measure of deviation from thermionic emission)
varying from $\sim$1.5 Li et al. (2010a) which is close to the ideal value of
unity, to values in the range $\sim$5-30 on exfoliated graphene Chen et al.
(2011) implying that additional non-thermionic current carrying processes
exist at the graphene/Si interface. Nevertheless these promising results point
to the need for additional research on integrating graphene with
technologically important semiconductors.
Figure 1: (a) Graphene/semiconductor diode sample geometry where the $J-V$
characteristics were measured between ohmic contact (ground) and graphene
(high) (b) Hall bar geometry for measurements of the carrier density of
graphene. In this configuration the graphene does not make contact with the
semiconductor. (c) Optical image of the graphene/Au/SiO2 \- graphene/Si
transition edge after the graphene transfer. (d) Scanning electron microscope
image of Cu foils after the CVD graphene growth, showing the formation of grains that are large with respect to the 10 $\mu$m scale bar.
Here we report rectification (diode) effects at ZGS-semiconductor (i.e., graphene-semiconductor) interfaces on a surprisingly wide variety of semiconductors. In addition to current-voltage measurements, we utilize Hall, capacitance-voltage and electric field modulated Raman techniques to gain
heretofore unrecognized insights into the unique physics occurring at the
graphene/semiconductor interface. We find that when CVD-prepared graphene
sheets are transferred onto n-type Si, GaAs, 4H-SiC and GaN semiconductor
substrates, equilibration of the Fermi level throughout the system gives rise
to a charge transfer between the graphene and semiconductor, thereby creating
strong rectification (Schottky effect) at the interface. We find that
graphene’s Fermi level ($E_{F}^{gr}$) is subject to variation during charge
transfer across the graphene-semiconductor interface as measured by in-situ
Raman spectroscopy measurements, unlike conventional metal-semiconductor
diodes where the Fermi level (EF) of the metal stays constant due to a high
density of states at the Fermi level. These effects become particularly
pronounced at high reverse bias voltages when the induced negative charge in
the graphene is sufficient to increase $E_{F}^{gr}$ and give rise to increased
current leakage. Our observations and interpretation based on a modification
of thermionic emission theory not only provide a new understanding for the
development of high frequency, high power, and high temperature Schottky based
devices, such as metal-semiconductor field effect transistors (MESFETs) and
high electron mobility transistors (HEMTs), but also allow us to integrate
graphene into semiconductor devices while simultaneously preserving the
superior properties of the graphene and avoiding chemical-structural
modifications to the semiconductor.
## II. EXPERIMENTAL METHODS
Our diodes are fabricated by transferring large scale graphene sheets grown by
chemical vapor deposition (CVD) directly onto the semiconductor under
investigation and allowing Van der Waals attraction to pull the graphene into
intimate contact with the semiconductor. Large-area single layer graphene
sheets were synthesized on Cu foils via a multi-step, low-vacuum CVD process
similar to that used in Ref. Li et al. (2010b). A quartz tube furnace
operating in CVD-mode was loaded with 25-50 $\mu$m-thick Cu foils (Puratronic,
99.9999% Cu), evacuated to 4 mTorr, and subsequently heated to 500∘C under a
25 sccm flow of H2 at 325 mTorr. After a 30 minute soak, the temperature was raised to 1025∘C for 60 minutes to promote Cu grain growth (mean grain size exceeds 5 mm$^2$ as determined by optical microscopy). An initial low-density
nucleation and slow growth phase was performed at 1015∘C for 100 minutes with
a mixture of CH4 and H2 at a total pressure of 90 mTorr and flows of $\leq$
0.5 and 2 sccm, respectively. Full coverage was achieved by dropping the
temperature to 1000∘C for 10 minutes while increasing the total pressure and
methane flow to 900 mTorr and 30 sccm, respectively. A 1.5 $\mu$m-thick film
of PMMA (MicroChem, 11$\%$ in Anisole) was then spin-cast onto the Cu foils at
2000 rpm for 60 seconds. The exposed Cu was etched in an O2 plasma to remove
unwanted graphene from the backside of the samples. The PMMA supported films
were then etched overnight in a 0.05 mg/L solution of Fe(III)NO3 (Alfa Aesar)
to remove the copper. The graphene-PMMA films were then washed in deionized
water, isopropyl alcohol (IPA), and buffered oxide etch for 10 minutes, each.
After growth and transfer, the graphene films were characterized and
identified using a Horiba-Yvon micro-Raman spectrometer with green, red and UV
lasers.
Commercially available semiconducting wafers were purchased from different
vendors. $n$-Si and $n$-GaAs samples were doped with P (2-6$\times$$10^{15}$
cm-3) and Si (3-6$\times$$10^{16}$ cm-3) respectively. Epilayers of $n$-GaN
and $n$-4H-SiC, 3-6 $\mu$m-thick, were grown on semi-insulating sapphire
substrates with Si (1-3$\times$$10^{16}$ cm-3) and N (1-3$\times$$10^{17}$
cm-3) dopants. During the sample preparation and before the graphene transfer,
the wafers were cleaned using typical surface cleaning techniques. Ohmic
contacts to the semiconductors were formed using conventional ohmic contact
recipesHan et al. (2002); Sze (1981); Hao et al. (1996); Ruvimov et al.
(1996). Multilayer ohmic contacts were thermally grown at the back/front side
of the semiconductor and were annealed at high temperatures using rapid
thermal annealing. After the ohmic contact formation, a 0.5-1.0 $\mu$m thick
SiOx window was grown on various semiconductors using a plasma enhanced
chemical vapor deposition (PECVD) system, and $\sim$ 500 nm thick gold
electrodes were thermally evaporated onto SiOx windows at 5$\times 10^{-7}$
Torr. The graphene contacting areas were squares with sides in the range 500
$\mu$m to 2000 $\mu$m. Application of IPA improves the success rate of the
graphene transfer and does not affect the measurements presented here. After
depositing the graphene/PMMA films, the samples were placed in an acetone
vapor rich container for periods ranging from 10 minutes to $\sim$10 hours.
The acetone bath allows slow removal of the PMMA films without noticeable
deformation of the graphene sheets.
Prior to the graphene transfer there is an open circuit resistance between the
Au contacts and the semiconductor. After the transfer of the PMMA-graphene
bilayer, the graphene makes simultaneous connection to the Au contacts and the
semiconductor as evidenced by the measured rectifying I-V characteristics.
Since the diodes made with the PMMA-graphene bilayer show essentially the same
rectifying characteristics as the samples in which the PMMA has been dissolved
away, we conclude that the carbon layer on the PMMA (shown by Raman
measurements to be graphene) is making intimate contact with the
semiconductor.
Figure 2: (a) Raman spectra of CVD-grown graphene on Cu foils and (b) graphene
after transfer onto various semiconductor substrates. Graphene sheets show
large $I_{2D}/I_{G}$ ratio, and after the transfer the graphene becomes
slightly disordered. (c-d) Raman G and 2D peaks measured respectively on
graphene/Cu and on graphene/semiconductor combinations indicated in the legend
of panel (b).
A schematic for our graphene based diodes is shown in Fig. 1(a); the backside
of the semiconductor substrate is covered with an ohmic contact and the
graphene sheet transfered onto Cr/Au contacts grown on SiOx windows. After the
transfer, the graphene and semiconductor adhere to each other in an intimate
Van der Waals contact in the middle of the open window, and the Cr/Au contact
pad provides good electrical contact with the graphene. Our ohmic contact
arrangements allow current density versus voltage ($J$-$V$) and capacitance
versus voltage ($C$-$V$) measurements to be taken separately. $J$-$V$
measurements were taken in dark room conditions using a Keithley 6430 sub-
femtoamp source-meter, and $C$-$V$ measurements were taken using an HP 4284A
capacitance bridge. The electric field modulated Raman measurements were made
on the same configuration. Four-terminal transport and Hall measurements
however were performed with an intervening layer of SiOx (Fig. 1(b)) using a
physical property measurement system (PPMS), at room temperature in magnetic
fields up to 7 Tesla.
## III. RESULTS AND DISCUSSION
### A. Raman measurements
Figure 3: In-situ Raman spectra taken on Graphene/GaN junctions as a function
of applied bias: 0V (black line), +1V (red line) and -10V (blue line).
In Fig. 2(a-d), we show typical Raman spectroscopy data taken on graphene
sheets grown onto Cu foils by CVD deposition before and after transferring
onto semiconductors. The presented scans have been reproduced at more than 20
random spots and are good representations of the quality of the graphene on
the Cu foils before transfer and on the semiconductor surface after transfer.
In the literature the quality of graphene sheets is measured by a large 2D to
G intensity ratio ($I_{2D}$/$I_{G}$) and a low D peak intensity ($I_{D}$).
Single layer graphene is expected to show $I_{2D}$/$I_{G}>2$, and the amount
of disorder in the sheets is often correlated with $I_{D}$. In our samples, we
observe $I_{2D}$/$I_{G}\geq 2$ and a negligible D peak amplitude. However
after graphene transfer to the semiconductor substrate, we observe that
$I_{D}$ becomes apparent while $I_{2D}$/$I_{G}$ remains the same (Fig. 2(b)).
The increase in $I_{D}$ reflects the lower sheet mobility of CVD-grown
graphene and gives rise to weak localization effects at low temperatures Cao
et al. (2010). Moreover, because of the low solubility of carbon in Cu,
graphene growth onto Cu foils is known to be self-limiting Li et al. (2009)
therefore allowing large-area single layers of graphene to be grown onto Cu
foil surfaces. After the graphene growth, the backside of the Cu foils have
been exposed to O2 plasma to remove unwanted graphene and checked with Raman
spectroscopy. This step assures that bi-layer (or multi-layer) graphene is not
formed on PMMA/graphene after etching the Cu foils (see Experimental methods).
The Raman spectrum of exfoliated graphene transferred onto Si/SiO2 substrates
has been previously studied as a function of applied bias Das et al. (2008).
It has been found that the G and 2D peaks of graphene are sensitive to the
Fermi energy (carrier density) of graphene and allow one to estimate the bias-
induced changes in E${}_{F}^{gr}$. Considering the typical operating voltages
of Schottky junctions, the low carrier density in graphene, and the associated
bias dependence of E${}_{F}^{gr}$, we have also measured the Raman spectrum of
graphene transferred onto GaN as a function of applied bias. Our Raman
measurements differ from those reported in Ref. Das et al. (2008) in the
following three ways: (1) we are using CVD-prepared rather than exfoliated
graphene, (2) the graphene is in direct contact with GaN rather than oxidized
Si, and (3) the graphene is measured in situ as part of a Schottky rather than
a gated FET. In Fig. 3, we show the evolution of the Raman spectrum as a
function of applied bias. While G and 2D are almost identical with the same
peak positions at 0V and 1V, in reverse bias at 10V, the G band shifts to
higher (by $\sim$6 $\pm$3 cm-1) and the 2D band shifts to lower (by $\sim$7
$\pm$3 cm-1) wavenumbers. The relative shifts in the Raman peaks along with a
reduction of the 2D/G peak ratio from 2.6 (at 0 V) to 1.2 (at 10 V) imply that
graphene sheets transferred onto GaN become electron doped. Considering the
previous results reported on graphene/SiO2 Das et al. (2008), the shift in EF
can be estimated to be in the range $\sim$0.2-0.5 eV.
### B. Hall measurements
Figure 4: $R_{xy}$ versus magnetic field data taken at 300 K. Typical sample mobilities are in the range 1400-2100 cm$^2$/Vs and carrier densities (holes) are in the range 2-8$\times 10^{12}$ cm$^{-2}$.
Hall measurements show that the Hall mobility of the graphene sheets used in
our diodes is in the range 1400-2100 cm2/Vs, and that the sheets are hole
doped with carrier densities in the range 2-8$\times 10^{12}$ cm-2 (Fig. 4).
The presence of extrinsic residual doping in exfoliated graphene has been
previously reportedNovoselov et al. (2004) and attributed to residual water
vapor (p type) or NH3 (n-type). In both cases annealing reduces the
concentration of the dopants and forces $E_{F}^{gr}$ closer to the neutrality
point. For our CVD prepared graphene, the presence of residual impurity doping
can be attributed to a lowering of $E_{F}^{gr}$ due to hole doping of the
graphene during the Fe(III)NO3 etching-transfer processSu et al. (2011).
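As a schematic of how such numbers follow from Hall data, the sketch below (with an assumed sheet carrier density and an assumed sheet resistance, not the measured values) extracts $n_s$ from the slope of $R_{xy}$ versus $B$ and a mobility from $\mu = 1/(n_s e R_{\mathrm{sheet}})$.

```python
import numpy as np

e = 1.602e-19    # C

# Synthetic single-carrier Hall data: R_xy is linear in B with slope 1 / (n_s e).
n_s_true = 5.0e12                              # cm^-2, assumed sheet carrier density
B = np.linspace(-7.0, 7.0, 15)                 # T
R_xy = B / (n_s_true * 1.0e4 * e)              # ohm (n_s converted to m^-2)

slope = np.polyfit(B, R_xy, 1)[0]              # dR_xy/dB
n_s = 1.0 / (slope * e) / 1.0e4                # back to cm^-2
R_sheet = 700.0                                # ohm per square, assumed value
mu = 1.0 / (n_s * e * R_sheet)                 # cm^2 / (V s)
print(f"n_s = {n_s:.2e} cm^-2,  mobility = {mu:.0f} cm^2/Vs")
```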
### C. Current-voltage measurements
Schottky diodes are expected to pass current in the forward bias
(semiconductor is negatively biased) while becoming highly resistive in
reverse bias (semiconductor is positively biased). As seen in Fig. 5(a-d),
$J$-$V$ (main panels) and log$J$-$V$ (insets) data taken on various
graphene/n-type semiconductor junctions display strong rectification. This
rectification is a consequence of Schottky barrier formation at the interface
when electrons flow from the semiconductor to the graphene as the Fermi
energies equilibrate (Fig. 7(b)).
Figure 5: Room temperature current density-voltage characteristics show
Schottky rectification at the (a) graphene/$n$-Si, (b) $n$-GaAs, (c)
$n$-4H-SiC and (d) $n$-GaN interfaces. Insets: Semilogarithmic plots, log$J$-$V$, reveal a thermionic emission dominated current density in forward bias that spans at least two decades of linearity (dotted lines), allowing us to extract the Schottky barrier height recorded in Table I.
In principle, any semiconductor with electron affinity ($\chi_{e}$) smaller
than the work function of the metal $(\Phi_{metal})$ can create rectification
at a metal-semiconductor (M-S) interface with Schottky barrier height,
$\phi_{SBH}=\Phi_{metal}-\chi_{e}$, given by the Schottky-Mott model. Electron
transport over the Schottky barrier at the M-S interface is well described by
thermionic emission theory (TE) with the expression;
$J(T,V)=J_{s}(T)[\exp({eV}/{\eta k_{B}T})-1],$ (1)
where $J(T,V)$ is the current density across the graphene/semiconductor
interface, V the applied voltage, T the temperature and $\eta$ the ideality
factor Sze (1981). The prefactor, $J_{s}(T)$ is the saturation current density
and is expressed as $J_{s}=A^{*}T^{2}\exp({-e\phi_{SBH}}/{k_{B}T})$, where
$e\phi_{SBH}$ is the zero bias Schottky barrier height (SBH) and $A^{*}$ is
the Richardson constant.
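As a numerical illustration of Eq. 1 (with representative, assumed parameter values rather than the measured device data), the short sketch below evaluates the thermionic-emission current density for a graphene/$n$-Si-like barrier.

```python
import numpy as np

# Representative (assumed) parameters for a graphene/n-Si-like junction at T = 300 K.
T = 300.0               # K
kB_T = 0.0259           # eV
A_star = 112.0          # A cm^-2 K^-2, approximate Richardson constant for n-Si
phi_SBH = 0.86          # eV, zero-bias barrier height (J-V value of Table I)
eta = 1.5               # ideality factor (assumed)

J_s = A_star * T**2 * np.exp(-phi_SBH / kB_T)       # saturation current density, A/cm^2

def J(V):
    """Thermionic-emission current density, Eq. (1)."""
    return J_s * (np.exp(V / (eta * kB_T)) - 1.0)

for V in (-1.0, 0.2, 0.4, 0.6):
    print(f"V = {V:+.1f} V,  J = {J(V): .3e} A/cm^2")
```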
Figure 6: (a) The temperature dependence of the current ($I$) versus voltage
($V$) curves measured across a graphene/GaAs junction from 250 K up to 320 K
with 10 K intervals separating each isotherm. The arrows indicate the
direction of increasing temperature. (b) The temperature dependence of $I$-$V$
curves taken on graphene/GaAs junctions at different temperatures. (c)
Extracted $I_{s}$ values from Fig.6(b) are plotted in terms of
ln$I_{s}$/$T^{2}$ versus 1000/T.
When electronic transport across the barrier is dominated by thermionic
emission described by Eq. 1, semilogarithmic plots of the $J$-$V$ curves
should display a linear region in forward bias. As seen in the insets of Fig.
5(a-d) the overlying straight line segments of our measurements typically
reveal 2-4 decades of linearity, thus allowing us to extract $J_{s}$ and
$\eta$ for each diode. The deviations from linearity at higher bias are due to
series resistance contributions from the respective semiconductors. The
temperature-dependent data for the graphene/GaAs diode (Fig. 6(a-b)) show that
for both bias directions, a larger (smaller) current flows as the temperature
is increased (decreased) and the probability of conduction electrons
overcoming the barrier increases (decreases). In forward bias, the TE process
manifests itself as linear “log$J$-V curves”(Fig. 6(b)) and linear
“$\ln(I_{s}(T)/T^{2})$ versus $T^{-1}$ curves”(Fig. 6(c)) where
$I_{s}(T)=J_{s}(T)A$. The SBH is calculated directly from the slope of this
linear dependence. By repeating these temperature-dependent measurements for
the four different diodes, we find that the SBH ($\phi^{JV}_{SBH}$) values at
the graphene/semiconductor interfaces are 0.86 eV, 0.79 eV, 0.91 eV and 0.73
eV for Si, GaAs, SiC and GaN respectively (Table I). While the overall reverse
current density increases as $T$ is increased, we notice that at high reverse
bias the magnitude of the breakdown voltage $V_{b}$ decreases linearly with
temperature (see boxed region in upper left hand corner of Fig. 6(b)) implying
that $V_{b}$ has a positive breakdown coefficient and that the junction
breakdown mechanism is mainly avalanche multiplicationSze (1981).
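The extraction of the SBH from the slope of $\ln(I_{s}/T^{2})$ versus $1/T$ can be sketched as follows (synthetic data generated with an assumed barrier height and prefactor, used only to illustrate the fitting step).

```python
import numpy as np

kB = 8.617e-5                       # Boltzmann constant, eV/K
phi_true, A_eff = 0.79, 1.0e-3      # assumed barrier height (eV) and effective prefactor (A/K^2)

# Synthetic saturation currents over the measured temperature window of Fig. 6.
T = np.arange(250.0, 321.0, 10.0)                   # K
I_s = A_eff * T**2 * np.exp(-phi_true / (kB * T))   # A

# Linear fit of ln(I_s / T^2) versus 1/T; the slope equals -phi_SBH / kB.
slope, intercept = np.polyfit(1.0 / T, np.log(I_s / T**2), 1)
print("extracted SBH:", -slope * kB, "eV")
```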
Table 1: Extracted SBHs, doping densities, and corresponding graphene work function values on various graphene/semiconductor junctions.

| Junction type | $\phi_{SBH}^{JV}$ [eV] | $\phi_{SBH}^{CV}$ [eV] | $N_{D}^{CV}$ [cm$^{-3}$] | $N_{D}^{Hall}$ [cm$^{-3}$] | $\Phi_{gr}$ [eV] |
|---|---|---|---|---|---|
| Graphene/n-Si | 0.86 | 0.92 | $4.0\times 10^{15}$ | $3.0\times 10^{15}$ | 4.91 |
| Graphene/n-GaAs | 0.79 | 0.91 | $3.5\times 10^{16}$ | $3.0\times 10^{16}$ | 4.89 |
| Graphene/n-4H-SiC | 0.91 | N/A | N/A | $1.0\times 10^{16}$ | 4.31 |
| Graphene/n-GaN | 0.73 | N/A | N/A | $1.0\times 10^{17}$ | 4.83 |
Schottky barrier values are well described using either the Bardeen or
Schottky limits. In the Bardeen limit, the interface physics is mostly
governed by interface states which, by accumulating free charge, change the
charge distribution at the interface and cause $E_{F}$ of the semiconductor to
be fixed (Fermi level pinning). Accordingly, the SBH shows weak dependence on
the work function of the metals used for contacts, as is found for example in
GaAs Sze (1981). On the other hand, the wide band gap semiconductors SiC and
GaN are well described by the Schottky-Mott (S-M) limit,
$\phi_{SBH}=\Phi_{gr}-\chi_{e},$ (2)
where $\Phi_{gr}$ is the work function of the graphene and $\chi_{e}$ is the
electron affinity of the semiconductor. Using the extracted values of
$\phi_{SBH}$, and electron affinity values ($\chi_{Si}\sim 4.05$ eV,
$\chi_{GaAs}\sim 4.1$ eV, $\chi_{4H-SiC}\sim 3.4$ eV and $\chi_{GaN}\sim 4.1$
eV), we calculate $\Phi_{gr}$ (Table I). The calculated values of the work
function are typically higher than the accepted values ($\sim$4.6 eV) of
graphene when $E_{F}$ is at the Dirac point (K point). The deviation from this
ideal graphene work function can be attributed to the lowering of $E_{F}$ due
to hole doping of the graphene during the Fe(III)NO3 etching-transfer
processSu et al. (2011) (Fig. 2(c-d)) together with the fact that the graphene
is in physical contact with the gold electrodesYu et al. (2009) (Fig. 4).
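The arithmetic behind the last column of Table I amounts to Eq. 2 rearranged as $\Phi_{gr}=\phi_{SBH}+\chi_{e}$; a minimal check using the $J$-$V$ barrier heights and the electron affinities quoted above is:

```python
# Eq. (2) rearranged as Phi_gr = phi_SBH + chi_e, using the J-V barrier heights and the
# electron-affinity values quoted in the text.
phi_SBH = {"Si": 0.86, "GaAs": 0.79, "4H-SiC": 0.91, "GaN": 0.73}   # eV
chi_e   = {"Si": 4.05, "GaAs": 4.10, "4H-SiC": 3.40, "GaN": 4.10}   # eV

for s in phi_SBH:
    print(f"{s:6s}: Phi_gr = {phi_SBH[s] + chi_e[s]:.2f} eV")
```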
Although the SBHs on Si, GaAs and GaN can be roughly explained within the S-M
model, in reality GaAs surfaces have a high density of surface states and thus
exhibit characteristic Fermi level pinning. In the Bardeen limit, GaAs based
diodes generally have SBHs in the range of 0.75-0.85 eV as observed in our
measurements, and proper interpretation of the SBH on GaAs/graphene junctions
requires the Bardeen model. Subsequent to the placement of the graphene on the
semiconductor surface, there is charge separation and concomitant formation of
induced dipoles at the interface. According to bond polarization theoryTung
(2001a, b), the SBH is determined by charge separation at the boundary between
the outermost layers of the metal (here, a single layer carbon sheet) and the
semiconductor. Our results are in good agreement with the findings of our
earlier work on graphite and MLG junctions where the layer in closest
proximity to the semiconductor surface is a single sheet of carbon atomsTongay
et al. (2009, 2011). On the other hand, barriers formed on the 4H-SiC
substrates give an unphysically low value for $\Phi_{gr}$ (see Table I) and
therefore cannot be explained by either model.
Next we turn our attention to reverse bias characteristics when the
semiconductor (graphene) is positively (negatively) charged. In conventional
metal-semiconductor Schottky diodes, the work function of the metal is pinned
independent of bias voltage due to the high density of states at $E_{F}$ while
in the reverse (forward) bias the Fermi energy of the semiconductor shifts
down (up) allowing observed rectification via an increase (decrease) in the
built-in potential ($V_{bi}$). Unlike conventional metals, graphene’s work
function ($\Phi_{gr}$) is a function of bias Das et al. (2008), and for large
voltage values the SBH does not stay constant. When Schottky diodes are
forward biased, they pass large currents at voltages well below $\sim$1 V, and small decreases in the Fermi level of graphene cannot be distinguished
from voltage drops associated with a series resistance. Said in another way,
the deviation from linearity in the semilogarithmic plots of Fig. 5(a-d) for
forward bias could be due to a combination of a series resistance becoming
important at high currents together with a small increase in $\Phi_{gr}$ and a
downward shift in $E_{F}$ for the positively charged graphene. However, in
reverse bias where the applied voltage can be larger than 10 V, $E_{F}$ starts
changing dramaticallyYu et al. (2009) and the fixed SBH assumption clearly no
longer holds. In reverse bias when the graphene electrodes are negatively
charged, $E_{F}$ increases and $\Phi_{gr}$ decreases causing the SBH height to
decrease as the reverse bias is increased. As observed in the insets of Fig.
5(a-d) this effect causes the total reverse current to increase as the
magnitude of the bias is increased, thus preventing the Schottky diode from
reaching reverse current saturation. This non-saturating reverse current has
not been observed in graphite based Schottky junctions due to the fixed Fermi
level of graphite Tongay et al. (2009).
Figure 7: Capacitive response of the graphene based Schottky diodes,
determination of the built-in potential, $V_{bi}$, and the proposed Schottky
band diagram. (a) Plots of the inverse square capacitance ($1/C^{2}$) versus
applied bias ($V$) graphene/$n$-Si (Red squares) and $n$-GaAs (green circles)
at 300 K and 100 Hz show a linear dependence implying that the Schottky-Mott
model provides a good description. The intercept on the abscissa gives the
built-in potential ($V_{bi}$) which can be correlated to the Schottky barrier
height while the slope of the linear fit gives
$2/eN_{D}\epsilon_{s}\epsilon_{0}$. Extracted $\phi_{SBH}$ and $N_{D}$ values
are listed in Table. I
### D. Capacitance-voltage measurements
Capacitance-voltage $C$-$V$ measurements made in the reverse bias mode are
complementary to $J$-$V$ measurements and provide useful information about the
distribution and density $N_{D}$ of ionized donors in the semiconductor and
the magnitude of the built-in potential $V_{bi}$. For a uniform distribution
of ionized donors within the depletion width of the semiconductor, the
Schottky-Mott relationship between $1/C^{2}$ and the reverse bias voltage
$V_{R}$, satisfies the linear relationship,
$1/C^{2}=2(V_{R}+V_{bi})/eN_{D}\epsilon_{s}\epsilon_{0}$, which as shown in
Fig. 7(a) is observed to hold for graphene/GaAs and graphene/Si junctions.
Linear extrapolation to the intercept with the abscissa gives the built-in
potential, $V_{bi}$, which is related to $\phi_{SBH}$ via the expression,
$\phi_{SBH}=V_{bi}+e^{-1}k_{b}T$ln$(N_{c}/N_{D})$ Sze (1981). Here $N_{c}$ is
the effective density of states in the conduction band, $N_{D}$ is the doping
level of the semiconductor, and the slope of the linear fitting to $1/C^{2}$
versus $V_{R}$ gives the doping density of the semiconductor. We list
$\phi_{SBH}^{CV}$ and $N_{D}$ values for the graphene/GaAs and graphene/Si
junctions in Table I.
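A minimal sketch of this Mott-Schottky analysis (synthetic $1/C^{2}$ data generated with assumed Si-like parameters and then refit, purely to illustrate the procedure) is:

```python
import numpy as np

e, eps0 = 1.602e-19, 8.85e-14          # C, F/cm
eps_s, N_D, V_bi = 11.7, 4.0e15, 0.75  # assumed Si-like values: relative permittivity, cm^-3, V

# Synthetic reverse-bias data following the Schottky-Mott relation for C per unit area.
V_R = np.linspace(0.0, 4.0, 21)                           # V
inv_C2 = 2.0 * (V_R + V_bi) / (e * N_D * eps_s * eps0)    # cm^4 / F^2

# Linear fit: the slope gives N_D and the intercept over the slope gives V_bi.
slope, intercept = np.polyfit(V_R, inv_C2, 1)
print("N_D  =", 2.0 / (e * eps_s * eps0 * slope), "cm^-3")
print("V_bi =", intercept / slope, "V")
```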
We note from Table 1 that the extracted $\phi^{CV}_{SBH}$ values on the Si and
GaAs junctions are generally higher than $\phi^{JV}_{SBH}$. The discrepancy
between the SBHs determined by the two methods can be attributed to: (a) the
existence of a thin oxide or residue at the graphene/semiconductor interface,
and/or (b) Schottky barrier inhomogeneity. Graphene sheets transferred onto
SiO2 are known to have charge puddles mostly due to the inhomogeneous doping
either originating from natural graphite (mechanical exfoliation transfer) or
from chemicals used during the graphene production or transfer (CVD graphene
transfer) process. Since the SBH is sensitive to the EF of graphene, patches
with different charge densities (doping) are expected to have an impact on the
SBH and hence the $J$-$V$ characteristics of the graphene diodes.
An important difference between the $C$-$V$ and $J$-$V$ techniques is that the
$C$-$V$ measurements probe the average junction capacitance at the interface
thereby yielding an average value for the SBH, while the $J$-$V$ measurements
give a minimum value for the SBH, since electrons with thermionic emission
probabilities exponentially sensitive to barrier heights choose low barrier
patches (less p-doped graphene patches) over higher patches (more p-doped
graphene patches)Tung (2001b). While $C$-$V$ measurements give reasonable
values of the SBH for graphene/GaAs and graphene/Si, we have not been able to
obtain reliable $C$-$V$ measurements for graphene deposited on GaN and SiC
because of high series resistance in these wide band gap semiconductors.
The linearity of the $C$-$V$ measurements shown in Fig. 7 is consistent with
the Schottky-Mott model and the abrupt junction approximation, which assumes
that the density of ionized donors $N_{D}$ is constant throughout the
depletion width of the semiconductor. This good agreement invites a more
quantitative analysis of the Fermi energy shifts in the graphene that are the
source of the non-saturating reverse bias currents discussed in the previous
subsection. We begin by writing the electron charge density per unit area $Q$
on the graphene as
$Q=en_{induced}=C_{dep}(V_{bi}+V_{R}),$ (3)
where
$C_{dep}=\sqrt{\frac{e\epsilon_{s}\epsilon_{0}N_{D}}{2\left(V_{bi}+V_{R}\right)}},$
(4)
is the Schottky-Mott depletion capacitance, $n_{induced}$ is the number of
electrons per unit area and $V_{R}$ is the magnitude of the reverse bias
voltage. Combining these two equations gives the result,
$n_{induced}=\sqrt{\epsilon_{s}\epsilon_{0}N_{D}(V_{bi}+V_{R})/{2e}}.$ (5)
The above expression provides an estimate of the number of carriers per unit
area associated with the electric field within the depletion width but does
not account for extrinsic residual doping described by the carrier density
$n_{0}$ on the graphene before making contact with the semiconductor. The
processing steps used to transfer the CVD grown graphene from Cu substrates to
semiconductor surfaces typically results in p-doped material with $n_{0}\sim
5\times 10^{12}$cm-2 as inferred from Hall data (Fig. 4) taken at 300 K.
Accordingly, the final carrier density including contributions from the as-
made graphene and the charge transfers associated with the Schottky barrier
($V_{bi}$ and the applied voltage $V_{R}$) reads as,
$n_{final}=n_{0}-n_{induced}.$ (6)
Using the well-known expression for graphene’s Fermi energyCastroNeto et al.
(2009) we can write
$E_{F}=-\hbar\left|v_{F}\right|k_{F}=-\hbar\left|v_{F}\right|\sqrt{\pi(n_{0}-n_{induced})},$
(7)
which in combination with Eq. 5 becomes
$E_{F}=-\hbar\left|v_{F}\right|\sqrt{\pi(n_{0}-\sqrt{\epsilon_{s}\epsilon_{0}N_{D}(V_{bi}+V_{R})/{2e}})},$
(8)
To calculate typical shifts in $E_{F}$, we use parameter values
$\epsilon_{0}=8.84\times 10^{-14}$ F/cm, $\hbar=6.5\times 10^{-16}$ eV s,
$e=1.6\times 10^{-19}$ C, $v_{F}=1.1\times 10^{8}$ cm/s, $V_{bi}\sim 0.6$ V
and $\epsilon_{s}\sim 10$ for a typical semiconductor. Thus the Fermi energy
of the as-made graphene with $n_{0}\sim 5\times 10^{12}$ cm-2 is calculated
from $E_{F}=-\hbar\left|v_{F}\right|\sqrt{\pi n_{0}}$ to be $-0.287$ eV below
the charge neutrality point, a shift associated with the aforementioned
p-doping during processing. When the graphene is transferred to the
semiconductor, equilibration of the chemical potentials and concomitant
formation of a Schottky barrier (Fig. 7) results in a transfer of negative
charge to the graphene and an increase in $E_{F}$ (calculated from Eq. 8 for
$V_{R}=0$) to be in the range 3 to 11 meV for $N_{D}$ in the range $1\times
10^{16}$ to $1\times 10^{17}$ cm-3. The application of a typical 10 V reverse
bias (see Figs. 5 and 6) creates significantly larger Fermi energy shifts
which from Eq. 8 give $E_{F}$ in the range $-0.271$ to $-0.233$ eV for the
same factor of ten variation in $N_{D}$. The corresponding shifts from the
pristine value of $-0.287$ eV are in the range 15 - 53 meV and thus bring EF
closer to the neutrality point. These numerical calculations show that for our
n-doped semiconductors, it is relatively easy to induce Fermi energy shifts on
the order of 50 meV with the application of a sufficiently high reverse bias
voltage. An upward shift in $E_{F}$ of 50 meV causes a reduction in
$\Phi_{gr}$ by the same amount. Since the electron affinity of the
semiconductor remains unchanged, the Schottky-Mott constraint of Eq. 2
enforces the same reduction in $\phi_{SBH}$ thus leading to a greater than 5%
reduction in the measured SBHs shown in Table I. We note that the induced
shift in graphene’s EF as determined by the in-situ Raman spectroscopy
measurements (Fig.3) is larger ($\Delta$EF$\sim$200-500 meV) than our
theoretical estimation ($\Delta$EF$\sim$50 meV).
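These estimates follow directly from Eqs. 5-8 with the parameter values listed above; a short numerical check (using exactly those quoted values, so the printed numbers should approximately reproduce the estimates in the text) is:

```python
import numpy as np

# Parameter values quoted in the text (lengths in cm).
eps0, eps_s = 8.84e-14, 10.0       # F/cm, relative permittivity of a typical semiconductor
hbar, e     = 6.5e-16, 1.6e-19     # eV s, C
v_F, V_bi   = 1.1e8, 0.6           # cm/s, V
n0          = 5.0e12               # cm^-2, extrinsic hole doping of the as-made graphene

def E_F(V_R, N_D):
    """Graphene Fermi energy (eV, relative to the neutrality point) from Eqs. (5) and (8)."""
    n_induced = np.sqrt(eps_s * eps0 * N_D * (V_bi + V_R) / (2.0 * e))   # cm^-2
    return -hbar * v_F * np.sqrt(np.pi * (n0 - n_induced))

print("as-made graphene: E_F =", -hbar * v_F * np.sqrt(np.pi * n0), "eV")
for N_D in (1.0e16, 1.0e17):
    print(f"N_D = {N_D:.0e} cm^-3:  E_F(V_R = 0) = {E_F(0.0, N_D):+.3f} eV,"
          f"  E_F(V_R = 10 V) = {E_F(10.0, N_D):+.3f} eV")
```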
The discrepancy between the theoretical estimate of $\Delta$EF and the
experimental values might be attributed to: (1) the existence of an interface
capacitance induced by dipoles at the graphene/semiconductor interface (within
bond polarization theory) causing deviation from the ideal Schottky-Mott
capacitance relation given by Eq. 4 and (2) the estimate of $\Delta$EF using
relative peak shifts in the G and 2D peak positions for graphene deposited on
Si/SiO2 Das et al. (2008) might be different than the change in Fermi level
for graphene transferred onto semiconductors.
### E. Modification of thermionic emission theory
As discussed in the previous sections, since the $E_F$ of the graphene electrode is sensitive to the applied bias across the graphene/semiconductor interface, the SBH at the interface becomes bias dependent, especially for large reverse voltages. However, extracting the SBH from the $J$-$V$ characteristics using Eq. 1, which involves extrapolating the current density to the zero-bias saturation current ($J_s$), yields the putative zero-bias barrier height (Table 1). In this section,
we present a simple modification to the Richardson equation (Eq. 1)
considering the shift in EF of graphene induced by applied bias. The modified
Richardson equation preserves the original functional form of Eq. 1 but allows
one to estimate the SBH at fixed voltages.
The voltage-dependent SBH ($\Phi_{SBH}(V)$) can be written as,
$e\Phi_{SBH}(V)=e\Phi^{0}_{SBH}+e\Delta\Phi_{SBH}\left(V\right)=e\Phi^{0}_{SBH}-\Delta E_{F}\left(V\right)$ (9)
where $e\Phi^{0}_{SBH}$ is the zero bias SBH and $e\Delta\Phi_{SBH}(V)$ is the correction to the SBH at fixed voltage $V$. The change in the Fermi energy $\Delta E_{F}\left(V\right)$ is opposite to $e\Delta\Phi_{SBH}(V)$, i.e., $\Delta E_{F}\left(V\right)=-e\Delta\Phi_{SBH}(V)$, as seen in Fig. 7(b). Thus
for reverse bias (addition of electrons to the graphene) we use Eq. 5 in Eq.
7, together with the inequality $n_{induced}<<n_{0}$ to calculate
$e\Delta\Phi_{SBH}(V_{R})=-\Delta E_{F}(V_{R})=\hbar v_{F}\left[\sqrt{\pi(n_{0}-n_{induced})}-\sqrt{\pi n_{0}}\right]\approx-\frac{1}{2}\hbar v_{F}\sqrt{\pi n_{0}}\,\frac{n_{induced}}{n_{0}}=-\frac{1}{2}\hbar v_{F}\sqrt{\frac{\pi\epsilon_{s}\epsilon_{0}N_{D}(V_{bi}+V_{R})}{2en_{0}}}$ (10)
Adding the reverse and forward current densities as is done in standard
treatments of the diode equationSze (1981) yields the total current density
across the graphene/semiconductor interface,
$J(V)=A^{*}T^{2}\exp\left(-\frac{e\Phi^{0}_{SBH}+e\Delta\Phi_{SBH}\left(V\right)}{k_{B}T}\right)\left[\exp\left(\frac{eV}{k_{B}T}\right)-1\right]$ (11)
Here, we note that the original form of the Richardson equation is preserved
with slight modifications to the saturation current term, which is given as
$J_{s}=A^{*}T^{2}\exp\left(-\frac{e\Phi^{0}_{SBH}+e\Delta\Phi_{SBH}(V)}{k_{B}T}\right)$ (12)
with $\Delta\Phi_{SBH}(V)$ for reverse bias given by Eq. 10.
In our conventional $J$-$V$ analysis using Eq. 1, the zero-bias saturation current $J_s$ is extracted by extrapolating the current density to the zero-bias limit. In this limit, the correction to the SBH is expected to be zero, since the graphene is not subject to an applied bias and hence the Fermi level does not shift from its original value. However, using the extrapolated zero-bias saturation current density, one can extract the SBH, and the correction to the SBH at fixed bias $V$ can then be taken into account through the additional term $\Delta\Phi_{SBH}(V)$ in Eq. 12.
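A short numerical sketch of Eqs. 10 and 12 (with the same representative parameter values used above, which are assumptions rather than fitted quantities) shows the size of the barrier-lowering term and the corresponding enhancement of the saturation current in reverse bias:

```python
import numpy as np

# Representative parameter values (assumptions, lengths in cm).
eps0, eps_s = 8.84e-14, 10.0
hbar, e     = 6.5e-16, 1.6e-19
v_F, V_bi   = 1.1e8, 0.6
n0, N_D     = 5.0e12, 1.0e16       # cm^-2, cm^-3
kB_T        = 0.0259               # eV at 300 K

def delta_phi_SBH(V_R):
    """Reverse-bias correction to the SBH, Eq. (10), in eV (negative means barrier lowering)."""
    return -0.5 * hbar * v_F * np.sqrt(
        np.pi * eps_s * eps0 * N_D * (V_bi + V_R) / (2.0 * e * n0))

for V_R in (0.0, 5.0, 10.0):
    dphi = delta_phi_SBH(V_R)
    print(f"V_R = {V_R:4.1f} V:  dPhi_SBH = {1e3 * dphi:6.1f} meV,"
          f"  J_s enhanced by exp(-dPhi/kT) = {np.exp(-dphi / kB_T):.2f}")
```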
## IV Conclusion
In summary, we have used current-voltage and capacitance-voltage measurements
to characterize the Schottky barriers formed when graphene, a zero-gap
semiconductor, is placed in intimate contact with the n-type semiconductors:
Si, GaAs, GaN and SiC. The good agreement with Schottky-Mott (S-M) physics
within the context of bond-polarization theory is somewhat surprising since
the S-M picture has been developed for metal-semiconductor interfaces, not for
single atomic layer ZGS-semiconductor interfaces discussed here. Moreover, due
to a low density of states, graphene’s Fermi level shifts during the charge
transfer across the graphene-semiconductor interface. This shift does not
occur at metal-semiconductor or graphite-semiconductor interfaces where
$E_{F}$ remains fixed during Schottky barrier formation and the concomitant
creation of a built-in potential, $V_{bi}$ with associated band bending (see
Fig. 7). Another major difference becomes apparent when under strong reverse
bias. According to our in-situ Raman spectroscopy measurements, large voltages
across the graphene/semiconductor interface change the charge density and
hence the Fermi level of graphene as determined by relative changes in the G
and 2D peak positions. The bias-induced shift in the Fermi energy (and hence
the work function) of the graphene causes significant changes in the diode
current. Considering changes in the barrier height associated with bias
induced Fermi level shift, we modify the thermionic emission theory allowing
us to estimate the change in the barrier height at fixed applied bias. The
rectification effects observed on a wide variety of semiconductors suggest a number of applications, such as sensors, where in forward bias there is exponential sensitivity to changes in the SBH due to the presence of adsorbates on the graphene, or MESFET and HEMT devices for which Schottky barriers are integral components. Graphene is particularly advantageous in
such applications because of its mechanical stability, its resistance to
diffusion, its robustness at high temperatures and its demonstrated capability
to embrace multiple functionalities.
We thank N. Savage for her help during preparation of the diagrams used in
this manuscript and Prof. A. Rinzler and Dr. M. McCarthy for technical
assistance. This work is supported by the Office of Naval Research (ONR) under
Contract Number 00075094 (BA) and by the National Science Foundation (NSF)
under Contract Number 1005301 (AFH).
## References
* Novoselov et al. (2004) K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, and S. V. Dubonos, Science 306, 666 (2004).
* Berger et al. (2004) C. Berger, Z. Song, T. Li, X. Li, A. Y. Ogbazghi, R. Feng, Z. Dai, A. N. Marchenkov, E. H. Conrad, P. N. First, et al., The Journal of Physical Chemistry B 108, 19912 (2004).
* Li et al. (2009) X. Li, W. Cai, J. An, S. Kim, J. Nah, D. Yang, R. Piner, A. Velamakanni, I. Jung, E. Tutuc, et al., Science 324, 1312 (2009).
* Kim et al. (2009) K. S. Kim, Y. Zhao, H. Jang, S. Y. Lee, J. M. Kim, K. S. Kim, J. Ahn, P. Kim, J. Choi, and B. H. Hong, Nature 457, 706 (2009).
* CastroNeto et al. (2009) A. H. CastroNeto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
* Li et al. (2010a) X. Li, H. Zhu, K. Wang, A. Cao, J. Wei, C. Li, Y. Jia, Z. Li, X. Li, and D. Wu, Advanced Materials 22, 2743 (2010a).
* Fan et al. (2011) G. Fan, H. Zhu, K. Wang, J. Wei, X. Li, Q. Shu, N. Guo, and D. Wu, ACS Applied Materials & Interfaces 3, 721 (2011).
* Chen et al. (2011) C. C. Chen, M. Aykol, C. C. Chang, A. F. J. Levi, and S. B. Cronin, Nano Letters 11 (5), 1863 1867 (2011).
* Chung et al. (2010) K. Chung, C. Lee, and G. Yi, Science 330, 655 (2010).
* Lin et al. (2009) Y. M. Lin, K. A. Jenkins, A. V. Garcia, J. P. Small, D. B. Farmer, and P. Avouris, Nano Letters 9, 422 (2009).
* Tongay et al. (2009) S. Tongay, T. Schumann, and A. F. Hebard, Applied Physics Letters 95, 222103 (2009).
* Tongay et al. (2011) S. Tongay, T. Schumann, X. Miao, B. R. Appleton, and A. F. Hebard, Carbon 49, 2033 (2011).
* Li et al. (2010b) X. Li, C. W. Magnuson, A. Venugopal, J. An, J. W. Suk, B. Han, M. Borysiak, W. Cai, A. Velamakanni, Y. Zhu, et al., Nano Letters 10, 4328 (2010b).
* Han et al. (2002) S. Y. Han, J. Y. Shin, B. T. Lee, and J. L. Lee, Journal of Vacuum Science & Technology B: Microelectronics and Nanometer Structures 20, 1496 (2002).
* Sze (1981) S. M. Sze, _Physics of semiconductor devices 2nd ed._ (John Wiley and Sons, 1981).
* Hao et al. (1996) P. H. Hao, L. C. Wang, F. Deng, S. S. Lau, and J. Y. Cheng, Journal of Applied Physics 79, 4211 (1996).
* Ruvimov et al. (1996) S. Ruvimov, Z. L. Weber, J. Washburn, K. J. Duxstad, E. E. Haller, Z.-F. Fan, S. N. Mohammad, W. Kim, A. E. Botchkarev, and H. Morkoç, Applied Physics Letters 69, 1556 (1996).
* Cao et al. (2010) H. Cao, Q. Yu, L. A. Jauregui, J. Tian, and W. Wu, Appl. Phys. Lett. 96, 122106 (2010).
* Su et al. (2011) C.-Y. Su, D. Fu, A.-Y. Lu, K.-K. Liu, Y. Xu, Z.-Y. Juang, and L.-J. Li, Nanotechnology 22, 185309 (2011).
* Yu et al. (2009) Y. J. Yu, Y. Zhao, S. Ryu, L. E. Brus, K. S. Kim, and P. Kim, Nano Letters 9, 3430 (2009).
* Tung (2001a) R. T. Tung, Phys. Rev. B 64, 205310 (2001a).
* Tung (2001b) R. T. Tung, Material Sci. and Eng. R 35, 1 (2001b).
* Das et al. (2008) A. Das, S. Pisana, B. Chakraborty, S. Piscanec, S. K. Saha, U. V. Waghmare, K. S. Novoselov, H. R. Krishnamurthy, A. K. Geim, A. C. Ferrari, et al., Nature Nanotechnology 3, 210 (2008).
We prepare static granular beds under gravity in different stationary states
by tapping the system with pulsed excitations of controlled amplitude and
duration. The macroscopic state—defined by the ensemble of static
configurations explored by the system tap after tap—for a given tap intensity
and duration is studied in terms of volume, $V$, and force moment tensor,
$\Sigma$. In a previous paper [Pugnaloni et al., Phys. Rev. E 82, 050301(R)
(2010)], we reported evidence indicating that such macroscopic states cannot
be fully described by using only $V$ or $\Sigma$, apart from the number of
particles $N$. In this work, we present an analysis of the fluctuations of
these variables that indicates that $V$ and $\Sigma$ may be sufficient to
define the macroscopic states. Moreover, we show that only one of the
invariants of $\Sigma$ is necessary, since each component of $\Sigma$ falls
onto a master curve when plotted as a function of $\rm{Tr}(\Sigma)$. This
implies that these granular assemblies have a common shape for the stress
tensor, even though it does not correspond to the hydrostatic type. Although
most results are obtained by molecular dynamics simulations, we present
supporting experimental results.
# Master curves for the stress tensor invariants in stationary states of
static granular beds. Implications for the thermodynamic phase space
Luis A. Pugnaloni [inst1], José Damas [inst2], Iker Zuriguel [inst2], Diego Maza [inst2]. E-mails: luis@iflysib.unlp.edu.ar, jdelacruz@alumni.unav.es, iker@unav.es, dmaza@unav.es
(11 May 2011; 15 July 2011)
[inst1] Instituto de Física de Líquidos y Sistemas Biológicos (CONICET La Plata, UNLP), Calle 59 No 789, 1900 La Plata, Argentina.
[inst2] Departamento de Física y Matemática Aplicada, Facultad de Ciencias, Universidad de Navarra, Pamplona, Spain.
## 1 Introduction
The study of static granular systems is of fundamental importance to industry, both to improve the storage of such materials in bulk and to optimize packaging design. However, beyond these benefits of practical interest, physicists have found along the way a fascinating challenge that has kept them, to a large extent, occupied with static granular matter. That challenge is finding an appropriate description of the simplest state in which matter can be found: equilibrium [1]. At equilibrium, a sample will
explore different microscopic configurations over time, in such a way that
macroscopic averages over large periods will be well defined, with no aging.
Moreover, if not only equilibrium but ergodicity is present, averages over a
large number of replicas at a given time should give equivalent results to
time averages [2]. Finally, one expects that all macroscopic properties of
such equilibrium states can be put in terms of a few independent macroscopic
variables. Then, a thermodynamic description and, hopefully, a statistical
mechanics approach can be attempted. Theoretical formalisms based on these
assumptions have been used to analyze data from experiments and numerical
models. However, some of the foundations are still supported by little
evidence.
The use of careful protocols to make a granular sample explore microscopic
configurations within a seemingly equilibrium macroscopic state has given us
the first standpoint [3, 4, 6, 5]. In such protocols, the sample is subjected
to external excitations of the form of pulses. Well defined, reproducible time
averages are found after a transient if the same pulse shape and pulse
intensity are applied. However, for low intensity pulses, a previous annealing
might be in order since equilibrium is hard to reach, as in supercooled
liquids. We will use the expressions equilibrium state, steady state or simply
state to refer to the collection of all configurations generated by using a
given external excitation after any transient has disappeared.
More than twenty years ago, Edwards and Oakeshott [7] put forward the idea that
the number of grains $N$ and the volume $V$ are the basic state variables that
suffice to characterize a static sample of hard grains in equilibrium. The
$NV$ granular ensemble was then introduced as a collection of microstates,
where the sample is in mechanical equilibrium, compatible with $N$ and $V$.
However, newer theoretical works [8, 9, 12, 10, 11, 13] suggest that the force
moment tensor, $\Sigma$, ($\Sigma=V\sigma$, where $\sigma$ is the stress
tensor) must be added to the set of extensive macroscopic variables (i.e., an
$NV\Sigma$ ensemble) to adequately describe a packing of real grains.
In the rest of this paper, we show experimental and simulation evidence that
the equilibrium states of static granular packings cannot be only described by
$V$ (or equivalently the packing fraction, $\phi$, defined as the fraction of
the space covered by the grains) nor by $\Sigma$. We do this by generating
states of equal $V$ but different $\Sigma$ and states of equal $\Sigma$ but
different $\phi$. We also show that states of equal $V$ may present different
volume fluctuations. Moreover, we show that states of equal $V$ and $\Sigma$
display the same fluctuations of these variables, suggesting that no other
extensive parameter might be required to characterize the state (apart from
$N$). Finally, but of major significance, we show that the shape of the force
moment tensor is universal, in the sense that different states that present
the same trace of the tensor actually have the same value in all the
components of $\Sigma$.
## 2 Simulation
We use soft-particle 2D molecular dynamics [15, 14]. Particle–particle
interactions are controlled by the particle–particle overlap
$\xi=d-\left|\mathbf{r}_{ij}\right|$ and the velocities
$\dot{\mathbf{r}}_{ij}$, $\omega_{i}$ and $\omega_{j}$. Here,
$\mathbf{r}_{ij}$ represents the center-to-center vector between particles $i$
and $j$, $d$ is the particle diameter and $\omega$ is the particle angular
velocity. These forces are introduced into Newton's translational and rotational equations of motion, which are then numerically integrated with a velocity Verlet algorithm [16]. The interaction of the particles with the flat surfaces
of the container is calculated as the interaction with a disk of infinite
radius.
The contact interactions involve a normal force $F_{\text{n}}$ and a
tangential force $F_{\text{t}}$.
$F_{\text{n}}=k_{\text{n}}\xi-\gamma_{\text{n}}v_{i,j}^{\text{n}}$ (1)
$F_{\text{t}}=-\min\left(\mu|F_{\text{n}}|,|F_{\text{s}}|\right)\cdot\text{sign}\left(\zeta\right)$ (2)
where
$F_{\text{s}}=-k_{\text{s}}\zeta-\gamma_{\text{s}}v_{i,j}^{\text{t}}$ (3)
$\zeta\left(t\right)=\int_{t_{0}}^{t}v_{i,j}^{\text{t}}\left(t^{\prime}\right)dt^{\prime}$ (4)
$v_{i,j}^{\text{t}}=\dot{\mathbf{r}}_{ij}\cdot\mathbf{s}+\frac{1}{2}d\left(\omega_{i}+\omega_{j}\right)$ (5)
The first term in Eq. (1) corresponds to a restoring force proportional to the
superposition $\xi$ of the interacting disks and the stiffness constant
$k_{n}$. The second term accounts for the dissipation of energy during the
contact and is proportional to the normal component $v_{i,j}^{\text{n}}$ of
the relative velocity $\dot{\mathbf{r}}_{ij}$ of the disks.
Equation (2) provides the magnitude of the force in the tangential direction.
It implements Coulomb’s criterion with an effective friction that selects
between static and dynamic friction. Notice that Eq. (2)
implies that the maximum static friction force $|F_{\text{s}}|$ used
corresponds to $\mu|F_{\text{n}}|$, which effectively sets
$\mu_{\text{dynamic}}=\mu_{\text{static}}=\mu$. The static friction force
$F_{\text{s}}$ [see Eq. (3)] has an elastic term proportional to the relative
shear displacement $\zeta$ and a dissipative term proportional to the
tangential component $v_{i,j}^{\text{t}}$ of the relative velocity. In Eq.
(5), $\mathbf{s}$ is a unit vector normal to $\mathbf{r}_{ij}$. The elastic
and dissipative contributions are characterized by $k_{\text{s}}$ and
$\gamma_{\text{s}}$, respectively. The shear displacement $\zeta$ is
calculated through Eq. (4) by integrating $v_{i,j}^{\text{t}}$ from the
beginning of the contact (i.e., $t=t_{0}$). The tangential interaction behaves
like a damped spring which is formed whenever two grains come into contact and
is removed when the contact finishes [17].
The particular set of parameters used for the simulation is (unless otherwise
stated): $\mu=0.5$, $k_{n}=10^{5}(mg/d)$, $\gamma_{n}=300(m\sqrt{g/d})$,
$k_{s}=\frac{2}{7}k_{n}$ and $\gamma_{s}=200(m\sqrt{g/d})$. In some cases, we
have varied $\mu$ and $\gamma_{n}$ in order to control the friction and
restitution coefficient. The integration time step is set to
$\delta=10^{-4}\sqrt{d/g}$. The confining box ($13.39d$-wide and infinitely
high) contains $N=512$ monosized disks. Units are reduced with the diameter of
the disks, $d$, the disk mass, $m$, and the acceleration of gravity, $g$.
Tapping is simulated by moving the confining box in the vertical direction
following a half sine wave trajectory [$A\sin(2\pi\nu t)(1-\Theta(2\pi\nu
t-\pi))$]. The excitation can be controlled through the amplitude, $A$, and
the frequency, $\nu$, of the sinusoidal trajectory. We implement a robust
criterion based on the stability of particle contacts to decide when the
system has reached mechanical equilibrium [14] before a new tap is applied to
the sample. Averages were taken over 100 taps in the steady state and over 20
independent simulations for each value of $A$ and $\nu$.
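A short sketch of the tap protocol may help fix the notation; the functions below are illustrative and simply encode the half sine wave trajectory and the reduced peak acceleration $\Gamma=A(2\pi\nu)^{2}/g$ used later in the paper.

```python
import numpy as np

def tap_displacement(t, A, nu):
    """Vertical displacement of the box during a tap: A sin(2 pi nu t)
    during the first half period, zero afterwards."""
    phase = 2.0 * np.pi * nu * t
    return A * np.sin(phase) * (phase < np.pi)

def reduced_peak_acceleration(A, nu, g=1.0):
    """Gamma = A (2 pi nu)^2 / g."""
    return A * (2.0 * np.pi * nu) ** 2 / g

# Example in reduced units (g = d = 1): a tap with nu = 0.5 sqrt(g/d) and
# Gamma = 4.9 corresponds to an amplitude A = 4.9 g / (2 pi nu)^2 ~ 0.50 d.
A = 4.9 / (2.0 * np.pi * 0.5) ** 2
print(reduced_peak_acceleration(A, 0.5))   # ~4.9
```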
The volume, $V$, of the system after each tap can be obtained from the packing
fraction, $\phi$, as $V=N\pi(d/2)^{2}/\phi$. We measure $\phi$ in a
rectangular window centered at the center of mass of the packing. The
measuring region covers 90% of the height of the granular bed (which is of
about $40d$) and avoids the area close to the walls by $1.5d$. We have
observed that $\phi$ is sensitive to the chosen window. However, none of the
conclusions drawn in this paper are affected by this choice.
The stress tensor, $\sigma$, is calculated from the particle–particle contact
forces as
$\sigma^{\alpha\beta}=\frac{1}{V}\sum_{c_{ij}}{r^{\alpha}_{ij}f^{\beta}_{ij}},$
(6)
where the sum runs over all contacts.
The force moment tensor, $\Sigma$, is defined as $\Sigma\equiv V\sigma$.
During the course of a tap $\Sigma$ is non-symmetric; however, once mechanical
equilibrium is reached in accordance with our criterion, $\Sigma$ becomes
symmetric within a very small error compared with the fluctuations of
$\Sigma$. Although $\Sigma$ may depend on depth, we have measured the force
moment tensor by simply summing over particle–particle contacts in the entire
system.
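As an illustration of how Eq. (6) and the definition $\Sigma\equiv V\sigma$ translate into a calculation, the sketch below accumulates both tensors from a list of contacts; the data layout is hypothetical.

```python
import numpy as np

def stress_and_force_moment(contacts, volume):
    """Stress tensor of Eq. (6) and force moment tensor Sigma = V * sigma.

    contacts: iterable of (r_ij, f_ij) pairs, with r_ij the branch vector
    and f_ij the contact force (both 2D numpy arrays)."""
    sigma = np.zeros((2, 2))
    for r_ij, f_ij in contacts:
        sigma += np.outer(r_ij, f_ij)        # r_ij^alpha f_ij^beta
    sigma /= volume
    Sigma = volume * sigma
    # At mechanical equilibrium Sigma should be symmetric to within a small
    # error; the residual asymmetry serves as a consistency check.
    asymmetry = abs(Sigma[0, 1] - Sigma[1, 0])
    return sigma, Sigma, asymmetry
```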
The fluctuations of $\phi$ ($\Delta\phi$) and $\Sigma$ ($\Delta\Sigma$) are
calculated as the standard deviation in the 100 taps obtained in each steady
state. We average $\phi$, $\Sigma$, and their fluctuations over 20 independent
runs for each steady state and estimate error bars as the standard deviation
over these 20 runs.
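The averaging procedure just described amounts to the following few lines; the array shape is an assumption made only for illustration.

```python
import numpy as np

def steady_state_statistics(phi):
    """phi has shape (20, 100): one row of steady-state packing fractions
    per independent run, 100 taps each, as described in the text."""
    mean_phi = phi.mean()                 # steady-state packing fraction
    err_phi = phi.mean(axis=1).std()      # error bar on the mean over runs
    dphi_per_run = phi.std(axis=1)        # fluctuation within each run
    dphi = dphi_per_run.mean()            # reported Delta phi
    err_dphi = dphi_per_run.std()         # error bar on Delta phi over runs
    return mean_phi, err_phi, dphi, err_dphi
```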
## 3 Experimental method
Figure 1: (a) Schematic diagram of the experimental set-up. A: accelerometer,
S: shaker, C: camera, O: oscilloscope, FG: function generator, PC: computer.
(b) Example of an image of the packing. The region of measurement is indicated
by a red square.
The experimental set-up is sketched in Fig. 1(a). A quasi 2D Plexiglass cell
(28 mm wide and 150 mm high) is filled with 900 alumina oxide beads of
diameter $d=1\pm 0.005$ mm. The separation between the front and rear plates
was made 15% larger than the bead diameter. The cell is tapped by an
electromagnetic shaker (Tiravib 52100) with a train of half sine wave pulses
separated by five seconds. The tapping amplitude was controlled by adjusting
the intensity, $A$, and the frequency, $\nu$, of the pulse, and was
measured by an accelerometer attached to the base of the cell. Averages are
taken over 500 taps after equilibration.
High resolution images (more than 10 MPix) are taken after each tap. To
calculate the packing fraction, we only consider a rectangular zone at the
center of the packing whose limits are at 4 mm from the borders [see Fig.
1(b)]. We determine the centroid of each particle by means of a numerical
algorithm with subpixel resolution. Then, we calculate the packing fraction by
assuming that the 2D projection of each bead corresponds to a disk of diameter
$d=1$ mm. We estimate the packing fraction with a resolution of $\pm 0.001$.
As the separation between plates is larger than the particle diameter, small
overlaps between the spheres are present in the 2D projections of the images.
Therefore, the calculated packing fraction in some dense configurations may
come out slightly higher than the hexagonal disk packing limit
($\pi\sqrt{3}/6$).
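A minimal sketch of the last step of this image analysis is given below; the subpixel centroid detection itself is not reproduced, and counting every bead whose centroid falls inside the window as a whole disk is a simplification that is adequate away from the borders.

```python
import numpy as np

def packing_fraction(centroids, window, d=1.0):
    """2D packing fraction from bead centroids inside a rectangular window.

    centroids: iterable of (x, y) positions in mm,
    window: (x0, x1, y0, y1) in mm, d: projected disk diameter in mm."""
    x0, x1, y0, y1 = window
    n_inside = sum(1 for x, y in centroids if x0 <= x <= x1 and y0 <= y <= y1)
    disk_area = np.pi * (d / 2.0) ** 2
    return n_inside * disk_area / ((x1 - x0) * (y1 - y0))
```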
## 4 Tapping characterization and asymptotic equilibrium states
It is often debated [18, 19] which parameter is appropriate to characterize
the external excitation used to drive a granular sample. Dijksman et al. [18]
proposed a parameter related to the lift-off velocity of the granular bed.
Ludewig et al. [19] presented an energy based parameter. Pugnaloni et al.
[15, 20] suggested that the factor of expansion induced on the sample would be
a suitable measure [21]. The usual perspective to define a pulse parameter,
$\epsilon$, is to achieve a collapse of the $\phi$–$\epsilon$ curves as
different details of the pulse shape are changed (such as amplitude and
duration). Parameters defined in all these previous works fail to make such
curves collapse for the data presented in this paper. The main reason for this
is that our $\phi$–$\epsilon$ curves are non-monotonic (see for example Fig.
6) presenting a minimum whose depth depends on the details of the pulse shape.
Therefore, a simple rescaling of the horizontal axis does not suffice to
collapse the curves.
Since we are interested in macroscopic states, the actual pulse used to drive
the system is merely a control parameter but not a macroscopic variable that
describes the state. Therefore, the external pulse does not need to be
described with a simplified quantity. The complete functional form of the
pulse can be given instead. In our case, we use a sine pulse, and both the
pulse amplitude and frequency are needed to fully describe the excitation. We
will employ the usual nondimensional peak acceleration $\Gamma\equiv
a_{peak}/g=A(2\pi\nu)^{2}/g$ (where $g$ is the acceleration of gravity) and
the frequency $\nu$ to precisely define the external excitation. A detailed
study of the dynamical response of a granular bed to a pulse of controlled
intensity and duration can be found in Damas et al. [22].
One major issue in studying equilibrium states is what evidence one can
gather to indicate whether the system is actually at equilibrium [5]. Since the
definition of equilibrium is circular [23], we can simply do our best to check
if different properties of the system have well defined means (as well as
higher order moments of the distributions) which should not depend on the
history of the processes applied to the sample.
Figure 2: Evolution of $\phi$ towards the steady state starting from a
disordered configuration initially obtained by strong taps (experimental
results). Results for two tapping intensities are shown: $\Gamma=4.8$ (blue),
and $\Gamma=17$ (green). In both experiments, the frequency of the pulse is
$\nu=30$ Hz.
We have generated configurations corresponding to a particular pulse (of a
given shape and intensity) by repeatedly applying such pulse to the system and
allowing enough time for any transient to fade. To prove that our samples are
at equilibrium, we approach a particular pulse through different paths —and
starting from different initial conditions (ordered and disordered
configurations)— and confirm that the steady states obtained present
equivalent mean values and second moments of the distributions of the
variables of interest.
In Ref. [26], we showed that the steady states corresponding to excitations of
high intensity are reached in a few taps, even if the initial configuration
corresponds to a very ordered structure. In Fig. 2, we consider the
equilibration process from a highly disordered initial configuration. For low
tapping intensities (blue line in Fig. 2), the packing fraction evolves to the
steady state in two stages. Just after switching the tapping amplitude to the
reference value, the system rapidly evolves to values of $\phi$ close to the
final steady state. Beyond this initial convergence, a slower compaction phase
takes the system to the final steady state. For high tapping intensities, the
evolution to the steady state is very rapid. The steady state is reached after
about a hundred taps (green line in Fig. 2). Therefore, we apply a sequence
of at least 1000 taps in all our experiments before taking averages to ensure
that the steady state has been reached. In our simulations, 400 taps of
equilibration were enough.
Figure 3: Evolution of $\phi$ (a) and $\rm{Tr}(\Sigma)$ (b) as the pulse
intensity is suddenly changed between two values that produce steady states
with the same $\phi$ but different $\Sigma$ in our simulations. The initial
200 taps correspond to $\Gamma=4.9$, the middle 800 pulses to $\Gamma=61.5$
and the final 200 pulses, again, to $\Gamma=4.9$. In all cases,
$\nu=0.5\sqrt{g/d}$. The mean values and standard deviations in each section
are indicated by arrows.
An interesting example of equilibration is presented in Fig. 3. In this case,
we present the values of $\phi$ and of $\rm{Tr}(\Sigma)$ during a very special
sequence of pulses obtained in our simulations. The system is initially
deposited from a dilute random configuration with all particles in the air.
First, we tap the system 200 times at $\Gamma=4.9$, then 800 times at
$\Gamma=61.5$ and finally, 200 times at $\Gamma=4.9$. In the whole run, we
keep $\nu=0.5\sqrt{g/d}$. These two values of the pulse intensity have been
chosen because they are known to produce packings with the same mean $\phi$ in
the steady state [26]. However, the mean $\Sigma$ is clearly different.
Notice, however, that the same values of $\phi$, $\Sigma$ and its
fluctuations, $\Delta\phi$ and $\Delta\Sigma$, are observed for the steady
state of low $\Gamma$ obtained before and after the 800 pulses of high
$\Gamma$.
Unless otherwise stated, all the results we present in what follows correspond
to steady states. We have tested this by obtaining the same states through two
different preparation protocols consisting of: (i) application of a large
number of identical pulses starting from a disordered configuration, and (ii)
application of a reduced number of identical pulses after annealing the system
from much higher pulse intensities. In a few cases, the results (mean values
and/or fluctuations) from both protocols did not match. This was an indication
that a steady state, if it existed, was not reached by one of the protocols (or
by both protocols). This happens especially for low intensity taps, which
require longer equilibration times. Such cases have been removed from the
analysis.
## 5 Volume and volume fluctuations
In Fig. 4(a), we plot $\phi$ in the steady state as a function of $\Gamma$
from our experiments for different pulse frequencies [24]. As we can see,
there exists a minimum $\phi$ at relatively high $\Gamma$. A similar
experiment on a three-dimensional cell also yielded analogous results [26].
This behavior has also been reported for various models [15, 20, 25]. An
explanation for this, based on the formation of arches, has been given in
[15]. The position of the minimum in $\phi$ shifts to larger $\Gamma$ if the
frequency of the pulse is increased (i.e., if the pulse duration is reduced).
Figure 4: (a) Experimental results for the steady state packing fraction,
$\phi$, as a function of the reduced peak acceleration, $\Gamma$, for
different frequencies, $\nu$, of the tap pulse. (b) Histogram of the
configurations visited by the system for $\nu=30$ Hz: $\Gamma=15$ (green) and
$\Gamma=28$ (red). Figure 5: The steady state volume fraction fluctuations,
$\Delta\phi$, as a function of $\Gamma$ from our experiments with $\nu=30$ Hz.
The solid line is only a guide to the eyes.
Although the increment in the packing fraction beyond the minimum is rather
small, it is important to remark that this difference is not an artifact
introduced by our experimental resolution. In Fig. 4(b), we show the histogram
for the sequence of packings obtained for $\Gamma\simeq 15$ (the minimum
packing fraction for $\nu=30$ Hz) and for $\Gamma=28$ (the largest excitation
explored for $\nu=30$ Hz). Both steady states are statistically comparable;
however, it is possible to distinguish different mean values.
Since steady states of equal $\phi$ obtained at both sides of the $\phi$
minimum are generated via very different tap intensities, it is worth
assessing if such states are, in fact, equivalent. This can be done by
comparing the volume fluctuations of such states. A similar analysis was done
in Ref. [27] for states generated with different pulse amplitude and duration
in liquid fluidized beds.
The fluctuations of $\phi$ in the steady state, as measured by the standard
deviation $\Delta\phi$, are presented in Fig. 5. As we can see, a minimum in
the fluctuations is barely apparent. The position of this minimum in the
fluctuations coincides with the minimum in $\phi$. Unfortunately, the
resolution of $\phi$ in our experiments is comparable to the size of the
fluctuations. However, the results from the simulations are not limited in
this respect and we turn to those for a more reliable assessment of the
fluctuations.
In Fig. 6(a), we plot $\phi$ in the steady state as a function of $\Gamma$ for
our simulations. Although the fluctuations of $\phi$ are large [see Fig. 3],
its mean value is well defined with a small confidence interval (see error
bars). For low excitations, $\phi$ decreases as $\Gamma$ is increased.
However, beyond a certain value $\Gamma_{\rm{min}}$, the packing fraction
grows. The same trend is observed if the tap frequency $\nu$ is changed.
However, the minimum is deeper for lower $\nu$ and its position
$\Gamma_{\rm{min}}$ shifts to larger values of $\Gamma$ as $\nu$ is increased
in agreement with our experiments (see Fig. 4). Due to the change in the
depth of the $\phi$ minimum in the $\phi$–$\Gamma$ curves, a simple rescaling
of $\Gamma$ is unable to collapse the data for different frequencies. However,
rescaling $\phi$ and $\Gamma$ with $\phi_{min}$ and $\Gamma_{min}$,
respectively, yields very good collapse, even between simulated and
experimental data [26].
In Fig. 6(b), the volume fraction fluctuations, $\Delta\phi$, are plotted as a
function of $\Gamma$. As we can see, these fluctuations are non-monotonic, as
suggested by our experiments (Fig. 5). Non-monotonic volume fluctuations have
also been reported in Ref. [6]. For $\Delta\phi$, we obtain a minimum and a
maximum. We have also observed a maximum in $\Delta\phi$ for values of
$\Gamma$ below the ones reported here (see for example Refs. [26] and [25]).
However, we do not report such low values of $\Gamma$ in this work and focus
on tapping intensities that warrant the steady state with a modest number of
pulses.
Figure 6: (a) Simulation results for the volume fraction, $\phi$, as a
function of the reduced peak acceleration, $\Gamma$, for different
frequencies, $\nu$, of the tap pulse. (b) The corresponding volume fraction
fluctuation, $\Delta\phi$, as a function of $\Gamma$.
The value of $\Gamma$, at which the fluctuations display a minimum, coincides
with $\Gamma_{min}$, the value at which the minimum packing fraction,
$\phi_{min}$, is obtained. The maximum coincides with the inflection point in
the $\phi$–$\Gamma$ curve at higher $\Gamma$. Since one expects to find few
mechanically stable configurations compatible with a large volume (low
$\phi$), it seems reasonable that fluctuations reach a minimum if $\phi$ does
so. Similarly, there should be few low-volume, mechanically stable
configurations, which implies that fluctuations should diminish at high $\phi$.
Hence, a maximum in $\Delta\phi$ should be present at intermediate packing
fractions. This is more clearly seen in Fig. 7 where we plot the fluctuations
as a function of the average value of $\phi$.
Figure 7: Density fluctuations as a function of $\phi$ for different
frequencies of the tap pulse in the simulations.
Figure 7 presents two distinct branches: the lower branch corresponds to
$\Gamma>\Gamma_{min}$ and the upper branch to $\Gamma<\Gamma_{min}$. For
$\Gamma>\Gamma_{min}$, the fluctuations corresponding to different tap
durations collapse, suggesting that such equal-$\phi$, equal-$\Delta\phi$
states might correspond to unique states (below, we will find that this is not
the case). For $\Gamma<\Gamma_{min}$, we obtain states of same $\phi$ as some
states in the lower branch but presenting larger fluctuations. This is clear
evidence that the equal-$\phi$ states of the upper and lower branch are indeed
distinct and that other macroscopic variables must be used to distinguish one
from the other.
We have assessed a number of other structural descriptors (coordination
number, bond order parameter, radial distribution function). In all cases,
equal–$\phi$ states from the upper and lower branch of the $\phi$–$\Delta\phi$
curve (Fig. 7) present similar values of the structural descriptors with only
subtle discrepancies. Although this indicates that the states are not
equivalent, it also suggests that such descriptors are not good candidates to
form a set of macroscopic variables, along with $V$, to uniquely identify a
given steady state.
In Ref. [26], we have assessed the force moment tensor $\Sigma$ as a good
candidate to complete $V$ and $N$ in describing a stationary state. This has
been suggested by some theoretical speculations [12]. However, some authors
prefer to directly replace $V$ by $\Sigma$. Below, we will show that both $V$
and $\Sigma$ are required to describe the equilibrium states generated in our
simulations. Moreover, we will show that only one of the invariants of
$\Sigma$ is necessary (at least in our 2D systems) and that fluctuations of
these variables suggest that no other extra macroscopic parameter may be
required.
## 6 Stress tensor
Before we focus on the force moment tensor, we will consider the stress
tensor, $\sigma$, in order to understand the phenomenology of the force
distribution in our tapped granular beds. We recall that $\Sigma$ and $\sigma$
are simply related through $\Sigma\equiv V\sigma$. However, we have to bear in
mind that $V$ is not a simple constant since the volume of the system depends,
in a nontrivial way, on the shape and intensity of the excitation.
In Fig. 8, we show the components of $\sigma$ as a function of $\Gamma$ for
different $\nu$. As a reference, we show results of our simulations for a
frictionless system. In a frictionless system, the shear vanishes and
$\sigma_{yy}$ is only determined by the weight of the sample since the
Janssen’s effect is not present. As we can see, the frictionless sample
presents a constant value of $\sigma_{yy}$ for all $\Gamma$. For low $\Gamma$,
the frictional samples display values of $\sigma_{yy}$ below the frictionless
reference. This is a consequence of the Janssen’s effect since part of the
weight of the sample is supported by the wall friction. Consequently, in this
region, $\sigma_{xy}$ is also positive [see Fig. 8(b)]. However, for each
$\nu$, there is a critical value of $\Gamma$, $\Gamma_{shear=0}$. Beyond it,
the sample presents an apparent weight above the weight of the packing. In
correspondence with this, $\sigma_{xy}$ changes sign and becomes negative.
This indicates that, for $\Gamma>\Gamma_{shear=0}$, the frictional walls are
not supporting any weight. Rather, they prevent the packing from expanding by exerting
a downward frictional force. As $\Gamma$ is increased beyond
$\Gamma_{shear=0}$, the packing tends to store most of its stress in the
horizontal direction ($\sigma_{xx}$) while $\sigma_{yy}$ eventually saturates.
For very intense pulses, the sample expands and lifts off significantly during
the tap. When the bed falls back, it creates a very compressed structure with
most of the stress transmitted in the lateral directions and the wall friction
sustaining the system downwards. It is worth mentioning that
$\Gamma_{shear=0}$ is always higher than $\Gamma_{min}$ (see Fig. 6).
Figure 8: (a) Diagonal components of the stress tensor, $\sigma$, as a
function of $\Gamma$ for different frequencies $\nu$ of the tap pulse (the
upper set of curves corresponds to $\sigma_{yy}$ and the lower set to
$\sigma_{xx}$). (b) Off diagonal component of the stress tensor,
$\sigma_{xy}$. The horizontal line corresponds to $\sigma_{yy}$ [in panel (a)]
and to $\sigma_{xy}$ [in panel (b)] from simulations of a packing of
frictionless disks.
## 7 The force moment tensor master curve
Since the stress is not an extensive parameter, the force moment tensor is
generally used to characterize the macroscopic state [12, 13]. Therefore, we
will use $\Sigma$ in the rest of the paper. Let us simply remark that since
$V$ presents a non-monotonic response to $\Gamma$, the curves in Fig. 8
present a somewhat different shape if $\Sigma$ is plotted instead of $\sigma$.
In particular, $\Sigma_{yy}$ does not display the minimum observed at low
$\Gamma$ for $\sigma_{yy}$ in frictional disks; instead, it increases monotonically.
In Fig. 9, we show the trace of $\Sigma$ as a function of $\Gamma$. There is a
clear monotonic increase of $\rm{Tr}(\Sigma)$ as $\Gamma$ is increased.
Moreover, for a given $\Gamma$, if the frequency of the excitation pulse is
increased, a significant reduction in the force moment tensor is observed.
Figure 9: Trace, $\rm{Tr}(\Sigma)$, of the force moment tensor as a function
of $\Gamma$ for different frequencies $\nu$ of the tap pulse.
In Fig. 10, we plot the components of $\Sigma$ as a function of its trace for
all the steady states generated. We can see that all data for different
$\Gamma$ and $\nu$ collapse into three master curves. This indicates that if
two equilibrium states present the same $\rm{Tr}(\Sigma)$, all the components
of $\Sigma$ are also equal. Here, we point out a relevant piece of information
that will be discussed in the next section. Two states may present equal force
moment tensor but differ in volume. This means that many points collapsing in
Fig. 10 correspond to states of different $\phi$. Therefore, at equilibrium,
irrespective of the structure of the sample, two states with the same trace of
$\Sigma$ will present equal $\Sigma$.
In a liquid at equilibrium, the stress tensor is diagonal and all elements
along the diagonal are equal. This hydrostatic property allows us to know the
full stress tensor if we only know the hydrostatic pressure (i.e., if we only
know the trace of the tensor). In our granular samples, the force moment
tensor can also be known if the trace is known. However, the shape of the
tensor in static packings under gravity is defined by the three master curves
of Fig. 10.
Figure 10: (a) Diagonal components of the force moment tensor, $\Sigma$, as a
function of $\rm{Tr}(\Sigma)$ for different frequencies of the tap pulse (the
upper set of curves corresponds to $\Sigma_{yy}$ and the lower to
$\Sigma_{xx}$). (b) Off diagonal component of the force moment tensor,
$\Sigma_{xy}$.
To our knowledge, there is no previous speculation that this property must
hold for static granular packings. A more detailed study on the extent of this
commonality of the shape of the force moment tensor will be pursued in a
future paper [28]. However, we show some suggestive preliminary results below.
In Fig. 11, we show the components of $\Sigma$ as a function of
$\rm{Tr}(\Sigma)$ for a range of samples of different materials, for different
tapping intensities, and for different tapping frequencies. As we can see,
there is reasonable collapse of the data onto the same three master curves
shown in Fig. 10. This is an indication that these master curves may be
universal and enclose a rather fundamental underlying property (inaccessible
to us at this point) of static granular beds.
Figure 11: (a) Diagonal components of force moment tensor, $\Sigma$, as a
function of $\rm{Tr}(\Sigma)$ for different frequencies of the tap pulse,
different friction coefficient and different restitution (the upper set of
curves corresponds to $\Sigma_{yy}$ and the lower to $\Sigma_{xx}$). (b) Off
diagonal component of the force moment tensor, $\Sigma_{xy}$. The dashed lines
are only a guide to the eyes.
## 8 Force moment tensor fluctuations
Since we have shown in the previous section that only the trace of $\Sigma$
suffices (along with the master curves) to describe the full force moment
tensor, we will now focus on this invariant and its fluctuations. In Fig. 12,
the fluctuations of $\rm{Tr}(\Sigma)$ are plotted as a function of $\Gamma$.
We obtain a single minimum, in contrast with the minimum and maximum observed
in $\Delta\phi$. Interestingly, the states with minimum
$\Delta\rm{Tr}(\Sigma)$ correspond to the states where the minima of $\phi$ and
$\Delta\phi$ are reached for each $\nu$. However, unlike $\Delta\phi$, the
depth of the minimum in $\Delta\rm{Tr}(\Sigma)$ is fairly independent of
$\nu$. It is unclear why the force moment fluctuations should present a
minimum. Provided that the minimum of $\Delta\rm{Tr}(\Sigma)$ coincides with
the minimum of $\Delta\phi$, it can be speculated that a reduced number of
geometric configurations can accommodate a limited number of force
configurations. We have seen that all individual components of $\Sigma$
present the same minimum in their fluctuations; however, the actual values of
$\Delta\rm{Tr}(\Sigma)$ are dominated by $\Delta\Sigma_{xx}$, which takes
values five times larger than $\Delta\Sigma_{yy}$.
Figure 12: (a) Fluctuations of the trace of the force moment tensor as a
function of $\Gamma$ for different frequencies of the tap pulse. (b)
Fluctuations of the trace of the force moment tensor as a function of
$\rm{Tr}(\Sigma)$.
If we plot $\Delta\rm{Tr}(\Sigma)$ in terms of the average value
$\rm{Tr}(\Sigma)$ [see Fig. 12(b)], we can see that the curves collapse on top
of each other over a wide range of $\rm{Tr}(\Sigma)$. However, some deviations
are apparent at very low and very high forces. Although the fair collapse of
the curves suggests that states of equal-$\Sigma$ may correspond to the same
equilibrium states, we will see in the next section that many of these
equilibrium states are distinguishable through the volume. The inflection
observed at high $\rm{Tr}(\Sigma)$ corresponds to the change of regime
observed in Fig. 8(a) where the vertical stress saturates and most of the
contact forces are directed in the $x$–direction.
## 9 The thermodynamic phase space
As we have suggested, the mean volume of static granular samples is not
sufficient to describe the equilibrium state since states of equal-$\phi$ may
present distinct fluctuations. On the other hand, the force moment tensor
seems to be able to serve as a standalone descriptor since states of equal
$\Sigma$ do generally present the same $\Sigma$ fluctuations. However, states
of equal-$\Sigma$ may present different volumes. In Fig. 13, we plot the loci
of the equilibrium states generated in our simulations in a hypothetical
$\phi$–$\Sigma$ thermodynamic phase space. As we can see, states of equal $V$
but different $\Sigma$ are obtained as well as states of equal $\Sigma$ and
different $V$.
Figure 13: Phase space $\phi$–$\rm{Tr}(\Sigma)$. Loci visited in the
simulations for different frequencies of the tap pulse.
We can ask now if these two state variables suffice to fully describe the
equilibrium states. A hint that this may be the case is given by the fact that
states generated with different $\Gamma$ and $\nu$ but that correspond to the
same state in the $\phi$–$\Sigma$ plot display the same fluctuations of these
variables. In Fig. 14, we have highlighted some pairs of neighboring states.
We can see that such states do also present similar fluctuations [Fig. 14(b)].
In contrast, states which are distant in the $\phi$–$\Sigma$ plot present
distinct fluctuations even if they correspond to an equivalent mean volume or
an equivalent mean force moment tensor (see states joined by solid lines in
Fig. 14).
Figure 14: (a) Phase space $\phi$–$\rm{Tr}(\Sigma)$. (b) Fluctuations of the
state variables. Selected neighboring states are colored in pairs for
comparison. Distant states of equal $V$ or $\rm{Tr}(\Sigma)$ are joined by
thick lines.
Let us point out here that the mere coincidence of fluctuations in the
macroscopic variables is not a rigorous proof that the set of chosen variables
is a complete set of thermodynamic parameters. Future explorations of
these systems may confirm or disprove that $N$, $V$ and $\rm{Tr}(\Sigma)$ are
a full set of macroscopic, extensive variables able to describe all
equilibrium states. Meanwhile, it is clear that for moderate tapping
intensities, around which the minimum in $\phi$ is observed, the approximation
of a simple $NV$ or $N\Sigma$ ensemble is not warranted in view of the large
discrepancies between the curves generated with different pulse frequencies in
Fig. 13.
## 10 Concluding remarks
We have studied steady states of mechanically stable granular samples driven
by tap-like excitations. We have varied the external excitation by changing
both the pulse amplitude and the pulse duration. We have considered the
macroscopic extensive variables $V$ (volume) and $\Sigma$ (force moment
tensor), and their fluctuations. From the results, we can draw the following
conclusions:
* •
There seems to be a rather robust set of master curves for
$\Sigma_{\alpha\beta}$ which implies that the knowledge of $\rm{Tr}(\Sigma)$
suffices to infer the other components of the force moment tensor.
* •
The equilibrium states cannot be described by $V$ alone or by $\Sigma$ alone
(together with the number of particles $N$).
* •
The equilibrium states seem to be well described by the set
$NV\rm{Tr}(\Sigma)$.
There exists a number of points to be considered in view of these findings.
Here, we mention a few that may serve as starting points for future directions
of research:
* •
What is the extent to which the $\Sigma$ master curves are applicable? Is this
dependent on the dimensionality of the system, the excitation procedure, the
chosen contact force law, etc.?
* •
What is the dynamics during a single pulse that leads to the appearance of the
$\phi$ minimum? Is this minimum present in states generated with other types
of pulses like fluidization or shear? The ubiquity of this minimum in
simulation models [15, 20] suggests that it might be found in numerous
conditions.
* •
Are the fluctuations shown in Figs. 7 and 12 the definitive phenomenological
equations of state? Other authors have so far found monotonic density
fluctuations [27] or concave-up density fluctuations [6].
* •
How much of the $\phi$–$\rm{Tr}(\Sigma)$ plane can be explored by changing
material properties?
* •
Are there other excitation protocols (such as shearing) that may give rise to
steady states that are thermodynamically equivalent to the ones obtained by
tapping?
* •
Is it possible to construct a phenomenological entropy function from the
equations of state (Figs. 7 and 12) by simple integration of a Gibbs–Duhem-
like equation? Let us bear in mind that Fig. 7 is multiply valued.
It is worth stressing that if two ensembles generated by arbitrary excitation
protocols —such as tapping or shearing— happen to present the same mean values
(and fluctuations) for all macroscopic variables, then such macroscopic states
should be considered thermodynamically identical. However, it may be the case
that a given protocol produces a narrow range of macroscopic states that can
be eventually described with a reduced set of macroscopic variables.
###### Acknowledgements.
LAP acknowledges discussions with Massimo Pica Ciamarra. JD acknowledges a
scholarship of the FPI program from Ministerio de Ciencia e Innovación
(Spain). This work has been financially supported by CONICET (Argentina),
ANPCyT (Argentina), Project No. FIS2008-06034-C02-01 (Spain) and PIUNA (Univ.
Navarra).
## References
* [1] H B Callen, Thermodynamics and an Introduction to Thermostatistics, 2nd ed., Wiley-VCH, New York (1985).
* [2] R K Pathria, Statistical Mechanics, 2nd ed., Butterworth-Heinemann, Oxford (1996).
* [3] E R Nowak, J B Knight, M L Povinelli, H M Jaeger, S R Nagel, Reversibility and irreversibility in the packing of vibrated granular material, Powder Tech. 94, 79 (1997).
* [4] P Richard, M Nicodemi, R Delannay, P Ribière, D Bideau, Slow relaxation and compaction of granular systems, Nature Mater. 4, 121 (2005).
* [5] Ph Ribière, P Richard, P Philippe, D Bideau, R Delannay, On the existence of stationary states during granular compaction, Eur. Phys. J. E 22, 249 (2007).
* [6] M Schröter, D I Goldman, H L Swinney, Stationary state volume fluctuations in a granular medium, Phys. Rev. E 71, 030301(R) (2005).
* [7] S F Edwards, R B S Oakeshott, Theory of powders, Physica A 157, 1080 (1989).
* [8] J H Snoeijer, T J H Vlugt, W G Ellenbroek, M van Hecke, J M J van Leeuwen, Ensemble theory for force networks in hyperstatic granular matter, Phys. Rev. E. 70, 061306 (2004).
* [9] S F Edwards, The full canonical ensemble of a granular system, Physica A 353, 114 (2005).
* [10] S Henkes, C S O’Hern, B Chakraborty, Entropy and temperature of a static granular assembly: An ab initio approach, Phys. Rev. Lett. 99, 038002 (2007).
* [11] S Henkes, B Chakraborty, Stress correlations in granular materials: An entropic formulation, Phys. Rev. E. 79, 061301 (2009).
* [12] R Blumenfeld, S F Edwards, On granular stress statistics: Compactivity, angoricity, and some open issues, J. Phys. Chem. B 113, 3981 (2009).
* [13] B P Tighe, A R T van Eerd, T J H Vlugt, Entropy maximization in the force network ensemble for granular solids, Phys. Rev. Lett. 100, 238001 (2008).
* [14] R Arévalo, D Maza, L A Pugnaloni, Identification of arches in two-dimensional granular packings, Phys. Rev. E 74, 021303 (2006).
* [15] L A Pugnaloni, M Mizrahi, C M Carlevaro, F Vericat, Nonmonotonic reversible branch in four model granular beds subjected to vertical vibration, Phys. Rev. E 78, 051305 (2008).
* [16] J Schäfer, S Dippel, D E Wolf, Force schemes in simulations of granular materials, J. Phys. I (France) 6, 5 (1996).
* [17] H Hinrichsen, D Wolf, The physics of granular media, Wiley-VCH, Weinheim (2004).
* [18] J A Dijksman, M van Hecke, The role of tap duration for the steady-state density of vibrated granular media, Eur. Phys. Lett. 88, 44001 (2009).
* [19] F Ludewig, S Dorbolo, T Gilet, N Vandewalle, Energetic approach for the characterization of taps in granular compaction, Eur. Phys. Lett. 84, 44001 (2008).
* [20] P A Gago, N E Bueno, L A Pugnaloni, High intensity tapping regime in a frustrated lattice gas model of granular compaction, Granular Matter 11, 365 (2009).
* [21] Notice however, that this expansion parameter is not based on the properties of the mechanical excitation itself but on the sample response to it.
* [22] J Damas et al., (unpublished)
* [23] According to Callen [1], a system is at equilibrium if Thermodynamics holds for such state.
* [24] Notice, that in Fig. 1(c) of our previous work [26], the maximum value of $\Gamma$ reported was half of the value shown here. This discrepancy is due to a deficient filtering of the accelerometers used in Ref. [26]. The accelerations reported here have been corrected and checked against measurements done with a high speed camera.
* [25] C M Carlevaro, L A Pugnaloni, Steady state of tapped granular polygons, J. Stat. Mech. P01007 (2011).
* [26] L A Pugnaloni, I Sánchez, P A Gago, J Damas, I Zuriguel, D Maza, Towards a relevant set of state variables to describe static granular packings, Phys. Rev. E 82, 050301(R) (2010).
* [27] M Pica Ciamarra, A Coniglio, M Nicodemi, Thermodynamics and statistical mechanics of dense granular media, Phys. Rev. Lett. 97, 158001 (2006).
* [28] L. A. Pugnaloni, et al., (unpublished).
|
arxiv-papers
| 2011-05-24T19:57:14 |
2024-09-04T02:49:19.069067
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Luis A. Pugnaloni, Jos\\'e Damas, Iker Zuriguel, Diego Maza",
"submitter": "Luis Ariel Pugnaloni",
"url": "https://arxiv.org/abs/1105.4874"
}
|
1105.4965
|
Evolution of scaling emergence in large-scale spatial epidemic spreading
Lin Wang, Xiang Li$\ast$, Yi-Qing Zhang, Yan Zhang, Kan Zhang
Adaptive Networks and Control Lab, Department of Electronic Engineering, Fudan
University, Shanghai 200433, PR China
$\ast$ Corresponding author: lix@fudan.edu.cn
## Abstract
Background: Zipf’s law and Heaps’ law are two representatives of the scaling
concepts, which play a significant role in the study of complexity science.
The coexistence of the Zipf’s law and the Heaps’ law motivates different
understandings of the dependence between these two scalings, which has still
hardly been clarified.
Methodology/Principal Findings: In this article, we observe an evolution
process of the scalings: the Zipf’s law and the Heaps’ law are naturally
shaped to coexist at the initial time, while a crossover comes with the
emergence of their inconsistency at later times before reaching a stable
state, where the Heaps’ law still exists while the strict Zipf’s law
disappears. Such findings are illustrated with a scenario of large-scale
spatial epidemic spreading, and the empirical results of pandemic disease
support a universal analysis of the relation between the two laws regardless
of the biological details of the disease. Employing the United States (U.S.)
domestic air transportation and demographic data to construct a metapopulation
model for simulating the pandemic spread at the U.S. country level, we uncover
that the broad heterogeneity of the infrastructure plays a key role in the
evolution of scaling emergence.
Conclusions/Significance: The analyses of large-scale spatial epidemic
spreading help to understand the temporal evolution of scalings, indicating that the
coexistence of the Zipf’s law and the Heaps’ law depends on the collective
dynamics of epidemic processes. The heterogeneity of epidemic spread
underlines the significance of performing targeted containment strategies at
the early stage of a pandemic disease.
## Introduction
Scaling concepts play a significant role in the field of complexity science,
where a considerable amount of effort is devoted to understanding these
universal properties underlying multifarious systems[1-4]. Two representatives
of scaling emergence are the Zipf’s law and the Heaps’ law. G.K. Zipf, sixty
years ago, found a power law distribution for the occurrence frequencies of
words within different written texts, when they were plotted in a descending
order against their rank[5]. This frequency-rank relation also corresponds to
a power law probability distribution of the word frequencies[32]. The Zipf’s
law is found to hold empirically for a great deal of complex systems, e.g.,
natural and artificial languages[5-9], city sizes[10,11], firm sizes[12],
stock market index[13,14], gene expression[15,16], chess opening[17],
arts[18], paper citations[19], family names[20], and personal donations[21].
Many mechanisms are proposed to trace the origin of the Zipf’s law[22-24].
Heaps’ law is another important empirical principle describing the sublinear
growth of the number of unique elements as the system size keeps growing[25].
Recently, particular attention has been paid to the coexistence of
the Zipf’s law and the Heaps’ law, which is reported for the corpus of web
texts[26], keywords in scientific publication[27], collaborative tagging in
web applications[28,29], chemoinformatics[30], and, closer to the interest of
this article, global pandemic spread[31].
In [33,34], an improved version of the classical Simon model[35] was put
forward to investigate the emergence of the Zipf’s law, which is deemed to
result from the existence of the Heaps’ law. However, [26,32] concluded that
the Zipf’s law leads to the Heaps’ law. In fact, the interdependence of these
two laws has hardly been clarified. This difficulty comes from the fact
that the empirical/simulated evidence employed to show the emergence of Zipf’s
law mainly deals with static and finalized specimens/results, while the Heaps’
law actually describes the evolving characteristics.
In this article, we investigate the relation between these scaling laws from
the perspective of coevolution between the scaling properties and the epidemic
spread. We take the scenario of large-scale spatial epidemic spreading as an
example, since the empirical data contain sufficient spatiotemporal
information making it possible to visualize the evolution of the scalings,
which allows us to analyze the inherent mechanisms of their formation. The
Zipf’s law and the Heaps’ law of the laboratory confirmed cases are naturally
shaped to coexist during the early epidemic spread at both the global and the
U.S. levels, while the crossover comes with the emergence of their
inconsistency as the epidemic keeps on prevailing, after which the Heaps’ law
still holds while the strict Zipf’s law disappears. With the U.S. domestic air
transportation and demographic data, we construct a fine-grained
metapopulation model to explore the relation between the two scalings, and
recognize that the broad heterogeneity of the infrastructure plays a key role
in their temporal evolution, regardless of the biological details of diseases.
## Results
### Empirical and Analytical Results
With the empirical data of the laboratory confirmed cases of the A(H1N1)
provided by the World Health Organization (WHO) (see the data description in
_Materials and Methods_), we first study the probability-rank
distribution (_PRD_) of the cumulative confirmed number (_CCN_) of every
infected country at several given dates sampled about every two weeks.
$C_{j}(t)$ denotes the _CCN_ in a given country $j$ at time $t$. Since
$C_{j}(t)$ grows with time, the distributions at different dates are
normalized by the global _CCN_ , $C_{T}(t)=\sum_{j}C_{j}(t)$, for comparison.
Fig.1(A) shows the Zipf-plots of the _PRD_ $P_{t}(r)$ of the infected
countries’ confirmed cases by arranging every $C_{j}(t)/C_{T}(t)>0$ in a
descending order for each specimen. The maximal rank $r_{t,max}$ (on the x-axis)
for each specimen denotes the total number of infected countries at a given
date, and grows as the epidemic spreads.
At the early stage (the period between April 30th and June 1st, 2009),
$P_{t}(r)$ shows a power law pattern $P_{t}(r)\sim r^{-\theta}$, which
indicates the emergence of the Zipf’s law. We estimate the power law exponent
$\theta$ for each specimen of this stage by the maximum likelihood
method[22,37], and report its temporal evolution in the left part of Fig.1(C).
About sixty countries were affected by the A(H1N1) on June 1st, and most of
them are countries with large population and/or economic power, e.g., U.S.,
Mexico, Canada, Japan, Australia, China. After June 1st, the disease swept
much more countries in a short time, and the WHO announcement on June 11th[38]
raised the pandemic level to its highest phase, phase 6(see _Text S1_), which
implied that the global pandemic flu was occurring. At this stage(after June
1st, 2009), $P_{t}(r)$ gradually displays a power law distribution with an
exponential cutoff $P_{t}(r)\sim r^{-\theta}exp(-r/r_{c})$, where $r_{c}$ is
the parameter controlling the cutoff effect(see _Text S1_), and the exponent
$\theta$ gradually reduces to around 1.7, as shown in Fig.1(C). Surprisingly,
$P_{t}(r)$ at different dates eventually reaches a stable distribution as time
evolves (see the curves from June onward in Fig.1(A)). Indeed, after June 19th,
$\theta$ seems to reach a stable value with mild fluctuations, as shown in
Fig.1(C). The temporal evolution of the parameter
$r_{c}$ is similar to that of $\theta$; thus, we mainly present the empirical results
for the exponent $\theta$ in the main text and defer the results for $r_{c}$ to
Figure S1. In the following, we analyze the evolution of the normalized
distribution $P_{t}(r)$ through the contact process of epidemic transmission,
regardless of the biological details of the disease.
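For concreteness, one common way to implement the maximum likelihood estimate of the power law exponent mentioned above is the continuous estimator of Clauset, Shalizi and Newman applied to the case counts. The exact procedure of refs. [22,37] is not reproduced here, the data below are placeholders, and the conversion between the distribution exponent and the Zipf exponent uses the relation $\phi=1+\theta^{-1}$ that appears below in Eq. (10).

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """Maximum likelihood estimate of alpha in p(x) ~ x^(-alpha), x >= x_min
    (continuous approximation), plus its standard error."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    n = len(x)
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    return alpha, (alpha - 1.0) / np.sqrt(n)

# Placeholder cumulative confirmed numbers C_j(t) for one date:
C = [4000, 1200, 800, 500, 300, 150, 90, 60, 30, 10, 5, 2, 1]
alpha, err = powerlaw_mle(C, x_min=1)
theta = 1.0 / (alpha - 1.0)        # Zipf exponent from alpha = 1 + 1/theta
print(round(theta, 2), round(err, 2))
```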
Straightforwardly, according to the mass action principle in mathematical
epidemiology[39,40] (see _Text S1_), which is widely applied in studying the
epidemic spreading process on a network[41-56], we consider the SIR epidemic
scheme here,
$\displaystyle(\mathcal{D}^{[S]}_{j},\mathcal{D}^{[I]}_{j},\mathcal{D}^{[R]}_{j})\rightarrow\begin{cases}(\mathcal{D}^{[S]}_{j}-1,\mathcal{D}^{[I]}_{j}+1,\mathcal{D}^{[R]}_{j}),&\text{with rate }\beta\mathcal{D}^{[S]}_{j}\mathcal{D}^{[I]}_{j}/N_{j},\\(\mathcal{D}^{[S]}_{j},\mathcal{D}^{[I]}_{j}-1,\mathcal{D}^{[R]}_{j}+1),&\text{with rate }\mu\mathcal{D}^{[I]}_{j},\end{cases}$ (3)
where $\mathcal{D}_{j}^{[\varphi]}$ denotes the number of individuals in
compartment $[\varphi]$ (susceptible (S), infectious (I) or permanently
recovered (R)) in a given country $j$, $\beta$ denotes the disease transmission
rate, and infectious individuals recover with probability $\mu$. The
population in a given country $j$ at time $t$ is
$N_{j}(t)=\sum_{\varphi}\mathcal{D}^{[\varphi]}_{j}(t)$, where $t=0$ means the
time when initially confirmed cases in the entire system are reported. At the
early stage of a pandemic outbreak, the new introductions of infectious
individuals dominate the onset of outbreak in unaffected countries. However,
after the disease already lands in these countries, the ongoing indigenous
transmission gradually exceeds the influence of the new introductions, and
becomes the mainstream of disseminators[57,58]. According to Eq.(3), in a
given infected country $j$, there are
$\displaystyle\mathcal{D}^{[I_{new}]}_{j}(t+1)=\beta\mathcal{D}^{[S]}_{j}(t)\mathcal{D}^{[I]}_{j}(t)/N_{j}(t)$
(4)
new infected individuals on average at $t+1$ days, and the average number of
illness at $t+1$ days is
$\displaystyle\mathcal{D}^{[I]}_{j}(t+1)=(1-\mu+\beta\mathcal{D}^{[S]}_{j}(t)/N_{j}(t))\mathcal{D}^{[I]}_{j}(t).$
(5)
Defining $\chi(t)=-\mu+\beta\mathcal{D}^{[S]}_{j}(t)/N_{j}(t)$ and
$\mathcal{Y}(t)=\mathcal{D}^{[S]}_{j}(t)/N_{j}(t)$, we have
$\displaystyle\mathcal{D}^{[I]}_{j}(t+1)=\prod^{t}\limits_{t^{\prime}=t_{1}}[1+\chi(t^{\prime})]\mathcal{D}^{[I]}_{j}(t_{1})\
\ and\ \
\mathcal{D}^{[I_{new}]}_{j}(t+1)=\beta\mathcal{Y}(t)\prod\limits^{t-1}_{t^{\prime}=t_{1}}[1+\chi(t^{\prime})]\mathcal{D}^{[I]}_{j}(t_{1}),$
(6)
where $\mathcal{D}^{[I]}_{j}(t_{1})$ denotes the number of initially confirmed
or introduced cases in country $j$, and is always a small positive integer.
The _CCN_ of country $j$ at $t+1$ days is
$C_{j}(t+1)=C_{j}(t)+\mathcal{D}^{[I_{new}]}_{j}(t+1)$. When $t$ is large
enough, we have
$\displaystyle
C_{j}(t+1)/C_{j}(t)=1+\beta\mathcal{Y}(t)\prod^{t-1}\limits_{t^{\prime}=t_{1}}[1+\chi(t^{\prime})]\mathcal{D}^{[I]}_{j}(t_{1})/C_{j}(t).$
(7)
Before the disease dies out in country $j$, $C_{j}(t)$ keeps increasing from
the onset of the outbreak[59]. When $t$ is large enough, we obviously have
$C_{j}(t)\gg 0$, $0\leqslant\mathcal{Y}(t)\ll
1$, $-\mu\leqslant\chi(t^{\prime})\ll\beta-\mu$, thus
$\prod^{t-1}\limits_{t^{\prime}=t_{1}}[1+\chi(t^{\prime})]$ is definitely
larger than $0$ and cannot diverge. $\mathcal{D}_{j}^{[I]}(t_{1})$ is
a small positive integer, thus $\mathcal{D}_{j}^{[I]}(t_{1})/C_{j}(t)\sim 0$
when $t$ is large enough. We therefore have $C_{j}(t+1)/C_{j}(t)\sim
1$, $j\leqslant M(t+1)$ for large $t$, where $M(t+1)$ is the total number of
infected countries after $t+1$ days of spreading. Thus the normalized
probability $P_{t+1}(r(j))$ at day $t+1$ is:
$\displaystyle P_{t+1}(r(j))=\frac{C_{j}(t+1)}{C_{T}(t+1)}=\frac{C_{j}(t)}{\sum_{j}C_{j}(t)}=P_{t}(r(j)),\quad j\leqslant M(t+1),\ \text{for large }t,$ (8)
where $r(j)$ is the rank of the _CCN_ of country $j$ in the descending order
of the _CCN_ list of all infected countries. Eq.(8) indicates that each
probability $P_{t}(r(j))$ is invariant for large $t$, thus the normalized
distribution $P_{t}(r)$ becomes stable when $t$ is large enough. The intrinsic
reasons for the emergence of these scaling properties are discussed in
_Modeling and Simulation Results_.
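A quick numerical illustration of Eq. (8) is given below: iterating the deterministic recursions (4)–(5) independently in a set of model countries with heterogeneous populations and seeding times, the Zipf-ordered distribution $P_{t}(r)$ stops changing once $t$ is large. All parameter values and the seeding rule are illustrative assumptions, not the data or model of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, mu, T, M = 0.4, 0.25, 200, 60       # illustrative parameters
N = rng.integers(100_000, 100_000_000, size=M).astype(float)   # populations
t0 = rng.integers(0, 60, size=M)          # day each country is seeded
S, I, C = N.copy(), np.zeros(M), np.zeros(M)

snapshots = []
for t in range(T):
    seed = (t == t0)
    I[seed] += 1.0                         # one introduced case per country
    S[seed] -= 1.0
    new = beta * S * I / N                 # Eq. (4): new cases per day
    S, I = S - new, I + new - mu * I       # Eq. (5)
    C += new                               # cumulative confirmed number
    if t % 50 == 49 and C.sum() > 0:
        snapshots.append(np.sort(C[C > 0])[::-1] / C.sum())   # P_t(r)

for P in snapshots:                        # successive snapshots converge
    print(np.round(P[:5], 3))
```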
Since the normalized _PRD_ $P_{t}(r)$ displays the Zipf’s law pattern
$P_{t}(r)\sim r^{-\theta}$ at the early stage of the epidemic, the _CCN_ of
the country ranked $r$ is $C_{r}(t)\sim C_{T}(t)\cdot r^{-\theta}$ at this
stage. Considering the _CCN_ of the countries with ranks between $r$ and
$r+\delta r$, where $\delta r$ is any infinitesimal value, we have $\delta
C_{r}(t)\sim-\theta r^{-\theta-1}C_{T}(t)\delta r$. Supposing $\delta r\sim
P_{C_{r}}(t)\delta C_{r}(t)$ with $P_{C_{r}}$ denoting the probability density
function, we have
$\displaystyle P_{C_{r}}(t)\sim-\theta^{-1}r^{\theta+1}C_{T}^{-1}(t).$ (9)
Thus
$\displaystyle
P_{C_{r}}(t)=\mathcal{A}(1-\phi)C_{T}^{\phi-1}(t)C^{-\phi}_{r}(t),$ (10)
where $\phi=1+\theta^{-1}$, $\mathcal{A}$ is a constant. According to the
normalization condition
$\int_{C^{min}(t)}^{C^{max}(t)}P_{C_{r}}(t)dC_{r}(t)=1$, where
$C^{max}(t)(C^{min}(t))$ is the _CCN_ of the country with the maximal(minimal)
value at a given time $t$, we have
$\mathcal{A}=-C_{T}^{1-\phi}(t)C^{min}(t)^{\phi-1}$ because
$\phi=1+\theta^{-1}>1$ and $C^{max}(t)\gg 0$. Then
$\displaystyle P_{C_{r}}(t)=(\phi-1)C^{min}(t)^{\phi-1}C_{r}^{-\phi}(t).$ (11)
At a given date, $r$ can be regarded as the number of countries with the
amount of cumulated confirmed cases which is no less than $C_{r}(t)$, then
$\displaystyle
r=\int_{C_{r}(t)}^{C^{max}(t)}M(t)P(C_{r}^{\prime}(t))dC_{r}^{\prime}(t).$
(12)
Recalling $r\sim(C_{T}(t)/C_{r}(t))^{\frac{1}{\theta}}$, we have
$\displaystyle M(t)\sim(\frac{C_{T}(t)}{C^{min}(t)})^{\eta},$ (13)
where $\eta=1/\theta$. At the early stage corresponding to the period between
April 30th and June 1st, $C^{min}(t)$ is one according to the WHO data.
Therefore, we have
$\displaystyle M(t)\sim C^{\eta}_{T}(t),\eta=1/\theta,$ (14)
which indicates that the Heaps’ law[25,26,31,32] can be observed in this case.
The empirical evidence for the emergence of the Heaps’ law at this stage is
shown in the middle part of Fig.1(E). The Heaps’ exponent $\eta$ is obtained
by the least square method[31,32], and the relevance between $\theta$ and
$\eta$ is reported in Table 1.
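The least squares estimate of $\eta$ referred to here is simply a linear fit in log-log scale; the sketch below uses placeholder time series in place of the WHO data.

```python
import numpy as np

def heaps_exponent(C_T, M):
    """Least squares fit of eta in M ~ C_T**eta from the time series of the
    total case count C_T(t) and the number of infected regions M(t)."""
    eta, log_prefactor = np.polyfit(np.log(C_T), np.log(M), 1)
    return eta

# Placeholder series; with the real WHO data this returns eta ~ 1/theta at
# the early stage, in line with Eq. (14).
C_T = np.array([10.0, 30, 90, 270, 800, 2400, 7000])
M = np.array([3.0, 6, 11, 20, 35, 60, 105])
print(round(heaps_exponent(C_T, M), 2))
```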
At the later stage (the period after June 1st, 2009), the exponential tail of
the distribution $P_{t}(r)$ leads to a deviation from the strict Zipf’s law.
However, with a steeper exponent $\eta\approx 0.473$, the Heaps’ law still
exists, as shown in the right part of Fig.1(E). Though the two scaling laws
are naturally shaped to coexist during the early epidemic spreading, their
inconsistency gradually emerges as the epidemic keeps on prevailing. Indeed,
in the _Discussion_ of [32], without empirical or analytical evidence, Lü et
al have intuitively suspected that there may exist some unknown mechanisms
only producing the Heaps’ law, and it is possible that a system displaying the
Heaps’ law does not obey the strict Zipf’s law. Here we not only verify this
suspicion with the empirical results, but also explore the substaintial
mechanisms of the evolution process in _Modeling and Simulation Results_ ,
where we uncover the important role of the broad heterogeneity of the
infrastructure in the temporal evolution of scaling emergence.
We also empirically study the evolution of scaling emergence of the epidemic
spreading at the countrywide level. Since the United States was one of the
earliest affected and most severely hit countries during the A(H1N1) pandemic[60], we
mainly focus on the A(H1N1) spreading in the United States. With the empirical
data of the laboratory confirmed cases of the A(H1N1) provided by the Centers
for Disease Control and Prevention (CDC) (see the data description in _Materials
and Methods_), in Fig.1(B) we report the _PRD_ of the _CCN_ of infected
states, $P^{us}_{t}(r)$, at several given dates sampled about every two weeks.
Our findings suggest a crossover in the temporal evolution of $P^{us}_{t}(r)$.
At the early stage (the period before May 15th), $P^{us}_{t}(r)$ shows a power
law pattern $P^{us}_{t}(r)\sim r^{-\theta_{us}}$ with a much smaller exponent
$\theta_{us}$ than that of the WHO results. Washington D.C. and 46
states (excluding Alaska, Mississippi, West Virginia and Wyoming) were affected by
A(H1N1) on May 15th. After May 15th, $P^{us}_{t}(r)$ gradually becomes a power
law distribution with an exponential cutoff, $P^{us}_{t}(r)\sim
r^{-\theta_{us}}exp(-r/r^{us}_{c})$, which leads to a deviation from the
strict Zipf’s law. In this case, the exponent $\theta_{us}$ gradually reduces
and reaches a stable value of 0.45 (see Fig.1(D)), which conforms to the fact that
$P^{us}_{t}(r)$ at different dates eventually reaches a stable distribution as
time evolves. The temporal evolution of the exponent $\theta_{us}$ for all data
is shown in Figure S2. $r_{c}$ stays around 14 after June 12th,
2009.
The relation between $M_{us}(t)$ and $C^{us}_{T}(t)$ is shown in Fig.1(F).
Though at first glance this figure gives the impression of sublinear
growth of the number of infected states $M_{us}(t)$ as the cumulative number
of national total patients $C^{us}_{T}(t)$ increases, we could not use the
least square method here to estimate the Heaps’ exponent $\eta_{us}$ for
several reasons: (i) the amount of data at each stage is quite small; (ii)
there are several periods during which $M_{us}(t)$ remains unchanged (May 6th
$\rightarrow$ May 7th, $M_{us}(t)=41$; May 12th $\rightarrow$ May 13th,
$M_{us}(t)=45$; May 18th $\rightarrow$ May 27th, $M_{us}(t)=48$); (iii) the
magnitude of $C^{us}_{T}(t)$ is much larger than that of $M_{us}(t)$; (iv)
after June 1st, 2009, Washington D.C. and all 50 states of the United States
were affected by the A(H1N1). Define $M^{max}$ as the maximal number of
geographical regions the epidemic spreads to. In the U.S. scenario,
$M^{max}_{us}=51$. When $M_{us}(t)$ reaches $M^{max}_{us}$ on June 1st,
$P^{us}_{t}(r)$ evolves and becomes stable after June 26th (see Fig.1(B,D)). In
_Modeling and Simulation Results_, we explore the relation between these
two scalings with a fine-grained metapopulation model characterizing the
spread of the A(H1N1) at the U.S. level in detail.
Note that these scaling properties are not exclusive to the A(H1N1)
transmission. Further supporting examples are reported in _Figure S3_,
e.g. the cases of SARS and Avian Influenza (H5N1). It is worth remarking that the
normalized distribution $P_{t}(r)$ essentially retains the power law pattern during
the whole spreading process of the global SARS. This phenomenon might result
from the intense containment strategies, e.g. patient isolation, enforced
quarantine, school closing and travel restriction, implemented by individuals or
governments confronting a deadly disease.
### Modeling and Simulation Results
The above analyses, however, do not tell the whole story, because the
intrinsic reasons for the emergence of these scaling properties have not been
explained. Some additional clues from the perspective of Shannon entropy[61]
of a system might unlock the puzzle.
Nowadays, population explosion in the urban areas, massive interconnectivity
among different geographical regions, and huge volume of human mobility are
the factors accelerating the spread of infectious disease[62,74]. At a large
geographical scale, one main class of models is the metapopulation model
dividing the entire system into several interconnected
subpopulations[58,63-74,87,88]. Within each subpopulation, the infectious
dynamics is described by the compartment schemes, while the spread from one
subpopulation to another is due to the transportation and mobility
infrastructures, e.g., air transportation. Individuals in each subpopulation
exist in discrete health compartments(statuses), i.e., susceptible,
latent, infectious, recovered, etc., with compartmental transitions driven by the
contagion process or by spontaneous transition, and might travel to other
subpopulations by vehicles, e.g., airplanes, in a short time. The
metapopulation model can not only be employed to describe the global pandemic
spread when we regard each subpopulation as a given country, but can also be used
to simulate the disease transmission within a country when each subpopulation
is regarded as a given geographical region of that country. Here we mainly
consider the spread of pandemic influenza at the U.S. country level for
three reasons: (i) the computational cost of simulating global pandemic
spread is too great to implement on a single PC or
server[58,70,72,81,87]; (ii) the IATA or OAG flight schedule data, which are
widely used to obtain the global air transportation network, do not provide
passenger attendance and flight-connection information(see data description in
_Materials and Methods_); (iii) the United States was among the
earliest and most severely affected countries[60].
We construct a metapopulation model at the U.S. level with the U.S. domestic
air transportation and demographic statistical data[75-78](a detailed data
description is provided in _Materials and Methods_ , and a full specification
of the simulation model is reported in _Text S1_). We define a subpopulation as a
Metropolitan/Micropolitan Statistical Area(MSA/$\mu$SA)[75]; subpopulations are
connected by a transportation network, in this article the U.S. domestic airline
network(USDAN). The USDAN is a weighted graph comprising $V=406$
vertices(airports) and $E=6660$ weighted and directed edges denoting flight
routes. The weight of each edge is the daily number of passengers on that
route. The infrastructure of the USDAN presents high levels of
heterogeneity in connectivity patterns, traffic capacities and population(see
Fig.2). The disease dynamics in a single subpopulation is modeled with the
Susceptible-Latent-Infectious-Recovered(SLIR) compartmental scheme, where
L denotes the latent compartment, in which an infected person spends
$\epsilon^{-1}$ days on average (the SIR epidemic dynamics discussed in
_Empirical and Analytical Results_ is a reasonable approximation, which
simplifies the epidemic evolution to a Markov chain; since the value of the
reproductive number $R_{0}$ does not depend on $\epsilon$, we ignore the
compartment L there).
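As a rough illustration of how such a weighted, directed network could be assembled, the following Python sketch builds a graph from a hypothetical edge list; the file name `usdan_edges.csv` and its column layout (origin, destination, daily passengers, no header row) are assumptions, since the BTS report would have to be preprocessed into this form.

```python
# Sketch: build a weighted, directed airline network from a hypothetical edge list.
import csv
import networkx as nx

G = nx.DiGraph()
with open("usdan_edges.csv", newline="") as f:
    for origin, dest, passengers in csv.reader(f):
        G.add_edge(origin, dest, weight=float(passengers))  # daily passengers on the route

print(G.number_of_nodes(), "airports,", G.number_of_edges(), "directed routes")
mean_out_degree = sum(d for _, d in G.out_degree()) / G.number_of_nodes()
print("average out-degree:", mean_out_degree)
```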
The key parameters determining the spreading rate of infections are the
reproductive number $R_{0}$ and the generation time $G_{t}$. $R_{0}$ is
defined as the average number of individuals an infectious person infects during his
or her infectious period $\mu^{-1}$ in a large fully susceptible population,
and $G_{t}$ is the sum of the latent period $\epsilon^{-1}$ and the
infectious period $\mu^{-1}$. In our metapopulation model,
$R_{0}=\beta\cdot\mu^{-1}$. The initial condition of the disease is defined
as the onset of the outbreak in the San Diego-Carlsbad-San Marcos, CA MSA on April
17th, 2009, as reported by the CDC[79]. Assuming a short latent period
$\epsilon^{-1}=1.1$ days as indicated by the early estimates of the pandemic
A(H1N1)[80], which is compatible with other recent studies[81,82], we
primarily consider a baseline case with parameters $G_{t}=3.6$, $\mu^{-1}=2.5$
days and $R_{0}=1.75$; these are higher than the early estimates for the
pandemic A(H1N1)[80], but they are the median results of subsequent
analyses[81,83]. Fixing the latent period to $\epsilon^{-1}=1.1$
days, we also employ a more aggravated baseline scenario with parameters
$G_{t}=4.1$, $\mu^{-1}=3$ days and $R_{0}=2.3$, which are close to the upper-bound
results in[81,83-85].
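For concreteness, a minimal sketch restating these parameter relations (not part of the original simulation software) is given below; it simply derives the per-day transmission rate implied by $R_{0}=\beta\cdot\mu^{-1}$.

```python
# Sketch of the baseline epidemiological parameters used above and the
# transmission rate implied by R0 = beta * mu^{-1}.
epsilon_inv = 1.1          # latent period (days)
mu_inv = 2.5               # infectious period (days), baseline case
R0 = 1.75                  # baseline reproductive number
Gt = epsilon_inv + mu_inv  # generation time, 3.6 days

beta = R0 / mu_inv         # per-day transmission rate, beta = R0 * mu
print(f"G_t = {Gt} days, beta = {beta:.2f} per day")
```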
Next, we characterize the disease spreading pattern by information
entropy, a quantity customarily applied in information theory. To quantify the
heterogeneity of the epidemic spread at the U.S. level, we examine the
prevalence at each time $t$, $i_{j}(t)={D}^{[I]}_{j}(t)/N_{j}(t)$, for all
subpopulations, and introduce the normalized vector $\vec{p}^{[i]}$ with
components $p_{j}^{[i]}(t)=i_{j}(t)/\sum_{k}i_{k}(t)$. We then measure the
level of heterogeneity of the disease prevalence by quantifying the disorder
encoded in $\vec{p}^{[i]}$ with the normalized entropy function
$\displaystyle H^{[i]}(t)=-\frac{1}{\log V}\sum_{j}p^{[i]}_{j}(t)\log
p^{[i]}_{j}(t),$ (15)
which provides an estimate of the geographical heterogeneity of the disease
spread at time $t$. If the disease influences all subpopulations uniformly
(i.e., all prevalences are equal), the entropy reaches its
maximum value $H^{[i]}=1$. On the other hand, starting from $H^{[i]}=0$, the
most localized and heterogeneous situation in which just one subpopulation
is initially affected by the disease, $H^{[i]}(t)$ increases as more
subpopulations are influenced, thus indicating a decreasing level of heterogeneity.
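A minimal sketch of how Eq.(15) could be evaluated, assuming the prevalence values are already available (the example numbers below are hypothetical):

```python
# Sketch: normalized entropy H(t) of the prevalence profile across V subpopulations,
# following Eq.(15). The prevalence values here are hypothetical.
import numpy as np

def normalized_entropy(prevalence):
    """prevalence: array of i_j(t) = D_j^[I](t) / N_j(t) for all V subpopulations."""
    V = len(prevalence)
    p = prevalence / prevalence.sum()   # p_j = i_j / sum_k i_k
    p = p[p > 0]                        # treat 0 * log 0 as 0
    return -np.sum(p * np.log(p)) / np.log(V)

i_t = np.array([0.02, 0.0, 0.001, 0.03, 0.0, 0.005])
print(normalized_entropy(i_t))  # 1 means uniform spread, 0 means fully localized
```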
In order to better uncover the origin of the emergence of the scaling
properties, we compare the baseline results with those obtained on a null
model, _UNI_. The _UNI_ model is a homogeneous Erdös-Rényi random network with
the same number of vertices as the USDAN, and its generating
rule is as follows: for each pair of vertices $(i,j)$, an edge
is independently generated with the uniform probability $p_{e}=\langle
k\rangle/V$, where $\langle k\rangle=16.40$ is the average out-degree of the
USDAN. Moreover, the weights of the edges and the populations are uniformly
set to their average values in the USDAN, respectively. Therefore, the _UNI_
model completely lacks the heterogeneity of the airline topology,
traffic flux and population data.
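The UNI construction could be sketched as follows; the average weight and average population values here are placeholders, since the text only states that the USDAN averages are used.

```python
# Sketch: the UNI null network, an Erdos-Renyi directed graph with the same number
# of vertices as the USDAN, edge probability <k>/V, and uniform weights/populations.
import networkx as nx

V, k_mean = 406, 16.40
p_e = k_mean / V
uni = nx.gnp_random_graph(V, p_e, directed=True)

avg_weight = 1.0e3        # placeholder: average daily passengers per USDAN edge
avg_population = 5.0e5    # placeholder: average subpopulation size
nx.set_edge_attributes(uni, avg_weight, "weight")
for n in uni.nodes:
    uni.nodes[n]["population"] = avg_population
```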
Different evolving behaviors between the _UNI_ scenarios and the
baselines(real airline cases) provide remarkable evidence for the direct
dependence between the scaling properties and the heterogeneous
infrastructure. Fig.3(A,C) show the comparison of the _PRD_ between the
baseline results and the _UNI_ outputs at several given dates sampled about
every 30 days, where each specimen is the median result over all runs that led
to an outbreak at the U.S. level in 100 random Monte Carlo realizations. In
Fig.3(A), we consider the situation of $R_{0}=1.75$, and we observe that the
evolution of the _PRD_ of the baseline case experiences two stages: a power law at
the initial stage and an exponentially cutoff power law at a later time.
However, the _UNI_ scenario shows a distinct pattern: as time evolves, the
middle part of the _PRD_ grows more quickly and displays a peak, which
clearly deviates from scaling behavior. Fig.3(C) reports the situation of
$R_{0}=2.3$. In this aggravated instance, the _PRD_ of the _UNI_ scenario
actually becomes rather homogeneous when $t$ is large enough(see the curve of
July 17th of the _UNI_ scenario in Fig.3(C)). Fig.3(B,D) present the
comparison of the information entropy profiles between the baseline results
and the _UNI_ outputs when $R_{0}=1.75$ and $R_{0}=2.3$, respectively. The
completely homogeneous network _UNI_ shows a homogeneous
evolution($H^{[i]}\approx 1$) of the epidemic spread over a long period(see the
light cyan areas in Fig.3(B,D)), with sharp drops at both the beginning and
the end of the outbreak. However, we observe distinct results in the
baselines, where $H^{[i]}$ is significantly smaller than 1 for most of the
time, and the long tails indicate a long-lasting heterogeneity of the epidemic
prevalence. These analyses signal that the broad heterogeneity of the
infrastructure plays an essential role in the emergence of the scalings.
We further explore the properties of the two scalings and their relation
in detail for the baseline case of $R_{0}=1.75$. Since each independent simulation
generates a stochastic realization of the spreading process, we analyze the
statistical properties over 100 random Monte Carlo realizations, measure the
normalized _PRD_ of the _CCN_ of infected MSAs/$\mu$SAs for each realization
that led to an outbreak at the U.S. level, and report the median _PRD_
$P^{\prime us}_{t}(r)$ for each day. From $t=26$ to $t=39$, $P^{\prime
us}_{t}(r)$ clearly shows a power-law pattern $P^{\prime us}_{t}(r)\sim
r^{-\theta^{\prime}_{us}}$, which implies the emergence of the Zipf’s law(when
$t<26$, only a few regions are affected by the disease). The exponent
$\theta^{\prime}_{us}$ at each date is estimated by the maximum likelihood
method[22,37], and the temporal evolution of $\theta^{\prime}_{us}$ is
reported in the left part of Fig.4(A). When $t>39$, $P^{\prime us}_{t}(r)$
gradually becomes an exponentially cutoff power-law distribution $P^{\prime
us}_{t}(r)\sim r^{-\theta^{\prime}_{us}}\exp(-r/r^{us^{\prime}}_{c})$, and the
exponent $\theta^{\prime}_{us}$ gradually decreases and reaches a stable value
of 0.574 with negligible fluctuations when $t>126$(see Fig.4(A)). Here we do
not show error bars since the fitting error on the exponent (of order $10^{-2}$)
is far smaller than the value of $\theta^{\prime}_{us}$ averaged over the 100
random realizations. The inset of Fig.4(A) shows the increase of the number of
infected regions $M^{\prime}_{us}(t)$ as time evolves. When $t>110$, more than
400 subpopulations report confirmed cases, and
$M^{\prime}_{us}(t)$ approaches saturation.
Fig.4(B) shows the relation between $M^{\prime}_{us}(t)$ and $C^{\prime
us}_{T}(t)$(the national cumulative number of patients). Since $P^{\prime
us}_{t}(r)$ displays a power law $P^{\prime us}_{t}(r)=b\cdot
r^{-\theta^{\prime}_{us}}$ at the early stage of the period between $t=26$ and
$t=39$, it is reasonable to deduce the existence of the Heaps’ law
$\displaystyle M^{\prime}_{us}(t)=(C^{\prime us}_{T}(t)\cdot
b)^{\eta^{\prime}_{us}},\eta^{\prime}_{us}=1/\theta^{\prime}_{us},$ (16)
according to the analyses in _Empirical and Analytical Results_. In order to
verify this assumption, we estimate the exponent $\eta^{\prime}_{us}$ using
Eq.(16), and report the relation between $\theta^{\prime}_{us}$ and
$\eta^{\prime}_{us}$ in Table 2 (the amount of data in this period is not
sufficient to obtain an accurate estimate of the exponent $\eta^{\prime}_{us}$
with the least-squares method). When $t>39$, although $P^{\prime us}_{t}(r)$
gradually deviates from the strict Zipf’s law, the Heaps’ law relating
$M^{\prime}_{us}(t)$ and $C^{\prime us}_{T}(t)$ still holds until
$M^{\prime}_{us}(t)$ approaches saturation(see the middle part of
Fig.4(B)).
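A small sketch of this consistency check, with hypothetical numbers, might look as follows; it compares the Heaps exponent implied by Eq.(16) with a direct log-log fit of $M(t)$ against $C_{T}(t)$.

```python
# Sketch: given a fitted Zipf exponent theta at the early stage, the implied
# Heaps exponent is eta = 1/theta (Eq.(16)); a direct fit of M(t) vs C_T(t)
# can be compared against it when enough data points are available.
import numpy as np

theta = 2.4                       # example early-stage Zipf exponent
eta_from_zipf = 1.0 / theta

# hypothetical time series of infected regions M(t) and cumulative cases C_T(t)
C_T = np.array([30, 80, 200, 500, 1200, 3000])
M = np.array([4, 7, 12, 18, 27, 40])
eta_direct, _ = np.polyfit(np.log(C_T), np.log(M), 1)   # slope of the log-log fit

print(f"eta from 1/theta: {eta_from_zipf:.3f}, eta from direct fit: {eta_direct:.3f}")
```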
## Discussion
Zipf’s law and Heaps’ law are two representative scaling concepts in
the study of complexity science. Recently, increasing evidence of the
coexistence of the Zipf’s law and the Heaps’ law has motivated different
understandings of the dependence between these two scalings, which has still
hardly been clarified. This difficulty derives from the contradiction that
the empirical or simulated materials employed to show the emergence of Zipf’s
law are often finalized and static specimens, while the Heaps’ law actually
describes evolving characteristics.
In this article, we have identified the relation between the Zipf’s law and
the Heaps’ law from the perspective of coevolution between the scalings and
large-scale spatial epidemic spreading. We illustrate the temporal evolution
of the scalings: the Zipf’s law and the Heaps’ law are naturally shaped to
coexist at the early stage of the epidemic at both the global and the U.S.
levels, while a crossover comes with the emergence of their inconsistency at
a later time, before a stable state is reached in which the Heaps’ law still
holds while the strict Zipf’s law disappears.
With the U.S. domestic air transportation and demographic data, we construct a
metapopulation model at the U.S. level. The simulation results reproduce the main
empirical findings. Employing information entropy to characterize the epidemic
spreading pattern, we recognize that the broad heterogeneity of the
infrastructure plays an essential role in the evolution of scaling emergence.
These findings are quite different from previous conclusions in the
literature. For example, studying a phenomenologically self-adaptive complete
network, Han et al. claimed that the scaling properties depend on the
intensity of containment strategies implemented to restrict interregional
travel[31]. In [36], Picoli Junior et al. considered a simple stochastic model
based on the multiplicative process[23], and suggested that seasonality and
weather conditions, i.e., temperature and relative humidity, also dominate
the temporal evolution of the scalings because they affect the dynamics of
influenza transmission. In this work, without the help of any specific
additional factor, we directly show that the evolution of scaling emergence is
mainly determined by the contact process underlying disease transmission on an
infrastructure with a huge volume and heterogeneous structure of population
flows among different geographic regions. (The effects of the travel-related
containment strategies implemented in the real world can be neglected, since the
number of scheduled domestic and international passengers of the U.S. air
transportation declined in 2009 by only 5.3% from 2008[86]. In fact,
travel restrictions would not be able to significantly slow down the epidemic
spread unless more than 90% of the flight volume were reduced[58,66,69,70,88].)
In summary, our study suggests the analysis of large-scale spatial
epidemic spread as a promising new perspective to understand the temporal
evolution of the scalings. The unprecedented amount of information encoded in
the empirical data of pandemic spreading provides us a rich environment to
unveil the intrinsic mechanisms of scaling emergence. The heterogeneity of
epidemic spread uncovered by the metapopulation model indicates the
significance of performing targeted containment strategies, e.g., vaccination
of priority groups and targeted antiviral prophylaxis, at the early stage of a
pandemic disease.
## Materials and Methods
### Data Description
In this article, in order to construct the U.S. domestic air transportation
network, we mainly utilize the “ _Air Carrier Traffic and Capacity Data by On-
Flight Market report(December 2009)_ ” provided by the Bureau of
Transportation Statistics(BTS) database[76]. This report contains 12 months’
data covering more than $96\%$ of the entire U.S. domestic air traffic in
2009, and provides the monthly number of passengers, freight and/or mail
transported between any two airports located within the U.S. boundaries and
territories, regardless of the number of stops between them. This _BTS_ report
provides a more accurate basis for studying aviation flows between any two
U.S. airports than other data sources(the attendance and flight-connection
information in the OAG flight schedule data are commonly unknown, while the
datasets adopted in [63,64,66,69] primarily consider international
passengers). In order to study the epidemic spread in the continental United
States, where citizens predominantly live and move within the mainland, we
remove from the _BTS_ report the airports, as well as the corresponding
flight routes, located in Hawaii and in all offshore U.S. territories and
possessions.
In order to obtain the U.S. demographic data, we resort to the “ _OMB Bulletin
N0. 10-02: Update of Statistical Area Definitions and Guidance on Their Uses_
”[75] provided by the United States Office of Management and Budget(OMB), and
the “ _Annual Estimates of the Population of Metropolitan and Micropolitan
Statistical Areas: April 1, 2000 to July 1, 2009_ ”[77] provided by the United
States Census Bureau(CB). OMB defines a Metropolitan Statistical
Area(MSA)(Micropolitan Statistical Area, $\mu$SA) as one or more adjacent
counties or county equivalents that have at least one urban core area of at
least 50,000 population(at least 10,000 but less than 50,000 for a $\mu$SA), plus adjacent
territory that has a high degree of social and economic integration with the
core. For other regions with at least 5,000 population but less than 10,000,
we use the American FactFinder[78] provided by the CB to get the demographic
information. We do not consider sparsely populated areas with population less
than 5,000, because they are commonly remote islands, e.g. Block Island in
Rhode Island, Sand Point in Alaska.
Before constructing the metapopulation model, we take into account the fact
that there might be more than one airport in some huge metropolitan areas. For
instance, New York-Northern New Jersey-Long Island(NY-NJ-PA MSA) has up to six
airports(their IATA codes: JFK, LGA, ISP, EWR, HPN, FRG), Los Angeles-Long
Beach-Santa Ana(CA MSA) has four airports(their IATA codes: LAX, LGB, SNA,
BUR), and Chicago-Joliet-Naperville(IL-IN-WI MSA) has two airports(their IATA
codes: MDW, ORD). Assuming homogeneous mixing inside each subpopulation, we
need to assemble each group of airports serving the same MSA/$\mu$SA, because
the mixing within each census area is quite high and cannot be
characterized by a finer-grained division into subpopulations for every single
airport. We searched for groups of airports located close to each other and
belonging to the same metropolitan area, and then manually aggregated the
airports of each group into a single “super-hub”.
The full list of updates of the pandemic A(H1N1) human cases of different
countries is available on the website of Global Alert and Response(GAR) of
World Health Organization(WHO)(WHO website.
http://www.who.int/csr/disease/swineflu/updates/en/index.html. Accessed 2011
May 24). It is worth remarking that WHO stopped updating the number of
cumulative confirmed cases for each country after July 6th, 2009, and
instead reported the number of confirmed cases at the WHO Region level(the
Member States of the World Health Organization(WHO) are grouped into six
regions: the WHO African Region(46 countries), WHO European Region(53
countries), WHO Eastern Mediterranean Region(21 countries), WHO Region of the
Americas(35 countries), WHO South-East Asia Region(11 countries), and WHO Western
Pacific Region(27 countries); WHO website.
http://www.who.int/about/regions/en/index.html. Accessed 2011 May 24).
The cumulative number of laboratory-confirmed human cases of A(H1N1) flu
infection in each U.S. state is available at the website of 2009 A(H1N1) Flu
of the Centers for Disease Control and Prevention(CDC)(CDC website.
http://cdc.gov/h1n1flu/updates/. Accessed 2011 May 24), where the detailed
data span from April 23, 2009 to July 24, 2009. After July 24, the
CDC discontinued the reporting of individual confirmed cases of A(H1N1), and
began to report the total numbers of hospitalizations and deaths weekly.
The data of the human cases of global SARS and global Avian influenza(H5N1)
are available at the website of the Disease covered by GAR of WHO(WHO website.
http://www.who.int/csr/disease/en/. Accessed 2011 May 24).
## Acknowledgments
We are grateful for the insightful comments of the editor Alejandro Raul
Hernandez Montoya and the two anonymous referees, and gratefully acknowledge
the helpful discussions with Changsong Zhou, Xiao-Pu Han, Zhi-Hai Rong, Zhen
Wang, and Yang Yang. We also thank the Bureau of Transportation Statistics(BTS)
for providing us the U.S. domestic air traffic database.
## References
1\. Stanley HE (1999) Scaling, universality, and renormalization: Three
pillars of modern critical phenomena. Rev Mod Phys 71: S358-S366.
2\. Stanley HE, Amaral LAN, Gopikrishnan P, Ivanov PC, Keitt TH, et al (2000)
Scale invariance and universality: organizing principles in complex systems.
Physica A 281: 60-68.
3\. Cardy J (1996) Scaling and Renormalization in Statistical
Physics(Cambridge University Press, New York).
4\. Brown JH, West GB (2000) Scaling in Biology(Oxford University Press, USA).
5\. Zipf GK (1949) Human Behaviour and the Principle of Least Effort: An
Introduction to Human Ecology(Addison-Wesley, Massachusetts).
6\. Ferrer-i-Cancho R, Elvevåg B (2010) Random Texts Do Not Exhibit the Real
Zipf’s Law-Like Rank Distribution. PLoS ONE 5: e9411.
7\. Lieberman E, Michel JB, Jackson J, Tang T, Nowak MA (2007) Quantifying the
evolutionary dynamics of language. Nature 449: 713-716.
8\. Kanter I, Kessler DA (1995) Markov Processes: Linguistics and Zipf’s Law.
Phys Rev Lett 74: 4559-4562.
9\. Maillart T, Sornette D, Spaeth S, von Krogh G (2008) Empirical Tests of
Zipf’s Law Mechanism in Open Source Linux Distribution. Phys Rev Lett 101:
218701.
10\. Decker EH, Kerkhoff AJ, Moses ME (2007) Global Patterns of City Size
Distributions and Their Fundamental Drivers. PLoS ONE 2: e934.
11\. Batty M (2006) Rank clocks. Nature 444: 592-596.
12\. Axtell RL (2001) Zipf Distribution of U.S. Firm sizes. Science 293:
1818-1820.
13\. Coronel-Brizio HF, Hernández-Montoya AR (2005) On Fitting the Pareto-Levy
distribution to financial data: Selecting a suitable fit’s cut off parameter.
Physica A 354: 437-449.
14\. Coronel-Brizio HF, Hernández-Montoya AR (2005) Asymptotic behavior of the
Daily Increment Distribution of the IPC, the Mexican Stock Market Index.
Revista Mexicana de Física 51: 27-31.
15\. Ogasawara O, Okubo K (2009) On Theoretical Models of Gene Expression
Evolution with Random Genetic Drift and Natural Selection. PLoS ONE 4: e7943.
16\. Furusawa C, Kaneko K (2003) Zipf’s Law in Gene Expression. Phys Rev Lett
90: 088102.
17\. Blasius B, Tönjes R (2009) Zipf’s Law in the Popularity Distribution of
Chess Openings. Phys Rev Lett 103: 218701.
18\. Martínez-Mekler G, Martínez RA, del Río MB, Mansilla R, Miramontes P, et
al. Universality of Rank-Ordering Distributions in the Arts and Sciences. PLoS
ONE 4: e4791.
19\. Redner S (1998) How popular is your paper? An empirical study of the
citation distribution. Eur Phys J B 4: 131-134.
20\. Baek SK, Kiet HAT, Kim BJ (2007) Family name distributions: Master
equation approach. Phys Rev E 76: 046113.
21\. Chen Q, Wang C, Wang Y (2009) Deformed Zipf’s law in personal donation.
Europhys Lett 88: 38001.
22\. Newman MEJ (2005) Power laws, Pareto distributions and Zipf’s law.
Contemporary Physics 46: 323-351.
23\. Sornette D (1997) Multiplicative processes and power laws. Phys Rev E 57:
4811-4813.
24\. Saichev A., Malevergne Y, Sornette D (2009) Theory of Zipf’s Law and
Beyond, Lecture Notes in Economics and Mathematical Systems(Springer).
25\. Heaps HS (1978) Information Retrieval: Computational and Theoretical
Aspects(Academic Press, Orlando).
26\. Serrano MÁ, Flammini A, Menczer F (2009) Modeling Statistical Properties
of Written Text. PLoS ONE 4: e5372.
27\. Zhang ZK, Lü L, Liu JG, Zhou T (2008) Empirical analysis on a keyword-
based semantic system. Eur Phys J B 66: 557-561.
28\. Cattuto C, Barrat A, Baldassarri A, Schehr G, Loreto V (2009) Collective
dynamics of social annotation. Proc Natl Acad Sci 106: 10511-10515.
29\. Cattuto C, Loreto V, Pietronero L (2007) Semiotic dynamics and
collaborative tagging. Proc Natl Acad Sci 104: 1461-1464.
30\. Benz RW, Swamidass SJ, Baldi P (2008) Discovery of power-law in chemical
space. J Chem Inf Model 48: 1138-1151.
31\. Han XP, Wang BH, Zhou CS, Zhou T, Zhu JF (2009) Scaling in the Global
Spreading Patterns of Pandemic Influenza A and the Role of Control: Empirical
Statistics and Modeling. eprint arXiv: 0912.1390.
32\. Lü L, Zhang ZK, Zhou T (2010) Zipf’s Law Leads to Heaps’ Law: Analyzing
Their Relation in Finite-Size Systems. PLoS ONE 5: e14139.
33\. Montemurro MA, Zanette DH (2002) New perspectives on Zipf’s law in
linguistics: from single texts to large corpora. Glottometrics 4: 86-98.
34\. Zanette DH, Montemurro MA (2005) Dynamics of Text Generation with
Realistic Zipf’s Distribution. J Quant Linguistics 12: 29-40.
35\. Simon HA (1955) On a class of skew distribution functions. Biometrika 42:
425-440.
36\. Picoli Junior Sd, Teixeira JJV, Ribeiro HV, Malacarne LC, Santos RPBd, et
al. (2011) Spreading Patterns of the Influenza A (H1N1) Pandemic. PLoS ONE 6:
e17823.
37\. Clauset A, Shalizi CR, Newman MEJ (2009) Power-law distributions in
empirical data. SIAM Review 51: 661-703.
38\. “World now at the start of 2009 influenza pandemic”, Statement to the
press by WHO Director-General Dr. Margaret Chan(June 11, 2009), World Health
Organization. WHO website.
http://www.who.int/mediacentre/news/statements/2009/h1n1_pandemic_phase6_20090611/en/.
Accessed 2011 May 24.
39\. Anderson RM, May RM (1991) Infectious Diseases of Humans: Dynamics and
Control(Oxford Univ. Press, Oxford).
40\. Hamer WH (1906) The Milroy Lectures On Epidemic disease in England — The
evidence of variability and of persistency of type. The Lancet 167: 733-739.
41\. Pastor-Satorras R, Vespignani A (2001) Epidemic Spreading in Scale-Free
Networks. Phys Rev Lett 86: 3200-3203.
42\. Eguíluz VM, Klemm K (2002) Epidemic Threshold in Structured Scale-Free
Networks. Phys Rev Lett 89: 108701.
43\. Barthélemy M, Barrat A, Pastor-Satorras R, Vespignani A (2004) Velocity
and Hierarchical Spread of Epidemic Outbreaks in Scale-Free Networks. Phys Rev
Lett 92: 178701.
44\. Gross T, D’Lima CJD, Blasius B (2006) Epidemic Dynamics on an Adaptive
Network. Phys Rev Lett 96: 208701.
45\. Li X, Wang XF (2006) Controlling the spreading in small-world evolving
networks: stability, oscillation, and topology. IEEE T AUTOMAT CONTR 51:
534-540.
46\. Zhou T, Liu JG, Bai WJ, Chen GR, Wang BH (2006) Behaviors of susceptible-
infected epidemics on scale-free networks with identical infectivity. Phys Rev
E 74: 056109.
47\. Han XP (2007) Disease spreading with epidemic alert on small-world
networks. Phys Lett A 365: 1-5.
48\. Yang R, Zhou T, Xie YB, Lai YC, Wang BH (2008) Optimal contact process on
complex networks. Phys Rev E 78: 066109.
49\. Parshani R, Carmi S, Havlin S (2010) Epidemic Threshold for the
Susceptible-Infectious-Susceptible Model on Random Networks. Phys Rev Lett
104: 258701.
50\. Castellano C, Pastor-Satorras R (2010) Thresholds for Epidemic Spreading
in Networks. Phys Rev Lett 105: 218701.
51\. Li X, Cao L, Cao GF (2010) Epidemic prevalence on random mobile dynamical
networks: Individual heterogeneity and correlation. Eur Phys J B 75: 319-326.
52\. Pulliam JR, Dushoff JG, Levin SA, Dobson AP (2007) Epidemic Enhancement
in Partially Immune Populations. PLoS ONE 2: e165.
53\. Scoglio C, Schumm W, Schumm P, Easton T, Roy Chowdhury S, et al. (2010)
Efficient Mitigation Strategies for Epidemics in Rural Regions. PLoS ONE 5:
e11569.
54\. Matrajt L, Longini IM Jr (2010) Optimizing Vaccine Allocation at
Different Points in Time during an Epidemic. PLoS ONE 5: e13767.
55\. Iwami S, Suzuki T, Takeuchi Y (2009) Paradox of Vaccination: Is
Vaccination Really Effective against Avian Flu Epidemics? PLoS ONE 4: e4915.
56\. Bettencourt LMA, Ribeiro RM (2008) Real Time Bayesian Estimation of the
Epidemic Potential of Emerging Infectious Diseases. PLoS ONE 3: e2185.
57\. Longini IM Jr., Nizam A, Xu S, Ungchusak K, Hanshaoworakul W, et al
(2005) Containing Pandemic Influenza at the Source. Science 309: 1083-1087.
58\. Bajardi P, Poletto C, Ramasco JJ, Tizzoni M, Colizza V, et al. (2011)
Human Mobility Networks, Travel Restrictions, and the Global Spread of 2009
H1N1 Pandemic. PLoS ONE 6: e16591.
59\. Fraser C, Riley S, Anderson RM, Ferguson NM (2004) Factors that make an
infectious disease outbreak controllable. Proc Natl Acad Sci USA 101:
6146-6151.
60\. Situation updates—Pandemic (H1N1) 2009, World Health Organization. WHO
website.
http://www.who.int/csr/disease/swineflu/updates/en/index.html. Accessed 2011
May 24.
61\. Shannon CE, Weaver W (1964) The Mathematical Theory of Communication(The
University of Illinois Press, Urbana).
62\. Barabási AL (2010) Bursts: The Hidden Pattern Behind Everything We
Do(Dutton Books, USA).
63\. Rvachev LA, Longini IM Jr (1985) A mathematical model for the global
spread of influenza. Math Biosci 75: 3-22.
64\. Hufnagel L, Brockmann D, Geisel T (2004) Forecast and control of
epidemics in a globalized world. Proc Natl Acad Sci USA 101: 15124- 15129.
65\. Colizza V, Barrat A, Barthèlemy M, Vespignani A (2006) The role of the
airline transportation network in the prediction and predictability of global
epidemic. Proc Natl Acad Sci USA 103: 2015-2020.
66\. Cooper BS, Pitman RJ, Edmunds WJ, Gay NJ (2006) Delaying the
International Spread of Pandemic Influenza. PLoS Med 3: e212.
67\. Ovaskainen O, Cornell SJ (2006) Asymptotically exact analysis of
stochastic metapopulation dynamics with explicit spatial structure. Theor
Popul Biol 69: 13-33.
68\. Colizza V, Pastor-Satorras R, Vespignani A (2007) Reaction-diffusion
processes and metapopulation models in heterogeneous networks. Nat Phys 3:
276.
69\. Epstein JM, Goedecke DM, Yu F, Morris RJ, Wagener DK, et al. (2007)
Controlling Pandemic Flu: The Value of International Air Travel Restrictions.
PLoS ONE 2: e401.
70\. Colizza V, Barrat A, Barthelemy M, Valleron AJ, Vespignani A (2007)
Modeling the worldwide spread of pandemic influenza: Baseline case and
containment interventions. PLoS Med 4: e13.
71\. Cornell SJ, Ovaskainen O (2008) Exact asymptotic analysis for
metapopulation dynamics on correlated dynamic landscapes. Theor Popul Biol 74:
209-225.
72\. Balcan D, Colizza V, Gonçalves B, Hu H, Ramasco JJ, et al. (2009)
Multiscale mobility networks and the spatial spreading of infectious diseases.
Proc Natl Acad Sci USA 106: 21484-21489.
73\. Vergu E, Busson H, Ezanno P (2010) Impact of the Infection Period
Distribution on the Epidemic Spread in a Metapopulation Model. PLoS ONE 5:
e9371.
74\. Balcan D, Vespignani A (2011) Phase transitions in contagion processes
mediated by recurrent mobility patterns. Nat Phys. doi:10.1038/nphys1944
75\. United States Office of Management and Budget(OMB), OMB Bulletin No.
10-02: Update of Statistical Area Definitions and Guidance on Their
Uses(December 1, 2009). Whitehouse website.
http://www.whitehouse.gov/sites/default/files/omb/assets/bulletins/b10-02.pdf.
Accessed 2011 May 24.
76\. Bureau of Transportation Statistics(BTS), United States, Air Carrier
Traffic and Capacity Data by On-Flight Market report(December 2009). BTS
website. http://www.bts.gov/. Accessed 2011 May 24.
77\. United States Census Bureau(CB), Annual Estimates of the Population of
Metropolitan and Micropolitan Statistical Areas: April 1, 2000 to July 1,
2009. U.S. Census Bureau website.
http://www.census.gov/popest/metro/. Accessed 2011 May 24.
78\. United States Census Bureau(CB), American Factfinder. U.S. Census Bureau
website.
http://factfinder.census.gov/home/saff/main.html?_lang=en. Accessed 2011 May
24.
79\. Centers for Disease Control and Prevention(CDC), United States, Swine
Influenza A (H1N1) Infection in Two Children — Southern California, March-
April 2009. CDC website.
http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5815a5.htm. Accessed 2011 May 24.
80\. Fraser C, Donnelly CA, Cauchemez S, Hanage WP, Kerkhove MDV, et al.
(2009) Pandemic Potential of a Strain of Influenza A (H1N1): Early Findings.
Science 324: 1557-1561.
81\. Balcan D, Hu H, Gonçalves B, Bajardi P, Poletto C, et al. (2009) Seasonal
transmission potential and activity peaks of the new influenza A(H1N1): a
Monte Carlo likelihood analysis based on human mobility. BMC Med. 7: 45.
82\. Lessler J, Reich NG, Brookmeyer R, Perl TM, Nelson KE, et al. (2009)
Incubation periods of acute respiratory viral infections: a systematic review.
Lancet Infect. Dis 9: 291-300.
83\. Yang Y, Sugimoto JD, Halloran ME, Basta NE, Chao DL, et al. (2009) The
Transmissibility and Control of Pandemic Influenza A (H1N1) Virus. Science
326: 729-733.
84\. Boëlle PY, Bernillon P, Desenclos JC (2009) A preliminary estimation of
the reproduction ratio for new influenza A(H1N1) from the outbreak in Mexico,
March-April 2009. Euro Surveill 14: 19205.
85\. Nishiura H, Castillo-Chavez C, Safan M, Chowell G (2009) Transmission
potential of the new influenza A(H1N1) virus and its agespecificity in Japan.
Euro Surveill 14: 19227.
86\. Bureau of Transportation Statistics(BTS), United States (2010) “Summary
2009 Traffic Data for U.S and Foreign Airlines: Total Passengers Down 5.3
Percent from 2008”. BTS website. http://www.bts.gov/. Accessed 2011 May 24.
87\. den Broeck WV, Gioannini C, Gonçalves B, Quaggiotto M, Colizza V, et al.
(2011) The GLEaMviz computational tool, a publicly available software to
explore realistic epidemic spreading scenarios at the global scale. BMC Infect
Dis 11: 37.
88\. Colizza V, Vespignani A (2008) Epidemic modeling in metapopulation
systems with heterogeneous coupling pattern: Theory and simulations. J Theor
Biol 251: 450.
## Tables
Table 1: The empirical results of the parameters $\theta$ and $\eta$, and their relation at the early stage (the period between April 30th and June 1st, 2009), using 2009 Pandemic A(H1N1) data collected by the WHO.

Date | $\theta$ | $\eta$ | $\theta\cdot\eta$
---|---|---|---
April 30th | 3.12 | 0.349 | 1.046
May 1st | 3.23 | 0.349 | 1.127
May 2nd | 3.00 | 0.349 | 1.047
May 3rd | 3.32 | 0.349 | 1.159
May 4th | 2.93 | 0.349 | 1.022
May 5th | 3.29 | 0.349 | 1.148
May 6th | 3.35 | 0.349 | 1.169
May 7th | 3.5 | 0.349 | 1.222
May 8th | 3.39 | 0.349 | 1.183
May 9th | 3.2 | 0.349 | 1.117
May 10th | 3.16 | 0.349 | 1.103
May 11th | 2.96 | 0.349 | 1.033
May 12th | 3.06 | 0.349 | 1.068
May 13th | 2.96 | 0.349 | 1.033
May 14th | 3.00 | 0.349 | 1.047
May 15th | 3.07 | 0.349 | 1.071
May 16th | 3.07 | 0.349 | 1.071
May 17th | 2.95 | 0.349 | 1.030
May 18th | 2.93 | 0.349 | 1.023
May 19th | 2.98 | 0.349 | 1.040
May 20th | 2.97 | 0.349 | 1.037
May 21st | 2.92 | 0.349 | 1.019
May 22nd | 2.82 | 0.349 | 0.984
May 23rd | 2.77 | 0.349 | 0.967
May 26th | 2.62 | 0.349 | 0.914
May 27th | 2.54 | 0.349 | 0.886
May 29th | 2.44 | 0.349 | 0.852
June 1st | 2.33 | 0.349 | 0.813
Table 2: The values of the parameters $\theta^{\prime}_{us}$ and $\eta^{\prime}_{us}$ for the simulation results at the early stage of the period between $t=26$ and $t=39$.

_t_ | $\theta^{\prime}_{us}$ | $\eta^{\prime}_{us}$ | $\theta^{\prime}_{us}\cdot\eta^{\prime}_{us}$
---|---|---|---
26 | 2.623 | 0.427 | 1.120
27 | 2.395 | 0.459 | 1.099
28 | 2.535 | 0.449 | 1.138
29 | 2.433 | 0.457 | 1.112
30 | 2.429 | 0.456 | 1.108
31 | 2.269 | 0.455 | 1.032
32 | 2.285 | 0.460 | 1.051
33 | 2.170 | 0.482 | 1.046
34 | 2.220 | 0.477 | 1.059
35 | 2.086 | 0.492 | 1.026
36 | 1.976 | 0.503 | 0.994
37 | 1.977 | 0.504 | 0.996
38 | 1.717 | 0.540 | 0.927
39 | 1.644 | 0.538 | 0.884
Figure 1: The empirical results of A(H1N1). (A) The Zipf-plots of the
normalized probability-rank distributions $P_{t}(r)$ of the cumulated
confirmed number of every infected country at several given dates sampled about
every two weeks, data provided by the WHO. (B) The Zipf-plots of
$P^{us}_{t}(r)$ at several given dates sampled about every two weeks, data
provided by the CDC. (C) Temporal evolution of the estimated exponent $\theta$
of the normalized distribution $P_{t}(r)$. (D) Temporal evolution of the
estimated exponent $\theta_{us}$ of the normalized distribution
$P^{us}_{t}(r)$ for the period after May 15th. (E) The sublinear relation
between the number of infected countries $M(t)$ and the cumulative number of
global confirmed cases $C_{T}(t)$, data collected by the WHO. (F) The
sublinear relation between the number of infected states $M_{us}(t)$ and the
cumulative number of national confirmed cases $C^{us}_{T}(t)$, data collected
by the CDC. The shaded areas in panels (C,E,F) correspond to the
different evolution stages, respectively.

Figure 2: The heterogeneity of the USDAN’s infrastructure. (A) The degree
distribution $P(k)$ follows a power-law pattern over almost two decades with an
exponent 1.30$\pm$0.03. (B) The probability-rank distribution of the traffic
outflux $S_{j}=\sum_{\ell\in\upsilon}\omega_{j\ell}$, where $\upsilon$ denotes
the set of neighbors of vertex $j$ and the weight $\omega_{j\ell}$ of a
connection between two vertices $(j,\ell)$ is the number of passengers
traveling a given route per day, is skewed and heterogeneously distributed.
(C) The probability-rank distribution of populations is skewed and
heterogeneously distributed.

Figure 3: Comparisons of the scaling properties between the _UNI_ scenarios
and the baseline cases. (A,C) The comparison of the _PRD_ $P^{\prime us}_{t}(r)$
of the _CCN_ of every infected _MSA/$\mu$SA_ between the baselines and the
_UNI_ scenarios at several given dates sampled about every 30 days when
$R_{0}=1.75$ and $R_{0}=2.3$, respectively. (B,D) The comparison of the
information entropy profiles between the baselines and the _UNI_ results when
$R_{0}=1.75$ and $R_{0}=2.3$, respectively. Each data point in these figures is
the median result over all runs that led to an outbreak at the U.S. level in
100 random Monte Carlo realizations.

Figure 4: The statistical results of the scaling properties of our
metapopulation model. (A) Temporal evolution of the estimated exponent
$\theta^{\prime}_{us}$ of the normalized distribution $P^{\prime us}_{t}(r)$.
The inset shows the growth of the number of infected subpopulations
$M^{\prime}_{us}(t)$ with time $t$. (B) The relation between the number of
infected subpopulations $M^{\prime}_{us}(t)$ and the national cumulative
confirmed cases $C^{\prime us}_{T}(t)$. The shaded areas in the figures
correspond to the different evolution stages, respectively. Each data point in
these figures is the median result over all runs that led to an outbreak at
the U.S. level in 100 random Monte Carlo realizations.
## Supporting Information
Supporting Information Text S1
“Evolution of scaling emergence in large-scale spatial epidemic spreading”
Lin Wang, Xiang Li$\ast$, Yiqing Zhang, Yan Zhang, Kan Zhang
$\ast$ Corresponding author: lix@fudan.edu.cn
### Simulation Model Design
As a basic modeling scheme, we use the metapopulation approach, which
explicitly considers the geographical structure of the system by introducing
multiple subpopulations coupled by individuals’ mobility. More specifically,
the subpopulations correspond to the metropolitan areas, and the dynamics of
individuals’ mobility is described by air travel between any two regions.
(_i_) Infection Dynamics in a Single Subpopulation.
The infection dynamics takes place within each single subpopulation, and is
described by a homogeneously mixed population with an influenza-like illness
compartmentalization in which each individual exists in just one of the
following discrete classes: susceptible(S), latent(L), infectious(I), or
permanently recovered(R). In each subpopulation $j$, the population is
$N_{j}$, and $\mathcal{D}_{j}^{[\varphi]}(t)$ is the number of individuals in
class $[\varphi]$ at time $t$. By definition,
$N_{j}=\sum_{\varphi}\mathcal{D}^{[\varphi]}_{j}(t)$. Two essential kinds of
disease evolution processes are considered in the infection dynamics: the
contagion process(e.g., a susceptible individual acquires the infection from
any given infectious individual and becomes latent with rate $\beta$,
where $\beta$ is the transmission rate of the disease) and the spontaneous
transition of individuals from one compartment to another(i.e., latent
individuals become infectious with probability $\epsilon$, and infectious
individuals recover with probability $\mu$, where $\epsilon^{-1}$ and $\mu^{-1}$
are the average latency time and the average infection duration, respectively).
Schematically, the stochastic infection dynamics is given by
$\displaystyle(\mathcal{D}^{[S]}_{j},\mathcal{D}^{[L]}_{j},\mathcal{D}^{[I]}_{j},\mathcal{D}^{[R]}_{j})\Rightarrow\left\{\begin{array}{ll}(\mathcal{D}^{[S]}_{j}-1,\mathcal{D}^{[L]}_{j}+1,\mathcal{D}^{[I]}_{j},\mathcal{D}^{[R]}_{j}),&\text{with rate }\beta\mathcal{D}^{[S]}_{j}\mathcal{D}^{[I]}_{j}/N_{j},\\ (\mathcal{D}^{[S]}_{j},\mathcal{D}^{[L]}_{j}-1,\mathcal{D}^{[I]}_{j}+1,\mathcal{D}^{[R]}_{j}),&\text{with rate }\epsilon\mathcal{D}^{[L]}_{j},\\ (\mathcal{D}^{[S]}_{j},\mathcal{D}^{[L]}_{j},\mathcal{D}^{[I]}_{j}-1,\mathcal{D}^{[R]}_{j}+1),&\text{with rate }\mu\mathcal{D}^{[I]}_{j},\end{array}\right.$
where the first reaction reflects the fact that each susceptible individual in
subpopulation $j$ is infected by contact with infectious individuals
with probability $\beta\mathcal{D}^{[I]}_{j}/N_{j}$ per day; therefore the number of
new infections generated in subpopulation $j$ at time $t+1$ is extracted from
a binomial distribution with probability
$\beta\mathcal{D}^{[I]}_{j}(t)/N_{j}(t)$ and number of trials
$\mathcal{D}^{[S]}_{j}(t)$. The second and third reactions represent the
spontaneous transition processes.
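A minimal sketch of one day of this within-subpopulation dynamics, assuming the binomial sampling described above (parameter values are illustrative only), is:

```python
# Sketch: one day of the stochastic SLIR dynamics in a single subpopulation,
# drawing each transition from a binomial distribution as described above.
import numpy as np

rng = np.random.default_rng(0)

def slir_step(S, L, I, R, beta, epsilon, mu):
    N = S + L + I + R
    new_latent     = rng.binomial(S, min(1.0, beta * I / N))  # contagion S -> L
    new_infectious = rng.binomial(L, epsilon)                 # L -> I
    new_recovered  = rng.binomial(I, mu)                      # I -> R
    return (S - new_latent,
            L + new_latent - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

state = (999_000, 0, 1_000, 0)
state = slir_step(*state, beta=0.7, epsilon=1 / 1.1, mu=1 / 2.5)
print(state)
```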
(_ii_) Epidemic Transmission among Different Subpopulations.
As individuals travel around the country, the disease may spread from one area
to another. Therefore, in addition to the infection dynamics taking place
inside each subpopulation, the epidemic spreading at a large geographical
scale is inevitably governed by the human mobility among different
subpopulations by means of the domestic air transportation. Since the _BTS_
report reflects the actual aviation flows between any two U.S. airports, we
define a stochastic dispersal operator
$\nabla_{j}(\{\mathcal{D}^{[\varphi]}\})$ representing the net balance of
individuals in a given compartment $\mathcal{D}^{[\varphi]}$ that enter
and leave each subpopulation $j$. In each subpopulation $j$, the dispersal
operator is expressed as
$\displaystyle\nabla_{j}(\{\mathcal{D}^{[\varphi]}\})=\sum\limits_{\ell}\left(\mathcal{X}_{\ell j}(\mathcal{D}^{[\varphi]}_{\ell})-\mathcal{X}_{j\ell}(\mathcal{D}^{[\varphi]}_{j})\right),$
where $\mathcal{X}_{j\ell}(\mathcal{D}^{[\varphi]}_{j})$ describes the daily
number of individuals in the compartment $[\varphi]$ traveling from
subpopulation $j$ to subpopulation $\ell$. In the scenario of air travel, this
operator depends on the passenger traffic flows and the populations.
Neglecting multi-leg travel and assuming a well-mixed population
inside each subpopulation, we deduce that the probability of any individual
traveling on connection $j\rightarrow\ell$ on a given day is
$p_{j\ell}=\omega_{j\ell}/N_{j}$, where $\omega_{j\ell}$ represents the daily
number of passengers from $j$ to $\ell$. The stochastic variables
$\mathcal{X}_{j\ell}(\mathcal{D}^{[\varphi]}_{j})$ therefore follow the
multinomial distribution
$\displaystyle P(\{\mathcal{X}_{j\ell}\})=\frac{\mathcal{D}^{[\varphi]}_{j}!}{(\mathcal{D}^{[\varphi]}_{j}-\sum_{\ell}\mathcal{X}_{j\ell})!\prod_{\ell}\mathcal{X}_{j\ell}!}\left(\prod\limits_{\ell}p_{j\ell}^{\mathcal{X}_{j\ell}}\right)\left(1-\sum\limits_{\ell}p_{j\ell}\right)^{(\mathcal{D}^{[\varphi]}_{j}-\sum_{\ell}\mathcal{X}_{j\ell})},$
where $(\mathcal{D}^{[\varphi]}_{j}-\sum_{\ell}\mathcal{X}_{j\ell})$ is the
daily number of non-traveling individuals of compartment $[\varphi]$ staying
in subpopulation $j$. Note that the population $N_{j}$ of each
subpopulation remains constant, i.e.,
$\sum_{[\varphi]}\nabla_{j}(\mathcal{D}^{[\varphi]})=0$, because the passenger
flows are balanced on each pair of connections in this article.
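A possible sketch of this multinomial travel step, with placeholder traffic values and hypothetical destination labels, is:

```python
# Sketch: daily travel of individuals in one compartment out of subpopulation j,
# sampled from the multinomial distribution above. Traffic values are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def disperse(D_phi_j, N_j, omega_j):
    """omega_j: dict {destination: daily passengers from j}; returns travelers per destination."""
    dests = list(omega_j)
    p = np.array([omega_j[l] / N_j for l in dests])
    p_stay = 1.0 - p.sum()                        # probability of not traveling today
    draws = rng.multinomial(D_phi_j, np.append(p, p_stay))
    return dict(zip(dests, draws[:-1])), draws[-1]

travelers, stayers = disperse(D_phi_j=5000, N_j=2_000_000,
                              omega_j={"ORD": 1500.0, "JFK": 900.0, "LAX": 600.0})
print(travelers, stayers)
```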
## Pandemic phases defined by the WHO
1\. Interpandemic period
Phase 1: no new influenza virus subtypes circulating among animals have been
reported to cause infections in humans.
Phase 2: a new influenza virus subtype circulating among domesticated or wild
animals is known to have caused infection in humans, and is therefore
considered a potential pandemic threat.
2\. Pandemic alert period
Phase 3: a new influenza virus subtype has caused sporadic cases or small
clusters of disease in people, but has not resulted in human-to-human
transmission sufficient to sustain community-level outbreaks.
Phase 4: it is characterized by verified human-to-human transmission of a new
influenza virus subtype able to cause “community-level outbreaks”. Though the
virus is not well adapted to humans, the ability to cause sustained disease
outbreaks in a community marks a significant upwards shift in the risk for a
pandemic.
Phase 5: it is characterized by human-to-human spread of the virus into at
least two countries in one WHO region. While most countries will not be
affected at this stage, the declaration of Phase 5 is a strong signal that a
pandemic is imminent.
3\. Pandemic period
Phase 6: it is characterized by community level outbreaks in at least one
other country in a different WHO region in addition to the criteria defined in
Phase 5. This phase indicates that a global pandemic is under way.
## Power Law Distribution with an Exponential Cutoff
In fact, few real systems display a perfect power-law pattern in their
Zipf’s distribution or probability density distribution[1,2]. When an
exponential cutoff is added to the power-law function, the fit is
substantially improved for dozens of systems, e.g., forest fires,
earthquakes, web hits, email networks, and sexual contact networks. The cutoff
indicates that there is a characteristic scale, and that extremely large
events are exponentially rare. Strictly speaking, a cutoff power law
should always fit the data at least as well as a pure one (just let the cutoff
scale go to infinity); thus the power-law distribution can be deemed a
special case of the exponentially cutoff power law[2].
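A toy illustration of this nesting, fitting both forms by least squares in log space to synthetic data, might read:

```python
# Sketch: because the pure power law is the r_c -> infinity limit of the cutoff form,
# the cutoff model can never fit worse; here both are fit by least squares in log
# space to a synthetic rank distribution.
import numpy as np
from scipy.optimize import curve_fit

r = np.arange(1, 51)
P = r**-1.2 * np.exp(-r / 15.0)
P /= P.sum()

pure   = lambda r, logA, th: logA - th * np.log(r)
cutoff = lambda r, logA, th, rc: logA - th * np.log(r) - r / rc

for name, model, p0 in [("pure", pure, (0, 1)), ("cutoff", cutoff, (0, 1, 10))]:
    popt, _ = curve_fit(model, r, np.log(P), p0=p0)
    rss = np.sum((np.log(P) - model(r, *popt))**2)
    print(name, "RSS =", round(rss, 4))
```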
## The Mass Action Principle
Prof. Hamer postulated that the course of an epidemic depends on the rate of
contact between susceptible and infectious individuals. This conception plays
a significant role in the study of epidemiology; it is the so-called ‘mass
action principle’, in which the net rate of spread of infection is assumed to
be proportional to the product of the density of susceptible persons and the
density of infectious individuals[3,4].
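A minimal deterministic SIR sketch illustrating the mass-action term is given below; the parameter values are illustrative only.

```python
# Sketch: the mass-action principle in a deterministic SIR model, where the rate of
# new infections is proportional to the product of susceptible and infectious densities.
import numpy as np
from scipy.integrate import odeint

beta, mu, N = 0.7, 0.4, 1.0e6

def sir(y, t):
    S, I, R = y
    new_inf = beta * S * I / N      # mass-action term
    return [-new_inf, new_inf - mu * I, mu * I]

t = np.linspace(0, 120, 121)
S, I, R = odeint(sir, [N - 10, 10, 0], t).T
print("peak prevalence:", I.max() / N)
```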
1\. Newman MEJ (2005) Power laws, Pareto distributions and Zipf’s law.
Contemporary Physics 46: 323-351.
2\. Clauset A, Shalizi CR, Newman MEJ (2009) Power-law distributions in
empirical data. SIAM Review 51: 661-703.
3\. Anderson RM, May RM (1991) Infectious Diseases of Humans: Dynamics and
Control(Oxford Univ. Press, Oxford).
4\. Hamer WH (1906) The Milroy Lectures On Epidemic disease in England — The
evidence of variability and of persistency of type. The Lancet 167: 733-739.
## Figure Legends
Figure S1: The temporal evolution of the estimated parameter $r_{c}$, data
provided by the WHO.

Figure S2: The temporal evolution of the estimated exponent $\theta_{us}$ for
all data provided by the CDC.

Figure S3: The empirical results of the SARS and avian influenza(H5N1). (A)
shows the normalized probability-rank distribution of the cumulated confirmed
number of every infected country around the world at several given dates
sampled about every four weeks, data provided by the
WHO(http://www.who.int/csr/sars/country/en/index.html). (B) shows the
normalized probability-rank distribution of the cumulated confirmed number of
every infected country around the world at several given dates sampled about
every half year, data provided by the
WHO(http://www.who.int/csr/disease/avian_influenza/country/en/).
|
arxiv-papers
| 2011-05-25T08:27:37 |
2024-09-04T02:49:19.080416
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Lin Wang, Xiang Li, Yi-Qing Zhang, Yan Zhang, and Kan Zhang",
"submitter": "Lin Wang",
"url": "https://arxiv.org/abs/1105.4965"
}
|
1105.5069
|
# CAS - CERN Accelerator School: Magnets (http://cas.web.cern.ch/cas/Belgium-2009/Bruges-advert.html)
## Bruges, Belgium, 16 - 25 June 2009
## Proceedings - CERN Yellow Report CERN-2010-004 (http://cdsweb.cern.ch/record/1158462)
### Editor: D. Brandt
These proceedings present the lectures given at the twenty-third specialized
course organized by the CERN Accelerator School (CAS), the topic being
’Magnets’. The course was held in Bruges, Belgium, from 16 to 25 June 2009.
This is the first time this topic has been selected for a specialized course.
Taking into account the number of related applications currently in use in
accelerators around the world and, even more importantly, the worrying decrease
in the corresponding expertise in the different laboratories, it was
recognized that such a topic should definitely be incorporated into the CAS
series of specialized courses. The specific aim of the course was to introduce
the participants to the basics of resistive magnet design and its underlying
theoretical concepts. The first part of the school dealt with basic
introductory courses such as Maxwell’s equations for magnets, beam optics,
physics and measurement of magnetic materials, the different types of
resistive magnets and their respective performance, an introduction to
numerical field computation, and a core lecture on basic magnet design. The
second part of the course focused more on quality control, the different
measurement systems with their electronics, calibration techniques and
respective applications as well as the question of stability and
reproducibility. For the first time, in addition to the actual lectures, a
Case Study was proposed to the participants. This consisted of eight hours of
a guided practical exercise, where the participants had to propose their own
design for a magnet fulfilling the boundary conditions corresponding to a
combined-function magnet developed for the ALBA Synchrotron Light Source in
Barcelona, Spain. This Case Study was enthusiastically received by the
participants, who praised both the proposed approach and the amount of
practical information acquired from this exercise.
## Lectures
- Maxwell’s Equations for Magnets: arXiv:1103.0713
- Physics and measurements of magnetic materials: arXiv:1103.1069
- Basic design and engineering of normal-conducting, iron-dominated electromagnets: arXiv:1103.1119
- Eddy currents in accelerator magnets: arXiv:1103.1800
- Injection and extraction magnets: kicker magnets: arXiv:1103.1583
- Injection and extraction magnets: septa: arXiv:1103.1062
- Permanent magnets including undulators and wigglers: arXiv:1103.1573
- Specifications, quality control, manufacturing, and testing of accelerator magnets: arXiv:1103.1815
- Dimensional metrology and positioning operations: basics for a spatial layout analysis of measurement systems: arXiv:1104.0799
- Dielectric insulation and high-voltage issues: arXiv:1104.0802
- Magnetic measurement with coils and wires: arXiv:1104.3784
- Fabrication and calibration of search coils: arXiv:1104.0803
- Hall probes: physics and application to magnetometry: arXiv:1103.1271
- Magnet stability and reproducibility: arXiv:1103.1575
|
arxiv-papers
| 2011-05-25T15:50:10 |
2024-09-04T02:49:19.092447
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Daniel Brandt",
"submitter": "Scientific Information Service Cern",
"url": "https://arxiv.org/abs/1105.5069"
}
|
1105.5163
|
# Effective medium theory of binary thermoelectrics.
Paul M. Haney1 1Center for Nanoscale Science and Technology, National
Institute of Standards and Technology, Gaithersburg, Maryland 20899-6202, USA
###### Abstract
The transport coefficients of disordered media are analyzed using direct
numerical simulation and effective medium theory. The results indicate a range
of materials parameters for which disorder leads to an enhanced power factor.
This increase in power factor is generally not accompanied by an increase in
the figure of merit $ZT$, however. It is also found that the effective
electrical conductivity and electronic contribution to the thermal
conductivity are not generally proportional to each other in the presence of
disorder.
## I Introduction
There has been considerable recent interest in utilizing nanostructure and
inhomogeneity to enhance thermoelectric performance. Nanostructure has been
predicted to reduce phonon thermal conductivity without reducing electrical
conductivity venk , enhance the power factor via quantum confinement harman ,
and provide strongly energy-dependent electron scattering leonard ; moyzhes ;
martin , all beneficial features for thermoelectrics. One type of nanostructure
consists of the inclusion of metallic structures in a thermoelectric (i.e.,
semiconductor) framework. These types of nanostructured materials can exhibit
two unusual properties: an unexpectedly high Seebeck coefficient heremans , and
anomalous magnetoresistance xu . The anomalous magnetoresistive properties can
be understood as a consequence of disordered transport in a random resistor
network model littlewood .
Motivated by these models of magnetoresistance, I revisit similar models of
thermoelectric properties cohen to see if the same mechanism can explain both
phenomena. The work presented here extends previous studies in two ways: 1.
The electrical conductivity and Seebeck coefficient of the constituent
materials are varied independently. It is found that in certain parameter
regimes the thermoelectric power factor of the composite medium is enhanced
relative to that of the constituent materials. 2. The electron and phonon
contributions to the total thermal conductivity are calculated separately.
Although the constituent materials are assumed to obey the Wiedemann-Franz
law (W-F), the composite medium does not. Finally, in accordance with previous
studies levy , the figure of merit $ZT$ of the composite medium is found to be
smaller than that of the high-$ZT$ constituent material. These conclusions are
illustrated with numerical results and analytic expressions derived within
effective medium theory (EMT).
## II Model description
The starting point is the linear response description of transport for the
electrical current $j$ and thermal current $j_{Q}$:
$\displaystyle j$ $\displaystyle=$ $\displaystyle-\sigma\left({\bf
r}\right)\nabla V+L^{12}\left({\bf r}\right)\nabla T~{},$ $\displaystyle
j_{Q}$ $\displaystyle=$ $\displaystyle-\left(\kappa_{e}\left({\bf
r}\right)+\kappa_{\gamma}\left({\bf r}\right)\right)\nabla T+L^{12}\left({\bf
r}\right)T\nabla V~{},$ (1)
where $\sigma\left({\bf r}\right)$ is the local electrical conductivity,
$\kappa_{e}\left({\bf r}\right)$ ($\kappa_{\gamma}\left({\bf r}\right)$) is the electron
(phonon) contribution to the total local thermal conductivity
$\kappa\left({\bf r}\right)=\kappa_{e}\left({\bf r}\right)+\kappa_{\gamma}\left({\bf r}\right)$
(all thermal conductivities evaluated at zero electric field), $V$ is the electrostatic
potential, and $T$ is the temperature. The local Seebeck coefficient
$S\left({\bf r}\right)$ is related to $\sigma\left({\bf r}\right)$ and
$L^{12}\left({\bf r}\right)$ by $L^{12}\left({\bf r}\right)=S\left({\bf
r}\right)\sigma\left({\bf r}\right)$. I assume that $\sigma\left({\bf
r}\right)$ and $\kappa_{e}\left({\bf r}\right)$ obey the W-F law:
$\kappa_{e}\left({\bf r}\right)=\sigma\left({\bf r}\right)L_{0}T$, where
$L_{0}$ is the Lorenz number. As shown in Ref. (cohen, ), the effective medium
electrical conductivity $\bar{\sigma}$, total thermal conductivity
$\bar{\kappa}$, and $\bar{L}^{12}$ satisfy:
$\displaystyle\left\langle\frac{\sigma\left({\bf
r}\right)-\bar{\sigma}}{\sigma\left({\bf
r}\right)+2\bar{\sigma}}\right\rangle=0~{};$
$\displaystyle\left\langle\frac{\kappa\left({\bf
r}\right)-\bar{\kappa}}{\kappa\left({\bf
r}\right)+2\bar{\kappa}}\right\rangle=0~{},$ (2) $\displaystyle\bar{L}^{12}=$
$\displaystyle 3\bar{\sigma}\bar{\kappa}\left\langle\frac{L^{12}\left({\bf
r}\right)}{\left(\kappa\left({\bf
r}\right)+2\bar{\kappa}\right)\left(\sigma\left({\bf
r}\right)+2\bar{\sigma}\right)}\right\rangle\times$ (3)
$\displaystyle\left(\left\langle\frac{\sigma\left({\bf
r}\right)\bar{\kappa}+\bar{\sigma}\kappa\left({\bf
r}\right)+2\bar{\sigma}\bar{\kappa}-\sigma\left({\bf r}\right)\kappa\left({\bf
r}\right)}{\left(\kappa\left({\bf
r}\right)+2\bar{\kappa}\right)\left(\sigma\left({\bf
r}\right)+2\bar{\sigma}\right)}\right\rangle\right)^{-1}.$
The brackets indicate an average over disorder configurations. In addition,
the electron and phonon parts of the thermal conductivity of the effective
medium, $\bar{\kappa}_{e}$ and $\bar{\kappa}_{\gamma}$, satisfy:
$\displaystyle\left\langle\frac{\kappa_{e,\gamma}\left({\bf
r}\right)-\bar{\kappa}_{e,\gamma}}{\kappa\left({\bf
r}\right)+2\bar{\kappa}}\right\rangle$ $\displaystyle=$ $\displaystyle 0~{},$
(4)
Two-component materials are considered here, so that local material parameters
may assume one of two possible values. I further suppose that
$\sigma_{1}\ll\sigma_{2}$, and $\kappa_{\gamma 1,2}\ll\kappa_{e2}$.
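As an illustration of how Eqs. (2)-(3) are used in practice, the short numerical sketch below solves the two-component Bruggeman condition for $\bar{\sigma}$ and $\bar{\kappa}$ and then evaluates $\bar{L}^{12}$. It is not part of the analysis above; the parameter values and function names are illustrative only.

```python
# Sketch only: two-component effective medium theory, Eqs. (2)-(3).
# Material 1 is the poor conductor; parameters are illustrative.
import numpy as np
from scipy.optimize import brentq

def emt_scalar(x1, x2, c):
    """Solve c*(x1-x)/(x1+2x) + (1-c)*(x2-x)/(x2+2x) = 0 for the effective value x."""
    f = lambda x: c * (x1 - x) / (x1 + 2 * x) + (1 - c) * (x2 - x) / (x2 + 2 * x)
    return brentq(f, min(x1, x2), max(x1, x2))

def emt_L12(sig, kap, L12, c, sig_b, kap_b):
    """Effective L12 from Eq. (3); sig, kap, L12 are (material-1, material-2) pairs."""
    avg = lambda v: c * v[0] + (1 - c) * v[1]          # two-component disorder average
    den = [(kap[i] + 2 * kap_b) * (sig[i] + 2 * sig_b) for i in (0, 1)]
    num = avg([L12[i] / den[i] for i in (0, 1)])
    nrm = avg([(sig[i] * kap_b + sig_b * kap[i] + 2 * sig_b * kap_b
                - sig[i] * kap[i]) / den[i] for i in (0, 1)])
    return 3 * sig_b * kap_b * num / nrm

sig = (1.0, 1.0e3)     # sigma_1 << sigma_2
kap = (1.0, 1.0e3)     # total thermal conductivities
L12 = (10.0, 10.0)     # equal thermoelectric coefficients, L12_1 = L12_2

for c in (0.5, 2.0 / 3.0, 0.8):      # percolation threshold of Eq. (2) at c = 2/3
    sig_b = emt_scalar(*sig, c)
    kap_b = emt_scalar(*kap, c)
    L12_b = emt_L12(sig, kap, L12, c, sig_b, kap_b)
    print(f"c={c:.3f}  sigma_eff={sig_b:.4g}  L12_eff={L12_b:.4g}  "
          f"power factor={L12_b**2 / sig_b:.4g}")
```

In this sketch the enhancement of $\bar{L}^{12}$ and of the power factor is largest near $c=2/3$, mirroring the behavior discussed below for Fig. 1.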
The figure of merit $ZT$ can be expressed in terms of the Onsager number $N$ (note
that $N$ is thermodynamically constrained to be less than 1 davidson ; mahan ):
$\displaystyle N$ $\displaystyle=$
$\displaystyle\frac{\left(\bar{L}^{12}\right)^{2}T}{\bar{\sigma}\bar{\kappa}_{e}}~{},$
(5) $\displaystyle ZT$ $\displaystyle=$
$\displaystyle\frac{N}{1-N+\left(\frac{\bar{\kappa}_{\gamma}}{\bar{\kappa}_{e}}\right)}~{}.$
(6)
In addition to the analysis of EMT, Eq. (1) and the continuity equations for
heat and charge ($\nabla\cdot j=0$, $\nabla\cdot j_{Q}=0$ footnote ) are
solved directly for an ensemble of randomly disordered configurations in 3-d
(correlated disorder does not change the results appreciably). The system is
discretized into $30^{3}$ sites, and the ensemble size is chosen such that the
statistical error of the effective transport parameters is converged (this
typically requires about 30 configurations). The error bars on the plots of
numerical results indicate the statistical uncertainty (one standard
deviation).
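A heavily simplified sketch of such a direct solve is shown below. It is a two-dimensional toy version only (the calculation described above is three-dimensional on $30^{3}$ sites, and the percolation threshold in two dimensions differs from the $c=2/3$ of the text); it assembles the discrete Kirchhoff equations for $\nabla\cdot\left(\sigma\nabla V\right)=0$ on a random two-component grid with fixed potentials on two opposite edges and reads off an effective conductivity from the total current.

```python
# Sketch only: 2-d random two-component resistor network (the text uses 3-d, 30^3 sites).
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
N, c = 40, 0.5                       # grid size; concentration of material 1
s1, s2 = 1.0, 1.0e3                  # sigma_1 << sigma_2
sigma = np.where(rng.random((N, N)) < c, s1, s2)

def g(a, b):                         # bond conductance: two half-sites in series
    return 2.0 * a * b / (a + b)

# Unknown potentials live on columns 1..N-2; V = 1 on column 0 and V = 0 on column N-1.
unk = {(i, j): k for k, (i, j) in
       enumerate([(a, b) for a in range(N) for b in range(1, N - 1)])}
A = lil_matrix((len(unk), len(unk)))
rhs = np.zeros(len(unk))
for (i, j), p in unk.items():
    for (ii, jj) in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if not (0 <= ii < N and 0 <= jj < N):
            continue                 # insulating top/bottom boundaries
        gb = g(sigma[i, j], sigma[ii, jj])
        A[p, p] += gb
        if (ii, jj) in unk:
            A[p, unk[(ii, jj)]] -= gb
        else:                        # neighbor lies on a fixed-potential column
            rhs[p] += gb * (1.0 if jj == 0 else 0.0)

V = spsolve(A.tocsr(), rhs)
# Total current entering from the V = 1 edge, and the resulting effective conductivity.
I = sum(g(sigma[i, 0], sigma[i, 1]) * (1.0 - V[unk[(i, 1)]]) for i in range(N))
print("effective conductivity (toy 2-d):", I * (N - 1) / N)
```

Averaging such solves over an ensemble of disorder configurations is what produces the statistical error bars described above.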
## III Results
Power factor. Figs. 1(a), 1(b), and 1(c) show
$\bar{\sigma},~{}\bar{L}^{12}$, and the power factor
$\bar{S}^{2}\bar{\sigma}$, respectively, as a function of the concentration of
material 1 (denoted by $c$). For this calculation $L^{12}_{1}=L^{12}_{2}$. At
the percolation threshold ($c=2/3$), $\bar{\sigma}$ shows a well-known kink
kirkpatrick , while $\bar{L}^{12}$ is maximized. The origin of this
enhancement in $\bar{L}^{12}$ is discussed below. The
enhancement in $\bar{L}^{12}$ leads to a peak in the power factor
$\bar{S}^{2}\bar{\sigma}$ near the percolation threshold. The figure shows
good agreement between EMT and numerical results, indicating that the EMT captures
the essential physics of the power factor enhancement. Fig. 1(d) shows the
Seebeck coefficient versus concentration when the correct effective medium
$L^{12}$ value is used to calculate $S$, and when it is assumed that $L^{12}$
of the composite medium is the same as that of the constituent materials.
Figure 1: (a), (b), (c) show the effective conductivity, $L^{12}$, and power
factor as a function of disorder concentration. (d) shows the Seebeck
coefficient $S=P/\sigma$ as a function of conductivity (note the log scale of
conductivity), using $L^{12}=\bar{L}^{12},~{}L^{12}=L^{12}_{1}$. Model
parameters are: $\sigma_{1}=4\times 10^{5}~{}{\rm\frac{1}{\Omega\cdot m}}$,
$f_{s}=10^{3}$, $\kappa_{1\gamma}=3~{}{\rm\frac{W}{m\cdot K}}$,
$\kappa_{2\gamma}=1~{}{\rm\frac{W}{m\cdot K}}$,
$L^{12}_{1}=L^{12}_{2}=\sqrt{L_{0}}\sigma_{1}=62.5~{}{\rm\frac{A}{m\cdot K}}$.
To explain the peak in power factor, I solve Eqs. (2-3) at the percolation
transition ($c=2/3$), and expand the solution in the small parameters:
$\displaystyle f_{s}=\frac{\sigma_{1}}{\sigma_{2}};$ $\displaystyle
f_{k}=\frac{\kappa_{1}}{\kappa_{2}}.$ (7)
Keeping only leading order terms leads to:
$\displaystyle\bar{\sigma}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{\sigma_{1}\sigma_{2}}{2}}=\sigma_{1}\sqrt{\frac{1}{2f_{s}}}+\ldots$
(8) $\displaystyle\bar{L}^{12}$ $\displaystyle=$ $\displaystyle
L^{12}_{1}\left(\sqrt{2f_{k}}+\sqrt{2f_{s}}\right)^{-1}+\ldots$ (9)
Eq. (9) is valid when $L^{12}_{2}$ is not significantly (e.g., more than 100
times) greater than $L^{12}_{1}$. Eqs. (8) and (9) show that both
$\bar{\sigma}$ and $\bar{L}^{12}$ diverge as $f_{s}^{-1/2}$ at the percolation
transition. This implies that the power factor
($\left(\bar{L}^{12}\right)^{2}/\bar{\sigma}$) also diverges as $f_{s}^{-1/2}$
at this point, while the Seebeck coefficient ($\bar{L}^{12}/\bar{\sigma}$)
remains bounded within EMT. The divergence of transport parameters at the
percolation transition signals a failure of effective medium theory in the
limit $f_{s}\rightarrow 0$. However, the numerics indicate that EMT is
sufficient for realistic parameters, and can illustrate the key points of this
work.
To develop a simple picture for how inhomogeneity leads to an enhancement in
$L^{12}$, I consider Eq. (1) for two dissimilar materials placed in series
(see Fig. (2)), where $\sigma_{1}\ll\sigma_{2},~{}\kappa_{1}\ll\kappa_{2},$
and $L^{12}_{1}=L^{12}_{2}$. $L^{12}$ of the composite system is given by
$\frac{j}{\Delta T}|_{\Delta V=0}$. The large thermal resistance of material 1
implies the temperature drop occurs mostly between sites 1 and 2. This drives
a large thermoelectric charge current between sites 1 and 2, which in turn
induces a voltage at site 2. This voltage drives an Ohmic charge current
between sites 2 and 3. This Ohmic current, derived from the inhomogeneous
voltage, is the source of enhanced charge current (and therefore enhanced
$L^{12}$) of the composite system. The same scenario occurs in 3-dimensional
disordered networks of materials (in the same parameter regime described
above), so that when the system is maximally disordered at the percolation
transition, $L^{12}$ shows the maximum enhancement.
Figure 2: Cartoon indicating the underlying mechanism for the enhancement of
$L^{12}$ for an inhomogeneous system consisting of two resistors. (a) shows the
predominant paths for charge current, (b) shows the voltage and temperature
profiles for an applied temperature difference.
Thermal conductivity. Next I consider the electronic and phonon
contributions to the thermal conductivity of the disordered material. Let
$\kappa_{\gamma 1}>\kappa_{\gamma 2}$; this describes a system in which one
material has good electronic thermoelectric properties (i.e. a large Onsager
number), but a detrimentally large $\kappa_{\gamma}$, while the other material
has a low Onsager number and a small $\kappa_{\gamma}$. I assume that both
$\frac{\kappa_{\gamma 1}}{\kappa_{e2}}$ and $\frac{\kappa_{\gamma
2}}{\kappa_{e2}}$ are small (of the same order as
$\frac{\kappa_{e1}}{\kappa_{e2}}$). Fig. (3a) shows $\kappa_{e}$ and
$\kappa_{\gamma}$ as a function of concentration. There is good agreement
between numerical results and EMT. Fig. (3b) shows that near the percolation
threshold, the electronic contribution to the thermal conductivity is not
related to the electrical conductivity via the W-F law. At the percolation
threshold, the expressions for the total thermal conductivity $\kappa$ and its
partition into electronic and phonon parts $\kappa_{e},\kappa_{\gamma}$ are
given as (to linear order in $f_{k},f_{s}$):
$\displaystyle\bar{\kappa}$ $\displaystyle=$
$\displaystyle\kappa_{2}\left[\sqrt{\frac{f_{k}}{2}}+\frac{f_{k}}{4}+\ldots\right]$
(10) $\displaystyle\bar{\kappa}_{e}$ $\displaystyle=$
$\displaystyle\kappa_{2,e}\left[\sqrt{\frac{f_{k}}{2}}+\left(f_{s}-\frac{3f_{k}}{4}\right)+\ldots\right]$
(11) $\displaystyle\bar{\kappa}_{\gamma}$ $\displaystyle=$
$\displaystyle\kappa_{1,\gamma}+\ldots$ (12)
The ratio $\bar{\sigma}/\bar{\kappa}_{e}$ is easily expressed in terms of $r=f_{k}/f_{s}$:
$\displaystyle\frac{\bar{\sigma}}{\bar{\kappa}_{e}}=\frac{1}{L_{0}T}\frac{1}{\sqrt{r}}~{}.$
(13)
This indicates that in inhomogeneous materials, inferring $\kappa_{e}$ from
$\sigma$ may not always be appropriate near the percolation transition. This
is because the effective $\kappa_{e}$ is a convolution of the intrinsic
material properties and the geometry of the system: The parallel conduction of
heat current through both electron and phonon channels (in contrast to the
charge current) results in different conducting paths for $j_{Q}$ and $j$, and
differences in the spatial structure of temperature and potential fields. As a
result, the electronic part of $j_{Q}$ in the inhomogeneous system is not
exactly correlated with $j$.
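A short numerical check of this point is sketched below (illustrative values chosen so that $\sigma_{1}\ll\sigma_{2}$ and $\kappa_{\gamma 1,2}\ll\kappa_{e2}$; these are not the parameters of Fig. 3). Because Eq. (4) is linear in $\bar{\kappa}_{e}$, the electron part of the effective thermal conductivity follows directly from the Bruggeman solution for $\bar{\kappa}$, and the effective Lorenz ratio $\bar{\kappa}_{e}/(L_{0}T\bar{\sigma})$ can be compared with the W-F value of 1.

```python
# Sketch only: electron/phonon split of the effective thermal conductivity, Eqs. (2) and (4).
import numpy as np
from scipy.optimize import brentq

L0, T = 2.44e-8, 300.0                  # Lorenz number (W Ohm / K^2), temperature (K)

def bruggeman(x1, x2, c):
    f = lambda x: c * (x1 - x) / (x1 + 2 * x) + (1 - c) * (x2 - x) / (x2 + 2 * x)
    return brentq(f, min(x1, x2), max(x1, x2))

sig   = np.array([1.0e3, 1.0e6])        # sigma_1 << sigma_2
kap_e = L0 * T * sig                    # constituents obey the W-F law
kap_g = np.array([0.5, 0.2])            # phonon parts, small compared with kap_e[1]
kap   = kap_e + kap_g

for c in (0.5, 2.0 / 3.0, 0.8):
    sig_b = bruggeman(sig[0], sig[1], c)
    kap_b = bruggeman(kap[0], kap[1], c)
    w = np.array([c, 1.0 - c])
    # Eq. (4) rearranged: kappa_e_bar = <kappa_e/(kappa+2 kappa_bar)> / <1/(kappa+2 kappa_bar)>
    kap_e_b = (w * kap_e / (kap + 2 * kap_b)).sum() / (w / (kap + 2 * kap_b)).sum()
    print(f"c={c:.3f}  Lorenz ratio kappa_e_bar/(L0*T*sigma_bar) = "
          f"{kap_e_b / (L0 * T * sig_b):.2f}")
```

Ratios well above 1 near $c=2/3$ illustrate the deviation from the W-F law near the percolation threshold, in line with Eq. (13).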
Figure 3: (a) shows the electron and phonon contributions to the total thermal
conductivity as a function of the concentration of material 1 (this concentration
is denoted by $c$). Solid lines are the EMT result, symbols are numerical results.
(b) shows the electrical conductivity and the electron contribution to the thermal
conductivity (scaled by the values of material 1) versus concentration (note the log
scale). Vertical dashed lines indicate the region that is plotted in (d). (c)
shows the figure of merit $ZT$ as a function of concentration. It generally
decreases from the high-$ZT$ material value of $ZT=1$. (d) is a zoom-in of (b) near
percolation (on a linear scale), showing the deviation of the composite medium from
the W-F law.
Figure of merit. The electronic and phonon contributions to $ZT$ can now be
assembled. At percolation, the Onsager number and the ratio of phonon to
electronic thermal conductivity are:
$\displaystyle N$ $\displaystyle=$ $\displaystyle
N_{1}\frac{1}{\sqrt{r}\left(1+r+2\sqrt{r}\right)}$ (14)
$\displaystyle\frac{\bar{\kappa}_{\gamma}}{\bar{\kappa}_{e}}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{2}{f_{k}}}\left(\frac{\kappa_{\gamma,1}}{\kappa_{e,2}}\right)$
(15)
Plugging the above into Eq. (6) shows that $ZT$ of the composite medium is
always smaller than that of the high-$ZT$ material constituent. Fig. (3c)
shows $ZT$ as a function of concentration. It is seen that under the favorable
parameter set considered here, the high $ZT$ value is maintained for a
substantial amount of low-$ZT$ material doping. Generally, $ZT$ decreases as
the percolation threshold is approached and rises rapidly back to the high
value just above this threshold.
I note that this model ignores interface scattering contributions to
$\sigma,\kappa$, and $L^{12}$. If these contributions lower $\kappa$ and
$\sigma$ more drastically than $L^{12}$, it would imply an overall enhancement
in $ZT$ leonard . A more complete theory requires combining the bulk
modeling presented here with more microscopic modeling of the
materials interfaces. Nevertheless, the model demonstrates that disorder in the
diffusive regime can explain the enhancement in power factor, and can alter
the relationship between the electrical and thermal conductivity of electrons.
These are both important considerations in the interpretation of experimental
results, and in assessing the viability of materials for improved
thermoelectric efficiencies.
I gratefully acknowledge very helpful conversations with Fred Sharifi, who
introduced me to this problem.
## References
* (1) R. Venkatasubramanian, E. Siivola, T. Colpitts, and B. O Quinn, Nature 413, 597 (2001).
* (2) T. C. Harman, P. J. Taylor, M. P. Walsh, and B. E. O'Quinn, Science 297, 2229 (2002).
* (3) S. V. Faleev and F. Léonard, Phys. Rev. B 77, 214304 (2008).
* (4) B. Moyzhes and V. Nemchinsky, Appl. Phys. Lett. 73, 1895 (1998).
* (5) J. Martin, L. Wang, L. Chen, and G. S. Nolas, Phys. Rev. B 79, 115311 (2009).
* (6) J. P. Heremans, C. M. Thrush, and D. T. Morelli, J. App. Phys. 98, 063703 (2005).
* (7) R. Xu et al., Nature 390, 57 (1997).
* (8) M. M. Parish and P. B. Littlewood, Nature 426, 162 (2003).
* (9) I. Webman, J. Jortner, and M. H. Cohen, Phys. Rev. B 16, 2959 (1977).
* (10) D. J. Bergman and O. Levy, J. App. Phys. 70, 6821 (1991).
* (11) H. Littman and B. Davidson, J. App. Phys. 32, 217 (1961).
* (12) G. D. Mahan and J.O. Sofo, Proc. Natl. Acad. Sci. USA 93, 7436 (1996).
* (13) The equation of continuity for heat current is generally $\nabla\cdot j_{Q}=j\cdot\nabla V+j_{Q}\cdot\nabla T/T$. In linear response the right-hand-side of this equation can generally be ignored. The full nonlinear version with heating was solved in the numerical calculation, and was found to agree with the results when heating is neglected.
* (14) S. Kirkpatrick, Phys. Rev. Lett. 27, 1722 (1971).
* (15) N. Mori, H. Okana, and A. Furuya, Phys. Stat. Sol. (a) 203, 2828 (2006).
# On the mean speed of convergence of empirical and occupation measures in
Wasserstein distance
Emmanuel Boissard and Thibaut Le Gouic, Université Paul Sabatier
###### Abstract.
In this work, we provide non-asymptotic bounds for the average speed of
convergence of the empirical measure in the law of large numbers, in
Wasserstein distance. We also consider occupation measures of ergodic Markov
chains. One motivation is the approximation of a probability measure by
finitely supported measures (the quantization problem). It is found that rates
for empirical or occupation measures match or are close to previously known
optimal quantization rates in several cases. This is notably highlighted in
the example of infinite-dimensional Gaussian measures.
## 1\. Introduction
This paper is concerned with the rate of convergence in Wasserstein distance
for the so-called _empirical law of large numbers_ : let $(E,d,\mu)$ denote a
measured Polish space, and let
(1) $L_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}}$
denote the empirical measure associated with the i.i.d. sample $(X_{i})_{1\leq
i\leq n}$ of law $\mu$, then with probability 1, $L_{n}\rightharpoonup\mu$ as
$n\rightarrow+\infty$ (convergence is understood in the sense of the weak
topology of measures). This theorem is also known as the Glivenko-Cantelli theorem
and is due in this form to Varadarajan [26].
For $1\leq p<+\infty$, the _$p$ -Wasserstein distance_ is defined on the set
$\mathcal{P}_{p}(E)^{2}$ of couples of measures with a finite $p$-th moment by
$W_{p}^{p}(\mu,\nu)=\inf_{\pi\in\mathcal{P}(\mu,\nu)}\int
d^{p}(x,y)\pi(dx,dy)$
where the infimum is taken on the set $\mathcal{P}(\mu,\nu)$ of probability
measures with first, resp. second, marginal $\mu$, resp. $\nu$. This defines a
metric on $\mathcal{P}_{p}$, and convergence in this metric is equivalent to
weak convergence plus convergence of the moment of order $p$. These metrics,
and more generally the Monge transportation problem from which they originate,
have played a prominent role in several areas of probability, statistics and
the analysis of P.D.E.s : for a rich account, see C. Villani’s St-Flour course
[27].
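As a quick numerical illustration of the convergence whose rate we study (this sketch is not part of our analysis), one can estimate $\mathbb{E}(W_{1}(L_{n},\mu))$ for $\mu$ the uniform law on $[0,1]$ by Monte Carlo, using the one-dimensional identity $W_{1}(L_{n},\mu)=\int|F_{n}-F|$, where $F_{n}$ is the empirical c.d.f. :

```python
# Sketch only: Monte Carlo estimate of E[W_1(L_n, mu)] for mu = U[0,1].
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 10001)             # grid on which the c.d.f.s are compared

def w1_to_uniform(x):
    """W_1(L_n, U[0,1]) = int_0^1 |F_n(t) - t| dt, approximated on the grid."""
    F_n = np.searchsorted(np.sort(x), t, side="right") / x.size
    return np.mean(np.abs(F_n - t))

for n in (100, 1000, 10000):
    est = np.mean([w1_to_uniform(rng.random(n)) for _ in range(50)])
    print(f"n = {n:6d}   E[W_1(L_n, mu)] ~ {est:.4f}")
```

In this one-dimensional example the decay is of order $n^{-1/2}$; the results below quantify how the rate deteriorates with the dimension (or, more generally, with the covering numbers) of the support.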
Our purpose is to give bounds on the mean speed of convergence in $W_{p}$
distance for the Glivenko-Cantelli theorem, i.e. bounds for the convergence
$\mathbb{E}(W_{p}(L_{n},\mu))\rightarrow 0$. Such results are desirable
notably in view of numerical and statistical applications : indeed, the
approximation of a given probability measure by a measure with finite support
in Wasserstein distance is a topic that appears in various guises in the
literature, see for example [15]. The first motivation for this work was to
extend the results obtained by F. Bolley, A. Guillin and C. Villani [5] in the
case of variables with support in $\mathbb{R}^{d}$. As in this paper, we aim
to produce bounds that are non-asymptotic and effective (that is with explicit
constants), in order to achieve practical relevance.
We also extend the investigation to the convergence of occupation measures for
suitably ergodic Markov chains : again, we have practical applications in
mind, as this allows one to use Metropolis-Hastings-type algorithms to approximate
an unknown measure (see Section 1.3 for a discussion of this).
There are many works in statistics devoted to convergence rates in some metric
associated with the weak convergence of measures, see e.g. the book of A. Van
der Vaart and J. Wellner [25]. Of particular interest for us is R.M. Dudley’s
article [11], see the Remark following Proposition 1.1.
Other works have been devoted to convergence of empirical measures in
Wasserstein distance, we quote some of them. Horowitz and Karandikar [17] gave
a bound for the rate of convergence of $\mathbb{E}[W_{2}^{2}(L_{n},\mu)]$ to
$0$ for general measures supported in $\mathbb{R}^{d}$ under a moment
condition. M. Ajtai, J. Komlos and G. Tusnady [1] and M. Talagrand [24] studied
the related problem of the average cost of matching two i.i.d. samples from
the uniform law on the unit cube in dimension $d\geq 2$. This line of research
was pushed further, among others, by V. Dobrić and J.E. Yukich [10] or F.
Barthe and C. Bordenave [2] (the reader may refer to this last paper for an
up-to-date account of the Euclidean matching problem). These papers give a
sharp result for measures in $\mathbb{R}^{d}$, with an improvement both over
[17] and [5]. In the case $\mu\in\mathcal{P}(\mathbb{R})$, del Barrio, Giné
and Matran [7] obtain a central limit theorem for $W_{1}(L_{n},\mu)$ under the
condition that $\int_{-\infty}^{+\infty}\sqrt{F(t)(1-F(t))}dt<+\infty$ where
$F$ is the cumulative distribution function (c.d.f.) of $\mu$. In the
companion paper [4], we investigate the case of the $W_{1}$ distance by using
the dual expression of the $W_{1}$ transportation cost by Kantorovich and
Rubinstein, see therein for more references.
Before moving on to our results, we make a remark on the scope of this work.
Generally speaking, the problem of convergence of $W_{p}(L_{n},\mu)$ to $0$
can be divided into two separate questions :
* •
the first one is to estimate the _mean rate of convergence_ , that is the
convergence rate of $\mathbb{E}[W_{p}(L_{n},\mu)]$,
* •
while the second one is to study the concentration properties of
$W_{p}(L_{n},\mu)$ around its mean, that is to find bounds on the quantities
$\mathbb{P}(W_{p}(L_{n},\mu)-\mathbb{E}[W_{p}(L_{n},\mu)]\geq t).$
Our main concern here is the first point. The second one can be dealt with by
techniques of measure concentration. We will elaborate on this in the case of
Gaussian measures (see Appendix A), but not in general. However, this is a
well-trodden topic, and some results are gathered in [4].
###### Acknowledgements.
We thank Patrick Cattiaux for his advice and careful reading of preliminary
versions, and Charles Bordenave for introducing us to his work [2] and
connected works.
### 1.1. Main result and first consequences
###### Definition 1.1.
For $X\subset E$, the covering number of order $\delta$ for $X$, denoted by
$N(X,\delta)$, is defined as the minimal $n\in\mathbb{N}$ such that there
exist $x_{1},\ldots,x_{n}$ in $X$ with
$X\subset\bigcup_{i=1}^{n}B(x_{i},\delta).$
Our main statement is summed up in the following proposition.
###### Proposition 1.1.
Choose $t>0$. Let $\mu\in\mathcal{P}(E)$ with support included in $X\subset E$
with finite diameter $d$ such that $N(X,t)<+\infty$. We have the bound :
$\mathbb{E}(W_{p}(L_{n},\mu))\leq
c\left(t+n^{-1/2p}\int_{t}^{d/4}N(X,\delta)^{1/2p}d\delta\right).$
with $c\leq 64/3$.
###### Remark.
Proposition 1.1 is related in spirit and proof to the results of R.M. Dudley
[11] in the case of the bounded Lipschitz metric
$d_{BL}(\mu,\nu)=\sup_{f\,1\text{-Lip},\,|f|\leq 1}\int fd(\mu-\nu).$
The analogy is not at all fortuitous : indeed, the bounded Lipschitz metric is
linked to the $1$-Wasserstein distance via the well-known Kantorovich-
Rubinstein dual definition of $W_{1}$ :
$W_{1}(\mu,\nu)=\sup_{f\,1\text{-Lip}}\int fd(\mu-\nu).$
The analogy stops at $p=1$ since there is no representation of $W_{p}$ as an
empirical process for $p>1$ (there is, however, a general dual expression of
the transport cost). In spite of this, the technique of proof in [11] proves
useful in our case, and the technique of using a sequence of coarser and
coarser partitions is at the heart of many later results, notably in the
literature concerned with the problem of matching two independent samples in
Euclidean space, see e.g. [24] or the recent paper [2].
We now give a first example of application, under an assumption that the
underlying metric space is of finite-dimensional type in some sense. More
precisely, we assume that there exist $k_{E}>0$, $\alpha>0$ such that
(2) $N(E,\delta)\leq k_{E}(\text{Diam }E/\delta)^{\alpha}.$
Here, the parameter $\alpha$ plays the role of a dimension.
###### Corollary 1.2.
Assume that $E$ satisfies (2), and that $\alpha>2p$. With notations as
earlier, the following holds :
$\mathbb{E}[W_{p}(L_{n},\mu)]\leq c\frac{\alpha}{\alpha-2p}\,\text{Diam
}E\,k_{E}^{1/\alpha}n^{-1/\alpha}$
with $c\leq 64/3$.
###### Remark.
In the case of measures supported in $\mathbb{R}^{d}$, this result is neither
new nor fully optimal. For a sharp statement in this case, the reader may
refer to [2] and references therein. However, we recover at least the exponent
of $n^{-1/d}$ which is sharp for $d\geq 3$, see [2] for a discussion. And on
the other hand, Corollary 1.2 extends to more general metric spaces of finite-
dimensional type, for example manifolds.
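For readers who wish to experiment, the Euclidean matching problem mentioned in the Remark can be explored numerically with the following sketch (not part of our proofs): for two independent $n$-samples, the optimal transport between their empirical measures is attained on a permutation, so $W_{1}$ can be computed exactly with a linear assignment solver, and for $d\geq 3$ the mean cost decays like $n^{-1/d}$.

```python
# Sketch only: exact W_1 between two empirical measures with n atoms each (matching problem).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def w1_empirical(x, y):
    """Exact W_1 between the empirical measures of samples x and y (arrays of shape (n, d))."""
    cost = cdist(x, y)                       # pairwise Euclidean distances
    row, col = linear_sum_assignment(cost)   # optimal matching (a permutation suffices)
    return cost[row, col].mean()             # each atom carries mass 1/n

rng = np.random.default_rng(0)
d, reps = 3, 20
for n in (50, 100, 200, 400):
    vals = [w1_empirical(rng.random((n, d)), rng.random((n, d))) for _ in range(reps)]
    print(f"n = {n:4d}   mean matching cost ~ {np.mean(vals):.4f}")
```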
As opposed to Corollary 1.2, our next result is set in an infinite-dimensional
framework.
### 1.2. An application to Gaussian r.v.s in Banach spaces
We apply the results above to the case where $E$ is a separable Banach space
with norm $\|.\|$, and $\mu$ is a centered Gaussian random variable with
values in $E$, meaning that the image of $\mu$ by every continuous linear
functional $f\in E^{*}$ is a centered Gaussian variable in $\mathbb{R}$. The
couple $(E,\mu)$ is called a (separable) Gaussian Banach space.
Let $X$ be a $E$-valued r.v. with law $\mu$, and define the weak variance of
$\mu$ as
$\sigma=\sup_{f\in E^{*},\,|f|\leq 1}\left(\mathbb{E}f^{2}(X)\right)^{1/2}.$
The small ball function of a Gaussian Banach space $(E,\mu)$ is the function
$\psi(t)=-\log\mu(B(0,t)).$
We can associate to the couple $(E,\mu)$ their Cameron-Martin Hilbert space
$H\subset E$, see e.g. [19] for a reference. It is known that the small ball
function has deep links with the covering numbers of the unit ball of $H$, see
e.g. Kuelbs-Li [18] and Li-Linde [21], as well as with the approximation of
$\mu$ by measures with finite support in Wasserstein distance (the
quantization or optimal quantization problem), see Fehringer’s Ph.D. thesis
[12], Dereich-Fehringer-Matoussi-Scheutzow [8], Graf-Luschgy-Pagès [16].
We make the following assumptions on the small ball function :
1. (1)
there exists $\kappa>1$ such that $\psi(t)\leq\kappa\psi(2t)$ for $0<t\leq
t_{0}$,
2. (2)
for all $\varepsilon>0$, $n^{-\varepsilon}=o(\psi^{-1}(\log n))$.
Assumption (2) implies that the Gaussian measure is genuinely infinite-dimensional :
indeed, in the case when the Cameron-Martin space has finite dimension, the measure is
supported in a finite-dimensional subspace of $E$, and the small
ball function then only grows logarithmically as $t\rightarrow 0$.
###### Theorem 1.3.
Let $(E,\mu)$ be a Gaussian Banach space with weak variance $\sigma$ and small
ball function $\psi$. Assume that Assumptions (1) and (2) hold.
Then there exists a universal constant $c$ such that for all $n$ satisfying
$\log n\geq(6+\kappa)\left(\log 2\vee\psi(1)\vee\psi(t_{0}/2)\vee 1/\sigma^{2}\right),$
the following holds :
(3) $\mathbb{E}(W_{2}(L_{n},\mu))\leq c\left[\psi^{-1}(\frac{1}{6+\kappa}\log
n)+\sigma n^{-1/[4(6+\kappa)]}\right].$
In particular, there is a $C=C(\mu)$ such that
(4) $\mathbb{E}(W_{2}(L_{n},\mu))\leq C\psi^{-1}(\log n).$
Moreover, for $\lambda>0$,
(5) $W_{2}(L_{n},\mu)\leq(C+\lambda)\psi^{-1}(\log n)\text{ with probability at least
}1-\exp\left(-\frac{n\lambda^{2}\left(\psi^{-1}(\log n)\right)^{2}}{2\sigma^{2}}\right).$
###### Remark.
Note that the choice of $6+\kappa$ is not particularly sharp and may likely be
improved.
In order to underline the interest of the result above, we introduce some
definitions from optimal quantization. For $n\geq 1$ and $1\leq r<+\infty$,
define the optimal quantization error at rate $n$ as
$\delta_{n,r}(\mu)=\inf_{\nu\in\mathcal{P}_{n}}W_{r}(\mu,\nu)$
where the infimum runs on the set $\mathcal{P}_{n}$ of probability measures
with finite support of cardinal bounded by $n$. Under some natural
assumptions, the upper bound of (5) is matched by a lower bound for the
quantization error. Theorem 3.1 in [8] states the following : if for every
$0<\zeta<1$,
$\mu((1-\zeta)\varepsilon B)=o(\mu(\varepsilon B))\text{ as
}\varepsilon\rightarrow 0,$
then
$\delta_{n,r}\gtrsim\psi^{-1}(\log n)$
(where $a_{n}\gtrsim b_{n}$ means $\liminf a_{n}/b_{n}\geq 1$).
In the terminology of quantization, Theorem 1.3 states that the empirical
measure is a rate-optimal quantizer with high probability (under some
assumptions on the small ball function). This is of practical interest, since
obtaining the empirical measure is only as difficult as simulating independent
copies of the Gaussian vector, and one avoids computing appropriate
weights for the approximating discrete measure.
We leave aside the question of determining the sharp asymptotics for the
average error $\mathbb{E}(W_{2}(L_{n},\mu))$, that is of finding $c$ such that
$\mathbb{E}(W_{2}(L_{n},\mu))\sim c\psi^{-1}(\log n)$. Let us underline that
the corresponding question for quantizers is tackled for example in [22].
### 1.3. The case of Markov chains
We wish to extend the control of the speed of convergence to weakly dependent
sequences, such as rapidly-mixing Markov chains. There is a natural incentive
to consider this question : there are cases when one does not know how to
sample from a given measure $\pi$, but a Markov chain with stationary measure
$\pi$ is nevertheless available for simulation. This is the basic set-up of
the Markov Chain Monte Carlo framework, and a very frequent situation, even in
finite dimension.
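As a toy illustration of this set-up (not taken from the results below), the following sketch runs a reversible Metropolis chain on a finite integer grid and monitors the $W_{1}$ distance between its occupation measure and the target distribution; on the integer line, $W_{1}$ between two probability vectors is the $\ell^{1}$ distance between their cumulative distribution functions.

```python
# Sketch only: occupation measure of a reversible Metropolis chain on {0,...,K-1}.
import numpy as np

rng = np.random.default_rng(1)
K = 50
k = np.arange(K)
pi = np.exp(-0.5 * ((k - K / 2) / 6.0) ** 2)
pi /= pi.sum()                               # target (stationary) distribution

def w1_discrete(p, q):
    """Exact W_1 between two distributions on the grid {0,...,K-1} (unit spacing)."""
    return np.abs(np.cumsum(p - q)).sum()

def occupation_measure(n):
    x, counts = K // 2, np.zeros(K)
    for _ in range(n):
        y = x + rng.choice((-1, 1))                          # symmetric +/- 1 proposal
        if 0 <= y < K and rng.random() < min(1.0, pi[y] / pi[x]):
            x = y                                            # Metropolis acceptance
        counts[x] += 1
    return counts / n                                        # occupation measure L_n

for n in (10**3, 10**4, 10**5):
    print(f"n = {n:6d}   W_1(L_n, pi) = {w1_discrete(occupation_measure(n), pi):.4f}")
```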
When looking at the proof of Proposition 1.1, it is apparent that the main
ingredient missing in the dependent case is the argument following (18), i.e.
that whenever $A\subset X$ is measurable, $nL_{n}(A)$ follows a binomial law
with parameters $n$ and $\mu(A)$, and this must be remedied in some way. It is
natural to look for some type of quantitative ergodicity property of the
chain, expressing almost-independence of $X_{i}$ and $X_{j}$ in the long range
($|i-j|$ large).
We will consider decay-of-variance inequalities of the following form :
(6) $\text{Var}_{\pi}P^{n}f\leq C\lambda^{n}\text{Var}_{\pi}f.$
In the reversible case, a bound of the type of (6) is ensured by Poincaré or
spectral gap inequalities. We recall one possible definition in the discrete-
time Markov chain setting.
###### Definition 1.2.
Let $P$ be a Markov kernel with _reversible_ measure $\pi\in\mathcal{P}(E)$.
We say that a Poincaré inequality with constant $C_{P}>0$ holds if
(7) $\text{Var}_{\pi}f\leq C_{P}\int f(I-P^{2})fd\pi$
for all $f\in L^{2}(\pi)$.
If (7) holds, we have
$\text{Var}_{\pi}P^{n}f\leq\lambda^{n}\text{Var}_{\pi}f$
with $\lambda=(C_{P}-1)/C_{P}$.
More generally, one may assume that we have a control of the decay of the
variance in the following form :
(8) $\text{Var}_{\pi}P^{n}f\leq C\lambda^{n}\|f-\int fd\pi\|_{L^{p}}^{2}.$
As soon as $p>2$, these inequalities are weaker than (6). Our proof would be
easily adaptable to this weaker decay-of-variance setting. We do not provide a
complete statement of this claim.
For a discussion of the links between Poincaré inequality and other notions of
weak dependence (e.g. mixing coefficients), see the recent paper [6].
For the next two theorems, we make the following dimension assumption on $E$ :
there exists $k_{E}>0$ and $\alpha>0$ such that for all $X\subset E$ with
finite diameter,
(9) $N(X,\delta)\leq k_{E}(\text{Diam }X/\delta)^{\alpha}.$
The following theorem is the analogue of Corollary 1.2 under the assumption
that the Markov chain satisfies a decay-of-variance inequality.
###### Theorem 1.4.
Assume that $E$ has finite diameter $d>0$ and (9) holds. Let
$\pi\in\mathcal{P}(E)$, and let $(X_{i})_{i\geq 0}$ be a $E$-valued Markov
chain with initial law $\nu$ such that $\pi$ is its unique invariant
probability. Assume also that (6) holds for some $C>0$ and $\lambda<1$.
Then if $2p>\alpha(1+1/r)$ and $L_{n}$ denotes the occupation measure
$1/n\sum_{i=1}^{n}\delta_{X_{i}}$, the following holds :
$\mathbb{E}_{\nu}\left[W_{p}(L_{n},\pi)\right]\leq
c\frac{\alpha(1+1/r)}{\alpha(1+1/r)-2p}k_{E}^{1/\alpha}d\left(\frac{C\|\frac{d\nu}{d\pi}\|_{r}}{(1-\lambda)n}\right)^{1/[\alpha(1+1/r)]}$
for some universal constant $c\leq 64/3$.
The previous theorem has the drawback of assuming that the state space has
finite diameter. This can be circumvented, for example by truncation
arguments. Our next theorem is an extension to the unbounded case under some
moment conditions on $\pi$. The statement and the proof involve more
technicalities than Theorem 1.4, so we separate the two in spite of the
obvious similarities.
###### Theorem 1.5.
Assume that (9) holds. Let $\pi\in\mathcal{P}(E)$, and let $(X_{i})_{i\geq 0}$
be a $E$-valued Markov chain with initial law $\nu$ such that $\pi$ is its
unique invariant probability. Assume also that (6) holds for some $C>0$ and
$\lambda<1$. Let $x_{0}\in E$ and for all $\theta\geq 1$, denote
$M_{\theta}=\int d(x_{0},x)^{\theta}d\pi$. Fix $r$ and $\zeta>1$ and assume
$2p>\alpha(1+1/r)(1+1/\zeta)$.
There exist two numerical constants $C_{1}(p,r,\zeta)$ and $C_{2}(p,r,\zeta)$
depending only on $p$, $r$ and $\zeta$ such that whenever
$\frac{C\|\frac{d\nu}{d\pi}\|_{r}}{(1-\lambda)n}\leq C_{1}(p,r,\zeta),$
the following holds :
$\mathbb{E}_{\nu}\left[W_{p}(L_{n},\pi)\right]\leq
C_{2}(p,r,\zeta)K(\zeta)\left(\frac{C\|\frac{d\nu}{d\pi}\|_{r}}{(1-\lambda)n}\right)^{1/[\alpha(1+1/r)(1+1/\zeta)]}$
where
$K(\zeta)=\frac{M_{\zeta}}{M_{p}^{\zeta/p}}\vee\frac{M_{\zeta+2p}}{M_{p}^{1+\zeta/p}}\vee
k_{E}^{\frac{1}{2p}(1+1/r)}\frac{2p}{\alpha(1+1/r)}M_{p}^{\frac{\alpha}{2p^{2}}(1+1/r)}.$
## 2\. Proofs in the independent case
###### Lemma 2.1.
Let $X\subset E$, $s>0$ and $u,v\in\mathbb{N}$ with $u<v$. Suppose that
$N(X,4^{-v}s)<+\infty$. For $u\leq j\leq v$, there exist integers
(10) $m(j)\leq N(X,4^{-j}s)$
and non-empty subsets $X_{j,l}$ of $X$, $u\leq j\leq v$, $1\leq l\leq m(j)$,
such that the sets $X_{j,l}$ $1\leq l\leq m(j)$ satisfy
1. (1)
for each $j$, $(X_{j,l})_{1\leq l\leq m(j)}$ is a partition of $X$,
2. (2)
$\text{Diam }X_{j,l}\leq 4^{-j+1}s$,
3. (3)
for each $j>u$, for each $1\leq l\leq m(j)$ there exists $1\leq l^{\prime}\leq
m(j-1)$ such that $X_{j,l}\subset X_{j-1,l^{\prime}}$.
In other words, the sets $X_{j,l}$ form a sequence of partitions of $X$ that
get coarser as $j$ decreases (tiles at the scale $j-1$ are unions of tiles at
the scale $j$).
###### Proof.
We begin by picking a set of balls $B_{j,l}=B(x_{j,l},4^{-j}s)$ with $u\leq
j\leq v$ and $1\leq l\leq N(X,4^{-j}s)$, such that for all $j$,
$X\subset\bigcup_{l=1}^{N(X,4^{-j}s)}B_{j,l}.$
Define $X_{v,1}=B_{v,1}$, and successively set
$X_{v,l}=B_{v,l}\setminus\left(X_{v,1}\cup\ldots\cup X_{v,l-1}\right)$. Discard the
possible empty sets and relabel the existing sets accordingly. We have obtained the
finest partition, which obviously satisfies conditions (1)-(2).
Assume now that the sets $X_{j,l}$ have been built for $k+1\leq j\leq v$. Set
$X_{k,1}$ to be the reunion of all $X_{k+1,l^{\prime}}$ such that
$X_{k+1,l^{\prime}}\cap B_{k,1}\neq\emptyset$. Likewise, define by induction
on $l$ the set $X_{k,l}$ as the reunion of all $X_{k+1,l^{\prime}}$ such that
$X_{k+1,l^{\prime}}\cap B_{k,l}\neq\emptyset$ and
$X_{k+1,l^{\prime}}\nsubseteq X_{k,p}$ for $1\leq p<l$. Again, discard the
possible empty sets and relabel the remaining tiles. It is readily checked
that the sets obtained satisfy assumptions (1) and (3). We check assumption
(2) : let $x_{k,l}$ denote the center of $B_{k,l}$ and let $y\in
X_{k+1,l^{\prime}}\subset X_{k,l}$. We have
$d(x_{k,l},y)\leq 4^{-k}s+\text{Diam }X_{k+1,l^{\prime}}\leq 2\times 4^{-k}s,$
thus $\text{Diam }X_{k,l}\leq 4^{-k+1}s$ as desired.
∎
Consider as above a subset $X$ of $E$ with finite diameter $d$, and assume
that $N(X,4^{-k}d)<+\infty$. Pick a sequence of partitions $(X_{j,l})_{1\leq
l\leq m(j)}$ for $1\leq j\leq k$, as per Lemma 2.1. For each $(j,l)$ choose a
point $x_{j,l}\in X_{j,l}$. Define the set of points of level $j$ as the set
$L(j)=\{x_{j,l}\}_{1\leq l\leq m(j)}$. Say that $x_{j^{\prime},l^{\prime}}$
is an ancestor of $x_{j,l}$ if $X_{j,l}\subset X_{j^{\prime},l^{\prime}}$ : we
will denote this relation by $(j^{\prime},l^{\prime})\rightarrow(j,l)$.
The next two lemmas study the cost of transporting a finite measure $m_{k}$ to
another measure $n_{k}$ when these measures have support in $L(k)$. The
underlying idea is that we consider the finite metric space formed by the
points $x_{j,l}$, $1\leq j\leq k$, as a _metric tree_ , where points are
connected to their ancestor at the previous level, and we consider the problem
of transportation between two masses at the leaves of the tree. The
transportation algorithm we consider consists in allocating as much mass as
possible at each point, then moving the remaining mass up one level in the
tree, and iterating the procedure.
A technical warning : please note that the transportation cost is usually
defined between two probability measures ; however there is no difficulty in
extending its definition to the transportation between two finite measures of
equal total mass, and we will freely use this fact in the sequel.
###### Lemma 2.2.
Let $m_{j}$, $n_{j}$ be measures with support in $L(j)$. Define the measures
$\tilde{m}_{j-1}$ and $\tilde{n}_{j-1}$ on $L(j-1)$ by setting
(11) $\displaystyle\tilde{m}_{j-1}(x_{j-1,l^{\prime}})$
$\displaystyle=\sum_{(j-1,l^{\prime})\rightarrow(j,l)}\left(m_{j}(x_{j,l})-n_{j}(x_{j,l})\right)_{+},$
(12) $\displaystyle\tilde{n}_{j-1}(x_{j-1,l^{\prime}})$
$\displaystyle=\sum_{(j-1,l^{\prime})\rightarrow(j,l)}\left(n_{j}(x_{j,l})-m_{j}(x_{j,l})\right)_{+}.$
The measures $\tilde{m}_{j-1}$ and $\tilde{n}_{j-1}$ have the same mass, so the
transportation cost between them is well-defined. Moreover, the following bound
holds :
(13) $W_{p}(m_{j},n_{j})\leq 2\times
4^{-j+2}d\|m_{j}-n_{j}\|_{TV}^{1/p}+W_{p}(\tilde{m}_{j-1},\tilde{n}_{j-1}).$
###### Proof.
Set $m_{j}\wedge n_{j}(x_{j,l})=m_{j}(x_{j,l})\wedge n_{j}(x_{j,l})$. By the
triangle inequality,
$\displaystyle W_{p}(m_{j},n_{j})\leq$ $\displaystyle W_{p}(m_{j},m_{j}\wedge
n_{j}+\tilde{m}_{j-1})+W_{p}(m_{j}\wedge n_{j}+\tilde{m}_{j-1},m_{j}\wedge
n_{j}+\tilde{n}_{j-1})$ $\displaystyle+W_{p}(m_{j}\wedge
n_{j}+\tilde{n}_{j-1},n_{j}).$
We bound the first term. Introduce the transport plan $\pi_{m}$ defined
by
$\displaystyle\pi_{m}(x_{j,l},x_{j,l})$ $\displaystyle=m_{j}\wedge
n_{j}(x_{j,l}),$ $\displaystyle\pi_{m}(x_{j,l},x_{j-1,l^{\prime}})$
$\displaystyle=(m_{j}(x_{j,l})-n_{j}(x_{j,l}))_{+}\text{ when
}(j-1,l^{\prime})\rightarrow(j,l).$
The reader can check that $\pi_{m}\in\mathcal{P}(m_{j},m_{j}\wedge
n_{j}+\tilde{m}_{j-1})$. Moreover,
$\displaystyle W_{p}(m_{j},m_{j}\wedge n_{j}+\tilde{m}_{j-1})$ $\displaystyle\leq\left(\int
d^{p}(x,y)\pi_{m}(dx,dy)\right)^{1/p}$ $\displaystyle\leq
4^{-j+2}d\left(\sum_{l=1}^{m(j)}(m_{j}(x_{j,l})-n_{j}(x_{j,l}))_{+}\right)^{1/p}.$
Likewise,
$W_{p}(n_{j},m_{j}\wedge n_{j}+\tilde{n_{j-1}})\leq
4^{-j+2}d\left(\sum_{l=1}^{m(j)}(n_{j}(x_{j,l})-m_{j}(x_{j,l}))_{+}\right)^{1/p}.$
As for the middle term, it is bounded by
$W_{p}(\tilde{m}_{j-1},\tilde{n}_{j-1})$. Putting this together and using the
inequality $x+y\leq 2^{1-1/p}(x^{p}+y^{p})^{1/p}$, we get
$W_{p}(m_{j},n_{j})\leq
2^{1-1/p}4^{-j+2}d\left(\sum_{l=1}^{m(j)}|m_{j}(x_{j,l})-n_{j}(x_{j,l})|\right)^{1/p}+W_{p}(\tilde{m}_{j-1},\tilde{n}_{j-1}).$
∎
###### Lemma 2.3.
Let $m_{j}$, $n_{j}$ be measures with support in $L(j)$. Define for $1\leq
j^{\prime}<j$ the measures $m_{j^{\prime}}$, $n_{j^{\prime}}$ with support in
$L(j^{\prime})$ by
(14)
$m_{j^{\prime}}(x_{j^{\prime},l^{\prime}})=\sum_{(j^{\prime},l^{\prime})\rightarrow(j,l)}m_{j}(x_{j,l}),\quad
n_{j^{\prime}}(x_{j^{\prime},l^{\prime}})=\sum_{(j^{\prime},l^{\prime})\rightarrow(j,l)}n_{j}(x_{j,l}).$
The following bound holds :
(15) $W_{p}(m_{j},n_{j})\leq\sum_{j^{\prime}=1}^{j}2\times
4^{-j^{\prime}+2}d\|m_{j^{\prime}}-n_{j^{\prime}}\|_{TV}^{1/p}.$
###### Proof.
We proceed by induction on $j$. For $j=1$, the result is obtained by using the
simple bound $W_{p}(m_{1},n_{1})\leq d\|m_{1}-n_{1}\|_{\text{TV}}^{1/p}$.
Suppose that (15) holds for measures with support in $L_{j-1}$. By lemma 2.2,
we have
$W_{p}(m_{j},n_{j})\leq 2\times
4^{-j+2}d\|m_{j}-n_{j}\|_{TV}^{1/p}+W_{p}(\tilde{m}_{j-1},\tilde{n}_{j-1})$
where $\tilde{m}_{j-1}$ and $\tilde{n}_{j-1}$ are defined by (11) and (12)
respectively. For $1\leq i\leq j-1$, define following (14)
$\tilde{m}_{i}(x_{i,l^{\prime}})=\sum_{(i,l^{\prime})\rightarrow(j-1,l)}\tilde{m}_{j-1}(x_{j-1,l}),\quad\tilde{n}_{i}(x_{i,l^{\prime}})=\sum_{(i,l^{\prime})\rightarrow(j-1,l)}\tilde{n}_{j-1}(x_{j-1,l}).$
We have
$W_{p}(m_{j},n_{j})\leq 2\times
4^{-j+2}d\|m_{j}-n_{j}\|_{TV}^{1/p}+\sum_{j^{\prime}=1}^{j-1}2\times
4^{-j^{\prime}+2}d\|\tilde{m}_{j^{\prime}}-\tilde{n}_{j^{\prime}}\|_{TV}^{1/p}.$
To conclude, it suffices to check that for $1\leq i\leq j-1$,
$\|\tilde{m}_{i}-\tilde{n}_{i}\|_{TV}=\|m_{i}-n_{i}\|_{TV}$.
∎
###### Proof of Proposition 1.1..
We pick some positive integer $k$ whose value will be determined at a later
point. Introduce the sequence of partitions $(X_{j,l})_{1\leq l\leq m(j)}$ for
$0\leq j\leq k$ as in the lemmas above, as well as the points $x_{j,l}$.
Define $\mu_{k}$ as the measure with support in $L(k)$ such that
$\mu_{k}(x_{k,l})=\mu(X_{k,l})$ for $1\leq l\leq m(k)$. The diameter of the
sets $X_{k,l}$ is bounded by $4^{-k+1}d$, therefore $W_{p}(\mu,\mu_{k})\leq
4^{-k+1}d$.
Let $L_{n}^{k}$ denote the empirical measure associated to $\mu_{k}$.
For $0\leq j\leq k-1$, define as in Lemma 2.3 the measures $\mu_{j}$ and
$L_{n}^{j}$ with support in $L(j)$ by
(16) $\displaystyle\mu_{j}(x_{j,l^{\prime}})$
$\displaystyle=\sum_{(j,l^{\prime})\rightarrow(k,l)}\mu_{k}(x_{k,l})$ (17)
$\displaystyle L_{n}^{j}(x_{j,l^{\prime}})$
$\displaystyle=\sum_{(j,l^{\prime})\rightarrow(k,l)}L_{n}^{k}(x_{k,l}).$
It is simple to check that $\mu_{j}(x_{j,l})=\mu(X_{j,l})$, and that
$L_{n}^{j}$ is the empirical measure associated with $\mu_{j}$. Applying (15),
we get
(18) $W_{p}(\mu_{k},L_{n}^{k})\leq\sum_{j=1}^{k}2\times
4^{-j+2}d\|\mu_{j}-L_{n}^{j}\|_{TV}^{1/p}.$
Observe that $nL_{n}^{j}(x_{j,l})$ follows a binomial law with parameters $n$ and
$\mu(X_{j,l})$. The expectation of $\|\mu_{j}-L_{n}^{j}\|_{TV}$ is bounded as
follows :
$\displaystyle\mathbb{E}(\|\mu_{j}-L_{n}^{j}\|_{TV})$
$\displaystyle=1/2\sum_{l=1}^{m(j)}\mathbb{E}(|(L_{n}^{j}-\mu_{j})(x_{j,l})|)$
$\displaystyle\leq
1/2\sum_{l=1}^{m(j)}\sqrt{\mathbb{E}(|(L_{n}^{j}-\mu_{j})(x_{j,l})|^{2})}$
$\displaystyle=1/2\sum_{l=1}^{m(j)}\sqrt{\frac{\mu(X_{j,l})(1-\mu(X_{j,l}))}{n}}$
$\displaystyle\leq 1/2\sqrt{\frac{m(j)}{n}}.$
In the last inequality, we use the Cauchy-Schwarz inequality and the fact that
$(X_{j,l})_{1\leq l\leq m(j)}$ is a partition of $X$. Putting this back in
(18), we get
$\displaystyle\mathbb{E}(W_{p}(\mu_{k},L_{n}^{k}))$ $\displaystyle\leq
n^{-1/2p}\sum_{j=1}^{k}2^{1-1/p}4^{(-j+2)}dm(j)^{1/2p}$ $\displaystyle\leq
2^{5-1/p}n^{-1/2p}\sum_{j=1}^{k}4^{-j}dN(X,4^{-j}d)^{1/2p}$ $\displaystyle\leq
2^{6-1/p}/3n^{-1/2p}\int_{4^{-(k+1)}d}^{d/4}N(X,\delta)^{1/2p}d\delta.$
In the last line, we use a standard sum-integral comparison argument.
By the triangle inequality, we have
$W_{p}(\mu,L_{n})\leq
W_{p}(\mu,\mu_{k})+W_{p}(\mu_{k},L_{n}^{k})+W_{p}(L_{n}^{k},L_{n}).$
We claim that $\mathbb{E}(W_{p}(L_{n}^{k},L_{n}))\leq W_{p}(\mu,\mu_{k})$.
Indeed, choose $n$ i.i.d. couples $(X_{i},X_{i}^{k})$ such that
$X_{i}\sim\mu$, $X_{i}^{k}\sim\mu_{k}$, and the joint law of
$(X_{i},X_{i}^{k})$ achieves an optimal coupling, i.e.
$\mathbb{E}|X_{i}-X_{i}^{k}|^{p}=W_{p}^{p}(\mu,\mu_{k})$. We have the
identities in law
$L_{n}\sim\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}},\>L_{n}^{k}\sim\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}^{k}}.$
Choose the transport plan that sends $X_{i}$ to $X_{i}^{k}$ : this gives the
upper bound
$W_{p}^{p}(L_{n},L_{n}^{k})\leq 1/n\sum_{i=1}^{n}|X_{i}-X_{i}^{k}|^{p}$
and passing to expectation proves our claim.
Thus, $\mathbb{E}(W_{p}(\mu,L_{n}))\leq
2W_{p}(\mu,\mu_{k})+\mathbb{E}(W_{p}(\mu_{k},L_{n}^{k}))$. Choose now $k$ as
the largest integer such that $4^{-(k+1)}d\geq t$. This imposes $4^{-k+1}d\leq
16t$, and this finishes the proof.
∎
###### Proof of Corollary 1.2.
It suffices to use Proposition 1.1 along with (2) and to optimize in $t$. ∎
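To make the mechanism of the proof concrete, the toy computation below (not part of the proof; constants are not sharp) compares, for $\mu$ the uniform law on $[0,1]$, the exact $W_{1}(L_{n},\mu)$ with the multiscale sum $\sum_{j}2^{-j}\sum_{l}|\mu(I_{j,l})-L_{n}(I_{j,l})|$ over nested dyadic partitions $I_{j,l}$, which controls $W_{1}$ up to a constant in the spirit of (15) and (18).

```python
# Sketch only: dyadic multiscale bound vs. exact W_1 on [0,1] (constants not sharp).
import numpy as np

rng = np.random.default_rng(2)
n, k_max = 2000, 12
x = rng.random(n)

# W_1(L_n, U[0,1]) = int_0^1 |F_n(t) - t| dt, approximated on a fine grid.
t = np.linspace(0.0, 1.0, 20001)
F_n = np.searchsorted(np.sort(x), t, side="right") / n
w1 = np.mean(np.abs(F_n - t))

# Multiscale sum over nested dyadic partitions of [0,1].
S = 0.0
for j in range(1, k_max + 1):
    edges = np.linspace(0.0, 1.0, 2**j + 1)
    emp = np.histogram(x, bins=edges)[0] / n          # L_n(I_{j,l})
    tru = np.diff(edges)                              # mu(I_{j,l}) = 2^{-j}
    S += 2.0 ** (-j) * np.abs(emp - tru).sum()

print(f"exact W_1 ~ {w1:.4f}    multiscale sum ~ {S:.4f}")
```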
## 3\. Proof of Theorem 1.3.
###### Proof of Theorem 1.3..
We begin by noticing that statement (5) is a simple consequence of statement
(4) and the tensorization of $\mathbf{T}_{2}$ : we have by Corollary A.2
$\mathbb{P}(W_{2}(L_{n},\mu)\geq\mathbb{E}(W_{2}(L_{n},\mu)+t)\leq
e^{-nt^{2}/(2\sigma^{2})},$
and it suffices to choose $t=\lambda\psi^{-1}(\log n)$ to conclude. We now
turn to the other claims.
Denote by $K$ the unit ball of the Cameron-Martin space associated to $E$ and
$\mu$, and by $B$ the unit ball of $E$. According to the Gaussian
isoperimetric inequality (see [19]), for all $\lambda>0$ and $\varepsilon>0$,
$\mu(\lambda K+\varepsilon B)\geq\Phi\left(\lambda+\Phi^{-1}(\mu(\varepsilon
B))\right)$
where $\Phi(t)=\int_{-\infty}^{t}e^{-u^{2}/2}du/\sqrt{2\pi}$ is the Gaussian
c.d.f..
Choose $\lambda>0$ and $\varepsilon>0$, and set $X=\lambda K+\varepsilon B$.
Note
$\mu^{\prime}=\frac{1}{\mu(X)}\mathbf{1}_{X}\mu$
the restriction of $\mu$ to the enlarged ball.
The diameter of $X$ is bounded by $2(\sigma\lambda+\varepsilon)$. The expected $W_{2}$
distance between $L_{n}$ and $\mu$ is thus bounded as follows :
(19) $\mathbb{E}(W_{2}(L_{n},\mu))\leq
2W_{2}(\mu,\mu^{\prime})+ct+cn^{-1/4}\int_{t}^{(\sigma\lambda+\varepsilon)/2}N(X,\delta)^{1/4}d\delta
Set
(20) $\displaystyle I_{1}$ $\displaystyle=W_{2}(\mu,\mu^{\prime})$ (21)
$\displaystyle I_{2}$ $\displaystyle=t$ (22) $\displaystyle I_{3}$
$\displaystyle=n^{-1/4}\int_{t}^{(\sigma\lambda+\varepsilon)/2}N(X,\delta)^{1/4}d\delta.$
To begin with, set $\varepsilon=t/2$.
_Controlling $I_{1}$._ We use transportation inequalities and the Gaussian
isoperimetric inequality. By Lemma A.1, $\mu$ satisfies a
$\mathbf{T}_{2}(2\sigma^{2})$ inequality, so that we have
$\displaystyle W_{2}(\mu,\mu^{\prime})$
$\displaystyle\leq\sqrt{2\sigma^{2}H(\mu^{\prime}|\mu)}=\sqrt{-2\sigma^{2}\log\mu(\lambda
K+\varepsilon B)}$
$\displaystyle\leq\sqrt{-2\sigma^{2}\log\Phi(\lambda+\Phi^{-1}(\mu(\varepsilon
B)))}$
$\displaystyle=\sqrt{2}\sigma\sqrt{-\log\Phi(\lambda+\Phi^{-1}(e^{-\psi(t/2)}))}.$
Introduce the tail function of the Gaussian distribution
$\Upsilon(x)=\sqrt{2\pi}^{-1}\int_{x}^{+\infty}e^{-y^{2}/2}dy.$
We will use the fact that $\Phi^{-1}+\Upsilon^{-1}=0$, which comes from
symmetry of the Gaussian distribution. We will also use the bound
$\Upsilon(t)\leq e^{-t^{2}/2}/2$, $t\geq 0$ and its consequence
$\Upsilon^{-1}(u)\leq\sqrt{-2\log u},\quad 0<u\leq 1/2.$
We have
$\Phi^{-1}(e^{-\psi(t/2)})=-\Upsilon^{-1}(e^{-\psi(t/2)})\geq-\sqrt{2\psi(t/2)}$
as soon as $\psi(t/2)\geq\log 2$. The elementary bound $\log\frac{1}{1-x}\leq
2x$ for $x\leq 1/2$ yields
$\displaystyle\sqrt{-2\log\Phi(u)}$
$\displaystyle=\sqrt{2}\left(\log\frac{1}{1-\Upsilon(u)}\right)^{1/2}$
$\displaystyle\leq\sqrt{2}e^{-u^{2}/4}$
whenever $u\geq\Upsilon^{-1}(1/2)=0$. Putting this together, we have
(23) $I_{1}\leq\sqrt{2}\sigma e^{-(\lambda-\sqrt{2\psi(t/2)})^{2}/4}.$
whenever
(24) $\psi(t/2)\geq\log 2\text{ and }\lambda-\sqrt{2\psi(t/2)}\geq 0.$
_Controlling $I_{3}$._ The term $I_{3}$ is bounded by
$\frac{1}{2}n^{-1/4}(\sigma\lambda+t/2)N(X,t)^{1/4}$ (just bound the integrand
by its value at $t$, where it is maximal). Denote $k=N(\lambda K,t-\varepsilon)$
the covering number of $\lambda K$ (w.r.t. the norm of $E$). Let
$x_{1},\ldots,x_{k}\in\lambda K$ be such that the union of the balls
$B(x_{i},t-\varepsilon)$ contains $\lambda K$. From the triangle inequality we
get the inclusion
$\lambda K+\varepsilon B\subset\bigcup_{i=1}^{k}B(x_{i},t).$
Therefore, $N(X,t)\leq N(\lambda K,t-\varepsilon)=N(\lambda K,t/2)$.
We now use the well-known link between $N(\lambda K,t/2)$ and the small ball
function. Lemma 1 in [18] gives the bound
$N(\lambda K,t/2)\leq e^{\lambda^{2}/2+\psi(t/4)}\leq
e^{\lambda^{2}/2+\kappa\psi(t/2)}.$
so that
(25)
$I_{3}\leq\frac{1}{2}(\sigma\lambda+t/2)e^{\frac{\lambda^{2}}{8}+\frac{\kappa}{4}\psi(t/2)-\frac{1}{4}\log
n}.$
Remark that we have used the doubling condition on $\psi$, so that we require
(26) $t/4\leq t_{0}.$
_Final step._ Set now $t=2\psi^{-1}(a\log n)$ and $\lambda=2\sqrt{2a\log n}$,
with $a>0$ yet undetermined. Using (23) and (25), we see that there exists a
universal constant $c$ such that
$\displaystyle\mathbb{E}(W_{2}(L_{n},\mu))\leq$ $\displaystyle
c\left[\psi^{-1}(a\log n)+\sigma e^{-(a/2)\log n}\right.$
$\displaystyle\quad\left.+(\sigma\sqrt{a\log n}+\psi^{-1}(a\log
n))e^{[a(1+\kappa/4)-1/4]\log n}\right].$
Choose $a=1/(6+\kappa)$ and assume $\log n\geq(6+\kappa)(\log
2\vee\psi(1)\vee\psi(t_{0}/2))$. This guarantees that the technical conditions
(24) and (26) are enforced, and that $\psi^{-1}(a\log n)\leq 1$. Summing up,
we get :
$\mathbb{E}(W_{2}(L_{n},\mu))\leq c\left[\psi^{-1}(\frac{1}{6+\kappa}\log
n)+(1+\sigma\sqrt{\frac{1}{6+\kappa}\log n})n^{-1/(12+2\kappa)}\right].$
Impose $\log n\geq(6+\kappa)/\sigma^{2}$ : this ensures
$\sigma\sqrt{\frac{1}{6+\kappa}\log n}\geq 1$. And finally, there exists some
$c>0$ such that for all $x\geq 1$, $\sqrt{\log x}x^{-1/4}\leq c$ : this
implies
$\sqrt{\frac{1}{6+\kappa}\log n}n^{-1/(24+4\kappa)}\leq c.$
This gives
$(1+\sigma\sqrt{\frac{1}{6+\kappa}\log n})n^{-1/(12+2\kappa)}\leq c\sigma
n^{-1/[4(6+\kappa)]}$
and the proof is finished.
∎
## 4\. Proofs in the dependent case
We consider hereafter a Markov chain $(X_{n})_{n\in\mathbb{N}}$ defined by
$X_{0}\sim\nu$ and the transition kernel $P$. Let us denote by
$L_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}}$
its occupation measure.
###### Proposition 4.1.
Suppose that the Markov chain satisfies (6) for some $C>0$ and $\lambda<1$.
Then the following holds :
(27) $\mathbb{E}_{\nu}(W_{p}(L_{n},\pi))\leq
c\left(t+\left(\frac{C}{(1-\lambda)n}\|\frac{d\nu}{d\pi}\|_{r}\right)^{1/2p}\int_{t}^{d/4}N(X,\delta)^{\frac{1}{2p}(1+1/r)}d\delta\right).$
###### Proof.
An application of (15) as in (18) yields
(28) $\mathbb{E}(W_{p}(L_{n},\pi))\leq 2\times 4^{-k+1}d+\sum_{j=1}^{k}2\times
4^{-j+2}d\left(\sum_{l=1}^{m(j)}\mathbb{E}|(L_{n}-\pi)(X_{j,l})|\right)^{1/p}.$
Let $A$ be a measurable subset of $X$, and set
$f_{A}(x)=\mathbf{1}_{A}(x)-\pi(A)$. We have
$\displaystyle\mathbb{E}|(L_{n}-\pi)(A)|$
$\displaystyle=1/n\mathbb{E}_{\nu}|\sum_{i=1}^{n}f_{A}(X_{i})|$
$\displaystyle\leq
1/n\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{E}_{\nu}\left[f_{A}(X_{i})f_{A}(X_{j})\right]}.$
Let $\tilde{p},\tilde{q},r\geq 1$ be such that
$1/\tilde{p}+1/\tilde{q}+1/r=1$, and let $s$ be defined by
$1/s=1/\tilde{p}+1/\tilde{q}$. Now, using Hölder’s inequality with $r$ and
$s$,
$\mathbb{E}_{\nu}\left[f_{A}(X_{i})f_{A}(X_{j})\right]\leq\|\frac{d\nu}{d\pi}\|_{r}(\mathbb{E}_{\pi}|f_{A}(X_{i})f_{A}(X_{j})|^{s})^{1/s}.$
Use the Markov property and the fact that $f\mapsto Pf$ is a contraction in
$L^{s}$ to get
$\mathbb{E}_{\nu}\left[f_{A}(X_{i})f_{A}(X_{j})\right]\leq\|\frac{d\nu}{d\pi}\|_{r}\|f_{A}P^{j-i}f_{A}\|_{s}.$
Finally, use Hölder’s inequality with $\tilde{p},\tilde{q}$ : we get
(29)
$\mathbb{E}_{\nu}\left[f_{A}(X_{i})f_{A}(X_{j})\right]\leq\|\frac{d\nu}{d\pi}\|_{r}\|P^{j-i}f_{A}\|_{\tilde{p}}\|f_{A}\|_{\tilde{q}}.$
Set $\tilde{p}=2$ and note that for $1\leq t\leq+\infty$, we have
$\|f_{A}\|_{t}\leq 2\pi(A)^{1/t}$. Use (6) applied to the centered function
$f_{A}$ to get
$\mathbb{E}_{\nu}\left[f_{A}(X_{i})f_{A}(X_{j})\right]\leq
4C\lambda^{j-i}\|\frac{d\nu}{d\pi}\|_{r}\pi(A)^{1-1/r},$
and as a consequence,
(30)
$\mathbb{E}|(L_{n}-\pi)(A)|\leq\frac{1}{\sqrt{n}}\frac{2\sqrt{2C}}{\sqrt{1-\lambda}}\|\frac{d\nu}{d\pi}\|_{r}^{1/2}\pi(A)^{1/2-1/2r}.$
Come back to (28) : we have
$\displaystyle\mathbb{E}(W_{p}(L_{n},\pi))$ $\displaystyle\leq
4^{-k+1}d+32(\frac{2\sqrt{2C}}{\sqrt{1-\lambda}})^{1/p}\|\frac{d\nu}{d\pi}\|_{r}^{1/2p}n^{-1/2p}$
$\displaystyle\quad\times\sum_{j=1}^{k}4^{-j}d\left(\sum_{l=1}^{m(j)}\pi(X_{j,l})^{1/2-1/2r}\right)^{1/p}$
$\displaystyle\leq
4^{-k+1}d+c\left(\frac{C}{(1-\lambda)n}\|\frac{d\nu}{d\pi}\|_{r}\right)^{1/2p}\sum_{j=1}^{k}4^{-j}dm(j)^{\frac{1}{2p}(1+1/r)}$ $\displaystyle\leq
c\left(t+\left(\frac{C}{(1-\lambda)n}\|\frac{d\nu}{d\pi}\|_{r}\right)^{1/2p}\int_{t}^{d/4}N(X,\delta)^{\frac{1}{2p}(1+1/r)}d\delta\right).$
∎
###### Proof of Theorem 1.4..
Use (27) and (9) to get
$\mathbb{E}W_{p}(L_{n},\pi)\leq c\left[t+At^{1-\frac{\alpha(1+1/r)}{2p}}\right]$
where
$A=\frac{2p}{\alpha(1+1/r)}\left(C/(1-\lambda)\right)^{1/2p}\|\frac{d\nu}{d\pi}\|_{r}^{1/2p}n^{-1/2p}d^{\frac{\alpha(1+1/r)}{2p}}.$
Optimizing in $t$ finishes the proof.
∎
We now move to the proof in the unbounded case.
###### Proof of Theorem 1.5.
We remind the reader that the following assumption stands : for $X\subset E$
with diameter bounded by $d$,
(31) $N(X,\delta)\leq k_{E}(d/\delta)^{\alpha}.$
In the following lines, we will make use of the elementary inequalities
(32) $(x+y)^{p}\leq 2^{p-1}(x^{p}+y^{p})\leq 2^{p-1}(x+y)^{p}.$
_Step 1._
Pick an increasing sequence of numbers $d_{i}>0$ to be set later on, and some
point $x_{0}\in E$. Define $C_{1}=B(x_{0},d_{1})$, and $C_{i}=B(x_{0},d_{i})\setminus
B(x_{0},d_{i-1})$ for $i\geq 2$.
The idea is as follows : we decompose the state space $E$ into a union of
rings, and deal separately with $C_{1}$ on the one hand, using the case of
Theorem 1.4 as guideline, and with the union of the $C_{i}$, $i\geq 2$ on the
other hand, where we use more brutal bounds.
We define partial occupation measures
$L_{n}^{i}=1/n\sum_{j=1}^{n}\delta_{X_{j}}\mathbf{1}_{X_{j}\in C_{i}}$
and their masses $m_{i}=L_{n}^{i}(E)$. We have the inequality
(33) $W_{p}^{p}(L_{n},\pi)\leq\sum_{i\geq
1}m_{i}W_{p}^{p}(1/m_{i}L_{n}^{i},\pi).$
On the other hand,
$\displaystyle W_{p}(1/m_{i}L_{n}^{i},\pi)$ $\displaystyle\leq(\int
d(x_{0},x)^{p}d\pi)^{1/p}+(\int d(x_{0},x)^{p}d(1/m_{i}L_{n}^{i}))^{1/p}$
$\displaystyle\leq M_{p}^{1/p}+d_{i},$
so that $W_{p}^{p}(1/m_{i}L_{n}^{i},\pi)\leq
2^{p-1}\left(M_{p}+d_{i}^{p}\right)$ using (32). Also, using (33) and (32)
yields
$W_{p}(L_{n},\pi)\leq
m_{1}^{1/p}W_{p}(1/m_{1}L_{n}^{1},\pi)+2^{1-1/p}\left(\sum_{i\geq
2}m_{i}\left[M_{p}+d_{i}^{p}\right]\right)^{1/p}.$
Pass to expectations to get
(34)
$\mathbb{E}[W_{p}(L_{n},\pi)]\leq\mathbb{E}\left[m_{1}^{1/p}W_{p}(1/m_{1}L_{n}^{1},\pi)\right]+2^{1-1/p}\left(\sum_{i\geq
2}\pi(C_{i})\left[M_{p}+d_{i}^{p}\right]\right)^{1/p}$
We bound separately the two terms on the right-hand side of (34),
starting with the second one.
_Step 2_.
Choose some $q>p$ and use Chebyshev’s inequality to bound the sum on the right
by
(35) $\sum_{i\geq 2}\frac{M_{q}}{d_{i-1}^{q}}\left[M_{p}+d_{i}^{p}\right]$
Take $d_{i}=\rho^{i}M_{p}^{1/p}$; then (35) becomes
$\displaystyle M_{q}M_{p}^{1-q/p}\rho^{q}\sum_{i\geq
2}[\rho^{-qi}+\rho^{(p-q)i}]$ $\displaystyle=$ $\displaystyle
M_{q}M_{p}^{1-q/p}\left[\frac{\rho^{-q}}{1-\rho^{-q}}+\frac{\rho^{2p-q}}{1-\rho^{p-q}}\right].$
Assume for example that $\rho\geq 2$ : this implies
$\sum_{i\geq 2}\pi(C_{i})\left[M_{p}+d_{i}^{p}\right]\leq
4M_{q}M_{p}^{1-q/p}\rho^{2p-q}.$
For later use, we set $\zeta=q/p-2$ and the above yields
(36) $2^{1-1/p}\left(\sum_{i\geq
2}\pi(C_{i})\left[M_{p}+d_{i}^{p}\right]\right)^{1/p}\leq
4M_{(\zeta+2)p}^{1/p}M_{p}^{-(1+\zeta)/p}\rho^{-\zeta}.$
_Step 3._
We now turn our attention to the term on the left in (34).
Once again, we apply (15) to obtain
$W_{p}(1/m_{1}L_{n}^{1},\pi)\lesssim
4^{-k}d_{1}+\sum_{j=1}^{k}4^{-j}\left(\sum_{l=1}^{m(j)}|((1/m_{1})L_{n}^{1}-\pi)(X_{j,l})|\right)^{1/p}$
Multiply by $m_{1}^{1/p}$ and pass to expectations :
$\displaystyle\mathbb{E}\left[m_{1}^{1/p}W_{p}(m_{1}^{-1}L_{n}^{1},\pi)\right]\lesssim$
$\displaystyle\,\sum_{j=1}^{k}4^{-j}\left(\sum_{l=1}^{m(j)}\mathbb{E}|(L_{n}-m_{1}\pi)(X_{j,l})|\right)^{1/p}$
$\displaystyle+4^{-k}d_{1}\mathbb{E}(m_{1}^{1/p}).$
First, notice that $0\leq m_{1}\leq 1$ a.s. so that
$\mathbb{E}(m_{1}^{1/p})\leq 1$. Next, write
$\displaystyle\sum_{l=1}^{m(j)}\mathbb{E}|(L_{n}-m_{1}\pi)(X_{j,l})|$
$\displaystyle\leq\sum_{l=1}^{m(j)}\mathbb{E}\left(|(L_{n}-\pi)(X_{j,l})|+|(m_{1}\pi-\pi)(X_{j,l})|\right)$
$\displaystyle\leq\sum_{l=1}^{m(j)}\mathbb{E}|(L_{n}-\pi)(X_{j,l})|+\mathbb{E}(|m_{1}-1|)\pi(C_{1})$
$\displaystyle\leq\sum_{l=1}^{m(j)}\mathbb{E}|(L_{n}-\pi)(X_{j,l})|+\mathbb{E}|L_{n}(C_{1})-1|.$
The first of these two terms is controlled using (30) : we have
$\sum_{l=1}^{m(j)}\mathbb{E}|(L_{n}-\pi)(X_{j,l})|\leq\frac{1}{\sqrt{n}}\frac{2\sqrt{2C}}{\sqrt{1-\lambda}}\|\frac{d\nu}{d\pi}\|_{r}^{1/2}m(j)^{1/2+1/2r}$
And on the other hand,
$\displaystyle\mathbb{E}|L_{n}(C_{1})-1|$
$\displaystyle\leq\mathbb{E}|(L_{n}-\pi)(C_{1})|+\pi(C_{1}^{c})$
$\displaystyle\leq\frac{1}{\sqrt{n}}\frac{2\sqrt{2C}}{\sqrt{1-\lambda}}\|\frac{d\nu}{d\pi}\|_{r}^{1/2}+\pi(C_{1}^{c}).$
Here we have used (30) again.
We skip over details here as they are similar to those in previous proofs.
Choosing an appropriate value for $k$ and using the estimates above allows us
to recover the following :
(37)
$\displaystyle\mathbb{E}\left[m_{1}^{1/p}W_{p}(1/m_{1}L_{n}^{1},\pi)\right]\lesssim$
$\displaystyle\left(\frac{C}{(1-\lambda)n}\|\frac{d\nu}{d\pi}\|_{r}\right)^{1/2p}\int_{t}^{d_{1}/4}N(C_{1},\delta)^{1/2p(1+1/r)}d\delta$
$\displaystyle+\pi(C_{1}^{c})+t.$
The term $\pi(C_{1}^{c})$ is bounded by the Chebyshev inequality :
$\pi(C_{1}^{c})\leq\frac{M_{\zeta}}{d_{1}^{\zeta}}=M_{\zeta}M_{p}^{-\zeta/p}\rho^{-\zeta}.$
_Step 4._
Use (36) and (37), along with assumption (31) : this yields
$\mathbb{E}(W_{p}(L_{n},\pi))\lesssim
K(\zeta)\left(\rho^{-\zeta}+t+A_{n}\rho^{\frac{\alpha(1+1/r)}{2p}}t^{1-\frac{\alpha(1+1/r)}{2p}}\right)$
where
$A_{n}=\left(\frac{C}{(1-\lambda)n}\|\frac{d\nu}{d\pi}\|_{r}\right)^{1/2p}$,
and
$K(\zeta)=\frac{M_{\zeta}}{M_{p}^{\zeta/p}}\vee\frac{M_{\zeta+2p}}{M_{p}^{1+\zeta/p}}\vee
k_{E}^{\frac{1}{2p}(1+1/r)}\frac{2p}{\alpha(1+1/r)}M_{p}^{\frac{\alpha}{2p^{2}}(1+1/r)}.$
The remaining step is optimization in $t$ and $\rho$. We obtain the following
result : there exists a constant $C(p,r,\zeta)$ depending only on the values
of $p$, $r$ and $\zeta$, such that
$\mathbb{E}(W_{p}(L_{n},\pi))\lesssim
C(p,r,\zeta)K(\zeta)A_{n}^{2p/(\alpha(1+1/r)(1+1/\zeta))}.$
There is a caveat : we have used the condition $\rho\geq 2$ at some point, and
with this restriction the optimization above is valid only when $A_{n}\leq
C^{\prime}(p,r,\zeta)$, where the constant $C^{\prime}(p,r,\zeta)$ only
depends on the values of $p$, $r$, $\zeta$.
∎
## Appendix A Transportation inequalities for Gaussian measures on a Banach
space
Transportation inequalities, also called transportation-entropy inequalities,
have been introduced by K. Marton [23] to study the phenomenon of
concentration of measure. M. Talagrand showed that the finite-dimensional
Gaussian measures satisfy a $\mathbf{T}_{2}$ inequality. The following
appendix contains a simple extension of this result to the infinite-
dimensional case. For much more on the topic of transportation inequalities,
the reader may refer to the survey [14] by N. Gozlan and C. Léonard.
For $\mu\in\mathcal{P}(E)$, let $H(.|\mu)$ denote the relative entropy with
respect to $\mu$ :
$H(\nu|\mu)=\int_{E}\frac{d\nu}{d\mu}\log\frac{d\nu}{d\mu}d\mu$
if $\nu\ll\mu$, and $H(\nu|\mu)=+\infty$ otherwise.
We say that $\mu\in\mathcal{P}_{p}(E)$ satisfies a $\mathbf{T}_{p}(C)$
transportation inequality when
$W_{p}(\nu,\mu)\leq\sqrt{CH(\nu|\mu)}\quad\forall\nu\in\mathcal{P}_{p}(E)$
We identify what kind of transport inequality is satisfied by a Gaussian
measure on a Banach space. We remind the reader of the following definition :
let $(E,\mu)$ be a Gaussian Banach space and $X\sim\mu$ be a $E$-valued r.v..
The weak variance of $\mu$ or $X$ is defined by
$\sigma^{2}=\sup_{f\in E^{*},|f|\leq 1}\mathbb{E}(f^{2}(X)).$
The lemma below is optimal, as shown by the finite-dimensional case.
###### Lemma A.1.
Let $(E,\mu)$ be a Gaussian Banach space, and let $\sigma^{2}$ denote the weak
variance of $\mu$. Then $\mu$ satisfies a $\mathbf{T}_{2}(2\sigma^{2})$
inequality.
###### Proof.
According e.g. to [20], there exists a sequence $(x_{i})_{i\geq 1}$ in $E$ and
an orthogaussian sequence $(g_{i})_{i\geq 1}$ (meaning a sequence of i.i.d.
standard normal variables) such that
$\sum_{i\geq 1}g_{i}x_{i}\sim\mu,$
where convergence of the series holds a.s. and in all the $L^{p}$’s. In
particular, the laws $\mu_{n}$ of the partial sums $\sum_{i=1}^{n}g_{i}x_{i}$
converge weakly to $\mu$.
As a consequence of the stability result of Djellout-Guillin-Wu (Lemma 2.2 in
[9]) showing that $\mathbf{T}_{2}$ is stable under weak convergence, it thus
suffices to show that the measures $\mu_{n}$ all satisfy the
$\mathbf{T}_{2}(2\sigma^{2})$ inequality.
First, by definition of $\sigma$, we have
$\sigma^{2}=\sup_{f\in E^{*},|f|\leq
1}\mathbb{E}(\sum_{i=1}^{+\infty}f(x_{i})g_{i})^{2}$
and since $(g_{i})$ is an orthogaussian sequence, the sum is equal to
$\sum_{i=1}^{+\infty}f^{2}(x_{i})$.
Consider the mapping
$\displaystyle T:$ $\displaystyle(\mathbb{R}^{n},N)\rightarrow(E,\|.\|)$
$\displaystyle(a_{1},\ldots,a_{n})\mapsto\sum_{i=1}^{n}a_{i}x_{i}.$
(here $\mathbb{R}^{n}$ is equipped with the Euclidean norm $N$). With the
remark above it is easy to check that $\|T(a)\|\leq\sigma N(a)$ for
$a\in\mathbb{R}^{n}$. Consequently, $T$ is $\sigma$-Lipschitz, and we can use
the second stability result of Djellout-Guillin-Wu (Lemma 2.1 in [9]) : the
push forward of a measure satisfying $\mathbf{T}_{2}(C)$ by a $L$-Lipschitz
function satisfies $\mathbf{T}_{2}(L^{2}C)$. As is well-known, the standard
Gaussian measure $\gamma^{n}$ on $\mathbb{R}^{n}$ satisfies
$\mathbf{T}_{2}(2)$ and thus $T_{\\#}\gamma^{n}$ satisfies
$\mathbf{T}_{2}(2\sigma^{2})$. But it is readily checked that
$T_{\\#}\gamma^{n}=\mu_{n}$, which concludes this proof.
∎
###### Remark.
M. Ledoux indicated to us another way to obtain this result. First, one shows
that the Gaussian measure satisfies a $\mathbf{T}_{2}(2)$ inequality when
considering the cost function $c=d_{H}^{2}$, where $d_{H}$ denotes the
Cameron-Martin metric on $E$ inherited from the scalar product on the Cameron-
Martin space. This can be done in a number of ways, for example by
tensorization of the finite-dimensional $\mathbf{T}_{2}$ inequality for
Gaussian measures or by adapting the Hamilton-Jacobi arguments of Bobkov-
Gentil-Ledoux [3] in the infinite-dimensional setting. It then suffices to
observe that this transport inequality implies the one we are looking for
since we have the bound $d\leq\sigma d_{H}$ (here $d$ denotes the metric
inherited from the norm of the Banach space).
Let $L_{n}$ denote the empirical measure of an i.i.d. sample drawn from $\mu$. As a
consequence of Lemma A.1, we can give an inequality for the concentration of
$W_{2}(L_{n},\mu)$ around its mean, using results from the theory of transportation
inequalities. This is actually a simple case of more general results of N.
Gozlan and C. Léonard ([13], [14]); we reproduce a proof here for convenience.
###### Corollary A.2.
Let $\mu$ be as above. The following holds :
$\mathbb{P}(W_{2}(L_{n},\mu)\geq\mathbb{E}[W_{2}(L_{n},\mu)]+t)\leq
e^{-nt^{2}/(2\sigma^{2})}.$
###### Proof.
The proof relies on the property of dimension-free tensorization of the
$\mathbf{T}_{2}$ inequality, see [14]. Since $\mu$ satisfies
$\mathbf{T}_{2}(2\sigma^{2})$, the product measure $\mu^{\otimes n}$ on the
product space $E^{n}$ endowed with the $l_{2}$ metric
$d_{2}((x_{1},\ldots,x_{n}),(y_{1},\ldots,y_{n}))=\sqrt{|x_{1}-y_{1}|^{2}+\ldots+|x_{n}-y_{n}|^{2}}$
also satisfies a $\mathbf{T}_{2}(2\sigma^{2})$ inequality ([14], Corollary
4.4). Therefore, it also satisfies a $\mathbf{T}_{1}$ inequality by Jensen’s
inequality, and this implies that we have the concentration inequality
$\mu^{\otimes n}(f\geq\int fd\mu^{\otimes n}+t)\leq e^{-t^{2}/(2\sigma^{2})}$
for all $1$-Lipschitz functions $f:(E^{n},d_{2})\rightarrow\mathbb{R}$ ([14],
Theorem 1.7). For $x=(x_{1},\ldots,x_{n})\in E^{n}$, denote
$L_{n}^{x}=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}$. To conclude it suffices to notice
that $(x_{1},\ldots,x_{n})\mapsto W_{2}(L_{n}^{x},\mu)$ is
$1/\sqrt{n}$-Lipschitz from $(E^{n},d_{2})$ to $\mathbb{R}$; applying the concentration inequality to the $1$-Lipschitz function $\sqrt{n}\,W_{2}(L_{n}^{x},\mu)$ then yields the stated bound. ∎
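The following rough numerical sketch (our own illustration, taking $\mu=N(0,1)$ so that $\sigma^{2}=1$) approximates $W_{2}(L_{n},\mu)$ through the one-dimensional quantile formula and compares the observed upper tail of $W_{2}(L_{n},\mu)$ around its mean with the bound of Corollary A.2.

```python
# Sketch: concentration of W_2(L_n, mu) for mu = N(0, 1), using the 1-D formula
#     W_2^2(nu, mu) = \int_0^1 (F_nu^{-1}(t) - F_mu^{-1}(t))^2 dt,
# approximated on a fine grid in t.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, reps = 200, 2000
t_grid = np.arange(1, 2000) / 2000            # interior grid points in (0, 1)
q_mu = norm.ppf(t_grid)                       # quantiles of the target Gaussian

def w2_empirical(sample):
    q_emp = np.quantile(sample, t_grid)       # empirical quantile function
    return np.sqrt(np.mean((q_emp - q_mu) ** 2))

vals = np.array([w2_empirical(rng.normal(size=n)) for _ in range(reps)])
mean = vals.mean()
for t in (0.05, 0.1, 0.2):
    observed = (vals >= mean + t).mean()
    bound = np.exp(-n * t**2 / 2)             # Corollary A.2 with sigma^2 = 1
    print(f"t={t:.2f}  observed tail={observed:.4f}  bound={bound:.4f}")
```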
## References
* [1] M. Ajtai, J. Komlos, and G. Tusnády. On optimal matchings. Combinatorica, 4(4):259–264, 1984.
* [2] F. Barthe and C. Bordenave. Combinatorial optimization over two random point sets, March 2011.
* [3] S.G. Bobkov, I. Gentil, and M. Ledoux. Hypercontractivity of Hamilton-Jacobi equations. Journal de Mathématiques Pures et Appliquées, 80(7):669–696, 2001.
* [4] E. Boissard and T. Le Gouic. Exact deviations in 1-wasserstein distance for empirical and occupation measures, March 2011.
* [5] F. Bolley, A. Guillin, and C. Villani. Quantitative concentration inequalities for empirical measures on non-compact spaces. Probability Theory and Related Fields, 137:541–593, 2007.
* [6] P. Cattiaux, D. Chafai, and A. Guillin. Central limit theorems for additive functionals of ergodic Markov diffusions processes. Arxiv preprint arXiv:1104.2198, 2011.
* [7] E. Del Barrio, E. Giné, and C. Matrán. Central limit theorems for the Wasserstein distance between the empirical and the true distributions. Annals of Probability, 27(2):1009–1071, 1999.
* [8] S. Dereich, F. Fehringer, A. Matoussi, and M. Scheutzow. On the link between small ball probabilities and the quantization problem for Gaussian measures on Banach spaces. Journal of Theoretical Probability, 16(1):249–265, 2003.
* [9] H. Djellout, A. Guillin, and L. Wu. Transportation cost-information inequalities for random dynamical systems and diffusions. Annals of Probability, 32:2702–2732, 2004.
* [10] V. Dobric and J.E. Yukich. Exact asymptotics for transportation cost in high dimensions. J. Theoretical Prob, pages 97–118, 1995.
* [11] R.M. Dudley. The speed of mean Glivenko-Cantelli convergence. The Annals of Mathematical Statistics, 40(1):40–50, 1969.
* [12] F. Fehringer. Kodierung von Gaußmaßen. 2001.
* [13] N. Gozlan and C. Léonard. A large deviation approach to some transportation cost inequalities. Probability Theory and Related Fields, 139:235–283, 2007.
* [14] N. Gozlan and C. Léonard. Transport inequalities. A survey. Markov Processes and Related Fields, 16:635–736, 2010.
* [15] S. Graf and H. Luschgy. Foundations of quantization for probability distributions. Springer-Verlag New York, Inc. Secaucus, NJ, USA, 2000.
* [16] S. Graf, H. Luschgy, and G. Pagès. Functional quantization and small ball probabilities for Gaussian processes. Journal of Theoretical Probability, 16(4):1047–1062, 2003.
* [17] J. Horowitz and R.L. Karandikar. Mean rates of convergence of empirical measures in the Wasserstein metric. Journal of Computational and Applied Mathematics, 55(3):261–273, 1994.
* [18] J. Kuelbs and W.V. Li. Metric entropy and the small ball problem for Gaussian measures. Journal of Functional Analysis, 116(1):133–157, 1993.
* [19] M. Ledoux. Isoperimetry and Gaussian analysis. Lectures on probability theory and statistics, pages 165–294, 1996.
* [20] M. Ledoux and M. Talagrand. Probability in Banach spaces, volume 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 1991.
* [21] W.V. Li and W. Linde. Approximation, metric entropy and small ball estimates for Gaussian measures. The Annals of Probability, 27(3):1556–1578, 1999.
* [22] H. Luschgy and G. Pagès. Sharp asymptotics of the functional quantization problem for Gaussian processes. The Annals of Probability, 32(2):1574–1599, 2004.
* [23] K. Marton. Bounding $\bar{d}$-distance by informational divergence: a method to prove measure concentration. The Annals of Probability, 24(2):857–866, 1996.
* [24] M. Talagrand. Matching random samples in many dimensions. The Annals of Applied Probability, 2(4):846–856, 1992.
* [25] A.W. Van der Vaart and J.A. Wellner. Weak convergence and empirical processes. Springer Verlag, 1996.
* [26] V.S. Varadarajan. On the convergence of sample probability distributions. Sankhyā: The Indian Journal of Statistics, 19(1):23–26, 1958\.
* [27] C. Villani. Optimal transport. Old and new. Grundlehren der Mathematischen Wissenschaften 338. Berlin: Springer. xxii,, 2009.
|
arxiv-papers
| 2011-05-26T12:41:46 |
2024-09-04T02:49:19.104887
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Emmanuel Boissard, Thibaut Le Gouic",
"submitter": "Emmanuel Boissard",
"url": "https://arxiv.org/abs/1105.5263"
}
|
1105.5321
|
# A Comparison between a Minijet Model and a Glasma Flux Tube Model for
Central Au-Au Collisions at $\sqrt{s_{NN}}$=200 GeV.
R.S. Longacre Brookhaven National Laboratory, Upton, New York 11973
###### Abstract
In this paper we compare two models for central Au-Au collisions at
$\sqrt{s_{NN}}$=200 GeV. The first model is a minijet model which assumes that
around $\sim$50 minijets are produced in back-to-back pairs and have an
altered fragmentation function. It is also assumed that the fragments are
transparent, escape the collision zone and are detected. The second model
is a glasma flux tube model which leads to flux tubes on the surface of a
radial expanding fireball driven by interacting flux tubes near the center of
the fireball through plasma instabilities. This internal fireball becomes an
opaque hydro fluid which pushes the surface flux tubes outward. Around
$\sim$12 surface flux tubes remain and fragment with $\sim$1/2 of the produced
particles escaping the collision zone and being detected. Both models can
reproduce two particle angular correlations in the different $p_{t1}$ $p_{t2}$
bins. We also compare the two models for three additional effects: meson
baryon ratios; the long range nearside correlation called the ridge; and the
so-called mach cone effect when applied to three particle angular
correlations.
###### pacs:
25.75.Nq, 11.30.Er, 25.75.Gz, 12.38.Mh
## I Introduction and review of models
In this paper we discuss two models. The first model is a minijet
modelTrainor1 . The second is a glasma flux tube model (GFTM)Dumitru .
The paper is organized in the following manner:
Sec. 1 is the introduction and review of models. Sec. 2 discusses two particle
angular correlations in the two models. Sec. 3 discusses baryon and anti-baryon
formation in both models. Sec. 4 demonstrates how the ridge is formed by flux
tubes when a jet trigger is added to the GFTM. Sec. 5 treats the so-called
mach cone effect by analyzing three particle angular correlations in the two
models. Sec. 6 presents the summary and discussion.
### I.1 Minijet Model
The analysis of angular correlations led to unanticipated structure in the
final state of p-p and Au-Au collisions, subsequently identified with parton
fragmentation in the form of minijets[3-7]. Two-component analysis of p-p and
Au-Au spectra revealed a corresponding hard component, a minimum-bias fragment
distribution associated with minijets, suggesting that jet phenomena extend
down to 0.1 GeV/c [Trainor2, star4]. From a given p-p or Au-Au collision,
particles are produced with three kinematic variables ($\eta$, $\phi$, $p_{t}$).
The $p_{t}$ spectra provide information about parton fragmentation, but the
fragmentation process is more accessible on a logarithmic momentum variable.
The dominant hadron (pion) is ultrarelativistic at 1 GeV/c, and a relativistic
kinematic variable is a reasonable alternative. Transverse rapidity in a
longitudinally comoving frame near midrapidity $\eta$=0 is defined by
$y_{t}=\ln([m_{t}+p_{t}]/m_{0}),$ (1)
with $m_{0}$ a hadron mass and $m_{t}^{2}=p_{t}^{2}+m_{0}^{2}$. If
one integrates over $\phi$ the event multiplicity ($dn/d\eta$ $\equiv$ $\rho$,
the 1D density on $\eta$) can be written for p-p collision in a two component
model as
$\rho(y_{t};\eta=0)=S_{0}(y_{t})+H_{0}(y_{t}).$ (2)
$S_{0}$ is a Levy distribution on $m_{t}$ which represents soft processes and
$H_{0}$ is a Gaussian on $y_{t}$ which represents hard processes. The soft
process has no $\phi$ dependence but does have a Gaussian correlation in the
longitudinal direction ($\eta$). This correlation can be expressed in
two-particle form using $\Delta\eta=\eta_{1}-\eta_{2}$, the difference of the
pseudorapidities.
$\rho_{s}(\Delta\eta)=A_{0}exp[-\Delta\eta^{2}/2\sigma_{0}^{2}].$ (3)
The hard process arises from the scattering of two partons, which forms minijets.
Each minijet fragments along its parton axis and generates a 2D
correlation $\Delta\phi=\phi_{1}-\phi_{2}$ and $\Delta\eta=\eta_{1}-\eta_{2}$.
$\rho_{h}(\Delta\phi,\Delta\eta)=A_{h}exp[-\Delta\phi^{2}/2\sigma_{\phi}^{2}]exp[-\Delta\eta^{2}/2\sigma_{\eta}^{2}].$
(4)
For every minijet whose fragmentation leads to a peak at $\Delta\phi$ =
$0^{\circ}$, its scattered partner appears in the backward direction at
$\Delta\phi$ = $180^{\circ}$. The backward scattered minijet ranges over
many pseudorapidity values, so its correlation with the fragmentation
of the near-side minijet has a broad $\Delta\eta$ width. In this
situation simple momentum conservation is an adequate description, and it contributes a
$-\cos(\Delta\phi)$ term.
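The pieces introduced in Eqs. (1)-(4), together with the $-\cos(\Delta\phi)$ momentum-conservation term, can be collected into a simple model correlation surface. The sketch below does this with placeholder amplitudes and widths (illustrative values, not fitted parameters from the paper).

```python
# Sketch: transverse rapidity of Eq. (1) and the two-component correlation of
# Eqs. (3)-(4) plus an away-side -cos(dphi) momentum-conservation term.
import numpy as np

def y_t(pt, m0=0.1396):
    """Transverse rapidity of Eq. (1); m0 defaults to the pion mass in GeV."""
    mt = np.sqrt(pt**2 + m0**2)
    return np.log((mt + pt) / m0)

def model_correlation(dphi, deta,
                      A0=0.1, sigma0=1.0,            # soft component, Eq. (3)
                      Ah=0.5, sphi=0.7, seta=0.7,    # hard (minijet) peak, Eq. (4)
                      Aaway=0.2):                    # away-side momentum conservation
    soft = A0 * np.exp(-deta**2 / (2 * sigma0**2))
    hard = Ah * np.exp(-dphi**2 / (2 * sphi**2)) * np.exp(-deta**2 / (2 * seta**2))
    away = -Aaway * np.cos(dphi)
    return soft + hard + away

dphi = np.linspace(-np.pi / 2, 3 * np.pi / 2, 48)
deta = np.linspace(-2.0, 2.0, 48)
DPHI, DETA = np.meshgrid(dphi, deta)
C = model_correlation(DPHI, DETA)
print(y_t(0.5), C.shape)   # y_t at p_t = 0.5 GeV/c and the correlation grid shape
```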
The two-component model of hadron production in A-A collisions assumes that
the soft component is proportional to the participant pair number (linear
superposition of N-N collisions), and the hard component is proportional to
the number of N-N binary collisions (parton scattering)Nardi . Any deviations
from the model are specific to A-A collisions and may reveal properties of an
A-A medium. In terms of mean participant path length $\nu$ =
2$n_{bin}$/$n_{part}$ the $p_{t}$-integrated A-A hadron density on $\eta$ is
${2\over n_{part}}{dn\over d\eta_{AA}}=\rho_{s}+\nu\rho_{h}.$ (5)
By analogy with Eq. (2), the A-A density as a function of centrality parameter
$\nu$ becomes
${2\over n_{part}}\rho_{AA}(y_{t};\nu,\eta=0)=S_{NN}(y_{t})+\nu
r_{AA}(y_{t};\nu)H_{NN}(y_{t}).$ (6)
In the above equation $S_{NN}(y_{t})$ = $S_{0}(y_{t})$ and $H_{NN}(y_{t})$ =
$H_{0}(y_{t})$, which correspond to p-p (nucleon-nucleon) scattering. We can
also define the hard density for A-A collisions in terms of $\nu$ as
$H_{AA}(y_{t};\nu)$ = $r_{AA}(y_{t};\nu)H_{NN}(y_{t})$. The density
$\frac{2}{n_{part}}\frac{1}{2\pi y_{t}}\frac{d^{2}n}{dy_{t}\,d\eta}$ for pions as a
function of $y_{t}$ at $\sqrt{s_{NN}}$=200 GeV for p-p, Au-Au, and
$H_{NN}(y_{t})$ is shown in Figure 1.
Figure 1: Pion $y_{t}$ density for five Au-Au centralities (solid curves).
Density for p-p is also shown (solid dots). $H_{NN}$ is the hard Gaussian on
$y_{t}$ which represents hard processes (dash-dot curve).
We see from the above equation that $S_{NN}(y_{t})$ is universal and scales
with the number of participant pairs. This means we can extract $\nu H_{AA}(y_{t})$ from
the densities measured in p-p and Au-Au at $\sqrt{s_{NN}}$=200 GeV shown in Figure 1,
giving us Figure 2.
Figure 2: The hard component of pion $y_{t}$ spectra in the form $\nu
H_{AA}(y_{t})$ (thicker curves with changing line style) compared to two-
component reference $\nu H_{NN}(y_{t})$ (dotted curve). The dashed reference
curves are limiting cases for $\nu$ = 1, 6.
Finally the ratio $H_{AA}$/$H_{NN}$ ($r_{AA}$) is plotted in Figure 3. In
central Au-Au collisions (0-12%) at a $y_{t}$ value of 2 ($p_{t}$=0.5 GeV/c)
there are 5 times as many pions coming from minijet fragmentation as in
an N-N collision. This implies a large increase in correlated pion fragments
and should show up as an increase in two particle angular correlations. We
will see this in Sec. 2, where we show these angular correlations and discuss
the number of particles in the minijets.
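The extraction of the hard component described above amounts to the spectrum arithmetic of Eqs. (5)-(6): $\nu H_{AA}(y_{t})$ is obtained by subtracting the universal soft component from the per-participant-pair Au-Au spectrum, and $r_{AA}=H_{AA}/H_{NN}$. The sketch below illustrates this on toy stand-in spectra, not on the measured data.

```python
# Schematic extraction of the hard component following Eqs. (5)-(6).  The
# spectra below are toy stand-ins (not measured data); rho_AA is already the
# per-participant-pair density (2/n_part) dn/dy_t.
import numpy as np

yt = np.linspace(1.0, 4.5, 50)
S_NN = 10.0 * np.exp(-1.5 * yt)                 # stand-in soft (Levy-like) component
H_NN = 0.3 * np.exp(-((yt - 2.7) ** 2) / 0.5)   # stand-in hard Gaussian on y_t

nu = 5.5                                        # mean participant path length (toy value)
rho_AA = S_NN + nu * 1.3 * H_NN                 # toy Au-Au spectrum built with r_AA = 1.3

nuH_AA = rho_AA - S_NN                          # Eq. (6) rearranged: nu * H_AA(y_t)
r_AA = (nuH_AA / nu) / H_NN                     # hard-component ratio H_AA / H_NN
print(r_AA[:5])                                 # recovers the input value 1.3
```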
Figure 3: Hard-component ratio for pions and five Au-Au centralities (thicker
curves with changing line styles) relative to the N-N hard-component
reference. The connected dots are data from NSD p-p collisions.
The common measurement of parton energy loss is expressed as a nuclear
modification factor $R_{AA}$. The hard-component ratio $r_{AA}$ measured over
$y_{t}$ ($p_{t}$) provides similar information. Note that at $y_{t}$ = 5
($p_{t}$=10 GeV/c), $r_{AA}$ = 0.2, which is the same value calculated for
$R_{AA}$. This value is usually attributed to jet quenching, with partons
absorbed by the opaque medium. The minijet picture, however, suggests that no
partons are lost in A-A collisions. Their manifestation (in spectrum structure
and correlations) is simply redistributed within the fragment momentum
distribution, and the fragment number increases. A high-$p_{t}$ triggered jet
yield may be reduced by a factor of five within a particular $p_{t}$ range,
but additional fragments emerge elsewhere, still with jet-like correlation
structurestar1 ; star2 ; Kettler .
### I.2 Glasma Flux Tube Model
The glasma flux tube model (GFTM) [Dumitru] considers that
the wavefunctions of the incoming projectiles form sheets of color glass
condensate (CGC) [CGC] which, at high energies, collide, interact, and evolve into
high intensity color electric and magnetic fields. This collection of
primordial fields is the Glasma [Lappi, Gelis], and initially it is composed of
only rapidity independent longitudinal color electric and magnetic fields. An
essential feature of the Glasma is that the fields are localized in the
transverse space of the collision zone with a size of 1/$Q_{s}$. $Q_{s}$ is
the saturation momentum of partons in the nuclear wavefunction. These
longitudinal color electric and magnetic fields generate topological Chern-
Simons chargeSimons which becomes a source for particle production.
The transverse space is filled with flux tubes of large longitudinal extent
but small transverse size $\sim$$Q^{-1}_{s}$. Particle production from a flux
tube is a Poisson process, since the flux tube is a coherent state. The flux
tubes at the center of the transverse plane interact with each other through
plasma instabilitiesLappi ; Romatschke1 and create a locally thermalized
system, where partons emitted from these flux tubes locally equilibrate. A
hydro system with transverse flow builds causing a radially flowing blast
waveGavin . The flux tubes that are near the surface of the fireball get the
largest radial flow and are emitted from the surface.
$Q_{s}$ is around 1 GeV/c thus the transverse size of the flux tube is about
1/4fm. The flux tubes near the surface are initially at a radius $\sim$5fm.
The $\phi$ angle wedge of the flux tube is $\sim$1/20 radians or
$\sim$$3^{\circ}$. Thus the flux tube initially has a narrow range in $\phi$.
The width in $\Delta\eta$ correlation of particles results from the
independent longitudinal color electric and magnetic fields that created the
Glasma flux tubes. In this paper we relate particle production from the
surface flux tubes to a related model, the Parton Bubble Model (PBM) [PBM]. It was
shown in Ref. [PBMGFTM] that for central Au-Au collisions at $\sqrt{s_{NN}}=$ 200 GeV
the PBM is a good approximation to the GFTM surface flux tube formation.
The flux tubes on the surface turn out to be, on average, 12 in number. They
form an approximate ring about the center of the collision (see Figure 4). The
twelve-tube ring represents the average behavior of tube survival near the
surface of the expanding fireball of the blast wave. The final state surface
tubes that emit the final state particles at kinetic freezeout are given by
the PBM. One should note that the blast wave surface is moving at its maximum
velocity at freezeout (3c/4).
Figure 4: The tube geometry is an 8 fm radius ring perpendicular to and
centered on the beam axis. It is composed of twelve adjacent 2 fm radius
circular tubes elongated along the beam direction as part of the flux tube
geometry. We project on a plane section perpendicular to the beam axis.
The space momentum correlation of the blast wave provides us with a strong
angular correlation signal. PYTHIA fragmentation functionspythia were used
for the tube fragmentation that generate the final state particles emitted
from the tube. The initial transverse size of a flux tube $\sim$1/4fm has
expanded to the size of $\sim$2fm at kinetic freezeout. Many particles that
come from the surface of the fireball will have a $p_{t}$ greater than 0.8
GeV/c. The final state tube size and the Hanbury-Brown and Twiss (HBT)
observations [HBT] of pions with momenta greater than 0.8 GeV/c
are consistent, both being $\sim$2 fm. A single parton using PYTHIA forms a jet
with the parton having a fixed $\eta$ and $\phi$ (see Figure 5). For central
events each of the twelve tubes has 3-4 partons, each at the fixed
$\phi$ of its tube. The $p_{t}$ distribution of the partons is similar to
pQCD but has a suppression at high $p_{t}$ like the data. The 3-4 partons in
a tube, which shower using PYTHIA, all have different $\eta$ values but
the same $\phi$ (see Figure 6). The PBM described the high precision central
(0-10%) Au-Au collision data at $\sqrt{s_{NN}}=$ 200 GeV [centralproduction]
(the highest RHIC energy).
Figure 5: A jet parton shower.
Figure 6: Each tube contains 3-4 partons as shown.
## II The Correlation Function for Central Au-Au Data
We utilize a two particle correlation function in the two dimensional (2-D)
space of $\Delta\phi$ versus $\Delta\eta$, where $\Delta\phi=\phi_{1}-\phi_{2}$
($\phi$ is the azimuthal angle of a particle measured in a clockwise direction
about the beam) and $\Delta\eta=\eta_{1}-\eta_{2}$ is the difference of the
pseudorapidities of the pair of particles. The 2-D total
correlation function is defined as:
$C(\Delta\phi,\Delta\eta)=S(\Delta\phi,\Delta\eta)/M(\Delta\phi,\Delta\eta).$
(7)
Where S($\Delta\phi,\Delta\eta$) is the number of pairs at the corresponding
values of $\Delta\phi,\Delta\eta$ coming from the same event, after we have
summed over all the events. M($\Delta\phi,\Delta\eta$) is the number of pairs
at the corresponding values of $\Delta\phi,\Delta\eta$ coming from the mixed
events, after we have summed over all our created mixed events. A mixed event
pair has each of the two particles chosen from a different event. We make on
the order of ten times the number of mixed events as real events. We rescale
the number of pairs in the mixed events to be equal to the number of pairs in
the real events. This procedure implies a binning in order to deal with finite
statistics. The division by M($\Delta\phi,\Delta\eta$) for experimental data
essentially removes or drastically reduces acceptance and instrumental
effects. If the mixed pair distribution were the same as the real pair
distribution, C($\Delta\phi,\Delta\eta$) would have unit value for all of the
binned $\Delta\phi,\Delta\eta$. In the correlations used in this paper we
select particles independent of their charge. A correlation of this type is
called a Charge Independent (CI) correlation. This difference correlation
function has the defined property that it only depends on the differences of
the azimuthal angle ($\Delta\phi$) and the beam angle ($\Delta\eta$) for the
two particle pair. Thus the two dimensional difference correlation
distribution for each tube or minijet which is part of
C($\Delta\phi,\Delta\eta$) is similar for each of the objects and will image
on top of each other. We further divide the data (see Table I) into $p_{t}$
ranges (bins).
$p_{t}$ range | amount
---|---
$4.0GeV/c-1.1GeV/c$ | 149
$1.1GeV/c-0.8GeV/c$ | 171
$0.8GeV/c-0.65GeV/c$ | 152
$0.65GeV/c-0.5GeV/c$ | 230
$0.5GeV/c-0.4GeV/c$ | 208
$0.4GeV/c-0.3GeV/c$ | 260
$0.3GeV/c-0.2GeV/c$ | 291
Table 1: The $p_{t}$ bins and the number of charged particles per bin with
$|\eta|$ $<$ 1.0.
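A minimal sketch of the mixed-event construction behind Eq. (7) is given below. The event content is synthetic and the binning arbitrary, but the same-event/mixed-event bookkeeping and the rescaling of the mixed pairs follow the description above.

```python
# Sketch of Eq. (7): same-event pairs S, mixed-event pairs M (particles drawn
# from different events), rescale M to the same pair count, then C = S / M.
import numpy as np

rng = np.random.default_rng(2)
events = [np.column_stack([rng.uniform(-np.pi, np.pi, 60),    # phi of each particle
                           rng.uniform(-1.0, 1.0, 60)])       # eta of each particle
          for _ in range(200)]

phi_edges = np.linspace(-np.pi / 2, 3 * np.pi / 2, 25)
eta_edges = np.linspace(-2.0, 2.0, 25)

def pair_hist(ev_a, ev_b, exclude_diagonal=False):
    dphi = ev_a[:, None, 0] - ev_b[None, :, 0]
    deta = ev_a[:, None, 1] - ev_b[None, :, 1]
    if exclude_diagonal:                                       # drop i == j self-pairs
        mask = ~np.eye(len(ev_a), dtype=bool)
        dphi, deta = dphi[mask], deta[mask]
    dphi = (dphi + np.pi / 2) % (2 * np.pi) - np.pi / 2        # fold into [-pi/2, 3pi/2)
    h, _, _ = np.histogram2d(dphi.ravel(), deta.ravel(),
                             bins=[phi_edges, eta_edges])
    return h

S = sum(pair_hist(ev, ev, exclude_diagonal=True) for ev in events)   # same-event pairs
M = sum(pair_hist(events[i], events[(i + 1) % len(events)])          # mixed-event pairs
        for i in range(len(events)))
M *= S.sum() / M.sum()                                         # rescale mixed pairs
C = S / M                                                      # Eq. (7)
print(C.mean())                                                # ~1 for uncorrelated input
```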
Since we are choosing particle pairs, we choose for the first particle
$p_{t1}$ which could be in one bin and for the second particle $p_{t2}$ which
could be in another bin. Therefore binning implies a matrix of $p_{t1}$ vs
$p_{t2}$. We have 7 bins, thus there are 28 independent combinations. Each
of the combinations will have a different number of entries. In order to take
out this difference one uses multiplicity scaling [PBME, centralitydependence].
For the diagonal bins one scales by the event-average multiplicity of Table I. For the off-diagonal
combinations one uses the square root of the product of the corresponding diagonal
event averages. In Figure 7 we show the correlation function of equation 7 for
the highest diagonal bin, $p_{t}$ 4.0 to 1.1 GeV/c. Figure 8 is the lowest
diagonal bin, $p_{t}$ 0.3 to 0.2 GeV/c.
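The multiplicity scaling of the 28 $p_{t1}$-$p_{t2}$ combinations described above reduces to a small matrix of scale factors built from the per-event bin multiplicities of Table I; a short sketch follows (bin ordering and values taken from Table I, lowest $p_{t}$ bin first).

```python
# Sketch: multiplicity scale factors for the 28 p_t1 x p_t2 combinations.
# Diagonal (i, i) entries are the event-average multiplicity of bin i;
# off-diagonal (i, j) entries are sqrt(n_i * n_j).
import numpy as np

n_per_bin = np.array([291, 260, 208, 230, 152, 171, 149], dtype=float)  # bins 1..7, Table I
scale = np.sqrt(np.outer(n_per_bin, n_per_bin))   # scale[i, j] = sqrt(n_i * n_j)
print(scale[0, 0], scale[0, 6])                   # bin1-bin1 and bin1-bin7 factors
```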
Figure 7: $\Delta\phi$ vs.$\Delta\eta$ CI correlation for the 0-5% centrality
bin for Au-Au collisions at $\sqrt{s_{NN}}=$ 200 GeV requiring both particles
to be in bin 7 $p_{t}$ greater than 1.1 GeV/c and $p_{t}$ less than 4.0 GeV/c.
Figure 8: $\Delta\phi$ vs.$\Delta\eta$ CI correlation for the 0-5% centrality
bin for Au-Au collisions at $\sqrt{s_{NN}}=$ 200 GeV requiring both particles
to be in bin 1 $p_{t}$ greater than 0.2 GeV/cand $p_{t}$ less than 0.3 GeV/c.
Once we use multiplicity scaling we can compare all 28 combinations. In Figure
9 we show the 28 plots, all having the same scale. This makes it easy to see how
fast the correlation signals drop off as the momentum is lowered. These plots
show the properties of parton fragmentation. P1P7 has the same signal as P2P6,
P3P5, and P4P4. It should be noted that both the minijet model and GFTM give
the same two particle correlations.
Figure 9: $\Delta\phi$ vs.$\Delta\eta$ CI correlation for the 0-5% centrality
bin for Au-Au collisions at $\sqrt{s_{NN}}=$ 200 GeV for all 28 combination of
$p_{t1}$ vs $p_{t2}$ not rescaled (see text).
### II.1 The properties of the minijet model
For the above correlations there were on average 48 minijets per central Au-Au
collision. Each minijet on the average showered into 13 charged particles. The
soft uncorrelated particles accounted for 837 or 57% of the charged particles.
This means that 43% of the charged particles come from minijet fragmentation
(see Table II). All of the particles coming from minijet fragmentation add
toward the final observed correlation signal, with none being absorbed. The
fact that the spectrum has been softened and spread out in the beam direction
($\eta$) is a medium modification which has not yet been calculated using
QCD.
variable | amount | fluctuations
---|---|---
$minijets$ | 48 | 4
$particles$ | 13 | 4
$soft$ | 837 | 29
Table 2: Parameters of the minijet model for charged particles.
### II.2 The properties of the Glasma Flux Tube Model
For the above correlations there were on average 12 final state tubes on the
surface of the fireball per central Au-Au collision. Each tube on average
showered into 49 charged particles. The soft uncorrelated particles accounted
for 873 or 60% of the charged particles. Since the tubes sit on the
surface of the fireball and are pushed outward by radial flow, not all
particles emitted from the tube will escape. Approximately one half of the
particles that are on the outward surface leave the fireball and the other
half are absorbed by the fireball (see Figure 10). This means that 20% of the
charged particles come from tube emission, and 294 particles are added to the
soft particles increasing the number to 1167 (see Table III).
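The quoted GFTM numbers can be cross-checked with a few lines of bookkeeping arithmetic; all inputs below are taken from the text above.

```python
# Bookkeeping check of the GFTM particle budget: 12 surface tubes each shower
# into ~49 charged particles, roughly half of which escape; the absorbed half
# is counted with the soft background.
tubes, per_tube, soft = 12, 49, 873
produced = tubes * per_tube            # 588 particles produced by the tubes
escaping = produced // 2               # ~294 escape the fireball
absorbed = produced - escaping         # ~294 absorbed, added to the soft particles
total = soft + absorbed + escaping
print(escaping, soft + absorbed, round(100 * escaping / total))  # 294, 1167, ~20%
```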
Figure 10: Since the tubes are sitting on the surface of the fireball and being pushed outward by radial flow, not all particles emitted from the tube will escape. Approximately one half of the particles that are on the outward surface leave the fireball and the other half are absorbed by the fireball.
variable | amount | fluctuations
---|---|---
$tubes$ | 12 | 0
$particles$ | 24.5 | 5
$soft$ | 1167 | 34
Table 3: Parameters of the GFTM for charged particles.
The particles that are emitted outward are boosted in momentum, while the inward
particles are absorbed by the fireball. Out of the initial 49 particles per
tube the lower $p_{t}$ particles have larger losses. In Table IV we give a
detailed account of these percentage losses and give the average number of
charged particles coming from each tube for each $p_{t}$ bin.
$p_{t}$ | amount | %survive
---|---|---
$4.0GeV/c-1.1GeV/c$ | 4.2 | 100
$1.1GeV/c-0.8GeV/c$ | 3.8 | 76
$0.8GeV/c-0.65GeV/c$ | 3.2 | 65
$0.65GeV/c-0.5GeV/c$ | 4.2 | 54
$0.5GeV/c-0.4GeV/c$ | 3.2 | 43
$0.4GeV/c-0.3GeV/c$ | 3.3 | 35
$0.3GeV/c-0.2GeV/c$ | 2.6 | 25
Table 4: Parameters of the GFTM for $p_{t}$ of the charged particles.
In the surface GFTM we have thermalization and hydro flow for the soft
particles, while all the two particle angular correlations come from the tubes
on the surface. The charged particle spectrum of the GFTM is given by a
blastwave model, and the direct tube fragmentation is only 20% of this
spectrum. The initial anisotropic azimuthal distribution of flux tubes is
transported to the final state, leaving its pattern on the ring of final state
flux tubes on the surface. This final state anisotropic flow pattern can be
decomposed into a Fourier series ($v_{1}$, $v_{2}$, $v_{3}$, …). These
coefficients have been measured [Alver] and have been found to be important in
central Au-Au collisions. We will come back to these terms later on when we
consider three particle angular correlations.
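A minimal sketch of such a Fourier decomposition is shown below: given the azimuthal angles of the surface tubes in one event, the anisotropy coefficients are estimated from the magnitude of the event flow vector. The 12 tube azimuths used here are randomly generated placeholders, not GFTM output.

```python
# Sketch: anisotropy coefficients v_n estimated as |< exp(i n phi) >| over the
# azimuthal angles of the 12 surface tubes of one event.
import numpy as np

rng = np.random.default_rng(3)
phi_tubes = np.sort(rng.uniform(0, 2 * np.pi, 12))   # 12 surface tubes, placeholder azimuths

for n in (1, 2, 3):
    vn = np.abs(np.mean(np.exp(1j * n * phi_tubes)))
    print(f"v_{n} = {vn:.3f}")
```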
## III Formation of baryon and anti-baryon in both models
It was shown above that charged particle production differs between p-p and Au-Au
at $\sqrt{s_{NN}}=$ 200 GeV. High $p_{t}$ charged particles in Au-Au are
suppressed compared to p-p for $p_{t}$ $>$ 2.0 GeV/c ($y_{t}$ $>$ 3.33 for
pions). Figure 3 shows $r_{AA}$ for pions, which has a suppression starting at
$y_{t}$ = 3.0 ($p_{t}$ = 1.5 GeV/c). This shift between the suppression of charged
particles at $p_{t}$ $>$ 2.0 GeV/c and the suppression of pions at $p_{t}$ = 1.5
GeV/c is made up by an enhancement of baryons [Adler]. Proton $r_{AA}$ (see
Figure 11), the proton being the most numerous baryon, shows an enhancement starting at
$p_{t}$ $>$ 1.5 GeV/c. This enhancement continues to $p_{t}$ = 4.0 GeV/c ($y_{t}$ =
4.0; note that $y_{t}$ is calculated using a pion mass). Finally, at $p_{t}$=10.0
GeV/c ($y_{t}$ = 5.0), $r_{AA}$ = 0.2, which is the same value as for pions.
Figure 11: Hard-component ratio for protons and five Au-Au centralities
(thicker curves with changing line styles) relative to the N-N hard-component
reference. The features are only comparable to Figure 3 for 60-80% or $y_{t}$
$>$ 4.5. Note $y_{t}$ is calculated using a pion mass.
### III.1 Minijet Model
There is no a priori model for the proton hard component. The excess in the
proton hard component appears anomalous, but may be simply explained in terms
of parton energy lossTrainor2 .
### III.2 Glasma Flux Tube Model
We have shown previously that inside the tube there are three to four partons
with differing longitudinal momenta all at the same $\phi$. The $p_{t}$
distribution of the partons inside the tube is similar to pQCD but has a
suppression in the high $p_{t}$ region like the data. The partons which are
mainly gluons shower into more gluons which then fragment into quarks and
anti-quarks which overlap in space and time with other quarks and anti-quarks
from other partons. This leads to an enhanced probability for pairs of quarks
from two different fragmenting partons to form a di-quark, since the
recombining partons are localized together in a small volume. The same process
will happen for pairs of anti-quarks forming a di-anti-quark. This
recombination process becomes an important possibility in the GFTM compared to
regular jet fragmentation. Since the quarks which overlap have similar phase
space, the momentum of the di-quark is approximately twice the momentum of the
quarks, but has approximately the same velocity. When mesons are formed quarks
pick up anti-quarks with similar phase space from fragmenting gluons to form a
color singlet state. Thus the meson has approximately twice the momentum of
the quark and anti-quark of which it is made. When the di-quark picks up a
quark and forms a color singlet it will have approximately three times the
momentum of one of the three quarks it is made from. Thus we expect a $p_{t}$
spectrum scaling when we compare mesons to baryons. Figure 12 shows the ratio
of protons plus anti-protons to charged particles as a function of $p_{t}$ for
particles in our simulated central Au-Au collisions. In Figure 12 we also plot
the ratio from central Au-Au RHIC dataAdler . These experimental results agree
well (considering the errors) with the GFTM predictions for all charged
particles. The background particles which came from HIJINGhijing have the
same ratios observed in p-p collisions, while particles coming from our tube
have a much larger ratio.
Figure 12: Shows the ratio of protons plus anti-protons to charged particles
as a function of $p_{t}$ for particles in our simulated central Au-Au
collisions. We also plot the ratio from central Au-Au RHIC data. These
experimental results agree well (considering the errors) with the GFTM
predictions for all charged particles. The plotted ratio for the background
particles coming from HIJING is similar to p-p collisions.
## IV The Ridge is formed by the Flux Tubes when a jet trigger is added to
the GFTM
In heavy ion collisions at RHIC there has been observed a phenomenon called
the ridge which has many different explanations[28-35]. The ridge is a long
range charged particle correlation in $\Delta\eta$ (very flat), while the
$\Delta\phi$ correlation is approximately jet-like (a narrow Gaussian). There
also appears with the ridge a jet-like charged-particle-pair correlation which
is symmetric in $\Delta\eta$ and $\Delta\phi$ such that the peak of the jet-
like correlation is at $\Delta\eta$ = 0 and $\Delta\phi$ = 0. The $\Delta\phi$
correlation of the jet and the ridge are approximately the same and smoothly
blend into each other. The ridge correlation is generated when one triggers on
an intermediate $p_{t}$ range charged particle and then forms pairs between
that trigger particle and each of all other intermediate charged particles
with a smaller $p_{t}$ down to some lower limit.
### IV.1 STAR experiment measurement of the ridge
Triggered angular correlation data showing the ridge were presented at Quark
Matter 2006 [Putschke]. Figure 13 shows the experimental $\Delta\phi$ vs.
$\Delta\eta$ CI correlation for the 0-10% central Au-Au collisions at
$\sqrt{s_{NN}}=$ 200 GeV requiring one trigger particle $p_{t}$ between 3.0
and 4.0 GeV/c and an associated particle $p_{t}$ above 2.0 GeV/c. The yield is
corrected for the finite $\Delta\eta$ pair acceptance.
Figure 13: Raw $\Delta\phi$ vs. $\Delta\eta$ CI preliminary correlation
dataPutschke for the 0-10% centrality bin for Au-Au collisions at
$\sqrt{s_{NN}}=$ 200 GeV requiring one trigger particle $p_{t}$ between 3.0 to
4.0 GeV/c and an associated particle $p_{t}$ above 2.0 GeV/c.
In this paper we will investigate whether the GFTM can account for the ridge
once we add a jet trigger to the GFTM generator [PBMtoGFTM]. However, this trigger
will also select jets which previously could be neglected because there was
such strong quenching[37-39] of jets in central collisions when a jet trigger
had not been used. We use HIJINGhijing to determine the number of expected
jets. We have already shown that our final state particles come from hadrons
at or near the fireball surface. We reduce the number of jets by 80% which
corresponds to the estimate that only the parton interactions on or near the
surface are not quenched away, and thus at kinetic freezeout fragment into
hadrons which enter the detector. This 80% reduction is consistent with single
$\pi^{0}$ suppression observed in Ref. [quench3]. We find for the reduced HIJING
jets that 4% of the central (0-5% centrality) Au-Au events at $\sqrt{s_{NN}}=$
200 GeV have a charged particle with a $p_{t}$ between 3.0 and 4.0 GeV/c with at
least one other charged particle with its $p_{t}$ greater than 2.0 GeV/c
coming from the same jet. With the addition of the quenched jets to the
generator, we then form two-charged-particle correlations between one-charged-
particle with a $p_{t}$ between 3.0 to 4.0 GeV/c and another charged-particle
whose $p_{t}$ is greater than 2.0 GeV/c. The results of these correlations are
shown in Figure 14.
Figure 14: The GFTM generated CI correlation for the 0-5% centrality bin
requiring one trigger particle $p_{t}$ above 3.0 GeV/c and less than 4.0 GeV/c
and another particle $p_{t}$ above 2.0 GeV/c plotted as a two dimensional
$\Delta\phi$ vs. $\Delta\eta$ perspective plot. The trigger requirements on
this figure are the same as those on the experimental data in Figure 13.
We can compare the two figures if we realize that the away-side ridge has
around 420,000 pairs in Figure 13, while in Figure 14 the away-side ridge has a
correlation of around 0.995. If we multiply the correlation scale of Figure 14
by 422,111 in order to match the number of pairs seen in Figure 13, the
away-side ridge would be at 420,000 and the peak would be at 465,000. This
gives good agreement between the two figures. We know in our Monte
Carlo which particles come from tubes, and these would be the particles forming the ridge.
The correlation formed by the ridge particles is generated almost entirely by
particles emitted by the same tube. Thus we can predict the shape and the
yield of the ridge for the above $p_{t}$ trigger selection and lower cut, by
plotting only the correlation coming from pairs of particles that are emitted
by the same tube (see Figure 15).
In Ref. [Putschke] it was assumed that the ridge yield was flat across the
acceptance, while in Figure 14 we see that this is not the case. Therefore our
ridge yield is 35% larger than estimated in Ref. [Putschke]. Finally we can plot
the jet yield that we had put into our Monte Carlo. The jet yield is plotted
in Figure 16 where we subtracted the tubes and the background particles from
Figure 14.
Figure 15: The ridge signal is the piece of the CI correlation for the 0-5%
centrality of Figure 14 after removing all other particle pairs except the
pairs that come from the same tube. It is plotted as a two dimensional
$\Delta\phi$ vs. $\Delta\eta$ perspective plot.
Figure 16: The jet signal is left in the CI correlation after the
contributions from the background and all the bubble particles are removed
from the 0-5% centrality (with trigger requirements) of Figure 14. It is
plotted as a two dimensional $\Delta\phi$ vs. $\Delta\eta$ perspective plot.
### IV.2 PHOBOS experiment measurement of the ridge
The PHOBOS detector selects triggered charged tracks with $p_{t}$ $>$ 2.5 GeV/c
within an acceptance of 0 $<$ $\eta^{trig}$ $<$ 1.5. Associated charged
particles that escape the beam pipe are detected in a range $|\eta|$ $<$ 3.0.
The CI correlation is shown in Figure 17 [PHOBOS]. The near-side structure is
more closely examined by integrating over $|\Delta\phi|$ $<$ 1 and is plotted
in Figure 18 as a function of $\Delta\eta$. PYTHIA simulation for p-p events
is also shown. Bands around the data points represent the uncertainty from
flow subtraction. The error on the ZYAM procedure is shown as a gray band at
zero. All systematic uncertainties are at the 90% confidence level.
Figure 17: PHOBOS triggered CI correlation from 0-30% central Au-Au
collisions. It is plotted as a two dimensional $\Delta\phi$ vs. $\Delta\eta$
perspective plot.
Figure 18: PHOBOS near-side trigger yield integrated over $|\Delta\phi|$ $<$
1.0 for 0-30% central Au-Au compared to PYTHIA p-p (dashed line) as a function
of $\Delta\eta$. Bands around the data points represent the uncertainty from
flow subtraction. The error on the ZYAM procedure is shown as a gray band at
zero. All systematic uncertainties are at the 90% confidence level.
Using the GFTM described above, we generate the two-charged-particle correlations between
one charged particle with $p_{t}$ greater than 2.5 GeV/c and
$|\eta|$ $<$ 0.75 and another charged particle whose $p_{t}$ is greater
than 7 MeV/c in a range $|\eta|$ $<$ 3.0. The results of this correlation are
shown in Figure 19. We see that the triggered correlation is very similar to
the PHOBOS results. In order to make a comparison we integrate the near-side
structure over $|\Delta\phi|$ $<$ 1 and plot it as a smooth curve in Figure 20
as a function of $\Delta\eta$. We also plot the PHOBOS points on the
same plot. The long range correlation over $\Delta\eta$ produced by the GFTM
is possible and does not violate causality, since the glasma flux tubes are
generated early in the collision. The radial flow which develops at a later
time pushes the surface tubes outward in the same $\phi$ direction because the
flow is purely radial. In order to achieve such an effect in minijet
fragmentation one would have to have fragmentation moving faster than the
speed of light.
Figure 19: GFTM triggered (see text) CI correlation from 0-5% central Au-Au
collisions. It is plotted as a two dimensional $\Delta\phi$ vs. $\Delta\eta$
perspective plot in the same format as Figure 17.
Figure 20: GFTM $\phi$ integrated ($|\Delta\phi|$ $<$ 1) near-side structure
compared to the PHOBOS near-side structure (see Figure 18).
## V The Mach Cone Effect and Three Particle Angular Correlations
It was reported by the PHENIX CollaborationPHENIXCONE that the shape of the
away-side $\Delta\phi$ distributions for non-peripheral collisions are not
consistent with purely stochastic broadening of the peripheral Au-Au away-
side. The broadening and possible change in shape of the away-side jet are
suggestive of a mach cone [renk]. The mach cone effect depends on the trigger
particles and the associated particles used in the two particle angular
correlations. The mach cone shape is not present if one triggers on 5-10 GeV/c
particles and also uses hard associated particles greater than 3 GeV/c [jiangyong].
The away-side broadening in the two-particle correlation depends on the $v_{2}$
subtraction and the method of smearing the away-side dijet component and
momentum conservation.
One needs to go beyond two particle correlations in order to learn more. A
three particle azimuthal angle correlation should reveal a much clearer pattern
for the mach cone than two particle methods. The STAR
Collaboration [STARCONE] has made such correlation studies and does find a
structure somewhat like a mach cone. However, the trigger dependence seen
in Ref. [jiangyong] and the measurement in Ref. [STARCONE] that the conical
emission angle is independent of the associated particle $p_{t}$ are not
consistent with a mach cone. The similarity of the mach cone and the ridge is
very interesting and suggests that they are the same
effect. In the last section we showed that flux tubes can explain the ridge;
thus they should also explain the mach cone. One would not expect the minijet model
to give a mach cone like correlation.
We saw at the end of Sec. 2 that the final state anisotropic flow pattern can be
decomposed into a Fourier series ($v_{1}$, $v_{2}$, $v_{3}$, …). When one
triggers on a particle coming from a flux tube, the other flux tubes
contribute the components of the anisotropic flow pattern. In Figure 21 we
show two such tube arrangements. In order to make contact with the data and
show the difference between the minijet model and GFTM we will define a
trigger particle and a reference particle(s). We choose a $p_{t}$ of greater
than 1.1 GeV/c for the trigger and less than this value for the reference. The
two particle correlation for this trigger and reference for the minijet model
and the GFTM are equal to each other.
Figure 21: Top arrangement has a $v_{3}$ component while the bottom gives a
$v_{2}$ component. In most mach cone analyses $v_{2}$ is subtracted.
We define a three particle correlation using the azimuthal angle between a
trigger particle 1 and a reference particle 2 ($\Delta\phi_{12}$) and for the
third particle the azimuthal angle between a trigger particle 1 and a
reference particle 3 ($\Delta\phi_{13}$). The two particle correlations of the
two models are the same, and the three particle correlations are nearly the same.
In order to obtain the true three particle effect we must remove the two
particle correlation from the raw three particle correlation. This removal
gives the so-called three particle cumulant. Figures 22 and 23 show the
three particle cumulant for $\Delta\phi_{12}$ vs. $\Delta\phi_{13}$ for the
minijet model and the GFTM.
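A much-simplified sketch of this cumulant construction is given below (it is not the STAR procedure): a trigger particle is combined with two distinct reference particles to build the raw $\Delta\phi_{12}$ vs. $\Delta\phi_{13}$ distribution, and the factorized trigger-associated term is subtracted. The full analysis removes further background terms (for example the reference-pair correlation), which are omitted here.

```python
# Much-simplified three-particle cumulant sketch on uncorrelated toy events:
# subtracting the factorized trigger-associated term leaves ~0.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)
edges = np.linspace(-np.pi / 2, 3 * np.pi / 2, 25)
n_events, n_assoc = 5000, 8
pair_idx = np.array(list(permutations(range(n_assoc), 2)))    # distinct (2, 3) pairs

def fold(dphi):
    return (dphi + np.pi / 2) % (2 * np.pi) - np.pi / 2

S2 = np.zeros(len(edges) - 1)
S3 = np.zeros((len(edges) - 1, len(edges) - 1))
for _ in range(n_events):
    trig = rng.uniform(0, 2 * np.pi)                   # trigger particle azimuth
    assoc = rng.uniform(0, 2 * np.pi, n_assoc)         # reference particle azimuths
    d = fold(trig - assoc)
    S2 += np.histogram(d, bins=edges)[0]
    S3 += np.histogram2d(d[pair_idx[:, 0]], d[pair_idx[:, 1]],
                         bins=[edges, edges])[0]

S2 /= n_events                                         # per-trigger pair yield
S3 /= n_events                                         # per-trigger triplet yield
factorized = np.outer(S2, S2) * (n_assoc - 1) / n_assoc  # combinatorics of distinct pairs
cumulant = S3 - factorized
print(S3.mean(), abs(cumulant).max())   # cumulant is ~0 within statistics here
```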
Figure 22: The minijet generated three particle cumulant for the 0-5%
centrality bin requiring one trigger particle $p_{t}$ above 1.1 GeV/c and
other reference particles $p_{t}$ below 1.1 GeV/c plotted as a two dimensional
combinations of trigger particle 1 paired with two reference particles 2 and 3
creating $\Delta\phi_{12}$ vs. $\Delta\phi_{13}$ perspective plot.
Figure 23: The GFTM generated three particle cumulant for the 0-5% centrality
bin requiring one trigger particle $p_{t}$ above 1.1 GeV/c and other reference
particles $p_{t}$ below 1.1 GeV/c plotted as a two dimensional combinations of
trigger particle 1 paired with two reference particles 2 and 3 creating
$\Delta\phi_{12}$ vs. $\Delta\phi_{13}$ perspective plot.
The minijet model (Figure 22) shows only a diagonal away-side response coming from
the underlying dijet nature of the minijets. The GFTM (Figure 23) also shows the
diagonal away-side response of momentum conservation; however, the $v_{3}$
configurations of Figure 21 give an off-diagonal island at $\Delta\phi_{12}$
$\sim$ $130^{\circ}$, $\Delta\phi_{13}$ $\sim$ $230^{\circ}$ or vice versa.
Let us look at the reported STAR data [Ulery] in order to address this off-diagonal
effect. In Figure 24 we show the raw three particle results from STAR,
where (a) is the raw two particle correlation (points). They also show in (a)
the background formed from mixed events with flow modulation added-in (solid).
The background subtracted two particle correlation is shown as an inset in (a)
where the double hump mach cone effect is clear. In Figure 24(b) the STAR raw
three particle correlation is shown which required one trigger particle 3.0
$<$ $p_{t}$ $<$ 4.0 GeV/c and other reference particles 2 and 3 with 1.0 $<$
$p_{t}$ $<$ 2.0 GeV/c. An off-diagonal island does appear at $\Delta\phi_{12}$
$\sim$ 2.3, $\Delta\phi_{13}$ $\sim$ 4.0 radians or vice versa. These are the
same values as for the island which occurs in the GFTM, and this is the off-diagonal
excess that is claimed to be the mach cone. Like the ridge effect, the mach
cone seems to be the left overs of the initial state flux tube arrangements
related to the fluctuations in the third harmonic. Higher order harmonic
fluctuations become less likely to survive to the final state.
Figure 24: (a) STAR data raw two particle correlation points, background from
mixed events with flow modulation added-in (solid) along with the background
subtracted two particle correlation (inset). (b) Raw three particle
correlation from STAR requiring one trigger particle 3.0 $<$ $p_{t}$ $<$ 4.0
GeV/c and other reference particles 2 and 3 with 1.0 $<$ $p_{t}$ $<$ 2.0 GeV/c
creating a two dimensional perspective plot $\Delta\phi_{12}$ vs.
$\Delta\phi_{13}$.
## VI Summary and Discussion
In this article we have made a comparison between two very different models
for central Au-Au collisions. Both models are successful at describing the
spectrum of particles produced and the two particle angular correlations
observed in ultrarelativistic heavy ion collisions. The first model is a
minijet model which assumes that around $\sim$50 minijets are produced in
back-to-back pairs and have an altered fragmentation function from that of
vacuum fragmentation. It also assumes that the fragments are transparent and
escape the collision zone and are then detected. The second model is a glasma
flux tube model which leads to longitudinal color electric and magnetic fields
confined in flux tubes on the surface of a radial expanding fireball driven by
interacting flux tubes near the center of the fireball through plasma
instabilities. This internal fireball becomes an opaque hydro fluid which
pushes the surface flux tubes outward. Around $\sim$12 surface flux tubes
remain and fragment, with $\sim$1/2 of the produced particles escaping the
collision zone and being detected.
We expand our comparison to other phenomena of the central collisions. We
considered in Sec. 3 baryon and anti-baryon formation in both models. There
was no a priori reason for the excess in the minijet model, while in the
glasma flux tube model (GFTM) the recombination of quarks into di-quarks and of
anti-quarks into anti-di-quarks leads to a natural excess of baryon and anti-baryon
formation.
The formation of the ridge phenomena is discussed in Sec. 4. In order to
achieve a long range correlation effect in minijet fragmentation one would
have to have fragmentation moving faster than the speed of light. The GFTM
however can have a long range correlation over $\Delta\eta$ since the glasma
flux tubes are generated early in the collision. The radial flow which
develops at a later time pushes the surface tubes outward in the same $\phi$
direction because the flow is purely radial. Thus a long range $\Delta\eta$
correlation survives to the final state of the collision. Very good agreement was achieved
between the data and the GFTM.
Sec. 5 treats the so-called mach cone effect by analyzing three particle
angular correlations in the two models. The minijet model and the GFTM have
the same two particle angular correlations but when the three particle
azimuthal angular correlations are compared the two models differ. The minijet
model shows only a diagonal away-side response coming from the underlying dijet
nature of the minijets, while the GFTM, in addition to the diagonal away-side response,
also shows an off-diagonal island. Like the ridge effect, the mach cone seems
to be left over from the initial state flux tube arrangements related to the
fluctuations in the third harmonic($v_{3}$). This off-diagonal island excess
is seen in the data and is claimed to be the mach cone.
Relativistic Heavy Ion Collider (RHIC) collisions are conventionally described
in terms of two major themes: hydrodynamic evolution of a thermal bulk medium
and energy loss of energetic partons in that medium through gluon
bremsstrahlung. The minijet model is not consistent with this standard view. The
glasma flux tube model generates a fireball which becomes an opaque hydro
fluid that is consistent with conventional ideas. Even though both models can
obtain the same spectrum of particles and the same two particle angular
correlations, it is only the GFTM that can tie it all together.
## VII Acknowledgments
This research was supported by the U.S. Department of Energy under Contract
No. DE-AC02-98CH10886. The author thanks Sam Lindenbaum and William Love for
valuable discussions and Bill for assistance in the production of figures. It is
sad that both are now gone.
## References
* (1) T. Trainor, Phys. Rev. C 80 (2009) 044901.
* (2) A. Dumitru, F. Gelis, L. McLerran and R. Venugopalan, Nucl. Phys. A 810 (2008) 91.
* (3) J. Adams et al., Phys. Rev. C 73 (2006) 064907.
* (4) M. Daugherity, J. Phys.G G35 (2008) 104090.
* (5) Q.J. Liu et al., Phys. Lett. B632 (2006) 197.
* (6) J. Adams et al., J.Phys.G G34 (2007) 451.
* (7) J. Adams et al., J Phys.G G32 (2006) L37.
* (8) T. Trainor, Int. J. Mod. Phys. E 17 (2008) 1499.
* (9) J. Adams et al., Phys. Rev. D 74 (2006) 032006.
* (10) D. Kharzeev and M. Nardi, Phys. Lett. B507 (2001) 121.
* (11) T. Trainor and D. Kettler, Phys. Rev. C 83 (2011) 034903.
* (12) L. McLerran and R. Venugopalan, Phys. Rev. D 49 (1994) 2233; Phys. Rev. D 49 (1994) 3352; Phys. Rev. D 50 (1994) 2225.
* (13) T. Lappi and L. McLerran, Nucl. Phys. A 772 (2006) 200.
* (14) F. Gelis and R. Venugopalan, Acta Phys. Polon. B 37 (2006) 3253.
* (15) D. Kharzeev, A. Krasnitz and R. Venugopalan, Phys. Lett. B 545 (2002) 298.
* (16) P. Romatschke and R. Venugopalan, Phys. Rev. D 74 (2006) 045011.
* (17) S. Gavin, L. McLerran and G. Moschelli, Phys. Rev. C 79 (2009) 051902.
* (18) S.J. Lindenbaum, R.S. Longacre, Eur. Phys. J. C. 49 (2007)767.
* (19) S.J. Lindenbaum, R.S. Longacre, arXiv:0809.2286(Nucl-th).
* (20) T. Sjostrand, M. van Zijil, Phys. Rev. D 36 (1987) 2019.
* (21) J. Adams et al., Phys. Rev. C 71 (2005) 044906, S.S. Adler et al., Phys. Rev. Lett. 93 (2004) 152302.
* (22) J. Adams et al., Phys. Rev. C 75, 034901 (2007).
* (23) S.J. Lindenbaum and R.S. Longacre, Phys. Rev. C 78 (2008) 054904.
* (24) B.I. Abelev et al., arXiv:0806.0513[nucl-ex].
* (25) B. Alver and G. Roland, Phys. Rev. C 81 (2010) 054905.
* (26) S.S. Adler et al., Phys. Rev. Lett. 91 (2003) 172301.
* (27) X.N. Wang and M. Gyulassy, Phys. Rev. D 44 (1991) 3501.
* (28) N. Armesto, C. Salgado, U.A. Wiedemann, Phys. Rev. Lett. 93 (2004) 242301.
* (29) P. Romatschke, Phys. Rev. C 75 (2007) 014901.
* (30) E. Shuryak, Phys. Rev. C 76 (2007) 047901.
* (31) A. Dumitru, Y. Nara, B. Schenke, M. Strickland, J.Phys.G G35 (2008) 104083.
* (32) V.S. Pantuev, arXiv:0710.1882[hep-ph].
* (33) R. Mizukawa, T. Hirano, M. Isse, Y. Nara, A. Ohnishi, J.Phys.G G35 (2008) 104083.
* (34) C.Y. Wong, Phys. Rev. C 78 (2008) 064905.
* (35) R.C. Hwa, arXiv:0708.1508[nucl-th].
* (36) J. Putschke, Nucl. Phys. A 783 (2007) 507c.
* (37) C. Adler et al., Phys. Rev. Lett. 88 (2002) 022301.
* (38) J. Adams et al., Phys. Rev. Lett. 91 (2003) 172302.
* (39) A. Adare et al., Phys. Rev. Lett. 101 (2008) 232301.
* (40) B. Alver et al., Phys. Rev. Lett. 104 (2010) 062301.
* (41) S.S. Adler et al., Phys. Rev. Lett. 97 (2006) 052301.
* (42) T. Renk and J. Ruppert, Phys. Rev. C 73 (2006) 11901.
* (43) J. Jia PHENIX, AIP Conf.Proc. 828 (2006) 219.
* (44) B.I. Abelev et al., Phys Rev. Lett. 102 (2009) 052302.
* (45) J.G. Ulery STAR, Int. J. Mod. Phys. E16 (2007) 3123.
|
arxiv-papers
| 2011-05-26T15:14:26 |
2024-09-04T02:49:19.113929
|
{
"license": "Public Domain",
"authors": "Ron S. Longacre",
"submitter": "Ron S. Longacre",
"url": "https://arxiv.org/abs/1105.5321"
}
|
1105.5509
|
# A parallel Buchberger algorithm for multigraded ideals
Emil Sköldberg and Mikael Vejdemo-Johansson and Jason Dusek
###### Abstract.
We demonstrate a method to parallelize the computation of a Gröbner basis for
a homogenous ideal in a multigraded polynomial ring. Our method uses anti-
chains in the lattice $\mathbb{N}^{k}$ to separate mutually independent
S-polynomials for reduction.
## 1\. Introduction
In this paper we present a way of parallelizing the Buchberger algorithm for
computing Gröbner bases in the special case of multihomogeneous ideals in the
polynomial algebra over a field. We describe our algorithm as well as our
implementation of it. We also present experimental results on the efficiency
of our algorithm, using the ideal of commuting matrices as illustration.
### 1.1. Motivation
Most algorithms in commutative algebra and algebraic geometry at some stage
involve computing a Gröbner basis for an ideal or module. This ubiquity
together with the exponential complexity of the Buchberger algorithm for
computing Gröbner bases of homogeneous ideals explains the large interest in
improvements of the basic algorithm.
### 1.2. Prior Work
Several approaches have been tried in the literature. Some authors, such as
Chakrabarti–Yelick [1] and Vidal [2] have constructed general algorithms for
distributed memory and shared memory machines respectively. Reasonable
speedups were achieved on small numbers of processors. Another approach has
been using factorization of polynomials; all generated S-polynomials are
factorized on a master node, and the reductions of its factors are carried out
on the slave nodes; see the work by Siegl [3], Bradford [4], and Gräbe and Lassner [5]. In
a paper by Leykin [6] a coarse grained parallelism was studied that was
implemented both in the commutative and non-commutative case.
Good surveys of the various approaches can be found in papers by Mityunin and
Pankratiev [7] and Amrhein, Bündgen and Küchlin [8]. Mityunin and Pankratiev
also give a theoretical analysis of and improvements to algorithms known at
that time.
Finally, an approach by Reeves [9] parallelizes on the compiler level for
modular coefficient fields.
### 1.3. Our approach
Our approach restricts the class of Gröbner bases treated to homogenous
multigraded Gröbner bases. While certainly not general enough to handle all
interesting cases, the multigraded case covers several interesting examples.
For these ideals we describe a coarsely grained parallelization of the
Buchberger algorithm with promising results.
###### Example 1.1.
Set $R=k[x_{1},\dots,x_{n^{2}},y_{1},\dots,y_{n^{2}}]$ where $k$ is a field.
Let $X$ and $Y$ be square $n\times n$-matrices with entries the variables
$x_{1},\dots,x_{n^{2}}$ and $y_{1},\dots,y_{n^{2}}$ respectively. Then the
entries of the matrix
$I_{n}=XY-YX$
form $n^{2}$ polynomials generating the ideal $I_{n}$ of $R$.
The computation of a Gröbner basis for $I_{1}$ and $I_{2}$ is trivial and may
be carried out on a blackboard. A Gröbner basis for $I_{3}$ is a matter of a
few minutes on most modern computer systems, and already the computation of a
Gröbner basis for $I_{4}$ is expensive using the standard reverse
lexicographic term order in $R$; in the Macaulay2 system [10] several hours are
needed to obtain a Gröbner basis with 563 elements. However, using clever
product orders, Hreinsdóttir has been able to find bases with 293 and 51
elements [11, 12]. As far as we are aware, a Gröbner basis for $I_{5}$ is
not known.
By assigning multidegrees $(1,0)$ to all the variables $x_{1},\dots,x_{n^{2}}$
and $(0,1)$ to all the variables $y_{1},\dots,y_{n^{2}}$, the ideal $I_{n}$
becomes multigraded over $\mathbb{N}\times\mathbb{N}$, and thus approachable
with our methods.
###### Example 1.2.
While this paper presents only the approach to multigraded ideals in a
polynomial ring, an extension to free multigraded modules over multigraded
rings is easily envisioned, and will be dealt with in later work.
Gröbner bases for such free modules would be instrumental in computing
invariants from applied algebraic topology, such as the _rank invariant_, as
well as more involved normal forms for higher dimensional persistence
modules [13].
## 2\. Partially ordered monoids
We shall recall some definitions and basic facts about partially ordered sets
that will be of fundamental use in the remainder of this paper.
A partially ordered set is a set equipped with a binary, reflexive, antisymmetric
and transitive _order_ relation $\leq$. Two objects $a,b$ such that either
$a\leq b$ or $b\leq a$ are called _comparable_ , and two objects for which
neither $a\leq b$ nor $b\leq a$ holds are called _incomparable_. A subset $A$ of a
partially ordered set in which all objects are mutually incomparable is called
an _antichain_. An element $p$ is _minimal_ if there is no distinct $q$ with
$q\leq p$, _maximal_ if there is no distinct $q$ with $p\leq q$, _smallest_
if all other $q$ fulfill $p\leq q$ and _largest_ if all other $q$ fulfill
$q\leq p$.
There is a partially ordered set structure on $\mathbb{N}^{d}$ in which
$(a_{1},\dots,a_{d})\leq(b_{1},\dots,b_{d})$ iff $a_{i}\leq b_{i}$ for all
$i$. This structure is compatible with the monoid structure on
$\mathbb{N}^{d}$ in the sense that if $\bar{a}\leq\bar{b}$, then
$\bar{c}*\bar{a}\leq\bar{c}*\bar{b}$ for
$\bar{a},\bar{b},\bar{c}\in\mathbb{N}^{d}$. If a monoid has a partial order
compatible with the multiplication in this manner, we call it a _partially
ordered monoid_.
In a partially ordered set $P$, we say that a subset $Q$ is an _ideal_ if it
is downward closed, or in other words if for any $p\in P,q\in Q$ such that
$p\leq q$, then $p\in Q$. It is called a _filter_ if it is upward closed, or
if for $p\in P,q\in Q$ such that $q\leq p$, then $p\in Q$.
An element $p$ is maximal in an ideal if any element $q$ such that $p\leq q$
is not a member of the ideal. Minimal elements of filters are defined
equivalently. An ideal (filter) is _generated_ by its maximal (minimal)
elements in the sense that the membership condition of the ideal (filter) is
equivalent to being larger than (smaller than) at least one generator.
Generators of an ideal or filter form an antichain. Indeed, if these were not
an antichain, two of them would be comparable, and then one of these two would
not be maximal (minimal). An ideal (filter) is _finitely generated_ if it has
finitely many generators, and it is _principal_ if it has exactly one
generator.
There is a partially ordered monoid structure on $\mathbb{N}^{d}$, given by
$(p_{1},\dots,p_{d})\leq(q_{1},\dots,q_{d})$ if $p_{i}\leq q_{i}$ for all $i$,
and by
$(p_{1},\dots,p_{d})*(q_{1},\dots,q_{d})=(p_{1}+q_{1},\dots,p_{d}+q_{d})$.
This structure will be the main partially ordered monoid in use in this paper.
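To make the conventions concrete, the following minimal Python sketch (illustrative only, not part of our released implementation) encodes the componentwise order on $\mathbb{N}^{d}$ and a check that a collection of multidegrees forms an antichain.

```python
# Componentwise partial order on N^d and an antichain test (illustrative sketch).

def leq(a, b):
    """Componentwise order: a <= b iff a_i <= b_i for all i."""
    return all(x <= y for x, y in zip(a, b))

def comparable(a, b):
    return leq(a, b) or leq(b, a)

def is_antichain(degrees):
    """True iff all distinct multidegrees are mutually incomparable."""
    degs = list(degrees)
    return all(not comparable(degs[i], degs[j])
               for i in range(len(degs)) for j in range(i + 1, len(degs)))

# (2, 1) and (1, 2) are incomparable and hence form an antichain,
# while (1, 1) <= (2, 1) makes the second set fail the test.
assert is_antichain([(2, 1), (1, 2)])
assert not is_antichain([(1, 1), (2, 1)])
```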
## 3\. Multigraded rings and the grading lattice
A polynomial ring $R=\mathbb{k}[x_{1},\dots,x_{r}]$ over a field $\mathbb{k}$
is said to be _multigraded_ over $P$ if each variable $x_{j}$ carries a
_degree_ $|x_{j}|\in P$ for some partially ordered monoid $P$. We expect of
the partial order on $P$ that if $p,q\in P$ then $p\leq p*q$ and $q\leq p*q$.
The degree extends from variables to entire monomials by requiring
$|mn|=|m|*|n|$ for monomials $m,n$; and from thence a multigrading of the
entire ring $R$ follows by decomposing $R=\bigoplus_{p\in P}R_{p}$ where
$R_{p}$ is the set of all _homogeneous_ polynomials in $R$ of degree $p$, i.e.
polynomials with all monomials of degree $p$. A homogeneous polynomial of
degree $(n_{1},\dots,n_{d})$ is said to be of _total degree_
$n_{1}+\dots+n_{d}$. We note that for the $\mathbb{N}^{d}$-grading on $R$, the
only monomial with degree $(0,0,\dots,0)$ is $1$, and thus the smallest degree
is assigned both to the identity of the grading monoid and to the identity of
the ring.
We write $\operatorname{lm}p$, $\operatorname{lt}p$, $\operatorname{lc}p$ for
the leading monomial, leading term and leading coefficient of $p$.
###### Proposition 3.1.
Suppose $p$ and $q$ are homogeneous. If $|p|\not\leq|q|$ then
$\operatorname{lm}p$ does not divide $\operatorname{lm}q$.
###### Proof.
If $\operatorname{lm}p\mid\operatorname{lm}q$ then
$\operatorname{lm}q=c\operatorname{lm}p$ for some monomial $c$, and thus
$|\operatorname{lm}q|=|c|*|\operatorname{lm}p|$. Since $|1|\leq|c|$,
our requirement on the partial order of a partially ordered monoid gives
$|\operatorname{lm}p|\leq|\operatorname{lm}q|$, that is $|p|\leq|q|$, contradicting the hypothesis. ∎
###### Proposition 3.2.
Reduction in the Buchberger algorithm at a given multidegree, for a homogeneous
generating set, depends only on the principal ideal of that multidegree in the
partial order of degrees.
###### Proof.
We recall that a reduction step of a polynomial $p$ with respect to polynomials
$q_{1},\dots,q_{k}$ is given by computing
$p^{\prime}=p-\frac{\operatorname{lt}p}{\operatorname{lt}q_{j}}q_{j}$
for a polynomial $q_{j}$ such that
$\operatorname{lm}q_{j}\mid\operatorname{lm}p$. We note that by Proposition 3.1,
this implies $|\operatorname{lm}q_{j}|\leq|\operatorname{lm}p|$ and thus
$|q_{j}|\leq|p|$. ∎
We note that Proposition 3.2 implies that if two S-polynomials have
incomparable multidegrees, then their reductions against a common generating
set are completely independent of each other. Furthermore, since
$|p^{\prime}|=|p|$ in the notation of the proof of Proposition 3.2, the
reduction of an S-polynomial of incomparable degree can never have an effect on the
future reductions of any given S-polynomial.
Hence, once S-polynomials have been generated, their actual reductions may be
computed independently across antichains in the partial order of multidegrees,
and each S-polynomial only has to be reduced against the part of the Gröbner
basis that resides below it in degree.
## 4\. Algorithms
The arguments from Section 3 lead us to an approach to parallelization in
which we partition the S-polynomials generated by their degrees, pick out a
minimal antichain, and generate one computational task for each degree in the
antichain.
One good source of antichains, which is guaranteed to produce an
antichain but will often produce more tasks than are actually
populated by S-polynomials, is to take the minimal total degree among the
unreduced S-polynomials and emit as tasks the antichain of degrees with that
same total degree.
Another, very slightly more computationally intense, method is to take all
minimal occupied degrees. These, too, form an antichain by minimality, and are
guaranteed to yield only as many tasks as actually contain S-polynomials.
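The following Python sketch (a hypothetical helper, not our released code) illustrates the second strategy: collecting the minimal occupied degrees from the unreduced S-polynomials grouped by multidegree.

```python
# Pick the minimal occupied multidegrees; by minimality these form an antichain.

def minimal_occupied_degrees(spairs_by_degree):
    """spairs_by_degree: dict mapping multidegree tuples to lists of S-polynomials.
    Returns the occupied degrees with no strictly smaller occupied degree."""
    occupied = list(spairs_by_degree)
    def leq(a, b):
        return all(x <= y for x, y in zip(a, b))
    return [d for d in occupied
            if not any(leq(e, d) and e != d for e in occupied)]

# With S-polynomials in degrees (1,2), (2,1) and (2,2), only the first two are
# minimal; they are the tasks handed out in this round.
print(minimal_occupied_degrees({(1, 2): ["s1"], (2, 1): ["s2"], (2, 2): ["s3"]}))
# -> [(1, 2), (2, 1)]
```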
Either of these suggestions leads to a master-slave distributed algorithm,
described in pseudocode below: the master node algorithm is given in Algorithm 1
and the simpler slave node algorithm in Algorithm 2.
loop
if have waiting degrees and waiting slaves then
nextdeg $\leftarrow$ pop(waiting degrees)
nextslave $\leftarrow$ pop(waiting slaves)
send nextdeg to nextslave
else if all slaves are working then
wait for message from slave newslave
push(newslave, waiting slaves)
else if no waiting degrees and some working slaves then
wait for message from slave newslave
push(newslave, waiting slaves)
else if no waiting degrees and no working slaves then
generate new antichain of degrees
if no such antichain available then
finish up
end if
else
continue
end if
end loop
Algorithm 1 Master algorithm for a distributed Gröbner basis computation
loop
receive message msg from master
if msg = finish then
return
else if msg = new degree $d$ then
reduce all S-polynomials in degree $d$ and append to the Gröbner basis
compute new S-polynomials based on new basis elements
send finished degree to master
end if
end loop
Algorithm 2 Slave algorithm for a distributed Gröbner basis computation
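As a concrete illustration of the message flow in Algorithms 1 and 2, the following mpi4py skeleton is a sketch only: the algebraic steps (reducing a degree, generating new S-polynomials) are stubbed out, and it is not the implementation we released.

```python
# Schematic master/slave message flow for Algorithms 1 and 2 (assumes >= 1 slave rank).
from mpi4py import MPI

TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3

def master(comm, degree_batches):
    size = comm.Get_size()
    idle = list(range(1, size))            # slave ranks currently waiting
    for antichain in degree_batches:       # one antichain of degrees per round
        pending, busy = list(antichain), 0
        while pending or busy:
            while pending and idle:        # hand out degrees to waiting slaves
                comm.send(pending.pop(), dest=idle.pop(), tag=TAG_WORK)
                busy += 1
            status = MPI.Status()          # wait for any slave to finish a degree
            comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
            idle.append(status.Get_source())
            busy -= 1
    for slave in range(1, size):           # no antichains left: shut the slaves down
        comm.send(None, dest=slave, tag=TAG_STOP)

def slave(comm):
    while True:
        status = MPI.Status()
        degree = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            return
        # reduce all S-polynomials in `degree`, append the results to the basis,
        # and generate new S-polynomials (algebra stubbed out in this sketch)
        comm.send(degree, dest=0, tag=TAG_DONE)

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        master(comm, degree_batches=[[(1, 2), (2, 1)]])   # toy antichain
    else:
        slave(comm)
```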
## 5\. Experiments
We have implemented the master-slave system described in Section 4 in Sage
[14], using MPI for Python [15, 16] as the distributed computing infrastructure
and SQLAlchemy [17], interfacing with a MySQL database [18], as an abstraction
of a common storage for serialized Python objects.
In order to test our implementation, we have used a computational server
running 8 Intel Xeon processors at 2.83 GHz, with a 5 MB cache and 16 GB of
total RAM.
We have run tests with the Gröbner basis problem $I_{3}$, and recorded total
wallclock timings as well as specific timings for the S-polynomial generation
and reduction steps. The problem was run for each possible number of allocated
cores (1 to 7 slave processors), and the server was otherwise idle for the
entire time.
Figure 1. Timings (seconds) for $I_{3}$ on an 8-core computational server.
Timing runs were made with between 2 and 8 active processors, and the total
wallclock times (top left), SQL interaction times (top right), S-polynomial
reduction times (bottom left), and S-polynomial generation times (bottom
right) were measured.
Figure 2. Logarithms of maximal, minimal and average timings (seconds) for
reduction (left) and generation (right) in the $I_{3}$ computations.
As can be seen in Figure 1, parallelization decreases the wall-clock timings
radically compared to single-core execution (2 processors, with the slave
processor doing all work essentially serially). However, the subsequent
decrease in computational times is less dramatic.
Looking into specific aspects of the computation, we can see that while the
average computational times decrease radically with the number of available
processors, the maximal computation time behaves much worse. With the
reduction step, maximal computation times still decrease, mostly, with the
number of available processors. The S-polynomial generation step, however,
displays almost constant maximal generation times throughout the computation.
Furthermore, compared to the time needed for the algebraic computations, the
relatively slow, database engine mediated storage and recovery times are
almost completely negligible.
These trends are even clearer when we concentrate on only the maximal,
minimal and average computation times, as in Figure 2. We see a proportional
decrease in average computation times, and a radical drop-off in minimal
computation times, which certainly sounds promising. The global behaviour,
however, is dictated by the maximal thread execution times, which behave
rather disappointingly throughout.
## 6\. Conclusions and Future Work
In conclusion, we have demonstrated that while the parallel computation of
Gröbner bases in general is a problem haunted by the ghost of data dependency,
the lattice structure in an appropriate choice of multigrading will allow for
easy control of dependencies. Specifically, picking out antichains in the
multigrading lattice gives demonstrable parallelizability that saturates
the kind of computing equipment easily accessible to researchers today.
Furthermore, we have made our methods publicly accessible
(http://code.google.com/p/dph-mg-grobner; the code used for Section 5 can be
found in the sage subdirectory) and released the code under the very liberal
BSD license. Hence, with the ease of access to our code and to the Sage
computing system, we try to set the barrier to building further on our work as
low as we possibly can.
However, the techniques we have developed here are somewhat sensitive to the
distribution of workload over the grading lattice: if certain degrees are
disproportionately densely populated, then the computational burden of an
entire Gröbner basis is dictated by the essentially _serial_ computation of
the highly populated degrees. As such, we suspect these methods to work at
their very best in combination with other parallelization techniques.
The Gröbner basis implementation used was a rather naïve one, and we fully
expect speed-ups from sophisticated algorithms to combine cleanly with the
constructions we use. This is something we expect to examine in future
continuation of this project.
There are many places to go from here. We are ourselves interested in
investigating many avenues for the further application of the basic ideas
presented here:
* •
Adaptation to state-of-the-art Gröbner basis techniques for single processors,
improving the handling of each separate degree and potentially subdividing the
work even more finely.
* •
Multigraded free modules, and Gröbner bases of these; opening up the use of
these methods in computational and applied topology, as a computational
backbone for multigraded persistence.
* •
Multigraded free resolutions; opening up for the application of these methods
in parallelizing computations in homological algebra.
* •
Adaptation to non-commutative cases; in particular to use for ideals in and
modules over quiver algebras.
* •
Building on work by Dotsenko and Khoroshkin, and by Dotsenko and
Vejdemo-Johansson, there is scope to apply this parallelization to the
computation of Gröbner bases for operads [19, 20].
## References
* [1] Chakrabarti, S., Yelick, K.: Implementing an irregular application on a distributed memory multiprocessor. In: PPOPP ’93: Proceedings of the fourth ACM SIGPLAN symposium on Principles and practice of parallel programming, New York, NY, USA, ACM (1993) 169–178
* [2] Vidal, J.P.: The computation of Gröbner bases on a shared memory multiprocessor. In: DISCO ’90: Proceedings of the International Symposium on Design and Implementation of Symbolic Computation Systems, London, UK, Springer-Verlag (1990) 81–90
* [3] Siegl, K.: A parallel factorization tree Gröbner basis algorithm. In: Proceedings of PASCO’94. (1994)
* [4] Bradford, R.: A parallelization of the Buchberger algorithm. In: ISSAC ’90: Proceedings of the international symposium on Symbolic and algebraic computation, New York, NY, USA, ACM (1990) 296
* [5] Gräbe, H.G., Lassner, W.: A parallel Gröbner factorizer. In: Proceedings of PASCO’94. (1994)
* [6] Leykin, A.: On parallel computation of Gröbner bases. In: Parallel Processing Workshops, International Conference on. Volume 0., Los Alamitos, CA, USA, IEEE Computer Society (2004) 160–164
* [7] Mityunin, V.A., Pankrat’ev, E.V.: Parallel algorithms for the construction of Gröbner bases. Sovrem. Mat. Prilozh. (30, Algebra) (2005) 46–64
* [8] Amrhein, B., Bündgen, R., Küchlin, W.: Parallel completion techniques. In: Symbolic rewriting techniques (Ascona, 1995). Volume 15 of Progr. Comput. Sci. Appl. Logic. Birkhäuser, Basel (1998) 1–34
* [9] Reeves, A.A.: A parallel implementation of Buchberger’s algorithm over $Z_{p}$ for $p\leq 31991$. J. Symbolic Comput. 26(2) (1998) 229–244
* [10] Grayson, D.R., Stillman, M.E.: Macaulay2, a software system for research in algebraic geometry. Available at http://www.math.uiuc.edu/Macaulay2/
* [11] Hreinsdóttir, F.: A case where choosing a product order makes the calculations of a Groebner basis much faster. J. Symbolic Comput. 18(4) (1994) 373–378
* [12] Hreinsdóttir, F.: An improved term ordering for the ideal of commuting matrices. J. Symbolic Comput. 41(9) (2006) 999–1003
* [13] Carlsson, G., Singh, G., Zomorodian, A.: Computing multidimensional persistence. In Dong, Y., Du, D.Z., Ibarra, O.H., eds.: ISAAC. Volume 5878 of Lecture Notes in Computer Science., Springer (2009) 730–739
* [14] Stein, W.: Sage: Open Source Mathematical Software (Version 4.3.5). The Sage Group. (2010) http://www.sagemath.org.
* [15] Dalcín, L., Paz, R., Storti, M.: MPI for Python. J. Parallel Distrib. Comput. 65(9) (2005) 1108 – 1115
* [16] Dalcín, L., Paz, R., Storti, M., D’Elía, J.: MPI for Python: Performance improvements and MPI-2 extensions. J. Parallel Distrib. Comput. 68(5) (2008) 655–662
* [17] Bayer, M.: SQLAlchemy. http://www.sqlalchemy.org/
* [18] Oracle: MySQL. http://mysql.com/
* [19] Dotsenko, V., Khoroshkin, A.: Gröbner bases for operads. To appear, Duke Math. Journal (2008)
* [20] Dotsenko, V., Vejdemo-Johansson, M.: Implementing Gröbner bases for operads. To appear, Séminaires et Congrès (2009)
|
arxiv-papers
| 2011-05-27T09:46:53 |
2024-09-04T02:49:19.126502
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mikael Vejdemo-Johansson and Emil Sk\\\"oldberg and Jason Dusek",
"submitter": "Mikael Vejdemo-Johansson",
"url": "https://arxiv.org/abs/1105.5509"
}
|
1105.5534
|
# Evolution of the dust/gas environment around Herbig Ae/Be stars
Tie Liu11affiliation: Department of Astronomy, Peking University, 100871,
Beijing China; liutiepku@gmail.com , Huawei Zhang11affiliation: Department of
Astronomy, Peking University, 100871, Beijing China; liutiepku@gmail.com ,
Yuefang Wu11affiliation: Department of Astronomy, Peking University, 100871,
Beijing China; liutiepku@gmail.com , Sheng-Li Qin22affiliation: I.
Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Köln,
Germany , Martin Miller22affiliation: I. Physikalisches Institut, Universität
zu Köln, Zülpicher Str. 77, 50937 Köln, Germany
###### Abstract
With the KOSMA 3-m telescope, 54 Herbig Ae/Be stars were surveyed in CO and
13CO emission lines. The properties of the stars and their circumstellar
environments are studied by fitting the SEDs. The mean line width of 13CO
(2-1) lines of this sample is 1.87 km s-1. The average column density of H2 is
found to be $4.9\times 10^{21}$ cm-2 for the stars younger than $10^{6}$ yr, and
drops to $2.5\times 10^{21}$ cm-2 for those older than $10^{6}$ yr. No
significant difference is found between the SEDs of Herbig Ae stars and Herbig
Be stars at the same age. The infrared excess decreases with age. The envelope
masses and the envelope accretion rates decrease with age after $10^{5}$ yr. The
The average disk mass of the sample is $3.3\times 10^{-2}~{}M_{\sun}$. The
disk accretion rate decreases more slowly than the envelope accretion rate. A
strong correlation between the CO line intensity and the envelope mass is
found.
Subject headings: massive core; pre-main sequence; ISM: molecular; ISM: kinematics and dynamics; ISM: jets and outflows; stars: formation
††slugcomment: Accepted to ApJ
## 1 Introduction
Although a significant number of astrophysical processes, such as outflows,
inflows, disks and rotation, have been observed towards high-mass star formation
regions, it is still unclear whether high-mass stars form in the same way as
their low-mass counterparts. Detailed studies of the circumstellar environment of
young high-mass stars are therefore especially important to understand their
formation and evolution. However, most high-mass stars are far away from us
and deeply embedded in clouds, which adds to the difficulties of detailed
studies. Intermediate-mass (3-8 M☉) pre-main-sequence Herbig Ae/Be stars are
visible at optical and infrared wavelengths and share similar circumstellar
properties with massive stars, and therefore provide clues for understanding
the circumstellar structure of high-mass young stars. Additionally, since
Herbig Ae (HAe) stars are the precursors of Vega-type systems, investigations
of their circumstellar environment are also important for planet formation
studies.
Herbig Ae/Be stars are still located in molecular clouds. A growing body of
observational evidence and detailed modeling of the spectral energy
distributions (SEDs) strongly suggest the presence of circumstellar disks
around Herbig Ae/Be stars (Hillenbrand et al., 1992; Chiang et al., 2001;
Dominik et al., 2003). To probe their surrounding gas properties, especially to
investigate the outer part of the disk and determine the mass and grain
properties, millimeter observations are needed (Alonso-Albi et al., 2009).
Recently, a handful of disks around Herbig Ae/Be stars have been spatially and
spectrally resolved by optical and infrared (Boccaletti, 2003; Eisner, 2004;
Monnier et al., 2008; Murakawa, 2010; Perrin et al., 2006; Okamoto et al.,
2009), as well as millimeter and sub-millimeter interferometric observations
(Alonso-Albi et al., 2009; Fuente et al., 2006; Hamidouche, 2010; Matthews et
al., 2007; Öberg et al., 2010; Schreyer et al., 2008; Wang et al., 2008).
However, the number of disks well studied at millimeter and sub-millimeter
wavelengths is still very small, and more samples are needed. In addition, we
now have more multi-wavelength datasets, including long-wavelength bands, than
were available for previous studies, enabling us to model their detailed
structure, including the star, disk and envelope.
In this paper, we report the results from a survey towards 54 Herbig Ae/Be
stars using the KOSMA 3-m telescope in CO and 13CO emission lines. Together
with SED modeling, we investigate the properties of the circumstellar
environment around these Herbig Ae/Be stars and explore their evolution. Our
studies also provide a sample of Herbig Ae/Be stars surrounded by rich
dust/gas, which are ideal for further higher spatial resolution observations
by interferometers at millimeter and sub-millimeter wavelengths. The next
section describes the sample selection, and Section 3 describes the observations.
The basic results are presented in Section 4, and more detailed discussions
are given in Section 5. A summary is presented in Section 6.
## 2 Sample
In order to collect a complete sample, we surveyed 54 Herbig Ae/Be stars from
Thé et al. (1994) with declination $\delta>-20\arcdeg$, which are accessible to
the KOSMA telescope. These sources cover a large age range, from $10^{4}$ yr to
$10^{7}$ yr, which is very useful for studying the evolution of Herbig Ae/Be
stars and their circumstellar environments. The basic parameters of these stars are listed in
Table 1. The distances and spectral types of these stars are mainly obtained
from Thé et al. (1994) and Manoj et al. (2006), and completed with the help of
the SIMBAD database, operated at CDS, Strasbourg, France. There are 27 Ae
stars, 24 Be stars and 3 Fe stars in this sample. The masses of 40 sources are
obtained from Manoj et al. (2006). The effective temperatures are assigned
from the spectral types and listed in the seventh column of Table 1.
## 3 Observations
The single-point survey observations in 12CO (2-1), 12CO (3-2), 13CO (2-1) and
13CO (3-2) were carried out between January and February 2010 using the KOSMA
3-m telescope at Gornergrat, Switzerland. The medium and variable resolution
acousto-optical spectrometers with bandwidths of 300 and 655-1100 MHz at 230
and 345 GHz were used as backends. The spectral resolutions were 0.22 km/s and
0.29 km/s for 230 and 345 GHz, respectively. The beam widths at 230 and 345 GHz
were 130$\arcsec$ and 82$\arcsec$, respectively. A dual-channel SIS receiver for
230/345 GHz was used as the frontend, with typical system temperatures of
120/150 K for 230/345 GHz, respectively. The forward efficiency Feff was 0.93
and the corresponding main beam efficiencies Beff were 0.68 and 0.72 at 230
and 345 GHz, respectively. Pointing was frequently checked on planets and was
better than 10$\arcsec$. The integration time for each source was about 8
seconds using the total power observation mode. For the data analysis, the
GILDAS software package including CLASS and GREG was employed (Guilloteau &
Lucas, 2000).
## 4 Results
### 4.1 Survey results
All of the sample sources were surveyed in the 12CO (2-1) and 12CO (3-2) lines,
which were successfully detected towards 41 of the 54 sources.
However, due to bad weather, only 28 of the 41 sources were surveyed in
13CO (2-1) and 13CO (3-2). Fig. 1 and Fig. 2 present the spectra of all
54 sources. The red and green lines represent 12CO (2-1) and 13CO (2-1),
respectively. For the sources not surveyed in 13CO emission, only 12CO (2-1)
spectra are presented.
For the 41 sources successfully detected in 12CO emission, we fitted these
observed lines with a Gaussian function and present the fitting parameters
including the $V_{LSR}$, line width (FWHM) and antenna temperature
$T_{A}^{\phantom{A}*}$ in Table 2. The systemic velocity of each source
is obtained by averaging the $V_{LSR}$ of the 13CO (2-1) and 13CO (3-2) lines if
available. For the sources not surveyed in 13CO emission, the systemic
velocities are obtained by averaging the $V_{LSR}$ of the 12CO (2-1) and 12CO
(3-2) lines. The derived systemic velocities are listed in [col. 2] of
Table 3. The spectral profiles of these sources show a rich variety. Together
with 13CO (2-1) lines, the profiles of 12CO (2-1) lines are identified and
listed in the last column of Table 2. There are 11 sources having multiple
components; 3 sources having flat tops; 5 sources showing red asymmetry
profile and 5 sources showing blue asymmetry profile. Additionally, 2 sources
are identified as blue profile, 1 source as red profile, and 1 source shows
self-absorption.
The intensity ratios of 12CO (2-1) and 13CO (2-1) are derived and listed in
the [col. 3] of Table 3. The 12CO (2-1) optical depth can be obtained from the
comparison of the observed 12CO (2-1) and 13CO (2-1) antenna temperature,
assuming [12CO]/[13CO]=89. The 12CO (2-1) lines are found to be always
optically thick with an average optical depth of 26.6. Since the 12CO is
optically thick, the excitation temperature $T_{ex}$ can be obtained according
to Garden et al. (1991):
$T_{r}=\frac{T_{A}^{*}}{\eta_{b}}=\frac{h\nu}{k}\left[\frac{1}{\exp(h\nu/kT_{ex})-1}-\frac{1}{\exp(h\nu/kT_{bg})-1}\right]\times~{}f_{\nu}$
(1)
where $T_{bg}=2.73~{}K$ is the temperature of the cosmic background radiation,
$f_{\nu}=f_{82}=\frac{\Theta_{s}^{2}}{\Theta_{s}^{2}+82^{2}}$,
$f_{\nu}=f_{130}=\frac{\Theta_{s}^{2}}{\Theta_{s}^{2}+130^{2}}$ are the beam-
filling factors of the 12CO (3-2) and the 12CO (2-1), respectively. The source
size $\Theta_{s}$ and the excitation temperature $T_{ex}$ can be calculated
simultaneously from Eq. (1) using the 12CO (2-1) and 12CO (3-2) data together. To
calculate the line ratios R3,2 of the 12CO (3-2) and the 12CO (2-1), beam
filling factor should be taken into account. The corrected line ratios R3,2
can be obtained from:
$R_{3,2}=f_{82,130}\cdot\frac{\int~{}T_{A}(12CO(3-2))dv}{\int~{}T_{A}(12CO(2-1))dv}$,
where $f_{82,130}=\frac{1+(82/\Theta_{s})^{2}}{1+(130/\Theta_{s})^{2}}$ and
$\Theta_{s}$ is the source size.
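As an illustration of this step, the following Python sketch solves Eq. (1) simultaneously for $\Theta_{s}$ and $T_{ex}$ from the two 12CO lines. It assumes textbook physical constants, the beam sizes and main beam efficiencies quoted in Section 3, and peak antenna temperatures as inputs; it is not the reduction script used to produce Table 3.

```python
# Sketch: solve Eq. (1) for the source size and excitation temperature.
import numpy as np
from scipy.optimize import fsolve

h, k = 6.626e-34, 1.381e-23        # Planck and Boltzmann constants [SI]
T_bg = 2.73                         # cosmic background temperature [K]

def J(nu, T):
    """(h*nu/k) / (exp(h*nu/(k*T)) - 1), the bracketed term of Eq. (1)."""
    return (h * nu / k) / np.expm1(h * nu / (k * T))

def model_Ta(theta_s, T_ex, nu, theta_beam, eta_b):
    """Predicted T_A* for a source of angular size theta_s [arcsec]."""
    f = theta_s**2 / (theta_s**2 + theta_beam**2)   # beam-filling factor
    return eta_b * (J(nu, T_ex) - J(nu, T_bg)) * f

def solve(Ta_21, Ta_32):
    """Solve the 12CO (2-1) + 12CO (3-2) system for (theta_s, T_ex)."""
    def eqs(p):
        theta_s, T_ex = p
        return [model_Ta(theta_s, T_ex, 230.538e9, 130.0, 0.68) - Ta_21,
                model_Ta(theta_s, T_ex, 345.796e9,  82.0, 0.72) - Ta_32]
    return fsolve(eqs, x0=[100.0, 20.0])   # initial guess: 100 arcsec, 20 K

print(solve(Ta_21=6.98, Ta_32=7.92))       # e.g. MacC H12 peak temperatures from Table 2
```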
Assuming the 13CO (2-1) is optically thin, the optical depth and column
density of the 13CO can be obtained straightforwardly under the local thermal
equilibrium (LTE) assumption. With the abundance ratio [H2]/[13CO]=$8.9\times
10^{5}$, the column density of H2 can also be calculated. For those sources
that have not been surveyed in 13CO, the column densities of H2 are
first obtained from the 12CO (2-1) lines under the assumption that the 12CO
(2-1) emission is optically thin and [H2]/[12CO]=$10^{4}$, and then multiplied
by an optical depth correction factor $C_{\tau}=\tau/(1-e^{-\tau})$. The
optical depths of the 12CO (2-1) for those sources not surveyed in the 13CO
emission are assigned as 26.6, the average value obtained above. All the
results are collected in Table 3.
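A short Python sketch of the ratio method described above is given below, together with the optical depth correction factor $C_{\tau}$. It assumes the standard relation between the 12CO/13CO line ratio and the optical depth for a common excitation temperature and beam filling, with [12CO]/[13CO]=89; it is not the script actually used to produce Table 3.

```python
# Sketch: 12CO (2-1) optical depth from the 12CO/13CO ratio, plus C_tau correction.
import numpy as np
from scipy.optimize import brentq

ABUNDANCE_RATIO = 89.0   # assumed [12CO]/[13CO]

def tau_12co(ratio_12_13):
    """Solve T(12CO)/T(13CO) = (1 - exp(-tau)) / (1 - exp(-tau/89)) for tau."""
    f = lambda tau: (1 - np.exp(-tau)) / (1 - np.exp(-tau / ABUNDANCE_RATIO)) - ratio_12_13
    return brentq(f, 1e-3, 1e3)   # the left side decreases monotonically from 89 to 1

def thick_correction(tau):
    """C_tau = tau / (1 - exp(-tau)); multiplies the optically thin column density."""
    return tau / (1 - np.exp(-tau))

ratio = 3.6                       # MacC H12 intensity ratio from Table 3
tau = tau_12co(ratio)
print(tau, thick_correction(tau))  # tau close to the tabulated value of ~29
```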
### 4.2 The SED
We have compiled the SEDs for 53 sources in the sample; the exception is MWC 614,
which does not have enough data for modeling. The optical (UBV) data are from Manoj
et al. (2006), JHK fluxes from the 2MASS All-Sky Catalog of Point Sources,
IRAC and MIPS data from Harvey et al. (2007); Gutermuth et al. (2009), IRAS
data from the IRAS catalogue of Point Sources, MSX data from MSX6C Infrared
Point Source Catalog, AKARI data from the AKARI/IRC mid-IR all-sky Survey and
AKARI/FIS All-Sky Survey Point Source Catalogues, SCUBA 450 $\micron$ and 850
$\micron$ submillimeter data from Di Francesco et al. (2008), 1.3 mm continuum
data from Henning et al. (1994). We also collected sub-mm and mm data for some
sources from Mannings et al. (1994), Acke et al. (2004), Di Francesco et al.
(1997), and Alonso-Albi et al. (2009). Although not all the sources have data
at all the wavelength bands, there are still enough data for each source to be
used for fitting the SED.
The SEDs are then modeled using the online 2D-radiative transfer tool
developed by Robitaille et al. (2006), which has been successfully tested by
Robitaille et al. (2007), on a sample of low-mass YSOs and by Grave & Kumar
(2009) on high-mass protostars. The geometry in the models consists of a
central star surrounded by a flared accretion disk, a rotationally flattened
infalling envelope and bipolar cavities. The SED fitting tool uses 200,000 YSO
model SEDs (20,000 sets of physical parameters and 10 viewing angles). The
distance and visual extinction are the only two input parameters in the
fitting process. During the fitting process, all the model SEDs in the grid
are compared with the observed data, and the model SEDs that fit a source well
can be picked out within a specified limit on $\chi^{2}$. For each source, a
minimum of three data points of good quality is required by the SED
model tool. In our sample, all 53 sources have at least ten such data
points, spread over the wavelength range 0.3 to 2600 $\micron$, which
can well constrain the models with small standard deviations (Robitaille et
al., 2007). However, the two input parameters, the distance and the visual
extinction, are far from fixed, which leads to degeneracy among the models. In
our case, we assumed a 30% uncertainty for each of the distances obtained from
the literature, and varied the visual extinction Av from 0 to 30. Although the
best fit model has the least $\chi^{2}$ value, it may not actually represent
the data very well (Grave & Kumar, 2009). In order to find a representative
set of values for each source, all the models that satisfy the criterion
$\chi^{2}-\chi_{best}^{2}<3\times n_{data}$, where $n_{data}$ is the number of
data points, are accepted and analyzed. Fig. 3 and Fig. 4 show the
observed SEDs and the model SEDs with $\chi^{2}-\chi_{best}^{2}<3\times n_{data}$.
Following Grave & Kumar (2009), a weighted mean and standard deviation are
derived for all the parameters of each source, with the weights being the
inverse of the $\chi^{2}$ of each model. All the parameters derived are listed
in Table 4. The various columns in Table 4 are as follows: [col. 1] name of
the source;[col. 2] visual extinction; [col. 3] age of the star; [col. 4] mass
of the star; [col. 5] radius of the star; [col. 6] total luminosity of the
star; [col. 7] temperature of the star; [col. 8] envelope mass; [col. 9]
envelope accretion rate; [col. 10] disk mass ; [col. 11] disk inclination
angle; [col. 12] outer radius of the disk; [col. 13] disk accretion rate.
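The following minimal Python sketch (with hypothetical arrays standing in for the model grid) illustrates the acceptance criterion and the $1/\chi^{2}$ weighting described above.

```python
# Sketch: accept models with chi2 - chi2_best < 3 * n_data and average a
# parameter over the accepted models with weights 1/chi2.
import numpy as np

def weighted_parameter(chi2, values, n_data):
    """chi2, values: arrays over the model grid for one source and one parameter."""
    chi2 = np.asarray(chi2, dtype=float)
    values = np.asarray(values, dtype=float)
    accepted = chi2 - chi2.min() < 3 * n_data
    w = 1.0 / chi2[accepted]
    mean = np.average(values[accepted], weights=w)
    std = np.sqrt(np.average((values[accepted] - mean) ** 2, weights=w))
    return mean, std

# toy example with three models and ten data points; the third model is rejected
print(weighted_parameter(chi2=[12.0, 15.0, 300.0], values=[2.0, 3.0, 8.0], n_data=10))
```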
## 5 Discussion
### 5.1 The CO emission
The mean line width of the 13CO(2-1) lines of this sample is 1.87 km s-1 with
a standard deviation of 0.66 km s-1. This line width is similar to that of
intermediate-mass star formation regions ($\sim$2 km s-1) (Sun et al., 2006),
larger than that of low-mass star formation regions (1.3 km s-1) (Myers, Linke
& Benson, 1983), but much smaller than that of high-mass star formation
regions associated with IRAS sources (3.09 km s-1) (Wang et al., 2009; Wu et
al., 2003).
The average column density of H2 is found to be $4.9\times 10^{21}$ cm-2
before $10^{6}$ yr, and drops to $2.5\times 10^{21}$ cm-2 after $10^{6}$ yr.
As shown in the left panel of Fig. 5, the optical depth of the 12CO (2-1) ranges
from 7.5 to 54, with an average value of 27.7 before $10^{6}$ yr, but drops
below 26.6 (the average value in this sample) after $10^{6}$ yr, with an
average value of 22.5. The excitation temperature of CO ranges from 4.3 K to
33 K, with an average of 13.8 K. As shown in the right panel of Fig. 5, the
excitation temperature shows a decreasing trend with age. The decrease of the
optical depth and the excitation temperature with age may reflect different
gas dispersal mechanisms: most of the gas should be dispersed by outflows and
stellar winds at the early evolutionary stages of the stars, while UV radiation
and photodissociation play an important role in further gas dispersal as the
stars evolve (Fuente et al., 2002). Therefore only less dense molecular gas
with low excitation temperature, residing in the outer envelopes of the more
evolved stars, is observed at millimeter and submillimeter wavelengths.
### 5.2 The properties of the stars
The ages of the sample stars spread from $10^{3}$ yr to $10^{7}$ yr. About 40% of
the sample stars are younger than $10^{6}$ yr, and the others are older than
$10^{6}$ yr. The average mass of the stars is $3.9\pm 2.2$ M☉. In the left panel
of Fig. 6, we compare the masses derived from the SED fitting to those obtained
from Manoj et al. (2006). The masses (Mmod) from the SED modeling are
consistent with the masses (Mlit) obtained from the literature, with a relationship of
Mmod=M${}_{lit}\pm$0.44, where the correlation coefficient is r=0.64. Although
the masses derived from the SED fitting seem slightly higher than those from
the literature, they are found to agree with each other to better than $\pm 0.3$
orders of magnitude. We also compare the effective temperatures of the stars
obtained from SED fitting to those corresponding to the spectral types. We
find the agreement of the effective temperatures obtained from the two
independent methods for 80% of the sample sources is better than $\pm 0.3$
orders of magnitude. The average effective temperature of the sample stars is
(1.1$\pm$0.6)$\times 10^{4}$ K. As shown in Fig.7, the mass-luminosity
function of the sample stars is found to be $Log(L_{*}/L_{\sun})=(3.08\pm
0.17)Log(M_{*}/M_{\sun})+(0.48\pm 0.10)$, with the correlation coefficient
r=0.93.
### 5.3 The evolution of the SED
Meeus et al. (2001) proposed that the SEDs of Herbig Ae/Be stars can be
divided into two groups based on their different mid-infrared excesses: Group I
sources show strong mid-infrared flux, while Group II sources show weaker
infrared excesses. Acke et al. (2005) found that most HAe stars belong to Group I
and most HBe stars belong to Group II. However, there may be an evolutionary
link between Group I and Group II sources. We display the SEDs of the HAe
stars and HBe stars in our sample in Fig. 3 and Fig. 4, respectively, ordered by
age (age increases from the top-left to the bottom-right panel). Firstly, we find
no significant difference between the SEDs of HAe stars and HBe stars at the
same age; secondly, it is clear that the infrared excess decreases with
age for both HAe stars and HBe stars. The dashed lines show the SEDs of the
stellar photosphere, including the effect of foreground extinction, from the
best-fitting model (Robitaille et al., 2006). As shown in the second column of
Table 4, the interstellar extinctions Av range from 0 to 12 mag in our sample
from the SED modeling. We find that at early ages ($<10^{6}$ yr) the optical and
near-infrared fluxes of most of the HAe and HBe stars cannot be fitted simply
by a stellar photosphere with interstellar extinction, while the older ones
($>10^{6}$ yr) can be well fitted, indicating that large amounts of circumstellar
dust/gas exist at early ages, which cause the large extinction and infrared
excess. The shortest wavelength at which the infrared excess is recognizable is
around 2 $\micron$. At early ages, the excess at sub-mm/mm regions is also
very significant and HAe stars seem to have stronger sub-mm/mm excess than HBe
stars at the same age.
### 5.4 The circumstellar envelopes and the disks
The left panel of Fig. 8 gives the number distribution of the envelope mass and
presents the envelope mass as a function of stellar age. There is no difference
in the distribution of the envelope masses between HAe stars and HBe stars.
The envelope masses of the HAe and HBe stars are larger than
$10^{-2}~{}M_{\sun}$ before $10^{6}$ yr, but smaller than $10^{-2}~{}M_{\sun}$
after $10^{6}$ yr. The envelope masses decrease with age after $10^{5}$ yr
following the relationship $Log(M_{env}/M_{\sun})=(-4.0\pm
0.5)Log(t/yr)+(22\pm 3.4)$, with the correlation coefficient r=0.74. The
number distribution of the envelope accretion rate and the envelope accretion
rate as a function of stellar age are presented in the right panel of Fig. 8. The
average envelope accretion rate for the Herbig Ae/Be stars younger than
$10^{6}$ yr is $2.5\times 10^{-5}$ M${}_{\sun}\cdot$yr-1. The envelope
accretion rate drops below $10^{-6}$ M${}_{\sun}\cdot$yr-1 after $10^{6}$ yr,
and 25 sources older than $10^{6}$ yr have zero envelope accretion rate, which
means those sources can be explained as disk-only sources without an infalling
envelope. We also find that the envelope accretion rate decreases with age after
$10^{5}$ yr following the
relationship $Log(\dot{M}_{env}/M_{\sun}~{}\cdot~{}yr^{-1})=(-2.4\pm
0.5)Log(t/yr)+(8.2\pm 3.0)$, with the correlation coefficient r=0.72.
From the left panel of Fig. 9, it can be seen that the disk masses lie in a
narrow range, from $10^{-3}~{}M_{\sun}$ to $10^{-1}~{}M_{\sun}$, if we neglect
the three points with extremely small disk masses. The average disk mass is
$3.3\times 10^{-2}~{}M_{\sun}$, and the maximum is $0.26~{}M_{\sun}$ for
AS 447. No obvious evolutionary pattern is found for the disk masses. The
right panel of Fig. 9 shows the disk accretion rate as a function of stellar
age. The average disk accretion rate is $5\times 10^{-8}$
M${}_{\sun}\cdot$yr-1. An average disk accretion rate of $3.6\times 10^{-7}$
M${}_{\sun}\cdot$yr-1 is found for those sources younger than $10^{6}$ yr,
which is much lower than the corresponding envelope accretion rate, suggesting
that the envelopes dominate the accretion at the early stages of star formation.
However, after $10^{6}$ yr, the envelope accretion halts ($\dot{M}_{env}\sim
0$), while the disk accretion rate remains $\sim 1.6\times 10^{-8}$
M${}_{\sun}\cdot$yr-1. Additionally, the disk accretion rate decreases with age
following the
relationship $Log(\dot{M}_{disk}/M_{\sun}~{}\cdot~{}yr^{-1})=(-0.6\pm
0.1)Log(t/yr)+(3.5\pm 0.7)$, with the correlation coefficient r=0.6;
nevertheless, the disk accretion rate decreases much more slowly than the
envelope accretion rate.
### 5.5 The relationship between CO gas and the envelope
Fig. 10 presents the line intensities of the 12CO (2-1) and 12CO (3-2) as a
function of stellar age. It can be seen that the CO line intensities of most of
the sources younger than $10^{6}$ yr are larger than 10 K km s-1, and drop below
10 K km s-1 after $10^{6}$ yr. The evolutionary behavior of the CO line intensity
is very similar to that of the envelope (see Fig. 8), suggesting a possible
co-evolution between the envelope and the CO gas. We plot the line intensity of
the 12CO (2-1) and the 12CO (3-2) as a function of envelope mass in Fig. 11.
Excluding those data with line intensities smaller than 1 K km s-1, which have
low signal-to-noise ratios or no detection of CO emission, the CO line intensity
is well correlated with the envelope mass. For 12CO (2-1), the relationship can
be fitted by a power law: $Log(I_{CO(2-1)}/K~{}km~{}s^{-1})=(0.076\pm
0.017)Log(M_{env}/M_{\sun})+(1.129\pm 0.052)$, where the correlation
coefficient is r=0.64. For 12CO (3-2), the relationship can also be fitted by a
power law: $Log(I_{CO(3-2)}/K~{}km~{}s^{-1})=(0.098\pm
0.019)Log(M_{env}/M_{\sun})+(1.037\pm 0.066)$, with a correlation coefficient
r=0.64. It seems that the CO gas is associated with the envelopes.
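For illustration, the log-log fits quoted above can be reproduced with a short Python sketch; synthetic numbers are used here in place of the survey data.

```python
# Sketch: linear fit in log-log space for the I_CO - M_env relation.
import numpy as np

def loglog_fit(M_env, I_CO):
    """Fit Log(I_CO) = a * Log(M_env) + b and return (a, b, r)."""
    x, y = np.log10(M_env), np.log10(I_CO)
    a, b = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return a, b, r

# synthetic points placed exactly on the fitted 12CO (2-1) relation
M_env = np.array([1e-3, 1e-2, 1e-1, 1.0])
I_CO = 10 ** (0.076 * np.log10(M_env) + 1.129)
print(loglog_fit(M_env, I_CO))   # recovers (0.076, 1.129, 1.0)
```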
## 6 Summary
We have surveyed 54 Herbig Ae/Be stars using the KOSMA 3-m telescope in the CO
and the 13CO emission lines. Together with SED modeling, we have investigated
the properties of the circumstellar environments around those Herbig Ae/Be
stars and explored their evolutionary characteristics. The main findings of
this paper are as follows:
(1) We have successfully detected CO emission towards 41 of the 54 sources, and
28 of the 41 sources were also surveyed in 13CO (2-1) and 13CO (3-2). The mean
line width of the 13CO (2-1) lines of this sample is 1.87 km s-1, which is
similar to that of intermediate-mass star formation regions ($\sim$2 km s-1).
The average column density of H2 is found to be $4.9\times 10^{21}$ cm-2 before
$10^{6}$ yr, and drops to $2.5\times 10^{21}$ cm-2 after $10^{6}$ yr.
(2) The SEDs of 53 sources are compiled and modeled using the online
2D radiative transfer tool. From the SED fitting, we find that 40% of the sample
stars are younger than $10^{6}$ yr, and the others are older than $10^{6}$ yr.
The average mass of the stars is $3.9\pm 2.2$ M☉. The mass-luminosity function
of the sample stars can be described as $Log(L_{*}/L_{\sun})=(3.08\pm
0.17)Log(M_{*}/M_{\sun})+(0.48\pm 0.10)$.
(3) We find no significant difference between the SEDs of HAe stars and HBe
stars at the same age, and the infrared excess decreases with age for both HAe
stars and HBe stars.
(4) The envelope masses decrease with age after $10^{5}$ yr. The average
envelope accretion rate for the Herbig Ae/Be stars younger than $10^{6}$ yr is
$2.5\times 10^{-5}$ M${}_{\sun}\cdot$yr-1 and drops below $10^{-6}$
M${}_{\sun}\cdot$yr-1 after $10^{6}$ yr. In our sample 25 sources older than
$10^{6}$ yr have zero envelope accretion rate. The average disk mass of the
sample is $3.3\times 10^{-2}~{}M_{\sun}$. The disk accretion rate decreases
with age, but more slowly than the envelope accretion rate.
(5) The CO line intensities of most of the sources younger than $10^{6}$ yr
are larger than 10 K km s-1, and drop below 10 K km s-1 after $10^{6}$ yr. The
CO line intensity is well correlated with the envelope mass, suggesting a
possible co-evolution between the envelope and the CO gas.
## Acknowledgment
This work was supported by the NSFC under grants No. 11073003, 10733030, and
10873019, and by the National key Basic Research Program (NKBRP) No.
2007CB815403.
## References
* Acke et al. (2004) Acke, B., van den Ancker, M. E., Dullemond, C. P., van Boekel, R., Waters, L. B. F. M., 2004, A&A, 422, 621
* Acke et al. (2005) Acke, B., van den Ancker, M. E., & Dullemond, C. P., 2005, A&A, 436, 209
* Alonso-Albi et al. (2008) Alonso-Albi, T., Fuente, A., Bachiller, R., Neri, R., Planesas, P., Testi, L., 2008, ApJ, 680, 1289
* Alonso-Albi et al. (2009) Alonso-Albi, T., Fuente, A., Bachiller, R., Neri, R., Planesas, P., Testi, L., Bern$\acute{e}$, O., Joblin, C., 2009, A&A, 497, 117
* Boccaletti (2003) Boccaletti, A., Augereau, J.-C., Marchis, F., Hahn, J., 2003, ApJ, 585, 494
* Chiang et al. (2001) Chiang, E. I., Joung, M. K., Creech-Eakman, M. J., Qi, C., Kessler, J. E., Blake, G. A., van Dishoeck, E. F., 2001, ApJ, 547,1077
* Di Francesco et al. (1997) Di Francesco, James., Evans, Neal J., II., Harvey, Paul M., Mundy, Lee G., Guilloteau, Stephane., Chandler, Claire J., 1997, ApJ, 482, 433
* Di Francesco et al. (2008) Di Francesco, James., Johnstone, Doug., Kirk, Helen., MacKenzie, Todd., Ledwosinska, Elizabeth., 2008, ApJS, 175, 277
* Dominik et al. (2003) Dominik, C., Dullemond, C. P., Waters, L. B. F. M., Natta, A., 2003, ASPC, 287, 313
* Eisner (2004) Eisner, J. A., Lane, B. F., Hillenbrand, L. A., Akeson, R. L., Sargent, A. I., 2004, ApJ, 613, 1049
* Fuente et al. (2002) Fuente, A., Martín-Pintado, J., Bachiller, R., Rodríguez-Franco, A., Palla, F., 2002, A&A, 387, 977
* Fuente et al. (2003) Fuente, A., Rodríguez-Franco, A., Testi, L., Natta, A., Bachiller, R., Neri, R., 2003, ApJ, 598, L39
* Fuente et al. (2006) Fuente, A., Alonso-Albi, T., Bachiller, R., Natta, A., Testi, L., Neri, R., Planesas, P., 2006, ApJ, 649, L119
* Garden et al. (1991) Garden, R. P., Hayashi, M., Hasegawa, T., Gatley, I., Kaifu, N., 1991, ApJ, 374, 540
* Grave & Kumar (2009) Grave J. M. C. & Kumar M. S. N., 2009, A&A, 498, 147
* Guilloteau & Lucas (2000) Guilloteau, S. & Lucas, R., 2000, in Astronomical Society of the Pacific Conference Series, Vol. 217, Imaging at Radio through Submillimeter Wavelengths, ed. J. G. Mangum & S. J. E. Radford, 299
* Gutermuth et al. (2009) Gutermuth, R. A., Megeath, S. T., Myers, P. C., Allen, L. E., Pipher, J. L., Fazio, G. G., 2009, ApJS, 184, 18
* Hamidouche (2010) Hamidouche, M., 2010, ApJ, 722, 204
* Harvey et al. (2007) Harvey, Paul; Merín, Bruno; Huard, Tracy L.; Rebull, Luisa M.; Chapman, Nicholas; Evans, Neal J., II; Myers, Philip C., 2007, ApJ, 663, 1149
* Henning et al. (1994) Henning, Th., Launhardt, R., Steinacker, J., Thamm, E., 1994, A&A, 291, 546
* Hillenbrand et al. (1992) Hillenbrand, Lynne A., Strom, Stephen E., Vrba, Frederick J., Keene, Jocelyn., 1992, ApJ, 397, 613
* Manoj et al. (2006) Manoj, P., Bhatt, H. C., Maheswar, G., Muneer, S., 2006, ApJ, 653, 657
* Mannings et al. (1994) Mannings, V., 1994, MNRAS, 271, 587
* Matthews et al. (2007) Matthews, B. C., Graham, J. R., Perrin, M. D., Kalas, P., 2007, ApJ, 671, 483
* Meeus et al. (2001) Meeus, G., Waters, L. B. F. M., Bouwman, J., van den Ancker, M. E., Waelkens, C., Malfait, K., 2001, A&A, 365, 476
* Monnier et al. (2008) Monnier, J. D., Tannirkulam, A., Tuthill, P. G., Ireland, M., Cohen, R., Danchi, W. C., Baron, F., 2008, ApJ, 681, L97
* Murakawa (2010) Murakawa, K., 2010, A&A, 522, 46
* Myers, Linke & Benson (1983) Myers, P. C., Linke, R. A., & Benson, P. J., 1983, ApJ, 264, 517
* Öberg et al. (2010) Öberg, K. I., et al., 2010, ApJ, 720, 480
* Okamoto et al. (2009) Okamoto, Y. K., et al., 2009, ApJ, 706, 665
* Perrin et al. (2006) Perrin, M. D., Schneider, G., Duchene, G., Pinte, C., Grady, C. A., Wisniewski, J. P., Hines, D. C., 2009, ApJ, 707, L132
* Robitaille et al. (2006) Robitaille T. P., Whitney B. A., Indebetouw R., Wood K., & Denzmore P., 2006, ApJS, 167, 256
* Robitaille et al. (2007) Robitaille T. P., Whitney B. A., Indebetouw R., and Wood K., 2007, ApJS, 169, 328
* Schreyer et al. (2008) Schreyer, K., et al., 2008, A&A, 491, 821
* Sun et al. (2006) Sun, K., Kramer, C., Ossenkopf, V., Bensch, F., Stutzki, J., Miller, M., 2006, A&A, 451, 539
* Thé et al. (1994) Thé, P. S., de Winter, D., & Perez, M. R., 1994, A&AS, 104, 315
* Wang et al. (2009) Wang, K., Wu, Y. F., Ran, L., Yu, W. T., Miller, M., 2009, A&A, 507, 369
* Wang et al. (2008) Wang, S., Looney, L. W., Brandner, W., Close, L. M., 2008, ApJ, 673, 315
* Wu et al. (2003) Wu, Y., Wang, J., & Wu, J., 2003, Chin. Phys. Letter, 20, 1409
Table 1: Parameters of all the Herbig Ae/Be stars surveyed
Name | $\alpha$(J2000) | $\delta$(J2000) | Distance | Spectral Type | M∗ | log(Teff)
---|---|---|---|---|---|---
| (h m s) | ($\arcdeg\arcmin\arcsec$) | (pc) | | M☉ | log(K)
MacC H12 | 00 07 02.6 | 65 38 38.2 | 850 | A5 D | | 3.91
LkHA 198 / V633 Cas$^{b}$ | 00 11 26.0 | 58 49 29.1 | 600 | B9 D | 4.25 | 4.02
Vx Cas | 00 31 30.7 | 61 58 51.0 | 760 | A0 D | 3 | 3.98
RNO 6 | 02 16 30.1 | 55 22 57.0 | 1600 | B3 D | 5.99 | 4.27
IP Per | 03 40 47.0 | 32 31 53.7 | 350 | A6 D | 2 | 3.92
XY Per | 03 49 36.2 | 38 58 55.5 | 160 | A2IIv+ C | 2 | 3.91
V892 Tau$^{a,b}$ | 04 18 40.6 | 28 19 15.5 | 160 | B8 D | $>$5.11 | 4.08
AB Aur$^{a,b}$ | 04 55 45.8 | 30 33 04.3 | 144 | A0Vpe C | 2.77 | 3.97
MWC 480$^{a,b}$ | 04 58 46.3 | 29 50 37.0 | 131 | A3Ve D | 1.99 | 3.94
HD 35929 | 05 27 42.8 | -08 19 38.4 | 345 | F2III D | 3.41 | 3.84
HD 36112 | 05 30 27.5 | 25 19 57.1 | 205 | A5 IVe | 2.17 | 3.91
HD 245185 | 05 35 09.6 | 10 01 51.5 | 400 | A1 D | 2.07 | 3.97
T Ori$^{a}$ | 05 35 50.4 | -05 28 34.9 | 460 | A3V C | 3.34 | 3.98
CQ Tau$^{a,b}$ | 05 35 58.5 | 24 44 54.1 | 100 | F3 D | 1.5 | 3.83
V380 Ori | 05 36 25.4 | -06 42 57.7 | 510 | A1e D | $>$4.93 | 3.97
V586 Ori | 05 36 59.3 | -06 09 16.4 | 510 | A2V C | 3 | 3.98
BF Ori | 05 37 13.3 | -06 35 00.6 | 430 | A5II-IIIev C | 2.5 | 3.95
HD37411 | 05 38 14.5 | -05 25 13.3 | | B9Ve | | 4.02
Haro 13A | 05 38 18.2 | -07 02 25.9 | 460 | Be | |
V599 Ori | 05 38 58.6 | -07 16 45.6 | 360 | A5 D | | 3.91
RR Tau | 05 39 30.5 | 26 22 27.0 | 800 | A2II-IIIe C | 4.26 | 3.98
V350 Ori | 05 40 11.8 | -09 42 11.1 | 460 | A1 D | 2.22 | 3.97
MWC 789 | 06 01 60.0 | 16 30 56.7 | 700 | B9 D | 4.13 | 4.02
LkHA 208 | 06 07 49.5 | 18 39 26.5 | 1000 | A7 D | 3.24 | 3.89
LkHA 339 | 06 10 57.8 | -06 14 37 | 830 | A1 D | 3.18 | 3.97
LkHA 215$^{b}$ | 06 32 41.8 | 10 09 33.6 | 800 | B6 D | $>$5.43 | 4.15
R Mon$^{a,b}$ | 06 39 10 | 08 44 09.7 | 800 | B0 | $>$5.11 | 4.08
V590 Mon | 06 40 44.6 | 09 48 02.1 | 800 | B7 D | $<$3.35 | 4.11
GU CMa | 07 01 49.5 | -11 18 03.3 | | B2Vne C | | 4.34
HD 141569$^{a}$ | 15 49 57.7 | -03 55 16.3 | 99 | A0Ve D | 2.18 | 3.98
VV Ser$^{a,b}$ | 18 28 47.9 | 00 08 39.8 | 330 | B6 D | $>$5.43 | 4.15
MWC 300 | 18 29 25.7 | -06 04 37.1 | 650 | Bpe D | |
AS 310 | 18 33 27 | -04 58 06 | 2500 | B1 | $>$6 | 4.4
MWC 614 | 19 11 04.3 | 21 11 26 | | | |
Par 21 | 19 29 00.8 | 09 38 46.7 | 300 | A5e | | 3.91
V1295 Aql$^{a}$ | 20 03 02.5 | 05 44 16.7 | 290 | A2IVe D | 3.03 | 3.95
V1685 Cyg$^{a}$ | 20 20 28.2 | 41 21 51.6 | 980 | B3 D | $>$5.99 | 4.27
Par 22 | 20 24 29.5 | 42 14 03.7 | | A5 eV | | 3.91
PV Cep$^{b}$ | 20 45 53.9 | 67 57 38.9 | 500 | A5 D | | 3.91
AS 442$^{a}$ | 20 47 37.5 | 43 47 25.0 | 826 | B9 D | | 4.02
LkHA 134 | 20 48 04.8 | 43 47 25.8 | 700 | B2 D | $>$6 | 4.34
HD 200775$^{a}$ | 21 01 36.9 | 68 09 47.8 | 429 | B2 Ve C | $>$5.99 | 4.27
LkHA 324 | 21 03 54.2 | 50 15 10.2 | 780 | B8 D | $>$5.11 | 4.08
HD 203024 | 21 16 03 | 68 54 52.1 | | A D | |
V645 Cyg | 21 39 58.2 | 50 14 21.2 | 3500 | A0 D | $>$4.94 | 3.98
LkHA 234 | 21 43 02.3 | 66 06 29 | 1250 | B5 Vev | $>$5.29 | 4.11
AS 477 / BD 46 | 21 52 34.1 | 47 13 43.6 | 1200 | B9.5Ve C | $>$4.94 | 3.98
LkHA 257 | 21 54 18.8 | 47 12 09.7 | 900 | B5 D | | 4.19
BH Cep | 22 01 42.9 | 69 44 36.5 | 450 | F5IV C | 1.73 | 3.81
SV Cep | 22 21 33.2 | 73 40 27.1 | 440 | A0 D | 2.5 | 3.98
V375 Lac | 22 34 41 | 40 40 04.5 | 880 | A4 D | 3.19 | 3.93
IL Cep | 22 53 15.6 | 62 08 45 | 725 | B2IV-Vne C | | 4.34
MWC 1080$^{a,b}$ | 23 17 25.6 | 60 50 43.6 | 2200 | B0eq D | $>$6 | 4.48
LkHA 259 | 23 58 41.6 | 66 26 12.6 | 890 | A9 D | | 3.87
$^{a}$ Circumstellar disks detected by optical or infrared observations (Boccaletti, 2003; Eisner, 2004; Monnier et al., 2008; Murakawa, 2010; Perrin et al., 2006; Okamoto et al., 2009).
$^{b}$ Circumstellar disks detected by millimeter and sub-millimeter interferometric observations (Alonso-Albi et al., 2009; Fuente et al., 2006; Hamidouche, 2010; Matthews et al., 2007; Öberg et al., 2010; Schreyer et al., 2008; Wang et al., 2008).
Table 2: Observational parameters of the lines
Name | 13CO (2-1) | | 13CO (3-2) | | 12CO (2-1) | | 12CO (3-2) | Notes
---|---|---|---|---|---|---|---|---
| VLSR | FWHM | T${}_{A}^{*}$ | | VLSR | FWHM | T${}_{A}^{*}$ | | VLSR | FWHM | T${}_{A}^{*}$ | | VLSR | FWHM | T${}_{A}^{*}$ |
| (km s-1) | (km s-1) | (K) | | (km s-1) | (km s-1) | (K) | | (km s-1) | (km s-1) | (K) | | (km s-1) | (km s-1) | (K) |
MacC H12 | $-4.82\pm 0.02$ | $1.67\pm 0.04$ | 2.64 | | $-4.63\pm 0.07$ | $1.03\pm 0.16$ | 1.57 | | $-5.05\pm 0.02$ | $2.30\pm 0.03$ | 6.98 | | $-5.14\pm 0.04$ | $2.50\pm 0.09$ | 7.92 | flat top
LkHA 198 | $-0.25\pm 0.01$ | $2.25\pm 0.02$ | 3.00 | | $-0.06\pm 0.03$ | $1.98\pm 0.06$ | 2.59 | | $-0.14\pm 0.01$ | $3.07\pm 0.01$ | 6.84 | | $-0.06\pm 0.02$ | $2.67\pm 0.05$ | 6.06 | wings
RNO 6 | $-36.20\pm 0.02$ | $1.59\pm 0.03$ | 1.50 | | $-35.87\pm 0.04$ | $1.51\pm 0.10$ | 1.41 | | $-36.31\pm 0.02$ | $2.34\pm 0.04$ | 2.81 | | $-36.27\pm 0.09$ | $2.31\pm 0.21$ | 2.50 | flat top
XY Per | $-4.20\pm 0.07$ | $2.11\pm 0.18$ | 0.47 | | | | | | $-4.87\pm 0.04$ | $3.28\pm 0.09$ | 2.23 | | $-4.73\pm 0.10$ | $2.68\pm 0.30$ | 2.47 | red asy
V892 Tau | $7.40\pm 0.01$ | $1.33\pm 0.02$ | 2.14 | | $6.98\pm 0.05$ | $1.49\pm 0.15$ | 1.12 | | $7.09\pm 0.01$ | $2.92\pm 0.03$ | 4.14 | | $6.89\pm 0.02$ | $2.31\pm 0.05$ | 3.64 | blue asy
AB Aur | | | | | | | | | $6.10\pm 0.01$ | $1.04\pm 0.01$ | 4.70 | | $6.18\pm 0.01$ | $0.96\pm 0.02$ | 4.96 |
T Ori | $7.47\pm 0.01$ | $1.28\pm 0.03$ | 4.27 | | $7.48\pm 0.03$ | $1.19\pm 0.06$ | 8.58 | | $7.29\pm 0.21$ | $1.78\pm 0.21$ | 13.51 | | $7.20\pm 0.01$ | $1.58\pm 0.01$ | 24.73 | three comp
| $10.83\pm 0.12$ | $2.50\pm 0.18$ | 2.76 | | $11.15\pm 0.25$ | $1.99\pm 0.55$ | 3.07 | | $10.42\pm 0.21$ | $3.07\pm 0.21$ | 14.38 | | $10.34$ | $2.73\pm 0.03$ | 19.27 |
| $13.13\pm 0.16$ | $2.32\pm 0.25$ | 1.86 | | $13.34\pm 0.62$ | $2.02\pm 0.84$ | 1.35 | | $13.30\pm 0.21$ | $3.37\pm 0.21$ | 11.40 | | $13.17$ | $3.42\pm 0.04$ | 14.13 |
V380 Ori | $6.99\pm 0.05$ | $1.35\pm 0.11$ | 2.34 | | $6.93\pm 0.09$ | $0.82\pm 0.17$ | 1.85 | | | | | | | | |
| $8.95\pm 0.03$ | $2.14\pm 0.06$ | 6.44 | | $9.02\pm 0.05$ | $2.03\pm 0.12$ | 5.60 | | $8.85$ | $4.01\pm 0.02$ | 7.46 | | $8.95\pm 0.02$ | $3.40\pm 0.05$ | 6.50 | two comp
V586 Ori | | | | | | | | | $6.54\pm 0.03$ | $1.77\pm 0.07$ | 2.18 | | $6.54\pm 0.06$ | $1.93\pm 0.21$ | 1.70 | two comp
| | | | | | | | | $8.74\pm 0.01$ | $1.42\pm 0.02$ | 7.43 | | $8.75\pm 0.01$ | $1.31\pm 0.03$ | 9.09 |
BF Ori | $6.59$ | $1.64\pm 0.05$ | 2.86 | | $5.77\pm 0.07$ | $1.12\pm 0.16$ | 2.46 | | $6.54\pm 0.21$ | $2.82\pm 0.21$ | 6.75 | | $6.05\pm 0.04$ | $2.46\pm 0.06$ | 6.53 | three comp
| $9.17\pm 0.09$ | $1.82\pm 0.18$ | 1.49 | | | | | | $9.14\pm 0.21$ | $2.32\pm 0.21$ | 3.98 | | $8.77\pm 0.05$ | $2.75\pm 0.13$ | 4.36 |
| $10.89\pm 0.10$ | $1.22\pm 0.24$ | 0.88 | | | | | | $10.70\pm 0.21$ | $1.56\pm 0.21$ | 1.79 | | $10.40\pm 0.02$ | $1.22\pm 0.08$ | 1.32 |
Haro 13A | $5.78\pm 0.02$ | $2.10\pm 0.05$ | 2.38 | | $5.47\pm 0.07$ | $1.01\pm 0.19$ | 2.17 | | $5.71\pm 0.02$ | $3.62\pm 0.02$ | 6.00 | | $5.19\pm 0.08$ | $2.92\pm 0.07$ | 5.14 | blue asy
V599 Ori | $5.02\pm 0.04$ | $2.40\pm 0.12$ | 1.35 | | $5.07\pm 0.20$ | $2.73\pm 0.53$ | 0.67 | | $5.25\pm 0.02$ | $3.51\pm 0.03$ | 4.78 | | $4.94\pm 0.15$ | $3.49\pm 0.11$ | 3.24 | two comp?
| $7.18\pm 0.05$ | $1.24\pm 0.13$ | 0.77 | | | | | | | | | | | | |
RR Tau | $-5.40\pm 0.02$ | $1.36\pm 0.06$ | 1.55 | | $-5.41\pm 0.04$ | $0.98\pm 0.10$ | 1.14 | | $-5.09\pm 0.01$ | $1.83\pm 0.02$ | 5.22 | | $-5.02\pm 0.01$ | $1.84\pm 0.02$ | 6.80 |
V350 Ori | $4.39\pm 0.06$ | $1.37\pm 0.16$ | 0.69 | | | | | | $3.70\pm 0.04$ | $3.25\pm 0.06$ | 1.84 | | $3.68\pm 0.05$ | $3.78\pm 0.41$ | 1.01 | blue profile
MWC 789 | $2.57\pm 0.08$ | $1.80\pm 0.20$ | 0.58 | | | | | | $2.61\pm 0.02$ | $2.01\pm 0.04$ | 1.95 | | $2.62\pm 0.05$ | $1.80\pm 0.12$ | 1.12 | blue asy
LkHA 208 | $-0.13\pm 0.04$ | $1.50\pm 0.08$ | 1.30 | | | | | | $0.02\pm 0.03$ | $1.87\pm 0.07$ | 3.01 | | $-0.04\pm 0.04$ | $1.21\pm 0.13$ | 2.53 | two comp
LkHA 339 | $11.57\pm 0.02$ | $3.08\pm 0.04$ | 2.97 | | $11.03\pm 0.06$ | $2.27\pm 0.14$ | 2.49 | | | | | | | | | self-abs
LkHA 215 | $2.67\pm 0.02$ | $1.96\pm 0.05$ | 1.70 | | $2.34\pm 0.03$ | $1.10\pm 0.07$ | 2.10 | | $2.82\pm 0.02$ | $2.55\pm 0.04$ | 6.62 | | $2.89\pm 0.05$ | $2.48\pm 0.13$ | 7.62 | red-asy
R Mon | $9.55\pm 0.04$ | $1.57\pm 0.09$ | 0.61 | | | | | | $9.51\pm 0.02$ | $2.35\pm 0.06$ | 4.10 | | $9.69\pm 0.08$ | $1.93\pm 0.07$ | 3.80 | red-asy
V590 Mon | $5.48\pm 0.04$ | $1.75\pm 0.09$ | 0.72 | | $4.96\pm 0.12$ | $3.00\pm 0.34$ | 0.84 | | $5.06\pm 0.03$ | $3.25\pm 0.06$ | 2.41 | | | | | three comp
| $9.02\pm 0.05$ | $1.61\pm 0.13$ | 0.68 | | $8.76\pm 0.10$ | $1.74\pm 0.22$ | 0.83 | | $8.93\pm 0.01$ | $2.02\pm 0.02$ | 6.37 | | $8.95\pm 0.04$ | $1.93\pm 0.10$ | 6.74 |
| $11.48\pm 0.06$ | $1.78\pm 0.20$ | 0.56 | | $11.38\pm 0.05$ | $1.31\pm 0.15$ | 1.33 | | $11.45\pm 0.01$ | $1.58\pm 0.02$ | 4.99 | | $11.53\pm 0.04$ | $1.52\pm 0.09$ | 5.87 |
VV Ser | | | | | | | | | $5.34$ | $1.73\pm 0.04$ | 1.14 | | $5.38\pm 0.29$ | $1.43\pm 0.29$ | 0.77 | three comp
| | | | | | | | | $7.32\pm 0.01$ | $1.91\pm 0.04$ | 1.29 | | $7.53\pm 0.29$ | $1.73\pm 0.29$ | 1.92 |
| | | | | | | | | $9.51\pm 0.01$ | $2.08\pm 0.04$ | 0.92 | | $9.13\pm 0.29$ | $1.47\pm 0.29$ | 0.81 |
MWC 300 | | | | | | | | | $6.54\pm 0.21$ | $2.12\pm 0.21$ | 0.42 | | $6.48\pm 0.13$ | $0.55\pm 0.21$ | 0.21 | three comp?
| | | | | | | | | $8.27\pm 0.21$ | $1.87\pm 0.21$ | 1.15 | | $7.92\pm 0.06$ | $1.44\pm 0.20$ | 0.85 |
| | | | | | | | | $10.12\pm 0.21$ | $1.62\pm 0.21$ | 1.24 | | $10.00\pm 0.08$ | $1.70\pm 0.19$ | 0.76 |
AS 310 | | | | | | | | | $7.07\pm 0.07$ | $2.15\pm 0.13$ | 0.53 | | $7.28\pm 0.20$ | $1.90\pm 0.87$ | 0.54 |
MWC 614 | | | | | | | | | $2.22\pm 0.09$ | $1.10\pm 0.20$ | 0.39 | | | | |
Par 21 | | | | | | | | | $16.79\pm 0.04$ | $1.07\pm 0.09$ | 0.66 | | $16.97\pm 0.07$ | $0.81\pm 0.15$ | 0.82 |
V1685 Cyg | $7.61\pm 0.02$ | $1.90\pm 0.05$ | 5.22 | | $7.79\pm 0.03$ | $1.89\pm 0.07$ | 6.60 | | $7.59\pm 0.02$ | $2.83\pm 0.06$ | 9.98 | | $7.60\pm 0.07$ | $2.68\pm 0.22$ | 9.83 | two comp
| | | | | | | | | $12.64\pm 0.08$ | $3.27\pm 0.19$ | 2.86 | | | | |
Par 22 | | | | | | | | | $4.84$ | $3.17\pm 0.02$ | 8.04 | | $5.00\pm 0.02$ | $2.90\pm 0.04$ | 7.69 | two comp?
| | | | | | | | | $9.21\pm 0.03$ | $2.90\pm 0.06$ | 1.61 | | $9.51\pm 0.11$ | $3.14\pm 0.33$ | 1.34 |
PV Cep | $2.78\pm 0.07$ | $0.66\pm 0.16$ | 0.72 | | | | | | $2.15\pm 0.03$ | $2.98\pm 0.06$ | 1.97 | | | | | red asy
AS 442 | $0.14\pm 0.04$ | $1.12\pm 0.09$ | 0.77 | | | | | | $0.23\pm 0.01$ | $1.44\pm 0.04$ | 3.14 | | $0.22\pm 0.03$ | $1.43\pm 0.09$ | 3.29 | red wing
LkHA 134 | | | | | | | | | $0.70\pm 0.02$ | $1.79\pm 0.04$ | 2.15 | | $0.94\pm 0.03$ | $1.28\pm 0.08$ | 1.89 |
HD 200775 | $2.30\pm 0.02$ | $1.90\pm 0.05$ | 1.49 | | $2.66\pm 0.14$ | $2.68\pm 0.38$ | 0.95 | | $1.68\pm 0.01$ | $3.53\pm 0.01$ | 4.71 | | $1.60\pm 0.03$ | $3.18\pm 0.06$ | 4.91 | blue asy
LkHA 324 | | | | | | | | | $-2.43\pm 0.03$ | $5.62\pm 0.07$ | 3.36 | | $-2.17\pm 0.04$ | $5.13\pm 0.09$ | 3.96 | blue asy?
V645 Cyg | $-44.05\pm 0.07$ | $2.94\pm 0.15$ | 1.96 | | $-43.52\pm 0.10$ | $1.45\pm 0.19$ | 3.89 | | $-44.07\pm 0.03$ | $3.87\pm 0.07$ | 3.64 | | $-43.88\pm 0.03$ | $2.88\pm 0.07$ | 4.93 |
LkHA 234 | $-10.40\pm 0.01$ | $2.15\pm 0.02$ | 5.14 | | $-10.55\pm 0.03$ | $1.98\pm 0.08$ | 6.71 | | $-10.11\pm 0.02$ | $3.85\pm 0.05$ | 10.58 | | $-10.11\pm 0.02$ | $2.94\pm 0.05$ | 10.21 | red wing?
AS 477 | | | | | | | | | $6.54\pm 0.01$ | $2.74\pm 0.04$ | 3.83 | | $6.66\pm 0.03$ | $2.12\pm 0.10$ | 4.41 | red asy?
BD46 | $6.25\pm 0.02$ | $1.39\pm 0.05$ | 2.21 | | $6.89\pm 0.07$ | $1.21\pm 0.16$ | 2.37 | | $6.70\pm 0.01$ | $2.30\pm 0.04$ | 3.54 | | $6.71\pm 0.03$ | $1.97\pm 0.07$ | 3.44 | blue wing
BH Cep | | | | | | | | | $1.81\pm 0.07$ | $0.93\pm 0.16$ | 0.35 | | | | |
V375 Lac | | | | | | | | | $-0.18$ | $1.72\pm 0.01$ | 7.13 | | $-0.28$ | $1.55\pm 0.01$ | 9.90 |
IL Cep | $-9.99\pm 0.03$ | $1.78\pm 0.08$ | 1.62 | | | | | | $-10.13\pm 0.04$ | $2.95\pm 0.09$ | 1.72 | | $-9.95\pm 0.09$ | $2.10\pm 0.24$ | 0.83 | flat top
MWC 1080 | $-30.52\pm 0.02$ | $4.47\pm 0.06$ | 1.91 | | $-30.52\pm 0.05$ | $3.48\pm 0.14$ | 1.46 | | | | | | | | | red profile,wings
LkHA 259 | $-7.13\pm 0.06$ | $1.98\pm 0.13$ | 1.17 | | $-6.04\pm 0.07$ | $1.45\pm 0.17$ | 2.27 | | | | | | | | | blue profile
Table 3: Derived parameters of the lines

Name | VLSR | $\frac{^{12}\mathrm{CO}(2-1)}{^{13}\mathrm{CO}(2-1)}$ | $\tau_{^{13}\mathrm{CO}(2-1)}$ | $\tau_{^{12}\mathrm{CO}(2-1)}$ | T$_{ex}$ | $\Theta_{s}$ | $\frac{^{12}\mathrm{CO}(3-2)}{^{12}\mathrm{CO}(2-1)}$ | N${}_{H_{2}}$
---|---|---|---|---|---|---|---|---
 | (km s$^{-1}$) | | | | (K) | ($\arcsec$) | | ($10^{21}$ cm$^{-2}$)
MacC H12 | -4.7 | 3.6 | 0.33 | 28.96 | 20.93 | 179 | 0.95 | 5.49
LkHA 198 | -0.2 | 3.1 | 0.39 | 34.66 | 15.76 | 530 | 0.68 | 5.89
RNO 6 | -36.0 | 2.8 | 0.44 | 39.32 | 10.20 | 230 | 0.77 | 3.06
XY Per | -4.2 | 7.4 | 0.15 | 12.92 | 11.24 | 133 | 0.64 | 1.55
V892 Tau | 7.2 | 4.2 | 0.27 | 24.20 | 12.05 | 310 | 0.64 | 2.76
AB Aur | 6.1 | | | 26.60 | 15.29 | 188 | 0.72 | 1.40
T Ori | 7.5 | 4.4 | 0.26 | 22.95 | 86.79 | 74 | 0.87 | 39.81
| 11.0 | 6.4 | 0.17 | 15.12 | 44.65 | 141 | 0.87 | 13.47
| 13.2 | 8.9 | 0.12 | 10.61 | 32.99 | 162 | 0.92 | 6.16
V380 Ori | 7.0 | | | 26.60 | | | |
| 9.0 | 2.2 | 0.61 | 53.95 | 16.36 | 800 | 0.69 | 12.83
V586 Ori | 6.5 | | | 26.60 | 8.36 | 300 | 0.72 | 1.23
| 8.7 | | | 26.60 | 24.12 | 153 | 0.82 | 3.68
BF Ori | 6.2 | 4.1 | 0.28 | 24.88 | 17.12 | 285 | 0.72 | 4.42
| 9.2 | 3.4 | 0.35 | 31.00 | 14.57 | 164 | 1.00 | 3.59
| 10.9 | 2.6 | 0.49 | 43.21 | 7.55 | 300 | 0.54 | 1.66
Haro 13A | 5.6 | 4.3 | 0.26 | 23.56 | 14.25 | 600 | 0.68 | 4.11
V599 Ori | 5.0 | 5.2 | 0.21 | 19.01 | 11.43 | | 0.70 | 2.66
| 7.2 | | | 26.60 | | | |
RR Tau | -5.4 | 4.5 | 0.25 | 22.37 | 21.09 | 126 | 0.90 | 3.43
V350 Ori | 4.4 | 6.3 | 0.17 | 15.38 | 6.84 | | 0.60 | 1.20
MWC 789 | 2.6 | 3.8 | 0.31 | 27.18 | 7.06 | | 0.50 | 1.33
LkHA 208 | -0.1 | 2.9 | 0.42 | 37.63 | 10.05 | 290 | 0.45 | 2.28
LkHA 339 | 11.3 | 3.8 | 0.31 | 27.18 | $>12.7$ | | | $>7.53$
LkHA 215 | 2.5 | 5.1 | 0.22 | 19.42 | 20.66 | 170 | 0.86 | 4.08
R Mon | 9.6 | 10.1 | 0.10 | 9.28 | 12.58 | 250 | 0.61 | 0.91
V590 Mon | 5.2 | 6.2 | 0.18 | 15.65 | 10.16 | | | 1.08
| 8.9 | 11.8 | 0.09 | 7.88 | 18.24 | 207 | 0.83 | 1.09
| 11.4 | 7.9 | 0.14 | 12.05 | 17.88 | 152 | 0.82 | 1.26
VV Ser | 5.4 | | | 26.60 | 6.23 | 270 | 0.44 | 0.98
| 7.4 | | | 26.60 | 12.69 | 74 | 0.71 | 2.04
| 9.3 | | | 26.60 | 6.81 | 140 | 0.43 | 1.25
MWC 300 | 6.5 | | | 26.60 | 4.32 | 300 | 0.09 | 1.00
| 8.1 | | | 26.60 | 6.53 | 220 | 0.51 | 1.08
| 10.1 | | | 26.60 | 6.08 | 500 | 0.58 | 0.91
AS 310 | 7.2 | | | 26.60 | 6.47 | 95 | 0.55 | 1.24
MWC 614 | 2.2 | | | 26.60 | 4.56 | | | 0.35
Par 21 | 16.9 | | | 26.60 | 8.18 | 78 | 0.50 | 0.74
V1685 Cyg | 7.7 | 2.8 | 0.44 | 39.32 | 22.23 | 330 | 0.83 | 10.00
| 12.6 | | | 26.60 | 11.26 | | | 2.00
Par 22 | 4.9 | | | 26.60 | 18.75 | 340 | 0.83 | 5.64
| 9.4 | | | 26.60 | 7.84 | 200 | 0.74 | 1.92
PV Cep | 2.8 | 12.4 | 0.08 | 7.48 | 9.05 | | | 0.43
AS 442 | 0.1 | 5.2 | 0.21 | 19.01 | 12.40 | 165 | 0.77 | 1.11
LkHA 134 | 0.8 | | | 26.60 | 9.07 | 205 | 0.50 | 1.34
HD 200775 | 2.5 | 5.9 | 0.19 | 16.53 | 15.13 | 193 | 0.73 | 3.08
LkHA 324 | -2.3 | | | 26.60 | 14.55 | 135 | 0.71 | 7.11
V645 Cyg | -43.8 | 2.4 | 0.54 | 47.97 | 18.13 | 109 | 0.65 | 12.32
LkHA 234 | -10.5 | 3.7 | 0.32 | 28.04 | 22.56 | 390 | 0.66 | 10.13
AS477/BD46 | 6.6 | 2.7 | 0.46 | 41.17 | 15.10 | 147 | 0.59 | 4.68
BH Cep | 1.8 | | | 26.60 | 4.41 | | | 0.29
V375 Lac | -0.2 | | | 26.60 | 28.38 | 119 | 0.81 | 5.76
IL Cep | -10.0 | 1.8 | 0.81 | 72.17 | 6.51 | | 0.30 | 5.34
MWC 1080 | -30.5 | | | 26.60 | $>11.8$ | | | $>7.19$
LkHA 259 | -6.6 | | | 26.60 | $>6.4$ | | | $>3.49$
Table 4: The SED fitting results

Name | Av | log(Age) | M∗ | R∗ | log(L∗) | log(T∗) | log(Menv) | log($\dot{M}_{env}$) | log(M${}_{disk}$) | Incl | log(Rout) | log($\dot{M}_{disk}$)
---|---|---|---|---|---|---|---|---|---|---|---|---
 | (mag) | log(yr) | (M☉) | (R☉) | log(L☉) | log(K) | log(M☉) | log(M☉ yr$^{-1}$) | log(M☉) | ($\arcdeg$) | log(AU) | log(M☉ yr$^{-1}$)
MacC H12 | $0.61\pm 0.57$ | $3.73\pm 0.32$ | $1.84\pm 0.16$ | $14.96\pm 2.44$ | $1.79\pm 0.13$ | $3.62$ | $0.32\pm 0.16$ | $-4.88\pm 0.07$ | $-1.74\pm 0.10$ | $18.19$ | $1.21\pm 0.32$ | $-6.55\pm 0.35$
LkHA 198 | $0.00$ | $3.07\pm 0.06$ | $3.84\pm 0.38$ | $29.95\pm 5.60$ | $2.42\pm 0.14$ | $3.62$ | $0.06\pm 0.35$ | $-4.56\pm 0.13$ | $-2.12\pm 0.63$ | $57.17\pm 24.78$ | $0.54\pm 0.16$ | $-5.33\pm 0.12$
Vx Cas | $1.48\pm 0.30$ | $6.66\pm 0.25$ | $3.62\pm 0.24$ | $2.22\pm 0.08$ | $2.17\pm 0.11$ | $4.13\pm 0.02$ | $-4.30\pm 0.54$ | | $-2.55\pm 0.13$ | $41.80\pm 19.87$ | $3.00\pm 0.32$ | $-7.89\pm 0.46$
RNO 6 | $0.71\pm 0.65$ | $6.03\pm 0.07$ | $5.10\pm 0.42$ | $2.76\pm 0.08$ | $2.75\pm 0.12$ | $4.23\pm 0.02$ | $1.08\pm 0.10$ | $-8.47\pm 0.31$ | $-1.78\pm 0.23$ | $78.59\pm 2.92$ | $2.40\pm 0.21$ | $-6.79\pm 0.94$
IP Per | $0.00\pm 0.01$ | $5.71\pm 0.09$ | $2.17\pm 0.47$ | $5.03\pm 0.50$ | $1.09\pm 0.09$ | $3.67\pm 0.01$ | $-1.64\pm 0.44$ | $-5.62\pm 0.41$ | $-1.49\pm 0.35$ | $28.66\pm 9.47$ | $2.45\pm 0.25$ | $-6.85\pm 0.78$
V892 Tau | $6.44\pm 1.95$ | $6.50\pm 0.37$ | $2.50\pm 0.88$ | $4.04\pm 2.84$ | $1.81\pm 0.24$ | $3.98\pm 0.18$ | $-0.88\pm 0.38$ | $-5.62\pm 0.38$ | $-1.29\pm 0.30$ | $30.50\pm 9.53$ | $2.69\pm 0.35$ | $-7.14\pm 0.15$
XY Per | $3.06\pm 0.09$ | $6.99$ | $2.81$ | $1.93$ | $1.75$ | $4.06$ | $-5.71$ | | $-2.02$ | $78.61\pm 2.92$ | $2.77$ | $-8.81$
AB Aur | $1.04$ | $5.14$ | $1.10$ | $6.73$ | $1.10$ | $3.62$ | $-1.33$ | $-5.65$ | $-2.12$ | $31.79$ | $2.15$ | $-7.34$
MWC 480 | $0.33\pm 0.35$ | $6.33\pm 0.16$ | $3.04\pm 0.33$ | $4.47\pm 1.10$ | $1.78\pm 0.92$ | $3.86\pm 0.19$ | $-6.13\pm 0.35$ | | $-1.29\pm 0.25$ | $54.75\pm 19.35$ | $2.38\pm 0.25$ | $-6.36\pm 0.67$
HD 35929 | $0.24\pm 0.16$ | $6.36\pm 0.19$ | $3.10\pm 0.47$ | $5.19\pm 0.95$ | $1.81\pm 0.23$ | $3.85\pm 0.03$ | $-2.87\pm 0.36$ | | $-4.87\pm 0.63$ | $56.30\pm 19.64$ | $3.61\pm 0.72$ | $-10.66\pm 0.71$
HD 36112 | $0.60\pm 0.04$ | $6.96\pm 0.05$ | $1.95\pm 0.05$ | $1.83\pm 0.05$ | $1.11\pm 0.02$ | $3.91$ | $-5.86\pm 0.44$ | | $-1.78\pm 0.49$ | $35.05\pm 11.45$ | $2.58\pm 0.33$ | $-8.23\pm 0.14$
HD 245185 | $0.00$ | $6.13$ | $3.74$ | $5.68$ | $2.17$ | $3.93$ | $-6.84$ | | $-1.41$ | $81.37$ | $2.29$ | $-7.19$
T Ori | $1.47\pm 0.16$ | $6.69\pm 0.28$ | $3.72\pm 0.51$ | $2.33\pm 0.31$ | $2.27\pm 0.18$ | $4.13\pm 0.04$ | $-5.42\pm 0.46$ | | $-1.26\pm 0.51$ | $53.80\pm 15.54$ | $2.60\pm 0.71$ | $-6.54\pm 0.43$
CQ Tau | $2.31\pm 0.13$ | $6.85\pm 0.15$ | $2.82\pm 0.27$ | $2.07\pm 0.03$ | $1.78\pm 0.16$ | $4.04\pm 0.04$ | $-4.50\pm 0.32$ | | $-2.03\pm 0.47$ | $47.41\pm 31.50$ | $2.67\pm 0.23$ | $-7.47\pm 0.54$
V380 Ori | $2.87\pm 1.42$ | $5.87\pm 0.11$ | $4.68\pm 0.05$ | $6.62\pm 2.95$ | $2.62\pm 0.21$ | $4.03\pm 0.13$ | $0.88\pm 0.21$ | $-5.66\pm 0.43$ | $-3.51\pm 0.29$ | $45.12\pm 18.54$ | $2.88\pm 0.39$ | $-8.52\pm 0.48$
V586 Ori | $1.00$ | $6.01$ | $3.86$ | $7.62$ | $1.93$ | $3.80$ | $-0.06$ | $-7.56$ | $-1.60$ | $81.37$ | $2.24$ | $-6.86$
BF Ori | $1.81\pm 0.31$ | $6.72\pm 0.17$ | $3.09\pm 0.31$ | $2.19\pm 0.29$ | $1.94\pm 0.16$ | $4.07\pm 0.04$ | $-4.60\pm 0.98$ | | $-2.94\pm 0.57$ | $49.03\pm 21.24$ | $2.58\pm 0.53$ | $-8.08\pm 0.59$
HD37411 | $11.65$ | $6.63$ | $0.35$ | $1.04$ | $-0.15$ | $3.55$ | $-8.77$ | | $-2.33$ | $63.26$ | $2.09$ | $-7.18$
Haro 13A | $0.00$ | $3.02$ | $3.47$ | $24.48$ | $1.34\pm 0.32$ | $3.63$ | $-0.25\pm 0.65$ | $-4.71$ | $-1.84$ | $64.16\pm 21.08$ | $0.67$ | $-5.62\pm 0.36$
V599 Ori | $2.94\pm 1.62$ | $5.64\pm 0.30$ | $1.83\pm 1.19$ | $5.29\pm 1.59$ | $1.20\pm 0.31$ | $3.65\pm 0.06$ | $-1.36\pm 0.52$ | $-5.76\pm 0.35$ | $-1.24\pm 0.27$ | $42.47\pm 24.22$ | $2.59\pm 0.46$ | $-6.71\pm 0.37$
RR Tau | $1.82\pm 0.27$ | $6.53\pm 0.28$ | $3.68\pm 0.42$ | $2.60\pm 0.49$ | $2.30\pm 0.17$ | $4.12\pm 0.03$ | $-5.71\pm 0.46$ | | $-1.12\pm 0.24$ | $43.72\pm 19.52$ | $2.60\pm 0.32$ | $-6.41\pm 0.32$
V350 Ori | $1.69\pm 0.47$ | $6.73\pm 0.16$ | $2.73\pm 0.39$ | $2.05\pm 0.31$ | $1.75\pm 0.19$ | $4.03\pm 0.06$ | $-3.48\pm 0.39$ | | $-2.52\pm 0.50$ | $43.21\pm 19.30$ | $3.39\pm 0.32$ | $-7.29\pm 0.73$
MWC 789 | $2.42\pm 0.77$ | $6.24\pm 0.17$ | $3.90\pm 0.37$ | $2.81\pm 0.29$ | $2.36\pm 0.10$ | $4.12\pm 0.02$ | $-6.47\pm 0.18$ | | $-1.02\pm 0.13$ | $22.65\pm 6.39$ | $2.43\pm 0.05$ | $-6.49\pm 0.37$
LkHA 208 | $5.39\pm 0.09$ | $5.97$ | $4.23$ | $6.69$ | $2.32$ | $3.93$ | $-0.93$ | $-7.33$ | $-1.00$ | $55.80\pm 14.37$ | $3.06$ | $-7.71$
LkHA 339 | $2.96\pm 0.12$ | $6.60\pm 0.31$ | $3.42\pm 0.60$ | $2.25\pm 0.25$ | $2.17\pm 0.28$ | $4.11\pm 0.05$ | $-2.38\pm 0.76$ | | $-1.78\pm 0.36$ | $40.20\pm 21.08$ | $3.73\pm 0.52$ | $-8.67\pm 0.38$
LkHA 215 | $1.34\pm 0.35$ | $5.74$ | $5.13$ | $8.46$ | $2.59$ | $3.95$ | $1.05$ | $-5.01$ | $-2.15$ | $46.40\pm 21.09$ | $3.65$ | $-9.19$
R Mon | $4.87$ | $4.24$ | $4.67$ | $24.59$ | $2.29$ | $3.64$ | $-0.61$ | $-4.56$ | $-1.95$ | $18.19$ | $1.55$ | $-6.97$
V590 Mon | $0.42\pm 0.15$ | $6.45\pm 0.54$ | $4.71\pm 0.20$ | $5.86\pm 3.12$ | $2.47\pm 0.04$ | $4.06\pm 0.15$ | $-1.86\pm 0.68$ | $-6.57\pm 0.73$ | $-2.25\pm 0.31$ | $84.45\pm 2.87$ | $3.03\pm 0.36$ | $-8.03\pm 0.31$
GU CMa | $0.42$ | $6.90$ | $3.14$ | $2.05$ | $1.93$ | $4.09$ | $-3.51$ | | $-7.11$ | $49.46$ | $3.62$ | $-13.76$
HD 141569 | $0.26\pm 0.04$ | $6.93$ | $2.14$ | $1.74$ | $1.33$ | $3.97$ | $-2.99$ | | $-4.81$ | $56.31\pm 23.40$ | $3.72$ | $-11.82$
VV Ser | $3.98\pm 0.25$ | $6.81\pm 0.20$ | $3.32\pm 0.35$ | $2.16\pm 0.15$ | $2.10\pm 0.18$ | $4.10\pm 0.03$ | $-5.83\pm 0.63$ | | $-1.39\pm 0.31$ | $45.71\pm 15.48$ | $2.72\pm 0.28$ | $-6.43\pm 0.34$
MWC 300 | $4.23\pm 2.13$ | $6.56\pm 0.11$ | $7.37\pm 0.67$ | $3.33\pm 0.19$ | $3.35\pm 0.13$ | $4.33\pm 0.03$ | $-6.44\pm 0.38$ | | $-1.05\pm 0.01$ | $62.27\pm 13.03$ | $2.27\pm 0.23$ | $-5.80\pm 0.80$
AS 310 | $6.65\pm 3.01$ | $5.85\pm 0.21$ | $9.86\pm 0.53$ | $3.90\pm 0.10$ | $3.79\pm 0.09$ | $4.41\pm 0.01$ | $1.61\pm 0.08$ | $-4.31\pm 0.9$ | $-0.59\pm 0.11$ | $87.13$ | $3.26\pm 0.44$ | $-5.53\pm 0.80$
Par 21 | $1.61$ | $6.13$ | $3.74$ | $5.68$ | $2.17$ | $3.93$ | $-6.84$ | | $-1.41$ | $87.13$ | $2.29$ | $-7.19$
V1295 Aql | $0.36$ | $6.69\pm 0.30$ | $3.22\pm 0.37$ | $2.29\pm 0.34$ | $2.00\pm 0.18$ | $4.07\pm 0.01$ | $-6.67\pm 0.30$ | | $-2.55\pm 0.31$ | $66.45\pm 9.44$ | $1.55\pm 0.09$ | $-7.18\pm 0.53$
V1685 Cyg | $1.35\pm 0.95$ | $4.93\pm 0.37$ | $6.75\pm 1.40$ | $24.50\pm 12.49$ | $3.08\pm 0.14$ | $3.85\pm 0.13$ | $1.30\pm 0.24$ | $-4.08\pm 0.32$ | $-1.04\pm 0.35$ | $35.64\pm 14.64$ | $1.74\pm 0.34$ | $-4.61\pm 0.38$
Par 22 | $3.67\pm 2.57$ | $5.35\pm 0.35$ | $1.29\pm 1.14$ | $6.46\pm 2.88$ | $1.24\pm 0.34$ | $3.59\pm 0.09$ | $-0.06\pm 0.48$ | $-4.21\pm 0.55$ | $-1.71\pm 0.35$ | $54.82\pm 29.65$ | $2.00\pm 1.47$ | $-6.35\pm 0.52$
PV Cep | $3.16\pm 0.43$ | $3.08\pm 0.06$ | $0.28\pm 0.09$ | $5.69\pm 0.25$ | $1.77\pm 0.18$ | $3.51\pm 0.03$ | $-0.49\pm 0.19$ | $-5.33\pm 0.06$ | $-1.53\pm 0.34$ | $18.19$ | $0.30\pm 0.16$ | $-4.40\pm 0.06$
AS 442 | $1.98\pm 0.24$ | $6.54\pm 0.22$ | $4.31\pm 0.37$ | $2.67\pm 0.60$ | $2.48\pm 0.15$ | $4.17\pm 0.04$ | $-3.41\pm 0.61$ | | $-1.99\pm 0.39$ | $55.67\pm 13.68$ | $2.92\pm 0.45$ | $-6.99\pm 0.47$
LkHA 134 | $2.67\pm 0.11$ | $6.27\pm 0.13$ | $4.13\pm 0.31$ | $2.67\pm 0.36$ | $2.45\pm 0.14$ | $4.16\pm 0.02$ | $-2.07\pm 0.06$ | $-8.05\pm 0.50$ | $-1.75\pm 0.26$ | $58.33\pm 20.25$ | $3.90\pm 0.20$ | $-9.06\pm 0.52$
HD 200775 | $0.12\pm 0.08$ | $5.34\pm 0.01$ | $6.65\pm 0.09$ | $14.22\pm 0.06$ | $2.78\pm 0.03$ | $3.88\pm 0.01$ | $1.61\pm 0.05$ | $-3.59\pm 0.30$ | $-2.33\pm 0.01$ | $26.17\pm 11.03$ | $1.93\pm 0.12$ | $-7.86\pm 0.11$
LkHA 324 | $1.01\pm 1.09$ | $6.16\pm 1.27$ | $2.32\pm 0.77$ | $4.87\pm 1.24$ | $1.47\pm 0.43$ | $3.76\pm 0.20$ | $-1.50\pm 0.51$ | $-5.54\pm 0.42$ | $-1.89\pm 0.45$ | $41.68\pm 19.41$ | $2.67\pm 0.97$ | $-7.74\pm 0.56$
HD 203024 | $0.45\pm 0.30$ | $6.89\pm 0.07$ | $2.35\pm 0.15$ | $1.89\pm 0.06$ | $1.48\pm 0.09$ | $3.99\pm 0.03$ | $-5.33\pm 0.36$ | | $-2.06\pm 0.02$ | $74.29\pm 4.39$ | $2.27\pm 0.26$ | $-8.20\pm 0.30$
V645 Cyg | $0.42$ | $5.16$ | $10.59$ | $4.16$ | $3.88$ | $4.42$ | $2.46$ | $-3.38$ | $-2.26$ | $18.19$ | $2.96$ | $-7.44$
LkHA 234 | $3.05\pm 3.29$ | $5.15\pm 0.02$ | $8.96\pm 0.25$ | $4.91\pm 0.39$ | $3.76\pm 0.01$ | $4.35\pm 0.02$ | $2.82\pm 0.06$ | $-2.97\pm 0.05$ | $-0.79\pm 0.72$ | $45.13\pm 4.01$ | $2.21\pm 0.47$ | $-5.34\pm 1.02$
AS 477 | $0.60\pm 0.25$ | $6.43\pm 0.22$ | $5.11\pm 0.66$ | $2.72\pm 0.23$ | $2.76\pm 0.20$ | $4.22\pm 0.04$ | $-5.86\pm 0.45$ | | $-1.65\pm 0.34$ | $59.44\pm 13.66$ | $2.55\pm 0.36$ | $-6.27\pm 0.35$
LkHA 257 | $2.62\pm 0.83$ | $6.62\pm 0.16$ | $3.61\pm 0.64$ | $2.68\pm 1.60$ | $2.18\pm 0.16$ | $4.11\pm 0.09$ | $-2.31\pm 0.66$ | $-6.99\pm 0.70$ | $-2.46\pm 0.71$ | $57.05\pm 22.40$ | $3.43\pm 0.34$ | $-7.14\pm 0.71$
BH Cep | $0.80\pm 0.34$ | $6.61\pm 0.29$ | $3.27\pm 0.21$ | $3.35\pm 1.33$ | $2.05\pm 0.13$ | $4.03\pm 0.06$ | $-3.49\pm 0.33$ | | $-2.50\pm 0.70$ | $84.26\pm 2.88$ | $3.53\pm 0.14$ | $-8.24\pm 0.96$
SV Cep | $2.37\pm 0.17$ | $6.71\pm 0.17$ | $3.33\pm 0.48$ | $2.12\pm 0.18$ | $2.07\pm 0.24$ | $4.10\pm 0.04$ | $-6.40\pm 0.54$ | | $-2.32\pm 0.42$ | $48.85\pm 22.50$ | $2.31\pm 0.36$ | $-7.55\pm 0.68$
V375 Lac | $1.85\pm 1.14$ | $3.71\pm 0.40$ | $1.88\pm 0.51$ | $15.15\pm 2.93$ | $1.92\pm 0.16$ | $3.62\pm 0.01$ | $0.55\pm 0.79$ | $-4.45\pm 0.15$ | $-1.28\pm 0.75$ | $18.19$ | $0.85\pm 0.32$ | $-5.18\pm 0.48$
IL Cep | $0.28\pm 0.32$ | $6.02\pm 0.10$ | $3.85\pm 0.19$ | $7.53\pm 1.07$ | $1.94\pm 0.15$ | $3.81\pm 0.07$ | $-2.43\pm 0.61$ | $-7.08\pm 0.54$ | $-2.54\pm 0.68$ | $47.87\pm 18.62$ | $2.47\pm 0.45$ | $-8.25\pm 0.34$
MWC 1080 | $1.01\pm 0.78$ | $5.37\pm 0.02$ | $10.59\pm 0.87$ | $4.00\pm 0.19$ | $3.88\pm 0.12$ | $4.43\pm 0.02$ | $2.08\pm 0.17$ | $-3.58\pm 0.21$ | $-1.01\pm 0.47$ | $38.03\pm 4.59$ | $2.72\pm 0.30$ | $-5.92\pm 0.37$
LkHA 259 | $1.78\pm 0.19$ | $3.29\pm 0.26$ | $2.63\pm 0.75$ | $22.94\pm 5.25$ | $2.19\pm 0.22$ | $3.61$ | $1.02\pm 0.15$ | $-3.09\pm 0.14$ | $-1.69\pm 0.32$ | $18.19$ | $0.59\pm 0.14$ | $-5.50\pm 0.28$
Figure 1: $^{12}$CO (2-1) (red) and $^{13}$CO (2-1) (green) lines of the A Type stars.
The source names are plotted in the upper-left corner of each panel.
Figure 2: $^{12}$CO (2-1) (red) and $^{13}$CO (2-1) (green) lines of the B/F Type stars.
The source names are plotted in the upper-left corner of each panel.
Figure 3: The SEDs of A Type stars. The observed data are plotted as circular
symbols. The black solid line represents the best fit to the data, while the
grey solid lines represent the fits with $\chi^{2}-\chi_{best}^{2}<3\times
n_{data}$. The dashed lines represent photospheric contributions, including
the effect of foreground extinction, from the best-fitting models. As shown by
the vertical lines, some model SEDs have poor signal-to-noise ratios at
wavelengths beyond 1-100 $\micron$ (Robitaille et al., 2006).
Figure 4: The SEDs of B/F Type stars. The symbols and lines are the same as in
Fig. 3.
Figure 5: The optical depth $\tau$ (left) of $^{12}$CO (2-1) and the excitation
temperature T$_{ex}$ (right) as a function of age. The horizontal and vertical
dashed lines in the left panel mark an age of $10^{6}$ yr and the average optical
depth of 26.6.
Figure 6: The masses derived from SED fitting compared to those obtained from
Manoj et al. (2006) (left), and the effective temperatures of the stars
obtained from SED fitting compared to the effective temperatures corresponding
to the spectral types (right). The solid lines in both panels show the
least-squares fits. The dashed line in the left panel shows that M$_{mod}$ is
consistent with M$_{lit}$. The two dashed lines in the right panel mark the
region where the effective temperatures obtained from the two independent
methods agree to within $\pm$0.3 orders of magnitude.
Figure 7: The mass-luminosity function of the sample stars. The solid line
represents the best least-squares fit.
Figure 8: The envelope mass (left) and the envelope accretion rate (right) as a
function of age. The solid lines represent the best least-squares fits.
The vertical dashed lines mark ages of $10^{5}$ and $10^{6}$ yr. The horizontal dashed
lines in the left and right panels mark an envelope mass of $10^{-2}$ M☉ and an
envelope accretion rate of $10^{-6}$ M☉ yr$^{-1}$, respectively. The dashed oval in the
right panel marks the sources with zero envelope accretion rate.
Figure 9: The disk mass (left) and the disk accretion rate (right) as a function
of age. The solid lines represent the best least-squares fits.
Figure 10: The $^{12}$CO (2-1) intensity (left) and the $^{12}$CO (3-2) intensity
(right) as a function of age. The vertical and horizontal dashed lines mark an
age of $10^{6}$ yr and a $^{12}$CO intensity of 10 K km s$^{-1}$,
respectively. The dashed ellipses encircle the sources with a low signal-to-
noise level in the CO emission.
Figure 11: The $^{12}$CO (2-1) intensity (left) and the $^{12}$CO (3-2) intensity
(right) as a function of the envelope mass. The solid lines represent the best
least-squares fits. The horizontal dashed line in the right panel marks a
$^{12}$CO intensity of 10 K km s$^{-1}$. The sources with a low signal-to-noise
level in the CO emission are shown in the dashed ellipses.
|
arxiv-papers
| 2011-05-27T11:51:10 |
2024-09-04T02:49:19.132682
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Tie Liu, Huawei Zhang, Yuefang Wu, Sheng-Li Qin, Martin Miller",
"submitter": "Tie Liu",
"url": "https://arxiv.org/abs/1105.5534"
}
|
1105.5760
|
# Glass transitions in two-dimensional suspensions of colloidal ellipsoids
Zhongyu Zheng, Feng Wang and Yilong Han∗ Department of Physics, Hong Kong
University of Science and Technology, Clear Water Bay, Hong Kong, China
yilong@ust.hk
###### Abstract
We observed a two-step glass transition in monolayers of colloidal ellipsoids
by video microscopy. The glass transition in the rotational degree of freedom
was at a lower density than that in the translational degree of freedom.
Between the two transitions, ellipsoids formed an orientational glass.
Approaching the respective glass transitions, the rotational and translational
fastest-moving particles in the supercooled liquid moved cooperatively and
formed clusters with power-law size distributions. The mean cluster sizes
diverge as a power law on approaching the glass transitions. The clusters of
translational and rotational fastest-moving ellipsoids formed mainly within
pseudo-nematic domains, and around the domain boundaries, respectively.
Colloids are outstanding model systems for glass transition studies because
the trajectories of individual particles are measurable by video microscopy
Weeks00 . In the past two decades, significant experimental effort has been
applied to studying colloidal glasses consisting of isotropic particles
Gotze91 ; Megen93 ; Weeks00 ; Kegel00 ; Zhang09 , but little to anisotropic
particles Yunker11 . The glass transition of anisotropic particles has been
studied in three dimensions (3D) mainly through simulation Stillinger94 ;
Yatsenko08 . Molecular mode-coupling theory (MMCT) predicts that particle
anisotropy should lead to new phenomena in glass transitions Letz00 ;
Schilling97 , and some of these have been observed in recent 3D simulations of
hard ellipsoids Michele07 ; Pfleiderer08 . MMCT Franosch97 ; Schilling97
suggests that hard ellipsoids with an aspect ratio $p>2.5$ in 3D can form an
orientational glass in which rotational degrees of freedom become glass while
the center-of-mass motion remains ergodic Letz00 . Such a “liquid glass”
Schilling00 , in analogy to a liquid crystal, has not yet been explored in 3D
or even 2D experiments. Anisotropic particles should also enable exploration
of the dynamic heterogeneity in the rotational degrees of freedom. Moreover
the glass transitions of monodispersed particles have not yet been studied in
2D. It is well known that monodispersed spheres can be quenched to a glass in
3D, but hardly in 2D even at the fastest accessible quenching rate. Hence
bidispersed or highly polydispersed spheres have been used in experiments
Konig05 ; Zhang09 ; Yunker09 , simulations Speedy99 and theory Bayer07 for
2D glasses. In contrast, we found that monodispersed ellipsoids of
intermediate aspect ratio are excellent glass formers in 2D because their
shape can effectively frustrate crystallization and nematic order.
Here we investigate the glass transition in monolayers of colloidal ellipsoids
using video microscopy. We measured the translational and rotational
relaxation times, the non-Gaussian parameter of the distribution of
displacements, and the clusters of cooperative fastest-moving particles. These
results consistently showed that the glass transitions of rotational and
translational motions occur in two different area fractions, defining an
intermediate orientational glass phase.
The ellipsoids were synthesized by stretching polymethyl methacrylate (PMMA)
spheres Ho93 ; Han09 . They had a small polydispersity of 5.6% with the semi-
long axis $a=3.33~{}\mu$m and the semi-short axes $b=c=0.56~{}\mu$m. 3mM
sodium dodecyl sulfate (SDS) was added to stabilize ellipsoids and the $>3$ mM
ionic strength in the aqueous suspension made ellipsoids moderately hard
particles. A monolayer of ellipsoids was strongly confined between two glass
walls Han09 . Light interference measurements showed that the wall separation
varied by only $\sim$30 nm per 1 mm Han09 , so the walls could be considered
as parallel within the field of view. The area fraction is $\phi\equiv\pi
ab\rho$, where $\rho$ is the number density averaged over all video frames.
Twelve densities were measured in the range $0.20\leq\phi\leq 0.81$. During
the three- to six-hour measurements at each $\phi$, no drift flow or density
change was observed. The thermal motion of the ellipsoids was recorded using a
charge-coupled device camera resolving 1392$\times$1040 pixels at 1 frame per
second (fps) for the highest five concentrations and at 3 fps for lower
concentrations. The center-of-mass positions and orientations of individual
ellipsoids were tracked using our image processing algorithm Zheng10 . The
angular resolution was 1∘ and the spatial resolutions were 0.12 $\mu$m and
0.04 $\mu$m along the long and the short axes respectively. More experimental
details are in the Supplemental Material (SM).
Figure 1: (color online) (a) The self-intermediate scattering function
$F_{s}(q,t)$ at $q_{m}=2.3~{}\mu$m-1 and (b) the orientational correlation
$L_{4}(t)$ for different area fractions. (c) The exponent $\beta$ of the
fitting function $e^{-(t/\tau)^{\beta}}$ for the long-time $F_{s}(q_{m},t)$
and $L_{4}(t)$. (d) The fitted relaxation time
$\tau(\phi)\sim(\phi_{c}-\phi)^{-\gamma}$. Solid symbols: different choices of
$q$ in $F(q,t)$ for the translational motion. Open symbols: different choices
of $n$ in $L_{n}(t)$ for the orientational motion.
At high densities the ellipsoids spontaneously formed small pseudo-nematic
domains with branch-like structures each involving about $10^{2}$ particles,
see Fig. S1 of the SM. The translational relaxation was characterized by the
self-intermediate scattering function
$F_{s}(q,t)\equiv\langle\sum_{j=1}^{N}e^{i\mathbf{q}\cdot(\mathbf{x}_{j}(t)-\mathbf{x}_{j}(0))}\rangle/N$
where $\mathbf{x}_{j}(t)$ is the position of ellipsoid $j$ at time $t$, $N$ is
the total number of particles, $\mathbf{q}$ is the scattering vector and
$\langle~{}\rangle$ denotes a time average. In Fig. 1(a), we chose
$q_{m}=2.3~{}\mu$m-1 measured from the first peak position in the structure
factor at high density. The rotational relaxation can be characterized by the
$n^{\textrm{th}}$ order of the orientational correlation function
$L_{n}(t)\equiv\langle\sum_{j=1}^{N}\cos{n(\theta_{j}(t)-\theta_{j}(0))}\rangle/N$
where $n$ is a positive integer and $\theta_{j}$ is the orientation of
ellipsoid $j$. $L_{n}(t)$ decays faster for larger $n$, and different choices
of $n$ yield the same glass transition point. We chose $n=4$ in Fig. 1(b) so
that $L_{n}(t)$ is better displayed within our measured time scales. At high
$\phi$, both $F_{s}(q_{m},t)$ and $L_{4}(t)$ develop two-step relaxations, a
characteristic of approaching the glass transition. The
short-time $\beta$-relaxation corresponds to motion within cages of
neighboring particles, and the long-time $\alpha$-relaxation reflects
structural rearrangement involving a series of cage breakings. According to
mode-coupling theory (MCT), the $\alpha$-relaxation follows
$e^{-(t/\tau)^{\beta}}$. Figure 1(c) shows that $\beta$ decreases with
density, indicating dynamic slowing down upon supercooling Kawasaki07 ;
Michele07 .
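For concreteness, the two correlators can be estimated directly from the tracked trajectories. The following is a minimal Python/NumPy sketch of such an estimate; the array layouts, the averaging over in-plane scattering directions, and all variable names are illustrative assumptions rather than part of the original analysis:

```python
import numpy as np

def self_intermediate_scattering(positions, q=2.3, n_dirs=20):
    """Estimate F_s(q,t) = <exp(i q.(x_j(t) - x_j(0)))>, averaged over particles,
    time origins and n_dirs in-plane directions of the scattering vector.

    positions: array of shape (T, N, 2), tracked centers of mass in microns.
    q: magnitude of the scattering vector in 1/micron (q_m = 2.3 here).
    """
    T = positions.shape[0]
    angles = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
    q_vecs = q * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (n_dirs, 2)
    fs = np.empty(T)
    for t in range(T):
        disp = positions[t:] - positions[:T - t]   # displacements over lag t, all origins
        phase = disp @ q_vecs.T                    # (T-t, N, n_dirs)
        fs[t] = np.cos(phase).mean()               # real part of the average
    return fs

def orientational_correlation(theta, n=4):
    """Estimate L_n(t) = <cos n(theta_j(t) - theta_j(0))>; theta has shape (T, N), in radians."""
    T = theta.shape[0]
    return np.array([np.cos(n * (theta[t:] - theta[:T - t])).mean() for t in range(T)])
```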
MCT predicts that the relaxation time $\tau(\phi)$ diverges algebraically
approaching the critical point $\phi_{c}$:
$\tau(\phi)\sim(\phi_{c}-\phi)^{-\gamma}$ where $\gamma=1/(2a)+1/(2b)$ Gotze92
. Here $a$ and $b$ are the exponents in the critical-decay law
$F_{s}(q,t)=f_{q}^{c}+h_{q}t^{-a}$ and the von Schweidler law
$F_{s}(q,t)=f_{q}^{c}-h_{q}t^{b}$ at the initial stage of the
$\beta$-relaxation and the crossover time to the $\alpha$-relaxation
respectively. The fitted $a$ or $b$ is almost a constant at different $\phi$,
indicating that $F_{s}(q,t)$ can collapse onto a master curve in the
appropriate time regime. This demonstrates that $F_{s}(q,t)$ can be separated
into a $q$-dependent and a $t$-dependent part Gotze92 . Interestingly,
$L_{n}(t)$ can similarly collapse. The fitted $a_{T}=0.3\pm 0.02$ and
$b_{T}=0.63\pm 0.02$ for $F_{s}(q_{m},t)$ and $a_{\theta}=0.32\pm 0.02$ and
$b_{\theta}=0.65\pm 0.02$ for $L_{4}(t)$ yield $\gamma_{T}=2.45\pm 0.05$ and
$\gamma_{\theta}=2.33\pm 0.05$ for the translational and orientational
correlations respectively. These values are close to the $\gamma_{T}=2.3$
measured for 3D ellipsoids Pfleiderer08 . In Fig. 1(d), $\tau^{-1/\gamma}$ is
linear in $\phi$ for different choices of $q$ and $n$. Interestingly, all the
scalings show that the glass transitions are at $\phi_{c}^{\theta}=0.72\pm
0.01$ for rotational motion and $\phi_{c}^{{}_{T}}=0.79\pm 0.01$ for
translational motion. This indicates three distinct phases: liquid
($\phi<0.72$), an intermediate orientational glass which is liquid-like in its
translational degrees of freedom but glassy in its rotational degrees of
freedom ($0.72<\phi<0.79$), and the glass state for both degrees of freedom
($\phi>0.79$).
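The extrapolation in Fig. 1(d) amounts to a linear fit of $\tau^{-1/\gamma}$ versus $\phi$ whose zero crossing gives $\phi_{c}$. A hedged sketch of that step (function and variable names are illustrative only):

```python
import numpy as np

# gamma from the critical-decay and von Schweidler exponents, gamma = 1/(2a) + 1/(2b):
gamma_T = 1.0 / (2 * 0.30) + 1.0 / (2 * 0.63)   # ~2.46, consistent with the quoted 2.45 +/- 0.05

def mct_phi_c(phi, tau, gamma):
    """MCT scaling tau ~ (phi_c - phi)^(-gamma): tau**(-1/gamma) is linear in phi
    and vanishes at phi_c, so the zero crossing of a linear fit estimates phi_c."""
    y = np.asarray(tau, dtype=float) ** (-1.0 / gamma)
    slope, intercept = np.polyfit(np.asarray(phi, dtype=float), y, 1)
    return -intercept / slope

# e.g. phi_c_T = mct_phi_c(area_fractions, alpha_relaxation_times, gamma_T)
```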
Figure 2: (color online) (a) The non-Gaussian parameters of translational
displacements along the long axis ($\alpha_{2}^{||}(t)$, solid symbols) and
the short axis ($\alpha_{2}^{\perp}(t)$, open symbols).
$\phi=0.70,0.74,0.77,0.81$ as labeled in the figures. (b) The non-Gaussian
parameters of rotational displacements.
Besides the extrapolations in Fig. 1(d), the existence of the orientational
glass phase was verified from the non-Gaussian parameters
$\alpha_{2}(t)=\langle\Delta x^{4}\rangle/(3\langle\Delta x^{2}\rangle^{2})-1$
of particle displacements $\Delta x$ during time $t$ Weeks00 . In supercooled
liquids, the distribution of $\Delta x$ is Gaussian at short and long times
because the motions are diffusive, but it becomes non-Gaussian with long tails
at the intermediate times due to cooperative out-of-cage displacements Weeks00
; Kegel00 ; Donati99 . This behavior is reflected in the peak of
$\alpha_{2}(t)$, see Fig. 2. As $\phi$ increases, the peak rises and shifts
towards a longer time, indicating growing dynamic heterogeneity on approaching
the glass transitions. In contrast, the glass phase lacks cooperative out-of-
cage motions, so $\alpha_{2}(t)$ exhibits no distinct peak and declines with
time Weeks00 . Such a sharp change has been regarded as a characteristic of a
glass transition Weeks00 . Figure 2 clearly shows the glass transitions at
$\phi_{c}^{\theta}=0.72\pm 0.02$ for rotational motion and at
$\phi_{c}^{{}_{T}}=0.79\pm 0.02$ for translational motion. In Fig. 2(a),
$\alpha_{2}^{||}(t)$ is always greater than the corresponding
$\alpha_{2}^{\perp}(t)$, indicating that the translational relaxations and
cooperative out-of-cage motions are mainly along the long axes of the
ellipsoids.
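For reference, a minimal sketch of the non-Gaussian parameter computation; the input is assumed to be a (T, N) array of a single displacement coordinate (for example, positions projected on the long or short axis, or the orientation angle), which is our illustrative choice rather than a prescription from the experiment:

```python
import numpy as np

def non_gaussian_parameter(x):
    """alpha_2(t) = <dx^4> / (3 <dx^2>^2) - 1 for a one-dimensional coordinate.

    x: array of shape (T, N); dx are displacements of that coordinate over
    lag t, pooled over all particles and time origins.
    """
    T = x.shape[0]
    alpha2 = np.zeros(T)
    for t in range(1, T):
        dx = x[t:] - x[:T - t]
        m2 = np.mean(dx ** 2)
        m4 = np.mean(dx ** 4)
        alpha2[t] = m4 / (3.0 * m2 ** 2) - 1.0
    return alpha2
```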
Figure 3: (color online) The spatial distributions of the fastest-moving 8% of
the particles (labeled in colors) in translational (a, c, e) and rotational
(b, d, f) motions. Ellipsoids in the same cluster have the same color. (a, b)
The same frame at $\phi=0.70$ (supercooled liquid); (c, d) The same frame at
$\phi=0.77$ (orientational glass); (e, f) The same frame at $\phi=0.81$
(glass) with $\sim$5500 particles.
The two glass transitions can be further confirmed from the spatial
distribution of the fastest-moving particles which characterizes the
structural relaxation and dynamic heterogeneity Weeks00 . In Fig. 3, the
fastest-moving 8% of the particles are labeled in colors because the non-
Gaussian long tail of the distribution of $\Delta x(t^{*})$ covers about 8% of
the population. Here $t^{*}$ corresponds to the maximum of $\alpha_{2}$
Weeks00 . Different choices of $t$ and the percentage yield similar
results. Neighboring fastest-moving ellipsoids form clusters and are labeled
using the same color. Here two ellipsoids are defined as neighbors if they
overlap after being expanded by 1.5 times and their closest distance does not
intersect a third particle. In the supercooled liquid, most fast particles
were strongly spatially correlated and formed large extended clusters, see
Fig. 3 and Fig. S3 in the SM. This demonstrates that the $\alpha$-relaxation
occurs by cooperative particle motion in both the translational and rotational
degrees of freedom: when one particle moves, another particle moves closely
following the first. The colloidal glasses, in contrast, show no discernible
$\alpha$-relaxation, and the fastest particles in $\beta$-relaxation are
randomly dispersed without forming large clusters Weeks00 , as observed in the
3D glass transition of colloidal spheres Weeks00 . Figure 3 clearly depicts
three regimes: both the translational and rotational fast particles are
distributed heterogeneously with large clusters at $\phi<0.72$; the rotational
fast particles are dispersed homogeneously while the translational fast
particles form large clusters at $0.72<\phi<0.79$; and both types of fast
particles are dispersed homogeneously at $\phi>0.79$.
The spatial distributions of translational and rotational fast-particle
clusters were anticorrelated. Figures 3(a,c) show that most translational fast
particles belonged to a few large ribbon-like clusters aligned with their long
axes within the pseudo-nematic domains. In contrast, the clusters of
rotational fast particles formed branch-like structures extending over several
small domains around the domain boundaries, see Fig. 3(b). Fast rotational
ellipsoids moved between domains by cooperative rotational motion. This
demonstrates that the nematic order within a domain facilitates translational
relaxation while the orientational disorder near domain boundaries promotes
rotational relaxation. Fast translational particles are responsible for the
out-of-cage diffusion, while fast rotational particles are responsible for
domain transformations such as splitting, merging and rotating. All the phases
in Figs. 3(a-f) contain some isolated fast translational and rotational
particles; they are mainly distributed at the domain boundaries with random
orientations.
Figure 4: (color online) The probability distribution functions for the
cluster size of (a) translational and (b) rotational fastest-moving particles.
The lines are the best fits of $P(N_{c})\sim N_{c}^{-\mu}$. (c) The fitted
exponents $\mu^{\theta}$ for rotational motions and $\mu^{{}_{T}}$ for
translational motions. The vertical dotted and dashed lines represent the
glass transitions for rotational and translational motions respectively. (d)
The weighted mean cluster size $\langle
N_{c}\rangle\sim(\phi_{c}-\phi)^{-\eta}$ where $\phi_{c}^{\theta}=0.71$ and
$\phi_{c}^{T}=0.79$.
The cluster sizes of the fast particles, $N_{c}$, exhibit a power-law
distribution $P(N_{c})\sim N_{c}^{-\mu}$ as shown in Figs. 4(a, b). The fitted
exponents $\mu$ for translational and rotational motions change dramatically
near their respective glass transitions, see Fig. 4(c). The
$\mu^{\theta,T}=2.0\pm 0.2$ for supercooled liquids is close to the
$\mu^{{}_{T}}=2.2\pm 0.2$ estimated for hard spheres Weeks00 and the
$\mu^{{}_{T}}=1.9\pm 0.1$ for Lennard-Jones particles in 3D Donati99 , while
the $\mu^{\theta,T}=3.2\pm 0.1$ for glasses is close to the $\mu^{{}_{T}}=3.1$
estimated for hard spheres in 3D Weeks00 . Hence $\mu\simeq 2.5$ might
characterize such glass transitions in general. Figure 4(d) shows the weighted
mean cluster size $\langle N_{c}\rangle=\sum N_{c}^{2}P(N_{c})/\sum
N_{c}P(N_{c})$ Weeks00 ; Donati99 at different densities. Both $\langle
N_{c}^{\theta}\rangle$ and $\langle N_{c}^{{}_{T}}\rangle$ diverge on
approaching the corresponding $\phi_{c}$: $\langle
N_{c}\rangle\sim(\phi_{c}-\phi)^{-\eta}$ with fitted $\eta^{\theta}=0.81$ and
$\eta^{{}_{T}}=0.75$, indicating growing cooperative regions of mobile
particles. Similar scaling and $\eta^{{}_{T}}$s have been observed in a
Lennard-Jones system Donati99 , but the mechanism is not clear.
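The cluster statistics reduce to simple bookkeeping once each fastest-moving particle carries a cluster label. A sketch, under the assumption that the clustering itself (the 1.5-times-expansion neighbour criterion) has already been performed and is encoded as integer labels; all names are illustrative:

```python
import numpy as np
from collections import Counter

def cluster_size_stats(labels):
    """Cluster-size statistics for one frame.

    labels: 1D array of cluster ids, one per fastest-moving particle; particles
    in the same cluster share an id.
    Returns the weighted mean cluster size <N_c> = sum N_c^2 P / sum N_c P and a
    least-squares estimate of the exponent mu in P(N_c) ~ N_c^(-mu).
    """
    sizes = np.array(list(Counter(labels).values()))
    nc, counts = np.unique(sizes, return_counts=True)
    p = counts / counts.sum()                        # P(N_c)
    mean_nc = np.sum(nc ** 2 * p) / np.sum(nc * p)   # weighted mean cluster size
    mu = -np.polyfit(np.log(nc), np.log(p), 1)[0]    # slope of log P vs log N_c
    return mean_nc, mu
```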
We did not observe the nematic phase or smectic domains found in 3D
spherocylinders Ni10 because (1) the elliptical shape facilitates particles
changing orientation and forming branch-like structures at high densities
Narayan06 ; (2) the 5.6% polydispersity promotes glass formation; and (3) long-
wavelength fluctuations are stronger in 2D than in 3D and can more easily
break long-range order, as described by the Mermin-Wagner theorem. Ellipsoids
with $p\sim 6$ appear to be good glass formers whose glass transition readily
preempts the isotropic-nematic (IN) phase transition Cuesta90 . In contrast, the glass
transition can be preempted by crystallization for $p\simeq 1$ in 2D, or by an
IN transition for rods with $p\gtrsim 25$ in 3D Yatsenko08 .
All of the measurements consistently showed that the glass transitions for
ellipsoids with $p=6$ confined between two walls are at
$\phi_{c}^{\theta}=0.72$ for rotational motion and at $\phi_{c}^{{}_{T}}=0.79$
for translational motion. For longer ellipsoids with $p=9$ ($a=5.9~{}\mu$m,
$b=c=0.65~{}\mu$m), $\phi_{c}^{\theta}=0.60\pm 0.02$ and
$\phi_{c}^{{}_{T}}=0.72\pm 0.02$ were observed in the two-wall confinement.
This suggests that the intermediate regime between $\phi_{c}^{\theta}$ and
$\phi_{c}^{{}_{T}}$ increases with the aspect ratio, which could be the reason
why such an intermediate regime has not been observed in previous 3D
simulations of ellipsoids with small aspect ratios Pfleiderer08 ; Chong05 . We
also observed the two-step glass transitions in monolayers of heavy ellipsoids
sedimented near one wall, but the transition densities increased by 3% in area fraction
because of the stronger out-of-plane fluctuations.
We conclude that colloidal ellipsoids in a 2D system exhibit two glass
transitions with an intermediate orientational glass. This behavior has been
predicted in 3D by MMCT but not studied in 2D before. The two glass
transitions in the rotational and translational degrees of freedom correspond
to inter-domain freezing and inner-domain freezing respectively. The
orientational glass regime appears to increase with the aspect ratio.
Approaching the glass transitions, the structural relaxation time and the mean
cluster size for cooperative motion diverge, typical features of a glass
transition Gotze91 ; Gotze92 ; Donati99 . Interestingly, the translational and
orientational cooperative motions are anticorrelated in space, which has not
been predicted in theory or simulation. A similar two-step glass transition
has been observed in a 3D liquid-crystal system and explained as the freezing
of the orientations of the pseudo-nematic domains and the freezing of the
translational motion within domains Cang03 . Here we directly observed the
conjectured pseudo-nematic domains in ref. Cang03 . These results at single-
particle resolution shed new light on the formation of molecular glasses,
especially at low dimensionality.
We thank Ning Xu and Penger Tong for the helpful discussion. This work was
supported by the HKUST grant RPC07/08.SC04 and by GRF grant 601208.
## References
* (1) E. R. Weeks, J. C. Crocker, A. C. Levitt, A. Schofield, and D. A. Weitz, Science 287, 627 (2000).
* (2) W. Götze and L. Sjögren, Phys. Rev. A 43, 5442 (1991).
* (3) W. van Megen and S. M. Underwood, Phys. Rev. Lett. 70, 2766 (1993).
* (4) W. Kegel et al., Science 287, 290 (2000).
* (5) Z. Zhang, et al. Nature 459, 230 (2009).
* (6) P. J. Yunker, et al. Phys. Rev. E 83, 011403 (2011).
* (7) F. Stillinger and J. Hodgdon, Phys. Rev. E 50, 2064 (1994); V. Ilyin, E. Lerner, T.-S. Lo, and I. Procaccia, Phys. Rev. Lett. 99, 135702 (2007).
* (8) G. Yatsenko and K. Schweizer, Langmuir 24, 7474 (2008).
* (9) M. Letz, R. Schilling, and A. Latz, Phys. Rev. E 62, 5173 (2000).
* (10) R. Schilling and T. Scheidsteger, Phys. Rev. E 56, 2932 (1997).
* (11) C. De Michele, R. Schilling, and F. Sciortino, Phys. Rev. Lett. 98, 265702 (2007).
* (12) P. Pfleiderer, K. Milinkovic, and T. Schilling, Europhys. Lett. 84, 16003 (2008).
* (13) T. Franosch, M. Fuchs, W. Götze, M. R. Mayr, and A. P. Singh, Phys. Rev. E 56, 5659 (1997).
* (14) R. Schilling, J. Phys.: Cond. Matt. 12, 6311 (2000).
* (15) H. König, R. Hund, K. Zahn, and G. Maret, Eur. Phys. J. E 18, 287 (2005).
* (16) P. Yunker, Z. Zhang, K. B. Aptowicz, and A. G. Yodh, Phys. Rev. Lett. 103, 115701 (2009); S. Mazoyer, F. Ebert, G. Maret, and P. Keim, Europhys. Lett. 88, 66004 (2009).
* (17) R. Speedy, J. Chem. Phys. 110, 4559 (1999).
* (18) M. Bayer, et al. Phys. Rev. E 76, 011508 (2007); D. Hajnal, J. Brader, and R. Schilling, Phys. Rev. E 80, 021503 (2009); D. Hajnal, M. Oettel, and R. Schilling, J. Non-Cryst. Solids 357, 302 (2011).
* (19) C. Ho, A. Keller, J. Odell, and R. Ottewill, Colloid. Polym. Sci. 271, 469 (1993).
* (20) Y. Han, A. Alsayed, M. Nobili, and A. G. Yodh, Phys. Rev. E 80, 011403 (2009).
* (21) Z. Zheng and Y. Han, J. Chem. Phys. 133, 124509 (2010).
* (22) T. Kawasaki, T. Araki, and H. Tanaka, Phys. Rev. Lett. 99, 215701 (2007).
* (23) W. Götze and L. Sjögren, Rep. Prog. Phys. 55, 241 (1992); S. P. Das, Rev. Mod. Phys. 76, 785 (2004).
* (24) C. Donati, S. C. Glotzer, P. H. Poole, W. Kob, and S. J. Plimpton, Phys. Rev. E 60, 3107 (1999).
* (25) R. Ni, S. Belli, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 105, 088302 (2010).
* (26) V. Narayan, N. Menon, and S. Ramaswamy, J. Stat. Mech. Theor. Exp. 01005 (2006).
* (27) J. A. Cuesta and D. Frenkel, Phys. Rev. A 42, 2126 (1990).
* (28) S.-H. Chong, A. J. Moreno, F. Sciortino, and W. Kob, Phys. Rev. Lett. 94, 215701 (2005).
* (29) H. Cang, J. Li, V. Novikov, and M. Fayer, J. Chem. Phys. 119, 10421 (2003).
|
arxiv-papers
| 2011-05-29T06:33:13 |
2024-09-04T02:49:19.145976
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhongyu Zheng, Feng Wang and Yilong Han",
"submitter": "Yilong Han",
"url": "https://arxiv.org/abs/1105.5760"
}
|
1105.5821
|
# Identifying Hosts of Families of Viruses:
A Machine Learning Approach
Anil Raj Department of Applied Physics and Applied Mathematics
Columbia University, New York ar2384@columbia.edu Michael Dewar Department
of Applied Physics and Applied Mathematics
Columbia University, New York Gustavo Palacios Center for Infection and
Immunity
Mailman School of Public Health
Columbia University, New York Raul Rabadan Department of Biomedical
Informatics
Center for Computational Biology and Bioinformatics
Columbia University College of Physicians and Surgeons, New York Chris H.
Wiggins Department of Applied Physics and Applied Mathematics
Center for Computational Biology and Bioinformatics
Columbia University, New York
###### Abstract
Identifying viral pathogens and characterizing their transmission is essential
to developing effective public health measures in response to a pandemic.
Phylogenetics, though currently the most popular tool used to characterize the
likely host of a virus, can be ambiguous when studying species very distant to
known species and when there is very little reliable sequence information
available in the early stages of the pandemic. Motivated by an existing
framework for representing biological sequence information, we learn sparse,
tree-structured models, built from decision rules based on subsequences, to
predict viral hosts from protein sequence data using popular discriminative
machine learning tools. Furthermore, the predictive motifs robustly selected
by the learning algorithm are found to show strong host-specificity and occur
in highly conserved regions of the viral proteome.
viral host, machine learning, adaboost, alternating decision tree, mismatch
k-mers
## 1 Introduction
Emerging pathogens constitute a continuous threat to our society, as it is
notoriously difficult to perform a realistic assessment of optimal public
health measures when little information on the pathogen is available. Recent
outbreaks include the West Nile virus in New York (1999); SARS coronavirus in
Hong Kong (2002-2003); LUJO virus in Lusaka (2008); H1N1 influenza pandemic
virus in Mexico and the US (2009); and cholera in Haiti (2010). In all these
cases, an outbreak of unusual clinical diagnoses triggered a rapid response,
and an essential part of this response is the accurate identification and
characterization of the pathogen.
Sequencing is becoming the most common and reliable technique to identify
novel organisms. For instance, LUJO was identified as a novel, very distinct
virus after the sequence of its genome was compared to other arenaviruses
Briese2009 . The genome of an organism is a unique fingerprint that reveals
many of its properties and past history. For instance, arenaviruses are
zoonotic agents usually transmitted from rodents.
Another promising area of research is metagenomics, in which DNA and RNA
samples from different environments are sequenced using shotgun approaches.
Metagenomics is providing an unbiased understanding of the different species
that inhabit a particular niche. Examples include the human microbiome and
virome, and the Ocean metagenomics collection Williamson2008 . It has been
estimated that there are more than 600 bacterial species living in the mouth
but that only 20% have been characterized.
Pathogen identification and metagenomic analysis point to an extremely rich
diversity of unknown species, where partial genomic sequence is the only
information available. The main aim of this work is to develop approaches that
can help infer characteristics of an organism from subsequences of its genomic
sequence where primary sequence information analysis does not allow us to
identify its origin. In particular, our work will focus on predicting the host
of a virus from the viral genome.
The most common approach to deduce a likely host of a virus from the viral
genome is sequence / phylogenetic similarity (i.e., the most likely host of a
particular virus is the one that is infected by related viral species).
However, similarity measures based on genomic / protein sequence or protein
structure could be misleading when dealing with species very distant to known,
annotated species. Other approaches are based on the fact that viruses undergo
mutational and evolutionary pressures from the host. For instance, viruses
could adapt their codon bias for a more efficient interaction with the host
translational machinery or they could be under pressure of deaminating enzymes
(e.g. APOBEC3G in HIV infection). All these factors imprint characteristic
signatures in the viral genome. Several techniques have been developed to
extract these patterns (e.g., nucleotide and dinucleotide compositional
biases, and frequency analysis techniques Touchon2008 ). Although most of
these techniques could reveal an underlying biological mechanism, they lack
sufficient accuracy to provide reliable assessments. A relatively similar
approach to the one presented here is DNA barcoding. Genetic barcoding
identifies conserved genomic structures that contain the necessary information
for classification.
Using contemporary machine learning techniques, we present an approach to
predicting the hosts of unseen viruses, based on the amino acid sequences of
proteins of viruses whose hosts are well known. Using sequence and host
information of known viruses, we learn a multi-class classifier composed of
simple sequence-motif based questions (e.g., does the viral sequence contain
the motif ‘DALMWLPD’?) that achieves high prediction accuracies on held-out
data. Prediction accuracy of the classifier is measured by the area under the
ROC curve, and is compared to a straightforward nearest-neighbour classifier.
Importantly (and quite surprisingly), a post-processing study of the highly
predictive sequence-motifs selected by the algorithm identifies strongly
conserved regions of the viral genome, facilitating biological interpretation.
## 2 Methods
Our overall aim is to discover aspects of the relationship between a virus and
its host. Our approach is to develop a model that is able to predict the host
of a virus given its sequence; those features of the sequence that prove most
useful are then assumed to have a special biological significance. Hence, an
ideal model is one that is parsimonious and easy to interpret, whilst
incorporating combinations of biologically relevant features. In addition, the
interpretability of the results is improved if we have a simple learning
algorithm which can be straightforwardly verified.
Formally, for a given virus family, we learn a function
$g:\mathcal{S}\rightarrow\mathcal{H}$, where $\mathcal{S}$ is the space of
viral sequences and $\mathcal{H}$ is the space of viral hosts. The space of
viral sequences $\mathcal{S}$ is generated by an alphabet $\mathcal{A}$ where,
$|\mathcal{A}|=4$ (genome sequence) or $|\mathcal{A}|=20$ (primary protein
sequence).
Defining a function on a sequence requires representation of the sequence in
some feature space. Below, we specify a representation
$\phi:\mathcal{S}\rightarrow\mathcal{X}$, where a sequence $s\in\mathcal{S}$
is mapped to a vector of counts of subsequences
$x\in\mathcal{X}\subset\mathbb{N}_{0}^{D}$. Given this representation, we have
the well-posed problem of finding a function
${f}:\mathcal{X}\rightarrow\mathcal{H}$ built from a space of simple binary-
valued functions.
### 2.1 Collected Data
The collected data consist of $N$ genome sequences or primary protein
sequences, denoted $s_{1}\ldots s_{N}$, of viruses whose host class, denoted
$h_{1}\ldots h_{N}$ is known. For example, these could be ‘plant’,
‘vertebrate’ and ‘invertebrate’. The label for each virus is represented
numerically as $\mathbf{y}\in\mathcal{Y}=\\{0,1\\}^{L}$ where
$[\mathbf{y}]_{l}=1$ if the index of the host class of the virus is $l$, and
where $L$ denotes the number of host classes. Note that this representation
allows for a virus to have multiple host classes. Here and below we use
boldface variables to indicate vectors and square brackets to denote the
selection of a specific element in the vector, e.g., $[\mathbf{y}_{n}]_{l}$ is
the $l^{\mathrm{th}}$ element of the $n^{\mathrm{th}}$ label vector.
### 2.2 Mismatch Feature Space
A possible feature space representation of a viral sequence is the vector of
counts of exact matches of all possible $k$-length subsequences ($k$-mers).
However, due to the high mutation rate of viral genomes Duffy2008 ; Pybus2009
, a predictive function learned using this simple representation of counts
would fail to generalize well to new viruses. Instead, motivated by Leslie2004
, we count not just the presence of an individual $k$-mer but also the
presence of subsequences within $m$ mismatches from that $k$-mer. The
mismatch- or $m$-neighborhood of a $k$-mer $\alpha$, denoted
$\mathcal{N}^{m}_{\alpha}$, is the set of all $k$-mers with a Hamming distance
Hamming1950 at most $m$ from it, as shown in Table 1. Let
$\delta_{\mathcal{N}^{m}_{\alpha}}$ denote the indicator function of the
$m$-neighbourhood of $\alpha$ such that
$\delta_{\mathcal{N}^{m}_{\alpha}}(\beta)=\left\\{\begin{array}[]{rl}1&\mathrm{if}~{}\beta\in\mathcal{N}^{m}_{\alpha}\\\
0&\mathrm{otherwise}.\end{array}\right.$ (1)
Table 1: Mismatch feature space representation of a segment of a protein sequence …AQGPRIYDDTCQHPSWWMNFEYRGSP…

$m=0$ | | $m=1$ | | $m=2$ |
---|---|---|---|---|---
kmer | count | kmer | count | kmer | count
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
DQGPS | 0 | DQGPS | 0 | DQGPS | 1
CQGPS | 0 | CQGPS | 1 | CQGPS | 1
CQHPS | 1 | CQHPS | 1 | CQHPS | 1
CQIPS | 0 | CQIPS | 1 | CQIPS | 1
DQIPS | 0 | DQIPS | 0 | DQIPS | 1
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
APGPQ | 0 | APGPQ | 0 | APGPQ | 1
AQGPQ | 0 | AQGPQ | 1 | AQGPQ | 1
AQGPR | 1 | AQGPR | 1 | AQGPR | 1
AQGPS | 0 | AQGPS | 1 | AQGPS | 1
ASGPS | 0 | ASGPS | 0 | ASGPS | 1
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
ARGMP | 0 | ARGMP | 0 | ARGMP | 1
ARGSP | 0 | ARGSP | 1 | ARGSP | 1
YRGSP | 1 | YRGSP | 1 | YRGSP | 1
WRGSP | 0 | WRGSP | 1 | WRGSP | 1
WRGNP | 0 | WRGNP | 0 | WRGNP | 1
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
We can then define, for any possible $k$-mer $\beta$, the mapping $\phi$ from
the sequence $s$ onto a count of the elements in $\beta$’s $m$-neighbourhood
as
$\phi_{k,m}(s,\beta)=\sum_{\begin{subarray}{c}\alpha\in s\\\
|\alpha|=k\end{subarray}}\delta_{\mathcal{N}^{m}_{\alpha}}(\beta).$ (2)
Finally, the $d^{\mathrm{th}}$ element of the feature vector for a given
sequence is then defined elementwise as
$[\mathbf{x}]_{d}=\phi_{k,m}(s,\beta_{d})$ (3)
for every possible $k$-mer $\beta_{d}\in\mathcal{A}^{k}$, where $d=1\dots D$
and $D=|\mathcal{A}^{k}|$.
Note that when $m=0$, $\phi_{k,0}$ exactly captures the simple count
representation described earlier. This biologically realistic relaxation
allows us to learn discriminative functions that better capture rapidly
mutating and yet functionally conserved regions in the viral genome,
facilitating generalization to new viruses.
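A direct (non-optimized) implementation of this feature map simply enumerates the $k$-mers of a sequence and spreads each one over its $m$-neighbourhood. The sketch below uses the 20-letter amino acid alphabet and returns the sparse count vector as a dictionary; the function names and defaults are our own illustrative choices:

```python
from collections import Counter
from itertools import combinations, product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def neighborhood(alpha, m, alphabet=AMINO_ACIDS):
    """All k-mers within Hamming distance m of the k-mer alpha (alpha included)."""
    k = len(alpha)
    neighbors = set()
    for n_sub in range(m + 1):
        for positions in combinations(range(k), n_sub):
            for letters in product(alphabet, repeat=n_sub):
                beta = list(alpha)
                for pos, letter in zip(positions, letters):
                    beta[pos] = letter
                neighbors.add("".join(beta))
    return neighbors

def mismatch_features(s, k=5, m=1):
    """Sparse mismatch feature map {beta: phi_{k,m}(s, beta)}.

    Each k-mer alpha occurring in s contributes one count to every beta in its
    m-neighbourhood; k-mers absent from the dictionary have count zero.
    """
    counts = Counter()
    for i in range(len(s) - k + 1):
        counts.update(neighborhood(s[i:i + k], m))
    return counts

# e.g. mismatch_features("AQGPRIYDDTCQHPSWWMNFEYRGSP", k=5, m=1)["CQGPS"] -> 1, as in Table 1
```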
### 2.3 Alternating Decision Trees
Given this representation of the data, we aim to learn a discriminative
function that maps features $\mathbf{x}$ onto host class labels $\mathbf{y}$,
given some training data
$\\{(\mathbf{x}_{1},\mathbf{y}_{1}),\dots,(\mathbf{x}_{N},\mathbf{y}_{N})\\}$.
We want the discriminative function to output a measure of “confidence”
Schapire1999 in addition to a predicted host class label. To this end, we
learn on a class of functions
$\mathbf{f}:\mathcal{X}\rightarrow\mathbb{R}^{L}$, where the indices of
positive elements of $\mathbf{f}(\mathbf{x})$ can be interpreted as the
predicted labels to be assigned to $\mathbf{x}$ and the magnitudes of these
elements to be the confidence in the predictions.
A simple class of such real-valued discriminative functions can be constructed
from the linear combination of simple binary-valued functions
$\psi:\mathcal{X}\rightarrow\\{0,1\\}$. The functions $\psi$ can, in general,
be a combination of single-feature decision rules or their negations:
$\displaystyle\mathbf{f}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\sum_{p=1}^{P}\mathbf{a}_{p}\psi_{p}(\mathbf{x})$ (4)
$\displaystyle\psi_{p}(\mathbf{x})$ $\displaystyle=$ $\displaystyle\prod_{d\in
S_{p}}\mathbb{I}(x_{d}\geq\theta_{d})$ (5)
where $\mathbf{a}_{p}\in\mathbb{R}^{L}$, $P$ is the number of binary-valued
functions, $\mathbb{I}(\cdot)$ is 1 if its argument is true, and zero
otherwise, $\theta\in\\{0,1,\dots,\Theta\\}$, where
$\Theta=\max_{d,n}[\mathbf{x}_{n}]_{d}$, and $S_{p}$ is a subset of feature
indices. This formulation allows functions to be constructed using
combinations of simple rules. For example, we could define a function $\psi$
as the following
$\psi(\mathbf{x})=\mathbb{I}(x_{5}\geq 2)\times\lnot\mathbb{I}(x_{11}\geq
1)\times\mathbb{I}(x_{1}\geq 4)$ (6)
where $\lnot\mathbb{I}(\cdot)=1-\mathbb{I}(\cdot)$.
Alternatively, we can view each function $\psi_{p}$ to be parameterized by a
vector of thresholds $\boldsymbol{\theta}_{p}\in\\{0,1,\dots,\Theta\\}^{D}$,
where $[\boldsymbol{\theta}_{p}]_{d}=0$ indicates $\psi_{p}$ is not a function
of the $d^{\mathrm{th}}$ feature $[\mathbf{x}]_{d}$. In addition, following
Busa2009 , we can decompose the weights
$\mathbf{a}_{p}=\alpha_{p}\mathbf{v}_{p}$ into a vote vector
$\mathbf{v}\in\\{+1,-1\\}^{L}$ and a scalar weight $\alpha\in\mathbb{R}_{+}$.
The discriminative model, then, can be written as
$\displaystyle\mathbf{f}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\sum_{p=1}^{P}\alpha_{p}\mathbf{v}_{p}\psi_{\boldsymbol{\theta}_{p}}(\mathbf{x}),$
(7) $\displaystyle\psi(\mathbf{x};\boldsymbol{\theta}_{p})$ $\displaystyle=$
$\displaystyle\prod_{d=1}^{D}\mathbb{I}(x_{d}\geq[\boldsymbol{\theta}_{p}]_{d}).$
(8)
Figure 1: Alternating Decision Tree. An example of an ADT where rectangles are
decision nodes, circles are output nodes and, in each decision node,
$[\beta]=\phi_{k,m}(s,\beta)$ is the feature associated with the $k$-mer
$\beta$ in sequence $s$. The output nodes connected to each decision node are
associated with a pair of binary-valued functions $(\psi,\tilde{\psi})$. The
binary-valued function corresponding to the highlighted path is given as
$\tilde{\psi}(\mathbf{x};\boldsymbol{\theta}_{3})=\mathbb{I}([\mathrm{AKNELSID}]\geq
2)\times\lnot\mathbb{I}([\mathrm{AAALASTM}]\geq 1)$ and the associated
$\tilde{\alpha}=0.3$.
Every function in this class of models can be concisely represented as an
Alternating Decision Tree (ADT) Freund1999 . Similar to ordinary decision
trees, ADTs have two kinds of nodes: decision nodes and output nodes. Every
decision node is associated with a single-feature decision rule, the
attributes of the node being the relevant feature and corresponding threshold.
Each decision node is connected to two output nodes corresponding to the
associated decision rule and its negation. Thus, binary-valued functions in
the model come in pairs $(\psi,\tilde{\psi})$; each pair is associated with
the the pair of output nodes for a given decision node in the tree (see Figure
1). Note that $\psi$ and $\tilde{\psi}$ share the same threshold vector
$\boldsymbol{\theta}$ and only differ in whether they contain the associated
decision rule or its negation. The attributes of the output node pair are the
vote vectors $(\mathbf{v},\tilde{\mathbf{v}})$ and the scalar weights
$(\alpha,\tilde{\alpha})$ associated with the corresponding functions
$(\psi,\tilde{\psi})$.
Each function $\psi$ has a one-to-one correspondence with a path from the root
node to its associated output node in the tree; the single-feature decision
rules in $\psi$ being the same as those rules associated with decision nodes
in the path, with negations applied appropriately. Combinatorial features can,
thus, be incorporated into the model by allowing for trees of depth greater
than 1. Including a new function $\psi$ in the model is, then, equivalent to
either adding a new path of decision and output nodes at the root node in the
tree or growing an existing path at one of the existing output nodes. This
tree-structured representation of the model will play an important role in
specifying how Adaboost, the learning algorithm, greedily searches over an
exponentially large space of binary-valued functions. It is important to note
that, unlike ordinary decision trees, each example runs down an ADT through
every path originating from the root node.
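As a rough data-structure sketch (illustrative only; it assumes the `psi` helper above and is not the authors' implementation), an ADT can be stored as a flat list of root-to-output paths, each carrying its threshold vector, negation pattern, vote vector and weight; Eq. (7) is then a sum over all paths.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Path:
    thresholds: np.ndarray   # theta_p, length D; 0 marks an inactive feature
    negate: np.ndarray       # boolean mask: which single-feature rules are negated
    vote: np.ndarray         # v_p in {+1, -1}^L
    alpha: float             # scalar weight alpha_p

@dataclass
class ADT:
    paths: list = field(default_factory=list)

    def decision_function(self, x):
        """Eq. (7): f(x) = sum_p alpha_p * v_p * psi(x; theta_p).
        Every example runs down every path originating at the root."""
        f = np.zeros_like(self.paths[0].vote, dtype=float)
        for p in self.paths:
            f += p.alpha * p.vote * psi(x, p.thresholds, p.negate)
        return f

    def predict_label(self, x):
        """Predicted class: the label with the largest (most confident) output."""
        return int(np.argmax(self.decision_function(x)))
```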
### 2.4 Multi-class Adaboost
Having specified a representation for the data and the model, we now briefly
describe Adaboost, a large-margin supervised learning algorithm which we use
to learn an ADT given a data set. Ideally, a supervised learning algorithm
learns a discriminative function $\mathbf{f}^{*}(\mathbf{x})$ that minimizes
the number of mistakes on the training data, known as the Hamming loss
Hamming1950 :
$\mathbf{f}^{*}(\mathbf{x})=\arg\min_{\mathbf{f}}\mathcal{L}_{h}(\mathbf{f})=\sum_{\begin{subarray}{c}1\leq
n\leq N\\\ 1\leq l\leq
L\end{subarray}}\mathbb{I}\left(H([\mathbf{f}(\mathbf{x}_{n})]_{l})\neq[\mathbf{y}_{n}]_{l}\right)$
(9)
where $H(.)$ denotes the Heaviside function. The Hamming loss, however, is
discontinuous and non-convex, making optimization intractable for large-scale
problems.
Adaboost is the unconstrained minimization of the exponential loss, a smooth,
convex upper-bound to the Hamming loss, using a coordinate descent algorithm.
$\tilde{\mathbf{f}}^{*}(\mathbf{x})=\arg\min_{\mathbf{f}}\mathcal{L}_{e}(\mathbf{f})=\sum_{\begin{subarray}{c}1\leq
n\leq N\\\ 1\leq l\leq
L\end{subarray}}\exp\left(-[\mathbf{y}_{n}]_{l}[\mathbf{f}(\mathbf{x}_{n})]_{l}\right).$
(10)
Adaboost learns a discriminative function $\mathbf{f}(\mathbf{x})$ by
iteratively selecting the $\psi$ that maximally decreases the exponential
loss. Since each $\psi$ is parameterized by a $D$-dimensional vector of
thresholds $\boldsymbol{\theta}$, the space of functions $\psi$ is of size
$O((\Theta+1)^{D})$, where $\Theta$ is the largest $k$-mer count observed in
the data, making an exhaustive search at each iteration intractable for high-
dimensional problems.
To avoid this problem, at each iteration, we only allow the ADT to grow by
adding one decision node to one of the existing output nodes. To formalize
this, let us define $Z(\boldsymbol{\theta})=\\{d:[\boldsymbol{\theta}]_{d}\neq
0\\}$ to be the set of active features corresponding to a function $\psi$. At
the $t^{\mathrm{th}}$ iteration of boosting, the search space of possible
threshold vectors is then given as
$\\{\boldsymbol{\theta}:\exists\tau<t,Z(\boldsymbol{\theta})\supset
Z(\boldsymbol{\theta}_{\tau}),|Z(\boldsymbol{\theta})|-|Z(\boldsymbol{\theta}_{\tau})|=1\\}$.
In this case, the search space of thresholds at the $t^{\mathrm{th}}$
iteration is of size $O(t\Theta D)$ and grows linearly in a greedy fashion at
each iteration (see Figure 1). Note, however, that this greedy search,
enforced to make the algorithm tractable, is not relevant when the class of
models is constrained to ADTs of depth 1.
In order to pick the best function $\psi$, we need to compute the decrease in
exponential loss admitted by each function in the search space, given the
model at the current iteration. Formally, given the model at the
$t^{\mathrm{th}}$ iteration, denoted $\mathbf{f}^{t}(\mathbf{x})$, the
exponential loss upon inclusion of a new decision node, and hence the creation
of two new paths
$(\psi_{\boldsymbol{\theta}},\tilde{\psi}_{\boldsymbol{\theta}})$, into the
model can be written as
$\displaystyle\mathcal{L}_{e}(\mathbf{f}^{t+1})$ $\displaystyle=$
$\displaystyle\sum_{n=1}^{N}\sum_{l=1}^{L}\exp\left(-[\mathbf{y}_{n}]_{l}[\mathbf{f}^{t}(\mathbf{x}_{n})+\alpha\mathbf{v}\psi_{\boldsymbol{\theta}}(\mathbf{x}_{n})+\tilde{\alpha}\tilde{\mathbf{v}}\tilde{\psi}_{\boldsymbol{\theta}}(\mathbf{x}_{n})]_{l}\right)$
(11) $\displaystyle=$
$\displaystyle\sum_{n=1}^{N}\sum_{l=1}^{L}w^{t}_{nl}\exp\left(-[\mathbf{y}_{n}]_{l}[\alpha\mathbf{v}\psi_{\boldsymbol{\theta}}(\mathbf{x}_{n})+\tilde{\alpha}\tilde{\mathbf{v}}\tilde{\psi}_{\boldsymbol{\theta}}(\mathbf{x}_{n})]_{l}\right)$
(12)
where
$w^{t}_{nl}=\exp\left(-[\mathbf{y}_{n}]_{l}[\mathbf{f}^{t}(\mathbf{x}_{n})]_{l}\right)$.
Here $w_{nl}^{t}$ is interpreted as the weight on each sample, for each label,
at boosting round $t$. If, at boosting round $t-1$, the model disagrees with
the true label $l$ for sample $n$, then $w_{nl}^{t}$ is large. If the model
agrees with the label then the weight is small. This ensures that the boosting
algorithm chooses a decision rule at round $t$, preferentially discriminating
those examples with a large weight, as this will lead to the largest reduction
in $\mathcal{L}_{e}$.
For every possible new decision node that can be introduced to the tree,
Adaboost finds the ($\alpha$,$\mathbf{v}$) pair that minimizes the exponential
loss on the training data. These optima can be derived as
$[\mathbf{v}^{*}]_{l}=\begin{cases}1&\text{if
}\omega^{t}_{+,l}\geq\omega^{t}_{-,l}\\\ -1&\text{otherwise}\end{cases}$ (13)
$\alpha^{*}=\frac{1}{2}\ln\frac{W^{t}_{+}}{W^{t}_{-}}$ (14)
where for each new path $\psi_{n}$ associated with each new decision node
$\displaystyle\omega^{t}_{\pm,l}$ $\displaystyle=$
$\displaystyle\sum_{n:\psi_{n}y_{nl}=\pm 1}w^{t}_{nl}$ (15) $\displaystyle
W^{t}_{\pm}$ $\displaystyle=$ $\displaystyle\sum_{n,l:v_{l}\psi_{n}y_{nl}=\pm
1}w^{t}_{nl}.$ (16)
Corresponding equations for the ($\tilde{\alpha}$,$\tilde{\mathbf{v}}$) pair
can be written in terms of $\tilde{\omega}^{t}_{\pm,l}$ and $\tilde{W}^{t}_{\pm}$
obtained by replacing $\psi_{n}$ with $\tilde{\psi}_{n}$ in the equations
above. The minimum loss function for the threshold $\boldsymbol{\theta}$ is
then given as
$\mathcal{L}_{e}(\mathbf{f}^{t+1})=2\sqrt{W^{t}_{+}W^{t}_{-}}+2\sqrt{\tilde{W}^{t}_{+}\tilde{W}^{t}_{-}}+W^{t}_{o}$
(17)
where $W^{t}_{o}=\sum_{n,l:\psi_{n}=\tilde{\psi}_{n}=0}w^{t}_{nl}$. Based on
these model update equations, each iteration of the Adaboost algorithm
involves building the set of possible binary-valued functions to search over,
selecting the one that minimizes the loss function given by Eq. 17, and computing
the associated $(\alpha,\mathbf{v})$ pair using Eq. 13 and Eq. 14.
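The update equations can be summarized in a short sketch of one boosting step for a fixed candidate threshold vector: given the current weights $w_{nl}$, it computes $\mathbf{v}^{*}$ and $\alpha^{*}$ via Eqs. (13)-(16) and the resulting loss via Eq. (17). This is an illustrative sketch; the small `eps` guard against empty weight sums is an added safeguard, not part of the paper.

```python
import numpy as np

def best_vote_and_alpha(psi_vals, y, w, eps=1e-12):
    """Optimal (v, alpha) for one new path, Eqs. (13)-(16).

    psi_vals : (N,) array in {0,1}, the values psi(x_n; theta).
    y        : (N, L) labels in {+1, -1}.
    w        : (N, L) current boosting weights w_nl.
    """
    reached = psi_vals.astype(bool)            # which examples this path fires on
    mask = reached[:, None]                    # broadcastable over labels
    omega_plus = np.where(mask & (y == +1), w, 0.0).sum(axis=0)   # Eq. (15)
    omega_minus = np.where(mask & (y == -1), w, 0.0).sum(axis=0)
    v = np.where(omega_plus >= omega_minus, 1.0, -1.0)            # Eq. (13)
    agree = mask & (v[None, :] * y == +1)                         # Eq. (16)
    W_plus = w[agree].sum()
    W_minus = w[reached].sum() - W_plus
    alpha = 0.5 * np.log((W_plus + eps) / (W_minus + eps))        # Eq. (14)
    return v, alpha, W_plus, W_minus

def candidate_loss(W_plus, W_minus, Wt_plus, Wt_minus, W_o):
    """Eq. (17): loss after adding the output-node pair (psi, psi~)."""
    return 2*np.sqrt(W_plus*W_minus) + 2*np.sqrt(Wt_plus*Wt_minus) + W_o
```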
## 3 Results
### 3.1 Data specifications
We aim to learn a predictive model to identify hosts of viruses belonging to a
specific family; we show results for _Picornaviridae_ and _Rhabdoviridae_.
_Picornaviridae_ is a family of viruses with a single-stranded, positive-sense
RNA genome. The viral genome usually contains 1-2 Open Reading Frames (ORFs),
each coding for protein sequences about 2000-3000 amino acids long.
_Rhabdoviridae_ is a family of negative-sense, single-stranded RNA
viruses whose genomes typically code for five different proteins: large
protein (L), nucleoprotein (N), phosphoprotein (P), glycoprotein (G), and
matrix protein (M). The data consist of 148 viruses in the _Picornaviridae_
family and 50 viruses in the _Rhabdoviridae_ family. For some choice of $k$
and $m$, we represent each virus as a vector of counts of all possible
$k$-mers, up to $m$-mismatches, generated from the amino-acid alphabet. Each
virus is also assigned a label depending on its host: vertebrate /
invertebrate / plant in the case of _Picornaviridae_ , and animal / plant in
the case of _Rhabdoviridae_ (see Table S1 for the names and label assignments
of viruses). Using multiclass Adaboost, we learn an ADT classifier on training
data drawn from the set of labeled viruses and test the model on the held-out
viruses.
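A brute-force sketch of the mismatch feature construction is given below; it does not reproduce the efficient mismatch string-kernel data structures of Leslie et al., and the vocabulary and example peptide are made up, with the two 8-mers borrowed from the Figure 1 caption.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def mismatch_counts(seq, vocab, k, m):
    """For each k-mer in `vocab`, count the length-k windows of `seq` lying
    within Hamming distance m of it (brute force, O(len(seq)*|vocab|*k))."""
    windows = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    return {kmer: sum(hamming(kmer, w) <= m for w in windows) for kmer in vocab}

# Toy usage on a made-up peptide
vocab = ["AKNELSID", "AAALASTM"]
print(mismatch_counts("MAKNELSIDKLAAQLASTMV", vocab, k=8, m=2))
```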
### 3.2 BLAST Classifier accuracy
Given whole protein sequences, a straightforward classifier is given by a
nearest-neighbour approach based on the Basic Local Alignment Search Tool
(BLAST) Altschul1990 . We can use BLAST score (or $P$-value) as a measure of
the distances between the unknown virus and a set of viruses with known hosts.
The nearest neighbor approach to classification then assigns the host of the
closest virus to the unknown virus. Intuitively, as this approach uses the
whole protein to perform the classification, we expect the accuracy to be very
high. This is indeed the case – BLAST, along with a $1$-nearest neighbor
classifier, successfully classifies all viruses in the _Rhabdoviridae_ family,
and all but 3 viruses in the _Picornaviridae_ family. What is missing from
this approach, however, is the ability to ascertain and interpret host
relevant motifs.
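A sketch of this nearest-neighbour rule, assuming the BLAST bit scores have already been computed offline (e.g. with blastp); the dictionaries here are hypothetical placeholders:

```python
def blast_1nn_host(query_id, blast_scores, host_of):
    """Assign to the query the host of the training virus with the largest
    BLAST bit score; missing pairs default to a score of 0.

    blast_scores : dict mapping (query_id, train_id) -> bit score (precomputed).
    host_of      : dict mapping train_id -> host label.
    """
    nearest = max(host_of, key=lambda t: blast_scores.get((query_id, t), 0.0))
    return host_of[nearest]

# Hypothetical usage
scores = {("virus_Q", "virus_A"): 812.0, ("virus_Q", "virus_B"): 95.5}
hosts = {"virus_A": "vertebrate", "virus_B": "plant"}
print(blast_1nn_host("virus_Q", scores, hosts))   # -> 'vertebrate'
```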
### 3.3 ADT Classifier accuracy
The accuracy of the ADT model, at each round of boosting, is evaluated using a
multi-class extension of the Area Under the Curve (AUC). Here the ‘curve’ is
the Receiver Operating Characteristic (ROC) which traces a measure of the
classification accuracy of the ADT for each value of a real-valued
discrimination threshold. As this threshold is varied, a virus is considered a
true (or false) positive if the prediction of the ADT model for the true class
of that virus is greater (or less) than the threshold value. The ROC curve
is then traced out in True Positive Rate – False Positive Rate space by
changing the threshold value and the AUC score is defined as the area under
this ROC curve.
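One simple way to realize such a multi-class AUC, shown below as an illustrative sketch, is to average one-vs-rest rank-based (Mann-Whitney) AUCs over the labels; the exact multi-class extension used in the experiments may differ in detail.

```python
import numpy as np

def auc_one_vs_rest(scores, positives):
    """Rank-based (Mann-Whitney) AUC for one label; ties are broken arbitrarily."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = positives.sum(); n_neg = len(positives) - n_pos
    if n_pos == 0 or n_neg == 0:
        return np.nan
    return (ranks[positives].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def multiclass_auc(F, Y):
    """Mean one-vs-rest AUC.  F: (N, L) real-valued ADT outputs;
    Y: (N, L) labels in {+1, -1}."""
    return np.nanmean([auc_one_vs_rest(F[:, l], Y[:, l] == 1)
                       for l in range(F.shape[1])])
```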
The ADT is trained using 10-fold cross validation, calculating the AUC, at
each round of boosting, for each fold using the held-out data. The mean AUC
and standard deviation over all folds are plotted against boosting round in
Figures 2 and 3. Note that the ‘smoothing effect’ introduced by using the
mismatch feature space allows for improved prediction accuracy for $m>0$. For
_Picornaviridae_ , the best accuracy is achieved at $m=5$, for a choice of
$k=12$; this degree of ‘smoothing’ is optimal for the algorithm to capture
predictive amino-acid subsequences present, up to a certain mismatch, in
rapidly mutating viral protein sequences. For _Rhabdoviridae_ , near perfect
accuracy is achieved with merely one decision rule, i.e., _Rhabdoviridae_ with
plant or animal hosts can be distinguished based on the presence or absence of
one highly conserved region in the L protein.
Figure 2: Prediction accuracy for _Picornaviridae_. A plot of (a) mean AUC vs
boosting round, and (b) 95% confidence interval vs boosting round. The mean
and standard deviation were computed over 10-folds of held-out data, for
_Picornaviridae_ , where $k=12$. Figure 3: Prediction accuracy for
_Rhabdoviridae_. A plot of (a) mean AUC vs boosting round, and (b) 95%
confidence interval vs boosting round. The mean and standard deviation were
computed over 5-folds of held-out data, for _Rhabdoviridae_ , where $k=10$.
The relatively higher uncertainty for this virus family was likely due to very
small sample sizes. Note that the cyan curve lies on top of the red curve.
### 3.4 Predictive subsequences are conserved within hosts
Having learned a highly predictive model, we would like to locate where the
selected $k$-mers occur in the viral proteomes. We visualize the $k$-mer
subsequences selected in a specific ADT by indicating elements of the mismatch
neighborhood of each selected subsequence on the virus protein sequences. In
Figure 4, the virus proteomes are grouped vertically by their label with their
lengths scaled to $[0,1]$. Quite surprisingly, the predictive $k$-mers occur
in regions that are strongly conserved among viruses sharing a specific host.
Note that the representation we used for viral sequences retained no
information regarding the location of each $k$-mer on the virus protein.
Furthermore, these selected $k$-mers are significant as they are robustly
selected by Adaboost for different choices of train / test split of the data,
as shown in Figure 5.
Figure 4: Visualizing predictive subsequences. A visualization of the mismatch
neighborhood of the first 7 $k$-mers selected in an ADT for _Picornaviridae_ ,
where $k=12,m=5$. The virus proteomes are grouped vertically by their label
with their lengths scaled to $[0,1]$. Regions containing elements of the
mismatch neighborhood of each $k$-mer are then indicated on the virus
proteome. Note that the proteomes are not aligned along the selected $k$-mers
but merely stacked vertically with their lengths normalized. Figure 5:
Visualizing predictive regions of protein sequences. A visualization of the
mismatch neighborhood of the first 7 $k$-mers, selected in all ADTs over
10-fold cross validation, for _Picornaviridae_ , where $k=12,m=5$. Regions
containing elements of the mismatch neighborhood of each selected $k$-mer are
indicated on the virus proteome, with the grayscale intensity on the plot
being inversely proportional to the number of cross-validation folds in which
some $k$-mer in that region was selected by Adaboost. Thus, darker spots
indicate that some $k$-mer in that part of the proteome was robustly selected
by Adaboost. Furthermore, a vertical cluster of dark spots indicates that the
region, selected by Adaboost as predictive, is also strongly conserved
among viruses sharing a common host type.
## 4 Discussion
We have presented a supervised learning algorithm that learns a model to
classify viruses according to their host and identifies a set of highly
discriminative oligopeptide motifs. As expected, the $k$-mers selected in the
ADT for _Picornaviridae_ (Figure 4, 5) and _Rhabdoviridae_ (Figure S.1, S.2)
are mostly selected in areas corresponding to the replicase motifs of the
polymerase – one of the most conserved parts of the viral genome. Thus, given
that partial genomic sequence is normally the only information available, we
could achieve quicker bioinformatic characterization by focusing on the
selection and amplification of these highly predictive regions of the genome,
instead of full genomic characterization and contig assembly. Moreover, in contrast
with generic approaches currently under use, such a targeted amplification
approach might also speed up the process of sample preparation and improve the
sensitivity for viral discovery.
Over representation of highly similar viruses within the data used for
learning is an important source of overfitting that should be kept in mind
when using this technique. Specifically, if the data largely consist of
nearly-similar viral sequences (e.g. different sequence reads from the same
virus), the learned ADT model would overfit to insignificant variations within
the data (even if 10-fold cross validation were employed), making
generalization to new subfamilies of viruses extremely poor. To check for
this, we hold out viruses corresponding to a particular subfamily (see Table
S1 for subfamily annotation of the viruses used), run 10-fold cross validation
on the remaining data and compute the expected fraction of misclassified
viruses in the held-out subfamily, averaged over the learned ADT models. For
_Picornaviridae_ , viruses belonging to the subfamilies _Parechovirus_ (0.47),
_Tremovirus_ (0.8), _Sequivirus_ (0.5), and _Cripavirus_ (1.0) were poorly
classified with misclassification rates indicated in parentheses. Note that
the _Picornaviridae_ data used consist mostly of Cripaviruses; thus, the high
misclassification rate could also be attributed to a significantly lower
sample size used in learning. For _Rhabdoviridae_ , viruses belonging to
_Novirhabdovirus_ (0.75) and _Cytorhabdovirus_ (0.77) were poorly classified.
The poorly classified subfamilies, however, contain a very small number of
viruses, showing that the method is strongly generalizable on average.
Other applications for this technique include identification of novel
pathogens using genomic data, analysis of the most informative fingerprints
that determine host specificity, and classification of metagenomic data using
genomic information. For example, an alternative application of our approach
would be the automatic discovery of multi-locus barcoding genes. Multi-locus
barcoding is the use of a set of genes which are discriminative between
species, in order to identify known specimens and to flag possible new species
Seberg2009 . While we have focused on virus host in this work, ADTs could be
applied straightforwardly to the barcoding problem, replacing the host label
with a species label. Additional constraints on the loss function would have
to be introduced to capture the desire for suitable flanking sites of each
selected $k$-mer in order to develop the universal PCR primers important for a
wide application of the discovered barcode Kress2008 .
## Acknowledgments
The authors thank Vladimir Trifonov and Joseph Chan for interesting
suggestions and discussions.
## References
* [1] Thomas Briese, Janusz T Paweska, Laura K McMullan, Stephen K Hutchison, Craig Street, Gustavo Palacios, Marina L Khristova, Jacqueline Weyer, Robert Swanepoel, Michael Egholm, Stuart T Nichol, and W Ian Lipkin. Genetic detection and characterization of Lujo virus, a new hemorrhagic fever-associated arenavirus from southern Africa. PLoS pathogens, 5(5):e1000455, May 2009.
* [2] Shannon J Williamson, Douglas B Rusch, Shibu Yooseph, Aaron L Halpern, Karla B Heidelberg, John I Glass, Cynthia Andrews-Pfannkoch, Douglas Fadrosh, Christopher S Miller, Granger Sutton, Marvin Frazier, and J Craig Venter. The Sorcerer II Global Ocean Sampling Expedition: metagenomic characterization of viruses within aquatic microbial samples. PloS one, 3(1):e1456, January 2008.
* [3] Marie Touchon and Eduardo P C Rocha. From GC skews to wavelets: a gentle guide to the analysis of compositional asymmetries in genomic data. Biochimie, 90(4):648–59, 2008.
* [4] Siobain Duffy, Laura A Shackelton, and Edward C Holmes. Rates of evolutionary change in viruses: patterns and determinants. Nature Reviews Genetics, 9(4):267–276, 2008.
* [5] Oliver G Pybus and Andrew Rambaut. Evolutionary analysis of the dynamics of viral infectious disease. Nature reviews. Genetics, 10(8):540–50, August 2009.
* [6] C S Leslie, E Eskin, A Cohen, J Weston, and W S Noble. Mismatch string kernels for discriminative protein classification. Bioinformatics, 20(4):467–476, 2004.
* [7] R W Hamming. Error detecting and error correcting codes. Bell System Technical Journal, 29:147–160, 1950.
* [8] R E Schapire and Y Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, 1999.
* [9] R Busa-Fekete and B Kegl. Accelerating AdaBoost using UCB. In JMLR: Workshop and Conference Proceedings, KDD cup 2009, volume 7, pages 111–122, 2009.
* [10] Y Freund and L Mason. The Alternating Decision Tree Algorithm. In Proceedings of the 16th International Conference on Machine Learning, pages 124–133, 1999.
* [11] Stephen F Altschul, Warren Gish, Webb Miller, Eugene W Myers, and David J Lipman. Basic Local Alignment Search Tool. Journal of Molecular Biology, 215:403–410, 1990.
* [12] Ole Seberg and Gitte Petersen. How many loci does it take to DNA barcode a crocus? PloS one, 4(2):e4598, January 2009.
* [13] W John Kress and David L Erickson. DNA barcodes: genes, genomics, and bioinformatics. Proceedings of the National Academy of Sciences of the United States of America, 105(8):2761–2762, 2008.
## Supplementary Figures
Figure S.1: Visualizing predictive subsequences for _Rhabdoviridae_. A
visualization of the mismatch neighborhood of the $k$-mer selected in an ADT
for _Rhabdoviridae_ , where $k=10,m=2$. The virus proteomes are grouped
vertically by their label with their lengths scaled to $[0,1]$. Regions
containing elements of the mismatch neighborhood of each $k$-mer are then
indicated on the virus proteome. Figure S.2: Visualizing predictive regions
for _Rhabdoviridae_. A visualization of the mismatch neighborhood of the
$k$-mers selected in an ADT for _Rhabdoviridae_ , where $k=10,m=2$. The virus
proteomes are grouped vertically by their label with their lengths scaled to
$[0,1]$. Regions containing elements of the mismatch neighborhood of each
$k$-mer are then indicated on the virus proteome.
# Symmetry protected topological orders of 1D spin systems with $D_{2}+T$
symmetry
Zheng-Xin Liu Institute for Advanced Study, Tsinghua University, Beijing,
100084, P. R. China Department of Physics, Massachusetts Institute of
Technology, Cambridge, Massachusetts 02139, USA Xie Chen Department of
Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts
02139, USA Xiao-Gang Wen Department of Physics, Massachusetts Institute of
Technology, Cambridge, Massachusetts 02139, USA Institute for Advanced Study,
Tsinghua University, Beijing, 100084, P. R. China
###### Abstract
In [Z.-X. Liu, M. Liu, X.-G. Wen, arXiv:1101.5680], we studied 8 gapped
symmetric quantum phases in $S=1$ spin chains which respect a discrete spin
rotation $D_{2}\subset SO(3)$ and time reversal $T$ symmetries. In this paper,
using a generalized approach, we study all the 16 possible gapped symmetric
quantum phases of 1D integer spin systems with only $D_{2}+T$ symmetry. Those
phases are beyond Landau symmetry breaking theory and cannot be characterized
by local order parameters, since they do not break any symmetry. They
correspond to 16 symmetry protected topological (SPT) orders. We show that all
the 16 SPT orders can be fully characterized by the physical properties of the
symmetry protected degenerate boundary states (end ‘spins’) at the ends of a
chain segment. So we can measure and distinguish all the 16 SPT orders
experimentally. We also show that all these SPT orders can be realized in
$S=1$ spin ladder models. The gapped symmetric phases protected by subgroups
of $D_{2}+T$ are also studied. Again, all these phases can be distinguished by
physically measuring their end ‘spins’.
###### pacs:
75.10.Pq, 64.70.Tg
## I introduction
In recent years, topological orderWtop ; Wrig and symmetry protected
topological (SPT) orderWqoslpub ; GWtefr for gapped quantum ground states have
attracted much interest. Here ‘topological’ means that this new kind of order
is different from the symmetry breaking orders.L3726 ; GL5064 ; LanL58 The
new orders include fractional quantum Hall statesTSG8259 ; L8395 , 1D Haldane
phaseH8364 , chiral spin liquids,KL8795 ; WWZ8913 $Z_{2}$ spin liquids,RS9173
; W9164 ; MS0181 non-Abelian fractional quantum Hall states,MR9162 ; W9102 ;
WES8776 ; RMM0899 quantum orders characterized by projective symmetry group
(PSG),Wqoslpub ; W0303a topological insulatorsKM0501 ; KM0502 ; BZ0602 ;
MB0706 ; FKM0703 ; QHZ0824 , etc.
Recent studies indicate that the patterns of entanglement provide a
systematic and comprehensive point of view to understand topological orders
and SPT orders.LW0605 ; KP0604 ; CGW1038 ; W0275 The phases with long-ranged
entanglement have intrinsic topological orders, while symmetric short-range
entangled nontrivial phases are said to have SPT orders. With a definition of
phase and phase transition using local unitary transformations, one can get a
complete classification for all 1D gapped quantum phases,CGW1107 ; SPC1032 ;
CGW1123 and partial classifications for some gapped quantum phases in higher
dimensions.LWstrnet ; CGW1038 ; GWW1017
Contrary to what symmetry breaking theory suggests, even when the ground
states of two Hamiltonians have the same symmetry, they sometimes cannot be
smoothly connected by deforming the Hamiltonian without closing the energy gap
and causing a phase transition, as long as the deformed Hamiltonians all
respect the symmetry. Those two states with the same symmetry can then belong
to two different phases. Such phases, if gapped,
are called SPT phases. The Haldane phase of spin-1 chainH8364 is the first
example of SPT phase, which is known to be protected by the
$D_{2}=\\{E,R_{x}=e^{i\pi S_{x}},R_{y}=e^{i\pi S_{y}},R_{z}=e^{i\pi S_{z}}\\}$
symmetry.PBT0959 Interestingly, when additional time reversal symmetry is
present, more SPT phases emerge.LLW1180 ; CGW1123
Topological insulatorsKM0501 ; BZ0602 ; KM0502 ; MB0706 ; FKM0703 ; QHZ0824
are another example of SPT phases that has attracted much interest in the
literature. Compared to the topological insulators formed by free electrons,
most SPT phases (including the ones discussed in this paper) are strongly
correlated. A particular kind of strongly correlated SPT phase protected by
time reversal symmetry is sometimes called a fractionalized topological
insulator.
An interesting and important question is how to classify different 1D SPT
phases even in the presence of strong correlations/interactions. For the Haldane
phase in spin chains, it was thought that the degenerate end states and non-
local string order can be used to describe the hidden topological order.
However, if we remove the spin rotation symmetry but keep the parity symmetry,
the Haldane phase is still different from the $\otimes_{i}|z\rangle_{i}$
($S_{z}|z\rangle=0$) trivial phase, even though the degenerate end states and
non-local string order are destroyed by the absence of spin rotation
symmetry.BTG0819 ; GWtefr ; PBT0959
Recently, it was argued in Ref. LH0804 that the entanglement spectrum
degeneracy (ESD) can be considered as the criteria to tell whether a phase is
topologically ordered or not. However, it is known that all 1D gapped states
are short range entangled and have no intrinsic topological orders from
entanglement point of view.VCL0501 ; CGW1107 On the other hand, many gapped
1D phases have non-trivial ESD. So ESD cannot correspond to the intrinsic
topological orders. Then, one may try to use ESD to characterize non-trivial
SPT orders as suggested in LABEL:PBT1039. ESD does appear to describe non-
translation invariant SPT phases protected by on-site symmetry. In particular,
the ESD reveal an important connection to the projective representation of the
on-site symmetry group.PBT1039
It turns out that a clear picture and a systematic classification of all 1D
SPT phases can be obtained after realizing the deep connection between local
unitary transformation and gapped (symmetry protected) topological
phases.CGW1107 ; SPC1032 ; CGW1123 In particular, for 1D systems, all gapped
phases that do not break the symmetry are classified by the 1D representations
and projective representations of the symmetry group $G$ (i.e., by the group
cohomology classes ${\cal H}^{1}[G,U_{T}(1)]$ and ${\cal H}^{2}[G,U_{T}(1)]$,
see appendix A).CGW1107 ; SPC1032 ; CGW1123
In our previous paper, we calculated the eight classes of unitary
projective representations of the point group $D_{2h}=D_{2}+T$, based on which
we predicted eight SPT phases in integer spin models that respect the $D_{2h}$
symmetry. We realized four interesting SPT phases in $S=1$ spin chains, and
showed that these phases can be distinguished experimentally by their
different responses of the end states to magnetic field. In this paper we will
show that the group $D_{2}+T$ has a total of 16 projective representations when
the representation of $T$ is anti-unitary. We then study the properties of the
corresponding 16 SPT phases, such as the dimension of their degenerate end
states and their response to perturbations. Interestingly, we find that all
these SPT phases can be distinguished by their different responses of the end
states to various physical perturbations. We also show that all these SPT
phases can be realized in spin ladders. Finally we discuss the situations when
the symmetry reduces to the subgroups of $D_{2}+T$.
This paper is organized as follows. In section II we show that there are 16
SPT phases that respect $D_{2}+T$ symmetry, and all these phases can be
distinguished experimentally. The realization of the 16 SPT phases in $S=1$
spin chains and spin ladders are given in section III. In section IV, we
discuss the projective representations and SPT phases of two subgroups of
$D_{2}+T$. Section V is the conclusion and discussion. Some details about the
derivations, together with a brief introduction to projective representations
(and group cohomology) and general classification of SPT phases, are given in
the appendices.
## II Distinguishing 16 SPT phases with $D_{2}+T$ symmetry
Table 1: All the projective representations of the group $D_{2h}=D_{2}+T$. We only give the representation matrices for the three generators $R_{z}$, $R_{x}$ and $T$; $K$ stands for the anti-linear (complex conjugation) operator. The 16 projective representations correspond to 16 different SPT phases. This result agrees with the classification of the combined symmetry $D_{2}+T$ given in Ref. CGW1123; the indices $(\omega,\beta,\gamma)=(\omega(D_{2}),\beta(T),\gamma(D_{2}))$ show this correspondence. Five of these SPT phases can be realized in $S=1$ spin chain models, and the others can be realized in $S=1$ spin ladders or large-spin chains. The active operators are those physical perturbations which (partially) split the irreducible end states: in the SPT phases corresponding to 2-dimensional projective representations they behave as $(\sigma_{x},\sigma_{y},\sigma_{z})$, and for the 4-dimensional projective representations they behave as ($\sigma_{x}\otimes I,\sigma_{y}\otimes I,\sigma_{z}\otimes I,\sigma_{x}\otimes\sigma_{x},\sigma_{y}\otimes\sigma_{x},\sigma_{z}\otimes\sigma_{x},I\otimes\sigma_{x},\sigma_{x}\otimes\sigma_{y},\sigma_{y}\otimes\sigma_{y},\sigma_{z}\otimes\sigma_{y},I\otimes\sigma_{y},\sigma_{x}\otimes\sigma_{z},\sigma_{y}\otimes\sigma_{z},\sigma_{z}\otimes\sigma_{z},I\otimes\sigma_{z}$).
 | $R_{z}$ | $R_{x}$ | $T$ | $\omega,\beta,\gamma$ | dim. | active operators | spin models ($S=1$)
---|---|---|---|---|---|---|---
$E_{0}$ | $1$ | $1$ | $K$ | 1, 1,$A$ | 1 | | chain(trivial phase)
$E_{0}^{\prime}$ | $I$ | $I$ | $\sigma_{y}K$ | 1,-1,$A$ | 2 | $(S_{xyz},S_{xyz},S_{xyz}$)222We notate $S_{mn}=S_{m}S_{n}+S_{n}S_{m}$, where $m,n=x,y,z$. For $S=1$, $S_{xyz}$ means a multi-spin operator, such as $S_{xy,i}S_{z,i+1}$ . | ladder
$E_{1}$ | $I$ | $i\sigma_{z}$ | $\sigma_{y}K$ | 1,-1,$B_{1}$ | 2 | $(S_{z},S_{z},S_{xyz})$ | ladder
$E_{1}^{\prime}$ | $I$ | $i\sigma_{z}$ | $\sigma_{x}K$ | 1, 1,$B_{1}$ | 2 | $(S_{xy},S_{xy},S_{xyz})$ | ladder
$E_{3}$ | $\sigma_{z}$ | $I$ | $i\sigma_{y}K$ | 1,-1,$B_{3}$ | 2 | $(S_{x},S_{x},S_{xyz})$ | ladder
$E_{3}^{\prime}$ | $\sigma_{z}$ | $I$ | $i\sigma_{x}K$ | 1, 1,$B_{3}$ | 2 | $(S_{yz},S_{yz},S_{xyz})$ | ladder
$E_{5}$ | $i\sigma_{z}$ | $\sigma_{x}$ | $IK$ | -1, 1,$A$ | 2 | $(S_{yz},S_{y},S_{xy})$ | chain($T_{y}$ phase)
$E_{5}^{\prime}$ | $I\otimes i\sigma_{z}$ | $I\otimes\sigma_{x}$ | $\sigma_{y}\otimes IK$ | -1,-1,$A$ | 4 | $(S_{xyz}^{3},S_{x}^{3},S_{yz}^{1},S_{xz}^{3},S_{y}^{1},S_{z}^{3},S_{xy}^{1})$333$(S_{xyz}^{3},S_{x}^{3},S_{yz}^{1},S_{xz}^{3},S_{y}^{1},S_{z}^{3},S_{xy}^{1})=(S_{xyz},S_{xyz},S_{xyz},S_{x},S_{x},S_{x},S_{yz},S_{xz},S_{xz},S_{xz},S_{y},S_{z},S_{z},S_{z},S_{xy})$. Here $S^{3}_{x}$, for example, means that $S_{x}$ appears for three times: $S^{3}_{x}\to S_{x},S_{x},S_{x}$. Also, these three $S_{x},S_{x},S_{x}$ do not correspond to the same physical operator. They correspond to three different operators that transform in the same way as the $S_{x}$ operator. For instance, they may correspond to $S_{x}$ at three different sites near the end spin. | ladder
$E_{7}$ | $\sigma_{z}$ | $i\sigma_{z}$ | $i\sigma_{x}K$ | 1, 1,$B_{2}$ | 2 | $(S_{xz},S_{xz},S_{xyz})$ | ladder
$E_{7}^{\prime}$ | $\sigma_{z}$ | $i\sigma_{z}$ | $i\sigma_{y}K$ | 1,-1,$B_{2}$ | 2 | $(S_{y},S_{y},S_{xyz})$ | ladder
$E_{9}$ | $i\sigma_{z}$ | $\sigma_{x}$ | $i\sigma_{x}K$ | -1, 1,$B_{3}$ | 2 | $(S_{yz},S_{xz},S_{z})$ | chain($T_{z}$ phase)
$E_{9}^{\prime}$ | $I\otimes i\sigma_{z}$ | $I\otimes\sigma_{x}$ | $\sigma_{y}\otimes i\sigma_{x}K$ | -1,-1,$B_{3}$ | 4 | $(S_{xyz}^{3},S_{x}^{3},S_{yz}^{1},S_{y}^{3},S_{xz}^{1},S_{xy}^{3},S_{z}^{1})$444$(S_{xyz}^{3},S_{x}^{3},S_{yz}^{1},S_{y}^{3},S_{xz}^{1},S_{xy}^{3},S_{z}^{1})=(S_{xyz},S_{xyz},S_{xyz},S_{x},S_{x},S_{x},S_{yz},S_{y},S_{y},S_{y},S_{xz},S_{xy},S_{xy},S_{xy},S_{z})$. | ladder
$E_{11}$ | $i\sigma_{z}$ | $i\sigma_{x}$ | $\sigma_{z}K$ | -1, 1,$B_{1}$ | 2 | $(S_{x},S_{xz},S_{xy})$ | chain($T_{x}$ phase)
$E_{11}^{\prime}$ | $I\otimes i\sigma_{z}$ | $I\otimes i\sigma_{x}$ | $\sigma_{y}\otimes\sigma_{z}K$ | -1,-1,$B_{1}$ | 4 | $(S_{xyz}^{3},S_{yz}^{3},S_{x}^{1},S_{y}^{3},S_{xz}^{1},S_{z}^{3},S_{xy}^{1})$555$(S_{xyz}^{3},S_{yz}^{3},S_{x}^{1},S_{y}^{3},S_{xz}^{1},S_{z}^{3},S_{xy}^{1})=(S_{xyz},S_{xyz},S_{xyz},S_{yz},S_{yz},S_{yz},S_{x},S_{y},S_{y},S_{y},S_{xz},S_{z},S_{z},S_{z},S_{xy})$. | ladder
$E_{13}$ | $i\sigma_{z}$ | $i\sigma_{x}$ | $i\sigma_{y}K$ | -1,-1,$B_{2}$ | 2 | $(S_{x},S_{y},S_{z})$ | chain($T_{0}$ phase)
$E_{13}^{\prime}$ | $I\otimes i\sigma_{z}$ | $I\otimes i\sigma_{x}$ | $\sigma_{y}\otimes i\sigma_{y}K$ | -1, 1,$B_{2}$ | 4 | $(S_{xyz}^{3},S_{yz}^{3},S_{x}^{1},S_{xz}^{3},S_{y}^{1},S_{xy}^{3},S_{z}^{1})$666$(S_{xyz}^{3},S_{yz}^{3},S_{x}^{1},S_{xz}^{3},S_{y}^{1},S_{xy}^{3},S_{z}^{1})=(S_{xyz},S_{xyz},S_{xyz},S_{yz},S_{yz},S_{yz},S_{x},S_{xz},S_{xz},S_{xz},S_{y},S_{xy},S_{xy},S_{xy},S_{z}).$ | ladder
All the linear representations of the anti-unitary group $D_{2}+T$ are
1-dimensional (1-D). The number of linear representations depends on the
representation space. When acting on Hilbert space, the linear representations
are classified by $H^{1}(D_{2}+T,U_{T}(1))=(Z_{2})^{2}$, which contains four
elements. When acting on Hermitian operators, the linear representations are
classified by $H^{1}(D_{2}+T,(Z_{2})_{T})=(Z_{2})^{3}$, which contains eight
elements. More details about linear representations and the first group
cohomology are given in appendix C. The 8 linear representations (with
Hermitian operators as the representation space) are shown in Table 4. These 8
representations collapse into 4 if the representation space is a Hilbert
space, because the bases $|1,x\rangle$ and $i|1,x\rangle$ (similarly,
$|1,y\rangle$ and $i|1,y\rangle$, $|1,z\rangle$ and $i|1,z\rangle$,
$|0,0\rangle$ and $i|0,0\rangle$) are not independent. In the following
discussion, if there is no further clarification, we will assume the linear
representations are defined on a Hermitian operator space. Some of these
Hermitian operators, called active operators (to be defined later), are
very important for distinguishing the different SPT phases.
The projective representations are classified by the group cohomology
$H^{2}(D_{2}+T,U_{T}(1))$. There are in total 16 different classes of
projective representations for $D_{2}+T$, as shown in Table 1. More
discussions about group cohomology and projective representation are given in
appendices A, B, D and E. The 16 classes of projective representations
correspond to 16 SPT phases. Our result agrees with the classification in Ref.
CGW1123, and the correspondence is illustrated by the indices
$(\omega(D_{2}),\beta(T),\gamma(D_{2}))$.
In all these 16 SPT phases, the bulk is gapped and we can only distinguish
them by their different edge states which are described by the projective
representations. We stress that all the properties of each SPT phase are
determined by the edge states and can be detected experimentally. The idea is
to add various perturbations that break the $D_{2}+T$ symmetry, and to see how
those perturbations split the degeneracy of the edge states.
Let us first consider the case where the space of degenerate end ‘spins’ is
2-dimensional. We have three Pauli matrices
$(\sigma_{x},\sigma_{y},\sigma_{z})$ to lift the end ‘spin’ degeneracy. Among
the various perturbations of the system, only those that reduce to the Pauli
matrices $(\sigma_{x},\sigma_{y},\sigma_{z})$ can split the degeneracy of the
ground states. These perturbations will be called active operators. To
identify whether a perturbation is an active operator, one can compare its
symmetry transformation properties under $D_{2}+T$ with those of the three
Pauli matrices $(\sigma_{x},\sigma_{y},\sigma_{z})$. For different SPT phases,
the end spin forms different projective representations of the $D_{2}+T$
group, and consequently the three Pauli matrices
$(\sigma_{x},\sigma_{y},\sigma_{z})$ form different linear representations of
$D_{2}+T$. So they correspond to different active operators in different SPT
phases.
Let $O$ be a perturbation operator; under the symmetry operation $g$ it
transforms as
$\displaystyle u(g)^{{\dagger}}Ou(g)$ $\displaystyle=$
$\displaystyle\eta_{g}(O)O,$ (1)
where $u(g)$ is the representation of symmetry transformation $g$ on the
physical spin Hilbert space, $\eta_{g}(O)$ is equal to 1 or -1 and forms a 1-D
representation of the symmetry group $D_{2}+T$. On the other hand, the three
Pauli matrices $(\sigma_{x},\sigma_{y},\sigma_{z})$ also form linear
representations of $D_{2}+T$. In the end ‘spin’ space, the Pauli matrices
transform as ($m=x,y,z$)
$\displaystyle M(g)^{\dagger}\sigma_{m}M(g)$ $\displaystyle=$
$\displaystyle\eta_{g}(\sigma_{m})\sigma_{m},$ (2)
where $M(g)$ is the projective representation of $g$ (see Table 1) on the end
‘spin’ Hilbert space. If the physical operator $O$ and the end ‘spin’ operator
$\sigma_{m}$ form the same linear representation of the symmetry group,
namely, $\eta_{g}(O)=\eta_{g}(\sigma_{m})$, then they should have the same
matrix elements (up to a constant factor) in the end spin subspace. In Table
1, the sequence of operators $(O_{1},O_{2},O_{3})$ are the active operators
corresponding to the end ‘spin’ operators
$(\sigma_{x},\sigma_{y},\sigma_{z})$, respectively.
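As a concrete illustration of this bookkeeping, the following numpy sketch (written under our own conventions, not code from the paper) takes the end-‘spin’ matrices of one phase from Table 1, here $E_{13}$, and reads off the signs $\eta_{g}(\sigma_{m})$ of Eq. (2); for the anti-unitary generator $T$ we conjugate $\sigma_{m}$ to account for the $K$ factor. Any physical perturbation transforming with the same signs under $D_{2}+T$ is then an active operator of that phase.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# End-'spin' projective representation of the E_13 (T_0) phase from Table 1.
# T is represented by M(T) K; in our convention the K factor complex-conjugates
# the operator being transformed.
reps = {"Rz": 1j * sz, "Rx": 1j * sx, "T": 1j * sy}
antiunitary = {"Rz": False, "Rx": False, "T": True}

def eta(g, sigma):
    """Sign eta_g(sigma) defined by M(g)^dagger sigma M(g) = eta * sigma, Eq. (2)."""
    M = reps[g]
    O = sigma.conj() if antiunitary[g] else sigma
    transformed = M.conj().T @ O @ M
    return int(round(np.real(np.trace(transformed @ sigma)) / 2))

for name, s in [("sigma_x", sx), ("sigma_y", sy), ("sigma_z", sz)]:
    print(name, {g: eta(g, s) for g in reps})
# A physical perturbation transforming with the same signs under D_2 + T as one
# of these Pauli matrices is an 'active operator' that splits the end states.
```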
Similarly, in the case that the end ‘spin’ is 4-dimensional, there are 15
$4\times 4$ matrices that can (partially) lift the degeneracy of the end
states, namely, $(\sigma_{x}\otimes I,\sigma_{y}\otimes I,\sigma_{z}\otimes
I,\sigma_{x}\otimes\sigma_{x},\sigma_{y}\otimes\sigma_{x},\sigma_{z}\otimes\sigma_{x},I\otimes\sigma_{x},\sigma_{x}\otimes\sigma_{y},\sigma_{y}\otimes\sigma_{y},\sigma_{z}\otimes\sigma_{y},I\otimes\sigma_{y},\sigma_{x}\otimes\sigma_{z},\sigma_{y}\otimes\sigma_{z},\sigma_{z}\otimes\sigma_{z},I\otimes\sigma_{z})$.
And the corresponding active operators are given in Table 1.
Since the active operators are perturbations that split the ground state
degeneracy, through linear response theory, they correspond to measurable
physical quantities. For example, if the spin $S_{m}$ is an active operator,
it couples to a magnetic field through the interaction
$\displaystyle
H^{\prime}=\sum_{i}\left(g_{x}\mu_{B}B_{x}S_{x,i}+g_{y}\mu_{B}B_{y}S_{y,i}+g_{z}\mu_{B}B_{z}S_{z,i}\right).$
(3)
The end ‘spins’ may be polarized by the above perturbation. In real spin-chain
materials, due to structural defects, there is a considerable number of end
‘spins’. They behave as impurity spins (the gapped bulk can be seen as a
paramagnetic material). Thus, the polarization of the end ‘spins’ can be
observed by measuring the magnetic susceptibility, which obeys the Curie law
($m=x,y,z$)
$\displaystyle\chi_{m}(T)={Ng_{m}^{2}\mu_{B}\over 3k_{B}T},$
where $N$ is the number of end ‘spins’.
Notice that different projective representations have different active
operators. Thus we can distinguish all of the 16 SPT phases experimentally.
For instance, the active operators of the $E_{1}$ and $E_{1}^{\prime}$ phases
are $(S_{z},S_{z},S_{xyz})$ and $(S_{xy},S_{xy},S_{xyz})$, respectively. Here
$S_{mn}=S_{m}S_{n}+S_{n}S_{m}$ is a spin quadrupole operator, and $S_{xyz}$ is
a third order spin operator, such as $S_{xy,i}S_{zi+1}$ or
$S_{x,i}S_{y,i+1}S_{z,i+2}$. We will show that the two SPT phases $E_{1}$ and
$E^{\prime}_{1}$ can be distinguished by the perturbation (3). In $E_{1}$
phase, the active operators contain $S_{z}$, so the end ‘spins’ respond to
$B_{z}$. Consequently, the $g$-factor $g_{z}$ is finite, but $g_{x},g_{y}=0$
(because $S_{x},S_{y}$ are not active operators). However, in the
$E_{1}^{\prime}$ phase, none of $S_{x},S_{y},S_{z}$ is active, so the end
‘spins’ do not respond to the magnetic field at all. As a consequence, all
components of the $g$-factor vanish: $g_{x},g_{y},g_{z}=0$. This difference
distinguishes the two
phases.
To completely separate all the 16 SPT phases, one needs to add perturbations
involving the spin-quadrupole operators $S_{xy},S_{yz},S_{xz}$ and the third-order spin
operators such as $S_{xy,i}S_{z,i+1}$. Actually, these perturbations may be
realized experimentally. For instance, the interaction between the spin-
quadrupole and a nonuniform magnetic field is reasonable in principle:
$H^{\prime}=g_{xy}\left(\frac{\partial B_{x}}{\partial y}+\frac{\partial
B_{y}}{\partial x}\right)S_{xy}+...$
One can then measure the ‘quadrupole susceptibility’ associated with the
above perturbation. Similar to the spin susceptibility, different SPT phases
have different coupling constants for the ‘quadrupole susceptibility’.
Consequently, from the information of the spin dipole- and quadrupole-
susceptibilities (and other information corresponding to the third-order spin
operators), all the 16 SPT phases can be distinguished.
## III Realization of SPT phases in $S=1$ spin chains and ladders
In this section, we will illustrate that all these 16 SPT phases can be
realized in $S=1$ spin chains or ladders.
### III.1 spin-chains
#### III.1.1 SPT phases for nontrivial projective representations
In Ref. LLW1180, , we have studied four nontrivial SPT phases
$T_{0},T_{x},T_{y},T_{z}$ in $S=1$ spin chains. The ground states of these
phases are written as a matrix product state (MPS)
$\displaystyle|\phi\rangle=\sum_{\\{m_{i}\\}}\mathrm{Tr}(A^{m_{1}}_{1}A^{m_{2}}_{2}...A^{m_{N}}_{N})|m_{1}m_{2}...m_{N}\rangle.$
where $m_{i}=x,y,z$. More information about MPS is given in appendix. B.
1) $T_{0}$ phase. The end ‘spins’ of this phase belong to the projective
representation $E_{13}$, and a typical MPS in this phase is
$\displaystyle A^{x}=a\sigma_{x},\ \ \ A^{y}=b\sigma_{y},\ \ \
A^{z}=c\sigma_{z},$ (4)
where $a,b,c$ are real numbers.note:SO(3) Table 1 shows that the active
operators in this phase are $S_{x},S_{y},S_{z}$, so the end spins will respond
to the magnetic field along all three directions (a numerical check of the
corresponding MPS symmetry condition is sketched after this list).
2) $T_{x}$ phase. The end ‘spins’ of this phase belong to the projective
representation $E_{11}$, and a typical MPS in this phase is
$\displaystyle A^{x}=a\sigma_{x},A^{y}=ib\sigma_{y},A^{z}=ic\sigma_{z},$ (5)
where $a,b,c$ are real numbers. Table 1 shows that there is only one active
operator $S_{x}$ in this phase, so the end spins will respond only to a
magnetic field along the $x$ direction.
3) $T_{y}$ phase. The end ‘spins’ of this phase belong to the projective
representation $E_{5}$, and a typical MPS in this phase is
$\displaystyle A^{x}=ia\sigma_{x},A^{y}=b\sigma_{y},A^{z}=ic\sigma_{z},$ (6)
where $a,b,c$ are real numbers. Table 1 shows that there is only one active
operator $S_{y}$ in this phase, so the end spins will respond only to a
magnetic field along the $y$ direction.
4) $T_{z}$ phase. The end ‘spins’ of this phase belong to the projective
representation $E_{9}$, and a typical MPS in this phase is
$\displaystyle A^{x}=ia\sigma_{x},A^{y}=ib\sigma_{y},A^{z}=c\sigma_{z},$ (7)
where $a,b,c$ are real numbers. Table 1 shows that there is only one active
operator $S_{z}$ in this phase, so the end spins will respond only to a
magnetic field along the $z$ direction.
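The symmetry condition can be checked directly for the $T_{0}$ matrices of Eq. (4). The following numpy sketch (with arbitrary nonzero $a,b,c$, the phase factor set to 1, and the standard convention that $R_{z}=e^{i\pi S_{z}}$ acts on the $S=1$ basis as $u(R_{z})=\mathrm{diag}(-1,-1,1)$) verifies that conjugation by $M(R_{z})\propto\sigma_{z}$ reproduces the rotated MPS matrices; it is an illustration, not code from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

a, b, c = 0.7, 1.1, -0.4                      # arbitrary nonzero reals
A = {"x": a * sx, "y": b * sy, "z": c * sz}   # T_0-phase MPS matrices, Eq. (4)

# R_z = exp(i*pi*S_z) acts on the S=1 basis {|x>, |y>, |z>} (S_z|z> = 0) as
# |x> -> -|x>, |y> -> -|y>, |z> -> |z>.
u_Rz = np.diag([-1.0, -1.0, 1.0])
M_Rz = sz                                     # end-'spin' matrix, up to a phase

basis = ["x", "y", "z"]
for i, m in enumerate(basis):
    lhs = sum(u_Rz[i, j] * A[n] for j, n in enumerate(basis))
    rhs = M_Rz.conj().T @ A[m] @ M_Rz
    assert np.allclose(lhs, rhs)
print("T_0 MPS satisfies the R_z symmetry condition with M(R_z) = sigma_z")
```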
#### III.1.2 SPT phases for trivial projective representations
Corresponding to the trivial projective IRs, we can also construct trivial
phases. Here ‘trivial’ means that the ground state is in some sense like a
direct product state. In these phases the matrices $A^{m}$ also transform as
in Eqs. (20) and (26), except that $A^{m}$ is a 1-D matrix and $M(g)$ is a 1-D
representation of $D_{2}+T$. Since all the 1-D representations belong to the
same class, there is only one trivial phase.
A simple example of the states in this phase is a direct product state
$|\phi\rangle=|m\rangle_{1}|m\rangle_{2}...|m\rangle_{N}.$
This state can be realized by a strong (positive) on-site single-ion
anisotropy term $(S_{m})^{2}$, $m=x,y,z$. In this phase, there is no edge
state, and no linear response to all perturbations.
### III.2 spin ladders
In the last section we realized 5 of the 16 different SPT phases (with only
$D_{2}+T$ symmetry) in $S=1$ spin chains. In this section, we will show that
all the other phases can be realized in $S=1$ ladders.
#### III.2.1 General discussion for spin ladders
For simplicity, we will consider the spin-ladder models without inter-chain
interaction.laddernote In that case, the ground state of the spin ladder is a
direct product of the ground states of the independent chains. For example,
for a two-leg ladder, the physical Hilbert space at each site is a direct
product space $\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}$ spanned by
bases $|m_{1}n_{1}\rangle=|m_{1}\rangle|n_{1}\rangle$, with
$m_{1},n_{1}=x,y,z$. If the ground state of the two chains are
$|\phi_{1}\rangle$ and $|\phi_{2}\rangle$ respectively,
$\displaystyle|\phi_{1}\rangle=\sum_{\\{m\\}}\mathrm{Tr}(A^{m_{1}}...A^{m_{N}})|m_{1}...m_{N}\rangle,$
$\displaystyle|\phi_{2}\rangle=\sum_{\\{n\\}}\mathrm{Tr}(B^{n_{1}}...B^{n_{N}})|n_{1}...n_{N}\rangle,$
(8)
with
$\displaystyle\sum_{m^{\prime}}u(g)_{mm^{\prime}}A^{m^{\prime}}=e^{i\alpha_{1}(g)}M(g)^{\dagger}A^{m}M(g),$
$\displaystyle\sum_{n^{\prime}}v(g)_{nn^{\prime}}B^{n^{\prime}}=e^{i\alpha_{2}(g)}N(g)^{\dagger}B^{n}N(g),$
(9)
for a unitary operator $\hat{g}$, and
$\displaystyle\sum_{m^{\prime}}u(T)_{mm^{\prime}}(A^{m^{\prime}})^{*}=M(T)^{\dagger}A^{m}M(T),$
$\displaystyle\sum_{n^{\prime}}v(T)_{nn^{\prime}}(B^{n^{\prime}})^{*}=N(T)^{\dagger}B^{n}N(T),$
(10)
for the time reversal operator $T$. Then the ground state of the ladder is
$\displaystyle|\phi\rangle$ $\displaystyle=$
$\displaystyle|\phi_{1}\rangle\otimes|\phi_{2}\rangle$ (11) $\displaystyle=$
$\displaystyle\sum_{\\{m,n\\}}\mathrm{Tr}(A^{m_{1}}...A^{m_{N}})\mathrm{Tr}(B^{n_{1}}...B^{n_{N}})|m_{1}n_{1}...m_{N}n_{N}\rangle$
$\displaystyle=$ $\displaystyle\sum_{\\{m,n\\}}\mathrm{Tr}[(A^{m_{1}}\otimes
B^{n_{1}})...(A^{m_{N}}\otimes B^{n_{N}})]$ $\displaystyle\ \ \ \ \ \
\times|m_{1}n_{1}...m_{N}n_{N}\rangle$
which satisfies
$\displaystyle\sum_{m,n,m^{\prime},n^{\prime}}[u(g)\otimes
v(g)]_{mn,m^{\prime}n^{\prime}}(A^{m^{\prime}}\otimes B^{n^{\prime}})$
$\displaystyle=e^{i\alpha(g)}(M\otimes N)^{\dagger}(A^{m}\otimes
B^{n})(M\otimes N)$ (12)
for a unitary $\hat{g}$ (here $\alpha(g)=\alpha_{1}(g)+\alpha_{2}(g)$), and
$\displaystyle\sum_{m,n,m^{\prime},n^{\prime}}[u(T)\otimes
v(T)]_{mn,m^{\prime}n^{\prime}}(A^{m^{\prime}}\otimes B^{n^{\prime}})^{*}$
$\displaystyle=(M\otimes N)^{\dagger}(A^{m}\otimes B^{n})(M\otimes N)$ (13)
for the time reversal operator $T$. This shows that the ground state of the
ladder is also a MPS which is represented by $A^{m}\otimes B^{n}$, and
$M\otimes N$ is a projective representation of the symmetry group $G$.
Specially, if $B^{n}$ is 1-D and $N(g)=1$ (representing a trivial phase), then
we have
$\displaystyle\sum_{m,n,m^{\prime},n^{\prime}}[u(g)\otimes
v(g)]_{mn,m^{\prime}n^{\prime}}(A^{m^{\prime}}\otimes B^{n^{\prime}})$ (14)
$\displaystyle=e^{i\alpha(g)}M^{\dagger}(A^{m}\otimes B^{n})M.$
In general the projective representation $M(g)\otimes N(g)$ is reducible. This
means that the end ‘spin’ of the ladder is a direct sum space of several
irreducible projective representations (IPRs). These IPRs are degenerate and
belong to the same class. However, this degeneracy is accidental, because only
the degeneracy within an irreducible representation is protected by symmetry
and hence robust. Notice that we did not consider inter-chain interactions in
the ladder. If such interactions are included, the degeneracy between IPRs of
the same class can be lifted, and only one IPR remains as the end ‘spin’ in
the ground state.
This IPR (or more precisely the class it belongs to) determines which phase
the spin ladder belongs to.
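As a small sanity check of the stacking rule (a numpy sketch, not from the paper), take two copies of the $E_{13}$ end-‘spin’ representation: for a single chain $M(R_{z})$ and $M(R_{x})$ anticommute (the $D_{2}$ part is projective), while on the tensor product the factors multiply, the two generators commute again, and the resulting 4-dimensional representation is reducible.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# End-'spin' matrices of the E_13 class for a single chain (Table 1)
M = {"Rz": 1j * sz, "Rx": 1j * sx}

def commutation_sign(A, B):
    """+1 if AB = BA, -1 if AB = -BA (sufficient for these generators)."""
    if np.allclose(A @ B, B @ A):
        return +1
    if np.allclose(A @ B, -(B @ A)):
        return -1
    raise ValueError("generators neither commute nor anticommute")

print(commutation_sign(M["Rz"], M["Rx"]))        # -1: single chain is projective
MM = {g: np.kron(M[g], M[g]) for g in M}         # two stacked chains
print(commutation_sign(MM["Rz"], MM["Rx"]))      # +1: the 2-cocycle factors multiply
```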
Table 2: Projective representations of the group $\bar{D}_{2}=\\{E,R_{z}T,R_{x}T,R_{y}\\}$. There are 4 classes of projective representations, meaning that the second group cohomology contains 4 elements.
class | E | $R_{y}$ | $R_{z}T$ | $R_{x}T$ | dimension | effective/active operators | spin models ($S=1$)
---|---|---|---|---|---|---|---
1 | 1 | $1$ | $K$ | $K$ | 1 | | chain(trivial phase)
| $I$ | $\sigma_{y}$ | $i\sigma_{z}K$ | $\sigma_{x}K$ | 2 | $\sigma_{x}\sim S_{z},S_{yz};\ \sigma_{y}\sim S_{y};\ \sigma_{z}\sim S_{x},S_{xy}$ | chain
2 | $I$ | $I$ | $\sigma_{y}K$ | $\sigma_{y}K$ | 2 | $\sigma_{x},\sigma_{y},\sigma_{z}\sim S_{xz}$ | ladder
| $I\otimes I$ | $I\otimes\sigma_{y}$ | $\sigma_{y}\otimes i\sigma_{z}K$ | $\sigma_{y}\otimes\sigma_{x}K$ | 4 | | ladder
3 | $I$ | $i\sigma_{z}$ | $\sigma_{y}K$ | $\sigma_{x}K$ | 2 | $\sigma_{x},\sigma_{y}\sim S_{z},S_{yz};\ \sigma_{z}\sim S_{xz}$ | chain
| $I$ | $\sigma_{y}$ | $i\sigma_{y}K$ | $iIK$ | 2 | $\sigma_{x},\sigma_{z}\sim S_{z},S_{yz};\ \sigma_{y}\sim S_{xz}$ | chain
4 | $I$ | $i\sigma_{z}$ | $\sigma_{x}K$ | $\sigma_{y}K$ | 2 | $\sigma_{x},\sigma_{y}\sim S_{x},S_{xy};\ \sigma_{z}\sim S_{xz}$ | chain
| $I$ | $\sigma_{y}$ | $iIK$ | $i\sigma_{y}K$ | 2 | $\sigma_{x},\sigma_{z}\sim S_{x},S_{xy};\ \sigma_{y}\sim S_{xz}$ | chain
Table 3: Projective representations of the group $Z_{2}+T=\\{E,R_{z},T,R_{z}T\\}$.
class | E | $R_{z}$ | $T$ | $R_{z}T$ | dimension | effective/active operators | spin models ($S=1$)
---|---|---|---|---|---|---|---
1 | 1 | $1$ | $K$ | $K$ | 1 | | chain
| $I$ | $\sigma_{y}$ | $i\sigma_{z}K$ | $\sigma_{x}K$ | 2 | $\sigma_{x}\sim S_{x},S_{y};\ \sigma_{y}\sim S_{xy};\ \sigma_{z}\sim S_{yz},S_{xz}$ | chain
2 | $I$ | $I$ | $\sigma_{y}K$ | $\sigma_{y}K$ | 2 | $\sigma_{x},\sigma_{y},\sigma_{z}\sim S_{z}$ | ladder
| $I\otimes I$ | $I\otimes\sigma_{y}$ | $\sigma_{y}\otimes i\sigma_{z}K$ | $\sigma_{y}\otimes\sigma_{x}K$ | 4 | | ladder
3 | $I$ | $i\sigma_{z}$ | $\sigma_{y}K$ | $\sigma_{x}K$ | 2 | $\sigma_{x},\sigma_{y}\sim S_{x},S_{y};\ \sigma_{z}\sim S_{z}$ | chain
| $I$ | $\sigma_{y}$ | $i\sigma_{y}K$ | $iIK$ | 2 | $\sigma_{x},\sigma_{z}\sim S_{x},S_{y};\ \sigma_{y}\sim S_{z}$ | chain
4 | $I$ | $i\sigma_{z}$ | $\sigma_{x}K$ | $\sigma_{y}K$ | 2 | $\sigma_{x},\sigma_{y}\sim S_{xz},S_{yz};\ \sigma_{z}\sim S_{z}$ | chain
| $I$ | $\sigma_{y}$ | $iIK$ | $i\sigma_{y}K$ | 2 | $\sigma_{x},\sigma_{z}\sim S_{xz},S_{yz};\ \sigma_{y}\sim S_{z}$ | chain
#### III.2.2 $S=1$ spin ladders in different SPT phases
In appendix E, we show how to obtain all the other IPRs by reducing the direct
product representations of $E_{13},E_{11},E_{5},E_{9}$. We start with these
four IPRs because the corresponding SPT phases $T_{0},T_{x},T_{y},T_{z}$ have
been realized in spin chains. Actually, the reduction procedure provides a
method to construct spin ladders from spin chains and to realize all the SPT
phases.
By putting two different spin chains (belonging to the
$T_{0},T_{x},T_{y},T_{z}$ phases) into a ladder, we obtain 6 new phases
corresponding to
$E_{1},E_{1}^{\prime},E_{3},E_{3}^{\prime},E_{7},E_{7}^{\prime}$,
respectively. If we put one more spin chain into the ladder, then we obtain 5
more new phases corresponding to
$E_{0}^{\prime},E_{5}^{\prime},E_{9}^{\prime},E_{11}^{\prime},E_{13}^{\prime}$,
respectively. Therefore, together with $T_{0},T_{x},T_{y},T_{z}$ and the
trivial phase in spin chains, we have realized all the 16 SPT phases listed in
Table 1. Furthermore, if we have translational symmetry, then from section
III.1.1 and Eq. (14), we obtain a total of $16\times 4=64$ different SPT phases in
spin ladders, in accordance with the result of Ref. CGW1107, .
## IV SPT phases for subgroups of $D_{2}+T$
From the projective representations of group $D_{2}+T$, we can easily obtain
the projective representations of its subgroups. According to Table 1, the
representation matrices for the subgroups also form a projective
representation, but usually it is reducible. By reducing these matrices, we
can obtain all the IPRs of the subgroup.
### IV.1 $\bar{D}_{2}=\\{E,R_{z}T,R_{x}T,R_{y}\\}$
This group is also a $D_{2}$ group except that half of its elements are anti-
unitary. Notice that $T$ itself is not a group element. This group has four
1-D linear representations. In Table 5 in appendix C, we list the
representation matrix elements, representational bases of physical spin and
spin operators (for $S=1$) according to each linear representation.
The projective representations of the subgroup $\bar{D}_{2}$ are shown in
Table 2. By reducing the representation matrices of $D_{2}+T$, we obtained 8
projective representations. They are classified into 4 classes. This can be
shown by calculating the corresponding 2-cocycles of these projective
representations. Two projective representations belong to the same class if
their 2-cocycles differ by a 2-coboundary (see
appendices A, B and D).
As shown in Table 2, the 2-dimensional representation in class 1 is trivial
(or linear); it belongs to the same class as the 1-D representation. This
means that the edge states in this phase are not protected by symmetry, and
the ground state degeneracy can be smoothly lifted without a phase transition.
The class-3 and class-4 nontrivial SPT phases can be realized in spin chains.
These two phases can be distinguished by magnetic fields: the phase
corresponding to the class-3 projective representation responds only to a
magnetic field along the $z$ direction, while the phase corresponding to the
class-4 projective representation responds only to a magnetic field along the
$x$ direction. The remaining two nontrivial SPT phases, of class 2, can be realized
by spin ladders.
### IV.2 $Z_{2}+T=\\{E,R_{z},T,R_{z}T\\}$
This subgroup is also a direct product group. The linear representations and
projective representations are given in Table 6 (see appendix C) and Table 3,
respectively. This group is isomorphic to
$\bar{D}_{2}=\\{E,R_{z}T,R_{x}T,R_{y}\\}$, so its projective representations
and SPT phases are in one-to-one correspondence with those in Table 2.
However, the corresponding SPT phases in Tables 3 and 2 are not the same,
because they have different responses to external perturbations.
Notice that this simple symmetry is very realistic for materials. For
example, the quasi-1D antiferromagnets CaRuO3 (Ref. CaRuO3) and NaIrO3
(Ref. NaIrO3) respect this $Z_{2}+T$ symmetry due to spin-orbit coupling.
Their ground states, if not symmetry breaking, should belong to one of the
four SPT phases listed in Table 3.
## V conclusion and discussion
In summary, through the projective representations, we studied all the 16
different SPT phases for integer spin systems that respect only
$D_{2h}=D_{2}+T$ on-site symmetry. We provided a method to measure all the SPT
orders. We showed that in different SPT phases the end ‘spins’ respond to
various perturbations differently. These perturbations include spin dipole-
(coupling to uniform magnetic fields) and quadrupole- operators (coupling to
nonuniform magnetic fields). We illustrated that the SPT orders in different
SPT phases can be observed by experimental measurements, such as the
temperature dependence of the magnetic susceptibility and asymmetric
$g$-factors. We illustrated that all the 16 SPT phases can be realized in
$S=1$ spin chains or ladders. Finally we studied the SPT phases for two
subgroups of $D_{2}+T$; one of the subgroups is the symmetry group of some
interesting materials.CaRuO3 ; NaIrO3 Certainly, our method of studying SPT
orders can be generalized to other symmetry groups.
## VI acknowledgements
We thank Ying Ran for helpful discussions. This research is supported by NSF
Grant No. DMR-1005541 and NSFC 11074140.
## Appendix A Group cohomology
We consider a finite group $G=\\{g_{1},g_{2},...\\}$ with its module space
$U_{T}(1)$. The group elements of $G$ are operators on the module space. A
$n$-cochain $\omega_{n}(g_{1},g_{2},...,g_{n})$ is a function on the group
space which maps $\otimes^{n}G\to U(1)$. The cochains can be classified with
the coboundary operator.
Suppose the cochain $\omega_{n}(g_{1},g_{2},...,g_{n})\in U(1)$, then the
coboundary operator is defined as
$\displaystyle(d\omega_{n})(g_{1},g_{2},...,g_{n+1})=\left[g_{1}\cdot\omega_{n}(g_{2},g_{3},...,g_{n+1})\right]\,\omega_{n}^{-1}(g_{1}g_{2},g_{3},...,g_{n+1})\,\omega_{n}(g_{1},g_{2}g_{3},...,g_{n+1})\cdots\omega_{n}^{(-1)^{i}}(g_{1},g_{2},...,g_{i}g_{i+1},...,g_{n+1})\cdots\omega_{n}^{(-1)^{n}}(g_{1},g_{2},...,g_{n}g_{n+1})\,\omega_{n}^{(-1)^{n+1}}(g_{1},g_{2},...,g_{n}),$
for $n\geq 1$, and
$\displaystyle(d\omega_{0})(g_{1})=\frac{g_{1}\cdot\omega_{0}}{\omega_{0}},$
(15)
for $n=0$. Here $g\cdot\omega_{n}$ denotes the group action on the module space
$U(1)$. If $g$ is a unitary operator, it acts on $U(1)$ trivially:
$g\cdot\omega_{n}=\omega_{n}$. If $g$ is anti-unitary (such as the time
reversal operator $T$), then the action is given as
$g\cdot\omega_{n}=\omega_{n}^{*}=\omega_{n}^{-1}$. We will use $U_{T}(1)$ to
denote such a module space. We note that if $G$ contains no time-reversal
transformation, then $U_{T}(1)=U(1)$.
A cochain $\omega_{n}$ satisfying $d\omega_{n}=1$ is called an $n$-cocycle. If
$\omega_{n}$ satisfies $\omega_{n}=d\omega_{n-1}$, then it is called an
$n$-coboundary. Since $d^{2}\omega=1$, a coboundary is always a cocycle. The
following are two examples of cocycle equations. The 1-cocycle equation reads
$\displaystyle\frac{g_{1}\cdot\omega_{1}(g_{2})\;\omega_{1}(g_{1})}{\omega_{1}(g_{1}g_{2})}=1.$
(16)
The 2-cocycle equation reads
$\displaystyle\frac{g_{1}\cdot\omega_{2}(g_{2},g_{3})\;\omega_{2}(g_{1},g_{2}g_{3})}{\omega_{2}(g_{1}g_{2},g_{3})\;\omega_{2}(g_{1},g_{2})}=1.$
(17)
The group cohomology is defined as $H^{n}(G,U_{T}(1))=Z^{n}/B^{n}$. Here
$Z^{n}$ is the set of $n$-cocycles and $B^{n}$ is the set of $n$-coboundaries.
If two $n$-cocycles $\omega_{n}$ and $\omega^{\prime}_{n}$ differ by an
$n$-coboundary $\tilde{\omega}_{n}$, namely
$\omega^{\prime}_{n}=\omega_{n}\tilde{\omega}_{n}^{-1}$, then they are
considered to be equivalent. The set of equivalent $n$-cocycles is called an
equivalence class. Thus, the $n$-cocycles are sorted into different
equivalence classes, and these classes form the (Abelian) cohomology group
$H^{n}(G,U_{T}(1))=Z^{n}/B^{n}$.
As an example, consider the cohomology of $Z_{2}=\\{E,\sigma\\}$, where $E$ is
the identity element and $\sigma^{2}=E$. Since this group $Z_{2}$ is unitary,
it acts on the module space trivially and $U_{T}(1)=U(1)$:
$g\cdot\omega_{n}=\omega_{n}$. From (16), the first cohomology group classifies
the 1-D representations:
$H^{1}(Z_{2},U(1))=Z_{2}.$
The second cohomology classifies the projective representations (see appendix
B). It can be shown that all the solutions of (17) are 2-coboundaries
$\omega_{2}=d\omega_{1}$. So all the 2-cocycles belong to the same class;
consequently,
$H^{2}(Z_{2},U(1))=0.$
Let us consider another example, the time reversal group $Z_{2}^{T}=\\{E,T\\}$.
Since the time reversal operator $T$ is anti-unitary, it acts on
$U_{T}(1)$ nontrivially: $T\cdot\omega_{n}=\omega_{n}^{-1}$. As a result, the
cohomology of $Z_{2}^{T}$ is different from that of $Z_{2}$:
$\displaystyle H^{1}(Z_{2}^{T},U_{T}(1))=0,$ $\displaystyle
H^{2}(Z_{2}^{T},U_{T}(1))=Z_{2}.$
The group $Z_{2}^{T}$ has two orthogonal 1-D representations (see appendix
C), but the above result shows that these two 1-D representations belong to the
same class. Furthermore, the nontrivial second group cohomology shows that
$Z_{2}^{T}$ has a nontrivial projective representation, which is well known:
$M(E)=I,\;M(T)=i\sigma_{y}K$.
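As a quick numerical illustration (our own sketch, not part of the original argument), one can verify in a few lines that the anti-unitary representation $M(T)K$ with $M(T)=i\sigma_{y}$ squares to $-I$, and that this sign cannot be removed by redefining $M(T)$ with a phase, which is what places it in the nontrivial class of $H^{2}(Z_{2}^{T},U_{T}(1))$.

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

M_T = 1j * sigma_y                     # unitary part of the anti-unitary rep M(T)K

# Applying the anti-unitary operator twice: (M(T) K)(M(T) K) = M(T) M(T)^*
T_squared = M_T @ M_T.conj()
print(np.allclose(T_squared, -I2))     # True: the nontrivial class, "T^2 = -1"

# A phase redefinition M(T) -> e^{i phi} M(T) leaves M(T) M(T)^* unchanged
phi = 0.73
T_squared_gauged = (np.exp(1j * phi) * M_T) @ (np.exp(1j * phi) * M_T).conj()
print(np.allclose(T_squared_gauged, T_squared))   # True: the sign is gauge-invariant
```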
## Appendix B Brief review of the classification of 1D SPT orders
A key tool for using local unitary transformations to study and classify 1D gapped
SPT phases is the matrix product state (MPS) representation of the ground
states. The simplest example is the $S=1$ AKLT wave function (AKL8877) in the
Haldane phase, which can be written as a $2\times 2$ MPS. Later it was shown
that in 1D all gapped many-body spin wave functions (the result was also generalized to
fermion systems) can be well approximated by an MPS, as long as the dimension
$D$ of the matrices is large enough (MPS):
$\displaystyle|\phi\rangle=\sum_{\\{m_{i}\\}}\mathrm{Tr}(A^{m_{1}}_{1}A^{m_{2}}_{2}...A^{m_{N}}_{N})|m_{1}m_{2}...m_{N}\rangle.$
(18)
Here $m$ is the index of the $d$-component physical spin, and $A^{m_{i}}_{i}$
is a $D\times D$ matrix. Provided that the system is translationally
invariant, one can set all the matrices $A^{m}$ to be the same on all sites.
In the MPS picture, it is natural to understand why projective
representations can be used to label different SPT phases. Suppose that a
system has an on-site unitary symmetry group $G$ which keeps the ground state
$|\phi\rangle$ invariant:
$\displaystyle\hat{g}|\phi\rangle=u(g)\otimes u(g)\otimes...\otimes
u(g)|\phi\rangle=(e^{i\alpha(g)})^{N}|\phi\rangle,$ (19)
where $\hat{g}\in G$ is a group element of $G$, $u(g)$ is its $d$-dimensional
(maybe reducible) representation and $e^{i\alpha(g)}$ is its 1-D
representation. We only consider the case that $u(g)$ is a linear representation
of $G$. The case that $u(g)$ forms a projective representation of $G$ (such as
a half-integer spin chain) has been studied in Refs. CGW1107 and CGW1123. Eqs. (18)
and (19) require that the matrix $A^{m}$ vary in the following
way (PBT1039; CGW1107):
$\displaystyle\sum_{m^{\prime}}u(g)_{mm^{\prime}}A^{m^{\prime}}=e^{i\alpha(g)}M(g)^{\dagger}A^{m}M(g),$
(20)
where $M(g)$ is an invertible matrix and is essential for the classification
of different SPT phases. Notice that if $M(g)$ satisfies Eq. (20), so does
$M(g)e^{i\varphi(g)}$. Since $u(g_{1}g_{2})=u(g_{1})u(g_{2})$ and
$e^{i\alpha(g_{1}g_{2})}=e^{i\alpha(g_{1})}e^{i\alpha(g_{2})}$, we obtain
$\displaystyle M(g_{1}g_{2})=M(g_{1})M(g_{2})e^{i\theta(g_{1},g_{2})}.$ (21)
The above equation shows that, up to a phase $e^{i\theta(g_{1},g_{2})}$, $M(g)$
satisfies the multiplication rule of the group. Further, $M(g)$ satisfies the
associativity condition
$M(g_{1}g_{2}g_{3})=M(g_{1}g_{2})M(g_{3})e^{i\theta(g_{1}g_{2},g_{3})}=M(g_{1})M(g_{2}g_{3})e^{i\theta(g_{1},g_{2}g_{3})}$,
or equivalently
$e^{i\theta(g_{2},g_{3})}e^{i\theta(g_{1},g_{2}g_{3})}=e^{i\theta(g_{1},g_{2})}e^{i\theta(g_{1}g_{2},g_{3})}.$
The above equation coincides with the cocycle equation (17) when $G$ is unitary. The
matrices $M(g)$ that satisfy the above conditions are called a projective
representation of the symmetry group $G$. This also makes explicit the relation
between projective representations and 2-cocycles.
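To make the symmetry condition (20) concrete, here is a small Python check of ours for the $S=1$ AKLT state. The AKLT matrices and the choices $M(R_{z})=\sigma_{z}$, $M(R_{x})=\sigma_{x}$, $e^{i\alpha(g)}=1$ are standard but supplied by us for illustration; the check confirms that a $\pi$ rotation of the physical spin is reproduced by conjugating the virtual index.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices (virtual spin-1/2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)     # sigma^+
sm = np.array([[0, 0], [1, 0]], dtype=complex)     # sigma^-

# Spin-1 operators (physical spin), basis ordered m = +1, 0, -1
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)

# AKLT MPS matrices A^m for m = +1, 0, -1 (a standard normalization, assumed here)
A = [np.sqrt(2 / 3) * sp, -np.sqrt(1 / 3) * sz, -np.sqrt(2 / 3) * sm]

def satisfies_eq20(u, M):
    """Check sum_{m'} u_{m m'} A^{m'} = M^dagger A^m M (with e^{i alpha(g)} = 1)."""
    return all(
        np.allclose(sum(u[m, mp] * A[mp] for mp in range(3)), M.conj().T @ A[m] @ M)
        for m in range(3)
    )

u_Rz = expm(1j * np.pi * Sz)    # pi rotation about z acting on the physical spin
u_Rx = expm(1j * np.pi * Sx)    # pi rotation about x

print(satisfies_eq20(u_Rz, sz))   # True: M(R_z) = sigma_z (same class as i*sigma_z)
print(satisfies_eq20(u_Rx, sx))   # True: M(R_x) = sigma_x (same class as i*sigma_x)
```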
For a projective representation, the two-element function
$e^{i\theta(g_{1},g_{2})}$ has redundant degrees of freedom. Suppose that we
introduce a phase transformation,
$M(g_{1})^{\prime}=e^{i\varphi(g_{1})}M(g_{1})$,
$M(g_{2})^{\prime}=e^{i\varphi(g_{2})}M(g_{2})$ and
$M(g_{1}g_{2})^{\prime}=e^{i\varphi(g_{1}g_{2})}M(g_{1}g_{2})$, then the
function $e^{i\theta(g_{1},g_{2})}$ becomes
$\displaystyle
e^{i\theta(g_{1},g_{2})^{\prime}}=\frac{e^{i\varphi(g_{1}g_{2})}}{e^{i\varphi(g_{1})}e^{i\varphi(g_{2})}}e^{i\theta(g_{1},g_{2})}.$
(22)
Notice that $e^{i\theta(g_{1},g_{2})^{\prime}}$ and $e^{i\theta(g_{1},g_{2})}$
differ by a 2-coboundary, so they belong to the same class. Thus, the
projective representations are classified by the second group cohomology
$H^{2}(G,U_{T}(1))$. If $M(g)$ and $\tilde{M}(g)$ belong to different (classes
of) projective representations, then they cannot be smoothly transformed into
each other; therefore the corresponding quantum states $A^{m}$ and
$\tilde{A}^{m}$ fall in different phases. In other words, the projective
representation $\omega_{2}\in H^{2}(G,U_{T}(1))$ provides a label of an SPT
phase. If the system is translationally invariant, then $e^{i\alpha(g)}\in
H^{1}(G,U_{T}(1))$ is also a label of an SPT phase. In this case, the complete
label of an SPT phase is $(\omega_{2},\alpha)$. If translational symmetry is
absent, we can regroup the matrices $A^{m}$ such that $e^{i\alpha(g)}=1$; then
each SPT phase is uniquely labeled by $\omega_{2}$.
## Appendix C Linear representations for $D_{2}+T$ and its subgroups
Generally, the 1-D linear representations of a group $G$ are classified by its
first group cohomology $H^{1}(G)$. However, there is a subtlety in choosing the
coefficients of $H^{1}(G)$. We will show that if the representation space is a
Hilbert space, the 1-D representations are characterized by $H^{1}(G,U(1))$
(or $H^{1}(G,U_{T}(1))$ if $G$ contains anti-unitary elements); while if the
representation space is a Hermitian operator space, then the 1-D
representations are characterized by $H^{1}(G,Z_{2})$ (notice that
$H^{1}(G,(Z_{2})_{T})=H^{1}(G,Z_{2})$, there is no difference whether $G$
contains anti-unitary elements or not).
Since the discussions for unitary and anti-unitary groups are very similar,
we will only consider a group $G$ which contains anti-unitary elements.
Firstly, we consider the 1-D linear representations on a Hilbert space
$\mathcal{H}$. Suppose $\phi\in\mathcal{H}$ is a basis, and $g\in G$ is an
anti-unitary element, then
$\displaystyle\hat{g}|\phi\rangle=\eta(g)K|\phi\rangle,$ (23)
where the number $\eta(g)$ is the representation of $g$. Notice that $g$ is
anti-linear, which may change the phase of $|\phi\rangle$. To see that, we
suppose $K|\phi\rangle=|\phi\rangle$, and introduce a phase transformation for
the basis $|\phi\rangle$, namely, $|\phi^{\prime}\rangle=|\phi\rangle
e^{i\theta}$. Now we choose $|\phi^{\prime}\rangle$ as the basis, then
$\displaystyle\hat{g}|\phi^{\prime}\rangle=\eta(g)e^{i2\theta}K|\phi^{\prime}\rangle,$
(24)
so the representation $\eta(g)^{\prime}=\eta(g)e^{i2\theta}$ changes
accordingly. This means that the 1-D representation of the group $G$ is
$U(1)$-valued, and is characterized by the first cohomology group
$H^{1}(G,U(1))$. In the case of $D_{2}+T$, we have
$H^{1}(D_{2}+T,U_{T}(1))=(Z_{2})^{2},$
so $D_{2}+T$ has 4 different 1-D linear representations on Hilbert space,
which can be labeled as $A,B_{1},B_{2},B_{3}$ respectively.
Now we consider the 1-D representations on a Hermitian operator space. Suppose
$O_{1},O_{2},...,O_{N}$ are orthonormal Hermitian operators satisfying
$\mathrm{Tr}(O_{m}O_{n})=\delta_{mn}$. An anti-unitary element $g\in G$ acts on
these operators as
$\displaystyle\hat{g}O_{m}=KM(g)^{\dagger}O_{m}M(g)K=\sum_{n}\zeta(g)_{mn}O_{n},$
(25)
Here $M(g)K$ is either a linear or a projective representation of $g$, while
$\zeta(g)$ is always a linear representation. Since
$[KM(g)^{\dagger}O_{m}M(g)K]^{\dagger}=KM(g)^{\dagger}O_{m}M(g)K$, we have
$[\sum_{n}\zeta(g)_{mn}O_{n}]^{\dagger}=\sum_{n}\zeta(g)_{mn}^{*}O_{n}=\sum_{n}\zeta(g)_{mn}O_{n}$,
which gives
$\zeta(g)^{*}=\zeta(g).$
The same result is obtained if $G$ is unitary. So we conclude that all
the linear representations defined on Hermitian operator space are real. Now
we focus on 1-D linear representations. Since $g$ is either unitary or anti-
unitary, we have $|\zeta(g)|=1$. On the other hand, $\zeta(g)$ must be real,
so $\zeta(g)=\pm 1$. As a result, all the 1-D linear representations on
Hermitian operator space are $Z_{2}$ valued, and they are characterized by the
first group cohomology $H^{1}(G,(Z_{2})_{T})$. For the group $D_{2}+T$,
$H^{1}(D_{2}+T,(Z_{2})_{T})=(Z_{2})^{3},$
so there are 8 different 1-D linear representations, corresponding to the 8 classes
of Hermitian operators shown in Table 4. Since all the linear
representations of $D_{2}+T$ are 1-dimensional, these 8 representations exhaust
its linear representations.
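As a small numerical illustration of Eq. (25) (our own sketch, using the Pauli matrices as the Hermitian operator basis and $M(T)=i\sigma_{y}$), the coefficients $\zeta(T)$ indeed come out purely real, here equal to $-1$ for each of $\sigma_{x},\sigma_{y},\sigma_{z}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

M_T = 1j * sy     # unitary part of the anti-unitary operator T = i sigma_y K

for O in paulis:
    # g O = K M(T)^dagger O M(T) K: conjugate by M(T), then complex-conjugate
    gO = (M_T.conj().T @ O @ M_T).conj()
    # Expand gO in the Pauli basis (orthogonal, Tr(sigma_m sigma_n) = 2 delta_mn)
    zeta = np.array([np.trace(gO @ P) / 2 for P in paulis])
    print(np.round(zeta.real, 6), np.allclose(zeta.imag, 0))   # real coefficients, all -1
```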
The above discussion is also valid for the subgroups of $D_{2}+T$. In Tables 5 and
6 we give the linear representations of its two subgroups (the number of 1-D
linear representations on Hilbert space is half of that on Hermitian operator
space).
We have shown that for 1-D linear representations defined on Hermitian
operator space, there is no difference whether a group element is unitary or
anti-unitary. This conclusion is also valid for higher dimensional linear
representations (however, if the representation space is a Hilbert space,
unitary or anti-unitary group elements will be quite different). The linear
representations on Hermitian operator space are used to define the active
operators.
For a general group $G$ with a nontrivial projective representation,
which corresponds to an SPT phase, the active operators are defined in the
following way: for a set of Hermitian operators
$O^{\mathrm{ph}}_{1},...,O^{\mathrm{ph}}_{n}$ acting on the physical spin
Hilbert space, if we can find a set of Hermitian operators
$O^{\mathrm{in}}_{1},...,O^{\mathrm{in}}_{n}$ acting on the internal-spin
Hilbert space (or the projective representation space), such that
$O^{\mathrm{ph}}$ and $O^{\mathrm{in}}$ form the same $n$-dimensional real
linear representation of $G$, then the operators $O^{\mathrm{ph}}$ are called
active operators. Different SPT phases have different sets of active operators,
so we can use the active operators to distinguish different SPT phases.
Table 4: Linear representations of $D_{2h}=D_{2}+T$ | $E$ | $R_{x}$ | $R_{y}$ | $R_{z}$ | $T$ | $R_{x}T$ | $R_{y}T$ | $R_{z}T$ | bases | operators |
---|---|---|---|---|---|---|---|---|---|---|---
$A_{g}$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | $|0,0\rangle$ | $S_{x}^{2},S_{y}^{2},S_{z}^{2}$ |
$B_{1g}$ | 1 | -1 | -1 | 1 | 1 | -1 | -1 | 1 | $i|1,z\rangle$ | $S_{xy}$ |
$B_{2g}$ | 1 | -1 | 1 | -1 | 1 | -1 | 1 | -1 | $i|1,y\rangle$ | $S_{xz}$ |
$B_{3g}$ | 1 | 1 | -1 | -1 | 1 | 1 | -1 | -1 | $i|1,x\rangle$ | $S_{yz}$ |
$A_{u}$ | 1 | 1 | 1 | 1 | -1 | -1 | -1 | -1 | $i|0,0\rangle$ | $(S_{x,i}S_{yz,i+1})$ |
$B_{1u}$ | 1 | -1 | -1 | 1 | -1 | 1 | 1 | -1 | $|1,z\rangle$ | $S_{z}$ |
$B_{2u}$ | 1 | -1 | 1 | -1 | -1 | 1 | -1 | 1 | $|1,y\rangle$ | $S_{y}$ |
$B_{3u}$ | 1 | 1 | -1 | -1 | -1 | -1 | 1 | 1 | $|1,x\rangle$ | $S_{x}$ |
Table 5: Linear representations of $\bar{D}_{2}=\\{E,R_{z}T,R_{x}T,R_{y}\\}$ | $E$ | $R_{z}T$ | $R_{x}T$ | $R_{y}$ | bases or operators |
---|---|---|---|---|---|---
$A$ | 1 | 1 | 1 | 1 | $|0,0\rangle$,$|1,y\rangle$ | $S_{y}$,$S_{x}^{2},S_{y}^{2},S_{z}^{2}$
$B_{1}$ | 1 | 1 | -1 | -1 | $|1,x\rangle$,$i|1,z\rangle$ | $S_{x}$,$S_{xy}$
$B_{2}$ | 1 | -1 | -1 | 1 | $i|0,0\rangle$,$i|1,y\rangle$ | $S_{xz}$
$B_{3}$ | 1 | -1 | 1 | -1 | $|1,z\rangle$,$i|1,x\rangle$ | $S_{z}$,$S_{yz}$
Table 6: Linear representations of $Z_{2}+T=\\{E,R_{z},T,R_{z}T\\}$ | $E$ | $R_{z}$ | $T$ | $R_{z}T$ | bases or operators |
---|---|---|---|---|---|---
$A_{g}$ | 1 | 1 | 1 | 1 | $|0,0\rangle$,$i|1,z\rangle$ | $S_{xy},S_{x}^{2},S_{y}^{2},S_{z}^{2}$
$A_{u}$ | 1 | 1 | -1 | -1 | $i|0,0\rangle$,$|1,z\rangle$ | $S_{z}$
$B_{g}$ | 1 | -1 | 1 | -1 | $i|1,x\rangle$,$i|1,y\rangle$ | $S_{yz},S_{xz}$
$B_{u}$ | 1 | -1 | -1 | 1 | $|1,x\rangle$,$|1,y\rangle$ | $S_{x},S_{y}$
## Appendix D 16 projective representations of $D_{2}+T$ group
We have shown in appendices A and B that the projective representations are
classified by the second group cohomology $H^{2}(G,U_{T}(1))$. However,
usually it is not easy to calculate the group cohomology. So we choose to
calculate the projective representations directly. In the following we give
the method through which we obtain all the 16 projective representations of
$D_{2}+T$ in Table 1.
The main trouble comes from the anti-unitarity of some symmetry operators,
such as the time reversal operator $T$. Under anti-unitary operators (such as
$T$), the matrix $A^{m}$ varies as
$\displaystyle\sum_{m^{\prime}}u(T)_{mm^{\prime}}(A^{m^{\prime}})^{*}=M(T)^{\dagger}A^{m}M(T).$
(26)
Notice that $e^{i\alpha(T)}$ is absent because we can always set it to 1 by
choosing a proper phase of $A^{m}$. To see more clearly the difference between
unitary and anti-unitary operators, we introduce a unitary transformation of
the bases of the virtual ‘spin’ such that $A^{m}$ becomes
$\bar{A}^{m}=U^{\dagger}A^{m}U$. Then for a unitary symmetry operation $g$,
Eq. (20) becomes
$\displaystyle\sum_{m^{\prime}}u(g)_{mm^{\prime}}\bar{A}^{m^{\prime}}=e^{i\alpha(g)}\bar{M}(g)^{\dagger}\bar{A}^{m}\bar{M}(g),$
where $\bar{M}(g)=U^{\dagger}M(g)U$. However, for the anti-unitary operator
$T$, $\bar{A}^{m}$ varies as
$\displaystyle\sum_{m^{\prime}}u(T)_{mm^{\prime}}(\bar{A}^{m^{\prime}})^{*}=\tilde{M}(T)^{\dagger}\bar{A}^{m}\tilde{M}(T),$
where $\tilde{M}(T)=U^{\dagger}M(T)U^{*}=U^{\dagger}[M(T)K]U$. Therefore, we
can see that $M(T)K$ as a whole is the anti-unitary projective representation
of $T$ when acting on the virtual ‘spin’ space.
Table 7: Unitary projective representations of $D_{2h}=D_{2}+T$, here we consider $T$ as an unitary operator. | $R_{z}$ | $R_{x}$ | $T$
---|---|---|---
$A_{g}$ | 1 | 1 | 1
$B_{1g}$ | 1 | -1 | 1
$B_{2g}$ | -1 | -1 | 1
$B_{3g}$ | -1 | 1 | 1
$A_{u}$ | 1 | 1 | -1
$B_{1u}$ | 1 | -1 | -1
$B_{2u}$ | -1 | -1 | -1
$B_{3u}$ | -1 | 1 | -1
$E_{1}$ | I | $i\sigma_{z}$ | $\sigma_{y}$
$E_{2}=E_{1}\otimes B_{3g}$ | -I | $i\sigma_{z}$ | $\sigma_{y}$
$E_{3}$ | $\sigma_{z}$ | I | $i\sigma_{y}$
$E_{4}=E_{3}\otimes B_{1g}$ | $\sigma_{z}$ | -I | $i\sigma_{y}$
$E_{5}$ | $i\sigma_{z}$ | $\sigma_{x}$ | I
$E_{6}=E_{5}\otimes A_{u}$ | $i\sigma_{z}$ | $\sigma_{x}$ | -I
$E_{7}$ | $\sigma_{z}$ | $i\sigma_{z}$ | $i\sigma_{x}$
$E_{8}=E_{7}\otimes B_{1g}$ | $\sigma_{z}$ | -$i\sigma_{z}$ | $i\sigma_{x}$
$E_{9}$ | $i\sigma_{z}$ | $\sigma_{x}$ | $i\sigma_{x}$
$E_{10}=E_{9}\otimes A_{u}$ | $i\sigma_{z}$ | $\sigma_{x}$ | -$i\sigma_{x}$
$E_{11}$ | $i\sigma_{z}$ | $i\sigma_{x}$ | $\sigma_{z}$
$E_{12}=E_{11}\otimes B_{3g}$ | $i\sigma_{z}$ | $i\sigma_{x}$ | -$\sigma_{z}$
$E_{13}$ | $i\sigma_{z}$ | $i\sigma_{x}$ | $i\sigma_{y}$
$E_{14}=E_{13}\otimes A_{u}$ | $i\sigma_{z}$ | $i\sigma_{x}$ | -$i\sigma_{y}$
The question is how to obtain the matrix $M(T)$. In Ref. LLW1180, we first
treated $T$ as a unitary operator and obtained 8 classes of unitary projective
representations for the group $D_{2h}$ (see Table 7). By replacing $M(T)$ with
$M(T)K$, we obtained 8 different classes of anti-unitary projective
representations. However, not all the projective representations can be
obtained this way. Since $[M(T)K]^{2}=1$ and $[M(T)K]^{2}=-1$ belong to
two different projective representations, the anti-unitary projective
representations are twice as many as the unitary ones.
Fortunately, all the remaining (anti-unitary) projective representations can
be obtained from the known ones. Notice that the direct product of any two
projective representations is still a projective representation of the group,
which can be reduced to a direct sum of several projective representations.
The reduced representations may contain new ones that are different from
the 8 known classes. Repeating this procedure (until it closes), we finally
obtain 16 different classes of projective representations (see appendix E).
Notice that the Clebsch-Gordan coefficients which reduce the product
representation should be real; otherwise they do not commute with $K$ and will
not block-diagonalize the product representation matrix of $T$ (and of other
anti-unitary symmetry operators). Because of this restriction, we obtain four
4-dimensional irreducible projective representations (IPRs) which are absent
among the unitary projective representations.
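The doubling described above is easy to check numerically. The sketch below (our own check, with the generator matrices read off from Table 7 for $E_{5}$ and $E_{13}$) evaluates two gauge-invariant quantities, the commutation phase of $M(R_{z})$ and $M(R_{x})$ and the sign of $[M(T)K]^{2}=M(T)M(T)^{*}$; the latter is what separates the two anti-unitary classes descending from one unitary class.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def invariants(Mz, Mx, Mt):
    """Two gauge-invariant phases: M(Rz)M(Rx)M(Rz)^-1 M(Rx)^-1 and [M(T)K]^2."""
    comm = Mz @ Mx @ np.linalg.inv(Mz) @ np.linalg.inv(Mx)   # proportional to identity
    t2 = Mt @ Mt.conj()                                       # also proportional to identity
    return comm[0, 0].real, t2[0, 0].real

# Generator matrices for R_z, R_x, T read off from Table 7 (K omitted)
print(invariants(1j * sz, sx, I2))            # E_5 :  (-1.0,  1.0)
print(invariants(1j * sz, 1j * sx, 1j * sy))  # E_13:  (-1.0, -1.0)
```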
## Appendix E Realization of SPT phases in $S=1$ spin ladders
From the knowledge of section III.1, together with Eqs. (III.2.1) and (14), we
can construct different SPT phases with spin ladders. From the discussion in
section III.2.1, the projective representation $M(g)\otimes N(g)$ is usually
reducible. It can be reduced to several IPRs of the same class, and this class of
projective representation determines which phase the ladder belongs to. Thus,
the decomposition of direct products of different projective representations
is important. Since the SPT phases corresponding to
$E_{13},E_{11},E_{5},E_{9}$ ($T_{0},T_{x},T_{y},T_{z}$, respectively) have
already been realized in spin chains, we first study the decompositions of the
direct products of pairs of them.
$E_{5}\otimes
E_{9}=(\sigma_{z},I,i\sigma_{x})\oplus(\sigma_{z},-I,i\sigma_{x})=E_{3}^{\prime}\oplus
E_{4}^{\prime}$;
$E_{5}\otimes
E_{11}=(I,i\sigma_{z},\sigma_{x})\oplus(-I,i\sigma_{z},\sigma_{x})=E_{1}^{\prime}\oplus
E_{2}^{\prime}$;
$E_{5}\otimes
E_{13}=(\sigma_{z},i\sigma_{z},i\sigma_{y})\oplus(\sigma_{z},-i\sigma_{z},i\sigma_{y})=E_{7}^{\prime}\oplus
E_{8}^{\prime}$;
$E_{9}\otimes
E_{11}=(\sigma_{z},i\sigma_{z},i\sigma_{x})\oplus(\sigma_{z},-i\sigma_{z},i\sigma_{x})=E_{7}\oplus
E_{8}$;
$E_{9}\otimes
E_{13}=(I,i\sigma_{z},\sigma_{y})\oplus(-I,i\sigma_{z},\sigma_{y})=E_{1}\oplus
E_{2}$;
$E_{11}\otimes
E_{13}=(\sigma_{z},I,i\sigma_{y})\oplus(\sigma_{z},-I,i\sigma_{y})=E_{3}\oplus
E_{4}$.
In the above decompositions, all the CG coefficients are real. The three matrices
in each bracket are the representation matrices of the three generators
$R_{z},R_{x},T$, respectively. We omitted the anti-unitary operator $K$ in the
representation matrix of $T$. Further, $E_{1}$ and $E_{2}$ ($E_{3}$ and
$E_{4}$, and so forth) belong to the same class of projective
representation and differ only by a phase transformation. So with spin
ladders we realize 6 SPT phases, corresponding to the projective
representations
$E_{1},E_{1}^{\prime},E_{3},E_{3}^{\prime},E_{7},E_{7}^{\prime}$.
Using these projective representations
$E_{1},E_{1}^{\prime},E_{3},E_{3}^{\prime},E_{7},E_{7}^{\prime}$, together
with $E_{13},E_{11},E_{5},E_{9}$, we can repeat the above procedure and obtain
more projective representations and their corresponding SPT phases. The results
are shown below:
$E_{1}\otimes
E_{3}=(\sigma_{z},-i\sigma_{z},i\sigma_{x})\oplus(\sigma_{z},i\sigma_{z},-i\sigma_{x})=E_{7}\oplus
E_{8}$;
$E_{1}\otimes E_{5}=(-I\otimes i\sigma_{z},I\otimes
i\sigma_{x},-\sigma_{y}\otimes\sigma_{z})=E_{11}^{\prime}$;
$E_{1}\otimes
E_{7}=(\sigma_{z},-I,i\sigma_{y})\oplus(-\sigma_{z},I,i\sigma_{y})=E_{3}\oplus
E_{4}$;
$E_{1}\otimes
E_{9}=(-i\sigma_{z},-i\sigma_{x},-i\sigma_{y})\oplus(-i\sigma_{z},i\sigma_{x},-i\sigma_{y})=E_{13}\oplus
E_{14}$;
$E_{1}\otimes E_{11}=(-I\otimes
i\sigma_{z},-I\otimes\sigma_{x},\sigma_{y}\otimes I)=E_{5}^{\prime}$;
$E_{1}\otimes
E_{13}=(-i\sigma_{z},I,-i\sigma_{x})\oplus(-i\sigma_{z},-I,-i\sigma_{x})=E_{9}\oplus
E_{10}$;
$E_{1}^{\prime}\otimes
E_{3}=(-\sigma_{z},-i\sigma_{z},-i\sigma_{y})\oplus(-\sigma_{z},i\sigma_{z},i\sigma_{y})=E_{7}^{\prime}\oplus
E_{8}^{\prime}$;
$E_{1}^{\prime}\otimes
E_{5}=(-i\sigma_{z},i\sigma_{x},-\sigma_{z})\oplus(-i\sigma_{z},i\sigma_{x},\sigma_{z})=E_{11}\oplus
E_{12}$;
$E_{1}^{\prime}\otimes
E_{7}=(-\sigma_{z},I,i\sigma_{x})\oplus(-\sigma_{z},-I,-i\sigma_{x})=E_{3}^{\prime}\oplus
E_{4}^{\prime}$;
$E_{1}^{\prime}\otimes E_{9}=(-I\otimes i\sigma_{z},I\otimes
i\sigma_{x},-i\sigma_{y}\otimes\sigma_{y})=E_{13}^{\prime}$;
$E_{1}^{\prime}\otimes
E_{11}=(-i\sigma_{z},-\sigma_{x},I)\oplus(-i\sigma_{z},-\sigma_{x},-I)=E_{5}\oplus
E_{6}$;
$E_{1}^{\prime}\otimes E_{13}=(-I\otimes
i\sigma_{z},-I\otimes\sigma_{x},-i\sigma_{y}\otimes\sigma_{x})=E_{9}^{\prime}$;
$E_{3}\otimes E_{5}=(-I\otimes
i\sigma_{z},I\otimes\sigma_{x},i\sigma_{y}\otimes\sigma_{x})=E_{9}^{\prime}$;
$E_{3}\otimes
E_{7}=(-I,i\sigma_{x},\sigma_{y})\oplus(I,i\sigma_{x},\sigma_{y})=E_{1}\oplus
E_{2}$;
$E_{3}\otimes E_{9}=(-I\otimes
i\sigma_{z},I\otimes\sigma_{x},-\sigma_{y}\otimes I)=E_{5}^{\prime}$;
$E_{3}\otimes
E_{11}=(-i\sigma_{z},i\sigma_{x},i\sigma_{y})\oplus(-i\sigma_{z},i\sigma_{x},-i\sigma_{y})=E_{13}\oplus
E_{14}$;
$E_{3}\otimes
E_{13}=(-i\sigma_{z},i\sigma_{x},-\sigma_{z})\oplus(-i\sigma_{z},-i\sigma_{x},\sigma_{z})=E_{11}\oplus
E_{12}$;
$E_{3}^{\prime}\otimes
E_{5}=(-i\sigma_{z},\sigma_{x},-i\sigma_{x})\oplus(-i\sigma_{z},-\sigma_{x},-i\sigma_{x})=E_{9}\oplus
E_{10}$;
$E_{3}^{\prime}\otimes
E_{7}=(-I,i\sigma_{x},-\sigma_{z})\oplus(I,i\sigma_{x},-\sigma_{z})=E_{1}^{\prime}\oplus
E_{2}^{\prime}$;
$E_{3}^{\prime}\otimes
E_{9}=(-i\sigma_{z},\sigma_{x},-I)\oplus(-i\sigma_{z},-\sigma_{x},I)=E_{5}\oplus
E_{6}$;
$E_{3}^{\prime}\otimes E_{11}=(-I\otimes i\sigma_{z},I\otimes
i\sigma_{x},i\sigma_{y}\otimes\sigma_{y})=E_{13}^{\prime}$;
$E_{3}^{\prime}\otimes E_{13}=(-I\otimes i\sigma_{z},I\otimes
i\sigma_{x},\sigma_{y}\otimes\sigma_{z})=E_{11}^{\prime}$;
$E_{7}\otimes E_{5}=(-I\otimes i\sigma_{z},I\otimes
i\sigma_{x},-\sigma_{y}\otimes\sigma_{y})=E_{13}^{\prime}$;
$E_{7}\otimes
E_{9}=(-i\sigma_{z},i\sigma_{x},\sigma_{z})\oplus(i\sigma_{z},-i\sigma_{x},\sigma_{z})=E_{11}\oplus
E_{12}$;
$E_{7}\otimes
E_{11}=(-i\sigma_{z},\sigma_{x},-i\sigma_{x})\oplus(i\sigma_{z},\sigma_{x},i\sigma_{x})=E_{9}\oplus
E_{10}$;
$E_{7}\otimes E_{13}=(-I\otimes
i\sigma_{z},-I\otimes\sigma_{x},-\sigma_{y}\otimes I)=E_{5}^{\prime}$;
$E_{7}^{\prime}\otimes
E_{5}=(-i\sigma_{z},i\sigma_{x},i\sigma_{y})\oplus(-i\sigma_{z},i\sigma_{x},-i\sigma_{y})=E_{13}\oplus
E_{14}$;
$E_{7}^{\prime}\otimes E_{9}=(-I\otimes\sigma_{z},I\otimes
i\sigma_{x},\sigma_{y}\otimes\sigma_{z})=E_{11}^{\prime}$;
$E_{7}^{\prime}\otimes E_{11}=(-I\otimes
i\sigma_{z},-I\otimes\sigma_{x},-i\sigma_{y}\otimes\sigma_{x})=E_{9}^{\prime}$;
$E_{7}^{\prime}\otimes
E_{13}=(-i\sigma_{z},\sigma_{x},-I)\oplus(-i\sigma_{z},\sigma_{x},I)=E_{5}\oplus
E_{6}$;
$E_{1}\otimes
E_{1}^{\prime}=(I,I,\sigma_{y})\oplus(I,-I,-\sigma_{y})=E_{0}^{\prime}\oplus
E_{0}^{\prime}$;
$E_{3}\otimes
E_{3}^{\prime}=(-I,I,-\sigma_{y})\oplus(I,I,\sigma_{y})=E_{0}^{\prime}\oplus
E_{0}^{\prime}$;
$E_{7}\otimes
E_{7}^{\prime}=(I,-I,\sigma_{y})\oplus(-I,I,\sigma_{y})=E_{0}^{\prime}\oplus
E_{0}^{\prime}$.
From the above we obtain four SPT phases corresponding to
$E_{5}^{\prime},E_{9}^{\prime},E_{11}^{\prime},E_{13}^{\prime}$, all of which
have 4-dimensional end ‘spins’. We also obtain an SPT phase corresponding to
$E_{0}^{\prime}$, which has 2-dimensional end ‘spins’.
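The reductions listed above can be spot-checked without constructing the CG matrices: a real orthogonal change of basis preserves the spectra of the generator matrices and the sign of $[M(T)K]^{2}$. The following sketch (our own necessary-condition check, not a full proof of the reduction) compares the product $E_{5}\otimes E_{9}$ with the claimed sum $E_{3}^{\prime}\oplus E_{4}^{\prime}=(\sigma_{z},I,i\sigma_{x})\oplus(\sigma_{z},-I,i\sigma_{x})$.

```python
import numpy as np
from scipy.linalg import block_diag

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def spectrum(M):
    return np.sort_complex(np.round(np.linalg.eigvals(M), 8))

# E_5 = (i sz, sx, I K) and E_9 = (i sz, sx, i sx K): generators of the product rep
prod = [np.kron(1j * sz, 1j * sz), np.kron(sx, sx), np.kron(I2, 1j * sx)]

# Claimed reduction E_3' + E_4' = (sz, I, i sx K) + (sz, -I, i sx K)
summ = [block_diag(sz, sz), block_diag(I2, -I2), block_diag(1j * sx, 1j * sx)]

for P, S in zip(prod, summ):
    print(np.allclose(spectrum(P), spectrum(S)))     # True for R_z, R_x and T

# The gauge-invariant sign [M(T)K]^2 = M(T) M(T)^* also agrees (+I on both sides)
print(np.allclose(prod[2] @ prod[2].conj(), summ[2] @ summ[2].conj()))
```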
Notice that the number of classes of unitary projective representations of
$D_{2h}$ is 8; but once $T$ is treated as anti-unitary, so that $[M(T)K]^{2}$
can be either $1$ or $-1$, we obtain 16 classes of projective representations for
$D_{2}+T$.
## References
* (1) X.-G. Wen, Phys. Rev. B 40, 7387 (1989)
* (2) X.-G. Wen, Int. J. Mod. Phys. B 4, 239 (1990)
* (3) X.-G. Wen, Phys. Rev. B 65, 165113 (2002), cond-mat/0107071
* (4) Z.-C. Gu and X.-G. Wen, Phys. Rev. B 80, 155131 (2009), arXiv:0903.1069
* (5) L. D. Landau, Phys. Z. Sowjetunion 11, 26 (1937)
* (6) V. L. Ginzburg and L. D. Landau, Zh. Ekaper. Teoret. Fiz. 20, 1064 (1950)
* (7) L. D. Landau and E. M. Lifschitz, _Statistical Physics - Course of Theoretical Physics Vol 5_ (Pergamon, London, 1958)
* (8) D. C. Tsui, H. L. Stormer, and A. C. Gossard, Phys. Rev. Lett. 48, 1559 (1982)
* (9) R. B. Laughlin, Phys. Rev. Lett. 50, 1395 (1983)
* (10) F. D. M. Haldane, Physics Letters A 93, 464 (1983)
* (11) V. Kalmeyer and R. B. Laughlin, Phys. Rev. Lett. 59, 2095 (1987)
* (12) X.-G. Wen, F. Wilczek, and A. Zee, Phys. Rev. B 39, 11413 (1989)
* (13) N. Read and S. Sachdev, Phys. Rev. Lett. 66, 1773 (1991)
* (14) X.-G. Wen, Phys. Rev. B 44, 2664 (1991)
* (15) R. Moessner and S. L. Sondhi, Phys. Rev. Lett. 86, 1881 (2001)
* (16) G. Moore and N. Read, Nucl. Phys. B 360, 362 (1991)
* (17) X.-G. Wen, Phys. Rev. Lett. 66, 802 (1991)
* (18) R. Willett, J. P. Eisenstein, H. L. Störmer, D. C. Tsui, A. C. Gossard, and J. H. English, Phys. Rev. Lett. 59, 1776 (1987)
* (19) I. P. Radu, J. B. Miller, C. M. Marcus, M. A. Kastner, L. N. Pfeiffer, and K. W. West, Science 320, 899 (2008)
* (20) X.-G. Wen, Phys. Rev. D 68, 065003 (2003), hep-th/0302201
* (21) C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005), cond-mat/0411737
* (22) C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 146802 (2005), cond-mat/0506581
* (23) B. A. Bernevig and S.-C. Zhang, Phys. Rev. Lett. 96, 106802 (2006)
* (24) J. E. Moore and L. Balents, Phys. Rev. B 75, 121306 (2007), cond-mat/0607314
* (25) L. Fu, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. 98, 106803 (2007), cond-mat/0607699
* (26) X.-L. Qi, T. Hughes, and S.-C. Zhang, Phys. Rev. B 78, 195424 (2008), arXiv:0802.3537
* (27) M. Levin and X.-G. Wen, Phys. Rev. Lett. 96, 110405 (2006), cond-mat/0510613
* (28) A. Kitaev and J. Preskill, Phys. Rev. Lett. 96, 110404 (2006)
* (29) X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B 82, 155138 (2010), arXiv:1004.3835
* (30) X.-G. Wen, Physics Letters A 300, 175 (2002), cond-mat/0110397
* (31) X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B 83, 035107 (2011), arXiv:1008.3745
* (32) N. Schuch, D. Perez-Garcia, and I. Cirac(2011), arXiv:1010.3732
* (33) X. Chen, Z.-C. Gu, and X.-G. Wen(2011), arXiv:1103.3323
* (34) M. Levin and X.-G. Wen, Phys. Rev. B 71, 045110 (2005), cond-mat/0404617
* (35) Z.-C. Gu, Z. Wang, and X.-G. Wen(2010), arXiv:1010.1517
* (36) F. Pollmann, E. Berg, A. M. Turner, and M. Oshikawa(2009), arXiv:0909.4059
* (37) Z.-X. Liu, M. Liu, and X.-G. Wen(2011), arXiv:1101.1662
* (38) E. Berg, E. G. D. Torre, T. Giamarchi, and E. Altman, Phys. Rev. B 77, 245119 (2008)
* (39) H. Li and F. D. M. Haldane, Phys. Rev. Lett. 101, 010504 (2008)
* (40) F. Verstraete, J. I. Cirac, J. I. Latorre, E. Rico, and M. M. Wolf, Phys. Rev. Lett. 94, 140601 (2005)
* (41) F. Pollmann, E. Berg, A. M. Turner, and M. Oshikawa, Phys. Rev. B 81, 064439 (2010), arXiv:0910.1811
* (42) I. Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Commun. Math. Phys. 115, 477 (1988)
* (43) G. Vidal, Phys. Rev. Lett. 91, 147902 (2003)
* (44) When $a=b=c=1$, this state is invariant under $SO(3)+T$, where $SO(3)$ is generated by $S_{x},S_{y},S_{z}$ and $T=e^{i\pi S_{y}}K$. From Ref. CGW1123, , systems with $SO(3)+T$ symmetry have 4 SPT phases. It seems strange that its subgroup $D_{2}+T$ contains more SPT phases. Actually, there are four distinct $SO(3)+T$ groups which contain $D_{2}+T$ as a subgroup. In these four groups, $T$ is always defined as $T=e^{i\pi S_{y}}K$, but the $SO(3)$ parts are different. Except for the one mentioned above, we have three additional choices: $-S_{x},S_{xz},S_{xy}$ or $S_{yz},-S_{y},S_{xy}$ or $S_{yz},S_{xz},-S_{z}$. Each of the four groups contains 4 SPT phases, so their common subgroup $D_{2}+T$ contains $4\times 4=16$ SPT phases.
* (45) Actually, provided that the symmetry group of the Hamiltonian of the ladder is $D_{2}+T$, inter-chain interactions must be considered (otherwise the symmetry group would be $(D_{2}+T)\otimes(D_{2}+T)$). Here we take the limit that the strength of the inter-chain interaction tends to zero.
* (46) Y. Shirako, H. Satsukawa, X. X. Wang, J. J. Li, Y. F. Guo, M. Arai, K. Yamaura, M. Yoshida, H. Kojitani, T. Katsumata, Y. Inaguma, K. Hiraki, T. Takahashi, M. Akaogi, arXiv:1104.1461.
* (47) M. Bremholm, S. E. Dutton, P. W. Stephens, and R. J. Cava, arXiv:1011.5125.
|
arxiv-papers
| 2011-05-30T15:50:35 |
2024-09-04T02:49:19.163810
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zheng-Xin Liu, Xie Chen, Xiao-Gang Wen",
"submitter": "Zheng-Xin Liu",
"url": "https://arxiv.org/abs/1105.6021"
}
|
1105.6023
|
arxiv-papers
| 2011-05-30T16:01:30 |
2024-09-04T02:49:19.170910
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jie Ren",
"submitter": "Jie Ren",
"url": "https://arxiv.org/abs/1105.6023"
}
|
|
1105.6086
|
# Bodily tides near spin-orbit resonances

_Extended version of a paper published in “Celestial Mechanics & Dynamical Astronomy”, Vol. 112, pp. 283 - 330 (March 2012)._
Michael Efroimsky
US Naval Observatory, Washington DC 20392 USA
e-mail: michael.efroimsky @ usno.navy.mil
###### Abstract
Spin-orbit coupling can be described in two approaches. The first method,
known as the _“MacDonald torque”_ , is often combined with a convenient
assumption that the quality factor $\,Q\,$ is frequency-independent. This
makes the method inconsistent, for the MacDonald theory tacitly fixes the
rheology of the mantle by making $\,Q\,$ scale as the inverse tidal frequency.
Spin-orbit coupling can be treated also in an approach called _“the Darwin
torque”_. While this theory is general enough to accommodate an arbitrary
frequency-dependence of $\,Q\,$, this advantage has not yet been fully
exploited in the literature, where $\,Q\,$ is often assumed constant or is set
to scale as inverse tidal frequency, the latter assertion making the Darwin
torque equivalent to a corrected version of the MacDonald torque.
However, neither a constant nor an inverse-frequency $Q$ reflects the properties
of realistic mantles and crusts, because the actual frequency-dependence is
more complex. Hence it is necessary to enrich the theory of spin-orbit
interaction with the right frequency-dependence.
We accomplish this programme for the Darwin-torque-based model near
resonances. We derive the frequency-dependence of the tidal torque from the
first principles of solid-state mechanics, i.e., from the expression for the
mantle’s compliance in the time domain. We also explain that the tidal torque
includes not only the customary, secular part, but also an oscillating part.
We demonstrate that the $\,lmpq\,$ term of the Darwin-Kaula expansion for the
tidal torque smoothly passes zero, when the secondary traverses the $\,lmpq\,$
resonance (e.g., the principal tidal torque smoothly goes through nil as the
secondary crosses the synchronous orbit).
Thus we prepare a foundation for modeling entrapment of a despinning primary
into a resonance with its secondary. The roles of the primary and secondary
may be played, e.g., by Mercury and the Sun, correspondingly, or by an icy
moon and a Jovian planet.
We also offer a possible explanation for the unexpected frequency-dependence
of the tidal dissipation rate in the Moon, discovered by LLR.
## 1 Introduction
We continue a critical examination of the tidal-torque techniques, begun in
Efroimsky & Williams (2009), where the empirical treatment by MacDonald
(1964) was considered from the viewpoint of a more general and rigorous
approach by Darwin (1879, 1880) and Kaula (1964). Referring the Reader to
Efroimsky & Williams (2009) for proofs and comments, we begin with an
inventory of the key formulae describing the spin-orbit interaction. While in
Ibid. we employed those formulae to explore tidal despinning well outside the
1:1 resonance (and in neglect of the intermediate resonances), in the current
paper we apply this machinery to the case of despinning in the vicinity of a
spin-orbit resonance.
Although the topic has been popular since the mid-sixties and has already been
addressed in books, the common models are not entirely adequate to the actual
physics. Just as in the nonresonant case discussed in Ibid., a generic problem
with the popular models of libration or of capture into a resonance is that
they employ wrong rheologies (the work by Rambaux et al. 2010 being the only
exception we know of). On top of that, the model based on the MacDonald torque
suffers from a defect stemming from a genuine inconsistency inherent in the theory
by MacDonald (1964).
As explained in Efroimsky and Williams (2009) and Williams and Efroimsky
(2012), the MacDonald theory, both in its original and corrected versions,
tacitly fixes an unphysical shape of the functional dependence $\,Q(\chi)\,$,
where $\,Q\,$ is the dissipation quality factor and $\,\chi\,$ is the tidal
frequency (Williams & Efroimsky 2012). So we base our approach on the
developments by Darwin (1879, 1880) and Kaula (1964), combining those with a
realistic law of frequency-dependence of the damping rate.
Since our main purpose is to lay the groundwork for the subsequent study of
the process of falling into a resonance, the two principal results obtained in
this paper are the following:
(a) Starting with the realistic rheological model (the expression for the
compliance in the time domain), we derive the complex Love numbers
$\,\bar{k}_{\textstyle{{}_{l}}}\,$ as functions of the frequency $\,\chi\,$,
and write down their negative imaginary parts as functions of the frequency:
$~{}\,-\,{\cal{I}}{\it{m}}\left[\,\bar{k}_{\textstyle{{}_{l}}}(\chi)\,\right]\,=\,|k_{\textstyle{{}_{l}}}(\chi)|~{}\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,$.
It is these expressions that appear as factors in the terms of the Darwin-
Kaula expansion of tides. These factors’ frequency-dependencies demonstrate a
nontrivial shape, especially near resonances. This shape plays a crucial role
in modeling of despinning in general, specifically in modeling the process of
falling into a spin-orbit resonance.
(b) We demonstrate that, beside the customary secular part, the Darwin torque
contains a usually omitted oscillating part.
## 2 Linear bodily tides
Linearity of tide means that: (a) under a static load, deformation scales
linearly, and (b) under undulatory loading, the same linear law applies,
separately, to each frequency mode. The latter implies that the deformation
magnitude at a certain frequency should depend linearly upon the tidal stress
at this frequency, and should bear no dependence upon loading at other tidal
modes. Thence the dissipation rate at that frequency will depend on the stress
at that frequency only.
### 2.1 Linearity of the tidal deformation
At a point $\mbox{{\boldmath$\vec{R}$}}=(R,\lambda,\phi)$, the potential due
to a tide-raising secondary of mass $M^{*}_{sec}$, located at
$\,{\mbox{{\boldmath$\vec{r}$}}}^{\;*}=(r^{*},\,\lambda^{*},\,\phi^{*})\,$
with $\,r^{*}\geq R\,$, is expandable over the Legendre polynomials $\,P_{\it
l}(\cos\gamma)\;$:
$\displaystyle
W(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{~{}*})$
$\displaystyle=$
$\displaystyle\sum_{{\it{l}}=2}^{\infty}~{}W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,~{}\mbox{{\boldmath$\vec{r}$}}^{~{}*})~{}=~{}-~{}\frac{G\;M^{*}_{sec}}{r^{\,*}}~{}\sum_{{\it{l}}=2}^{\infty}\,\left(\,\frac{R}{r^{~{}*}}\,\right)^{\textstyle{{}^{\it{l}}}}\,P_{\it{l}}(\cos\gamma)~{}~{}~{}~{}$
(1) $\displaystyle=$
$\displaystyle-\,\frac{G~{}M^{*}_{sec}}{r^{\,*}}\sum_{{\it{l}}=2}^{\infty}\left(\frac{R}{r^{~{}*}}\right)^{\textstyle{{}^{\it{l}}}}\sum_{m=0}^{\it
l}\frac{({\it l}-m)!}{({\it
l}+m)!}(2-\delta_{0m})P_{{\it{l}}m}(\sin\phi)P_{{\it{l}}m}(\sin\phi^{*})~{}\cos
m(\lambda-\lambda^{*})~{}~{},\quad\,\quad$
where $G=6.7\times 10^{-11}\,\mbox{m}^{3}\,\mbox{kg}^{-1}\mbox{s}^{-2}\,$ is
Newton’s gravity constant, and $\gamma\,$ is the angular separation between
the vectors ${\mbox{{\boldmath$\vec{r}$}}}^{\;*}$ and $\vec{R}$ pointing from
the primary’s centre. The latitudes $\phi,\,\phi^{*}$ are reckoned from the
primary’s equator, while the longitudes $\lambda,\,\lambda^{*}$ are reckoned
from a fixed meridian.
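As a numerical sanity check of expansion (1) (a sketch of ours; the masses, distances and angles below are arbitrary illustrative values, not parameters used in the paper), the Legendre-polynomial form and the associated-Legendre form can be compared degree by degree. The convention of `scipy.special.lpmv` (Condon-Shortley phase) is immaterial here, since each term contains a product of two associated Legendre functions.

```python
import numpy as np
from math import factorial
from scipy.special import eval_legendre, lpmv

G = 6.674e-11
# Arbitrary illustrative values (hypothetical; not parameters from the paper)
M_sec, R, r_star = 7.3e22, 1.7e6, 3.8e8          # kg, m, m
lam, phi = 0.7, 0.3                              # field-point longitude, latitude [rad]
lam_s, phi_s = -0.4, 0.1                         # secondary's longitude, latitude [rad]

cos_gamma = (np.sin(phi) * np.sin(phi_s)
             + np.cos(phi) * np.cos(phi_s) * np.cos(lam - lam_s))

for l in range(2, 6):
    # Legendre-polynomial form of the l-th term of W
    w_legendre = -G * M_sec / r_star * (R / r_star) ** l * eval_legendre(l, cos_gamma)

    # Associated-Legendre (addition-theorem) form of the same term
    s = sum((2 - (m == 0)) * factorial(l - m) / factorial(l + m)
            * lpmv(m, l, np.sin(phi)) * lpmv(m, l, np.sin(phi_s))
            * np.cos(m * (lam - lam_s)) for m in range(l + 1))
    w_addition = -G * M_sec / r_star * (R / r_star) ** l * s

    print(l, np.isclose(w_legendre, w_addition))   # True for every degree l
```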
Under the assumption of linearity, the $\,{\emph{l}}^{~{}th}$ term
$\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,~{}\mbox{{\boldmath$\vec{r}$}}^{~{}*})\,$
in the secondary’s potential causes a linear deformation of the primary’s
shape. The subsequent adjustment of the primary’s potential being linear in
the said deformation, the $\,{\emph{l}}^{~{}th}$ adjustment $\,U_{\it{l}}\,$
of the primary’s potential is proportional to $\,W_{\it{l}}\,$. The theory of
potential requires $\,U_{\it{l}}(\mbox{{\boldmath$\vec{r}$}})\,$ to fall off,
outside the primary, as $\,r^{-(\it{l}+1)}\,$. Thus the overall amendment to
the potential of the primary amounts to:
$\displaystyle U(\mbox{{\boldmath$\vec{r}$}})~{}=~{}\sum_{{\it
l}=2}^{\infty}~{}U_{\it{l}}(\mbox{{\boldmath$\vec{r}$}})~{}=~{}\sum_{{\it
l}=2}^{\infty}~{}k_{\it l}\;\left(\,\frac{R}{r}\,\right)^{{\it
l}+1}\;W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*})~{}~{}~{},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(2)
$R\,$ now being the mean equatorial radius of the primary,
$\,\mbox{{\boldmath$\vec{R}$}}\,=\,(R\,,\,\phi\,,\,\lambda)\,$ being a surface
point, $\,\mbox{{\boldmath$\vec{r}$}}\,=\,(r\,,\,\phi\,,\,\lambda)\,$ being an
exterior point located above it at a radius $\,r\,\geq\,R\,$. The coefficients
$\,k_{\it l}~{}$, called Love numbers, are defined by the primary’s rheology.
For a homogeneous incompressible spherical primary of density $\,\rho\,$,
surface gravity g, and rigidity $\,\mu\,$, the _static_ Love number of degree
$\,l\,$ is given by
$\displaystyle k_{\it l}\,=\;\frac{3}{2\,({\it
l}\,-\,1)}\;\,\frac{1}{1\;+\;A_{\it
l}}~{}~{}~{},~{}~{}~{}~{}\mbox{where}~{}~{}~{}~{}A_{\it
l}\,\equiv\;\frac{\textstyle{(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)\,\mu}}{\textstyle{{\it{l}}\,\mbox{g}\,\rho\,R}}\;=\;\frac{\textstyle{3\;(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)\,\mu}}{\textstyle{4\;{\it{l}}\,\pi\,G\,\rho^{2}\,R^{2}}}~{}~{}~{}.~{}~{}~{}$
(3)
For $\,R\,\ll\,r\,,\,r^{*}\,$, consideration of the $\,l=2\,$ input in (2)
turns out to be sufficient.¹ Special is the case of Phobos, for whose
orbital evolution the $k_{3}$ and perhaps even the $k_{4}$ terms may be
relevant (Bills et al. 2005). Another class of exceptions is constituted by
close binary asteroids. The topic is addressed by Taylor & Margot (2010), who
took into account the Love numbers up to $\,k_{6}\,$.
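For orientation, here is a minimal Python sketch of ours that evaluates the static $k_{2}$ of formula (3) for a homogeneous incompressible sphere; the values of $\mu$, $\rho$ and $R$ are illustrative placeholders only.

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2

def static_love_number(l, mu, rho, R):
    """Static Love number k_l of a homogeneous incompressible sphere, formula (3)."""
    A_l = 3 * (2 * l**2 + 4 * l + 3) * mu / (4 * l * np.pi * G * rho**2 * R**2)
    return 1.5 / ((l - 1) * (1 + A_l))

# Illustrative (assumed) values loosely appropriate for a small rocky body
mu, rho, R = 8.0e10, 3.3e3, 1.7e6                 # Pa, kg/m^3, m
print(static_love_number(2, mu, rho, R))          # dimensionless k_2, small for a rigid body
```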
These formulae apply to _static_ deformations. However an actual tide is never
static, except in the case of synchronous orbiting with a zero eccentricity
and inclination.² The case of a permanently deformed moon in a 1:1 spin-
orbit resonance falls under this description too. Recall that in the tidal
context the distorted body is taken to be the primary. So from the viewpoint
of the satellite its host planet is orbiting the satellite synchronously, thus
creating a static tide. Hence a realistic perturbing potential produced by the
secondary carries a spectrum of modes
$\,\omega_{\textstyle{{}_{{\it{l}}mpq}}}\,$ (positive or negative) numbered
with four integers $\,{\it{l}}mpq\,$ as in formula (105) below. The
perturbation causes a spectrum of stresses in the primary, at frequencies
$\,\chi_{\textstyle{{}_{{\it{l}}mpq}}}\,=\,|\omega_{\textstyle{{}_{{\it{l}}mpq}}}|\,$.
Although in a linear medium strains are generated exactly at the frequencies
of the stresses, friction makes each Fourier component of the strain fall
behind the corresponding component of the stress. Friction also reduces the
magnitude of the shape response – hence the deviation of a dynamical Love
number $\,k_{\it l}(\chi)\,$ from its static counterpart $\,k_{\it
l}\,=\,k_{\it l}(0)\,$. Below we shall explain that formulae (2 - 3) can be
easily adjusted to the case of undulatory tidal loads in a homogeneous planet
or in tidally-despinning homogeneous satellite (treated now as the primary,
with its planet playing the role of the tide-raising secondary). However
generalisation of formulae (2 - 3) to the case of a librating moon (treated
as a primary) turns out to be highly nontrivial. As we shall see, the standard
derivation by Love (1909, 1911) falls apart in the presence of the non-
potential inertial force containing the time-derivative of the primary’s
angular velocity.
The frequency-dependence of the dynamical Love numbers takes its origin in the
“inertia” of strain and, therefore, of the shape of the body. Hence the
analogy to linear circuits: the $\,{\emph{l}}^{\,th}$ components of $\,W\,$
and $\,U\,$ act as a current and voltage, while the $\,{\emph{l}}^{\,th}$ Love
number plays, up to a factor, the role of impedance. Therefore, under a
sinusoidal load of frequency $\,\chi\,$, it is convenient to replace the
actual Love number with its complex counterpart
$\displaystyle\bar{k}_{\emph{l}\,}(\chi)\;=\;|\,\bar{k}_{\emph{l}\,}(\chi)\,|\;\exp\left[\,-\;{\it
i}\,\epsilon_{\emph{l}\,}(\chi)\,\right]~{}~{}~{},$ (4)
$\epsilon_{\emph{l}\,}\,$ being the frequency-dependent phase delay of the
reaction relative to the load (Munk & MacDonald 1960, Zschau 1978). The
“minus” sign in (4) makes $\,U\,$ lag behind $\,W\,$ for a positive
$\,\epsilon_{\emph{l}\,}\,$. (So the situation resembles a circuit with a
capacitor, where the current leads voltage.)
In the limit of zero frequency, i.e., for a steady deformation, the lag should
vanish, and so should the entire imaginary part:
$\displaystyle{\cal{I}}{\it{m}}\left[\,\bar{k}_{\it
l}(0)\,\right]\;=\;|\,\bar{k}_{\emph{l}\,}(0)\,|\;\sin\epsilon_{\emph{l}\,}(0)\;=\;0\;\;\;,$
(5)
leaving the complex Love number real:
$\displaystyle\bar{k}_{\emph{l}}(0)~{}=~{}{\cal{R}}{\it{e}}\left[\,\bar{k}_{\it
l}(0)\,\right]~{}=~{}|\,\bar{k}_{\emph{l}\,}(0)\,|\;\cos\epsilon_{\emph{l}\,}(0)\;\;\;,$
(6)
and equal to the customary static Love number:
$\displaystyle\bar{k}_{\emph{l}\,}(0)\;=\,k_{\emph{l}}\;\;\;.~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(7)
Solution of the equation of motion combined with the constitutive
(rheological) equation renders the complex $\bar{k}_{\emph{l}}(\chi)$, as
explained in Appendix D.1. Once $\bar{k}_{\emph{l}}(\chi)$ is found, its
absolute value
$\displaystyle
k_{\emph{l}\,}(\chi)~{}\equiv~{}|\,\bar{k}_{\emph{l}\,}(\chi)\,|$ (8)
and negative argument
$\displaystyle\epsilon_{\it
l}(\chi)\;=\;-\;\arctan\frac{{\cal{I}}{\it{m}}\left[\,\bar{k}_{\it
l}(\chi)\,\right]}{{\cal{R}}{\it{e}}\left[\,\bar{k}_{\it l}(\chi)\,\right]}$
(9)
should be inserted into the $\,{\emph{l}}^{\,th}\,$ term of the Fourier
expansion for the tidal potential. Things get simplified when we study how the
tide, caused on the primary by a secondary, is acting on that same secondary.
In this case, the $\,{\emph{l}}^{\,th}\,$ term in the Fourier expansion
contains $\,|k_{\emph{l}\,}(\chi)|\,$ and $\,\epsilon_{\emph{l}\,}(\chi)\,$ in
the convenient combination
$\,k_{\emph{l}\,}(\chi)\,\sin\epsilon_{\emph{l}\,}(\chi)\,$, which is exactly
$\;\,-\;{\cal{I}}{\it{m}}\left[\,\bar{k}_{\it l}(\chi)\,\right]\;$.
Rigorously speaking, we should say not “the ${\emph{l}}^{\,th}$ term”, but
“the ${\emph{l}}^{\,th}$ terms”, as each ${\it l}$ corresponds to an infinite
set of positive and negative Fourier modes $\,\omega_{{\it l}mpq}\,$, the
physical forcing frequencies being $\,\chi=\chi_{{\it
l}mpq}\equiv|\omega_{{\it l}mpq}|\,$. Thus, while the functional forms of both
$|k_{\it l}(\chi)|$ and $\sin\epsilon_{\emph{l}\,}(\chi)$ depend only on ${\it
l}\,$, both functions take values that are different for different sets of
numbers $mpq$. This happens because $\chi$ assumes different values
$\chi_{{\it l}mpq}$ on these sets. Mind though that for triaxial bodies the
functional forms of $|k_{\it l}(\chi)|$ and $\sin\epsilon_{\emph{l}}(\chi)$
may depend also on $m,\,p,\,q$.
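To illustrate how $|\bar{k}_{l}(\chi)|$, $\epsilon_{l}(\chi)$ and the combination $k_{l}\sin\epsilon_{l}=-{\cal{I}}{\it{m}}[\bar{k}_{l}(\chi)]$ are obtained in practice, here is a schematic sketch of ours. It assumes a simple Maxwell rheology together with the elastic-viscoelastic correspondence principle (the static $\mu$ entering $A_{l}$ in (3) is replaced with the complex $\bar{\mu}(\chi)$); the rheological law derived later in the paper is more elaborate, so this is only a toy model.

```python
import numpy as np

G = 6.674e-11

def k_bar_maxwell(l, chi, mu, eta, rho, R):
    """Complex Love number: formula (3) with mu replaced by the complex Maxwell
    modulus (an assumption made for this toy illustration only)."""
    tau = eta / mu                                        # Maxwell time
    mu_bar = mu * (1j * chi * tau) / (1 + 1j * chi * tau)
    A_l = 3 * (2 * l**2 + 4 * l + 3) * mu_bar / (4 * l * np.pi * G * rho**2 * R**2)
    return 1.5 / ((l - 1) * (1 + A_l))

mu, eta, rho, R = 8.0e10, 1.0e21, 3.3e3, 1.7e6            # illustrative values only
for chi in (1e-11, 1e-9, 1e-7):                           # frequencies around 1/tau, rad/s
    kb = k_bar_maxwell(2, chi, mu, eta, rho, R)
    eps = -np.arctan2(kb.imag, kb.real)                   # phase lag, Eq. (9)
    print(chi, abs(kb), eps, -kb.imag)                    # |k_2|, eps_2, k_2 sin(eps_2)
```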
### 2.2 Damping of a linear tide
Beside the standard assumption $\;U_{\it
l}(\mbox{{\boldmath$\vec{r}$}})\propto W_{\it
l}(\mbox{{\boldmath$\vec{R}$}},\,\mbox{{\boldmath$\vec{r}$}}^{\,*})\;$, the
linearity condition includes the requirement that the functions $\,k_{\it
l}(\chi)\,$ and $\,\epsilon_{\textstyle{{}_{{\it{l}}}}}(\chi)\,$ be well
defined. This implies that they depend solely upon the frequency $\chi$, and
not upon the other frequencies involved. Nor shall the Love numbers or lags be
influenced by the stress or strain magnitudes at this or other frequencies.
Then, at frequency $\chi$, the mean (over a period) damping rate
$\langle\stackrel{{\scriptstyle\centerdot}}{{E}}(\chi)\rangle$ depends on the
value of $\chi$ and on the loading at that frequency, and is not influenced by
the other frequencies:
$\displaystyle\langle\,\dot{E}(\chi)\,\rangle\;=\;-\;\frac{\textstyle\chi
E_{peak}(\chi)}{\textstyle Q(\chi)}\;$ (10)
or, equivalently:
$\displaystyle\Delta
E_{cycle}(\chi)\;=\;-\;\frac{2\;\pi\;E_{peak}(\chi)}{Q(\chi)}\;\;\;,$ (11)
$\Delta E_{cycle}(\chi)\,$ being the one-cycle energy loss, and $\,Q(\chi)\,$
being the so-called quality factor.
If $\,E_{peak}(\chi)\,$ in (10 - 11) is agreed to denote the peak energy
stored at frequency $\,\chi\,$, the appropriate $Q$ factor is connected to the
phase lag $\,\epsilon(\chi)\,$ through
$\displaystyle Q^{-1}_{\textstyle{{}_{energy}}}$ $\displaystyle=$
$\displaystyle\sin|\epsilon|~{}~{}~{}.$ (12)
and not through $~{}Q^{-1}_{\textstyle{{}_{energy}}}\,=\,\tan|\epsilon|~{}$ as
often presumed (see Appendix B for explanation).
If $E_{peak}(\chi)$ is defined as the peak work, the corresponding $Q$ factor
is related to the lag via
$\displaystyle
Q^{-1}_{\textstyle{{}_{work}}}\;=\;\frac{\tan|\epsilon|}{1\;-\;\left(\;\frac{\textstyle\pi}{\textstyle
2}\;-\;|\epsilon|\;\right)\;\tan|\epsilon|}\;\;\;,~{}~{}~{}~{}~{}$ (13)
as demonstrated in Appendix B below.³ Deriving this formula in the Appendix to
Efroimsky & Williams (2009), we inaccurately termed $\,E_{peak}(\chi)\,$ the
peak energy. However, our calculation of $Q$ was carried out in the understanding
that $\,E_{peak}(\chi)\,$ is the peak work. In the limit of a small
$\,\epsilon\,$, (13) becomes
$\displaystyle Q^{-1}_{\textstyle{{}_{work}}}$ $\displaystyle=$
$\displaystyle\sin|\epsilon|\;+\;O(\epsilon^{2})\;=\;|\epsilon|\;+\;O(\epsilon^{2})\;\;\;,$
(14)
so definition (13) makes $\,1/Q\,$ a good approximation to $\,\sin\epsilon\,$
for small lags only.
For the lag approaching $\,\pi/2\,$, the quality factor defined through (12)
attains its minimum, $\,Q_{\textstyle{{}_{energy}}}=1\,$, while definition
(13) furnishes $\,Q_{\textstyle{{}_{work}}}=0\,$. The latter is not
surprising, as in the said limit no work is carried out on the system.
Linearity requires the functions $\,\bar{k}_{\emph{l}}(\chi)\,$ and therefore
also $\,\epsilon_{\emph{l}}(\chi)\,$ to be well-defined, i.e., to be
independent from all the other frequencies but $\chi$. We now see, the
requirement extends to $\,Q(\chi)\,$.
The third definition of the quality factor (offered by Goldreich 1963) is
$\,Q_{\textstyle{{}_{Goldreich}}}^{-1}=\,\tan|\epsilon|\,$. However this
definition corresponds neither to the peak work nor to the peak energy. The
existing ambiguity in definition of $Q$ makes this factor redundant, and we
mention it here only as a tribute to the tradition. As we shall see, all
practical calculations contain the products of the Love numbers by the sines
of the phase lags, $\,k_{l}\,\sin\epsilon_{l}\,$, where $\,l\,$ is the degree
of the appropriate spherical harmonic. A possible compromise between this
mathematical fact and the historical tradition of using $\,Q\,$ would be to
define the quality factor through (12), in which case the quality factor must
be equipped with the subscript $\,l\,$. (This would reflect the profound
difference between the tidal quality factors and the seismic quality factor –
see Efroimsky 2012.)
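The two definitions can be compared numerically; the short sketch below (ours) shows that $1/Q_{\rm energy}=\sin|\epsilon|$ and $1/Q_{\rm work}$ from (13) agree to $O(\epsilon^{2})$ at small lags, while for $|\epsilon|\to\pi/2$ the former tends to 1 and the latter diverges (i.e., $Q_{\rm work}\to 0$).

```python
import numpy as np

def inv_Q_energy(eps):
    return np.sin(abs(eps))                                    # definition (12)

def inv_Q_work(eps):
    e = abs(eps)
    return np.tan(e) / (1 - (np.pi / 2 - e) * np.tan(e))       # definition (13)

for eps in (0.01, 0.1, 0.5, 1.2):
    print(eps, inv_Q_energy(eps), inv_Q_work(eps))
# Small lags: the two agree to O(eps^2). As eps -> pi/2: 1/Q_energy -> 1, 1/Q_work -> infinity.
```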
## 3 Several basic facts from continuum mechanics
This section offers a condensed synopsis of the basic facts of linear
solid-state mechanics. A more detailed introduction, including a glossary and
examples, is offered in Appendix A.
### 3.1 Stationary linear deformation of isotropic incompressible media
Mechanical properties of a medium are furnished by the so-called constitutive
equation or constitutive law, which interrelates the stress tensor
$\,{\mathbb{S}}\,$ with the strain tensor $\,{\mathbb{U}}\,$ defined as
$\displaystyle{\mathbb{U}}\,\equiv\,\frac{\textstyle 1}{\textstyle
2}\,\left[\,\left(\nabla\otimes{\bf{u}}\right)\,+\,\left(\nabla\otimes{\bf{u}}\right)^{{{}^{T}}}\,\right]~{}~{}~{},$
(15)
where $\,{\bf{u}}\,$ is the vector of displacement.
As we shall consider only linear deformations, our constitutive laws will be
linear, and will be expressed by equations which may be algebraic,
differential, integral, or integro-differential.
The elastic stress $\,\stackrel{{\scriptstyle{{(e)}}}}{{\mathbb{S}}}\,$ is
related to $\,{\mathbb{U}}\,$ through the simplest constitutive equation
$\displaystyle\stackrel{{\scriptstyle{{(e)}}}}{{\mathbb{S}}}\,=\,{\mathbb{B}}~{}{\mathbb{U}}~{}~{}~{},$
(16)
${\mathbb{B}}\,$ being a four-dimensional matrix of real numbers called
elasticity moduli.
A hereditary stress $\,\stackrel{{\scriptstyle{{(h)}}}}{{\mathbb{S}}}\,$ is
connected to $\,{\mathbb{U}}\,$ as
$\displaystyle\stackrel{{\scriptstyle{{(h)}}}}{{\mathbb{S}}}\,=\,\tilde{\,\mathbb{B}}~{}{\mathbb{U}}~{}~{}~{},$
(17)
$\tilde{\,\mathbb{B}}\,$ being a four-dimensional integral-operator-valued
matrix. Its component $\,\tilde{B}_{ijkl}\,$ acts on an element $\,u_{kl}\,$
of the strain not as a mere multiplier but as an integral operator, with
integration going from $\,t\,^{\prime}=-\infty\,$ through
$\,t\,^{\prime}=t\,$. To furnish the value of
$\,\sigma_{ij}=\sum_{kl}\tilde{B}_{ijkl}\,u_{kl}\,$ at time $\,t\,$, the
operator “consumes” as arguments all the values of $\,u_{kl}(t\,^{\prime})\,$
over the interval
$~{}t\,^{\prime}\,\in\,\left(\right.\,-\,\infty,\,t\left.\right]~{}$.
The viscous stress is related to the strain through a differential operator
$\,{\mathbb{A}}\,\frac{\textstyle\partial~{}}{\textstyle\partial t}~{}$:
$\displaystyle\stackrel{{\scriptstyle{{(v)}}}}{{\mathbb{S}}}\,=\,{\mathbb{A}}~{}\frac{\partial~{}}{\partial
t}~{}{\mathbb{U}}~{}~{}~{},$ (18)
$\,{\mathbb{A}}\,$ being a four-dimensional matrix consisting of empirical
constants called viscosities.
In an isotropic medium, each of the three matrices $\,{\mathbb{B}}\,$,
$\,\tilde{\,\mathbb{B}}\,$, and $\,{\mathbb{A}}\,$ includes two terms
only. The elastic stress becomes:
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\;=\;\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,=\;3\,K\,\left(\frac{\textstyle
1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2\,\mu\,\left(\,{\mathbb{U}}\;-\;\frac{\textstyle
1}{\textstyle 3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)\quad,\quad$
(19)
with $\,K\,$ and $\,\mu\,$ being the bulk elastic modulus and the shear
elastic modulus, correspondingly, $\;{\mathbb{I}}\,$ standing for the unity
matrix, and Sp denoting the trace of a matrix:
$~{}\mbox{Sp}\,{\mathbb{U}}\,\equiv\,\sum_{i}U_{ii}\;$.
The hereditary stress becomes:
$\displaystyle\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}~{}=~{}\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+~{}\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}~{}=\;3\,\tilde{K}~{}\left(\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2\,\tilde{\mu}\,\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}~{}~{},~{}~{}$
(20)
where $\,\tilde{K}\,$ and $\,\tilde{\mu}\,$ are the bulk-modulus operator and
the shear-modulus operator, accordingly.
The viscous stress acquires the form:
$\displaystyle\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\;=\;\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,=\;3~{}\zeta\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\left(\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2\,\eta\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\left(\,{\mathbb{U}}\;-\;\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)\quad,\quad$ (21)
the quantities $\zeta$ and $\eta$ being termed the bulk viscosity and the
shear viscosity, respectively.
The term $~{}\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}~{}$ is called the volumetric part
of the strain, while $~{}{\mathbb{U}}\,-\,\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}~{}$ is called the deviatoric part.
Accordingly, in expressions (161 - 163) for the stresses, the pure-trace
terms are called volumetric, the other term being named deviatoric.
If an isotropic medium is also incompressible, the relative change of the
volume vanishes: $\,\mbox{Sp}\,{\mathbb{U}}=0\,$, and so does the expansion
rate: $\,\nabla\cdot\bf{v}\,=\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\mbox{Sp}\,{\mathbb{U}}=0\,$. Then the volumetric part of the strain
becomes zero, and so do the volumetric parts of the elastic, hereditary, and
viscous stresses. The incompressibility assumption may be applicable both to
crusty objects and to large icy moons of low porosity. At least for Iapetus,
the low-porosity assumption is likely to be correct (Castillo-Rogez et al.
2011).
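A compact numerical sketch of the volumetric/deviatoric split (our own illustration; the moduli and the strain tensor are arbitrary numbers): the deviatoric part is checked to be traceless, and the elastic stress of Eq. (19) is assembled from the two parts.

```python
import numpy as np

K_bulk, mu = 130e9, 80e9                 # illustrative bulk and shear moduli, Pa

# An arbitrary small symmetric strain tensor U
U = np.array([[ 1.0e-5,  2.0e-6, 0.0   ],
              [ 2.0e-6, -3.0e-6, 1.0e-6],
              [ 0.0,     1.0e-6, 4.0e-6]])

I3 = np.eye(3)
U_vol = np.trace(U) / 3 * I3             # volumetric part: (1/3) I Sp(U)
U_dev = U - U_vol                        # deviatoric part

print(np.isclose(np.trace(U_dev), 0.0))  # True: the deviator is traceless

# Elastic stress of Eq. (19): S = 3K * (volumetric part) + 2 mu * (deviatoric part)
S = 3 * K_bulk * U_vol + 2 * mu * U_dev
print(S)
```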
### 3.2 Approaches to modeling viscoelastic deformations.
Problems with terminology
One approach to linear deformations is to assume that the elastic, hereditary
and viscous deviatoric stresses simply sum up, each of them being linked to
the same overall deviatoric strain:
$\displaystyle\stackrel{{\scriptstyle(total)}}{{\mathbb{S}}}\,=~{}\stackrel{{\scriptstyle{{(e)}}}}{{\mathbb{S}}}\,+\,\stackrel{{\scriptstyle{{(h)}}}}{{\mathbb{S}}}\,+\,\stackrel{{\scriptstyle{{{(v)}}}}}{{\mathbb{S}}}~{}=~{}{\mathbb{B}}~{}{\mathbb{U}}~{}+~{}\tilde{\,\mathbb{B}}~{}{\mathbb{U}}~{}+~{}{\mathbb{A}}\,\frac{\partial~{}}{\partial
t}~{}{\mathbb{U}}~{}=~{}\left(\,{\mathbb{B}}~{}+~{}\tilde{\mathbb{B}}~{}+~{}{\mathbb{A}}\,\frac{\partial~{}}{\partial
t}\,\right)\;{\mathbb{U}}~{}~{}~{}.$ (22)
An alternative option, to be used in section 5.3 below, is to start with an
overall deviatoric stress, and to expand the deviatoric strain into elastic,
viscous, and hereditary parts:
$\displaystyle{\mathbb{U}}\,=\,\stackrel{{\scriptstyle(e)}}{{\mathbb{U}}}\,+\,\stackrel{{\scriptstyle(h)}}{{\mathbb{U}}}\,+\,\stackrel{{\scriptstyle(v)}}{{\mathbb{U}}}~{}~{}~{},\quad\quad\stackrel{{\scriptstyle(e)}}{{\mathbb{U}}}\,=\,\frac{1}{\mu}\,{\mathbb{S}}~{}~{}~{},\quad\quad\stackrel{{\scriptstyle(v)}}{{\mathbb{U}}}\,=\,\frac{1}{\eta}\,\int^{t}\,{\mathbb{S}}(t\,^{\prime})\,dt\,^{\prime}~{}~{}~{},\quad\quad\stackrel{{\scriptstyle(h)}}{{\mathbb{U}}}\,=\,\tilde{J}\,{\mathbb{S}}~{}~{}~{},\quad$
(23)
$\tilde{J}\,$ being an integral operator with a time-dependent kernel.
An even more general option would be to assume that both the strain and the stress comprise components of different nature – elastic, hereditary, viscous, or more complicated (plastic). Which option to choose depends upon the medium studied. The rich variety of materials offered to us by nature leaves one no chance of developing a unified theory of deformation.
As different segments of the continuum-mechanics community use different
conventions on the meaning of some terms, we offer a glossary of terms in
Appendix A. Here we would only mention that in our paper the term viscoelastic
will be applied to a model containing not only viscous and elastic terms, but
also an extra term responsible for an anelastic hereditary reaction. (A more
appropriate term viscoelastohereditary would be way too cumbersome.)
### 3.3 Evolving stresses and strains. Basic notations
In the general case, loading varies in time, so one has to deal with the
stress and strain tensors as functions of time. However, treatment of
viscoelasticity turns out to be simpler in the frequency domain, i.e., in the
language of complex rigidity and complex compliance. To this end, the stress
$\,\sigma_{\gamma\nu}\,$ and strain $\,u_{\gamma\nu}\,$ in a linear medium can
be Fourier-expanded as
$\displaystyle\sigma_{\gamma\nu}(t)\,=\,\sum_{n=0}^{\infty}\,\sigma_{\gamma\nu}(\chi_{{}_{n}})\,\cos\left[\,\chi_{{}_{n}}t+\varphi_{\sigma}(\chi_{{}_{n}})\,\right]\,=\,\sum_{n=0}^{\infty}\,{\cal{R}}{\it{e}}\left[\,\sigma_{\gamma\nu}(\chi_{{}_{n}})\;e^{\,{\it i}\chi_{{}_{n}}t\,+\,{\it i}\varphi_{\sigma}(\chi_{{}_{n}})}\,\right]$
(24a)
$\displaystyle\phantom{\sigma_{\gamma\nu}(t)\,}=\;\sum_{n=0}^{\infty}\,{\cal{R}}{\it{e}}\left[\,{\bar{\sigma}}_{\gamma\nu}(\chi_{{}_{n}})\;e^{\,{\it i}\chi_{{}_{n}}t}\,\right]~{}~{}~{},$
(24b)
$\displaystyle u_{\gamma\nu}(t)\,=\,\sum_{n=0}^{\infty}\,u_{\gamma\nu}(\chi_{{}_{n}})\,\cos\left[\,\chi_{{}_{n}}t+\varphi_{u}(\chi_{{}_{n}})\,\right]\,=\,\sum_{n=0}^{\infty}\,{\cal{R}}{\it{e}}\left[\,u_{\gamma\nu}(\chi_{{}_{n}})\;e^{\,{\it i}\chi_{{}_{n}}t\,+\,{\it i}\varphi_{u}(\chi_{{}_{n}})}\,\right]$
(25a)
$\displaystyle\phantom{u_{\gamma\nu}(t)\,}=\;\sum_{n=0}^{\infty}\,{\cal{R}}{\it{e}}\left[\,{\bar{u}}_{\gamma\nu}(\chi_{{}_{n}})\;e^{\,{\it i}\chi_{{}_{n}}t}\,\right]~{}~{}~{},$
(25b)
where the complex amplitudes are:
$\displaystyle{\bar{{\sigma}}_{\gamma\nu}}(\chi)={{{\sigma}}_{\gamma\nu}}(\chi)\,\;e^{{\it
i}\varphi_{\sigma}(\chi)}~{}~{}~{}~{}~{},~{}~{}~{}~{}~{}~{}{\bar{{u}}_{\gamma\nu}}(\chi)={{{u}}_{\gamma\nu}}(\chi)\,\;e^{{\it
i}\varphi_{u}(\chi)}~{}~{}~{},$ (26)
while the initial phases $\,\varphi_{\sigma}(\chi)\,$ and
$\,\varphi_{u}(\chi)\,$ are chosen in a manner that sets the real amplitudes
$\,\sigma_{\gamma\nu}(\chi_{\textstyle{{}_{n}}})\,$ and
$\,u_{\gamma\nu}(\chi_{\textstyle{{}_{n}}})\,$ non-negative.
We wrote the above expansions as sums over a discrete spectrum, as the
spectrum generated by tides is discrete. Generally, the sums can, of course,
be replaced with integrals over frequency:
$\displaystyle\sigma_{\gamma\nu}(t)~{}=~{}\int_{0}^{\infty}\,\bar{\sigma}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}~{}d\chi\quad\quad\mbox{and}~{}\quad~{}\quad
u_{\gamma\nu}(t)~{}=~{}\int_{0}^{\infty}\,\bar{u}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}~{}d\chi~{}~{}~{}.$ (27)
Whenever necessary, the frequency is set to approach the real axis from below:
$\,{\cal I}{\it{m}}(\chi)\rightarrow 0-\,$.
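For readers who prefer a numerical check of the above conventions, here is a minimal sketch (in Python; not part of the derivation) verifying that the cosine form and the complex-amplitude form of the expansions (24 – 27) agree. The two-mode spectrum and all numbers in it are arbitrary placeholders.

```python
import numpy as np

# Minimal check of the convention  sigma(t) = sum_n Re[ sigma_bar(chi_n) e^{i chi_n t} ],
# with sigma_bar(chi_n) = sigma(chi_n) e^{i phi_sigma(chi_n)}, eq. (26).  Two arbitrary modes.
chi = np.array([1.3, 2.7])        # positive forcing frequencies chi_n (rad/s), assumed values
amp = np.array([0.8, 0.5])        # non-negative real amplitudes sigma(chi_n)
phase = np.array([0.4, -1.1])     # initial phases phi_sigma(chi_n)

sigma_bar = amp * np.exp(1j * phase)        # complex amplitudes, eq. (26)

t = np.linspace(0.0, 10.0, 1001)
# cosine form of the expansion, eq. (24a)
sigma_cos = sum(a * np.cos(x * t + p) for a, x, p in zip(amp, chi, phase))
# complex-amplitude form, eq. (24b)
sigma_cplx = sum(np.real(sb * np.exp(1j * x * t)) for sb, x in zip(sigma_bar, chi))

assert np.allclose(sigma_cos, sigma_cplx)   # the two forms agree mode by mode
print(np.max(np.abs(sigma_cos - sigma_cplx)))
```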
### 3.4 Should we consider positive frequencies only?
At first glance, the above question appears pointless, as a negative frequency
is a mere abstraction, while physical processes go at positive frequencies.
Mathematically, a full Fourier decomposition of a real field can always be
reduced to a decomposition over positive frequencies only.
For example, the full Fourier integral for the stress can be written as
$\displaystyle\sigma_{\gamma\nu}(t)\,=~{}\int_{-\infty}^{\infty}\,\bar{s}_{\gamma\nu}(\omega)~{}e^{\textstyle{{}^{\,{\it
i}\omega
t}}}d\omega\,=\,\int_{0}^{\infty}\left[\,\bar{s}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}+~{}\bar{s}_{\gamma\nu}(-\chi)~{}e^{\textstyle{{}^{\,-\,{\it i}\chi
t}}}\,\right]\,d\chi~{}~{}~{},$ (28)
where we define $\,\chi\,\equiv\,|\,\omega\,|\,$. Denoting complex conjugation
with asterisk, we write:
$\displaystyle\sigma^{*}_{\gamma\nu}(t)\,=\int_{0}^{\infty}\left[\,\bar{s}^{\,*}_{\gamma\nu}(-\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}+~{}\bar{s}^{\,*}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,-\,{\it
i}\chi t}}}\,\right]\,d\chi~{}~{}~{}.$ (29)
The stress is real: $\,\sigma^{*}_{\gamma\nu}(t)\,=\,\sigma_{\gamma\nu}(t)\,$.
Equating the right-hand sides of (28) and (29), we obtain
$\displaystyle\bar{s}_{\gamma\nu}(-\chi)\,=\,\bar{s}^{\,*}_{\gamma\nu}(\chi)~{}~{}~{},$
(30)
whence
$\displaystyle\sigma_{\gamma\nu}(t)\,=\int_{0}^{\infty}\left[\,\bar{s}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}+~{}\bar{s}^{\,*}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,-\,{\it
i}\chi
t}}}\,\right]\,d\chi~{}=~{}{\cal{R}}{\it{e}}\int_{0}^{\infty}2~{}\bar{s}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it
i}\chi t}}}~{}d\chi~{}~{}~{}.$ (31)
This leads us to (27), if we set
$\displaystyle\bar{\sigma}_{\gamma\nu}(\chi)\,=\,2\,\bar{s}_{\gamma\nu}(\chi)~{}~{}~{}.$
(32)
While the switch from
$~{}\sigma_{\gamma\nu}(t)\,=~{}\int_{-\infty}^{\infty}\,\bar{s}_{\gamma\nu}(\omega)~{}e^{\textstyle{{}^{\,{\it i}\omega t}}}d\omega~{}$ to the expansion
$~{}\sigma_{\gamma\nu}(t)\,=~{}\int_{0}^{\infty}\,\bar{\sigma}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it i}\chi t}}}d\chi~{}$ makes things simpler, the simplification comes at a cost,
as we shall see in a second.
Recall that the tide can be expanded over the modes
$\displaystyle\omega_{{\it l}mpq}~{}\equiv~{}({\it
l}-2p)\;\dot{\omega}\,+\,({\it
l}-2p+q)\;\dot{\cal{M}}\,+\,m\;(\dot{\Omega}\,-\,\dot{\theta})~{}\approx~{}({\it
l}-2p+q)\,n~{}-~{}m\,\dot{\theta}~{}~{}~{},~{}~{}~{}$ (33)
each of which can assume positive or negative values, or be zero. Here $l$,
$m$, $p$, $q$ are some integers, ${\theta}$ is the primary’s sidereal angle,
$\dot{\theta}$ is its spin rate, while $\omega$, $\Omega$, ${\cal M}$ and $n$
are the secondary’s periapse, node, mean anomaly, and mean motion. The
appropriate tidal frequencies, at which the medium gets loaded, are given by
the absolute values of the tidal modes:
$\,\chi_{\textstyle{{}_{lmpq}}}\equiv\,|\,\omega_{\textstyle{{}_{lmpq}}}\,|\,$.
The positively-defined forcing frequencies $\,\chi_{\textstyle{{}_{lmpq}}}\,$ are the actual physical frequencies at which the $\,lmpq\,$ terms in the expansion for the tidal potential (or stress or strain) oscillate.
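As a concrete illustration, the short sketch below (with made-up values of the mean motion and spin rate) evaluates the approximate modes (33) and the corresponding positively-defined forcing frequencies for a few quadruples $\,lmpq\,$. The sign of each mode, printed alongside, is the piece of information whose physical role is discussed next.

```python
import numpy as np

# Evaluate the approximate tidal modes  omega_lmpq ~ (l - 2p + q) n - m theta_dot  of eq. (33)
# and the physical forcing frequencies chi_lmpq = |omega_lmpq|.  n and theta_dot are placeholders.
n = 2.0e-5          # secondary's mean motion, rad/s (assumed value)
theta_dot = 7.0e-5  # primary's spin rate, rad/s (assumed; the secondary is above synchronism)

def tidal_mode(l, m, p, q, n, theta_dot):
    """Approximate Kaula tidal mode and its absolute value (the forcing frequency)."""
    omega = (l - 2 * p + q) * n - m * theta_dot
    return omega, abs(omega)

for (l, m, p, q) in [(2, 2, 0, 0), (2, 2, 0, 1), (2, 0, 1, 1), (2, 1, 0, 0)]:
    omega, chi = tidal_mode(l, m, p, q, n, theta_dot)
    # the sign of omega_lmpq distinguishes lagging from advancing components, as discussed below
    print(f"lmpq={l}{m}{p}{q}:  omega={omega:+.2e} rad/s,  chi={chi:.2e} rad/s,  sgn={np.sign(omega):+.0f}")
```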
The motivation for keeping also the modes $\,\omega_{\textstyle{{}_{lmpq}}}\,$
is subtle: it depends upon the sign of $\,\omega_{\textstyle{{}_{lmpq}}}\,$
whether the $\,lmpq\,$ component of the tide lags or advances. Specifically,
the phase lag between the $\,lmpq\,$ component of the perturbed primary’s
potential $\,U\,$ and the $\,lmpq\,$ component of the tide-raising potential
$\,W\,$ generated by the secondary is given by
$\displaystyle\epsilon_{\textstyle{{}_{lmpq}}}~{}=~{}\omega_{\textstyle{{}_{lmpq}}}~{}\Delta
t_{\textstyle{{}_{lmpq}}}~{}=~{}|\,\omega_{\textstyle{{}_{lmpq}}}\,|~{}\Delta
t_{\textstyle{{}_{lmpq}}}~{}\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}=~{}\chi_{\textstyle{{}_{lmpq}}}~{}\Delta
t_{\textstyle{{}_{lmpq}}}~{}\,\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}~{}~{},$
(34)
where the time lag $~{}\Delta t_{\textstyle{{}_{lmpq}}}~{}$ is always
positive.
While the lag between the applied stress and resulting strain in a sample of a
medium is always positive, the case of tides is more complex: there, the lag
can be either positive or negative. This, of course, in no way implies any violation of causality (the time lag $~{}\Delta t_{\textstyle{{}_{lmpq}}}~{}$ is always positive). Rather, this is about the
directional difference between the planetocentric positions of the tide-
raising body and the resulting bulge. For example, the principal component of
the tide, $\,lmpq=2200\,$, stays behind (has a positive phase lag
$\,\epsilon_{\textstyle{{}_{2200}}}\,$) when the secondary is below the
synchronous orbit, and advances (has a negative phase lag
$\,\epsilon_{\textstyle{{}_{2200}}}\,$) when the secondary is at a higher
orbit. To summarise, decomposition of a tide over both positive and negative
modes $\,\omega_{\textstyle{{}_{lmpq}}}\,$ (and not just over the positive
frequencies $\,\chi_{\textstyle{{}_{lmpq}}}\,$) does have a physical meaning,
as the sign of a mode $\,\omega_{\textstyle{{}_{lmpq}}}\,$ carries physical
information.
Thus we arrive at the following conclusions:
* 1.
As the fields emerging in the tidal theory – the tidal potential, stress, and
strain – are all real, their expansions in the frequency domain may, in
principle, be written down using the positive frequencies $\,\chi\,$ only.
* 2.
In the tidal theory, the potential (and, consequently, the tidal torque and
force) contain components corresponding to the tidal modes
$\,\omega_{\textstyle{{}_{lmpq}}}\,$ of both the positive and negative signs.
While the $\,lmpq\,$ components of the potential, stress, and strain oscillate
at the positive frequencies
$\,\chi_{\textstyle{{}_{lmpq}}}\,=\,|\omega_{\textstyle{{}_{lmpq}}}|~{}$, the
sign of each $\omega_{\textstyle{{}_{lmpq}}}$ does carry physical information:
it distinguishes whether the lagging of the $\,lmpq\,$ component of the bulge
is positive or negative (falling behind or advancing). Accordingly, this sign
enters explicitly the expression for the appropriate component of the torque
or force. Hence a consistent tidal theory should be developed through
expansions over both positive and negative tidal modes
$\,\omega_{\textstyle{{}_{lmpq}}}\,$ and not just over the positive
$\,\chi_{\textstyle{{}_{lmpq}}}\,$.
* 3.
In order to rewrite the tidal theory in terms of the positively-defined
frequencies $~{}\,\chi_{\textstyle{{}_{lmpq}}}\,$ only, one must insert “by hand” the extra multipliers
$\displaystyle\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}=~{}\mbox{sgn}\left[~{}({\it
l}-2p+q)\,n~{}-~{}m\,\dot{\theta}~{}\right]$ (35)
into the expressions for the $\,lmpq\,$ components of the tidal torque and
force.
* 4.
One can employ a rheological law (constitutive equation interconnecting the
strain and stress) and a Navier-Stokes equation (the second law of Newton for
an element of a viscoelastic medium), to calculate the phase lag
$\,\epsilon_{\textstyle{{}_{lmpq}}}\,$ of the primary’s potential
$\,U_{\textstyle{{}_{lmpq}}}\,$ relative to the potential
$\,W_{\textstyle{{}_{lmpq}}}\,$ generated by the secondary. If both these
equations are expanded, in the frequency domain, via positively-defined
forcing frequencies $\,\chi_{\textstyle{{}_{lmpq}}}\,$ only, the resulting
phase lag, too, will emerge as a function of
$\,\chi_{\textstyle{{}_{lmpq}}}\,$:
$\displaystyle\epsilon_{\textstyle{{}_{lmpq}}}\,=\,\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})~{}~{}~{}.$
(36)
Within this treatment, one has to equip the lag, “by hand”, with the
multiplier (35).
As we saw above, the lag (36) is the argument of the complex Love number
$\,\bar{k}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})\,$. Solution of
the constitutive and Navier-Stokes equations renders the complex Love numbers,
from which one can calculate the lags. Hence the above item [4] may be
rephrased in the following manner:
* 4′.
Under the convention that
$\,U_{\textstyle{{}_{lmpq}}}=U(\chi_{\textstyle{{}_{lmpq}}})\,$ and
$\,W_{\textstyle{{}_{lmpq}}}=W(\chi_{\textstyle{{}_{lmpq}}})\,$, we have:
$\displaystyle U_{\textstyle{{}_{lmpq}}}\;=\;\bar{k}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})\,W_{\textstyle{{}_{lmpq}}}~{}~{}~{}\quad\mbox{when}\quad\omega_{\textstyle{{}_{lmpq}}}>\,0~{},~{}~{}\mbox{i.e.}\,,\,~{}\mbox{when}\quad\omega_{\textstyle{{}_{lmpq}}}~{}=~{}\chi_{\textstyle{{}_{lmpq}}}~{}~{},$
(37a)
$\displaystyle U_{\textstyle{{}_{lmpq}}}\;=\;\bar{k}^{\,*}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})\,W_{\textstyle{{}_{lmpq}}}~{}~{}\quad\mbox{when}\quad\omega_{\textstyle{{}_{lmpq}}}<\,0~{},~{}~{}\mbox{i.e.}\,,\,~{}\mbox{when}\quad\omega_{\textstyle{{}_{lmpq}}}=\,-\,\chi_{\textstyle{{}_{lmpq}}}~{},$
(37b)
the asterisk denoting complex conjugation.
This ugly convention, a switch from $\,\bar{k}_{\textstyle{{}_{l}}}\,$ to
$\,\bar{k}^{\,*}_{\textstyle{{}_{l}}}\,$, is the price we pay for employing
only the positive frequencies in our expansions, when solving the constitutive
and Navier-Stokes equations, to find the Love number. In other words, this is
a price for our pretending that $\,W_{\textstyle{{}_{lmpq}}}\,$ and
$\,U_{\textstyle{{}_{lmpq}}}\,$ are functions of
$\,\chi_{\textstyle{{}_{lmpq}}}~{}$ – whereas in reality they are functions of
$\,\omega_{\textstyle{{}_{lmpq}}}\,$.
Alternative to this would be expanding the stress, strain, and the potentials
over the positive and negative modes $\,\omega_{\textstyle{{}_{lmpq}}}\,$,
with the negative frequencies showing up in the equations. With the convention
that $\,U_{\textstyle{{}_{lmpq}}}=U(\omega_{\textstyle{{}_{lmpq}}})\,$ and
$\,W_{\textstyle{{}_{lmpq}}}=W(\omega_{\textstyle{{}_{lmpq}}})\,$, we would
have
$\displaystyle U_{\textstyle{{}_{lmpq}}}\;=\;\bar{k}_{\textstyle{{}_{l}}}(\omega_{\textstyle{{}_{lmpq}}})\,W_{\textstyle{{}_{lmpq}}}~{}~{}~{}~{},\quad\mbox{for all}\quad\omega_{\textstyle{{}_{lmpq}}}~{}~{}~{}.$ (38)
All these details can be omitted at the despinning stage, if one keeps only
the leading term of the torque and ignores the other terms. Things change,
though, when one takes these other terms into account. On crossing of an
$lmpq$ resonance, factor (35) will change its sign. Accordingly, the $lmpq$
term of the tidal torque (and of the tidal force) will change its sign too.
### 3.5 The complex rigidity and compliance. Stress-strain relaxation
The stress cannot be obtained by means of an integral operator that would map
the past history of the strain, $\,{\mathbb{U}}(t\,^{\prime})\,$ over
$\,t\,^{\prime}\,\in\,\left(\right.-\infty,\,t\,\left.\right]\,$, to the value
of $\,{\mathbb{S}}\,$ at time $\,t\,$. The insufficiency of such an operator
is evident from the presence of a time-derivative on the right-hand side of
(18). Exceptional are the cases of no viscosity (e.g., a purely elastic
material).
On the other hand, we expect, on physical grounds, that the operator
$\,\hat{J}\,$ inverse to $\,\hat{\mu}\,$ is an integral operator. In other
words, we assume that the current value of the strain depends only on the
present and past values taken by the stress and not on the current rate of
change of the stress. This assumption works for weak deformations, i.e.,
insofar as no plasticity shows up. So we assume that the operator
$\,\hat{J}\,$ mapping the stress to the strain is just an integral operator.
Since the forced medium “remembers” the history of loading, the strain at time
$\,t\,$ must be a sum of small installments $\,\frac{\textstyle 1}{\textstyle
2}\,J(t-t\,^{\prime})\,d{\sigma}_{\gamma\nu}(t\,^{\prime})\,$, each of which
stems from a small change $\,d{\sigma}_{\gamma\nu}(t\,^{\prime})\,$ of the stress at an earlier time $\,t\,^{\prime}\,<\,t$. The entire history of the past loading
results, at the time $\,t\,$, in a total strain $\,u_{\gamma\nu}(t)\,$
rendered by an integral operator $\,\hat{J}(t)\,$ acting on the entire
function $\,\sigma_{\gamma\nu}(t\,^{\prime})\,$ and not on its particular
value (Karato 2008):
$\displaystyle
2\,u_{\gamma\nu}(t)\;=\;\hat{J}(t)\;\sigma_{\gamma\nu}\;=\;\int^{\infty}_{0}J(\tau)\;\stackrel{{\scriptstyle\centerdot}}{{\sigma}}_{\gamma\nu}(t-\tau)\;d\tau\;=\;\int^{t}_{-\infty}J(t\,-\,t\,^{\prime})\;\stackrel{{\scriptstyle\centerdot}}{{\sigma}}_{\gamma\nu}(t\,^{\prime})\;dt\,^{\prime}\;\;\;,\;\;\;$
(39)
where $\,t\,^{\prime}\,$ is some earlier time ($\,t\,^{\prime}<t\,$), overdot
denotes $\,d/dt\,^{\prime}\,$, while the “age variable” $\tau=t-t\,^{\prime}$
is reckoned from the current moment $\,t\,$ and is aimed back into the past.
The so-defined integral operator $\,\hat{J}(t)\,$ is called the _compliance
operator_ , while its kernel $\,J(t-t\,^{\prime})\,$ goes under the name of
the compliance function or the creep-response function.
Integrating (39) by parts, we recast the compliance operator into the form of
$\displaystyle
2\,u_{\gamma\nu}(t)\;=\;\hat{J}(t)~{}\sigma_{\gamma\nu}~{}=~{}J(0)\;\sigma_{\gamma\nu}(t)\;-\;J(\infty)\;\sigma_{\gamma\nu}(-\infty)~{}+~{}\int^{\infty}_{0}\stackrel{{\scriptstyle~{}\centerdot}}{{J}}(\tau)~{}{\sigma}_{\gamma\nu}(t-\tau)~{}d\tau~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(40a)
$\displaystyle\left.~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\right.=~{}J(0)\;\sigma_{\gamma\nu}(t)\;-\;J(\infty)\;\sigma_{\gamma\nu}(-\infty)~{}+~{}\int^{t}_{-\infty}\stackrel{{\scriptstyle\;\centerdot}}{{J}}(t\,-\,t\,^{\prime})~{}{\sigma}_{\gamma\nu}(t\,^{\prime})~{}dt\,^{\prime}~{}~{}\,.~{}~{}~{}~{}\,$
(40b)
The quantity $\,J(\infty)\,$ is the relaxed compliance. Being the asymptotic
value of $\,J(t-t\,^{\prime})\,$ at $\,t-t\,^{\prime}\rightarrow\,\infty\,$,
this parameter corresponds to the strain after complete relaxation. The load
in the infinite past may be assumed zero, and the term
$~{}-\,J(\infty)\;\sigma_{\gamma\nu}(-\infty)\;$ may be dropped.
The second important quantity emerging in (40) is the unrelaxed compliance
$\,J(0)\,$, which is the value of the compliance function
$\,J(t-t\,^{\prime})\,$ at $\,t-t\,^{\prime}=\,0\,$. This parameter describes
the instantaneous reaction to stressing, and thus defines the elastic part of
the deformation (the rest of the deformation being viscous and hereditary).
Thus the term containing the unrelaxed compliance $\,J(0)\,$ should be kept. The term, though, can be absorbed into the integral if we agree that the elastic contribution enters the compliance function not as
$\displaystyle J(t-t\,^{\prime})\,=\,J(0)\,+\,\mbox{viscous and hereditary terms}~{}~{}~{},$ (41)
but as
$\displaystyle J(t-t\,^{\prime})\,=\,J(0)\,\Theta(t\,-\,t\,^{\prime})\,+\,\mbox{viscous and hereditary terms}~{}~{}~{},$ (42)
the Heaviside step-function $\,\Theta(t\,-\,t\,^{\prime})\,$ being unity for $\,t-t\,^{\prime}\,\geq\,0\,$, and zero for $\,t-t\,^{\prime}\,<\,0\,$.444 Expressing the stress through the strain, we encountered three possibilities: the elastic stress was simply proportional to the strain, the viscous stress was proportional to the time-derivative of the strain, while the hereditary stress was expressed by an integral operator $\,\tilde{\mu}\,$. However, when we express the strain through the stress, we place the viscosity into the integral operator, so the purely viscous reaction also looks hereditary. It is our convention, though, to apply the term hereditary to delayed reactions other than purely viscous. As the
derivative of the step-function is the delta-function
$\,\delta(t\,-\,t\,^{\prime})\,$, we can write (40b) simply as
$\displaystyle
2\,u_{\gamma\nu}(t)\,=\,\hat{J}(t)~{}\sigma_{\gamma\nu}\,=\,\int^{t}_{-\infty}\stackrel{{\scriptstyle\;\centerdot}}{{J}}(t-t\,^{\prime})~{}{\sigma}_{\gamma\nu}(t\,^{\prime})\,dt\,^{\prime}~{}~{},~{}~{}~{}\mbox{with}~{}~{}J(t-t\,^{\prime})~{}~{}\mbox{containing}~{}~{}J(0)\,\Theta(t-t\,^{\prime})~{}~{}.~{}~{}~{}$
(43)
Equations (39), (40), (43) are but different expressions for the compliance
operator $\,\hat{J}\,$ acting as
$\displaystyle 2\,u_{\gamma\nu}\;=\;\hat{J}~{}\sigma_{\gamma\nu}~{}~{}~{}.$
(44)
Inverse to the compliance operator is the rigidity operator $\,\hat{\mu}\,$
defined through
$\displaystyle\sigma_{\gamma\nu}\;=\;2\,\hat{\mu}~{}u_{\gamma\nu}~{}~{}~{}.$
(45)
Generally, $\,\hat{\mu}\,$ is not just an integral operator, but is an
integro-differential operator. So it cannot take the form of
$~{}\sigma_{\gamma\nu}(t)\,=\,2\,\int_{-\infty}^{t}\,\dot{\mu}(t\,-\,t\,^{\prime})\,u_{\gamma\nu}(t\,^{\prime})\,dt\,^{\prime}~{}$.
However it can be written as
$\displaystyle\sigma_{\gamma\nu}(t)\,=\,2\,\int_{-\infty}^{t}\,{\mu}(t\,-\,t\,^{\prime})\,\dot{u}_{\gamma\nu}(t\,^{\prime})\,dt\,^{\prime}\;\;\;,$
(46)
if we permit the kernel $\,{\mu}(t\,-\,t\,^{\prime})\,$ to contain a term
$\,\eta\,\delta(t-t\,^{\prime})\,$, where $\,\delta(t-t\,^{\prime})\,$ is the
delta-function. After integration, this term will furnish the viscous part of
the stress, $~{}2\,\eta\,\dot{u}_{\gamma\nu}\,$.
The kernel $\,\mu(t\,-\,t\,^{\prime})\,$ goes under the name of the stress-
relaxation function. Its time-independent part is
$\,\mu(0)\,\Theta(t\,-\,t\,^{\prime})\,$, where the _unrelaxed rigidity_
$\,\mu(0)\,$ is inverse to the unrelaxed compliance $\,J(0)\,$ and describes
the elastic part of deformation. Each term in $\,\mu(t-t\,^{\prime})\,$ that neither is a constant nor contains a delta-function is responsible for the hereditary reaction.
For more details on the stress-strain relaxation formalism see the book by
Karato (2008).
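To make the hereditary formalism concrete, the following sketch evaluates a discretised version of the creep integral (40a) for an assumed Maxwell-like compliance (the rheology and every parameter value below are placeholders chosen purely for illustration, not a statement about any particular body).

```python
import numpy as np

# Discretised version of the creep integral (40a),
#   2 u(t) = J(0) sigma(t) + int_0^infty dJ/dtau(tau) sigma(t - tau) dtau,
# for a Maxwell-like compliance J(tau) = [J0 + tau/eta] Theta(tau).  For a step load
# sigma(t) = s0 Theta(t), the exact answer at t > 0 is 2 u(t) = s0 (J0 + t/eta).
J0, eta, s0 = 1.0e-11, 1.0e20, 1.0e6      # unrelaxed compliance (1/Pa), viscosity (Pa s), load (Pa)

t_grid = np.linspace(0.0, 5.0e13, 2001)   # seconds, spanning a few Maxwell times eta*J0
dt = t_grid[1] - t_grid[0]
sigma = np.where(t_grid >= 0.0, s0, 0.0)  # step loading switched on at t = 0

idx = len(t_grid) - 1                     # evaluate the strain at the last grid time
sigma_past = sigma[idx::-1]               # sigma(t - tau) sampled on the same grid, tau = 0 ... t
J_dot = np.full_like(sigma_past, 1.0 / eta)           # dJ/dtau for the Maxwell part (a constant)
vals = J_dot * sigma_past
integral = np.sum(0.5 * (vals[1:] + vals[:-1])) * dt  # trapezoidal rule over the past history

two_u_numeric = J0 * sigma[idx] + integral            # elastic term + hereditary/viscous term
two_u_exact = s0 * (J0 + t_grid[idx] / eta)
print(two_u_numeric, two_u_exact)         # agree up to discretisation error
```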
### 3.6 Stress-strain relaxation in the frequency domain
Let us introduce the complex compliance $\,\bar{J}(\chi)\,$ and the complex
rigidity $\,\bar{\mu}(\chi)\,$, which are, by definition, the Fourier images
not of the ${J}(\tau)$ and ${\mu}(\tau)$ functions, but of their time-
derivatives:555 Recall that it is the time-derivative of $\,{J}(\tau)\,$ that
is the kernel of the integral operator (43). Hence, to arrive at (50), we have
to define $\,\bar{J}(\chi)\,$ as the Fourier image of
$\,\stackrel{{\scriptstyle\;\bf\centerdot}}{{J}}(\tau)\,$.
$\displaystyle\int_{0}^{\infty}\bar{J}(\chi)\,e^{{\it
i}\chi\tau}d\chi\,=\,\stackrel{{\scriptstyle\;\centerdot}}{{J}}(\tau)~{}~{},~{}~{}~{}\mbox{where}~{}~{}~{}\bar{J}(\chi)~{}=\,\int_{0}^{\infty}\stackrel{{\scriptstyle\;\centerdot}}{{J}}(\tau)\,e^{-{\it
i}\chi\tau}\,d\tau~{}\,.\,~{}~{}$ (47)
and
$\displaystyle\int_{0}^{\infty}\bar{\mu}(\chi)\,e^{{\it
i}\chi\tau}d\chi\,=\,{\bf{\dot{\mu}}}(\tau)~{}~{}\,,\quad\mbox{where}\quad\bar{\mu}(\chi)\,=\,\int_{0}^{\infty}{\bf{\dot{\mu}}}(\tau)\,e^{-{\it
i}\chi\tau}\,d\tau~{}~{}\,,~{}~{}~{}$ (48)
the integrations over $\tau$ spanning the interval
$\,\left[\right.0,\infty\left.\right)\,$, as both kernels are nil for $\tau<0$
anyway. In (47) and (48), we made use of the fact (explained in subsection
3.4) that, when expanding real fields, it is sufficient to use only positive
frequencies.
Expression (39), in combination with the Fourier expansions (27) and with
(47), furnishes:
$\displaystyle 2\,\int_{0}^{\infty}\bar{u}_{\gamma\nu}(\chi)~{}e^{\textstyle{{}^{\,{\it i}\chi t}}}~{}d\chi\;=\;\int_{0}^{\infty}\bar{\sigma}_{\gamma\nu}(\chi)~{}\bar{J}(\chi)~{}e^{\textstyle{{}^{\,{\it i}\chi t}}}~{}d\chi~{}~{}~{},$ (49)
which leads us to:
$\displaystyle
2\;\bar{u}_{\gamma\nu}(\chi)\,=\;\bar{J}(\chi)\;\bar{\sigma}_{\gamma\nu}(\chi)\;\;\;.$
(50)
Similarly, insertion of (27) into (46) leads to the relation
$\displaystyle\bar{\sigma}_{\gamma\nu}(\chi)\,=\;2\;\bar{\mu}(\chi)\;\bar{u}_{\gamma\nu}(\chi)\;\;\;,$
(51)
comparison whereof with (50) immediately entails:
$\displaystyle\bar{J}(\chi)\;{\textstyle\bar{\mu}(\chi)}\;=\;{\textstyle
1}\;\;\;.$ (52)
Writing down the complex rigidity and compliance as
$\displaystyle\bar{\mu}(\chi)\;=\;|\bar{\mu}(\chi)|\;\exp\left[\,{\it
i}\,\delta(\chi)\,\right]$ (53)
and
$\displaystyle\bar{J}(\chi)\;=\;|\bar{J}(\chi)|\;\exp\left[\,-\,{\it
i}\,\delta(\chi)\,\right]\;\;\;,\;$ (54)
we split (52) into two expressions:
$\displaystyle|\bar{J}(\chi)|\;=\;\frac{1}{|\bar{\mu}(\chi)|}~{}~{}~{}$ (55)
and
$\displaystyle\varphi_{u}(\chi)\;=\;\varphi_{\sigma}(\chi)\;-\;\delta(\chi)\;\;\;.$
(56)
From the latter, we see that the angle $\;\delta(\chi)\,\;$ is a measure of
lagging of a strain harmonic mode relative to the appropriate harmonic mode of
the stress. It is evident from (53) and (54) that
$\displaystyle\tan\delta(\chi)\;\equiv\;-\;\frac{\cal{I}\it{m}\left[\,\bar{J\,}(\chi)\,\right]}{\cal{R}\it{e}\left[\,\bar{J\,}(\chi)\,\right]}\;=\;\frac{\cal{I}\it{m}\left[\,\bar{\mu}(\chi)\,\right]}{\cal{R}\it{e}\left[\,\bar{\mu}(\chi)\,\right]}\;\;\;.$
(57)
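A minimal numerical sketch of these relations follows, using a Maxwell material as an assumed, purely illustrative rheology; its complex compliance under the Fourier convention (47) and the prescription $\,{\cal I}{\it m}(\chi)\rightarrow 0-\,$ is $\,\bar{J}(\chi)=J_{0}-{\it i}/(\eta\chi)\,$. The parameter values are placeholders.

```python
import numpy as np

# Complex compliance of a Maxwell material (an assumed rheology, for illustration only):
#   J(tau) = (J0 + tau/eta) Theta(tau)  =>  J_bar(chi) = J0 - i/(eta*chi).
J0 = 1.0e-11                 # unrelaxed compliance, 1/Pa (assumed)
eta = 1.0e21                 # shear viscosity, Pa s (assumed)
tau_M = eta * J0             # Maxwell time

chi = np.logspace(-9, -4, 6)             # forcing frequencies, rad/s
J_bar = J0 - 1j / (eta * chi)            # complex compliance
mu_bar = 1.0 / J_bar                     # complex rigidity, eq. (52)

# Loss angle delta(chi), eq. (57): both expressions must agree.
tan_delta_from_J = -J_bar.imag / J_bar.real
tan_delta_from_mu = mu_bar.imag / mu_bar.real
assert np.allclose(tan_delta_from_J, tan_delta_from_mu)

for x, td in zip(chi, tan_delta_from_J):
    # for a Maxwell body tan(delta) = 1/(chi * tau_M): the lag grows as the frequency drops
    print(f"chi = {x:.1e} rad/s   tan(delta) = {td:.3e}   1/(chi*tau_M) = {1/(x*tau_M):.3e}")
```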
## 4 Complex Love numbers
The developments presented in this section will rest on a very important
theorem from solid-state mechanics. The theorem, known as the correspondence
principle, also goes under the name of elastic-viscoelastic analogy. The
theorem applies to linear deformations in the absence of nonconservative
(inertial) forces. While the literature attributes the authorship of the
theorem to different scholars, its true pioneer was Sir George Darwin (1879).
One of the corollaries ensuing from this theorem is that, in the frequency
domain, the complex Love numbers are expressed via the complex rigidity or
compliance in the same way as the static Love numbers are expressed via the
relaxed rigidity or compliance.
As was pointed out much later by Biot (1954, 1958), the theorem is
inapplicable to non-potential forces. Hence the said corollary fails in the
case of librating bodies, because of the presence of the inertial force666 The
centripetal term is potential and causes no troubles, except for the necessity
to introduce a degree-0 Love number.
$\;-{\mbox{{\boldmath$\dot{\vec{\omega}}$}}}\times\mbox{{\boldmath$\vec{r}$}}\rho$,
where $\rho$ is the density and $\vec{\omega}$ is the libration angular
velocity. So the standard expression (3) for the Love numbers, generally,
cannot be employed for librating bodies.
Subsection 4.1 below explains the transition from the stationary Love numbers
to their dynamical counterparts, the so-called Love operators. We present this
formalism in the frequency domain, in the spirit of Zahn (1966) who pioneered
this approach in application to a purely viscous medium. Subsection 4.2
addresses the negative tidal modes emerging in the Darwin-Kaula expansion for
tides. Employing the correspondence principle, in subsection 4.3 we then write
down the expressions for the factors
$\,|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,=\,-\,{\cal{I}}{\it{m}}[\bar{k}_{\textstyle{{}_{l}}}(\chi)]\,$
emerging in the expansion for tides. Some technical details of this derivation
are discussed in subsections 4.4 and 4.5.
For more on the correspondence principle and its applicability to Phobos see
Appendix D.
### 4.1 From the Love numbers to the Love operators
A homogeneous incompressible primary, when perturbed by a static secondary,
changes its shape and, consequently, has its potential changed. The
$\,{\it{l}}^{th}$ spherical harmonic $\,U_{l}(\mbox{{\boldmath$\vec{r}$}})\,$
of the resulting increment of the primary’s exterior potential is related to
the $\,{\it{l}}^{th}$ spherical harmonic
$\,W_{l}(\mbox{{\boldmath$\vec{R}$}},\mbox{{\boldmath$\vec{r}$}})\,$ of the
perturbing exterior potential through (2).
As the realistic disturbances are never static (except for synchronous
orbiting), the Love numbers become operators:
$\displaystyle U_{\it
l}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\,\frac{R}{r}\,\right)^{{\it
l}+1}\hat{k}_{\it
l}(t)\;W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})~{}~{}~{}.$
(58)
A Love operator acts neither on the value of $\,W\,$ at the current time
$\,t\,$, nor at its value at an earlier time $\,t\,^{\prime}\,$, but acts on
the entire shape of the function
$\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})\,$,
with $\,t\,^{\prime}\,$ belonging to the semi-interval $\,(-\infty,\,t)\,$.
This is why we prefer to write $\,\hat{k}_{\it l}(t)\,$ and not
$\,\hat{k}_{\it l}(t,\,t\,^{\prime})\,$.
Being linear for weak forcing, the operators must read:
$\displaystyle U_{\it l}(\mbox{{\boldmath$\vec{r}$}},\,t)\,=\,\left(\frac{R}{r}\right)^{{\it l}+1}\int^{\infty}_{0}k_{\it l}(\tau)\,\stackrel{{\scriptstyle\bf\centerdot}}{{W}}_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t-\tau)\,d\tau\,=\,\left(\frac{R}{r}\right)^{{\it l}+1}\int_{-\infty}^{t}k_{\it l}(t-t\,^{\prime})\,\stackrel{{\scriptstyle\bf\centerdot}}{{W}}_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})\,dt\,^{\prime}$
(59a)
or, after integration by parts:
$\displaystyle U_{\it l}(\mbox{{\boldmath$\vec{r}$}},\,t)\,=\,\left(\frac{R}{r}\right)^{{\it l}+1}\left[\,k_{l}(0)\,W(t)\,-\,k_{l}(\infty)\,W(-\infty)\,\right]\,+\,\left(\frac{R}{r}\right)^{{\it l}+1}\int^{\infty}_{0}{\bf\dot{\it{k}}}_{\textstyle{{}_{l}}}(\tau)\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t-\tau)\,d\tau$
(59b)
$\displaystyle\phantom{U_{\it l}(\mbox{{\boldmath$\vec{r}$}},\,t)}\,=\,\left(\frac{R}{r}\right)^{{\it l}+1}\left[\,k_{l}(0)\,W(t)\,-\,k_{l}(\infty)\,W(-\infty)\,\right]\,+\,\left(\frac{R}{r}\right)^{{\it l}+1}\int_{-\infty}^{t}{\bf\dot{\it{k}}}_{\textstyle{{}_{l}}}(t-t\,^{\prime})\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\,\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})\,dt\,^{\prime}$
(59c)
$\displaystyle\phantom{U_{\it l}(\mbox{{\boldmath$\vec{r}$}},\,t)}\,=\,-\,\left(\frac{R}{r}\right)^{{\it l}+1}k_{l}(\infty)\,W(-\infty)\,+\,\left(\frac{R}{r}\right)^{{\it l}+1}\int_{-\infty}^{t}\frac{d}{dt}\left[\,{k}_{\it l}(t\,-\,t\,^{\prime})\,-\,{k}_{\it l}(0)\,+\,{k}_{\it l}(0)\,\Theta(t\,-\,t\,^{\prime})\,\right]\,W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}},\mbox{{\boldmath$\vec{r}$}}^{\;*},t\,^{\prime})\,dt\,^{\prime}\quad.$
(59d)
Just as in the case of the compliance operator (39 – 40), in expressions (59) we obtain the terms $~{}k_{l}(0)W(t)~{}$ and $~{}-k_{l}(\infty)W(-\infty)~{}$. We can get rid of the latter term by setting $\,W(-\infty)\,$ nil, while the former term may be incorporated into the kernel in exactly the same way as in (41 – 43). Thus, dropping the unphysical term with $\,W(-\infty)~{}$, and
inserting the elastic term into the Love number not as $\,k_{l}(0)\,$ but as
$\,{k}_{l}(0)\,\Theta(t-t\,^{\prime})\,$, we simplify (59d) to
$\displaystyle U_{\it
l}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{-\infty}^{t}{\bf\dot{\it{k}}}_{\textstyle{{}_{l}}}(t-t\,^{\prime})~{}W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\;t\,^{\prime})\,dt\,^{\prime}~{},$
(60)
with $\,{k}_{\it l}(t-t\,^{\prime})\,$ now including, as its part,
$\,{k}_{l}(0)\,\Theta(t-t\,^{\prime})\,$ instead of $\,{k}_{l}(0)\,$.
Were the body perfectly elastic, $\,{k}_{\it l}(t-t\,^{\prime})\,$ would
consist of the instantaneous-reaction term $\,{k}_{\it
l}(0)\,\Theta(t-t\,^{\prime})\,$ _only_. Accordingly, the time-derivative of
$\,{k}_{\it l}\,$ would be:
$\,{\bf\dot{\it{k}}}_{\textstyle{{}_{\it{l}}}}(t-t\,^{\prime})\,=\,k_{\it
l}\,\delta(t-t\,^{\prime})\,$ where $\,k_{\it l}\,\equiv\,k_{\it l}(0)\,$, so
expressions (59 – 60) would coincide with (2).
Similarly to introducing the complex compliance, one can define the complex
Love numbers as Fourier transforms of
$~{}\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)~{}$:
$\displaystyle\int_{0}^{\infty}\bar{k}_{\textstyle{{}_{l}}}(\chi)e^{{\it
i}\chi\tau}d\chi\;=\;\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)~{}~{}~{},$
(61)
the overdot standing for $\,d/d\tau\,$. Churkin (1998) suggested terming the
time-derivatives
$~{}\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{\,l}}}(t)~{}$
as the _Love functions_.777 Churkin (1998) used functions which he called
$\,k_{\it l}(t)\,$ and which were, due to a difference in notations, the same
as our
$\,\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)\,$.
Inversion of (61) trivially yields:
$\displaystyle\bar{k}_{\textstyle{{}_{l}}}(\chi)~{}=~{}\int_{0}^{\infty}{\bf\dot{\mbox{\it{k}}}}_{\textstyle{{}_{l}}}(\tau)\;\,e^{-{\it
i}\chi\tau}\,d\tau~{}=~{}k_{\textstyle{{}_{l}}}(0)\;+\;{\it
i}~{}\chi~{}\int_{0}^{\infty}\left[\,k_{\textstyle{{}_{l}}}(\tau)\,-\,k_{\textstyle{{}_{l}}}(0)\,\Theta(\tau)\,\right]\;e^{-{\it
i}\chi\tau}\,d\tau~{}~{}~{},~{}~{}~{}~{}$ (62)
where we integrated only from $\;{0}\,$ because the future disturbance
contributes nothing to the present distortion, so $\,k_{\it l}(\tau)\,$
vanishes at $\,\tau<0\,$. Recall that the time $\,\tau\,$ denotes the
difference $t-t\,^{\prime}$. So $\tau$ is reckoned from the present moment $t$
and is directed back into the past.
Defining in the standard manner the Fourier components
$\,\bar{U}_{\textstyle{{}_{l}}}(\chi)\,$ and
$\,\bar{W}_{\textstyle{{}_{l}}}(\chi)\,$ of functions
$\,{U}_{\textstyle{{}_{l}}}(t)\,$ and $\,{W}_{\textstyle{{}_{l}}}(t)\,$, we
write (59) in the frequency domain:
$\displaystyle\bar{U}_{\textstyle{{}_{l}}}(\chi)\;=\;\left(\frac{R}{r}\right)^{l+1}\bar{k}_{\textstyle{{}_{l}}}(\chi)\;\,\bar{W}_{\textstyle{{}_{l}}}(\chi)\;\;\;,$
(63)
where we denote the frequency simply by $\chi$ instead of the awkward
$\chi_{\textstyle{{}_{lmpq}}}$. To employ (63) in the tidal theory, one has to
know the frequency-dependencies $\,\bar{k}_{\textstyle{{}_{l}}}(\chi)\,$.
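As a numerical illustration of the pair (61) – (62), the sketch below checks the brute-force transform of an assumed, purely illustrative Love function against its closed form. The exponential-relaxation kernel and all parameter values are hypothetical, chosen only to make the check concrete.

```python
import numpy as np

# Assumed Love function:  k_l(tau) = [k_e + (k_r - k_e)(1 - exp(-tau/tau_r))] Theta(tau),
# i.e. an instantaneous elastic response k_e relaxing to k_r on a time-scale tau_r.  Then
#   k_bar(chi) = k_e + (k_r - k_e)/(1 + i chi tau_r)   (closed form of the transform (62)).
k_e, k_r, tau_r = 0.3, 0.9, 1.0e12        # assumed values
chi = 3.0e-12                              # one forcing frequency, rad/s

# closed form
k_bar_exact = k_e + (k_r - k_e) / (1.0 + 1j * chi * tau_r)

# brute-force transform of  k_dot(tau) = k_e*delta(tau) + (k_r - k_e)/tau_r * exp(-tau/tau_r)
tau = np.linspace(0.0, 60.0 * tau_r, 400001)
dtau = tau[1] - tau[0]
k_dot_smooth = (k_r - k_e) / tau_r * np.exp(-tau / tau_r)
k_bar_numeric = k_e + np.sum(k_dot_smooth * np.exp(-1j * chi * tau)) * dtau

print(k_bar_exact, k_bar_numeric)          # agree up to discretisation of the integral
```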
### 4.2 The positive forcing frequencies $\,\chi\,\equiv\,|\omega|\,$ vs.
the positive and negative tidal modes $\,\omega\,$
It should be remembered that, by relying on formula (63), we place ourselves
on thin ice, because the similarity of this formula to (50) and (51) is
deceptive.
In (50) and (51), it was legitimate to limit our expansions of the stress and
the strain to positive frequencies $\,\chi\,$ only. Had we carried out those
expansions over both positive and negative frequencies $\,\omega\,$, we would
have obtained, instead of (50) and (51), similar expressions
$\displaystyle
2\;\bar{u}_{\gamma\nu}(\omega)\,=\;\bar{J}(\omega)\;\bar{\sigma}_{\gamma\nu}(\omega)\quad\quad\mbox{and}\quad\quad\bar{\sigma}_{\gamma\nu}(\omega)\,=\;2\;\bar{\mu}(\omega)\;\bar{u}_{\gamma\nu}(\omega)\;\;\;.$
(64)
For positive $\,\omega\,$, these would simply coincide with (50) and (51), if
we rename $\,\omega\,$ as $\,\chi\,$. For negative $\,\omega\,=\,-\,\chi\,$,
the resulting expressions would read as
$\displaystyle
2\;\bar{u}_{\gamma\nu}(\,-\,\chi)\,=\;\bar{J}(\,-\,\chi)\;\bar{\sigma}_{\gamma\nu}(\,-\,\chi)\quad\quad\mbox{and}\quad\quad\bar{\sigma}_{\gamma\nu}(\,-\,\chi)\,=\;2\;\bar{\mu}(\,-\,\chi)\;\bar{u}_{\gamma\nu}(\,-\,\chi)\;\;\;,$
(65)
where we stick to the agreement that $\,\chi\,$ always stands for a positive
quantity. In accordance with (30), complex conjugation of (65) would then
return us to (64).
Physically, the negative-frequency components of the stress or strain are
nonexistent. If brought into consideration, they are obliged to obey (30) and,
thus, should play no role, except for a harmless renormalisation of the
Fourier components in (32).
When we say that the physically measurable stress $\,\sigma_{\gamma\nu}(t)\,$
is equal to
$~{}\sum{\cal{R}}{\it{e}}\left[\,\bar{\sigma}_{\gamma\nu}(\chi)\,e^{\textstyle{{}^{{\it
i}\chi t}}}\,\right]~{}$, it is unimportant to us whether the
$\,\chi$-contribution in $\,\sigma_{\gamma\nu}(t)\,$ comes from the term
$~{}\bar{\sigma}_{\gamma\nu}(\chi)\,e^{\textstyle{{}^{{\it i}\chi t}}}~{}$
only, or also from the term
$~{}\bar{\sigma}_{\gamma\nu}(\,-\,\chi)\,e^{\textstyle{{}^{{\it
i}(\,-\,\chi)t}}}~{}$. Indeed, the real part of the latter is a clone of the
real part of the former (and it is only the former term that is physical).
However, things remain that simple only for the stress and the strain.
As we emphasised in subsection 3.4, the situation with the potentials is
drastically different. While the physically measurable potential $\,U(t)\,$ is
still equal to
$~{}\sum{\cal{R}}{\it{e}}\left[\,\bar{U}(\chi)\,e^{\textstyle{{}^{{\it i}\chi
t}}}\,\right]~{}$, it is now important to distinguish whether the
$\,\chi$-contribution in $\,U(t)\,$ comes from the term
$~{}\bar{U}_{\gamma\nu}(\chi)\,e^{\textstyle{{}^{{\it i}\chi t}}}~{}$ or from
the term $~{}\bar{U}(\,-\,\chi)\,e^{\textstyle{{}^{{\it i}(\,-\,\chi)t}}}~{}$,
or perhaps from both. Although the negative mode $\,-\chi\,$ would bring the
same input as the positive mode $\,\chi\,$, these inputs will contribute
differently to the tidal torque. As can be seen from (299), the secular part
of the tidal torque is proportional to
$\,\sin\epsilon_{\textstyle{{}_{l}}}\,$, where
$\,\epsilon_{\textstyle{{}_{l}}}\equiv\,\omega_{\textstyle{{}_{lmpq}}}\,\Delta
t_{\textstyle{{}_{lmpq}}}\,$, with the time lag $\,\Delta
t_{\textstyle{{}_{lmpq}}}\,$ being positively defined – see formula (109).
Thus the secular part of the tidal torque explicitly contains the sign of the
tidal mode $\,\omega_{\textstyle{{}_{lmpq}}}\,$.
For this reason, as explained in subsection 3.4, a more accurate form of
formula (63) should be:
$\displaystyle\bar{U}_{\textstyle{{}_{l}}}(\omega)\;=\;\bar{k}_{\textstyle{{}_{l}}}(\omega)\;\bar{W}_{\textstyle{{}_{l}}}(\omega)\;\;\;,$
(66)
where $\,\omega\,$ can be of any sign.
If, however, we pretend that the potentials depend on the physical frequency
$\,\chi\,=\,|\omega|\,$ only, i.e., if we always write $\,U(\omega)\,$ as
$\,U(\chi)\,$, then (63) must be written as:
$\displaystyle\bar{U}_{\textstyle{{}_{l}}}(\chi)\;=\;\bar{k}_{\textstyle{{}_{l}}}(\chi)\;\bar{W}_{\textstyle{{}_{l}}}(\chi)\;\;\;,~{}~{}~{}\mbox{when}~{}~{}~{}\chi\,=\,|\omega|~{}~{}~{}\mbox{for}~{}~{}~{}\omega\,>\,0~{}~{}~{},$
(67a) and
$\displaystyle\bar{U}_{\textstyle{{}_{l}}}(\chi)\;=\;\bar{k}^{\,*}_{\textstyle{{}_{l}}}(\chi)\;\bar{W}_{\textstyle{{}_{l}}}(\chi)\;\;\;,~{}~{}~{}\mbox{when}~{}~{}~{}\chi\,=\,|\omega|~{}~{}~{}\mbox{for}~{}~{}~{}\omega\,<\,0~{}~{}~{}.$
(67b)
Unless we keep this detail in mind, we shall get a wrong sign for the
$\,lmpq\,$ component of the torque after the despinning secondary crosses the
appropriate commensurability. (We shall, of course, be able to mend this by
simply inserting the sign sgn$\,\omega_{\textstyle{{}_{lmpq}}}\,$ by hand.)
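The bookkeeping of (67) can be stated compactly in code. The sketch below is only a schematic rendering of that convention; the toy functional form of the Love number and all numbers are assumptions, not results of the paper.

```python
import numpy as np

def U_component(k_bar_of_chi, W_component, omega):
    """Convention (67): use k_bar(chi) for omega > 0 and its complex conjugate for omega < 0,
    with chi = |omega|.  k_bar_of_chi is any callable returning the complex Love number."""
    chi = abs(omega)
    k = k_bar_of_chi(chi)
    if omega < 0.0:
        k = np.conj(k)          # the sign of the tidal mode is reinstated "by hand"
    return k * W_component

# A toy complex Love number (assumed functional form, for illustration only).
k_bar = lambda chi: 0.3 / (1.0 + 1j * chi * 1.0e12)

W = 1.0 + 0.0j                  # some lmpq component of the tide-raising potential
for omega in (+2.0e-12, -2.0e-12):
    U = U_component(k_bar, W, omega)
    # Im[U] flips sign with the sign of omega: the bulge lags for omega > 0 and advances for omega < 0
    print(f"omega = {omega:+.1e}:  Re U = {U.real:+.4f},  Im U = {U.imag:+.4f}")
```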
### 4.3 The complex Love number as a function of the complex compliance
While the static Love numbers depend on the static rigidity modulus $\,\mu\,$
via (3), it is not readily apparent that the same relation interconnects
$\,\bar{k}_{\it l}(\chi)\,$ with $\,\bar{\mu}(\chi)\,$, the quantities that
are the Fourier components of the time-derivatives of
$\,k_{\it l}(t\,^{\prime})\,$ and $\,\mu(t\,^{\prime})\,$. Fortunately, the
correspondence principle (discussed in Appendix D) tells us that, in many
situations, the viscoelastic operational moduli $\,\bar{\mu}(\chi)\,$ or
$\,\bar{J}(\chi)\,$ obey the same algebraic relations as the elastic
parameters $\,\mu\,$ or $\,J\,$. This is why, in these situations, the Fourier
or Laplace transform of our viscoelastic equations will mimic (235a – 235b),
except that all the functions will acquire overbars:
$~{}\bar{\sigma}_{\textstyle{{}_{\gamma\nu}}}~{}=~{}2~{}\bar{\mu}~{}\bar{u}_{\textstyle{{}_{\gamma\nu}}}\;$,
etc. So their solution, too, will be $\,\bar{U}_{\it l}=\bar{k}_{\it
l}\,\bar{W}_{\it l}\,$, with $\,\bar{k}_{\it l}\,$ retaining the same
functional dependence on $\,\rho\,$, $\,R\,$, and $\,\bar{\mu}\,$ as in (3),
except that now $\,\mu\,$ will have an overbar:
$\displaystyle\bar{k}_{\it l}(\chi)\;=\;\frac{3}{2\,({\it l}\,-\,1)}\;\frac{\textstyle 1}{\;\textstyle 1\;+\;\frac{\textstyle{(2\,{\it{l}}^{\,2}\,+\,4\,{\it{l}}\,+\,3)\,\bar{\mu}(\chi)}}{\textstyle{{\it{l}}\,\mbox{g}\,\rho\,R}}\;}~{}=~{}\frac{3}{2\,({\it l}\,-\,1)}\;\frac{\textstyle 1}{\textstyle 1\;+\;A_{\it l}\;\bar{\mu}(\chi)/\mu}\;=\;\frac{3}{2\,({\it l}\,-\,1)}\;\frac{\textstyle 1}{\textstyle 1\;+\;A_{\it l}\;J/\bar{J}(\chi)}~{}=~{}\frac{3}{2\,({\it l}\,-\,1)}\;\frac{\textstyle\bar{J}(\chi)}{\textstyle\bar{J}(\chi)\;+\;A_{\it l}\;J}~{}~{}~{}$
Here the coefficients $\,A_{\it l}\,$ are defined via the unrelaxed quantities
$\;\mu=\mu(0)=1/J=1/J(0)\;$ in the same manner as the static $\,A_{\it l}\,$
were introduced through the static (relaxed) $\,\mu=1/J\,$ in formulae (3).
The moral of the story is that, at low frequencies, each $\,\bar{k}_{\it l}\,$
depends upon $\,\bar{\mu}\,$ (or upon $\,\bar{J}\,$) in the same way as its
static counterpart $\,k_{\it l}\,$ depends upon the static $\,\mu\,$ (or upon
the static $\,J\,$). This happens, because at low frequencies we neglect the
acceleration term in the equation of motion (238b), so this equation still
looks like (235b).
Representing a complex Love number as
$\displaystyle\bar{k}_{\it{l}}(\chi)\;=\;{\cal{R}}{\it{e}}\left[\bar{k}_{\it{l}}(\chi)\right]\;+\;{\it
i}\;{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]\;=\;|\bar{k}_{\it{l}}(\chi)|\;e^{\textstyle{{}^{-{\it
i}\epsilon_{\it l}(\chi)}}}$ (69)
we can write for the phase lag $\,\epsilon_{\it l}(\chi)\,$:
$\displaystyle\tan\epsilon_{\it
l}(\chi)\;\equiv\;-\;\frac{{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]}{{\cal{R}}{\it{e}}\left[\bar{k}_{\it{l}}(\chi)\right]}\;\;\;$
(70)
or, equivalently:
$\displaystyle|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon_{\it
l}(\chi)\;=\;-\;{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]\;\;\;.$
(71)
The products $\;|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon_{\it l}(\chi)\;$
standing on the left-hand side in (71) emerge also in the Fourier series for
the tidal potential. Therefore it is these products (and not $\;k_{\it
l}/Q\;$) that should enter the expansions for forces, torques, and the damping
rate. This is the link between the body’s rheology and the history of its
spin: from $\,\bar{J}(\chi)\,$ to $\,\bar{k}_{\it{l}}(\chi)\,$ to
$\,|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon(\chi)\,$, the latter being employed
in the theory of bodily tides.
Through simple algebra, expressions (4.3) entail:
$\displaystyle|\bar{k}_{\it l}(\chi)|\;\sin\epsilon_{\it
l}(\chi)\;=\;-\;{\cal{I}}{\it{m}}\left[\bar{k}_{\it
l}(\chi)\right]\;=\;\frac{3}{2\,({\it
l}\,-\,1)}\;\,\frac{-\;A_{l}\;J\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]}{\left(\;{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\;+\;A_{l}\;J\;\right)^{2}\;+\;\left(\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]\;\right)^{2}}~{}~{}~{}.~{}~{}~{}~{}~{}$
(72)
As we know from subsections 3.4 and 4.2, formulae (70 – 72) should be used with care. Since in reality the potential $\,\bar{U}\,$ and therefore also $\,\bar{k}_{l}\,$ are functions not of $\,\chi\,$ but of $\,\omega\,$, formula (72) should be equipped with the multipliers
sgn$\,\omega_{\textstyle{{}_{lmpq}}}\,$, when plugged into the expression for
the $lmpq$ component of the tidal force or torque. This prescription is
equivalent to (67).
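Putting the pieces together, the sketch below evaluates the complex Love number from the last form of the expression above and checks it against the written-out quality function (72), for an assumed Maxwell compliance. All parameter values are placeholders (only loosely Earth-like in order of magnitude), and the sgn multiplier discussed in the preceding paragraph is deliberately not applied here.

```python
import numpy as np

# Complex quadrupole Love number from  k_bar_l(chi) = [3/(2(l-1))] J_bar/(J_bar + A_l J),
# and the quality function -Im[k_bar_l] of eq. (72), for a Maxwell compliance
# J_bar(chi) = J - i/(eta*chi).  All values below are assumptions made for illustration.
l = 2
J = 1.25e-11          # unrelaxed compliance, 1/Pa
eta = 1.0e21          # viscosity, Pa s
A_l = 2.2             # dimensionless rigidity-to-gravity ratio for the degree-2 harmonic

def k_bar(chi):
    Jb = J - 1j / (eta * chi)
    return 1.5 / (l - 1) * Jb / (Jb + A_l * J)

def quality_function(chi):
    """-Im[k_bar_l(chi)] written out as in eq. (72)."""
    Jb = J - 1j / (eta * chi)
    D = (Jb.real + A_l * J) ** 2 + Jb.imag ** 2
    return 1.5 / (l - 1) * (-A_l * J * Jb.imag) / D

for chi in np.logspace(-10, -4, 4):
    assert np.isclose(-k_bar(chi).imag, quality_function(chi))
    print(f"chi = {chi:.1e} rad/s   |k2| sin(eps2) = {quality_function(chi):.3e}")
```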
### 4.4 Should we write $\,\bar{k}_{{\it l}mpq}\;$ and $\,\epsilon_{{\it
l}mpq}\,$, or would $\,\bar{k}_{{\it l}}\,$ and $\,\epsilon_{{\it l}}\,$ be
enough?
In the preceding subsection, the static relation (2) was generalised to
evolving settings as
$\displaystyle U_{{\it
l}mpq}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\,\frac{R}{r}\,\right)^{{\it
l}+1}\hat{k}_{{\it
l}}(t)~{}W_{\it{l}mpq}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})~{}~{}~{},$
(73)
where $~{}{\it l}mpq~{}$ is a quadruple of integers employed to number a
Fourier mode in the Darwin-Kaula expansion (100) of the tide, while $\,U_{{\it
l}mpq}(\mbox{{\boldmath$\vec{r}$}},\,t)\,$ and
$\,W_{\it{l}mpq}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})\,$
are the harmonics containing $\,\cos(\chi_{{\it l}mpq}t-\epsilon_{{\it
l}mpq})\,$ and $\,\cos(\chi_{{\it l}mpq}t\,^{\prime})\,$ correspondingly.
One might be tempted to generalise (2) even further to
$\displaystyle U_{{\it
l}mpq}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\,\frac{R}{r}\,\right)^{{\it
l}+1}\hat{k}_{{\it
l}mpq}(t)~{}W_{\it{l}mpq}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*},\,t\,^{\prime})~{}~{}~{},$
with the Love operator (and, consequently, its kernel, the Love function)
bearing dependence upon $\,m$, $\,p$, and $\,q\,$. Accordingly, (63) would
become
$\displaystyle\bar{U}_{\it{l}mpq}(\chi)\;=\;\bar{k}_{{\it
l}mpq}(\chi)\;\bar{W}_{\it{l}mpq}(\chi)\;\;\;.$ (74)
Fortunately, insofar as the Correspondence Principle is valid, the functional
form of the function $\,\bar{k}_{{\it l}mpq}(\chi)\,$ depends upon $\,\it l\,$
only and, thus, can be written down simply as $\;\bar{k}_{{\it l}}(\chi_{{\it
l}mpq})\;$. We know this from the considerations offered after equations (235a
\- 235b). There we explained that $\,\bar{k}_{{\it l}}\,$ depends on
$\,\chi=\chi_{{\it l}mpq}\,$ only via $\,\bar{J}(\chi)\,$, while _the
functional form_ of $\,\bar{k}_{{\it l}}\,$ bears no dependence on
$\,\chi=\chi_{{\it l}mpq}\,$ and, therefore, no dependence on $\,m,\,p,\,q\,$.
The phase lag is often denoted as $\,\epsilon_{{\it l}mpq}\,$, a time-honoured
tradition established by Kaula (1964). However, as the lag is expressed
through $\,\bar{k}_{\it l}\,$ via (70), we see that all said above about
$\,\bar{k}_{\it l}\,$ applies to the lag too: while the functional form of the
dependency $\,\epsilon_{{\it l}mpq}(\chi)\,$ may be different for different values of $\,{\it l}\,$, it is invariant under the other three integers, so the notation
$~{}\epsilon_{{\it l}}(\chi_{{\it l}mpq})~{}$ would be more adequate.
It should be mentioned, though, that for bodies of pronounced non-sphericity, coupling between the spherical harmonics furnishes Love numbers and lags
whose expressions through the frequency, for a fixed $\,l\,$, have different
functional forms for different $\,m,\,p,\,q\,$. In these cases, the notations
$\,\bar{k}_{{\it l}mpq}\,$ and $\,\epsilon_{{\it l}mpq}\,$ become necessary
(Smith 1974; Wahr 1981a,b,c; Dehant 1987a,b). For a slightly non-spherical
body, the Love numbers differ from the Love numbers of the spherical reference
body by a term of the order of the flattening, so a small non-sphericity can
usually be neglected.
### 4.5 Rigidity vs self-gravitation
For small bodies and small terrestrial planets, the values of
$\,A_{\textstyle{{}_{l}}}\,$ range from about unity to tens and hundreds. For
example, $\,A_{2}\,$ is about 2 for the Earth (Efroimsky 2012), about 20 for
Mars (Efroimsky & Lainey 2007), about 80 for the Moon (Efroimsky 2012), and
about 200 for Iapetus (Castillo-Rogez et al. 2011). For superearths, the
values will be much smaller than unity, though.
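For orientation, a one-line estimate of $\,A_{2}\,$ for rounded, assumed Earth-like parameters (not a fit to any dataset) reproduces the order of magnitude quoted above:

```python
# Order-of-magnitude estimate of  A_l = (2 l^2 + 4 l + 3) mu / (l g rho R)  for l = 2,
# with rounded Earth-like parameters (assumed values).
l = 2
mu = 0.8e11       # unrelaxed rigidity, Pa
rho = 5.5e3       # mean density, kg/m^3
R = 6.37e6        # radius, m
g = 9.8           # surface gravity, m/s^2

A_l = (2 * l**2 + 4 * l + 3) * mu / (l * g * rho * R)
print(f"A_{l} ~ {A_l:.1f}")   # ~ 2, in line with the value quoted above for the Earth
```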
Insofar as
$\displaystyle A_{l}\;\frac{\textstyle J}{\textstyle\left|\,\bar{J}(\chi)\,\right|}\;\gg\;1~{}~{}~{},$ (75)
one can approximate (4.3) with
$\displaystyle\bar{k}_{l}(\chi)\,=\;-\;\frac{3}{2(l-1)}\;\frac{\textstyle\bar{J}(\chi)}{\textstyle\bar{J}(\chi)\;+\;A_{l}\,J}\;=\;-\;\frac{3}{2}\;\frac{\textstyle\bar{J}(\chi)}{A_{l}\,J}~{}+~{}O\left(\,|\,\bar{J}(\chi)/(A_{l}\,J)\,|^{2}\,\right)\;\;\;,$ (76)
except in the closest vicinity of an $\,lmpq\,$ resonance, where the tidal
frequency $\,\chi_{\textstyle{{}_{lmpq}}}\,$ approaches nil, and $\,\bar{J}\,$
diverges for some rheologies – like, for example, for those of Maxwell or
Andrade.
Whenever the approximate formula (76) is applicable, we can rewrite (70) as
$\displaystyle\tan\epsilon(\chi)\;\equiv\;-\;\frac{{\cal{I}}{\it{m}}\left[\bar{k}_{\it{l}}(\chi)\right]}{{\cal{R}}{\it{e}}\left[\bar{k}_{\it{l}}(\chi)\right]}\;\approx\;-\;\frac{{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]}{{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]}\;=\;\tan\delta(\chi)\;\;\;,$
(77)
wherefrom we readily deduce that the phase lag $\,\epsilon(\chi)\,$ of the
tidal frequency $\,\chi\,$ coincides with the phase lag of the complex
compliance:
$\displaystyle\epsilon(\chi)\;\approx\;\delta(\chi)\;\;\;,$ (78)
provided $\,\chi\,$ is not too close to nil (i.e., provided we are not too
close to the commensurability). This way, insofar as the condition (75) is
fulfilled, the component $\,\bar{U}_{\it{l}}(\chi)\,$ of the primary’s
potential lags behind the component $\,\bar{W}_{\it{l}}(\chi)\,$ of the
perturbed potential by the same phase angle as the strain lags behind the
stress at frequency $\,\chi\,$ in a sample of the material. Dependent upon the
rheology, a vanishing tidal frequency may or may not limit the applicability
of (75) and thus cause a considerable difference between $\,\epsilon\,$ and
$\,\delta\,$.
In other words, the suggested approximation is valid insofar as changes of
shape are determined solely by the local material properties, and not by self-
gravitation of the object as a whole. Whether this is so depends upon
the rheological model. For a Voigt or SAS 888 The acronym SAS stands for the
Standard Anelastic Solid, which is another name for the Hohenemser-Prager
viscoelastic model. See the Appendix for details. solid in the limit of
$~{}{\chi\rightarrow 0}~{}$, we have $~{}\bar{J}(\chi)\,\rightarrow\,J~{}$, so
the zero-frequency limit of $\,\bar{k}_{l}(\chi)\,$ is the static Love number
$\,k_{l}\,\equiv\,|\bar{k}(0)|\,$. In this case, approximation (76 – 78)
remains applicable all the way down to $\,\chi=0\,$. For the Maxwell and
Andrade models, however, one obtains, for vanishing frequency:
$~{}\bar{J}(\chi)\,\sim\,1/(\eta\chi)~{}$, whence
$\,\bar{\mu}\,\sim\,\eta\chi\,$ and $\,\bar{k}_{2}(\chi)\,$ approaches the
hydrodynamical Love number $\,k_{2}^{(hyd)}=3/2\,$.
We see that, for the Voigt and SAS models, approximation (78) can work, for $A_{l}\gg 1$, at all frequencies, because the condition (75) then remains fulfilled at all frequencies. For the
Maxwell and Andrade solids, this condition holds only at frequencies larger
than
$~{}\tau_{{}_{M}}^{-1}A_{\textstyle{{}_{l}}}^{-1}\,=\,\frac{\textstyle\mu}{\textstyle\eta}\,A_{\textstyle{{}_{l}}}^{-1}$,
and so does the approximation (78). Indeed, at frequencies below this
threshold, self-gravitation “beats” the local material properties of the body,
and the behaviour of the tidal lag deviates from that of the lag in a sample.
This deviation will be indicated more clearly by formula (94) in the next
section. The fact that, for some models, the tidal lag $\,\epsilon\,$ deviates
from the material lag angle $\,\delta\,$ at the lowest frequencies should be
kept in mind when one wants to explore crossing of a resonance.
A standard caveat is in order, concerning formulae (76 – 78). Since in reality the potential $\,\bar{U}\,$ is a function of $\,\omega\,$ and not $\,\chi\,$, our illegitimate use of $\,\chi\,$ should be compensated by multiplying the function $\,\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})\,$ by
sgn$\,\omega_{\textstyle{{}_{lmpq}}}\,$, when the lag shows up in the
expression for the tidal force or torque.
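The divergence between the tidal lag $\,\epsilon\,$ and the material lag $\,\delta\,$ below the threshold $\,\sim\tau_{{}_{M}}^{-1}A_{l}^{-1}\,$ can be seen in a few lines of arithmetic. The sketch below uses a Maxwell compliance and an assumed large value of $\,A_{l}\,$ (Iapetus-like in magnitude); all numbers are placeholders.

```python
import numpy as np

# Compare the material lag delta(chi), eq. (57), with the tidal lag eps(chi), eq. (70),
# for a Maxwell body with a large A_l (all parameter values assumed).  Above the threshold
# chi ~ 1/(tau_M * A_l) the two nearly coincide, eq. (78); below it self-gravitation wins,
# eps falls back toward zero, while delta keeps growing toward 90 degrees.
J, eta, A_l, l = 1.25e-11, 1.0e21, 200.0, 2
tau_M = eta * J
threshold = 1.0 / (tau_M * A_l)

for chi in np.logspace(np.log10(threshold) - 2, np.log10(threshold) + 3, 6):
    Jb = J - 1j / (eta * chi)                          # Maxwell complex compliance
    kb = 1.5 / (l - 1) * Jb / (Jb + A_l * J)           # complex Love number
    delta = np.arctan2(-Jb.imag, Jb.real)              # lag of strain behind stress
    eps = np.arctan2(-kb.imag, kb.real)                # tidal phase lag
    print(f"chi/threshold = {chi / threshold:8.2f}   delta = {np.degrees(delta):6.2f} deg   eps = {np.degrees(eps):6.2f} deg")
```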
### 4.6 The case of inhomogeneous bodies
Tidal dissipation within a multilayer near-spherical body is studied through
expanding the involved fields over the spherical harmonics in each layer,
setting the boundary conditions on the outer surface, and using the matching
conditions on boundaries between layers. This formalism was developed by
Alterman et al (1959). An updated discussion of the method can be found in
Sabadini & Vermeersen (2004). For a brief review, see Legros et al (2006).
Calculation of tidal dissipation in a Jovian planet is an even more formidable
task (see Remus et al. 2012a and references therein). However dissipation in a
giant planet with a solid core may turn out to be approachable by analytic
means (Remus et al. 2011, 2012b).
## 5 Dissipation at different frequencies
### 5.1 The data collected on the Earth: in the lab,
over seismological basins, and through geodetic measurements
In Efroimsky & Lainey (2007), we considered the generic rheological model
$\displaystyle
Q\;=\;\left(\,{\cal{E}}\,\chi\,\right)^{\textstyle{{}^{\alpha}}}~{}~{},$ (79a)
where $\,\chi\,$ is the tidal frequency and $\,{\cal{E}}\,$ is a parameter
having the dimensions of time. The physical meaning of this parameter is
elucidated in Ibid.. Under the special choice of $\,\alpha=-1\,$ and for
sufficiently large values of $\,Q\,$, this parameter coincides with the time
lag $\,\Delta t\,$ which, for this special rheology, turns out to be the same
at all frequencies.
Actual experiments register not the inverse quality factor but the phase lag
between the reaction and the action. So the empirical law should rather be
written down as
$\displaystyle\frac{1}{\sin\delta}\;=\;\left(\,{\cal{E}}\,\chi\,\right)^{\textstyle{{}^{\alpha}}}~{}~{},$
(79b)
which is equivalent to (79a), provided the $Q$ factor is defined there as
$\,Q_{\textstyle{{}_{energy}}}\,$ and not as $\,Q_{\textstyle{{}_{work}}}~{}$
– see subsection 2.2 for details.
The applicability realm of the empirical power law (5.1) is remarkably broad –
in terms of both the physical constituency of the bodies and their chemical
composition. Most intriguing is the robust universality of the values taken by
the index $\,\alpha\,$ for very different materials: between $\,0.2\,$ and
$\,0.4\,$ for ices and silicates, and between $\,0.14\,$ and $\,0.2\,$ for
partial melts. Historically, two communities independently converged on this
form of dependence.
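A direct numerical reading of the empirical law (79b) is straightforward; the sketch below is only illustrative, with the time-scale parameter and the exponent chosen as placeholders within the ranges quoted above.

```python
import numpy as np

# Direct evaluation of the empirical law (79b),  1/sin(delta) = (E*chi)^alpha.
# E and alpha are placeholders, chosen only to yield plausible orders of magnitude.
E = 3.0e9        # time-scale parameter "calligraphic E", seconds (assumed)
alpha = 0.3      # Andrade-like exponent for silicates/ices (within the 0.2-0.4 range quoted above)

for chi in np.logspace(-5, -1, 5):       # rad/s, from tidal toward seismic frequencies
    inv_sin_delta = (E * chi) ** alpha
    delta = np.arcsin(1.0 / inv_sin_delta)
    print(f"chi = {chi:.0e} rad/s   1/sin(delta) = {inv_sin_delta:7.1f}   delta = {np.degrees(delta):.3f} deg")
```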
In the material sciences, the rheological model (86), wherefrom the power law
(79b) stems, traces its lineage to the groundbreaking work by Andrade (1910)
who explored creep in metals. Through the subsequent century, this law was
found to be applicable to a vast variety of other materials, including
minerals (Weertman & Weertman 1975, Tan et al. 1997) and their partial melts
(Fontaine et al. 2005). As recently discovered by McCarthy et al. (2007) and
Castillo-Rogez (2009), the same law, with almost the same values of
$\,\alpha\,$, also applies to ices. The result is a milestone, given the physical and chemical differences between ices and silicates. It is agreed that in crystalline materials the Andrade regime can find its microscopic
origin both in the dynamics of dislocations (Karato & Spetzler 1990) and in
the grain-boundary diffusional creep (Gribb & Cooper 1998). As the same
behaviour is inherent in metals, silicates, ices, and even glass-polyester
composites (Nechada et al. 2005), it should stem from a single underlying
phenomenon determined by some principles more general than specific material
properties. An attempt to find such a universal mechanism was undertaken by
Miguel et al. (2002). See also the theoretical considerations offered in
Karato & Spetzler (1990).
In seismology, the power law (5.1) became popular in the second half of the
XXth century, with the progress of precise measurements on large seismological
basins (Mitchell 1995, Stachnik et al. 2004, Shito et al. 2004). Further
confirmation of this law came from geodetic experiments that included: (a)
satellite laser ranging (SLR) measurements of tidal variations in the
$\,J_{2}\,$ component of the gravity field of the Earth; (b) space-based
observations of tidal variations in the Earth’s rotation rate; and (c) space-
based measurements of the Chandler Wobble period and damping (Benjamin et al.
2006, Eanes & Bettadpur 1996, Eanes 1995). Not surprisingly, the Andrade law
became a key element in the recent attempt to construct a universal
rheological model of the Earth’s mantle (Birger 2007). This law also became a
component of the non-hydrostatic-equilibrium model for the zonal tides in an
inelastic Earth by Defraigne & Smits (1999), a model that became the basis for
the IERS Conventions (Petit & Luzum 2010). While the lab experiments give for
$\,\alpha\,$ values within $\,0.2\,-\,0.4\,$, the geodetic techniques favour
the interval $\,0.14\,-\,0.2\,$. This minor discrepancy may have emerged due
to the presence of partial melt in the mantle and, possibly, due to
nonlinearity at high bounding pressures in the lower mantle. The universality
of the Andrade law compels us to assume that (5.1) works equally well for
other terrestrial bodies. Similarly, the applicability of (5.1) to samples of
ices in the lab is likely to indicate that this law can be employed for
description of an icy moon as a whole.
Karato & Spetzler (1990) argue that at frequencies below a certain threshold
$\,\chi_{0}\,$ anelasticity gives way to purely viscoelastic behaviour, so the
parameter $\,\alpha\,$ becomes close to unity.999 This circumstance was
ignored by Defraigne & Smits (1999). Accordingly, if the claims by Karato &
Spetzler (1990) are correct, the table of corrections for the tidal variations
in the Earth’s rotation in the IERS Conventions is likely to contain
increasing errors for periods of about a year and longer. This detail is
missing in the theory of the Chandler wobble of Mars, by Zharkov & Gudkova
(2009). For the Earth’s mantle, the threshold corresponds to a time-scale of
about a year or slightly longer. Although in Karato & Spetzler (1990) the
rheological law is written in terms of $\,1/Q\,$, we shall substitute it with
a law more appropriate to the studies of tides:
$\displaystyle k_{\it l}\,\sin\epsilon_{\it l}\;=\;\left(\,{\cal E}\,\chi\,\right)^{-\,p}\;,\qquad\mbox{where}\qquad p\,=\,0.2\,-\,0.4\;\;\mbox{for}\;\;\chi\,>\,\chi_{0}\qquad\mbox{and}\qquad p\,\sim\,1\;\;\mbox{for}\;\;\chi\,<\,\chi_{0}\;,$
(80)
$\chi\,$ being the frequency, and $\,\chi_{0}\,$ being the frequency threshold
below which viscosity takes over anelasticity.
The reason why we write the power scaling law as (80) and not as (5.1) is that
at the lowest frequencies the geodetic measurements actually give us
$\,k_{l}\,\sin\epsilon_{l}\,=\,-\,{\cal{I}}{\it{m}}\left[\,\bar{k}_{\textstyle{{}_{l}}}(\chi)\,\right]\,$
and not the lag angle $\,\delta\,$ in a sample (e.g., Benjamin et al. 2006).
For this same reason, we denoted the exponents in (5.1) and (80) with
different letters, $\,\alpha\,$ and $\,p\,$. Below we shall see that these
exponents do not always coincide. Another reason for giving preference to (80)
is that not only the sine of the lag but also the absolute value of the Love
number is frequency dependent.
### 5.2 Tidal damping in the Moon, from laser ranging
Fitting of the LLR data to the power scaling law (5.1), which was carried out
by Williams et al. (2001), has demonstrated that the lunar mantle possesses
quite an abnormal value of the exponent: $\,-\,0.19\,$. A later reexamination
in Williams et al. (2008) rendered a less embarrassing value, $\,-\,0.09\,$,
which nevertheless was still negative and thus seemed to contradict our
knowledge about microphysical damping mechanisms in minerals. Thereupon,
Williams & Boggs (2009) commented:
“There is a weak dependence of tidal specific dissipation $\,Q\,$ on period.
The $\,Q\,$ increases from $\,\sim 30\,$ at a month to $\,\sim 35\,$ at one
year. $~{}Q\,$ for rock is expected to have a weak dependence on tidal period,
but it is expected to decrease with period rather than increase. The frequency
dependence of $\,Q\,$ deserves further attention and should be improved.”
While there always remains a possibility of the raw data being insufficient or
of the fitting procedure being imperfect, the fact is that the negative
exponent obtained in Ibid. does not necessarily contradict the scaling law
(5.1) proven for minerals and partial melts. Indeed, the exponent obtained by
the LLR Team was not the $\,\alpha\,$ from (5.1) but was the $\,p\,$ from
(80). The distinction is critical due to the difference in frequency-
dependence of the seismic and tidal dissipation. It turns out that the near-
viscous value $\,p\,\sim\,1\,$ from the second line of (80), appropriate for
low frequencies, does not retain its value all the way to the zero frequency.
Specifically, in subsection 5.4 we shall see that at the frequency
$~{}\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}\,{A_{\textstyle{{}_{l}}}}}}~{}$ (where
$\,\tau_{{}_{M}}=\eta/\mu\,$ is the Maxwell time, with $\,\eta\,$ and
$\,\mu\,$ being the lunar mantle’s viscosity and rigidity), the exponent
$\,p\,$ begins to decrease with the decrease of the frequency. As the
frequency becomes lower, $\,p\,$ changes its sign and eventually becomes
$\,-\,1\,$ in a close vicinity of $\,\chi\,=\,0\,$. This behaviour follows
from calculations based on a realistic rheology (see formulae (92 - 94)
below), and it goes along well with the evident physical fact that the average
tidal torque must vanish in a resonance.10 For example, the principal
tidal torque $\,\tau_{\textstyle{{}_{lmpq}}}=\,\tau_{\textstyle{{}_{2200}}}\,$
acting on a secondary must vanish when the secondary is crossing the
synchronous orbit. Naturally, this happens because $\,p\,$ becomes $\,-\,1\,$
in the close vicinity of $\,\chi_{\textstyle{{}_{2200}}}=\,0\,$. In subsection
5.7, comparison of this behaviour with the LLR results will yield us an
estimate for the mean lunar viscosity.
### 5.3 The Andrade model as an example of viscoelastic behaviour
The complex compliance of a Maxwell material contains a term $\,J=J(0)\,$
responsible for the elastic part of the deformation and a term
$\,-\,\frac{\textstyle{\it i}}{\textstyle\chi\eta}\,$ describing the
viscosity. Whatever other terms get incorporated into the compliance, these
will correspond to other forms of hereditary reaction. The available
geophysical data strongly favour a particular extension of the Maxwell
approach, the Andrade model (Cottrell & Aytekin 1947, Duval 1976). In modern
notations, the model can be expressed as11 As long as we agree to
integrate over
$~{}\,t-t\,^{\prime}\in\left[\right.0,\,\infty\left.\right)~{}$, the terms
$\,\beta(t-t\,^{\prime})^{\alpha}~{}$ and
$~{}{\textstyle\eta}^{-1}\left(t-t\,^{\prime}\right)\,$ can do without the
Heaviside step-function $\,\Theta(t-t\,^{\prime})\,$. We remind though that
the first term, $\,J\,$, does need this multiplier, so that insertion of (81)
into (43) renders the desired $\,J\,\delta(t-t\,^{\prime})\,$ under the
integral, after the differentiation in (43) is performed.
$\displaystyle
J(t-t\,^{\prime})\;=\;\left[\;J\;+\;\beta\,(t-t\,^{\prime})^{\alpha}\,+\;{\eta}^{-1}\left(t-t\,^{\prime}\right)\;\right]\,\Theta(t-t\,^{\prime})\;\;\;,$
(81)
$\alpha\,$ being a dimensionless parameter, $\,\beta\,$ being a dimensional
parameter, $\eta\,$ denoting the steady-state viscosity, and $\,J\,$ standing
for the unrelaxed compliance, which is inverse to the unrelaxed rigidity:
$\,J\equiv J(0)=1/\mu(0)\,=\,1/\mu\,$. We see that (81) is the Maxwell model
amended with an extra term of a hereditary nature.
A simple example illustrating how the model works is rendered by deformation
under constant loading. In this case, the anelastic term dominates at short
times, so the strain grows sublinearly in $\,t\,$, its rate decreasing with time (the so-called
primary or transient creep). As time goes on and the applied loading is kept
constant, the viscous term becomes larger, and the strain becomes almost
linear in time – a phenomenon called the secondary creep.
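For concreteness, the creep response just described is easy to evaluate directly from (85): under a constant stress $\,\sigma_{0}\,$ switched on at $\,t=0\,$, the strain is simply $\,\sigma_{0}\,J(t)\,$. The short Python sketch below is ours and is purely illustrative; its parameter values are arbitrary placeholders chosen only to display the transition from primary to secondary creep.
```python
# Illustrative sketch of Andrade creep under a step stress, using J(t) from (85).
# All numbers below are placeholders, not values advocated in the text.
J      = 1.0e-11   # unrelaxed compliance, 1/Pa
alpha  = 0.3       # Andrade exponent
tau_A  = 1.0e5     # Andrade (anelastic) time, s
tau_M  = 1.0e5     # Maxwell time, s  (so zeta = tau_A/tau_M = 1)
sigma0 = 1.0e6     # constant applied stress, Pa

def strain(t):
    """Creep under a step stress: epsilon(t) = sigma0 * J(t), with J(t) from (85)."""
    return sigma0 * J * (1.0 + (t/tau_A)**alpha + t/tau_M)

# The anelastic and viscous terms are equal at
# t_cross = (tau_M * tau_A**(-alpha))**(1/(1-alpha));
# before that moment the transient (primary) creep dominates, after it the
# strain grows almost linearly in time (secondary creep).
t_cross = (tau_M * tau_A**(-alpha))**(1.0/(1.0 - alpha))
for t in (0.01*t_cross, 0.1*t_cross, t_cross, 10*t_cross, 100*t_cross):
    print(f"t = {t:9.3e} s   strain = {strain(t):9.3e}")
```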
Remarkably, for all minerals (including ices) the values of $\,\alpha\,$
belong to the interval from $\,0.14\,$ through $\,0.4\,$ (more often, through
$\,0.3\,$) – see the references in subsection 5.1 above. The other parameter,
$\,\beta\,$, may be rewritten as
$\displaystyle\beta\,=\,J~{}\tau_{{}_{A}}^{-\alpha}\,=~{}\mu^{-1}\,\tau_{{}_{A}}^{-\alpha}~{}~{}~{},$
(82)
the quantity $\,\tau_{{}_{A}}\,$ having dimensions of time. This quantity is
the timescale associated with the Andrade creep, and it may be termed as the
“Andrade time” or the “anelastic time”. It is clear from (82) that a short
$\,\tau_{{}_{A}}\,$ makes the anelasticity more pronounced, while a long
$\,\tau_{{}_{A}}\,$ makes the anelasticity weak.12 While the Andrade creep
is likely to be caused by “unpinning” of jammed dislocations (Karato &
Spetzler 1990, Miguel et al. 2002), it is not immediately clear whether the Andrade
time can be identified with the typical time of unpinning of defects.
It is known from Castillo-Rogez et al. (2011) and Castillo-Rogez & Choukroun
(2010) that for some minerals, within some frequency bands, the Andrade time
gets very close to the Maxwell time:
$\displaystyle\tau_{{}_{A}}\,\approx~{}\tau_{{}_{M}}~{}\quad~{}\Longrightarrow~{}\quad~{}\beta\,\approx~{}J~{}\tau_{{}_{M}}^{-\alpha}\,=\,J^{1-\alpha}\,\eta^{-\alpha}\,=\,\mu^{\alpha-1}\,\eta^{-\alpha}~{}~{}~{},$
(83)
where the relaxation Maxwell time is given by:
$\displaystyle\tau_{{}_{M}}\,\equiv\,\frac{\eta}{\mu}\,=\,\eta\,J~{}~{}.$ (84)
On general grounds, though, one cannot expect the anelastic timescale
$\,\tau_{{}_{A}}\,$ and the viscoelastic timescale $\,\tau_{{}_{M}}\,$ to
coincide in all situations. This is especially so due to the fact that both
these times may possess some degree of frequency-dependence. Specifically,
there exist indications that in the Earth’s mantle the role of anelasticity
(compared to viscoelasticity) undergoes a decrease when the frequencies become
lower than $\,1/$yr – see the microphysical model suggested in subsection
5.2.3 of Karato & Spetzler (1990). It should be remembered, though, that the
relation between $\,\tau_{{}_{A}}\,$ and $\,\tau_{{}_{M}}\,$ may depend also
upon the intensity of loading, i.e., upon the damping mechanisms involved. The
microphysical model considered in Ibid. was applicable to strong deformations,
with anelastic dissipation being dominated by dislocations unpinning.
Accordingly, the dominance of viscosity over anelasticity
($\,\tau_{{}_{A}}\ll\tau_{{}_{M}}\,$) at low frequencies may be regarded
proven for strong deformations only. At low stresses, when the grain-boundary
diffusion mechanism is dominant, the values of $\,\tau_{{}_{A}}\,$ and
$\,\tau_{{}_{M}}\,$ may remain comparable at low frequencies. The topic needs
further research.
In terms of the Andrade and Maxwell times, the compliance becomes:
$\displaystyle
J(t-t\,^{\prime})\;=\;J~{}\left[~{}1~{}+~{}\left(\frac{t-t\,^{\prime}}{\tau_{{}_{A}}}\right)^{\alpha}\,+~{}\frac{t-t\,^{\prime}}{\tau_{{}_{M}}}\;\right]\,\Theta(t-t\,^{\prime})\;\;\;.$
(85)
In the frequency domain, the compliance (85) takes the form:
$\displaystyle{\bar{J}}(\chi)\;=\;J\,+\,\beta\,(i\chi)^{-\alpha}\;\Gamma(1+\alpha)\,-\,\frac{i}{\eta\chi}$ (86a)
$\displaystyle\phantom{{\bar{J}}(\chi)}\;=\;J\,\left[\,1\,+\,(i\,\chi\,\tau_{{}_{A}})^{-\alpha}\;\Gamma(1+\alpha)\,-\,i\,(\chi\,\tau_{{}_{M}})^{-1}\right]\;,$ (86b)
$\chi\,$ being the frequency, and $\,\Gamma\,$ denoting the Gamma function.
The imaginary and real parts of the complex compliance are:
$\displaystyle{\cal I}{\it m}[\bar{J}(\chi)]\;=\;-\;\frac{1}{\eta\,\chi}\;-\;\chi^{-\alpha}\,\beta\;\sin\left(\frac{\alpha\,\pi}{2}\right)\;\Gamma(\alpha+1)$ (87a)
$\displaystyle\phantom{{\cal I}{\it m}[\bar{J}(\chi)]}\;=\;-\;J\,(\chi\tau_{{}_{M}})^{-1}\;-\;J\,(\chi\tau_{{}_{A}})^{-\alpha}\;\sin\left(\frac{\alpha\,\pi}{2}\right)\;\Gamma(\alpha+1)$ (87b)
and
$\displaystyle{\cal R}{\it e}[\bar{J}(\chi)]\;=\;J\;+\;\chi^{-\alpha}\,\beta\;\cos\left(\frac{\alpha\,\pi}{2}\right)\;\Gamma(\alpha+1)$ (88a)
$\displaystyle\phantom{{\cal R}{\it e}[\bar{J}(\chi)]}\;=\;J\;+\;J\,(\chi\tau_{{}_{A}})^{-\alpha}\;\cos\left(\frac{\alpha\,\pi}{2}\right)\;\Gamma(\alpha+1)\;,$ (88b)
whence we obtain the following dependence of the phase lag upon the frequency:
$\displaystyle\tan\delta(\chi)\;=\;-\;\frac{{\cal I}{\it m}\left[\bar{J}(\chi)\right]}{{\cal R}{\it e}\left[\bar{J}(\chi)\right]}\;=\;\frac{(\eta\,\chi)^{-1}\,+\,\chi^{-\alpha}\,\beta\,\sin\left(\frac{\alpha\,\pi}{2}\right)\,\Gamma(\alpha+1)}{\mu^{-1}\,+\,\chi^{-\alpha}\,\beta\,\cos\left(\frac{\alpha\,\pi}{2}\right)\,\Gamma(\alpha+1)}$ (89a)
$\displaystyle\phantom{\tan\delta(\chi)}\;=\;\frac{z^{-1}\,\zeta\,+\,z^{-\alpha}\,\sin\left(\frac{\alpha\,\pi}{2}\right)\,\Gamma(\alpha+1)}{1\,+\,z^{-\alpha}\,\cos\left(\frac{\alpha\,\pi}{2}\right)\,\Gamma(\alpha+1)}\;.$ (89b)
Here $\,z\,$ is the dimensionless frequency defined as
$\displaystyle
z~{}\equiv~{}\chi~{}\tau_{{}_{A}}~{}=~{}\chi~{}\tau_{{}_{M}}~{}\zeta~{}~{}~{},$
(90)
while $\,\zeta\,$ is a dimensionless parameter of the Andrade model:
$\displaystyle\zeta~{}\equiv~{}\frac{\textstyle\tau_{{}_{A}}}{\textstyle\tau_{{}_{M}}}~{}~{}~{}.$
(91)
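As an illustrative cross-check (ours, not part of the original development), the lag (89b) is straightforward to evaluate numerically. The sketch below prints $\,\tan\delta\,$ over a range of the dimensionless frequency $\,z\,$ for a representative $\,\alpha\,$ and $\,\zeta\,$; these two values are assumptions made only for the example.
```python
import math

def tan_delta(z, alpha=0.3, zeta=1.0):
    """Phase lag of an Andrade body, equation (89b); z = chi*tau_A, zeta = tau_A/tau_M."""
    g = math.gamma(alpha + 1.0)
    s = math.sin(0.5 * math.pi * alpha)
    c = math.cos(0.5 * math.pi * alpha)
    return (zeta/z + z**(-alpha)*s*g) / (1.0 + z**(-alpha)*c*g)

for z in (1e-3, 1e-2, 1e-1, 1.0, 1e1, 1e2, 1e3):
    print(f"z = {z:7.0e}   tan(delta) = {tan_delta(z):10.4e}")
```
At low $\,z\,$ the viscous term $\,\zeta/z\,$ dominates and the lag approaches $\,\pi/2\,$, while at high $\,z\,$ the lag falls off as $\,z^{-\alpha}\,$, in line with (79b).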
### 5.4 Tidal response of viscoelastic near-spherical bodies obeying the
Andrade and Maxwell models
An $~{}lmpq\,$ term in the expansion for the tidal torque is proportional to
the factor
$~{}k_{\textstyle{{}_{l}}}(\chi)\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,=\,|\bar{k}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})~{}$.
Hence the tidal response of a body is determined by the frequency-dependence
of these factors.
Combining (72) with (86), and keeping in mind that $\,A_{l}\gg 1\,$, it is
easy to write down the frequency-dependencies of the products
$~{}|\bar{k}_{\textstyle{{}_{l}}}(\chi)|~{}\sin\epsilon_{\textstyle{{}_{l}}}(\chi)~{}$.
Referring the reader to Appendix E.2 for details, we present the results,
without the sign multiplier.
$\boldmath\bullet$ In the high-frequency band:
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,\approx\,\frac{3}{2\,(l-1)}\;\frac{A_{\textstyle{{}_{l}}}}{(A_{\textstyle{{}_{l}}}+\,1)^{2}}~{}\sin\left(\frac{\alpha\pi}{2}\right)~{}\Gamma(\alpha+1)~{}\,\zeta^{-\alpha}\,\left(\,\tau_{{}_{M}}\,\chi\,\right)^{-\alpha}\quad,~{}\quad\mbox{for}~{}\quad~{}\chi\,\gg\,\tau_{{}_{M}}^{-1}~{}~{}.~{}\quad$
(92)
For small bodies and small terrestrial planets (i.e., for
$\,A_{\textstyle{{}_{l}}}\gg 1\,$), the boundary between the high and
intermediate frequencies turns out to be
$\,\chi_{{{}_{HI}}}\,=\,\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}~{}~{}~{}.$
For large terrestrial planets (i.e., for $\,A_{\textstyle{{}_{l}}}\ll 1\,$)
the boundary frequency is
$\,\chi_{{{}_{HI}}}\,=\,\tau_{{}_{A}}^{-1}\,=\,\tau_{{}_{M}}^{-1}\,\zeta^{-1}~{}~{}~{}.$
At high frequencies, anelasticity dominates. So, dependent upon the
microphysics of the mantle, the parameter $\,\zeta\,$ may be of order unity or
slightly lower. We say slightly, because we expect both anelasticity and
viscosity to be present near the transitional zone. (A too low $\,\zeta\,$
would eliminate viscosity from the picture completely.) This said, we may
assume that the boundary $\,\chi_{{{}_{HI}}}\,$ is comparable to
$\,\tau_{{}_{M}}^{-1}\,$ for both small and large solid objects. This is why
in (92) we set the inequality simply as $~{}\chi\,\gg\,\tau_{{}_{M}}^{-1}~{}$.
$\boldmath\bullet$ In the intermediate-frequency band:
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,\approx\,\frac{3}{2\,(l-1)}\;\frac{A_{\textstyle{{}_{l}}}}{(A_{\textstyle{{}_{l}}}+1)^{2}}\,~{}\left(\,\tau_{{}_{M}}\,\chi\,\right)^{-1}\quad,\quad~{}\quad\mbox{for}\quad\quad\tau_{{}_{M}}^{-1}\gg\chi\gg\tau_{{}_{M}}^{-1}\,(A_{\textstyle{{}_{l}}}+1)^{-1}~{}\,~{}.\quad\quad$
(93)
While the consideration in the Appendix E.2 renders
$\,\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}\,$
for the upper bound, here we approximate it with $\,\tau_{{}_{M}}^{-1}\,$ on the
understanding that $\,\zeta\,$ does not differ much from unity near the
transitional zone. Further advances of rheology may challenge this convenient
simplification.
$\boldmath\bullet$ In the low-frequency band:
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|~{}\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,\approx\,\frac{3}{2\,(l-1)\textbf{}}~{}{A_{\textstyle{{}_{l}}}}~{}\,\tau_{{}_{M}}~{}\chi\quad\quad,\quad\quad\quad\quad~{}\quad\quad\,\mbox{for}~{}\quad~{}\quad\,\tau_{{}_{M}}^{-1}\,(A_{\textstyle{{}_{l}}}+1)^{-1}\,\gg\,\chi~{}~{}~{}.\quad\quad~{}\quad$
(94)
Scaling laws (92) and (93) mimic, up to constant factors, the frequency-
dependencies of $\,|\bar{J}(\chi)|\,\sin\delta(\chi)\,=\,-\,{\cal I}{\it
m}[\bar{J}(\chi)]~{}$ at high and low frequencies, correspondingly; this can
be seen from (87).
Expression (94) however shows a remarkable phenomenon inherent only in the
tidal lagging, and not in the lagging in a sample of material: at frequencies
below
$~{}\tau_{{}_{M}}^{-1}(A_{\textstyle{{}_{l}}}+1)^{-1}\,=\,\frac{\textstyle\mu}{\textstyle\eta}\,(A_{\textstyle{{}_{l}}}+1)^{-1}$,
the product $\;|\bar{k}_{\it{l}}(\chi)|\;\sin\epsilon_{\it{l}}(\chi)\;$
changes its behaviour and becomes linear in $\,\chi\,$.
While elsewhere the
$~{}|\bar{k}_{\textstyle{{}_{l}}}(\chi)|~{}\sin\epsilon_{\textstyle{{}_{l}}}(\chi)~{}$
factor increases with decreasing $\,\chi\,$, it changes its behaviour
drastically on close approach to the zero frequency. Having reached a finite
maximum at about
$~{}\chi\,=\,\tau_{{}_{M}}^{-1}(A_{\textstyle{{}_{l}}}+1)^{-1}$, the said
factor begins to scale linearly in $\,\chi\,$ as $\,\chi\,$ approaches zero.
This way, the factor
$~{}|\bar{k}_{\textstyle{{}_{l}}}(\chi)|~{}\sin\epsilon_{\textstyle{{}_{l}}}(\chi)~{}$
decreases continuously on close approach to a resonance and becomes nil, together with
the frequency, at the point of resonance. So neither the tidal torque nor
the tidal force explodes in resonances. In a somewhat heuristic manner, this
change in the frequency-dependence was pointed out, for $\,{\it l}=2\,$, in
Section 9 of Efroimsky & Williams (2009).
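The band structure just described can be made concrete with a small script (ours). It evaluates the asymptotic expressions (92) - (94) together with the band boundaries, using the values of $\,A_{2}\,$ and $\,\tau_{{}_{M}}\,$ adopted for Figure 1 below; the exponent $\,\alpha\,$ and the ratio $\,\zeta\,$ are assumed, representative values.
```python
import math

l, alpha, zeta = 2, 0.3, 1.0      # alpha and zeta are assumed for this example
A_l   = 80.5                      # A_2 used for Figure 1
tau_M = 3.75e5                    # Maxwell time used for Figure 1, s
pref  = 1.5 / (l - 1)

chi_HI = (1.0/tau_M) * zeta**(alpha/(1.0 - alpha))   # high/intermediate boundary (A_l >> 1)
chi_IL = 1.0/(tau_M*(A_l + 1.0))                     # intermediate/low boundary: the peak

def k_sin_eps(chi):
    """Piecewise asymptotics (92)-(94) for |k_l(chi)| * sin eps_l(chi)."""
    if chi > chi_HI:      # (92): anelasticity-dominated band
        return (pref * A_l/(A_l + 1.0)**2 * math.sin(0.5*math.pi*alpha)
                * math.gamma(alpha + 1.0) * zeta**(-alpha) * (tau_M*chi)**(-alpha))
    if chi > chi_IL:      # (93): viscosity-dominated band
        return pref * A_l/(A_l + 1.0)**2 / (tau_M*chi)
    return pref * A_l * tau_M * chi   # (94): linear decay toward zero frequency

print(f"peak of k_l*sin(eps_l) at chi = {chi_IL:.3e} 1/s")
for chi in (1e-9, chi_IL, 1e-6, 1e-4):
    print(f"chi = {chi:9.3e} 1/s   k2*sin(eps2) ~ {k_sin_eps(chi):9.3e}")
```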
### 5.5 Example
Figure 1 shows the absolute value, $~{}k_{2}\equiv|\bar{k}_{2}(\chi)|~{}$, as
well as the real part,
$~{}{\cal{R}}{\it{e}}\left[\bar{k}_{2}(\chi)\right]=k_{2}\,\cos\epsilon_{2}~{}$,
and the negative imaginary part,
$~{}\,-{\cal{I}}{\it{m}}\left[\bar{k}_{2}(\chi)\right]=k_{2}\,\sin\epsilon_{2}~{}$,
of the complex quadrupole Love number. Each of the three quantities is
represented by its decadic logarithm as a function of the decadic logarithm of
the forcing frequency $\,\chi\,$ (given in Hz). The curves were obtained by
insertion of formulae (87 - 88) into (4.3). As an example, the case of
$~{}\,-{\cal{I}}{\it{m}}\left[\bar{k}_{2}(\chi)\right]~{}$ is worked out in
Appendix E.2, see formulae (262 - 88).
Both in the high- and low-frequency limits, the negative imaginary part of
$\,\bar{k}_{2}(\chi)\,$, given on Figure 1 by the red curve, approaches zero.
Accordingly, over the low- and high-frequency bands the real part (the green
line) virtually coincides with the absolute value (the blue line).
While on the left of the peak, and on its close right, dissipation is mainly
due to viscosity, at higher frequencies friction is mainly due to
anelasticity. This switch corresponds to the change of the slope of the red
curve at high frequencies (for our choice of parameters, at around
$\,10^{-5}\,$ Hz). This change of the slope is often called the elbow.
Figure 1 was generated for $\,A_{2}=80.5\,$ and $\,\tau_{{}_{M}}=3.75\times
10^{5}\,$s. The value of $\,A_{2}\,$ corresponds to the Moon modeled by a
homogeneous sphere of rigidity $\,\mu=0.8\times 10^{11}\,$Pa. Our choice of
the value of $\,\tau_{{}_{M}}\equiv\eta/\mu\,$ corresponds to a homogeneous
Moon with the said value of rigidity and with viscosity set to be
$\,\eta=3\times 10^{16}\,$ Pa s. The reason why we consider an example with
such a low value of $\,\eta\,$ will be explained in subsection 5.7. Finally,
it was assumed for simplicity that $\,\zeta=1\,$, i.e., that
$\,\tau_{{}_{A}}=\tau_{{}_{M}}\,$. Although unphysical at low frequencies,
this simplification only slightly changes the shape of the “elbow” and exerts
virtually no influence upon the maximum of the red curve, provided the maximum
is located well into the viscosity zone.
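For readers wishing to reproduce curves of this kind, the following sketch (ours) evaluates $\,-{\cal{I}}{\it{m}}\left[\bar{k}_{2}(\chi)\right]\,$ from the Andrade compliance (86). Since the expression (4.3)/(72) for $\,\bar{k}_{l}\,$ is not reproduced in this section, the script assumes the form $\,\bar{k}_{l}(\chi)=\frac{3}{2(l-1)}\,\bar{J}(\chi)/\left[\bar{J}(\chi)+A_{l}\,J\right]\,$, chosen because it reproduces the asymptotics (92 - 94); the value $\,\alpha=0.3\,$ is likewise an assumption made for the example.
```python
import numpy as np
from math import gamma

# Figure-1 parameters quoted in the text; alpha is an assumed representative value.
A_2, tau_M, alpha, zeta = 80.5, 3.75e5, 0.3, 1.0
tau_A = zeta * tau_M

def k2_neg_imag(chi):
    """-Im[k_2(chi)] for a homogeneous sphere, assuming
    k_l = (3/(2(l-1))) * Jbar/(Jbar + A_l*J), with Jbar/J built from (86b)."""
    x  = (1j*chi*tau_A)**(-alpha) * gamma(1.0 + alpha) - 1j/(chi*tau_M)   # Jbar/J - 1
    k2 = 1.5 * (1.0 + x) / (1.0 + x + A_2)
    return -k2.imag

chi  = np.logspace(-10.0, -2.0, 161)            # forcing frequency, 1/s
vals = np.array([k2_neg_imag(c) for c in chi])
for c in (1e-9, 1e-7, 1e-5, 1e-3):
    print(f"chi = {c:8.1e} 1/s   k2*sin(eps2) = {k2_neg_imag(c):9.3e}")
print("numerical maximum of the red-curve quantity near chi =",
      f"{chi[np.argmax(vals)]:.2e} 1/s")
```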
Figure 1: Tidal response of a homogeneous spherical Andrade body, set against
the decadic logarithm of the forcing frequency $\chi$ (in Hz). The blue curve
renders the decadic logarithm of the absolute value of the quadrupole complex
Love number, $\,\lg k_{2}=\lg|\bar{k}_{2}(\chi)|\,$. The green and red curves
depict the logarithms of the real and the negative imaginary parts of the Love
number:
$\,\lg{\cal{R}}{\it{e}}\left[\bar{k}_{2}(\chi)\right]=\lg\left(k_{2}\,\cos\epsilon_{2}\right)\,$
and
$\,\lg\left\\{-{\cal{I}}{\it{m}}\left[\bar{k}_{2}(\chi)\right]\right\\}=\lg\left(-k_{2}\,\sin\epsilon_{2}\right)\,$,
accordingly. The change in the slope of the red curve (the “elbow”), which
takes place to the right of the maximum, corresponds to the switch from
viscosity dominance at lower frequencies to anelasticity dominance at higher
frequencies. The parameters $\,A_{2}\,$ and $\,\tau_{{}_{M}}\,$ were given
values appropriate to a homogeneous Moon with a low viscosity, as described in
subsection 5.7. The plots were generated for an Andrade body with
$\,\zeta=1\,$ at all frequencies. Setting the body Maxwell at lower
frequencies will only slightly change the shape of the “elbow” and will have
virtually no effect on the maximum.
### 5.6 Crossing a resonance – with a chance for entrapment
As ever, we recall that in the expansion for the tidal torque the factors (92 -
94) should appear in the company of the multipliers sgn$\,\omega\,$. For example,
the factor (94) describing dissipation near an $lmpq$ resonance will enter the
expansions as
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})|~{}\sin\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{lmpq}}})~{}\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}\,\approx\,\frac{3}{2\,(l-1)}~{}{A_{\textstyle{{}_{l}}}}~{}\,\tau_{{}_{M}}~{}\chi_{\textstyle{{}_{lmpq}}}~{}\mbox{sgn}\,\omega_{\textstyle{{}_{lmpq}}}~{}=~{}\frac{3}{2\,(l-1)}~{}{A_{\textstyle{{}_{l}}}}~{}\,\tau_{{}_{M}}~{}\omega_{\textstyle{{}_{lmpq}}}~{}~{}.\quad$
(95)
Naturally, the $lmpq$ term of the torque depends on
$\,\omega_{\textstyle{{}_{lmpq}}}\,$ and not just on
$\,\chi_{\textstyle{{}_{lmpq}}}=|\omega_{\textstyle{{}_{lmpq}}}|\,$. This term
then goes continuously through zero, and changes its sign as the $lmpq$
resonance is crossed (i.e., as $\,\omega_{\textstyle{{}_{lmpq}}}\,$ goes
through nil and changes its sign).
Formula (95) tells us that an _lmpq component of the tidal torque goes
continuously through zero as the satellite is traversing the commensurability
which corresponds to vanishing of a tidal frequency $\,\chi_{{\it l}mpq}\,$_.
This gets along with the physically evident fact that the principal (i.e.,
$\,2200\,$) term of the tidal torque should vanish as the secondary approaches
the synchronous orbit.
It is important that a $lmpq$ term of the torque changes its sign and thus
creates a chance for entrapment. As the value of an $lmpq$ term of the torque
is much lower than that of the principal, $2200$ term, we see that a perfectly
spherical body will never get stuck in a resonance other than $2200$. (The
latter is, of course, the $\,1:1\,$ resonance, i.e., the one in which the
principal term of the torque vanishes.) However, the presence of the
triaxiality-generated torque is known to contribute to the probabilities of
entrapment into other resonances (provided the eccentricity is not zero).
Typically, the literature considers a superposition of the triaxiality-
generated torque with the principal tidal term. We would point out that the
“trap” shape of the $lmpq$ term (95) makes this term relevant for the study of
entrapment in the $lmpq$ resonance. In some situations, one has to take into
account also the non-principal terms of the tidal torque.
### 5.7 Comparison with the LLR results
As we mentioned above, fitting of the lunar laser ranging (LLR) data to the
power law has resulted in a very small negative exponent $\,p=\,-\,0.19$
(Williams et al. 2001). Since the measurements of the lunar damping described
in Ibid. rendered information on the tidal and not seismic dissipation, those
results can and should be compared to the scaling laws (92 - 94). As the small
negative exponent was derived from observations over periods of a month to a
year, it is natural to presume that the appropriate frequencies were close to
or slightly below the frequency $~{}\frac{\textstyle
1}{\textstyle{\tau_{{}_{M}}\,{A_{\textstyle{{}_{2}}}}}}~{}$ at which the
factor $\,k_{2}\,\sin\epsilon_{2}\,$ has its peak:
$\displaystyle 3\times
10^{6}~{}\mbox{s}~{}\approx~{}0.1~{}\mbox{yr}~{}=~{}\tau_{{}_{M}}\,{A_{\textstyle{{}_{2}}}}\,=~{}\frac{\eta}{\mu}~{}{A_{\textstyle{{}_{2}}}}\,=~{}\frac{57~{}\eta}{8\,\pi\,G\,(\rho\,R)^{2}}~{}~{}~{},$
(96)
as on Figure 1. Hence, were the Moon a uniform viscoelastic body, its
viscosity would be only
$\displaystyle\eta~{}=~{}3\,\times\,10^{16}~{}\mbox{Pa~{}}\,\mbox{s}~{}~{}~{}.$
(97)
For the actual Moon, the estimate means that the lower mantle contains a high
percentage of partial melt, a fact which goes along well with the model
suggested in Weber et al. (2011), and which was anticipated already in Williams et
al. (2001) and Williams & Boggs (2009), following an earlier work by Nakamura
et al. (1974).
## 6 The polar component of the tidal torque acting on the primary
Let vector $\,\mbox{{\boldmath$\vec{r}$}}\,=\,(r,\,\lambda,\,\phi)\,$ point
from the centre of the primary toward a point-like secondary of mass
$\,M_{sec}\,$. Associating the coordinate system with the primary, we reckon
the latitude $\,\phi\,$ from the equator. Setting the coordinate system to
corotate, we determine the longitude $\,\lambda\,$ from a fixed meridian. The
tidally induced component of the primary’s potential, $\,U\,$, can be
generated either by this secondary itself or by some other secondary of mass
$\,M_{sec}^{*}\,$ located at
$\,\mbox{{\boldmath$\vec{r}$}}^{\,*}\,=\,(r^{*},\,\lambda^{*},\,\phi^{*})\,$.
In either situation, the tidally induced potential $\,U\,$ generates a tidal
force and a tidal torque wherewith the secondary of mass $\,M_{sec}\,$ acts on
the primary.
The scope of this paper is limited to low values of ${\it i}$. When the role
of the primary is played by a planet, the secondary being its satellite, ${\it
i}$ is the satellite’s inclination. When the role of the primary is played by
the satellite, the planet acting as its secondary, ${\it i}$ acquires the
meaning of the satellite’s obliquity. Similarly, when the planet is regarded
as a primary and its host star is treated as its secondary, ${\it i}$ is the
obliquity of the planet. In all these cases, the smallness of $\,{\it i}\,$
indicates that the tidal torque acting on the primary can be identified with
its polar component, the one orthogonal to the equator of the primary. The
other components of the torque will be neglected in this approximation.
The polar component of the torque acting on the primary is the negative of the
partial derivative of the tidal potential, with respect to the primary’s
sidereal angle:
$\displaystyle{\cal
T}(\mbox{{\boldmath$\vec{r}$}})\;=\;-\;M_{sec}\;\frac{\partial
U(\mbox{{\boldmath$\vec{r}$}})}{\partial\theta}\;\;\;,$ (98)
$\theta\,$ standing for the primary’s sidereal angle. This formula is
convenient when the tidal potential $\,U\,$ is expressed through the
secondary’s orbital elements and the primary’s sidereal angle.13 Were the
potential written down in the spherical coordinates associated with the
primary’s equator and corotating with the primary, the polar component of the
tidal torque could be calculated with aid of the expression
$\displaystyle{\cal
T}(\mbox{{\boldmath$\vec{r}$}})\;=\;\,M_{sec}\;\frac{\partial
U(\mbox{{\boldmath$\vec{r}$}})}{\partial\lambda}~{}~{}~{}$ derived, for
example, in Williams & Efroimsky (2012). That the expression agrees with (98)
can be seen from the formula
$\displaystyle\lambda\;=\;-\;\theta\;+\;\Omega\;+\;\omega\;+\;\nu\;+\;O({\it
i}^{2})\;=\;-\;\theta\;+\;\Omega\;+\;\omega\;+\;{\cal{M}}\;+\;2\;e\;\sin{\cal{M}}\;+\;O(e^{2})\;+\;O({\it
i}^{2})\;\;\;,\;\;\;\;$ $e\,,\,i\,,\,\omega\,,\,\Omega\,,\,\nu\,$ and $\,{\cal
M}\,$ being the eccentricity, inclination, argument of the pericentre,
longitude of the node, true anomaly, and mean anomaly of the tide-raising
secondary.
Here and hereafter we are deliberately referring to _a primary and a
secondary_ in lieu of _a planet and a satellite_. The preference stems from
our intention to extend the formalism to a setting where a moon plays the
role of a tidally-perturbed primary, the planet being its tide-producing
secondary. Similarly, when addressing the rotation of Mercury, we interpret
the Sun as a secondary that is causing a tide on the effectively primary body,
Mercury.
## 7 The tidal potential
### 7.1 Darwin (1879) and Kaula (1964)
The potential produced at point
$\mbox{{\boldmath$\vec{R}$}}=(R\,,\,\lambda\,,\,\phi)\,$ by a secondary body
of mass $\,M^{*}$, located at
$\,\mbox{{\boldmath$\vec{r}$}}^{\;*}=(r^{*},\,\lambda^{*}\,,\,\phi^{*})\,$
with $\,r^{*}\geq R\,$, is given by (1). When a tide-raising secondary located
at $\,\mbox{{\boldmath$\vec{r}$}}^{\;*}\,$ distorts the shape of the primary,
the potential generated by the primary at some exterior point $\vec{r}$ gets
changed. In the linear approximation, its variation is given by (2). Insertion
of (1) into (2) entails
$\displaystyle
U(\mbox{{\boldmath$\vec{r}$}})\;=\;\,-\,{G\;M^{*}_{sec}}\sum_{{\it{l}}=2}^{\infty}k_{\it
l}\;\frac{R^{\textstyle{{}^{2\it{l}+1}}}}{r^{\textstyle{{}^{\it{l}+1}}}{r^{\;*}}^{\textstyle{{}^{\it{l}+1}}}}\sum_{m=0}^{\it
l}\frac{({\it l}-m)!}{({\it
l}+m)!}(2-\delta_{0m})P_{{\it{l}}m}(\sin\phi)P_{{\it{l}}m}(\sin\phi^{*})\;\cos
m(\lambda-\lambda^{*})~{}~{}~{}.~{}~{}~{}~{}~{}$ (99)
A different expression for the tidal potential was offered by Kaula (1961,
1964), who developed a powerful technique that enabled him to switch from the
spherical coordinates to the Kepler elements $\,(\,a^{*},\,e^{*},\,{\it
i}^{*},\,\Omega^{*},\,\omega^{*},\,{\cal M}^{*}\,)\,$ and $\,(\,a,\,e,\,{\it
i},\,\Omega,\,\omega,\,{\cal M}\,)\,$ of the secondaries located at
$\,\mbox{{\boldmath$\vec{r}$}}^{\;*}\,$ and $\vec{r}$ . Application of this
technique to (99) results in
$\displaystyle U(\mbox{{\boldmath$\vec{r}$}})\;=\;-\;\sum_{{\it l}=2}^{\infty}\,k_{\it l}\,\left(\frac{R}{a}\right)^{{\it l}+1}\frac{G\,M^{*}_{sec}}{a^{*}}\,\left(\frac{R}{a^{*}}\right)^{\it l}\,\sum_{m=0}^{\it l}\,\frac{({\it l}-m)!}{({\it l}+m)!}\;(2-\delta_{0m})\,\sum_{p=0}^{\it l}F_{{\it l}mp}({\it i}^{*})\sum_{q=-\infty}^{\infty}G_{{\it l}pq}(e^{*})\sum_{h=0}^{\it l}F_{{\it l}mh}({\it i})\sum_{j=-\infty}^{\infty}G_{{\it l}hj}(e)\;\cos\left[\left(v_{{\it l}mpq}^{*}-m\theta^{*}\right)-\left(v_{{\it l}mhj}-m\theta\right)\right]\;,$ (100)
where
$\displaystyle v_{{\it l}mpq}^{*}\;\equiv\;({\it l}-2p)\omega^{*}\,+\,({\it
l}-2p+q){\cal M}^{*}\,+\,m\,\Omega^{*}~{}~{}~{},$ (101) $\displaystyle v_{{\it
l}mhj}\;\equiv\;({\it l}-2h)\omega\,+\,({\it l}-2h+j){\cal
M}\,+\,m\,\Omega~{}~{}~{},$ (102)
$\theta\,=\,\theta^{*}\,$ being the sidereal angle, $\,G_{lpq}(e)\,$
signifying the eccentricity functions,14 Functions $\,G_{lhj}(e)\,$
coincide with the Hansen polynomials
$\,X^{\textstyle{{}^{(-l-1),\,(l-2p)}}}_{\textstyle{{}_{(l-2p+q)}}}(e)~{}$. In
Appendix G, we provide a table of the $\,G_{lhj}(e)\,$ required for expansion
of tides up to $\,e^{6}\,$, inclusively. and $\,F_{lmp}({\it i})\,$ denoting
the inclination functions (Gooding & Wagner 2008).
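As a practical aside (ours), the eccentricity functions can be generated numerically straight from the Hansen-coefficient definition quoted in footnote 14, $\,G_{{\it l}pq}(e)=X^{(-{\it l}-1),\,({\it l}-2p)}_{({\it l}-2p+q)}(e)\,$; the helper names in the sketch below are ours, and the quadrature is a plain midpoint rule over the mean anomaly.
```python
import numpy as np

def kepler_E(M, e, tol=1e-14):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def hansen_X(n, m, k, e, N=20000):
    """Hansen coefficient X^{n,m}_k(e) = (1/2pi) Int (r/a)^n cos(m*nu - k*M) dM."""
    M  = 2.0*np.pi*(np.arange(N) + 0.5)/N
    E  = kepler_E(M, e)
    ra = 1.0 - e*np.cos(E)                               # r/a
    nu = 2.0*np.arctan2(np.sqrt(1.0+e)*np.sin(E/2.0),
                        np.sqrt(1.0-e)*np.cos(E/2.0))    # true anomaly
    return np.mean(ra**n * np.cos(m*nu - k*M))

def G_lpq(l, p, q, e):
    """Eccentricity function G_{lpq}(e) = X^{-(l+1), l-2p}_{l-2p+q}(e)."""
    return hansen_X(-(l+1), l-2*p, l-2*p+q, e)

# Example: the functions entering the (lmp) = (220) terms, up to |q| = 3.
e = 0.1
for q in range(-3, 4):
    print(f"G_2,0,{q:+d}({e}) = {G_lpq(2, 0, q, e):+.6f}")
```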
While equivalent for a primary whose shape responds instantaneously, (99) and
(100) disagree when dissipation-caused delays come into play. Kaula’s
expression (100), as well as its truncated, Darwin’s version,15 While the
treatment by Kaula (1964) entails the infinite Fourier series (100), the
development by Darwin (1879, 1880) renders its partial sum with
$\,|{\it{l}}|,\,|q|,\,|j|\,\leq\,2\,.$ For a simple introduction into Darwin’s
method see Ferraz-Mello et al. (2008). Be mindful that in Ibid. the convention
on the notations $\vec{r}$ and $\mbox{{\boldmath$\vec{r}$}}^{\,*}$ is opposite
to ours. is capable of accommodating separate phase lags for each mode:
$\displaystyle U(\mbox{{\boldmath$\vec{r}$}})\;=\;-\;\sum_{{\it l}=2}^{\infty}\,k_{\it l}\,\left(\frac{R}{a}\right)^{{\it l}+1}\frac{G\,M^{*}_{sec}}{a^{*}}\,\left(\frac{R}{a^{*}}\right)^{\it l}\,\sum_{m=0}^{\it l}\,\frac{({\it l}-m)!}{({\it l}+m)!}\;(2-\delta_{0m})\,\sum_{p=0}^{\it l}F_{{\it l}mp}({\it i}^{*})\sum_{q=-\infty}^{\infty}G_{{\it l}pq}(e^{*})\sum_{h=0}^{\it l}F_{{\it l}mh}({\it i})\sum_{j=-\infty}^{\infty}G_{{\it l}hj}(e)\;\cos\left[\left(v_{{\it l}mpq}^{*}-m\theta^{*}\right)-\left(v_{{\it l}mhj}-m\theta\right)-\epsilon_{{\it l}mpq}\right]\;,$ (103)
where
$\displaystyle\epsilon_{{\it l}mpq}=\left[\,({\it
l}-2p)\,\dot{\omega}^{*}\,+\,({\it
l}-2p+q)\,\dot{\cal{M}}^{*}\,+\,m\,(\dot{\Omega}^{*}\,-\,\dot{\theta}^{*})\,\right]\,\Delta
t_{\it{l}mpq}=\,\omega^{*}_{\it{l}mpq}\,\Delta
t_{\it{l}mpq}=\,\pm\,\chi^{*}_{\it{l}mpq}\,\Delta
t_{\it{l}mpq}~{}~{}~{}~{}~{}~{}$ (104)
is the phase lag. The tidal mode $\omega^{*}_{\it{l}mpq}$ introduced in (104)
is
$\displaystyle\omega^{*}_{{\it l}mpq}\;\equiv\;({\it
l}-2p)\;\dot{\omega}^{*}\,+\,({\it
l}-2p+q)\;\dot{\cal{M}}^{*}\,+\,m\;(\dot{\Omega}^{*}\,-\,\dot{\theta}^{*})\;~{}~{},~{}~{}~{}$
(105)
while the positively-defined quantity
$\displaystyle\chi^{*}_{{\it l}mpq}\,\equiv\,|\,\omega^{*}_{{\it
l}mpq}\,|\,=\,|\,({\it l}-2p)\,\dot{\omega}^{*}\,+\,({\it
l}-2p+q)\,\dot{\cal{M}}^{*}\,+\,m\,(\dot{\Omega}^{*}\,-\,\dot{\theta}^{*})\;|~{}~{}~{}~{}~{}$
(106)
is the actual physical $\,{{\it l}mpq}\,$ frequency excited by the tide in the
primary. The corresponding positively-defined time delay $\,\Delta
t_{\it{l}mpq}=\,\Delta t_{l}(\chi_{{\it l}mpq})\,$ depends on this physical
frequency, the functional forms of this dependence being different for
different materials.
In neglect of the apsidal and nodal precessions, and also of
$\,\dot{\cal{M}}_{0}\,$, the above formulae become:
$\displaystyle\omega_{{\it l}mpq}\,=\,({\it
l}-2p+q)\,n\,-\,m\,\dot{\theta}~{}~{}~{},~{}~{}$ (107)
$\displaystyle\chi_{{\it l}mpq}\,\equiv\,|\,\omega_{{\it
l}mpq}\,|\,=\,|\,({\it l}-2p+q)\,n\,-\,m\,\dot{\theta}\;|~{}~{}~{},~{}~{}$
(108)
and
$\displaystyle\epsilon_{{\it l}mpq}\,\equiv\,\omega_{{\it l}mpq}\,\Delta
t_{{\it l}mpq}$ $\displaystyle=$ $\displaystyle\left[\,({\it
l}-2p+q)\,n\,-\,m\,\dot{\theta}\,\right]\,\Delta t_{{\it l}mpq}$ (109a)
$\displaystyle=$ $\displaystyle\chi_{{\it l}mpq}~{}\,\Delta t_{l}(\chi_{{\it
l}mpq})~{}\,\mbox{sgn}\,\left[\,({\it
l}-2p+q)\,n\,-\,m\,\dot{\theta}\,\right]~{}~{}~{},~{}~{}$ (109b)
Formulae (100) and (103) constitute the pivotal result of Kaula’s theory of
tides. Importantly, Kaula’s theory imposes no _a priori_ constraint on the
form of frequency-dependence of the lags.
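To make the bookkeeping of (107) - (109) concrete, the short sketch below (ours) lists the quadrupole modes $\,\omega_{220q}\,$, the physical frequencies $\,\chi_{220q}\,$, and the signed phase lags for an example spin state; the mean motion, spin rate, and time lag used here are placeholders, not values taken from the text.
```python
import math

def omega_220q(q, n, theta_dot):
    """omega_{220q} = (2+q)*n - 2*theta_dot, per (107) with l=2, m=2, p=0."""
    return (2 + q)*n - 2.0*theta_dot

n         = 2.0*math.pi/(27.3*86400.0)  # example mean motion (about a month), rad/s
theta_dot = 3.0*n                       # example spin rate (placeholder)
dt        = 600.0                       # example time lag Delta t, s (placeholder)

for q in range(-3, 4):
    w   = omega_220q(q, n, theta_dot)
    chi = abs(w)                                  # physical frequency, (108)
    eps = chi*dt*math.copysign(1.0, w)            # phase lag, (109b)
    print(f"q = {q:+d}   omega = {w:+.3e}   chi = {chi:.3e}   eps = {eps:+.3e}")
```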
## 8 The Darwin torque
As explained in Williams & Efroimsky (2012), the empirical model by MacDonald
(1964), called MacDonald torque, tacitly sets an unphysical rheology of the
satellite’s material. The rheology is given by (5.1) with
$\,\alpha\,=\,-\,1\,$. More realistic is the dissipation law (80). An even
more accurate and practical formulation of the damping law, stemming from the
Andrade formula for the compliance, is rendered by (92 - 94). These formulae
should be inserted into the Darwin-Kaula theory of tides.
### 8.1 The secular and oscillating parts of the Darwin torque
#### 8.1.1 The general formula
Direct differentiation of (103) with respect to $\;-\,\theta\,$ will result in
the expression16 For justification of this operation, see Section 6 in
Efroimsky & Williams (2009).
$\displaystyle{\cal T}=-\,\sum_{{\it
l}=2}^{\infty}\left(\frac{R}{a}\right)^{\textstyle{{}^{{\it
l}+1}}}\frac{G\,M^{*}_{sec}\,M_{sec}}{a^{*}}\left(\frac{R}{a^{*}}\right)^{\textstyle{{}^{\it
l}}}\sum_{m=0}^{\it l}\frac{({\it l}-m)!}{({\it l}+m)!}\;2\,m\,\sum_{p=0}^{\it
l}F_{{\it l}mp}({\it i}^{*})\sum_{q=-\infty}^{\infty}G_{{\it
l}pq}(e^{*})~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle\sum_{h=0}^{\it l}F_{{\it l}mh}({\it
i})\sum_{j=-\infty}^{\infty}G_{{\it l}hj}(e)\;k_{\it
l}\;\sin\left[\,v^{*}_{{\it l}mpq}\,-\;v_{{\it l}mhj}\,-\;\epsilon_{{\it
l}mpq}\,\right]~{}~{}_{\textstyle{{}_{\textstyle.}}}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(110)
If the tidally-perturbed and tide-raising secondaries are the same body, then
$\,M_{sec}=M_{sec}^{*}\,$, and all the elements coincide with their
counterparts with an asterisk. Hence the differences
$\displaystyle
v^{*}_{{\it{l}}mpq}\,-\;v_{{\it{l}}mhj}\,=~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle({\it{l}}\,-\,2\,p\,+\,q)\;{\cal{M}}^{*}\,-\;({\it{l}}\,-\,2\,h\,+\,j)\,{\cal{M}}\;+\;m\,({\Omega}^{*}\,-{\Omega})\;+\;{\it{l}}\;({\omega}^{*}\,-\,{\omega})\,-\,2\,p\,{\omega}^{*}\,+\,2\,h\,{\omega}~{}~{}~{}~{}~{}~{}$
(111)
get simplified to
$\displaystyle
v^{*}_{{\it{l}}mpq}\,-\;v_{{\it{l}}mhj}\,=~{}(2\,h\,-\,2\,p\,+\,q\,-\,j)\;{\cal{M}}^{*}\,+\,(2\,h\,-\,2\,p)\,{\omega}^{*}~{}~{}~{},~{}~{}~{}$
(112)
an expression containing both short-period contributions proportional to the
mean anomaly, and long-period contributions proportional to the argument of
the pericentre.
#### 8.1.2 The secular, the purely-short-period, and the mixed-period parts
of the torque
Now we see that the terms entering series (110) can be split into three
groups:
(1) The terms, in which indices $\,(p\,,\,q)\,$ coincide with $\,(h\,,\,j)\,$,
constitute a secular part of the tidal torque, because in such terms
$\,v_{{\it{l}}mhj}\,$ cancels with $\,v_{{\it l}mpq}^{*}\,$. This ${\cal{M}}$-
and $\,\omega$-independent part is furnished by
$\displaystyle\overline{\cal T}$ $\displaystyle=$
$\displaystyle\sum_{{\it{l}}=2}^{\infty}2~{}G~{}M_{sec}^{\textstyle{{}^{2}}}~{}\frac{R^{\textstyle{{}^{2{\it{l}}\,+\,1}}}}{a^{\textstyle{{}^{2\,{\it{l}}\,+\,2}}}}\sum_{m=0}^{\it
l}\frac{({\it{l}}\,-\,m)!}{({\it{l}}\,+\,m)!}\;m\;\sum_{p=0}^{\it
l}F^{\textstyle{{}^{2}}}_{{\it{l}}mp}({\it
i})\sum^{\it\infty}_{q=-\infty}G^{\textstyle{{}^{2}}}_{{\it{l}}pq}(e)\;k_{\it
l}\;\sin\epsilon_{{\it{l}}mpq}\;\;\;.~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(113)
(2) The terms with $\,p\,=\,h\,$ and $\,q\,\neq\,j\,$ constitute a purely
short-period part of the torque:
$\displaystyle\widetilde{\cal T}\;=\;-\sum_{{\it l}=2}^{\infty}\,2\,G\,M^{2}_{sec}\,\frac{R^{2{\it l}+1}}{a^{2{\it l}+2}}\;\sum_{m=0}^{\it l}\,\frac{({\it l}-m)!}{({\it l}+m)!}\;m\,\sum_{p=0}^{\it l}F^{2}_{{\it l}mp}({\it i})\sum_{q=-\infty}^{\infty}\;\sum_{j=-\infty,\;j\neq q}^{\infty}G_{{\it l}pq}(e)\;G_{{\it l}pj}(e)\;k_{\it l}\;\sin\left[\,(q-j)\,{\cal{M}}\;-\;\epsilon_{{\it l}mpq}\,\right]\;.$
(114)
(3) The remaining terms, ones with $\,p\,\neq\,h\,$, make a mixed-period part
comprised of both short- and long-period terms:
$\displaystyle{\cal T}^{mixed}\;=\;-\;\sum_{{\it l}=2}^{\infty}2\,G\,M^{2}_{sec}\,\frac{R^{2{\it l}+1}}{a^{2{\it l}+2}}\sum_{m=0}^{\it l}\frac{({\it l}-m)!}{({\it l}+m)!}\;m\sum_{p=0}^{\it l}F_{{\it l}mp}({\it i})\sum_{h=0,\;h\neq p}^{\it l}F_{{\it l}mh}({\it i})\sum_{q=-\infty}^{\infty}\sum_{j=-\infty}^{\infty}G_{{\it l}hq}(e)\,G_{{\it l}pj}(e)\,k_{\it l}\;\sin\left[\,(2h-2p+q-j)\,{\cal{M}}^{*}\,+\,(2h-2p)\,{\omega}^{*}\;-\;\epsilon_{{\it l}mpq}\,\right]\;.$
(115)
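This three-way split is mechanical enough to be spelled out in a few lines of code. The sketch below (ours) classifies a $\,(p,q,h,j)\,$ combination according to the rules (1) - (3) and returns the coefficients of $\,{\cal M}^{*}\,$ and $\,\omega^{*}\,$ in its angular argument (112).
```python
def classify_term(p, q, h, j):
    """Classify a term of (110) per the rules (1)-(3) above."""
    if (p, q) == (h, j):
        return "secular"               # argument vanishes: contributes to (113)
    if p == h:
        return "purely short-period"   # argument (q-j)*M: contributes to (114)
    return "mixed-period"              # argument from (112): contributes to (115)

def argument_coefficients(p, q, h, j):
    """Coefficients (c_M, c_omega) of M* and omega* in (112)."""
    return (2*h - 2*p + q - j, 2*h - 2*p)

for combo in [(0, 0, 0, 0), (0, 1, 0, -1), (0, 0, 1, 0)]:
    print(combo, classify_term(*combo), argument_coefficients(*combo))
```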
#### 8.1.3 The $\,l=2\,$ and $\,l=3\,$ terms in the $\,O(i^{2})\,$
approximation
For $\,l=2\,$, index $\,m\,$ will take the values $\,0,1,2\,$ only. Although
the $\,m=0\,$ terms enter the potential, they add nothing to the torque,
because differentiation of (103) with respect to $\;-\theta\;$ furnishes the
$\,m\,$ multiplier in (110). To examine the remaining terms, we should
consider the inclination functions with subscripts
$\,({\emph{l}}mp)\,=\,(220)\,,\,(210)\,,\,(211)\;$ only:
$\displaystyle F_{220}({\it i})\,=\,3\,+\,O({\it
i}^{2})~{}~{}~{},~{}~{}~{}~{}~{}F_{210}({\it i})\,=\,\frac{3}{2}\;\sin{\it
i}\,+\,O({\it i}^{2})\;\;\;,~{}~{}~{}~{}~{}F_{211}({\it
i})\,=\;-\;\frac{3}{2}\;\sin{\it i}\,+\,O({\it
i}^{2})~{}~{}~{},~{}~{}~{}~{}~{}$ (116)
all the other $\,F_{2mp}({\it i})\,$ being of order $\,O({\it i}^{2})\,$ or
higher. Thence for $\,p=h\,$ (i.e., both in the secular and purely-short-
period parts) it is sufficient, in the $\,O({\it i}^{2})\,$ approximation, to
keep only the terms with $\,F_{220}^{2}({\it i})\,$, ignoring those with
$\,F_{210}^{2}({\it i})\,$ and $\,F_{211}^{2}({\it i})\,$. We see that in the
$~{}O({\it i}^{2})~{}$ approximation
* •
among the $\,l=2\,$ terms, both in the secular and purely short-period parts,
only the terms with $\;({\it{l}}mp)\,=\,(220)\;$ are relevant.
In the case of $\,p\neq h\,$, i.e., in the mixed-period part, the terms of the
leading order in inclination are: $~{}\,F_{{\it{l}}mp}({\it
i})\,F_{{\it{l}}mh}({\it i})~{}=~{}\,F_{210}({\it i})\,F_{211}({\it i})~{}$
and $~{}\,F_{{\it{l}}mp}({\it i})\,F_{{\it{l}}mh}({\it
i})~{}=~{}\,F_{211}({\it i})\,F_{210}({\it i})~{}$, which happen to be equal
to one another, and to be of order $~{}O({\it i}^{2})~{}$. This way, in the
$~{}O({\it i}^{2})~{}$ approximation,
* •
the mixed-period part of the $\,l=2\,$ component may be omitted.
The inclination functions
$\,F_{lmp}\,=\,F_{310}\,,\,F_{312}\,,\,F_{313}\,,\,F_{320}\,,\,F_{321}\,,\,F_{322}\,,\,F_{323}\,,\,F_{331}\,,\,F_{332}\,,\,F_{333}\,$
are of order $\,O({\it i})\,$ or higher. The terms containing the squares or
cross-products of these functions may thus be dropped. Specifically, the
smallness of the cross-terms tells us that
* •
the mixed-period part of the $\,l=3\,$ component may be omitted.
What remains are the terms containing the squares of the functions
$\displaystyle F_{311}({\it i})\,=~{}-~{}\frac{3}{2}\,+\,O({\it
i}^{2})~{}~{}~{}~{}~{}~{}\mbox{and}~{}~{}~{}~{}~{}~{}F_{330}({\it
i})\,=\,15\,+\,O({\it i}^{2})\;\;\;~{}~{}~{}.~{}~{}~{}~{}~{}$ (117)
In other words,
* •
among the $\,l=3\,$ terms, both in the secular and purely short-period parts,
only the terms with $\;({\it{l}}mp)\,=\,(311)\;$ and
$\;({\it{l}}mp)\,=\,(330)\;$ are important.
All in all, for $\,l=2\,$ and $\,l=3\,$ the mixed-period parts of the torque
may be neglected, in the $~{}O({\it i}^{2})~{}$ approximation. The surviving
terms of the secular and the purely short-period parts will be developed up to
$\,e^{6}\,$, inclusively.
### 8.2 Approximation for the secular and short-period parts of the tidal
torque
As we just saw, both the secular and short-period parts of the torque may be
approximated with the following degree-2 and degree-3 components:
$\displaystyle\overline{\cal T}$ $\displaystyle=$ $\displaystyle\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{l=2}}}}}}\,+~{}\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{l=3}}}}}}\,+~{}O\left(\,\epsilon\,(R/a)^{9}\,\right)~{}$
(118) $\displaystyle=$ $\displaystyle\overline{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(220)}}}}}\,+\left[\,\overline{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(311)}}}}}\,+~{}\overline{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(330)}}}}}\right]~{}+~{}O(\epsilon\,i^{2})~{}+~{}O\left(\,\epsilon\,(R/a)^{9}\,\right)~{}~{}~{},\quad\quad~{}~{}~{}~{}$
and
$\displaystyle\widetilde{\cal T}$ $\displaystyle=$
$\displaystyle\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{l=2}}}}}\,+~{}\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{l=3}}}}}\,+~{}O\left(\,\epsilon\,(R/a)^{9}\,\right)~{}$
(119) $\displaystyle=$ $\displaystyle\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(220)}}}}}\,+\left[\,\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(311)}}}}}\,+~{}\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(330)}}}}}\right]~{}+~{}O(\epsilon\,i^{2})~{}+~{}O\left(\,\epsilon\,(R/a)^{9}\,\right)~{}~{}~{},\quad\quad$
where the $\,l=2\,$ and $\,l=3\,$ inputs are of the order $\,(R/a)^{5}\,$ and
$\,(R/a)^{7}\,$, accordingly; while the $\,l=4,\,5,\,.\,.\,.\,$ inputs
constitute $\,O\left(\,\epsilon\,(R/a)^{9}\,\right)\,$.
Expressions for $~{}\overline{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(220)}}}}}~{}$, $~{}\overline{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(311)}}}}}~{}$, and $~{}\overline{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(330)}}}}}~{}$ are furnished by
formulae (297), (299), and (301) in Appendix H. As an example, here we provide
one of these components:
$\displaystyle\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{(lmp)=(220)}}}}}}$ $\displaystyle=$
$\displaystyle\frac{3}{2}~{}G\;M_{sec}^{2}\,R^{5}\,a^{-6}\,\left[~{}\frac{1}{2304}~{}e^{6}~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{220~{}-3}}}}}|~{}~{}\mbox{sgn}\,\left(\,-\,n\,-\,2\,\dot{\theta}\,\right)\right.$
(120) $\displaystyle+$
$\displaystyle\left(~{}\frac{1}{4}~{}e^{2}-\,\frac{1}{16}~{}e^{4}~{}+\,\frac{13}{768}~{}e^{6}~{}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{220~{}-1}}}}}|~{}~{}\mbox{sgn}\,\left(\,n\,-\,2\,\dot{\theta}\,\right)~{}$
$\displaystyle+$
$\displaystyle\left(1\,-\,5\,e^{2}\,+~{}\frac{63}{8}\;e^{4}\,-~{}\frac{155}{36}~{}e^{6}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2200}}}}}|~{}~{}~{}~{}\mbox{sgn}\,\left(\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(\frac{49}{4}~{}e^{2}-\;\frac{861}{16}\;e^{4}+~{}\frac{21975}{256}~{}e^{6}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2201}}}}}|~{}~{}\mbox{sgn}\,\left(\,3\,n\,-\,2\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(~{}\frac{289}{4}\,e^{4}\,-\,\frac{1955}{6}~{}e^{6}~{}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2202}}}}}|~{}\mbox{sgn}\,\left(\,2\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left.\frac{714025}{2304}~{}e^{6}~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2203}}}}}|~{}~{}\mbox{sgn}\,\left(\,5\,n\,-\,2\,\dot{\theta}\,\right)~{}\,\right]+\,O(e^{8}\,\epsilon)\,+\,O({\it
i}^{2}\,\epsilon)~{}~{}~{}.\quad\quad\quad\quad$
Here each term changes its sign on crossing the appropriate resonance. The
change of the sign takes place smoothly, as the value of the term goes through
zero – this can be seen from formula (95) and from the fact that the tidal
mode $\,\omega_{\textstyle{{}_{lmpq}}}\,$ vanishes in the $\,lmpq\,$
resonance.
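A short script (ours) makes the sign structure of (120) explicit: for each retained $\,q\,$ it evaluates the eccentricity polynomial and reports the commensurability $\,\dot{\theta}/n\,$ at which the mode $\,\omega_{220q}=(2+q)\,n-2\,\dot{\theta}\,$, and hence the sgn factor, changes sign. The polynomial coefficients are copied from (120); the sample eccentricity is a placeholder.
```python
# Eccentricity polynomials multiplying k2*sin|eps_{220q}| in (120), as (power, coefficient).
poly_120 = {
    -3: [(6, 1/2304)],
    -1: [(2, 1/4), (4, -1/16), (6, 13/768)],
     0: [(0, 1.0), (2, -5.0), (4, 63/8), (6, -155/36)],
     1: [(2, 49/4), (4, -861/16), (6, 21975/256)],
     2: [(4, 289/4), (6, -1955/6)],
     3: [(6, 714025/2304)],
}

def eval_poly(terms, e):
    return sum(c * e**p for p, c in terms)

e = 0.05   # example eccentricity (placeholder)
for q, terms in sorted(poly_120.items()):
    flip = (2 + q)/2.0   # omega_220q = (2+q)n - 2*theta_dot vanishes at theta_dot/n = (2+q)/2
    print(f"q = {q:+d}   coefficient = {eval_poly(terms, e):12.5e}   "
          f"sign flip at theta_dot/n = {flip:+.2f}")
```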
Expressions for $~{}\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(220)}}}}}~{}$, $~{}\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(311)}}}}}~{}$, and
$~{}\widetilde{\cal T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(330)}}}}}~{}$ are
given by formulae (303 - 305) in Appendix I. Although the average of the
short-period part of the torque vanishes, this part does contribute to
dissipation. Oscillating torques contribute also to variations of the surface
of the tidally-distorted primary, the latter fact being of importance in
laser-ranging experiments.
Whether the short-period torque may or may not influence also the process of
entrapment is worth exploring numerically. The reason why this issue is raised
is that the frequencies $\,n(q-j)\,$ of the components of the short-period
torque are integer multiples of $\,n\,$ and thus are commensurate with the spin rate
$\,\dot{\theta}\,$ near an $A/B$ resonance, $\,A\,$ and $\,B\,$ being integers.
It may be especially interesting to check the role of this torque when
$\,q-j=1\,$ and $\,A/B\,=\,N\,$ is integer.
The hypothetical role of the short-period torque in the entrapment and
libration dynamics has not been discussed so far, as the previous studies
employed expressions for the tidal torque, which were obtained through
averaging over the period of the secondary’s orbital motion.
### 8.3 Librations
Consider a tidally-perturbed body caught in an $\,A:B\,$ resonance with its
tide-raising companion, $A$ and $B$ being integers. Then the spin rate of the
body is
$\displaystyle\dot{\theta}\;=\;\frac{A}{B}\;n\;+\;\dot{\psi}~{}~{}~{},$ (121)
where the physical-libration angle is
$\displaystyle\psi\,=~{}-~{}\psi_{0}~{}\sin\left(\omega_{{{}_{PL}}}\,t\right)~{}~{}~{},$
(122)
$\,\omega_{{{}_{PL}}}\,$ being the physical-libration frequency. The
oscillating tidal torque exerted on the body is comprised of the modes
$\displaystyle\omega_{\textstyle{{}_{lmpq}}}\;=\;\left(l~{}-~{}2~{}p~{}+~{}q\right)~{}n~{}-~{}m~{}\dot{\theta}~{}=~{}\left(l~{}-~{}2~{}p~{}+~{}q~{}-~{}\frac{A}{B}~{}m\right)\,n~{}-~{}m~{}\dot{\psi}~{}~{}.~{}~{}~{}$
(123)
In those $\,lmpq\,$ terms for which the combination $\,l-2p+q-\frac{\textstyle
A}{\textstyle B}\,m\,$ is not zero, the small quantity $\,-m\dot{\psi}\,$ may
be neglected.17 The physical-libration input $~{}-\,m\,\dot{\psi}\,$ may
be neglected in the expression for $\,\omega_{\textstyle{{}_{lmpq}}}\,$ even
when the magnitude of the physical libration is comparable to that of the
geometric libration (as in the case of Phobos). The remaining terms will
pertain to the geometric libration. The phase lags will be given by the
standard formula
$\,\epsilon_{\textstyle{{}_{lmpq}}}\,=\,\omega_{\textstyle{{}_{lmpq}}}\,\Delta
t_{\textstyle{{}_{lmpq}}}\,$.
In those $\,lmpq\,$ terms for which the combination $\,l-2p+q-\frac{\textstyle
A}{\textstyle B}\,m\,$ vanishes, the physical-libration input
$~{}-m\,\dot{\psi}\,$ is the only one left. Accordingly, the multiplier
$~{}\sin\left[\,\left(v_{\textstyle{{}_{lmpq}}}^{*}\,-\,m\,\theta^{*}\,\right)\,-\right.$
$\left.\,\left(\,v_{\textstyle{{}_{lmpq}}}\,-\,m\,\theta\,\right)\,\right]~{}$
in the $\,lmpq\,$ term of the tidal torque will reduce to
$~{}\sin\left[\,-\,m\,\left(\,\psi^{*}\,-\,\psi\,\right)\,\right]\,\approx~{}$
$-m\,\dot{\psi}\,\Delta t\,=\,m\,\psi_{0}\,\omega_{{{}_{PL}}}\,\Delta
t\,\cos\omega_{{{}_{PL}}}t~{}$. Here the time lag $\,\Delta t\,$ is the one
corresponding to the physical-libration frequency $\,\omega_{{{}_{PL}}}~{}$
which may be very different from the usual tidal frequencies for
nonsynchronous rotation – see Williams & Efroimsky (2012) for a comprehensive
discussion.
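The selection rule implicit in (123) is easy to automate. The sketch below (ours) flags, for a given $\,A\!:\!B\,$ commensurability, the quadrupole combinations $\,(m,p,q)\,$ for which $\,l-2p+q-\frac{A}{B}\,m\,$ vanishes, i.e., the terms whose only surviving frequency is the physical-libration input.
```python
from fractions import Fraction

def resonant_terms(A, B, l=2, qmax=3):
    """(l, m, p, q) combinations whose tidal mode (123) vanishes for
    theta_dot = (A/B)*n, the small libration rate being ignored."""
    ratio = Fraction(A, B)
    hits = []
    for m in range(1, l + 1):          # m = 0 terms do not enter the torque
        for p in range(0, l + 1):
            for q in range(-qmax, qmax + 1):
                if Fraction(l - 2*p + q) == ratio*m:
                    hits.append((l, m, p, q))
    return hits

print("1:1 resonance:", resonant_terms(1, 1))
print("3:2 resonance:", resonant_terms(3, 2))
```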
## 9 Marking the minefield
The afore-presented expressions for the secular and purely short-period parts
of the tidal torque look cumbersome when compared to the compact and elegant
formulae employed in the literature hitherto. It is therefore important
to explain why those simplified formulae can be impractical.
### 9.1 Perils of the conventional simplification
Insofar as the quality factor is large and the lag is small (i.e., insofar as
$\,\sin\epsilon\,$ can be approximated with $\,\epsilon\,$), our expression
(296) assumes a simpler form:
${}^{\textstyle{{}^{\tiny{(Q\,>\,10)\,}}}}\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{l=2}}}}}}$ $\displaystyle=$
$\displaystyle\frac{3}{2}\;G\;M_{sec}^{2}\,R^{5}\,a^{-6}\,k_{2}\;\sum_{q=-3}^{3}\,G^{\textstyle{{}^{\,2}}}_{\textstyle{{}_{\textstyle{{}_{20\mbox{\it{q}}}}}}}(e)\,\epsilon_{\textstyle{{}_{\textstyle{{}_{220\mbox{\it{q}}}}}}}~{}+\,O(e^{6}\,\epsilon)\,+\,O({\it
i}^{2}\,\epsilon)\,+\,O(\epsilon^{3})~{}~{}~{},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(124)
where the error $\,O(\epsilon^{3})\,$ originates from
$\;\sin\epsilon=\epsilon+O(\epsilon^{3})\;$.
The simplification conventionally used in the literature ignores the
frequency-dependence of the Love number and attributes the overall frequency-
dependence to the lag. It also ignores the difference between the tidal lag
$\,\epsilon\,$ and the lag in the material, $\,\delta\,$. This way, the
conventional simplification makes $\,\epsilon\,$ obey the scaling law (79b).
At this point, most authors also set $\,\alpha=-1\,$. Here we shall explore
this approach, though we shall keep $\,\alpha\,$ arbitrary. From the formula
18 Let $~{}\frac{\textstyle 1}{\textstyle\sin\epsilon}=\left(\,{\cal
E}\,\chi\right)^{\alpha}~{}$, where $\,{\cal{E}}\,$ is an empirical parameter
of the dimensions of time, while $\,\epsilon\,$ is small enough, so
$\,\sin\epsilon\approx\epsilon\,$. In combination with
$\,\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\equiv\,\omega_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\,\Delta
t_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\,$ and
$\,\chi_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\,=\,|\omega_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}|\,$,
these formulae entail (125).
$\displaystyle\Delta
t_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\;=\;{\cal{E}}^{\textstyle{{}^{-\,\alpha}}}\;\chi_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}^{\textstyle{{}^{-\,(\alpha+1)}}}$
(125)
derived by Efroimsky & Lainey (2007) in the said approach, we see that the
time lags are related to the principal-frequency lag $\;\Delta
t_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\;$ via:
$\displaystyle\Delta t_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\,=\,\Delta
t_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\;\left(\,\frac{\chi_{\textstyle{{}_{\textstyle{{}_{2200}}}}}}{\chi_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}}\,\right)^{\textstyle{{}^{\alpha+1}}}\;\;\;.$
(126)
When the despinning is still going on and $\,\dot{\theta}\gg n\,$, the
corresponding phase lags are:
$\displaystyle\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\;\equiv\;\Delta
t_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\;\omega_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\,=\,-~{}\Delta
t_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\;\chi_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\;\left(\,\frac{\chi_{\textstyle{{}_{\textstyle{{}_{2200}}}}}}{\chi_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}}\,\right)^{\textstyle{{}^{\alpha+1}}}~{}=~{}-~{}\epsilon_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\;\left(\,\frac{\chi_{\textstyle{{}_{\textstyle{{}_{2200}}}}}}{\chi_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}}\,\right)^{\textstyle{{}^{\alpha}}}~{}~{},\quad\quad$
(127)
which helps us to cast the secular part of the torque into the following
convenient form:19 For $\,\dot{\theta}\gg 2n\,$, all the modes
$\,\omega_{\textstyle{{}_{\textstyle{{}_{220q}}}}}\,$ are negative, so
$\,\omega_{\textstyle{{}_{\textstyle{{}_{220q}}}}}\,=\,-\,\chi_{\textstyle{{}_{\textstyle{{}_{220q}}}}}\,$.
Then, keeping in mind that $\,n/\dot{\theta}\ll 1\,$, we process (127), for
$q=1,$ like $\displaystyle-\,\Delta
t_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\,\chi_{\textstyle{{}_{\textstyle{{}_{2200}}}}}~{}\left(\,\frac{\chi_{\textstyle{{}_{\textstyle{{}_{2200}}}}}}{\chi_{\textstyle{{}_{\textstyle{{}_{2201}}}}}}\,\right)^{\textstyle{{}^{\alpha}}}=\,-\,\Delta
t_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\,2\,|n-\dot{\theta}|\,\left(\frac{2|n-\dot{\theta}|}{|-2\dot{\theta}+3n|}\,\right)^{\textstyle{{}^{\alpha}}}=\,-\,\Delta
t_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\,2\,(\dot{\theta}-n)\,\left[1\,+\,\frac{\alpha}{2}\,\frac{n}{\stackrel{{\scriptstyle\bf\centerdot}}{{\theta\,}}}\,+\,O(\,(n/\dot{\theta})^{2}\,)\right]~{}~{}~{},$
and similarly for other $q\neq 0$, and then plug the results into (124). This
renders us (128).
${}^{\textstyle{{}^{\tiny{(Q\,>\,10)\,}}}}\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{l=2}}}}}}\;=\;{\cal{Z}}\left[\;\,-\;\,\dot{\theta}\,\left(1\,+\,\frac{15}{2}\,e^{2}\,+\,\frac{105}{4}\,e^{4}\,+\,O(e^{6})\right)\right.~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle+\;\left.n\left(1+\left(\frac{15}{2}-6\alpha\right)e^{2}+\left(\frac{105}{4}-\frac{363}{8}\alpha\right)e^{4}+O(e^{6})\right)\right]+O({\it
i}^{2}/Q)+O(Q^{-3})+O(\alpha e^{2}Q^{-1}n/\dot{\theta})\;\;\;~{}~{}~{}~{}$
(128a) $\displaystyle\approx\;{\cal
Z}\;\left[\,-\,\dot{\theta}\,\left(1\,+\,\frac{15}{2}\,e^{2}\right)+\,n\,\left(1\,+\,\left(\frac{15}{2}\,-\,6\,\alpha\right)\,e^{2}\,\right)\,\right]\;\;\;,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(128b)
where the overall factor reads as:
$\displaystyle{\cal
Z}~{}=~{}\frac{3\,G\,M_{sec}^{\textstyle{{}^{\,2}}}\;\,k_{2}\;\Delta
t_{\textstyle{{}_{2200}}}}{R}\,~{}\frac{R^{\textstyle{{}^{6}}}}{a^{6}}~{}=~{}\frac{3\,n^{2}\,M_{sec}^{\textstyle{{}^{\,2}}}\;\,k_{2}\;\Delta
t_{\textstyle{{}_{2200}}}}{(M_{prim}\,+\,M_{sec})}~{}\,\frac{R^{\textstyle{{}^{5}}}}{a^{3}}\,=\,\frac{3\,n\,M_{sec}^{\textstyle{{}^{\,2}}}~{}\,k_{2}}{Q_{\textstyle{{}_{2200}}}~{}(M_{prim}\,+\,M_{sec})}~{}\,\frac{R^{\textstyle{{}^{5}}}}{a^{3}}~{}\,\frac{n~{}~{}}{\chi_{\textstyle{{}_{2200}}}}~{}~{},\quad\quad$
(129)
$M_{prim}\,$ and $\,M_{sec}\,$ being the primary’s and secondary’s
masses.20 To arrive at the right-hand side of (129), we recalled that
$~{}\chi_{\textstyle{{}_{lmpq}}}\,\Delta
t_{\textstyle{{}_{lmpq}}}\,=\,|\epsilon_{\textstyle{{}_{lmpq}}}|\,$ and that
$~{}Q_{\textstyle{{}_{lmpq}}}^{-1}\,=\,|\epsilon_{\textstyle{{}_{lmpq}}}|\,+\,O(\epsilon^{3})\,=\,|\epsilon_{\textstyle{{}_{lmpq}}}|\,+\,O(Q^{-3})\,$,
according to formula (12). Dividing (129) by the primary’s principal moment of
inertia $\,\xi\,M_{prim}\,R^{2}\,$, we obtain the contribution that this
component of the torque brings into the angular deceleration rate
$\;\ddot{\theta}\;$:
$\displaystyle\ddot{\theta}~{}=~{}{\cal{K}}~{}\left\\{\,-\,\stackrel{{\scriptstyle\;\centerdot}}{{\theta}}\,\left[~{}1\,+\,\frac{15}{2}\,e^{2}\,+\,\frac{105}{4}\,e^{4}\,+\,O(e^{6})~{}\right]\right.~{}+~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle\left.n\left[~{}1+\left(\frac{15}{2}-6\alpha\right)e^{2}+\left(\frac{105}{4}-\frac{363}{8}\alpha\right)e^{4}+O(e^{6})~{}\right]\,\right\\}+O({\it
i}^{2}/Q)+O(Q^{-3})+O(\alpha
e^{2}Q^{-1}n/\dot{\theta})\;~{}~{}~{}~{}~{}~{}~{}~{}~{}$ (130a)
$\displaystyle\approx~{}{\cal
K}\;\left[\,-\,\stackrel{{\scriptstyle\;\centerdot}}{{\theta}}\,\left(1\,+\,\frac{15}{2}\,e^{2}\right)+\,n\,\left(1\,+\,\left(\frac{15}{2}\,-\,6\,\alpha\right)\,e^{2}\,\right)\,\right]~{}~{}~{},\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$
(130b)
the factor $\,{\cal{K}}\,$ being given by
$\displaystyle{\cal K}\,\equiv\,\frac{\cal
Z}{\xi\,M_{prim}\,R^{2}}\,=\,\frac{3\,n^{2}\,M_{sec}^{\textstyle{{}^{\,2}}}\;\,k_{2}\;{\Delta
t_{\textstyle{{}_{2200}}}}}{\xi\;M_{prim}\;(M_{prim}\,+\,M_{sec})}\;\frac{R^{\textstyle{{}^{3}}}}{a^{3}}\,=\,\frac{3\,n\,M_{sec}^{\textstyle{{}^{\,2}}}\;\,k_{2}}{\xi\;Q_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\;M_{prim}\;(M_{prim}\,+\,M_{sec})}\;\frac{R^{\textstyle{{}^{3}}}}{a^{3}}\;\,\frac{n~{}~{}}{\chi_{\textstyle{{}_{\textstyle{{}_{2200}}}}}}\;\;\;,~{}\quad~{}$
(131)
where $\,\xi\,$ is a multiplier emerging in the expression $\,\xi\,M_{prim}\,R^{2}\,$ for the primary’s principal moment of inertia ($\,\xi=2/5\,$ for a homogeneous sphere).
In the special case of $\,\alpha\,=\,-\,1\;$, the above expressions enjoy
agreement with the appropriate result stemming from the corrected MacDonald
model – except that our (128) and (130) contain $\,\Delta
t_{\textstyle{{}_{2200}}}\,$, $\,Q_{\textstyle{{}_{2200}}}\,$, and
$\,\chi_{\textstyle{{}_{2200}}}\,$ instead of $\,\Delta t\,$, $\,Q\,$, and
$\,\chi\,$ standing in formulae (44 - 47) from Williams & Efroimsky (2012).
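For orientation only, here is a minimal Python sketch of how the factor $\,{\cal{K}}\,$ from (131) and the approximate despinning rate (130b) could be evaluated. The Earth-Moon-like inputs are assumptions made purely for illustration; they are not results or recommendations of this paper.

```python
import numpy as np

G = 6.674e-11                      # gravitational constant, SI units

# Illustrative, Earth-Moon-like inputs (all assumed, for demonstration only):
M_prim, M_sec = 5.97e24, 7.35e22   # primary's and secondary's masses, kg
R, a = 6.37e6, 3.84e8              # primary's radius and semimajor axis, m
k2, dt_2200 = 0.3, 600.0           # Love number and principal time lag (s)
xi = 2.0 / 5.0                     # moment-of-inertia factor, homogeneous sphere
e, alpha = 0.05, 0.2               # eccentricity and rheological exponent
n = np.sqrt(G * (M_prim + M_sec) / a**3)   # mean motion, rad/s
theta_dot = 5.0 * n                # current spin rate, assumed well above n

# Factor (131):
K = (3.0 * n**2 * M_sec**2 * k2 * dt_2200
     / (xi * M_prim * (M_prim + M_sec)) * (R / a)**3)

# Despinning rate, approximation (130b), valid only while n << theta_dot:
theta_ddot = K * (-theta_dot * (1.0 + 7.5 * e**2)
                  + n * (1.0 + (7.5 - 6.0 * alpha) * e**2))

print(f"n = {n:.3e} rad/s, K = {K:.3e} 1/s, theta_ddot = {theta_ddot:.3e} rad/s^2")
```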
Formula (130b) tells us that the secular part of the tidal torque vanishes for
$\displaystyle\stackrel{{\scriptstyle\;\centerdot}}{{\theta}}\,-\;n\;=\;-\;6\;n\;e^{2}\;\alpha\;\;\;,$
(132)
which coincides, for $\;\alpha=-1\;$, with the result obtained in Rodríguez et
al. (2008, eqn. 2.4) and Williams & Efroimsky (2012, eqn. 49). This
coincidence, however, should not be taken at face value, because it is _occasional_ or, perhaps better to say, exceptional.
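For orientation, the algebra of (132) can be spelled out as $\,\dot{\theta}=n\,(1-6\,e^{2}\alpha)\,$. The short sketch below evaluates it for a few illustrative values of $e$ and $\alpha$; recall, though, that for $\,\alpha>-1\,$ the underlying expansion is formally outside its validity domain near $\,\dot{\theta}\approx n\,$, so these numbers only illustrate the formula, not the physics near the resonance.

```python
# Spin rate at which the secular torque (130b) formally vanishes, eq. (132):
#     theta_dot = n * (1 - 6 * e**2 * alpha).
# The values of e and alpha below are purely illustrative.
n = 1.0                                      # mean motion, arbitrary units
for e in (0.05, 0.10, 0.2056):               # the last value is Mercury-like
    for alpha in (-1.0, 0.2, 0.3):
        theta_dot_eq = n * (1.0 - 6.0 * e**2 * alpha)
        print(f"e = {e:.4f}, alpha = {alpha:+.1f}  ->  theta_dot/n = {theta_dot_eq:.4f}")
```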
Formulae (128 - 129) were obtained by insertion of the expressions for the eccentricity functions and the phase lags into (124), and by assuming that $\,n\ll|\dot{\theta}|\,$. The latter caveat is a crucial element, not to be overlooked by the users of formulae (128 - 129) and of their corollary (130 - 131) for the tidal deceleration rate.
The case of $\,\alpha=-1\,$ is special, in that it permits derivation of (128 - 132) without assuming that $\,n\ll|\stackrel{{\scriptstyle\;\bf\centerdot}}{{\theta}}|~{}$. However, for $\,\alpha>-1\,$ the condition $\,n\ll|\stackrel{{\scriptstyle\;\bf\centerdot}}{{\theta}}|\,$ remains mandatory, so formulae (128 - 131) become inapplicable when $\,\stackrel{{\scriptstyle\;\centerdot}}{{\theta}}\,$ reduces to values of about several $\,n\,$.
Although formulae (128a) and (130a) contain an absolute error $\,O(\alpha
e^{2}Q^{-1}n/\dot{\theta})\,$, this does not mean that for
$\,\stackrel{{\scriptstyle\;\centerdot}}{{\theta}}\,$ comparable to $\,n\,$
the absolute error becomes $\,O(\alpha e^{2}Q^{-1})\,$ and the relative one
becomes $\,O(\alpha e^{2})\,$. In reality, for
$\,\stackrel{{\scriptstyle\;\centerdot}}{{\theta}}\,$ comparable to $\,n\,$,
the entire approximation falls apart, because formulae (126 - 127) were
derived from expression (125), which is valid for $\,Q\gg 1\,$ only (unless
$\,\alpha=\,-\,1\,$). So these formulae become inapplicable in the vicinity of
a commensurability. By ignoring this limitation, one can easily encounter
unphysical paradoxes.21 For example, in the case of $\,\alpha>-1\,$, formulae (125 - 127) render infinite values for $\,\Delta
t_{\textstyle{{}_{lmpq}}}\,$ and $\,\epsilon_{\textstyle{{}_{lmpq}}}\,$ on
crossing the commensurability, i.e., when $\,\omega_{\textstyle{{}_{lmpq}}}\,$
goes through zero.
Thence, in all situations, except for the unrealistic rheology
$\,\alpha=-1\,$, limitations of the approximation (128 - 131) should be kept
in mind. This approximation remains acceptable for $\,n\ll|\dot{\theta}|\,$,
but becomes misleading on approach to the physically-interesting resonances.
### 9.2 An oversight in a popular formula
The form in which our approximation (128 - 131) is cast may appear awkward.
The formula for the despinning rate $\,\ddot{\theta\,}\,$ is written as a
function of $\,\dot{\theta}\,$ and $\,n\,$, multiplied by the overall factor
$\,{\cal{K}}\,$. This form would be reasonable, were $\,{\cal{K}}\,$ a
constant. That this is not the case can be seen from the presence of the
multiplier $\,\frac{\textstyle
n}{\textstyle\chi_{\textstyle{{}_{2200}}}}\,=\,\frac{\textstyle n}{\textstyle
2\,|\stackrel{{\scriptstyle\bf\centerdot}}{{\theta}}\,-\,n|}\,$ on the right-
hand side of (131).
Still, when written in this form, our result is easy to juxtapose with an
appropriate formula from Correia & Laskar (2004, 2009). There, the expression
for the despinning rate looks similar to ours, up to an important detail: the
overall factor is a constant, because it lacks the said multiplier
$\frac{\textstyle n}{\textstyle\chi_{\textstyle{{}_{2200}}}}$. The multiplier
was lost in those two papers, because the quality factor was introduced there
as $\,1/(n\,\Delta t)\,$, see the line after formula (9) in Correia & Laskar
(2009). In reality, the quality factor $Q$ should, of course, be a function of
the tidal frequency $\,\chi\,$, because $Q$ serves the purpose of describing
the tidal damping at this frequency. Had the quality factor been taken as
$\,1/(\chi\,\Delta t)\,$, it would render the corrected MacDonald model
($\,\alpha=\,-\,1\,$), and the missing multiplier would be there. Being
unphysical,22 To be exact, the model is unphysical everywhere except in
the closest vicinity of the resonance – see formulae (92 - 94). the model is
mathematically convenient, because it enables one to write down the secular
part of the torque as one expression, avoiding the expansion into a series
(Williams & Efroimsky 2012). The model was pioneered by Singer (1968) and
employed by Mignard (1979, 1980), Hut (1981) and other authors.
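The effect of the omitted multiplier is easy to quantify. The sketch below (plain Python, arbitrary units) evaluates $\,n/\chi_{2200}=n/(2\,|\dot{\theta}-n|)\,$ over a range of spin rates, showing how strongly the allegedly constant factor actually varies as the synchronous resonance is approached.

```python
import numpy as np

n = 1.0                                               # mean motion, arbitrary units
spin_rates = np.array([5.0, 3.0, 2.0, 1.6, 1.2, 1.05]) * n   # sample theta_dot values

# Multiplier n / chi_2200 = n / (2 |theta_dot - n|) entering (131);
# it grows without bound as the synchronous value theta_dot = n is approached.
multiplier = n / (2.0 * np.abs(spin_rates - n))

for sp, m in zip(spin_rates, multiplier):
    print(f"theta_dot / n = {sp:4.2f}  ->  n / chi_2200 = {m:6.2f}")
```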
Interestingly, in the special case of the 3:2 spin-orbit resonance, we have
$\,\chi=n\,$. Still, the difference between $\,\chi\,$ and $\,n\,$ in the
vicinity of the resonance may alter the probability of entrapment of Mercury
into this rotation mode. The difference between $\,\chi\,$ and $\,n\,$ becomes
even more considerable near the other resonances of interest. So the
probabilities of entrapment into those resonances must be recalculated.
## 10 Conclusions
The goal of this paper was to lay the ground for a reliable model of tidal
entrapment into spin-orbital resonances. To this end, we approached the tidal
theory from the first principles of solid-state mechanics. Starting from the
expression for the material’s compliance in the time domain, we derived the
frequency-dependence of the Fourier components of the tidal torque. The other
torque, one caused by the triaxiality of the rotator, is not a part of this
study and will be addressed elsewhere.
* •
We base our work on the Andrade rheological model, providing arguments in
favour of its applicability to the Earth’s mantle, and therefore, very likely,
to other terrestrial planets and moons. The model is also likely to apply to
the icy moons (Castillo-Rogez et al. 2011). We have reformulated the model in
terms of a characteristic anelastic timescale $\tau_{{}_{A}}$ (the Andrade
time). The ratio of the Andrade time to the viscoelastic Maxwell time,
$\zeta=\tau_{{}_{A}}/\tau_{{}_{M}}$, serves as a dimensionless free parameter
of the rheological model.
The parameters $\tau_{{}_{A}}$, $\tau_{{}_{M}}$, $\zeta$ cannot be regarded as constant, though their values may change very slowly over vast bands of
frequency. The shapes of these frequency-dependencies may depend on the
dominating dissipation mechanisms and, thereby, on the magnitude of the load,
as different damping mechanisms get activated under different loads.
The main question here is whether, in the low-frequency limit, anelasticity
becomes much weaker than viscosity. (That would imply an increase of
$\,\tau_{{}_{A}}\,$ and $\,\zeta\,$ as the tidal frequency $\,\chi\,$ goes
down.) The study of ices under weak loads, with friction caused mainly by
lattice diffusion (Castillo-Rogez et al. 2011, Castillo-Rogez & Choukroun
2010) has not shown such a decline of anelasticity. However, Karato & Spetzler
(1990) point out that it should be happening in the Earth’s mantle, where the
loads are much higher and damping is caused mainly by unpinning of
dislocations. According to Ibid., in the Earth, the decrease of the role of
anelasticity happens abruptly as the frequency falls below the threshold
$\,\chi_{\textstyle{{}_{0}}}\sim$ 1 yr${}^{-1}\,$. We then may expect a
similar switch in the other terrestrial planets and the Moon, though there the
threshold may be different as it is sensitive to the temperature of the
mantle. The question, though, remains whether this statement is valid also for the
small bodies, in which the tidal stresses are weaker and dissipation is
dominated by lattice diffusion.
* •
By direct calculation, we have derived the frequency dependencies of the
factors $\,k_{\textstyle{{}_{l}}}\,\sin\epsilon_{\textstyle{{}_{l}}}\,$
emerging in the tidal theory. Naturally, the obtained dependencies of these
factors upon the tidal frequency $\,\chi_{\textstyle{{}_{lmpq}}}\,$ (or, to
put it more exactly, upon the tidal mode $\,\omega_{\textstyle{{}_{lmpq}}}\,$)
mimic the frequency-dependence of the imaginary part of the complex
compliance. They scale as $\,\sim\chi^{-\alpha}\,$ with $\,0<\alpha<1\,$ at higher frequencies, and as $\,\sim\chi^{-1}\,$ at lower frequencies.
However in the zero-frequency limit the factors
$\,k_{\textstyle{{}_{l}}}\,\sin\epsilon_{\textstyle{{}_{l}}}\,$ demonstrate a
behavior inherent in the tidal lagging and absent in the case of lagging in a
sample: in a close vicinity of the zero frequency, these factors (and the
appropriate components of the tidal torque) become linear in the frequency.
This way, $\,k_{\textstyle{{}_{l}}}\,\sin\epsilon_{\textstyle{{}_{l}}}\,$
first reaches a finite maximum, then decreases continuously to nil as the frequency approaches zero, and then changes its sign. So the resonances are
crossed continuously, with neither the tidal torque nor the tidal force
diverging there. For example, the leading term of the torque vanishes at the
synchronous orbit.
This continuous traversing of resonances was pointed out in a heuristic manner
by Efroimsky & Williams (2009). Now we have derived this result directly from
the expression for the compliance of the material of the rotating body. Our
derivation, however, has a problem in it: the frequency, below which the
factors $\,k_{\textstyle{{}_{l}}}\,\sin\epsilon_{\textstyle{{}_{l}}}\,$ and
the appropriate components of the torque change their frequency-dependence to
linear, is implausibly low (lower than $\,10^{-10}\,$Hz, if we take our
formulae literally). The reason for this mishap is that in our formulae we
kept using the known value of the Maxwell time $\,\tau_{{}_{M}}\,$ all the way
down to the zero frequency. Possible changes of the viscosity and,
accordingly, of the Maxwell time in the zero-frequency limit may broaden this
region of linear dependence.
* •
We have offered an explanation of the unexpected frequency-dependence of
dissipation in the Moon, discovered by LLR. The main point of our explanation
is that the LLR measures the tidal dissipation whose frequency-dependence is
different from that of the seismic dissipation. Specifically, the “wrong” sign
of the exponent in the power dissipation law may indicate that the frequencies
at which tidal friction was observed were below the frequency at which the
lunar $~{}k_{2}\,\sin\epsilon_{2}~{}$ has its peak. Given the relatively high frequencies of observation (corresponding to periods of order a month to a year), this explanation can be accepted only if the lunar mantle has a low mean viscosity. This may be the case, given the presumably high concentration of partial melt in the lower mantle.
* •
We have developed a detailed formalism for the tidal torque, and have singled
out its oscillating component.
The studies of entrapment into spin-orbit resonances, performed in the past,
took into account neither the afore-mentioned complicated frequency-dependence
of the torque in the vicinity of a resonance, nor the oscillating part of the
torque. We have written down a concise and practical expression for the
oscillating part, and have raised the question whether it may play a role in
the entrapment and libration dynamics.
## 11 Acknowledgments
This paper stems largely from my prior research carried out in collaboration
with James G. Williams and discussed on numerous occasions with Sylvio Ferraz
Mello, to both of whom I am thankful profoundly. I am grateful to Benoît
Noyelles, Julie Castillo-Rogez, Veronique Dehant, Shun-ichiro Karato, Valéry
Lainey, Valeri Makarov, Francis Nimmo, Stan Peale, and Tim Van Hoolst for
numerous enlightening exchanges and consultations on the theory of tides.
I gladly acknowledge the help and inspiration which I obtained from reading
the unpublished preprint by the late Vladimir Churkin (1998). In Appendix C, I
present several key results from his preprint.
I sincerely appreciate the support from my colleagues at the US Naval
Observatory, especially from John Bangert.
## Appendix A Continuum mechanics. A celestial-mechanician’s survival kit
Appendix A offers an extremely short introduction into continuum mechanics.
The standard material, which normally occupies large chapters in books, is
compressed into several pages.
Subsection A.1 presents the necessary terms and definitions. Subsection A.2 explains the basic concepts employed in the theory of stationary deformation, while subsection A.3 explains the extension of these methods to creep. These subsections also demonstrate the great benefits stemming from the isotropy and incompressibility assumptions. Subsection A.4 introduces viscosity, while subsection A.5 offers an example of how elasticity and viscosity get combined with hereditary reaction into one expression. Subsection A.6 renders several simple examples.
### A.1 Glossary
We start out with a brief guide elucidating the main terms employed in
continuum mechanics.
* •
Rheology is the science of deformation and flow.
* •
Elasticity: This is the most trivial reaction – instantaneous, linear, and
fully reversible after stressing is turned off.
* •
Anelasticity: While still linear, this kind of deformation is not necessarily
instantaneous, and can demonstrate “memory”, both under loading and when the
load is removed. Importantly, the term anelasticity always implies
reversibility: though with delay, the original shape is restored. Thus an
anelastic material can assume two equilibrium states: one is the unstressed
state, the other being the long-term relaxed state. Anelasticity is
characterised by the difference in strain between the two states. It is also
characterised by a relaxation time between these states, and by its inverse –
the frequency at which relaxation is most effective. The Hohenemser-Prager
model, also called SAS (Standard Anelastic Solid), renders an example of
anelastic behaviour.
Anelasticity is an example of but not synonym to hereditary reaction. The
latter includes also those kinds of delayed deformation, which are
irreversible.
* •
Inelasticity: This term embraces all kinds of irreversible deformation, i.e.,
deformation which stays, fully or partially, after the load is removed.
* •
Unelasticity ( = Nonelasticity): These terms are very broad, in that they
embrace any behaviour which is not elastic. In the literature, these terms are
employed both for recoverable (anelastic) and unrecoverable (inelastic)
deformations.
* •
Plasticity: Some authors simply state that plastic deformation is deformation
which is irreversible – a very broad and therefore useless definition which
makes plasticity sound like a synonym to inelasticity.
More precisely, plastic deformation is a threshold behaviour: no strain is observed until
the stress $\,\sigma\,$ reaches a threshold value $\,\sigma_{{}_{Y}}\,$
(called yield strength), while a steady flow begins as the stress reaches the
said threshold. Plasticity can be either perfect (when deformation is going on
without any increase in load) or with hardening (when increasingly higher
stresses are needed to sustain the flow). It is always irreversible.
In real life, plasticity shows itself in combination with elasticity or/and
viscosity. Models describing these types of behaviour are called
elastoplastic, viscoplastic, and viscoelastoplastic. They are all inelastic,
in that they describe unrecoverable changes of shape.
* •
Viscosity: Another example of inelastic, i.e., irreversible behaviour. A
viscous stress is proportional to the time derivative of the viscous strain.
* •
Viscoelasticity: The term is customarily applied to all situations where both
viscous and elastic (but not plastic) reactions are observed. One may then
expect that the equations interrelating the viscoelastic stress to the strain
would contain only the viscosity coefficients and the elastic moduli. However
this is not necessarily true, as some other empirical constants may show up.
For example, the Andrade model (81) contains an elastic term, a viscous term,
and an extra term responsible for a hereditary reaction (the “memory”).
Despite the presence of that extra term, the Andrade creep is still regarded as viscoelastic. So it should be understood that viscoelasticity is, generally,
more than just viscosity combined with elasticity. One might christen such
deformations “viscoelastohereditary”, but such a term would sound
awkward.23 Sometimes the term elastohereditary is used, but not
viscoelastohereditary or elastoviscohereditary.
On many occasions, complex viscoelastic models can be illustrated with an
infinite set of viscous and elastic elements. These serve to interpret the
hereditary terms as manifestations of viscosity and elasticity only. While
illustrative, these schemes with dashpots and springs have their limitations
and should not be taken too literally. In some (not all) situations, the
hereditary terms may be interpreted as time-dependent additions to the steady-
state viscosity coefficient, the Andrade model being an example of such
situation.
* •
Viscoplasticity: These are all models wherein both viscosity and plasticity
are present in some combination. In these situations, higher stresses have to
be applied to increase the deformation rate. Just as in the case of
viscoelasticity, viscoplastic models may, in principle, incorporate extra
terms standing for hereditary reaction.
* •
Elastoviscoplasticity ( = Viscoelastoplasticity): The same as above, though
with elasticity present.
* •
Hereditary reaction: While the term is self-explanatory, it would be good to
limit its use to effects other than viscosity. In expression (46) for the
stress through strain, the distinction between the viscous and hereditary
reactions is clear: while the viscous part of the stress is rendered
instantaneously by the delta-function term of the kernel, the hereditary
reaction is obtained through integration. In expression (40) for the strain
through stress, though, the viscous part shows up, under the integral, in sum
with the other delayed terms – see, for example, the Andrade model (81). This
is one reason for which we shall use the term hereditary reaction in
application to delayed behaviour different from the pure viscosity. Another
reason is that viscous flow is always irreversible, while a hereditary
reaction may be either irreversible (inelastic) or reversible (anelastic).
* •
Creep: Widely accepted is the convention that this term signifies a slow-rate
deformation under loads below the yield strength $\,\sigma_{{}_{Y}}\,$.
Numerous authors, though, use the oxymoron plastic creep, thereby extending
the applicability realm of the word creep to any slow deformation.
Here we shall understand creep in the former sense, i.e., with no plasticity
involved.
It would be important to distinguish between viscoelastic deformations, on the
one hand, and viscoplastic (or, properly speaking, viscoelastoplastic)
deformations on the other hand. Plasticity shows itself at larger stresses and
is, typically, nonlinear. It comes into play when the linearity assertion
fails. For most minerals, this happens when the strain approaches the
threshold of $\,10^{-6}\,$. Although it is possible that this threshold is
transcended in some satellites (for example, in the deeper layers of the
Moon), we do not address plasticity in this paper.
### A.2 Stationary linear deformation of isotropic incompressible media
In the linear approximation, the tensor of elastic stress,
$\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,$, is proportional to the
differences in displacement of the neighbouring elements of the medium. These
differences are components of the tensor gradient $\,\nabla\otimes{\bf{u}}\,$,
where ${\bf{u}}\,$ is the displacement vector.
The tensor gradient can be decomposed, in an invariant way, into its symmetric
and antisymmetric parts:
$\displaystyle\nabla\otimes{\bf{u}}\;=\;\frac{\textstyle 1}{\textstyle
2}\,\left[\,\,\left(\nabla\otimes{\bf{u}}\right)\,+\,\left(\nabla\otimes{\bf{u}}\right)^{{{}^{T}}}\,\,\right]\,+~{}\frac{\textstyle
1}{\textstyle
2}\,\left[\,\,\left(\nabla\otimes{\bf{u}}\right)\,-\,\left(\nabla\otimes{\bf{u}}\right)^{{{}^{T}}}\,\,\right]~{}~{}~{}.$
(133)
The decomposition being invariant, the two parts should contribute to the stress independently, at least in the linear approximation. However, as is well known (Landau & Lifshitz 1986), the antisymmetric part of (133) describes the
displacement of the medium as a whole and thus brings nothing into the stress.
This is why the linear dependence is normally written as
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,=\,{\mathbb{B}}\,\,{\mathbb{U}}~{}~{}~{},$
(134)
where $\,{\mathbb{B}}\,$ is a four-dimensional tensor having
$\,3^{4}\,=\,81\,$ components, while the strain tensor
$\displaystyle{\mathbb{U}}\,\equiv\,\frac{\textstyle 1}{\textstyle
2}\,\left[\,\left(\nabla\otimes{\bf{u}}\right)\,+\,\left(\nabla\otimes{\bf{u}}\right)^{{{}^{T}}}\,\right]$
(135)
is the symmetric part of the tensor gradient. Its components are related to
the displacement vector $\,{\bf{u}}~{}$ through
$~{}u_{\alpha\beta}\,\equiv\,\frac{\textstyle 1}{\textstyle
2}\left(\frac{\textstyle{\partial u_{\alpha}}}{\textstyle{\partial
x_{\beta}}}+\frac{\textstyle{\partial u_{\beta}}}{\textstyle{\partial
x_{\alpha}}}\right)\,$.
Although the matrix $\,{\mathbb{B}}\,$ comprises 81 empirical constants,
in isotropic materials the description can be reduced to two constants only.
To see this, recall that the expansion of the strain into a part with trace
and a traceless part,
$\,{\mathbb{U}}\,=\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,+\,\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,\right)\,$,
is invariant. Here the trace of $\,{\mathbb{U}}\,$ is denoted with
$\,\mbox{Sp}\,{\mathbb{U}}\,\equiv\,u_{\alpha\alpha}\,$, summation over
repeated indices being implied. The notation $\,{\mathbb{I}}\,$ stands for the
unity matrix consisting of elements $\,\delta_{\gamma\nu}\,$.
In an isotropic medium, the elastic stress must be invariantly expandable into
parts proportional to the afore-mentioned parts of the strain. The first part
of the stress is proportional, with an empirical coefficient $\,3\,K\,$, to
the trace part
$\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,$
of the strain. The second part of the stress will be proportional, with an
empirical coefficient $\,2\,\mu\,$, to the traceless part
$\,\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)\,$
of the strain:
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}~{}=~{}K~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,+\,2\,\mu\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,\right)~{}=~{}-~{}p~{}{\mathbb{I}}\,+\,2\,\mu\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,\right)$
(136a) or, in Cartesian coordinates:
$\displaystyle\sigma_{\gamma\nu}~{}=~{}\,K~{}\delta_{\gamma\nu}\,u_{\alpha\alpha}\,+\,2\,\mu\,\left(\,u_{\gamma\nu}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,\delta_{\gamma\nu}\,u_{\alpha\alpha}\,\right)~{}=~{}-~{}p~{}\delta_{\gamma\nu}\,+\,2\,\mu\,\left(\,u_{\gamma\nu}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,\delta_{\gamma\nu}\,u_{\alpha\alpha}\,\right)~{}~{}~{},\quad\quad$
(136b)
where
$\displaystyle
p\,\equiv\,-\,\frac{1}{3}\;\mbox{Sp}\,{\mathbb{S}}\,=\,-\,K\,\mbox{Sp}\,{\mathbb{U}}$
(137)
is the hydrostatic pressure. Thus the elastic stress gets decomposed, in an
invariant way, as:
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}~{}=~{}\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\;\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}~{}~{}~{},$
(138)
where the volumetric elastic stress is given by
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\;\,\equiv\;K~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}~{}=~{}{\mathbb{I}}~{}\frac{1}{3}~{}\mbox{Sp}\,{\mathbb{S}}\,=\,-\,p\,{\mathbb{I}}~{}~{}~{},$
(139)
while the deviatoric elastic stress is:
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}~{}\equiv~{}2\,\mu\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,\right)~{}~{}~{}.$
(140)
Inverse to (136a - 136b) are the following expressions for the strain tensor:
$\displaystyle{\mathbb{U}}~{}=~{}\frac{1}{9\,K}~{}{\mathbb{I}}~{}\mbox{Sp}\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,+\,\frac{1}{2\,\mu}\,\left(\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,\right)$
(141a) and $\displaystyle
u_{\gamma\nu}~{}=~{}\frac{1}{9\,K}~{}\delta_{\gamma\nu}~{}\stackrel{{\scriptstyle(e)}}{{\sigma}}_{\alpha\alpha}~{}+~{}\frac{1}{2\,\mu}\,\left(\,\stackrel{{\scriptstyle(e)}}{{\sigma}}_{\gamma\nu}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,\delta_{\gamma\nu}\,\stackrel{{\scriptstyle(e)}}{{\sigma}}_{\alpha\alpha}\,\right)~{}~{}~{},\quad$
(141b)
where the term with trace, $\,\frac{\textstyle 1}{\textstyle
9\,K}\;{\mathbb{I}}\;\mbox{Sp}\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,$,
is called the volumetric strain, while the traceless term,
$\,\frac{\textstyle{1}}{\textstyle{2\,\mu}}\,\left(\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,\right)\,$,
is named the deviatoric strain. The quantity $\,K\,$ is called the bulk
modulus, while $\,\mu\,$ is called the shear modulus.
Expressions (136) trivially entail the following interrelation between traces:
$\displaystyle\mbox{Sp}\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,=\,3\,K\,\mbox{Sp}\,{\mathbb{U}}~{}\quad~{}\mbox{or,
in terms of
components:}\quad~{}\stackrel{{\scriptstyle(e)}}{{\sigma}}_{\alpha\alpha}\,=\,3\,K\,u_{\alpha\alpha}~{}~{}~{}.$
(142)
As demonstrated in many books (e.g., in Landau & Lifshitz 1986), the trace of
the strain is equal to the relative variation of the volume, experienced by
the material subject to deformation:
$~{}u_{\alpha\alpha}\,=\,\nabla\cdot{\bf{u}}\,=\,\frac{\textstyle{dV\,^{\prime}-dV}}{\textstyle{dV}}~{}$,
where $\,{\bf{u}}\,$ is the displacement vector. In the no-compressibility
approximation, the trace of the strain and, according to (142), that of the
stress become zero. Then, in the said approximation, the hydrostatic pressure
(137) and the volumetric elastic stress (139) become nil, and all we are left
with is the deviatoric elastic stress (and, accordingly, the deviatoric part
of the strain). Formulae (136) and (141) get simplified to
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}~{}=~{}2\,\mu\,{\mathbb{U}}~{}~{},\quad~{}~{}\mbox{which
is the same
as}~{}\quad\stackrel{{\scriptstyle(e)}}{{\sigma}}_{\gamma\nu}~{}=~{}2\,\mu\,u_{\gamma\nu}~{}~{}~{},\,\quad$
(143)
and to
$\displaystyle
2\;{\mathbb{U}}~{}=~{}J\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}~{}~{},\quad~{}~{}\mbox{which
is the same as}~{}\quad
2\,u_{\gamma\nu}~{}=~{}J\,\stackrel{{\scriptstyle(e)}}{{\sigma}}_{\gamma\nu}~{}~{}~{},\quad$
(144)
the quantity $\,J\,\equiv\,1/\mu\,$ being called the compliance of the
material.
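The algebra of this subsection is easy to verify numerically. The following sketch (NumPy, with arbitrary illustrative moduli and a random symmetric strain) checks the decomposition (138 - 140), the inverse relation (141a), the trace relation (142), and the incompressible limit (143 - 144).

```python
import numpy as np

rng = np.random.default_rng(0)
K, mu = 130.0e9, 60.0e9                 # illustrative bulk and shear moduli, Pa

# A random symmetric strain tensor U:
A = rng.normal(size=(3, 3))
U = 0.5 * (A + A.T)

I = np.eye(3)
trU = np.trace(U)

# Elastic stress (136a) = volumetric part (139) + deviatoric part (140):
S_vol = K * I * trU
S_dev = 2.0 * mu * (U - I * trU / 3.0)
S = S_vol + S_dev

# Inverse relation (141a) must recover the strain:
U_back = np.trace(S) * I / (9.0 * K) + (S - I * np.trace(S) / 3.0) / (2.0 * mu)
assert np.allclose(U_back, U)

# Trace relation (142):
assert np.isclose(np.trace(S), 3.0 * K * trU)

# Incompressible limit (143 - 144): for a traceless strain, S = 2 mu U and 2 U = J S.
U_dev = U - I * trU / 3.0
S_inc = 2.0 * mu * U_dev
assert np.allclose(2.0 * U_dev, (1.0 / mu) * S_inc)

print("Decomposition and inversion of the elastic law verified numerically.")
```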
### A.3 Evolving linear deformation of isotropic incompressible media. Hereditary reaction
Equations (134 - 144) were written for static deformation, so each of these
equations can be assumed to connect the strain and the elastic stress taken at
the same instant of time (for a static deformation their values stay constant
anyway).
Extension of this machinery is needed when one wants to describe evolving
deformation of materials with “memory”. Thence the four-dimensional tensor
$\,{\mathbb{B}}\,$ becomes a linear operator $\,\tilde{\mathbb{B}}\,$ acting
on the strain tensor function as a whole. To render the value of the stress at
time $\,t\,$, the operator will “consume” as arguments all the values of
strain over the interval
$\;t\,^{\prime}\,\in\,\left(\right.-\infty\,,\,t\left.\right]\;$:
$\displaystyle\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}(t)\,=\,\tilde{\mathbb{B}}(t)\,\,{\mathbb{U}}~{}~{}~{}.$
(145)
Thus $\,\tilde{\mathbb{B}}\,$ will be an integral operator, with integration
going from $\,t\,^{\prime}\,=\,-\,\infty\,$ through $\,t\,^{\prime}\,=\,t\,$.
In the static case, the linearity guaranteed elasticity, i.e., the ability of
the body to regain its shape after the loading is turned off: no stress yields
no strain. In a more general situation of materials with “memory”, this
ability is no longer retained, as the material may demonstrate creep. This is
why, in this section, the stress is called hereditary and is denoted with
$\,\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}\,$.
Just as in the stationary case, we wish the properties of the medium to remain
isotropic. As the decomposition of the strain into the trace and traceless
parts remains invariant at each moment of time, these two parts will,
separately, generate the trace and traceless parts of the hereditary stress in
an isotropic medium. This means that, in such media, the four-dimensional
tensor operator $\,\tilde{\mathbb{B}}\,$ gets reduced to two scalar linear
operators $\,\tilde{K}\,$ and $\,\tilde{\mu}\,$:
$\displaystyle\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}~{}=~{}\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\;\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}~{}=~{}\tilde{K}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,+\,2\,\tilde{\mu}\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,\right)~{}~{}~{},$
(146)
where both $\,\tilde{K}\,$ and $\,\tilde{\mu}\,$ are integral operators acting
on the tensor function $\,u_{\gamma\nu}(t\,^{\prime})\,$ as a whole, i.e.,
with integration going from $\,t\,^{\prime}\,=\,-\,\infty\,$ through
$\,t\,^{\prime}\,=\,t\,$.
If we also assume that, under evolving load, the medium stays incompressible,
the trace of the strain, $\,u_{\alpha\alpha}\,$, will stay zero. An operator
generalisation of (142) now reads:
$~{}\sigma_{\alpha\alpha}(t)\,=\,3\,\tilde{K}(t)\,u_{\alpha\alpha}~{}$. Under
a reasonable assumption of $\,\sigma_{\alpha\alpha}\,$ being nil in the
distant past, this integral operator renders $\,\sigma_{\alpha\alpha}\,=\,0\,$
at all times. This way, in a medium that is both isotropic and incompressible,
we have:
$\displaystyle{\mathbb{U}}~{}=~{}{\mathbb{U}}_{\textstyle{{}_{deviatoric}}}~{}\quad~{}\mbox{and,
~{}accordingly:}~{}\quad~{}\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}~{}=~{}\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\quad.\quad\quad$
(147)
Then the time-dependent analogues to formulae (143) and (144) will be:
$\displaystyle\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}(t)~{}=~{}2\,\tilde{\mu}(t)\,{\mathbb{U}}$
(148)
and
$\displaystyle
2\,{\mathbb{U}}(t)~{}=~{}\hat{J}(t)\,\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}~{}~{}~{},$
(149)
where the compliance $\,\hat{J}\,$, too, has been promoted to operatorship and
crowned with a caret.
Formula (148) tells us that in a medium, which is both isotropic and
incompressible, the relation between the stress and strain tensors can be
described with one scalar integral operator $\,\tilde{\mu}\,$ only, the
complementary operator $\,\hat{J}\,$ being its inverse. (Here the adjective
“scalar” does not imply multiplication with a scalar. It means that the
operator preserves its functional form under a change of coordinates.)
Below we shall bring into the picture also the viscous component of the
stress, a component related to the strain through a four-dimensional tensor
whose $\,3^{4}=81\,$ components are differential operators. In that case too,
the isotropy of the medium will enable us to reduce the 81-component tensor
operator to two differential operators transforming as scalars. Besides, the
incompressibility of the medium makes the viscous stress traceless. Thus it
will turn out that, in an isotropic and incompressible medium, the viscous
component of the stress can be described by only one scalar differential
operator – much like the elastic and hereditary parts of the stress. (Once
again, “scalar” means: indifferent to coordinate transformations.)
Eventually, the elastic, hereditary, and viscous deformations will be united
under the auspices of a general viscoelastic formalism. In an isotropic
medium, this combined formalism will be reduced to two integro-differential
operators only. In a medium which is both isotropic and incompressible, the
formalism will be reduced to only one scalar integro-differential operator.
### A.4 The viscous stress
While the elastic stress $\,\stackrel{{\scriptstyle{{(e)}}}}{{\mathbb{S}}}\,$
is linear in the strain, the viscous stress
$\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,$ is linear in the first
derivatives of the components of the velocity with respect to the coordinates:
$\displaystyle\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,=\,{\mathbb{A}}\,(\nabla\otimes{\bf{v}})\,$
(150)
where $\,{\mathbb{A}}\,$ is the so-called viscosity tensor,
$\,\nabla\otimes{\bf{v}}\,$ is the tensor gradient of the velocity. The
velocity of a fluid parcel relative to its average position is connected to
the displacement vector $\,{\bf u}\,$ through $\,{\bf{v}}\,=\,d{\bf u}/dt\,$.
The tensor gradient of the velocity can be expanded, in an invariant way, into
its antisymmetric and symmetric parts:
$\displaystyle\nabla\otimes{\bf{v}}\;=\;\Omega\;+\;{\mathbb{E}}~{}~{}~{},$
(151)
where the antisymmetric part is furnished by the vorticity tensor
$\displaystyle\Omega\;\equiv\;\frac{1}{2}~{}\left[\,(\nabla\otimes{\bf{v}})~{}-\;(\nabla\otimes{\bf{v}})^{{}^{T}}\,\right]~{}~{}~{},$
(152)
while the symmetric part is given by the rate-of-shear tensor
$\displaystyle{\mathbb{E}}\;\equiv\;\frac{1}{2}~{}\left[\,(\nabla\otimes{\bf{v}})\,+\,(\nabla\otimes{\bf{v}})^{{}^{T}}\,\right]~{}~{}~{}.$
(153)
The latter is obviously related to the strain tensor through
$\displaystyle{\mathbb{E}}\;=\;\frac{\partial~{}}{\partial
t}\;{\mathbb{U}}\;~{}~{}.$ (154)
It can be demonstrated (e.g., Landau & Lifshitz 1987) that the antisymmetric
vorticity tensor describes the rotation of the medium as a whole24 This is
why this tensor’s components coincide with those of the angular velocity
$\vec{\omega}$ of the body. For example, $\;\Omega_{21}\,=\,\frac{\textstyle
1}{\textstyle 2}\,\left(\,\frac{\textstyle\partial v_{2}}{\textstyle\partial
x_{1}}\,-\,\frac{\textstyle\partial v_{1}}{\textstyle\partial
x_{2}}\,\right)\;$ coincides with $\,\omega_{3}\,$. and therefore contributes
nothing to the stress.25 Since expansion (151) of the tensor gradient into
the vorticity and rate-of-shear tensors is invariant, then so is the
conclusion about the irrelevance of the vorticity tensor for the stress
picture. For this reason, the viscous stress can be written as
$\displaystyle\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,=\,{\mathbb{A}}\,{\mathbb{E}}\,=\,{\mathbb{A}}\;\frac{\partial~{}}{\partial
t}\;{\mathbb{U}}\;~{}~{}.$ (155)
The matrix $\,{\mathbb{A}}\,$ is four-dimensional and contains $\,3^{4}=81\,$
components. Just as the matrix $\,{\mathbb{B}}\,$ emerging in expression (134)
for the elastic stress, the matrix $\,{\mathbb{A}}\,$ can be reduced, in an
isotropic medium, to only two empirical constants. To see this, mind that the
rate-of-shear tensor can be decomposed, in an invariant manner, into two
parts:
$\displaystyle{\mathbb{E}}~{}=~{}\frac{1}{3}~{}{\mathbb{I}}~{}\nabla\cdot{\bf{v}}~{}+~{}\left(\,{\mathbb{E}}~{}-~{}\frac{1}{3}~{}{\mathbb{I}}~{}\nabla\cdot{\bf{v}}\,\right)~{}~{}~{},$
(156)
where the rate-of-expansion tensor
$\displaystyle\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\nabla\cdot{\bf{v}}$ (157)
is diagonal and has a trace, while the combination
$\displaystyle{\mathbb{E}}~{}-~{}\frac{1}{3}~{}{\mathbb{I}}~{}\nabla\cdot{\bf{v}}\;=\;\frac{1}{2}~{}\left[\,(\nabla\otimes{\bf{v}})\,+\,(\nabla\otimes{\bf{v}})^{{}^{T}}\,\right]~{}-~{}\frac{1}{3}~{}{\mathbb{I}}~{}\nabla\cdot{\bf{v}}$
(158)
is symmetric and traceless. These two parts contribute linearly proportional
inputs into the stress. The first input is proportional, with an empirical
coefficient $~{}3\,\zeta~{}$, to the rate-of-expansion term, while the second
input into the stress is proportional, with an empirical coefficient
$~{}2\,\eta~{}$, to the symmetric traceless combination:
$\displaystyle\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,=\,3\,\zeta\;\frac{\textstyle
1}{\textstyle
3}~{}{\mathbb{I}}~{}\nabla\cdot{\bf{v}}~{}+~{}2\,\eta\,\left(\,{\mathbb{E}}\;-\;\frac{\textstyle
1}{\textstyle
3}~{}{\mathbb{I}}~{}\nabla\cdot{\bf{v}}\,\right)~{}=~{}\zeta\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\left(\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2\,\eta\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\left(\,{\mathbb{U}}\;-\;\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}~{}~{}.~{}\quad$ (159)
Here we recalled that
$\,\nabla\cdot{\bf{v}}\,=\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\nabla\cdot{\bf{u}}\,=\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,u_{\alpha\alpha}\,=\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\mbox{Sp}\,{\mathbb{U}}\,$. Since $\,\mbox{Sp}\,{\mathbb{U}}\,$ is equal
to the volume variation
$~{}\frac{\textstyle{dV^{\prime}-dV}}{\textstyle{dV}}~{}$ experienced by the
material, we see that the first term in (159) is volumetric, the second being
deviatoric.
The quantity $\,\eta\,$ is called the first viscosity or the shear viscosity.
The quantity $\,\zeta\,$ is named the second viscosity or the bulk viscosity.
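A small numerical sketch of the construction above (NumPy, with arbitrary illustrative viscosities) assembles the viscous stress (159) from the rate-of-shear tensor alone and checks its basic properties.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, zeta = 1.0e21, 3.0e20           # illustrative shear and bulk viscosities, Pa s

# An arbitrary velocity-gradient tensor, components d v_i / d x_j:
gradv = rng.normal(size=(3, 3)) * 1.0e-12

I = np.eye(3)
E = 0.5 * (gradv + gradv.T)          # rate-of-shear tensor (153)
Omega = 0.5 * (gradv - gradv.T)      # vorticity tensor (152): rigid rotation, no stress
divv = np.trace(E)                   # nabla . v  =  Sp E

# Viscous stress (159): bulk (volumetric) plus shear (deviatoric) inputs,
# built from the rate-of-shear tensor only; the vorticity Omega never enters.
S_v = zeta * I * divv + 2.0 * eta * (E - I * divv / 3.0)

assert np.allclose(S_v, S_v.T)                        # the viscous stress is symmetric
assert np.isclose(np.trace(S_v), 3.0 * zeta * divv)   # its trace is purely volumetric
print("Viscous stress assembled; Sp S_v =", np.trace(S_v), "Pa")
```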
### A.5 An example of approach to viscoelastic behaviour
In this subsection, we shall consider one possible approach to the description of viscoelastic regimes. As we mentioned in subsection A.1, the term viscoelasticity covers not only combinations of elasticity and viscosity, but can also include other forms of delayed reaction. So the term viscoelastic is customarily used as a substitute for the overly long term viscoelastohereditary.
One possible approach would be to assume that the elastic, hereditary, and
viscous stresses simply sum up, and that each of them is related to the same
strain $\,{\mathbb{U}}\;$:
$\displaystyle\stackrel{{\scriptstyle(total)}}{{\mathbb{S}}}\,=~{}\stackrel{{\scriptstyle{{(e)}}}}{{\mathbb{S}}}\,+\,\stackrel{{\scriptstyle{{{(h)}}}}}{{\mathbb{S}}}\,+\,\stackrel{{\scriptstyle{{{(v)}}}}}{{\mathbb{S}}}~{}=~{}\left(\,{\mathbb{B}}~{}+~{}\tilde{\mathbb{B}}~{}+~{}{\mathbb{A}}\,\frac{\partial~{}}{\partial
t}\,\right)\;{\mathbb{U}}~{}~{}~{},$ (160a) or simply
$\displaystyle\stackrel{{\scriptstyle(total)}}{{\mathbb{S}}}\;=\;\hat{\mathbb{B}}\,{\mathbb{U}}~{}\quad,\quad\quad\mbox{where}\quad\hat{\mathbb{B}}\,\equiv\,{\mathbb{B}}~{}+~{}\tilde{\mathbb{B}}~{}+~{}{\mathbb{A}}\,\frac{\partial~{}}{\partial
t}~{}~{}~{},$ (160b)
where the three operators – the integral operator $\,\tilde{\mathbb{B}}\,$,
the differential operator
$\,{\mathbb{A}}\,\frac{\textstyle\partial~{}}{\textstyle\partial t}\,$, and
the operator of multiplication by matrix $\,{\mathbb{B}}\,$ – comprise an
integro-differential operator $\,\hat{\mathbb{B}}\,$.
In an isotropic medium, each of the three matrices, $\,\tilde{\mathbb{B}}\,$,
$\,{\mathbb{A}}\,\frac{\textstyle\partial~{}}{\textstyle\partial t}\,$, and
$\,{\mathbb{B}}\,$, includes two terms only. This happens because in such a
medium each of the three parts of the stress gets decomposed invariantly into
its deviatoric and volumetric components: The elastic stress becomes:
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\;=\;\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,=\;3\,K\,\left(\frac{\textstyle
1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2\,\mu\,\left(\,{\mathbb{U}}\;-\;\frac{\textstyle
1}{\textstyle 3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)\quad,\quad$
(161)
with $\,K\,$ and $\,\mu\,$ being the bulk elastic modulus and the shear
elastic modulus, correspondingly, $\;{\mathbb{I}}\,$ standing for the unity
matrix, and Sp denoting the trace of a matrix:
$~{}\mbox{Sp}\,{\mathbb{U}}\,\equiv\,\sum_{i}U_{ii}\;$.
The hereditary stress becomes:
$\displaystyle\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}~{}=~{}\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+~{}\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}~{}=\;3\,\tilde{K}~{}\left(\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2\,\tilde{\mu}\,\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}~{}~{},~{}~{}$
(162)
where $\,\tilde{K}\,$ and $\,\tilde{\mu}\,$ are the bulk-modulus operator and
the shear-modulus operator, accordingly.
The viscous stress acquires the form:
$\displaystyle\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\;=\;\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,=\;3~{}\zeta\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\left(\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2\,\eta\,\frac{\textstyle\partial~{}}{\textstyle\partial
t}\,\left(\,{\mathbb{U}}\;-\;\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)\quad,\quad$ (163)
the quantities $\zeta$ and $\eta$ being termed the bulk viscosity and the shear viscosity, correspondingly.
The term $~{}\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}~{}$ is called the volumetric part
of the strain, while $~{}{\mathbb{U}}\,-\,\frac{\textstyle 1}{\textstyle
3}~{}{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}~{}$ is called the deviatoric part.
Accordingly, in expressions (161 - 163) for the stresses, the pure-trace terms are called volumetric, the other terms being named deviatoric.
The total stress, too, can now be split into the total volumetric and the
total deviatoric parts:
$\displaystyle\stackrel{{\scriptstyle(total)}}{{\mathbb{S}}}$ $\displaystyle=$
$\displaystyle\overbrace{\left(\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,\right)}^{\stackrel{{\scriptstyle\textstyle\stackrel{{\scriptstyle{{(e)}}}}{{\mathbb{S}}}}}{{}}}\,+\,\overbrace{\left(\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,\right)}^{\stackrel{{\scriptstyle\textstyle\stackrel{{\scriptstyle{{(v)}}}}{{\mathbb{S}}}}}{{}}}\,+\,\overbrace{\left(\,\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,\right)}^{\stackrel{{\scriptstyle\textstyle\stackrel{{\scriptstyle{{(h)}}}}{{\mathbb{S}}}}}{{}}}$
(164a) $\displaystyle=$
$\displaystyle\overbrace{\left(\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,+\,\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{volumetric}}}\,\right)}^{\stackrel{{\scriptstyle\textstyle{{{\mathbb{S}}_{\textstyle{{}_{volumetric}}}}}}}{{}}}\,+\,\overbrace{\left(\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,+\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,+\,\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,\right)}^{\stackrel{{\scriptstyle\textstyle{\mathbb{S}}_{\textstyle{{}_{deviatoric}}}}}{{}}}$
$\displaystyle=$
$\displaystyle\left(3\,K\,+\,3\,\tilde{K}\,+\,3\,\zeta\,\frac{\partial\,}{\partial
t}\right)\left(\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\right)\,+\,\left(2\,\mu\,+\,2\,\tilde{\mu}\,+\,2\,\eta\,\frac{\partial\,}{\partial
t}\right)\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,\right)~{}\quad\quad\quad\quad\quad\quad\quad$
$\displaystyle=$ $\displaystyle
3~{}\hat{K}~{}\left(\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}~{}\mbox{Sp}\,{\mathbb{U}}\,\right)~{}+~{}2~{}\hat{\mu}~{}\left(\,{\mathbb{U}}\,-\,\frac{\textstyle{1}}{\textstyle{3}}\,{\mathbb{I}}\;\mbox{Sp}\,{\mathbb{U}}\,\right)~{}~{}~{},$
(164b)
where
$\displaystyle\hat{K}\;\equiv\;K\;+\;\tilde{K}\;+\;\zeta\;\frac{\partial~{}}{\partial
t}~{}\quad\quad\mbox{and}\quad\quad\hat{\mu}\;\equiv\;\mu\;+\;\tilde{\mu}\;+\;\eta\;\frac{\partial~{}}{\partial
t}~{}~{}~{}.\quad\quad\quad$ (165)
As expected, a total linear deformation of an isotropic material can be
described with two integro-differential operators, one acting on the
volumetric strain, another on the deviatoric strain.
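In the frequency domain, with $\,\partial/\partial t\rightarrow{\it i}\chi\,$ (the convention used below in (174)), the operators (165) reduce to complex moduli. A schematic sketch, in which the hereditary terms $\tilde{K}(\chi)$ and $\tilde{\mu}(\chi)$ are left as user-supplied placeholders, could read:

```python
# Frequency-domain form of the operators (165): partial/partial t -> i*chi, so
#     K_hat(chi)  = K  + K_tilde(chi)  + i*chi*zeta,
#     mu_hat(chi) = mu + mu_tilde(chi) + i*chi*eta.
# The hereditary terms below are placeholders; a concrete rheology (for instance,
# the Andrade term of the main text) would supply its own frequency dependence.

def K_hat(chi, K, zeta, K_tilde=lambda chi: 0.0):
    return K + K_tilde(chi) + 1j * chi * zeta

def mu_hat(chi, mu, eta, mu_tilde=lambda chi: 0.0):
    return mu + mu_tilde(chi) + 1j * chi * eta

# Example: purely viscoelastic medium (no hereditary reaction), illustrative numbers.
chi = 1.0e-5                     # tidal frequency, rad/s
print(K_hat(chi, K=130e9, zeta=3e20), mu_hat(chi, mu=60e9, eta=1e21))
```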
If an isotropic medium is also incompressible, the relative change of the
volume vanishes: $\,\mbox{Sp}\,{\mathbb{U}}=0\,$. Accordingly, the volumetric
part of the strain becomes nil, and so do the volumetric parts of the elastic,
hereditary, and viscous stresses. For such media, we end up with a simple
relation which includes only deviators:
$\displaystyle\stackrel{{\scriptstyle(total)}}{{\mathbb{S}}}\,=\,{\mathbb{S}}_{\textstyle{{}_{deviatoric}}}\,=\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,+\,\stackrel{{\scriptstyle(h)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,+\,\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}_{\textstyle{{}_{deviatoric}}}\,=\,2\,{\mu}~{}{\mathbb{U}}\,+\,2\,\tilde{\mu}~{}{\mathbb{U}}\,+\,2\,\eta\,\frac{\partial\,}{\partial
t}~{}{\mathbb{U}}\quad\quad$ (166)
or simply:
$\displaystyle\stackrel{{\scriptstyle(total)}}{{\mathbb{S}}}\,=\,{\mathbb{S}}_{\textstyle{{}_{deviatoric}}}\,=\,2\,\hat{\mu}~{}{\mathbb{U}}~{}~{}~{},$
(167)
where $\,{\mathbb{U}}\,$ contains only a deviatoric part, while
$\displaystyle\hat{\mu}\,\equiv\,\mu\,+\,\tilde{\mu}\,+\,\eta\,\frac{\partial\,}{\partial
t}$ (168)
is the total integro-differential operator, which maps the preceding history and the present rate of change of the strain to the present value of the stress.
It should be reiterated that the above approach is based on the assertion that
the elastic, viscous, and hereditary stresses sum up, and that all three are
related to the same total strain. A simple example of this approach, called
the Kelvin-Voigt model, is rendered below in subsection A.6.3.
A different approach would be to assume that the strain consists of three
distinct parts – elastic, hereditary, and viscous – and that these components
are related to the same overall stress. A simple example of this treatment,
termed the Maxwell model, is set out in subsection A.6.4. A more complex
example of this approach is furnished by the Andrade model presented in
subsection 5.3. Another way of combining elasticity and viscosity (with no
hereditary reaction involved) is implemented by the Hohenemser-Prager (SAS)
model explained in subsection A.6.5 below.
### A.6 Examples of viscoelastic behaviour with no hereditary reaction
#### A.6.1 Elastic deformation
The truly simplest example of deformation is elastic:
$\displaystyle\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,=\,2\,\mu\,{\mathbb{U}}~{}\quad,\quad\quad{\mathbb{U}}~{}=~{}J\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}~{}~{}~{},$
(169)
where $\,\mu\,$ and $\,J\,$ are the unrelaxed rigidity and compliance:
$\displaystyle\mu\,=\,\mu(0)~{}~{}~{},\quad~{}J\,=\,J(0)~{}~{}~{},\quad~{}\mu\;=\;{1}/{J}~{}~{}~{}.$
(170)
In the frequency domain, this relation assumes the same form as it would in
the time domain:
$\displaystyle\bar{\sigma}_{\gamma\nu}(\chi)\,=\,2\,\mu\,\bar{u}_{\gamma\nu}(\chi)~{}\quad,\quad\quad
2\,\bar{u}_{\gamma\nu}(\chi)\,=\,J\,\bar{\sigma}_{\gamma\nu}(\chi)~{}~{}~{}.$
(171)
#### A.6.2 Viscous deformation
The next example is that of a purely viscous behaviour:
$\displaystyle\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,=\,2\,\eta\,\frac{\partial\,}{\partial
t}~{}{\mathbb{U}}~{}~{}~{}.$ (172)
It is straightforward from (172) and (27) that, in this regime, the Fourier
components of the stress26 Although we no longer spell it out, the word
stress everywhere means: deviatoric stress, as we agreed to consider the
medium incompressible. are connected to those of the strain through
$\displaystyle\bar{\sigma}_{\gamma\nu}(\chi)\,=\,2\,\bar{\mu}(\chi)\;\bar{u}_{\gamma\nu}(\chi)~{}\quad,\quad\quad
2\,\bar{u}_{\gamma\nu}(\chi)\,=\,\bar{J}(\chi)\;\bar{\sigma}_{\gamma\nu}(\chi)~{}~{}~{},$
(173)
where the complex rigidity and the complex compliance are given by
$\displaystyle\bar{\mu}\,=\,{\it
i}\,\eta\,\chi\quad,\quad\quad\bar{J}\,=\,-\,\frac{{\it
i}}{\eta\,\chi}~{}~{}~{}.$ (174)
#### A.6.3 Viscoelastic deformation: a Kelvin-Voigt material
The Kelvin-Voigt model, also called the Voigt model, can be represented with a
purely viscous damper and a purely elastic spring connected in parallel.
Subject to the same elongation, these elements have their forces summed up.
This illustrates the situation where the total, viscoelastic stress consists of purely viscous and purely elastic inputs called into being by the same strain:
$\displaystyle{\mathbb{S}}\;=\;\stackrel{{\scriptstyle(ve)}}{{\mathbb{S}}}\;=\;\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,+\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}~{},\,\quad\mbox{while}\,\quad{\mathbb{U}}\,=\;\stackrel{{\scriptstyle(v)}}{{\mathbb{U}}}\,=\;\stackrel{{\scriptstyle(e)}}{{\mathbb{U}}}~{}~{}~{}.$
(175)
Then the total stress reads:
$\displaystyle{\mathbb{S}}\,=\,\left(2\,{\mu}\,+\,2\,\eta\,\frac{\partial\,}{\partial
t}\right)~{}{\mathbb{U}}~{}~{}~{},$ (176a) which is often presented in the
form of
$\displaystyle{\mathbb{S}}\,=\,2\,\mu\,\left(\,{\mathbb{U}}\,+~{}\tau_{{}_{V}}\stackrel{{\scriptstyle\centerdot}}{{\mathbb{U}}}\,\right)~{}~{}~{},$
(176b)
with the so-called _Voigt time_ defined as
$\displaystyle\tau_{{}_{V}}\;\equiv\;{\eta}/{\mu}~{}~{}~{}.$ (177)
Comparing (176) with (46), we understand that the kernel of the rigidity
operator for the Kelvin-Voigt model can be written down as
$\displaystyle\mu(t\,-\,t\,^{\prime})\;=\;\mu\;+\;\eta\;\delta(t\,-\,t\,^{\prime})~{}~{}~{}.$
(178)
Suppose the strain is varying in time as
$\displaystyle{u}_{\gamma\nu}(t)\;=\;\frac{\sigma_{0}}{2\,\mu}\;\left[\,1\;-\;\exp\left(\,-\,\frac{t\,-\,t_{0}}{\tau_{{{}_{V}}}}\,\right)\,\right]~{}\Theta(t\,-\,t_{0})~{}~{}~{},$
(179)
so that
$\displaystyle\stackrel{{\scriptstyle\bf\centerdot}}{{u}}_{\gamma\nu}(t)\;=\;\frac{\sigma_{0}}{2\,\mu\,}\;\frac{1}{\tau_{{{}_{V}}}}\;\exp\left(\,-\,\frac{t\,-\,t_{0}}{\tau_{{{}_{V}}}}\,\right)\,~{}\Theta(t\,-\,t_{0})~{}~{}~{}.$
(180)
Then insertion of (178) and (180) into (46) or, equivalently, insertion of (179) into (176a) demonstrates that this strain results from a stress27 For example, plugging of (178) and (180) into (46)
leads to: $\displaystyle{}\sigma(t)$ $\displaystyle=$
$\displaystyle\int^{t\,^{\prime}=\,t}_{t\,^{\prime}=\,-\,\infty}\left[\,\mu\;+\;\eta\;\delta(t\,-\,t\,^{\prime})\,\right]\;\frac{\sigma_{0}}{\mu}\;\frac{\textstyle
1}{\textstyle\tau_{{{}_{V}}}}\;\exp\left(\,-\,\frac{t\,^{\prime}\,-\,t_{0}}{\tau_{{{}_{V}}}}\,\right)\,~{}\Theta(t\,^{\prime}\,-\,t_{0})\,dt\,^{\prime}$
(184) $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{c}\textstyle\sigma_{0}~{}\left[\;\int^{t\,^{\prime}=\,t}_{t\,^{\prime}=\,t_{0}}\exp\left(\,-\,\frac{\textstyle
t\,^{\prime}\,-\,t_{0}}{\textstyle\tau_{{{}_{V}}}}\,\right)\,\frac{\textstyle
dt\,^{\prime}}{\textstyle\tau_{{{}_{V}}}}\;+\;\exp\left(\,-\,\frac{\textstyle
t\,-\,t_{0}}{\textstyle\tau_{{{}_{V}}}}\,\right)\right]~{}~{}\quad\mbox{for}\quad
t\,\geq\,t_{0}{}{}{}{}\\\ {}\hfil\\\
0\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad~{}\,\mbox{for}\quad
t\,<\,t_{0}~{}~{}~{},\end{array}\right.$ which is simply
$\,\;\sigma_{0}\,\Theta(t\,-\,t_{0})\;$.
$\displaystyle\sigma_{\gamma\nu}(t)\,=\,{\sigma_{0}}\;\Theta(t\,-\,t_{0})~{}~{}~{}.$
(185)
It would however be a mistake to deduce from this that the compliance function
is equal to $~{}\mu^{-1}\,\left[\,1\;-\;\exp\left(\,-\,\frac{\textstyle
t\,-\,t\,^{\prime}}{\textstyle\tau_{{{}_{V}}}}\,\right)\,\right]~{}$, even
though such a misstatement is sometimes made in the literature. This
expression furnishes the compliance function only in the special situation of
a Heaviside-step stress (185), but not in the general case.
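The step-stress result (185) is also easy to check numerically. The sketch below (illustrative modulus and viscosity) builds the strain history (179) and its rate (180), feeds them into the Kelvin-Voigt law (176a), and recovers the constant stress $\sigma_{0}$.

```python
import numpy as np

mu, eta = 60.0e9, 1.0e21          # illustrative shear modulus (Pa) and viscosity (Pa s)
tau_V = eta / mu                  # Voigt time (177)
sigma0, t0 = 1.0e6, 0.0           # step amplitude and switch-on time

t = np.linspace(t0, 10.0 * tau_V, 1000)

# Strain history (179) and its rate (180):
u = sigma0 / (2.0 * mu) * (1.0 - np.exp(-(t - t0) / tau_V))
u_dot = sigma0 / (2.0 * mu) / tau_V * np.exp(-(t - t0) / tau_V)

# Kelvin-Voigt constitutive law (176a): sigma = 2 mu (u + tau_V * u_dot).
sigma = 2.0 * mu * (u + tau_V * u_dot)

# For t >= t0 the result is the constant sigma0, i.e. the Heaviside step (185):
assert np.allclose(sigma, sigma0)
print("Kelvin-Voigt creep under a step stress reproduced: sigma(t) = sigma0 for t >= t0.")
```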
As can be easily shown from (27), in the frequency domain model (176) reads as
(173), except that the complex rigidity and the complex compliance are now
given by
$\displaystyle\bar{\mu}\,=\,{\mu}\,\left(\,1\,+\,{\it
i}\,\chi\,\tau_{{}_{V}}\,\right)\,\quad,\quad\quad\bar{J}\,=\,\frac{J}{1\,+\,{\it
i}\,\chi\,\tau_{{}_{V}}}~{}~{}~{}.$ (186)
Recall that, for brevity, here and everywhere we write $\,\mu\,$ and $\,J\,$
instead of $\,{\mu}(0)\,$ and $\,J(0)\,$.
The Kelvin-Voigt material becomes elastic in the low-frequency limit, and
viscous in the high-frequency limit.
#### A.6.4 Viscoelastic deformation: a Maxwell material
The Maxwell model can be represented with a viscous damper and an elastic
spring connected in series. Experiencing the same force, these elements have
their elongations summed up. This example illustrates the situation where the
total viscoelastic strain consists of a purely viscous and a purely elastic
contribution, both generated by the same stress $\,{\mathbb{S}}\,$:
$\displaystyle{\mathbb{U}}\,=\;\stackrel{{\scriptstyle(v)}}{{\mathbb{U}}}\,+\,\stackrel{{\scriptstyle(e)}}{{\mathbb{U}}}~{},\,\quad\mbox{where}\,\quad\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}\,=\,2\,\mu\,\stackrel{{\scriptstyle(e)}}{{\mathbb{U}}}~{}~{}~{}~{}\mbox{and}\quad~{}\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,=\,2\,\eta\,\frac{\partial\,}{\partial
t}~{}\stackrel{{\scriptstyle(v)}}{{\mathbb{U}}}~{}~{}~{}.$ (187)
Since in the Maxwell regime both contributions to the strain are generated by
the same stress
$\displaystyle{\mathbb{S}}\;=\;\stackrel{{\scriptstyle(ve)}}{{\mathbb{S}}}\;=\;\stackrel{{\scriptstyle(v)}}{{\mathbb{S}}}\,=\,\stackrel{{\scriptstyle(e)}}{{\mathbb{S}}}~{}~{}~{},$
(188)
formula (187) can be written down as
$\displaystyle\stackrel{{\scriptstyle\centerdot}}{{\mathbb{U}}}\,=\,\frac{1}{2\,\mu}\;\stackrel{{\scriptstyle\centerdot}}{{\mathbb{S}}}\,+\,\frac{1}{2\,\eta}\;{\mathbb{S}}~{}~{}~{}$
(189a) or, in a more conventional form:
$\displaystyle\stackrel{{\scriptstyle\centerdot}}{{\mathbb{S}}}\,+\;\frac{1\;}{\tau_{{}_{M}}}\,{\mathbb{S}}~{}=~{}2\,\mu\,\stackrel{{\scriptstyle\centerdot}}{{\mathbb{U}}}~{}~{}~{},$
(189b)
with the so-called _Maxwell time_ introduced as
$\displaystyle\tau_{{}_{M}}\;\equiv\;{\eta}/{\mu}~{}~{}~{}.$ (190)
Although formally the Maxwell time is given by an expression mimicking the
definition of the Voigt time, the meaning of these times is different.
Comparing (189) with the general expression (43) for the compliance operator,
we see that, for the Maxwell model, the compliance operator in the time domain
assumes the form:
$\displaystyle
J(t\,-\,t\,^{\prime})\,=\,\left[\,J\,+\,\left(t\;-\;t\,^{\prime}\right)\;\frac{1}{\eta}\,\right]\;\Theta(t\,-\,t\,^{\prime})~{}~{}~{},$
(191)
where $\,J\,\equiv\,1/\mu\,$. In the frequency domain, (189) can be written
down as (173), with the complex rigidity and compliance given by
$\displaystyle\bar{\mu}(\chi)\,=\,{\mu}\;\frac{{\it
i}\,\chi\,\tau_{M}}{1\,+\,{\it
i}\chi\tau_{M}}\,\quad,\quad\quad\bar{J}(\chi)\,=\,J\,\left(\,1\,-\,\frac{{\it
i}}{\chi\,\tau_{{}_{M}}}\,\right)~{}=~{}J~{}-~{}\frac{{\it
i}}{\chi\,\eta}~{}~{}~{}.$ (192)
Clearly, such a body becomes elastic in the high-frequency limit, and becomes
viscous at low frequencies (the latter circumstance making the Maxwell model
attractive to seismologists).
#### A.6.5 Viscoelastic deformation: the Hohenemser-Prager (SAS) model
An attempt to combine the Kelvin-Voigt and Maxwell models leads to the
Hohenemser-Prager model, also known as the Standard Anelastic Solid (SAS):
$\displaystyle{\tau_{{}_{M}}}{\bf\dot{\mathbb{S}}}\,+\;\,{\mathbb{S}}~{}=~{}2\,\mu\,\left(\,{\mathbb{U}}\,+\,\tau_{{}_{V}}\,{\bf\dot{\mathbb{U}}}\,\right)~{}~{}~{},$
(193)
In the limit of $\,\tau_{{}_{M}}\rightarrow 0\,$, this model approaches the
one of Kelvin and Voigt (and $\,\tau_{{}_{V}}\,$ acquires the meaning of the
Voigt time).
A transition from the SAS to Maxwell model, however, can be achieved only
through re-definition of parameters. One should set: $\;2\,\mu\rightarrow 0\;$
and $\;\tau_{{}_{V}}\rightarrow\infty\;$, along with
$\,2\mu\tau_{{}_{V}}\rightarrow 2\eta\,$. Then (193) will become (189), with
$\,\tau_{{}_{M}}\,$ playing the role of the Maxwell time.
In the frequency domain, (193) can be put into the form of (173), the complex
rigidity and the complex compliance being expressed through the parameters as
$\displaystyle\bar{\mu}\,=\,\mu\;\frac{\textstyle
1+i\tau_{{}_{V}}\chi}{\textstyle{1+i\tau_{{}_{M}}\chi}}\,\quad,\quad\quad\bar{J}\,=\,J\;\frac{\textstyle
1+i\tau_{{}_{M}}\chi}{\textstyle{1+i\tau_{{}_{V}}\chi}}~{}~{}~{}.$ (194)
This entails:
$~{}\tan\delta\,\equiv\,{\textstyle{{\cal{I}}{\it{m}}[\bar{\mu}]}}/{\textstyle{{\cal{R}}{\it{e}}[\bar{\mu}]}}\,=\,\frac{\textstyle{(\tau_{{}_{V}}-\tau_{{}_{M}})\,\chi}}{\textstyle{1\,+\,\tau_{{}_{V}}\tau_{{}_{M}}\chi^{2}}}\;$,
whence it is easy to show that the tangent is related to its maximal value
through
$\tan\delta\;=\;2\;\left[\tan\delta\right]_{max}\;\frac{\tau\;\chi}{1\,+\,\tau^{2}\,\chi^{2}}\,\quad,\quad\quad\mbox{where}\quad\tau\,\equiv\,\sqrt{\tau_{{}_{M}}\tau_{{}_{V}}}~{}~{}~{}.$
This is the so-called Debye peak, which is indeed observed in some materials.
To prove that the SAS solid is indeed anelastic, one has to make sure that a
Heaviside step stress $\,\Theta(t^{\prime})\,$ entails a strain proportional
to $\,1-\exp(-\Gamma t)\,$, and to demonstrate that a predeformed sample
subject to stress $\,\Theta(-t^{\prime})\,$ regains its shape as
$\,\exp(-\Gamma t)\,$, where the relaxation constant $\,\Gamma\,$ is positive.
In subsection C.3 we shall do this for a SAS sphere.
## Appendix B Interconnection between the quality factor and the phase lag
The power $\,P\,$ exerted by a tide-raising secondary on its primary can be
written as
$\displaystyle
P\;=\;-\;\int\,\rho\;\mbox{\boldmath$\vec{\boldmath{\,V}}$}\;\cdot\;\nabla
W\;d^{3}x$ (195)
with $\,\rho\,,\;\mbox{\boldmath$\vec{\boldmath{\,V}}$}\,$, and $\,W\,$ signifying
the density, velocity, and tidal potential in the small volume $~{}d^{3}x~{}$
of the primary. The mass-conservation law
$~{}\nabla\cdot(\rho\mbox{\boldmath$\vec{\boldmath{\,V}}$})\,+\frac{\textstyle\partial\rho}{\textstyle\partial
t}\,=\,0\,~{}$ enables one to shape the dot-product into the form of
$\displaystyle\rho\,\mbox{\boldmath$\vec{\boldmath{\,V}}$}\cdot\nabla
W\,=\,\nabla\cdot(\rho\,\mbox{\boldmath$\vec{\boldmath{\,V}}$}\,W)\,-\,\rho\,W\,\nabla\cdot\mbox{\boldmath$\vec{\boldmath{\,V}}$}\,-\,W\,\mbox{\boldmath$\vec{\boldmath{\,V}}$}\cdot\nabla\rho\;\;~{}.\;\;\;\;$
(196)
Under the realistic assumption of the primary’s incompressibility, the term
with $\,\nabla\cdot\mbox{\boldmath$\vec{\boldmath{\,V}}$}\,$ may be omitted.
To get rid of the term with $\,\nabla\rho\,$, one has to accept a much
stronger approximation of the primary being homogeneous. Then the power will
be rendered by
$\displaystyle
P\;=\;-\;\int\,\nabla\,\cdot\,(\rho\;\mbox{\boldmath$\vec{\boldmath{\,V}}$}\;W)\,d^{3}x\;=\;-\;\int\,\rho\;W\;\mbox{\boldmath$\vec{\boldmath{\,V}}$}\,\cdot\,{\vec{\bf{n}}}\;\,dS\;\;\;,$
(197)
${\vec{\bf{n}}}\,$ being the outward normal and $\,dS\,$ being an element of
the surface area of the primary. This expression for the power (pioneered,
probably, by Goldreich 1963) enables one to calculate the work through radial
displacements only, in neglect of horizontal motion.
Denoting the radial elevation with $\,\zeta\,$, we can write the power per
unit mass, $~{}{\cal P}\equiv P/M~{}$, as:
$\displaystyle{\cal P}\;=\;\left(-\,\frac{\partial W}{\partial
r}\right)\;\mbox{\boldmath$\vec{\boldmath{\,V}}$}\cdot{\vec{\bf{n}}}\;=\;\left(-\,\frac{\partial
W}{\partial r}\right)\frac{d\zeta}{dt}\;\;\;.$ (198)
A harmonic external potential
$\displaystyle
W~{}=~{}W_{0}~{}\cos(\,\omega_{\textstyle{{}_{lmpq}}}\,t\,)~{}~{}~{},$ (199)
applied at a point of the primary’s surface, will elevate this point by
$\displaystyle\zeta~{}=~{}h_{2}~{}\frac{W_{o}}{\mbox{g}}~{}\cos(\omega_{\textstyle{{}_{lmpq}}}\,t~{}-~{}\epsilon_{\textstyle{{}_{lmpq}}})~{}=~{}h_{2}\;\frac{W_{o}}{\mbox{g}}~{}\cos(\omega_{\textstyle{{}_{lmpq}}}\,t\;-\;\omega_{\textstyle{{}_{lmpq}}}\,\Delta
t_{\textstyle{{}_{lmpq}}})~{}~{}~{},$ (200)
with g being the surface gravity acceleration, and $\,h_{2}\,$ denoting the
Love number.
In formula (200), $\,\omega_{\textstyle{{}_{lmpq}}}\,$ is one of the modes
(105) showing up in the Darwin-Kaula expansion (103). The quantity
$\,\epsilon_{\textstyle{{}_{lmpq}}}\,=\,\omega_{\textstyle{{}_{lmpq}}}\,\Delta
t_{\textstyle{{}_{lmpq}}}\,$ is the corresponding phase lag, while $\,\Delta
t_{\textstyle{{}_{lmpq}}}\,$ is the positively defined time lag at this mode.
Although the tidal modes $\,\omega_{\textstyle{{}_{lmpq}}}\,$ can assume any
sign, both the potential $\,W\,$ and elevation $\,\zeta\,$ can be expressed
via the positively defined forcing frequency
$~{}\chi_{\textstyle{{}_{lmpq}}}\,=\,|\,\omega_{\textstyle{{}_{lmpq}}}\,|~{}$
and the absolute value of the phase lag:
$\displaystyle W~{}=~{}W_{0}~{}\cos(\chi\,t)~{}~{}~{},$ (201)
$\displaystyle\zeta~{}=~{}h_{2}~{}\frac{W_{o}}{\mbox{g}}~{}\cos(\chi\,t~{}-~{}|\,\epsilon\,|)~{}~{}~{},$
(202)
subscripts $\,lmpq\,$ being dropped here and hereafter for brevity.
The vertical velocity of the considered element of the primary’s surface will
be
$\displaystyle\frac{d\zeta}{dt}\;=\;-\;h_{2}\;\chi\;\frac{W_{o}}{\mbox{g}}\;\sin(\chi
t\;-\;|\epsilon|)\;=\;-\;h_{2}\;\chi\;\frac{W_{o}}{\mbox{g}}\;\left(\sin\chi
t\;\cos|\epsilon|\;-\;\cos\chi t\;\sin|\epsilon|\right)\;\;.\;\;$ (203)
Introducing the notation $~{}A\,=\,h_{2}\,\frac{\textstyle
W_{0}}{\textstyle\mbox{g}}\,\frac{\textstyle\partial W_{0}}{\textstyle\partial
r}~{}$, we write the power per unit mass as
$\displaystyle{\cal P}\;=\;\left(-\,\frac{\partial W}{\partial
r}\right)\frac{d\zeta}{dt}\;=\;A~{}\chi~{}\cos(\chi\,t)~{}\sin(\chi
t\;-\;|\epsilon|)\;\;\;,$ (204)
and write the work $\,w\,$ per unit mass, performed over a time interval
$\,(t_{0}\,,\;t)\,$, as:
$\displaystyle
w|^{\textstyle{{}^{~{}t}}}_{\textstyle{{}_{~{}t_{0}}}}=\int_{t_{0}}^{t}{\cal
P}~{}dt=A\int^{\chi t}_{\chi t_{0}}\cos(\chi\,t)\,\sin(\chi
t-|\epsilon|)d(\chi\,t)=A\,\cos|\epsilon|\int_{\chi t_{0}}^{\chi t}\cos
z\,\sin z\,dz-A\,\sin|\epsilon|\int_{\chi t_{0}}^{\chi t}\cos^{2}z\,dz$
$\displaystyle=~{}-~{}\frac{A}{4}~{}{\LARGE\left[\right.}\,\cos(2\chi
t-|\epsilon|)\,+\,2\;\chi\;t\;\sin|\epsilon|~{}{\LARGE\left.\right]}^{\textstyle{{}^{~{}t}}}_{\textstyle{{}_{~{}t_{0}}}}~{}~{}~{}.$
(205)
Being cyclic, the first term in (205) renders the elastic energy stored in the
body. The second term, being linear in time, furnishes the energy damped. This
clear interpretation of the two terms was offered by Stan Peale [2011,
personal communication].
The work over a time period $\,T\,=\,2\pi/\chi\,$ is equal to the energy
dissipated over the period:
$\displaystyle w|^{\textstyle{{}^{~{}t=T}}}_{\textstyle{{}_{~{}t=0}}}=\Delta
E_{\textstyle{{}_{cycle}}}\;=\;-\;A\;\pi\;\sin|\epsilon|~{}~{}~{}.$ (206)
It can be shown that the peak work is obtained over the time span from
$\,\chi\,t\,=\,|\epsilon|\,$ to $\,\chi\,t\,=\,\pi/2\,$ and assumes the value
$\displaystyle
E_{\textstyle{{}_{peak}}}^{\textstyle{{}^{(work)}}}\,=\;~{}\frac{A}{2}~{}\left[\,\cos|\epsilon|~{}-~{}\sin|\epsilon|~{}\left(\,\frac{\pi}{2}\,-\,|\epsilon|\,\right)\,\right]~{}~{}~{},$
(207)
whence the appropriate quality factor is given by:
$\displaystyle Q^{-1}_{\textstyle{{}_{work}}}\,=~{}\frac{~{}-~{}\Delta
E_{\textstyle{{}_{cycle}}}}{2\,\pi\,E_{\textstyle{{}_{peak}}}^{\textstyle{{}^{(work)}}}}~{}=~{}\frac{\tan|\epsilon|}{1\;-\;\left(\,\frac{\textstyle\pi}{\textstyle
2}\,-\,|\epsilon|\,\right)~{}\tan|\epsilon|}~{}~{}~{}.$ (208)
To calculate the peak energy stored in the body, we note that the first
term in (205) is maximal when taken over the span from
$\,\chi\,t\,=\,|\epsilon|/2\,$ through
$\,\chi\,t\,=\,\pi/2\,+\,|\epsilon|/2~{}$:
$\displaystyle
E_{\textstyle{{}_{peak}}}^{\textstyle{{}^{(energy)}}}\,=\;~{}\frac{A}{2}~{}~{}~{},$
(209)
and the corresponding quality factor is:
$\displaystyle Q^{-1}_{\textstyle{{}_{energy}}}\,=~{}\frac{~{}-~{}\Delta
E_{\textstyle{{}_{cycle}}}}{2\,\pi\,E_{\textstyle{{}_{peak}}}^{\textstyle{{}^{(energy)}}}}~{}=~{}\sin|\epsilon|~{}~{}~{}.$
(210)
Goldreich (1963) suggested employing the span $\,\chi\,t\,=\,(0\,,\;\pi/4)\,$.
The absolute value of the resulting power, denoted in Ibid. as $\,E^{*}\,$, is
equal to
$\displaystyle E^{*}\,=\;~{}\frac{A}{2}~{}\cos|\epsilon|~{}~{}~{}$ (211)
and is not the peak value of the energy stored nor of the work performed.
Goldreich (1963) however employed it to define a quality factor, which we
shall term $~{}Q_{\textstyle{{}_{Goldreich}}}~{}$. This factor is introduced
via
$\displaystyle Q^{-1}_{\textstyle{{}_{Goldreich}}}\,=~{}\frac{~{}-~{}\Delta
E_{\textstyle{{}_{cycle}}}}{2\,\pi\,E^{*}}~{}=~{}\tan|\epsilon|~{}~{}~{}.$
(212)
In our opinion, the quality factor $\,Q_{\textstyle{{}_{energy}}}\,$ defined
through (210) is preferable, because the expansion of tides contains terms
proportional to
$\,k_{l}(\chi_{\textstyle{{}_{lmpq}}})~{}\sin\epsilon_{l}(\chi_{\textstyle{{}_{lmpq}}})~{}\,$.
Since the long-established tradition is to substitute $\,\sin\epsilon\,$
with $\,1/Q\,$, it is advisable to define $\,Q\,$ exactly through (210), and
also to call it $\,Q_{l}\,$, to distinguish it from the seismic quality factor
(Efroimsky 2012).
## Appendix C Tidal response of a homogeneous viscoelastic sphere (Churkin
1998)
This section presents some results from the unpublished preprint by Churkin
(1998). We took the liberty of upgrading the notation and correcting some
minor oversights. (Churkin (1998) employed the notation $\,k_{\it l}(\tau)\,$
for what we call
$\,\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)\,$;
our notation is more convenient in that it emphasises the close analogy
between the Love functions and the compliance function.)
### C.1 A homogeneous Kelvin-Voigt spherical body
In combination with the Correspondence Principle, the formulae from subsection
A.6.3 furnish the following expression for the complex Love numbers of a
Kelvin-Voigt body:
$\displaystyle\bar{k}_{\textstyle{{}_{l}}}(\chi)\;=\;\frac{3}{2(l-1)}\;\,\frac{1}{\textstyle
1\;+\;A_{\textstyle{{}_{l}}}\,\left(\,1\;+\;\;\tau_{\textstyle{{}_{V}}}\;{\it
i}\;\chi\,\right)\;}~{}~{}~{}.$ (213)
It can then be demonstrated, with the aid of (61), that the time-derivative of
the corresponding Love function is
$\displaystyle\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)~{}=~{}\left\\{\begin{array}[]{c}\frac{\textstyle
3}{\textstyle 2(l-1)}~{}\,\frac{\textstyle 1}{\textstyle
A_{\textstyle{{}_{\textstyle{{}_{l}}}}}\;\tau_{\textstyle{{}_{V}}}}\;\exp(\textstyle\;-\,\tau\,\zeta_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\;\Theta(\tau)\quad~{}~{}\quad\mbox{for}\quad\tau_{\textstyle{{}_{V}}}\,>\,0{}{}{}{}\\\
{}\hfil\\\ \frac{\textstyle 3}{\textstyle 2(l-1)}~{}\,\frac{\textstyle
1}{\textstyle
1\;+\;A_{\textstyle{{}_{\textstyle{{}_{l}}}}}}\;\,\delta(\tau)\quad\quad\quad\quad\quad\quad\quad~{}\,\mbox{for}\quad\tau_{\textstyle{{}_{V}}}\,=\,0~{}~{}~{},\end{array}\right.$
(217)
while the Love function itself has the form of
$\displaystyle{k}_{\textstyle{{}_{l}}}(\tau)$ $\displaystyle=$
$\displaystyle\frac{3}{2(l-1)}~{}\,\frac{1}{A_{\textstyle{{}_{\textstyle{{}_{l}}}}}\;\zeta_{\textstyle{{}_{\textstyle{{}_{l}}}}}\;\tau_{\textstyle{{}_{V}}}}\;\left[\;1\;-\;\exp(\textstyle\;-\,\tau\,\zeta_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\;\right]\;\Theta(\tau)$
$\displaystyle=$
$\displaystyle\frac{3}{2(l-1)}~{}\,\frac{1}{1\;+\;A_{\textstyle{{}_{\textstyle{{}_{l}}}}}}\;\left[\;1\;-\;\exp(\textstyle\;-\,\tau\,\zeta_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\;\right]\;\Theta(\tau)\;\;\;,\quad\;$
where
$\displaystyle\zeta_{\textstyle{{}_{\textstyle{{}_{l}}}}}\;\equiv\;\frac{1\;+\;A_{\textstyle{{}_{\textstyle{{}_{l}}}}}}{A_{\textstyle{{}_{\textstyle{{}_{l}}}}}\;\tau_{\textstyle{{}_{V}}}}~{}~{}.$
(219)
Formulae (217) may look confusing, in that
$\,\exp(\textstyle\;-\,\tau\,\zeta_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\,$
simply vanishes in the elastic limit, i.e., when
$~{}\tau_{\textstyle{{}_{V}}}\rightarrow 0~{}$ and
$~{}\zeta_{\textstyle{{}_{l}}}\rightarrow\infty~{}$. We however should not be
misled by this mathematical artefact stemming from the non-analyticity of the
exponential function. Instead, we should keep in mind that a physical meaning is
attributed not to the Love functions or their derivatives but to the results
of the Love operator’s action on realistic disturbances. For example, a
Heaviside step potential
$\displaystyle
W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*}\,,\;t\,^{\prime})\;=\;W\;\Theta(t\,^{\prime})$
(220)
applied to a homogeneous Kelvin-Voigt spherical body will furnish, through
relation (60), the following response of the potential:
$\displaystyle U_{\it l}(\mbox{{\boldmath$\vec{r}$}},\,t)$ $\displaystyle=$
$\displaystyle\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{t\,^{\prime}=\,-\infty}^{\,t\,^{\prime}=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(t-t\,^{\prime})\;W\;\Theta(t\,^{\prime})\,dt\,^{\prime}\,=\;\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{t\,^{\prime}=\,0}^{\,t\,^{\prime}=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(t-t\,^{\prime})\;W\;dt\,^{\prime}\,\quad\quad\quad\quad~{}$
$\displaystyle=$ $\displaystyle W\;\left(\frac{R}{r}\right)^{{\it
l}+1}\,\int_{\tau\,=\,0}^{\,\tau\,=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(\tau)\;d\tau\;=\;\frac{3}{2(l-1)}~{}\,\frac{\;1\;-\;\exp(\textstyle\;-\,t\,\zeta_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\;}{1\;+\;A_{\textstyle{{}_{l}}}}\;\left(\frac{R}{r}\right)^{{\it
l}+1}W~{}~{}~{}.\quad\quad\quad\quad\quad$
In the elastic limit, this becomes:
$\displaystyle\tau_{\textstyle{{}_{V}}}\rightarrow
0\quad\Longrightarrow\quad\zeta_{\textstyle{{}_{l}}}\rightarrow\infty\quad\Longrightarrow\quad
U_{\it
l}(\,\mbox{{\boldmath$\vec{r}$}}\,,\,\;t\,)\;\rightarrow\;\frac{3}{2(l-1)}~{}\,\frac{\;1\;}{1\;+\;A_{\textstyle{{}_{l}}}}\;\left(\frac{R}{r}\right)^{{\it
l}+1}W~{}~{}~{},\quad$ (222)
which reproduces the case described by the static Love number
$\;k_{\textstyle{{}_{l}}}\,=\,\frac{\textstyle 3}{\textstyle
2\,(l-1)}~{}\,\frac{\textstyle\;1\;}{\textstyle 1\;+\;A_{\textstyle{{}_{l}}}}\;\,$.
An alternative way of getting (222) would be to employ formulae (LABEL:KV2)
and (59a).
### C.2 A homogeneous Maxwell spherical body
Using the formulae presented in the Appendix A.6.4, and relying upon the
Correspondence Principle, we write down the complex Love numbers for a Maxwell
material as
$\displaystyle\bar{k}_{\textstyle{{}_{l}}}(\chi)\;=\;\frac{3}{2(l-1)}\;\frac{1}{\;\textstyle
1\;+\;\frac{\textstyle A_{\textstyle{{}_{l}}}\;\tau_{\textstyle{{}_{M}}}\;{\it
i}\;\chi}{\textstyle 1\;+\;\tau_{\textstyle{{}_{M}}}\;{\it
i}\;\chi\;}\;}\;=\;\frac{3}{2(l-1)}\;\frac{1}{1\;+\;A_{\textstyle{{}_{l}}}}\;\left[\;1\;+\;\frac{A_{\textstyle{{}_{l}}}}{\textstyle
1\;+\;\left(\,1\;+\;A_{\textstyle{{}_{l}}}\,\right)\;\tau_{\textstyle{{}_{M}}}\;{\it
i}\;\chi\;}\;\right]~{}~{}~{},\quad$ (223)
which corresponds, via (61), to
$\displaystyle\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)~{}=~{}\frac{3}{2(l-1)}~{}\,\frac{\;\;\delta(\tau)\;+\;{A_{\textstyle{{}_{l}}}}\;\gamma_{\textstyle{{}_{l}}}\;\exp(\textstyle\;-\,\tau\,\gamma_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\;\Theta(\tau)\;}{\textstyle
1~{}+~{}\textstyle A_{\textstyle{{}_{l}}}}$ (224)
and
$\displaystyle{k}_{\textstyle{{}_{l}}}(\tau)~{}=~{}\frac{3}{2(l-1)}~{}\,\frac{\;\;1\;+\;{A_{\textstyle{{}_{l}}}}\;\left[\,1\;-\;\exp(\textstyle\;-\,\tau\,\gamma_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\;\right]\;}{\textstyle
1~{}+~{}\textstyle A_{\textstyle{{}_{l}}}}\;\,\Theta(\tau)~{}~{}~{},$ (225)
where
$\displaystyle\gamma_{\textstyle{{}_{l}}}\,\equiv\;\frac{1}{\textstyle\left(\,1\;+\;A_{\textstyle{{}_{l}}}\,\right)\;\tau_{\textstyle{{}_{M}}}\;}~{}~{}~{}.$
(226)
A Heaviside step potential
$\displaystyle
W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*}\,,\;t\,^{\prime})\;=\;W\;\Theta(t\,^{\prime})$
(227)
will, according to formula (60), render the following response:
$\displaystyle U_{\it
l}(\mbox{{\boldmath$\vec{r}$}},\,t)\;=\;\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{t\,^{\prime}=\,-\infty}^{\,t\,^{\prime}=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(t-t\,^{\prime})\;W\;\Theta(t\,^{\prime})\,dt\,^{\prime}\,=\;\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{t\,^{\prime}=\,0}^{\,t\,^{\prime}=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(t-t\,^{\prime})\;W\;dt\,^{\prime}\,\quad\quad\quad\quad\quad~{}$ (228)
$\displaystyle=W\;\left(\frac{R}{r}\right)^{{\it
l}+1}\,\int_{\tau\,=\,0}^{\,\tau\,=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(\tau)\;d\tau\;=\;\frac{3}{2(l-1)}~{}\,\frac{\;1\;+\;A_{\textstyle{{}_{l}}}\;\left[1\;-\;\exp(\textstyle\;-\,t\,\gamma_{\textstyle{{}_{\textstyle{{}_{l}}}}}\,)\;\right]\;}{1\;+\;A_{\textstyle{{}_{l}}}}\;\left(\frac{R}{r}\right)^{{\it
l}+1}W~{}\Theta(t)~{}.\quad~{}\quad~{}$
In the elastic limit, we obtain:
$\displaystyle\tau_{\textstyle{{}_{M}}}\rightarrow\infty\quad\Longrightarrow\quad\gamma_{\textstyle{{}_{l}}}\rightarrow
0\quad\Longrightarrow\quad U_{\it
l}(\,\mbox{{\boldmath$\vec{r}$}}\,,\,\;t\,)\;\rightarrow\;\frac{3}{2(l-1)}~{}\,\frac{\;1\;}{1\;+\;A_{\textstyle{{}_{l}}}}\;\left(\frac{R}{r}\right)^{{\it
l}+1}W~{}~{}~{},\quad$ (229)
which corresponds to the situation described by the static Love number
$\;k_{\textstyle{{}_{l}}}\,=\,\frac{\textstyle 3}{\textstyle
2(l-1)}~{}\,\frac{\textstyle\;1\;}{\textstyle
1\;+\;A_{\textstyle{{}_{l}}}}\;\,$.
### C.3 A homogeneous Hohenemser-Prager (SAS) spherical body
The Correspondence Principle, along with the formulae from subsection A.6.5,
yields the following expression for the complex Love numbers of a Hohenemser-
Prager (SAS) spherical body:
$\displaystyle\bar{k}_{\textstyle{{}_{l}}}(\chi)\;=\;\frac{3}{2\,(l-1)}\;\,\frac{1}{\;1\;+\;A_{\textstyle{{}_{l}}}\;\frac{\textstyle
1\,+\,{\it i}\,\chi\,\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}}{\textstyle
1\,+\,{\it i}\,\chi\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}}\;}~{}~{}~{}.$
(230)
Combined with (61), this entails:
$\displaystyle\stackrel{{\scriptstyle\bf\centerdot}}{{k}}_{\textstyle{{}_{l}}}(\tau)\;=\;\frac{3}{2\,(l-1)}\;\,\frac{1}{\;1\;+\;A_{\textstyle{{}_{l}}}\;\,\frac{\textstyle\,\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}}{\,\textstyle\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}}\;}\;\left[\;\delta(\tau)\;+\;\frac{A_{\textstyle{{}_{l}}}}{\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}}\,\;\frac{\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}\,-\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}}{~{}\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}\,+\,A_{\textstyle{{}_{l}}}\,\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}}~{}\,\exp\left(\,-\,\frac{\,1\;+\;A_{\textstyle{{}_{l}}}\,}{\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}\;+\;A_{\textstyle{{}_{l}}}\;\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}\,}\;\tau\,\right)\;\right]\quad\quad\quad$
(231)
and
$\displaystyle{k}_{\textstyle{{}_{l}}}(\tau)\;=\;\frac{3}{2\,(l-1)}\;\,\frac{\textstyle\;1~{}+~{}\frac{\textstyle
A_{\textstyle{{}_{l}}}}{\textstyle\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}}\,\;\frac{\textstyle\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}\,-\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}}{\textstyle~{}1\,+\,A_{\textstyle{{}_{l}}}\,}\left[\;1\,-\;\exp\left(\,-\,\frac{\textstyle\,1\;+\;A_{\textstyle{{}_{l}}}\,}{\textstyle\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}\;+\;A_{\textstyle{{}_{l}}}\;\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}\,}\;\tau\,\right)\;\right]\;}{\textstyle\;1\;+\;A_{\textstyle{{}_{l}}}\;\,\frac{\textstyle\,\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}}{\,\textstyle\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}}\;}\;~{}\Theta(\tau)~{}~{}~{}.\quad\quad\quad$
(232)
A Heaviside step potential
$\displaystyle
W_{\it{l}}(\mbox{{\boldmath$\vec{R}$}}\,,\;\mbox{{\boldmath$\vec{r}$}}^{\;*}\,,\;t\,^{\prime})\;=\;W\;\Theta(t\,^{\prime})$
(233)
applied to a SAS spherical body will then result in the following variation of
its potential:
$\displaystyle U_{\it
l}(\mbox{{\boldmath$\vec{r}$}},\,t)~{}=\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$
$\displaystyle\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{t\,^{\prime}=\,-\infty}^{\,t\,^{\prime}=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(t-t\,^{\prime})\,W\,\Theta(t\,^{\prime})\,dt\,^{\prime}=\left(\frac{R}{r}\right)^{{\it
l}+1}\int_{t\,^{\prime}=\,0}^{\,t\,^{\prime}=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(t-t\,^{\prime})\,W\,dt\,^{\prime}=W\left(\frac{R}{r}\right)^{{\it
l}+1}\,\int_{\tau\,=\,0}^{\,\tau\,=\,t}\stackrel{{\scriptstyle\centerdot}}{{k}}_{\it
l}(\tau)\,d\tau\quad\quad$
$\displaystyle=\frac{3}{2(l-1)}\left(\frac{1}{{1+A_{\textstyle{{}_{l}}}\,\frac{\textstyle\tau_{{\textstyle{{}_{\textstyle{{}_{V}}}}}}}{\textstyle\tau_{{\textstyle{{}_{\textstyle{{}_{M}}}}}}}\,}}+\frac{A_{\textstyle{{}_{l}}}}{1+A_{\textstyle{{}_{l}}}}\,\frac{\tau_{{\textstyle{{}_{\textstyle{{}_{V}}}}}}\,-\,\tau_{{\textstyle{{}_{\textstyle{{}_{M}}}}}}}{\tau_{{\textstyle{{}_{\textstyle{{}_{M}}}}}}+A_{\textstyle{{}_{l}}}\,\tau_{{\textstyle{{}_{\textstyle{{}_{V}}}}}}}\left[1-\exp\left(-\,\frac{\textstyle\,1\,+\,A_{\textstyle{{}_{l}}}\,}{\textstyle\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}+A_{\textstyle{{}_{l}}}\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}\,}\;t\right)\right]\right)\left(\frac{R}{r}\right)^{{\it
l}+1}W~{}\Theta(t)~{}.~{}~{}\quad$ (234)
Within this model, the elastic limit is achieved by setting
$\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}\,=\,\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}\,$,
whence we obtain the case described by the static Love number
$\;k_{\textstyle{{}_{l}}}\,=\,\frac{\textstyle 3}{\textstyle
2\,(l-1)}~{}\,\frac{\textstyle\;1\;}{\textstyle 1\;+\;A_{\textstyle{{}_{l}}}}\;\,$.
Interestingly, the elastic regime is achieved even when these times are not
zero. Their being equal to one another turns out to be sufficient.
Repeating the above calculation for tidal disturbance
$\,W\,\Theta(-t\,^{\prime})\,$, we shall see that, after the tidal
perturbation is removed, a tidally prestressed sphere regains its shape, the
stress relaxing at a rate proportional to
$~{}\exp\left(-\,~{}\frac{\textstyle\,1\,+\,A_{\textstyle{{}_{l}}}\,}{\textstyle\,\tau_{\textstyle{{}_{\textstyle{{}_{M}}}}}+A_{\textstyle{{}_{l}}}\tau_{\textstyle{{}_{\textstyle{{}_{V}}}}}\,}\,~{}t\right)~{}$.
## Appendix D The correspondence principle
(elastic-viscoelastic analogy)
### D.1 The correspondence principle, for nonrotating bodies
While the static Love numbers depend on the static rigidity $\,\mu\,$ through
(3), it is not immediately clear if a similar formula interconnects also
$\,\bar{k}_{\it l}(\chi)\,$ with $\,\bar{\mu}(\chi)\,$. To understand why and
when the relation should hold, recall that formulae (3) originate from the
solution of a boundary-value problem for a system incorporating two equations:
$\displaystyle\sigma_{\textstyle{{}_{\beta\nu}}}$ $\displaystyle=$
$\displaystyle 2\;\mu\;u_{\textstyle{{}_{\beta\nu}}}~{}~{}~{},$ (235a)
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\frac{\partial\sigma_{\textstyle{{}_{\beta\nu}}}}{\partial
x_{\textstyle{{}_{\nu}}}}\;-\;\frac{\partial p}{\partial
x_{\textstyle{{}_{\beta}}}}\;-\;\rho\;\frac{\partial(W\,+\,U)}{\partial
x_{\textstyle{{}_{\beta}}}}~{}~{}~{},$ (235b)
the latter being simply the equation of equilibrium written for a _static_
viscoelastic medium, in neglect of compressibility and heat conductivity. The
notations $\,\sigma_{\textstyle{{}_{\beta\nu}}}\,$ and
$\,u_{\textstyle{{}_{\beta\nu}}}\,$ stand for the deviatoric stress and
strain, $~{}p\,\equiv\,-\,\frac{\textstyle 1}{\textstyle
3}\,\mbox{Sp}\,{\mathbb{S}}~{}$ is the pressure (set to be nil in
incompressible media), while $\,W\,$ and $\,U\,$ are the perturbing and
perturbed potentials. By solving the system, one arrives at the static
relation $\,U_{\it l}=k_{\it l}\,W_{\it l}\,$, with the customary static Love
numbers $\,k_{\it l}\,$ expressed via $\,\rho\,$, $\,R\,$, and $\,\mu\,$ by
(3).
Now let us write equations analogous to (235a \- 235b) for the time-dependent
deformation of a nonrotating body:
$\displaystyle{\mathbb{S}}$ $\displaystyle=$ $\displaystyle
2~{}\hat{\mu}~{}{\mathbb{U}}~{}~{}~{},$ (236a)
$\displaystyle\rho~{}{\bf\ddot{u}}$ $\displaystyle=$
$\displaystyle\nabla{\mathbb{S}}~{}-~{}\nabla
p~{}-~{}\rho\,\nabla(W\,+\,U)~{}~{}~{}$ (236b)
or, in terms of components:
$\displaystyle\sigma_{\textstyle{{}_{\beta\nu}}}$ $\displaystyle=$
$\displaystyle 2~{}\hat{\mu}~{}u_{\textstyle{{}_{\beta\nu}}}~{}~{}~{},$ (237a)
$\displaystyle\rho\,\ddot{u}_{\textstyle{{}_{\beta}}}$ $\displaystyle=$
$\displaystyle\frac{\partial\sigma_{\textstyle{{}_{\beta\nu}}}}{\partial
x_{\textstyle{{}_{\nu}}}}~{}-~{}\frac{\partial p}{\partial
x_{\textstyle{{}_{\beta}}}}\;-\;\rho\;\frac{\partial(W\,+\,U)}{\partial
x_{\textstyle{{}_{\beta}}}}~{}~{}~{}.$ (237b)
In the frequency domain, this reads:
$\displaystyle\bar{\sigma}_{\textstyle{{}_{\beta\nu}}}(\chi)$ $\displaystyle=$
$\displaystyle
2~{}\bar{\mu}(\chi)~{}\bar{u}_{\textstyle{{}_{\beta\nu}}}(\chi)~{}~{}~{},$
(238a)
$\displaystyle-\,\rho\,\chi^{2}\,\bar{u}_{\textstyle{{}_{\beta}}}(\chi)$
$\displaystyle=$
$\displaystyle\frac{\partial\bar{\sigma}_{\textstyle{{}_{\beta\nu}}}(\chi)}{\partial
x_{\textstyle{{}_{\nu}}}}~{}-~{}\frac{\partial\bar{p}(\chi)}{\partial
x_{\textstyle{{}_{\beta}}}}\;-\;\rho\;\frac{\partial\left[\bar{W}(\chi)\,+\,\bar{U}(\chi)\right]}{\partial
x_{\textstyle{{}_{\beta}}}}~{}~{}~{},$ (238b)
where a bar denotes a spectral component for all functions except $\,\mu~{}$ –
recall that $\,\bar{\mu}\,$ is a spectral component not of the kernel
$\,\mu(\tau)\,$ but of its time-derivative $\,{\bf{\dot{\mu}}}(\tau)\,$.
Unless the frequencies are extremely high, we can neglect the body-fixed
acceleration term $\,\chi^{2}\,\bar{u}_{\textstyle{{}_{\beta}}}(\chi)\,$ in
the second equation, in which case our system of equations for the spectral
components will mimic (235). Thus we arrive at the so-called _correspondence
principle_ (also known as the elastic-viscoelastic analogy), which maps a
solution of a linear viscoelastic boundary-value problem to a solution of a
corresponding elastic problem with the same initial and boundary conditions.
As a result, the algebraic equations for the Fourier (or Laplace) components
of the strain and stress in the viscoelastic case mimic the equations
connecting the strain and stress in the appropriate elastic problem. So the
viscoelastic operational moduli $\,\bar{\mu}(\chi)\,$ or $\,\bar{J}(\chi)\,$
obey the same algebraic relations as the elastic parameters $\,\mu\,$ or
$\,J\,$.
In the literature, there is no consensus on the authorship of this principle.
For example, Haddad (1995) mistakenly attributes it to several authors who
published in the 1950s and 1960s. In reality, the principle was pioneered
almost a century earlier by Darwin (1879), for isotropic incompressible media.
The principle was extended to more general types of media by Biot (1954,
1958), who also pointed out some limitations of this principle.
### D.2 The correspondence principle, for rotating bodies
Consider a body of mass $\,M_{prim}\,$, which is spinning at a rate
$\vec{\omega}$ and is also performing some orbital motion (for example, is
orbiting, with its partner of mass $\,M_{sec}\,$, around their mutual centre
of mass). Relative to some inertial coordinate system, the centre of mass of
the body is located at $\,\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}\,$, while a
small parcel of its material is positioned at $\vec{x}$ . Relative to the
centre of mass of the body, the parcel is located at
$\,\mbox{{\boldmath$\vec{r}$}}\,=\,\mbox{{\boldmath$\vec{x}$}}\,-\,\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}\,$.
The body being deformable, we can decompose $\vec{r}$ into its average value,
$\,\mbox{{\boldmath$\vec{r}$}}_{0}\,$, and an instantaneous displacement
$\vec{u}$ :
$\displaystyle\left.\begin{array}[]{c}\mbox{{\boldmath$\vec{x}$}}\;=\;\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}\,+\;\mbox{{\boldmath$\vec{r}$}}{}{}\\\
{}\hfil\\\
\mbox{{\boldmath$\vec{r}$}}\;=\;\mbox{{\boldmath$\vec{r}$}}_{0}\,+\;\mbox{{\boldmath$\vec{u}$}}{}{}{}{}\end{array}\right\\}~{}\quad~{}\Longrightarrow\quad~{}\quad\mbox{{\boldmath$\vec{x}$}}\;=\;\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}\,+\;\mbox{{\boldmath$\vec{r}$}}_{0}\,+\;\mbox{{\boldmath$\vec{u}$}}~{}~{}.$
(242)
Denote with $D/Dt$ the time-derivative in the inertial frame. The symbol
$d/dt$ and its synonym, overdot, will be reserved for the time-derivative in
the body frame, so $\,{d\mbox{{\boldmath$\vec{r}$}}_{0}}/{dt}\,=\,0\,$. Then
$\displaystyle\frac{D\mbox{{\boldmath$\vec{r}$}}}{Dt}\;=\;\frac{d\mbox{{\boldmath$\vec{r}$}}}{dt}~{}+~{}\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}\quad~{}\quad\mbox{and}\quad~{}\quad\frac{D^{2}\mbox{{\boldmath$\vec{r}$}}}{Dt^{2}}\;=\;\frac{d^{2}\mbox{{\boldmath$\vec{r}$}}}{dt^{2}}\;+\;2\;\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\frac{d\mbox{{\boldmath$\vec{r}$}}}{dt}\;+\;\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\left(\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}\right)\;+\;\mbox{{\boldmath$\dot{\vec{\omega}}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}~{}~{}.\quad\quad$
(243)
Together, the above formulae result in
$\displaystyle\frac{D^{2}\mbox{{\boldmath$\vec{x}$}}}{Dt^{2}}\;=\;\frac{D^{2}\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}}{Dt^{2}}\;+\;\frac{D^{2}\mbox{{\boldmath$\vec{r}$}}}{Dt^{2}}$
$\displaystyle=$
$\displaystyle\frac{D^{2}\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}}{Dt^{2}}\;+\;\frac{d^{2}\mbox{{\boldmath$\vec{r}$}}}{dt^{2}}\;+\;2\;\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\frac{d\mbox{{\boldmath$\vec{r}$}}}{dt}\;+\;\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\left(\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}\right)\;+\;\mbox{{\boldmath$\dot{\vec{\omega}}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}$
(244) $\displaystyle=$
$\displaystyle\frac{D^{2}\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}}{Dt^{2}}\;+\;\frac{d^{2}\mbox{{\boldmath$\vec{u}$}}}{dt^{2}}\;+\;2\;\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\frac{d\mbox{{\boldmath$\vec{u}$}}}{dt}\;+\;\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\left(\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}\right)\;+\;\mbox{{\boldmath$\dot{\vec{\omega}}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}~{}~{}.\quad\quad\quad$
The equation of motion for a small parcel of the body’s material will read as
$\displaystyle\rho\;\frac{D^{2}\mbox{{\boldmath$\vec{x}$}}}{Dt^{2}}\;=\;\nabla{\mathbb{S}}\;-\;\nabla
p\;+\;\mbox{\boldmath$\vec{\boldmath{F}}$}_{self}\;+\;\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}~{}~{}~{},$
(245)
where $\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}\,$ is the exterior gravity
force per unit volume, while $\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{self}\,$
is the “interior” gravity force per unit volume, i.e., the self-force
wherewith the rest of the body is acting upon the selected parcel of medium.
Insertion of (244) in (245) furnishes:
$\displaystyle\rho\,\left[\,\frac{D^{2}\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}}{Dt^{2}}\,+\,\stackrel{{\scriptstyle\bf{\centerdot\,\centerdot}}}{{\textbf{\mbox{\boldmath$\vec{\boldmath
u}$}}}}\,+\,2\,\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\stackrel{{\scriptstyle\bf{\centerdot}}}{{\textbf{\mbox{\boldmath$\vec{\boldmath
u}$}}}}\,+\,\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\left(\mbox{{\boldmath$\vec{\omega}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}\right)\,+\,\mbox{{\boldmath$\dot{\vec{\omega}}$}}\,\times\,\mbox{{\boldmath$\vec{r}$}}\,\right]\,=\,\nabla{\mathbb{S}}\,-\,\nabla
p\,+\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{self}\,+\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}~{}~{}.\quad\quad$
(246)
At the same time, for the primary body as a whole, we can write:
$\displaystyle
M_{prim}\;\frac{D^{2}\mbox{{\boldmath$\vec{x}$}}_{{}_{CM}}}{Dt^{2}}\;=\;\int_{V}\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}\;d^{3}\mbox{{\boldmath$\vec{r}$}}~{}~{},$
(247)
the integration being carried out over the volume $\,V\,$ of the primary.
(Recall that $\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}\,$ is a force per
unit volume.) Combined together, the above two equations will result in
$\displaystyle\rho\left[\,\stackrel{{\scriptstyle\bf{\centerdot\,\centerdot}}}{{\textbf{\mbox{\boldmath$\vec{\boldmath
u}$}}}}+\,2\,\mbox{{\boldmath$\vec{\omega}$}}\times\stackrel{{\scriptstyle\bf{\centerdot}}}{{\textbf{\mbox{\boldmath$\vec{\boldmath
u}$}}}}+\,\mbox{{\boldmath$\vec{\omega}$}}\times\left(\mbox{{\boldmath$\vec{\omega}$}}\times\mbox{{\boldmath$\vec{r}$}}\right)+\mbox{{\boldmath$\dot{\vec{\omega}}$}}\times\mbox{{\boldmath$\vec{r}$}}\,\right]\,=\,\nabla{\mathbb{S}}\,-\,\nabla
p\,+\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{self}\,+\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}\,-\,\frac{\rho~{}}{\,M_{prim}\,}\int_{V}\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}\;d^{3}\mbox{{\boldmath$\vec{r}$}}~{}~{}.\quad~{}$
(248)
For a spherically-symmetrical (not necessarily radially-homogeneous) body, the
integral on the right-hand side clearly removes the Newtonian part of the
force, leaving the harmonics intact:
$\displaystyle\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}\,-\,\frac{\rho~{}}{\,M_{prim}\,}\int_{V}\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}~{}d^{3}\mbox{{\boldmath$\vec{r}$}}~{}=~{}\rho~{}\sum_{l=2}^{\infty}\nabla
W_{l}~{}~{},$ (249)
where the harmonics are given by
$\displaystyle
W_{l}(\mbox{{\boldmath$\vec{r}$}},\,\mbox{{\boldmath$\vec{r}$}}^{*})\;=\;-\;\frac{G\,M_{sec}}{r^{*}}\;\left(\frac{r}{r^{*}}\right)^{l}\,P_{l}(\cos\gamma)~{}~{}~{},$
(250)
$\mbox{{\boldmath$\vec{r}$}}^{\,*}$ being the vector pointing from the centre
of mass of the primary to that of the secondary, and $\,\gamma\,$ being the
angular separation between $\vec{r}$ and
$\,\mbox{{\boldmath$\vec{r}$}}^{\,*}\,$, subtended at the centre of mass of
the primary.
In reality, a tiny extra force $\,{\cal F}\,$, the tidal force per unit
volume, is left over due to the body being slightly distorted:
$\displaystyle\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}\,-\,\frac{\rho~{}}{\,M_{prim}\,}\int_{V}\mbox{\boldmath$\vec{\boldmath{F}}$}_{ext}~{}d^{3}\mbox{{\boldmath$\vec{r}$}}~{}=~{}\rho~{}\sum_{l=2}^{\infty}\nabla
W_{l}~{}+~{}{\cal F}~{}~{}.$ (251)
Here $\,{\cal F}\,$ is the density multiplied by the average tidal
acceleration experienced by the body as a whole. In neglect of $\,{\cal F}\,$,
we arrive at
$\displaystyle\rho\left[\,\stackrel{{\scriptstyle\bf{\centerdot\,\centerdot}}}{{\textbf{\mbox{\boldmath$\vec{\boldmath
u}$}}}}+\,2\,\mbox{{\boldmath$\vec{\omega}$}}\times\stackrel{{\scriptstyle\bf{\centerdot}}}{{\textbf{\mbox{\boldmath$\vec{\boldmath
u}$}}}}+\,\mbox{{\boldmath$\vec{\omega}$}}\times\left(\mbox{{\boldmath$\vec{\omega}$}}\times\mbox{{\boldmath$\vec{r}$}}\right)+\mbox{{\boldmath$\dot{\vec{\omega}}$}}\times\mbox{{\boldmath$\vec{r}$}}\,\right]\,=\,\nabla{\mathbb{S}}\,-\,\nabla
p\,-\,\rho~{}\sum_{l=2}^{\infty}\nabla(U_{l}\,+\;W_{l})~{}~{}.\quad~{}$ (252)
Here, to each disturbing term of the exterior potential, $\,W_{l}\,$,
corresponds a term $\,U_{l}\,$ of the self-potential, the self-force thus
being expanded into
$\,\mbox{\boldmath$\vec{\boldmath{F}}$}_{self}\,=\,-\,\sum_{l=2}^{\infty}\nabla
U_{l}\,$.
Equation (252) could as well have been derived in the body frame, where it
would have assumed the same form.
Denoting the tidal frequency with $\,\chi\,$, we see that the terms on the
left-hand side have the order of $\,\rho\,\chi^{2}\,u~{}$,
$~{}\rho\,\omega\,\chi\,u~{}$, $~{}\rho\,\omega^{2}\,r\,$, and
$~{}\rho\,\dot{\chi}\,\omega\,r~{}$, respectively. In realistic situations,
the first two terms, thus, can be neglected, and we end up with
$\displaystyle 0\,=\,\nabla{\mathbb{S}}\,-\,\nabla
p\,-~{}\rho~{}\sum_{l=2}^{\infty}\nabla(U_{l}\,+\;W_{l})~{}-~{}\rho~{}\mbox{{\boldmath$\vec{\omega}$}}\times\left(\mbox{{\boldmath$\vec{\omega}$}}\times\mbox{{\boldmath$\vec{r}$}}\right)~{}-~{}\rho\,\mbox{{\boldmath$\dot{\vec{\omega}}$}}\times\mbox{{\boldmath$\vec{r}$}}~{}~{},\quad~{}$
(253)
the term $\;-\,\nabla p\,$ vanishing in an incompressible medium.
### D.3 The centripetal term and the zero-degree Love number
The centripetal term in (253) can be split into a purely radial part and a
part that can be incorporated into the $\,W_{2}\,$ term of the tide-raising
potential, as was suggested by Love (1909, 1911). Introducing the colatitude
$\,\phi\,^{\prime}\,$ through
$~{}\cos\phi\,^{\prime}~{}=~{}\frac{\mbox{{\boldmath$\vec{\omega}$}}}{\,|\mbox{{\boldmath$\vec{\omega}$}}|\,}\cdot\frac{\mbox{{\boldmath$\vec{r}$}}}{\,|\mbox{{\boldmath$\vec{r}$}}|\,}~{}$,
we can write down the evident equality
$\displaystyle\mbox{{\boldmath$\vec{\omega}$}}\times\left(\mbox{{\boldmath$\vec{\omega}$}}\times\mbox{{\boldmath$\vec{r}$}}\right)~{}=~{}\mbox{{\boldmath$\vec{\omega}$}}~{}(\mbox{{\boldmath$\vec{\omega}$}}\cdot\mbox{{\boldmath$\vec{r}$}})~{}-~{}\mbox{{\boldmath$\vec{r}$}}~{}\mbox{{\boldmath$\vec{\omega}$}}^{\,2}=~{}\nabla\left[\,\frac{1}{2}\,\left(\mbox{{\boldmath$\vec{\omega}$}}\cdot\mbox{{\boldmath$\vec{r}$}}\right)^{2}\,-~{}\frac{1}{2}\,\mbox{{\boldmath$\vec{\omega}$}}^{\,2}\,\mbox{{\boldmath$\vec{r}$}}^{\,2}\,\right]~{}=~{}\nabla\left[~{}~{}\frac{1}{2}\,\mbox{{\boldmath$\vec{\omega}$}}^{\,2}\,\mbox{{\boldmath$\vec{r}$}}^{\,2}\,\left(\,\cos^{2}\phi\,^{\prime}~{}-~{}1\,\right)~{}~{}\right]~{}~{}~{}.\quad\quad\quad\quad\quad\quad$
The definition $~{}P_{2}(\cos\phi\,^{\prime})\,=~{}\frac{\textstyle
1}{\textstyle 2}\left(3\,\cos^{2}\phi\,^{\prime}\,-~{}1\right)~{}$ easily
renders: $~{}\cos^{2}\phi\,^{\prime}~{}=~{}\frac{\textstyle 2}{\textstyle
3}~{}P_{2}(\cos\phi\,^{\prime})~{}+~{}\frac{\textstyle 1}{\textstyle 3}~{}$,
whence:
$\displaystyle\mbox{{\boldmath$\vec{\omega}$}}\times\left(\mbox{{\boldmath$\vec{\omega}$}}\times\mbox{{\boldmath$\vec{r}$}}\right)~{}=~{}\nabla\left[~{}~{}\frac{1}{3}~{}\mbox{{\boldmath$\vec{\omega}$}}^{\,2}\,\mbox{{\boldmath$\vec{r}$}}^{\,2}\,\left[\,P_{2}\left(\cos\phi\,^{\prime}\right)~{}-~{}1~{}\right]~{}~{}\right]~{}~{}~{}.$
(254)
We see that the centripetal force splits into a second-harmonic and purely-
radial parts:
$\displaystyle-~{}\rho~{}\mbox{{\boldmath$\vec{\omega}$}}\times\left(\mbox{{\boldmath$\vec{\omega}$}}\times\mbox{{\boldmath$\vec{r}$}}\right)~{}=~{}-~{}\nabla\left[~{}\frac{\rho}{3}~{}\mbox{{\boldmath$\vec{\omega}$}}^{\,2}\,\mbox{{\boldmath$\vec{r}$}}^{\,2}\,P_{2}\left(\cos\phi\,^{\prime}\right)~{}\right]~{}+~{}\nabla\;\left[~{}\frac{\rho}{3}~{}\mbox{{\boldmath$\vec{\omega}$}}^{\,2}\,\mbox{{\boldmath$\vec{r}$}}^{\,2}~{}\right]~{}~{}~{},$
(255)
where we assume the body to be homogeneous. The second-harmonic part can be
incorporated into the external potential. The response to this part will be
proportional to the degree-2 Love number $\,k_{2}\,$.
The purely radial part of the centripetal potential generates a radial
deformation. This part of the potential is often ignored, the associated
deformation being tacitly included into the equilibrium shape of the body.
Compared to the main terms of the equation of motion, this radial term is of
the order of $\,10^{-3}\,$ for the Earth, and is smaller for most other
bodies. As the rotation variations of the Earth are of the order of
$\,10^{-5}\,$, this term leads to a tiny change in the geopotential and to an
associated displacement of the order of a micrometer (Tim Van Hoolst, private
communication).
However, for other rotators the situation may be different. For example, in
Phobos, whose libration magnitude is large (about 1 degree), the radial term
may cause an equipotential-surface variation of about 10 cm. This magnitude is
large enough to be observed by future missions and should be studied in more
detail (Tim Van Hoolst, private communication). The emergence of the
purely radial deformation gives birth to the zero-degree Love number (Dahlen
1976, Matsuyama & Bills 2010). Using Dahlen’s results, Yoder (1982, eqns 21 -
22) demonstrated that the contribution of the radial part of the centripetal
potential to the change in mean motion of Phobos is about 3%, which is smaller
than the uncertainty in our knowledge of Phobos’ $\,k_{2}/Q\,$. It should be
mentioned, however, that the calculations by Dahlen (1976) and Matsuyama &
Bills (2010) were performed for steady (or slowly changing) rotation, and not
for libration. This means that Yoder’s application of Dahlen’s result to
Phobos requires extra justification.
What is important for us here is that the radial term does not interfere with
the calculation of the Love number. Being independent of the longitude, this
term generates no tidal torque either, provided the obliquity is neglected.
### D.4 The toroidal term
The inertial term
$~{}~{}-~{}\rho~{}\mbox{{\boldmath$\dot{\vec{\omega}}$}}\times\mbox{{\boldmath$\vec{r}$}}\,$
in the equation of motion (253) can be cast into the form
$\displaystyle-~{}\rho~{}\mbox{{\boldmath$\dot{\vec{\omega}}$}}\times\mbox{{\boldmath$\vec{r}$}}\;=\;\rho~{}\mbox{{\boldmath$\vec{r}$}}\times\nabla(\mbox{{\boldmath$\dot{\vec{\omega}}$}}\cdot\mbox{{\boldmath$\vec{r}$}})~{}~{}~{},$
(256)
whence we see that this term is of a toroidal type. Being almost nil for a
despinning primary, this force becomes important for a librating object.
In spherically-symmetric bodies, the toroidal force (256) generates toroidal
deformation only. This deformation produces neither radial uplifts nor
variations of the gravitational potential. Hence its presence does not
influence the expressions for the Love numbers associated with vertical
displacement ($\,h_{\textstyle{{}_{l}}}\,$) or the potential
($\,k_{\textstyle{{}_{l}}}\,$). As this deformation yields no change in the
gravitational potential of the tidally-perturbed body, there is no tidal
torque associated with this deformation. Being divergence-free, this
deformation entails no contraction or expansion either, i.e., it is purely
shear. Still, this deformation contributes to dissipation. Besides, since the
toroidal forcing results in the toroidal deformation, it can, in principle, be
associated with a “toroidal” Love number.
To estimate the dissipation caused by the toroidal rotational force, Yoder
(1982) introduced an equivalent effective torque. He pointed out that this
force becomes important when the magnitude of the physical libration is
comparable to that of the optical libration. According to Ibid., the toroidal
force contributes about 1.6% to the change of the mean motion of Phobos, which
is less than the input from the purely radial part.
## Appendix E The Andrade and Maxwell models at different frequencies
### E.1 Response of a sample obeying the Andrade model
Within the Andrade model, the tangent of the phase lag demonstrates the so-
called “elbow dependence”. At high frequencies, the tangent of the lag obeys a
power law with an exponent equal to $\,-\alpha\,$, where
$\,0\,<\,\alpha\,<\,1\,$. At low frequencies, the tangent of the lag once
again obeys a power law, this time though with an exponent
$~{}-\,(1-\alpha)\,$. This model fits well the behaviour of ices, metals,
silicate rocks, and many other materials.
However, the applicability of the Andrade law may depend upon the intensity of
the load and, accordingly, upon the damping mechanisms involved. Situations
are known in which, at low frequencies, anelasticity becomes much less efficient
than viscosity. In these cases, the Andrade model approaches, at low
frequencies, the Maxwell model.
#### E.1.1 The high-frequency band
At high frequencies, expression (89b) gets simplified. In the numerator, the
term with $\,z^{-\alpha}\,$ dominates: $~{}z^{-\alpha}\,\gg\,z^{-1}\,\zeta\,$,
which is equivalent to
$\,z\,\gg\,\zeta^{\textstyle{{}^{\textstyle{\frac{1}{1-\alpha}}}}}\,$. In the
denominator, the constant term dominates: $~{}1\,\gg\,z^{-\alpha}\,$, or
simply: $\,z\,\gg\,1\,$. To know which of the two conditions,
$\,z\,\gg\,\zeta^{\textstyle{{}^{\textstyle{\frac{1}{1-\alpha}}}}}\,$ or
$\,z\,\gg\,1\,$, is stronger, we recall that at high frequencies anelasticity
beats viscosity. So the $\,\alpha-$term in (85) is large enough. In other
words, the Andrade timescale $\,\tau_{{}_{A}}\,$ should be smaller (or, at
least, not much higher) than the viscoelastic time $\,\tau_{{}_{M}}\,$.
Accordingly, at high frequencies, $\,\zeta\,$ is smaller (or, at least, not
much higher) than unity. Hence, within the high-frequency band, either the
condition $\,z\,\gg\,1\,$ is stronger than
$\,z\,\gg\,\zeta^{\textstyle{{}^{\textstyle{\frac{1}{1-\alpha}}}}}\,$ or the
two conditions are about equivalent. This, along with (90) and (91) enables us
to write:
$\displaystyle\tan\delta~{}\,\approx\,~{}(\chi\,\tau_{{}_{A}})^{\textstyle{{}^{-\alpha}}}\sin\left(\frac{\alpha\,\pi}{2}\right)~{}\Gamma(\alpha+1)~{}\quad~{}\quad~{}\mbox{for}\quad\chi\;\gg\;\tau^{-1}_{{}_{A}}\,=\,\tau_{{}_{M}}^{-1}\zeta^{-1}~{}~{}~{}.~{}~{}\,~{}$
(257)
The tangent being small, the expression for $\,\sin\delta\,$ looks identical:
$\displaystyle\sin\delta\,~{}\approx\,~{}(\chi\,\tau_{{}_{A}})^{\textstyle{{}^{-\alpha}}}~{}\sin\left(\frac{\alpha\,\pi}{2}\right)~{}\Gamma(\alpha+1)~{}\quad~{}\quad~{}\mbox{for}~{}~{}~{}\chi\;\gg\;\tau^{-1}_{{}_{A}}\,=\,\tau_{{}_{M}}^{-1}\zeta^{-1}~{}~{}~{}.~{}~{}\,~{}$
(258)
#### E.1.2 The intermediate region
In the intermediate region, the behaviour of the phase lag $\,\delta\,$
depends upon the frequency-dependence of $\,\zeta\,$. For example, if there
happens to exist an interval of frequencies over which the conditions
$\,1\,\gg\,z\,\gg\,\zeta^{\textstyle{{}^{\textstyle{\frac{1}{1-\alpha}}}}}\,$
are obeyed, then over this interval we shall have: $\,1\ll\,z^{-\alpha}\,$ and
$\,z^{-\alpha}\gg\,\zeta z^{-1}\,$. Applying these inequalities to (89b), we
see that over such an interval of frequencies $\,\tan\delta\,$ will behave as
$\,z^{-2\alpha}\,\tan\left(\frac{\textstyle\alpha\,\pi}{\textstyle
2}\right)\,$.
#### E.1.3 The low-frequency band
At low frequencies, the term $\,z^{-1}\,\zeta\,$ becomes leading in the
numerator of (89b): $\,z^{-\alpha}\,\ll\,z^{-1}\,\zeta\,$, which requires
$\,z\,\ll\,\zeta^{\textstyle{{}^{\textstyle{\frac{1}{1-\alpha}}}}}\,$. In the
denominator, the term with $\,z^{-\alpha}\,$ becomes the largest:
$\,1\,\ll\,z^{-\alpha}\,$, whence $\,z\,\ll\,1\,$. Since at low frequencies
the viscous term in (85) is larger than the anelastic term, we expect that for
these frequencies $\,\zeta\,$ is larger (at least, not much smaller) than
unity. Thence the condition $\,z\,\ll\,1\,$ becomes sufficient. Its fulfilment
ensures the fulfilment of
$\,z\,\ll\,\zeta^{\textstyle{{}^{\textstyle{\frac{1}{1-\alpha}}}}}\,$. Thus we
state:
$\displaystyle\tan\delta~{}\approx\,~{}(\chi\,\tau_{{}_{A}})^{\textstyle{{}^{-(1-\alpha)}}}\frac{\zeta}{~{}\cos\left(\frac{\textstyle\alpha\,\pi}{\textstyle
2}\right)~{}\Gamma(\alpha+1)}\quad\quad\quad\mbox{for}\quad\quad\chi~{}\ll~{}\tau^{-1}_{{}_{A}}\,=~{}\tau_{{}_{M}}^{-1}\,\zeta^{-1}~{}~{}~{}.~{}~{}\quad\quad$
(259)
The appropriate expression for $\,\sin\delta\,$ will be:
$\displaystyle\sin\delta~{}\approx\,~{}1~{}-~{}O\left(~{}(\chi\,\tau_{{}_{A}})^{{{2(1-\alpha)}}}\,\zeta^{-2}\,\right)\quad\,\quad\quad\quad\,\mbox{for}\quad\quad\chi~{}\ll\;\tau_{{}_{A}}^{-1}\,=~{}\tau_{{}_{M}}^{-1}\,\zeta^{-1}~{}~{}~{},~{}~{}$
(260)
It is important to emphasise that the threshold
$\,\tau^{-1}_{{}_{A}}\,=~{}\tau_{{}_{M}}^{-1}\,\zeta^{-1}\,$ standing in (257)
and (258) is different from the threshold
$\,\tau^{-1}_{{}_{A}}\,=~{}\tau_{{}_{M}}^{-1}\,\zeta^{-1}\,$ showing up in
(259) and (260), even though these two thresholds are given by the same
expression. The reason for this is that the timescales $\,\tau_{{}_{A}}\,$ and
$\,\tau_{{}_{M}}\,$ are not fixed constants. While the Maxwell time is likely
to be a very slow function of the frequency, the Andrade time may undergo a
faster change over the transitional region: $\,\tau_{{}_{A}}\,$ must be larger
than $\,\tau_{{}_{M}}\,$ at low frequencies (so anelasticity yields to
viscosity), and must become shorter than or of the order of
$\,\tau_{{}_{M}}\,$ at high frequencies (so anelasticity becomes stronger).
This way, the threshold $\,\tau^{-1}_{{}_{A}}\,$ standing in (259 \- 260) is
lower than the threshold $\,\tau^{-1}_{{}_{A}}\,$ standing in (257 \- 258).
The gap between these thresholds is the region intermediate between the two
pronounced power laws (257) and (259).
#### E.1.4 The low-frequency band: a special case, the Maxwell model
Suppose that, below some threshold $\,\chi_{\textstyle{{}_{0}}}\,$,
anelasticity quickly becomes much less efficient than viscosity. This would
imply a steep increase of $\,\zeta\,$ (equivalently, of $\,\tau_{{}_{A}}\,$)
at low frequencies. Then, in (89b), we shall have: $\,1\gg\,z^{-\alpha}\,$ and
$\,z^{-\alpha}\ll\,\zeta z^{-1}\,$. This means that, for frequencies below
$\,\chi_{\textstyle{{}_{0}}}\,$, the tangent will behave as
$\displaystyle\tan\delta~{}\,\approx\,~{}z^{-1}\,\zeta~{}=~{}(\chi\,\tau_{{}_{M}})^{-1}~{}\quad~{}\quad~{}\mbox{for}\quad\quad\chi\,\ll\,\chi_{\textstyle{{}_{0}}}~{}~{}~{}.~{}~{}\,~{}$
(261)
This is the well-known viscous scaling law for the lag.
The study of ices and minerals under weak loads (Castillo-Rogez et al. 2011,
Castillo-Rogez 2011) has not shown such an abrupt vanishing of anelasticity.
However, Karato & Spetzler (1990) point out that this should be happening in
the Earth’s mantle, where the loads are much higher and anelasticity is caused
by unpinning of dislocations.
### E.2 The behaviour of
$\,|k_{\textstyle{{}_{l}}}(\chi)|\;\sin\epsilon_{\textstyle{{}_{l}}}(\chi)=-\,{\cal{I}}{\it{m}}\left[\,\bar{k}_{\textstyle{{}_{l}}}(\chi)\,\right]\,$
within the Andrade and Maxwell models
As we explained in subsection 4.1, products
$\,~{}k_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{{\it{l}}mpq}}})\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi_{\textstyle{{}_{{\it{l}}mpq}}})\,\;$
enter the $\;{\it l}mpq\;$ term of the Darwin-Kaula series for the tidal
potential, force, and torque. Hence the importance of knowing the behaviour of
these products as functions of the tidal frequency
$~{}\chi_{\textstyle{{}_{{\it{l}}mpq}}}~{}$.
#### E.2.1 Prefatory algebra
It ensues from (4.3) that
$\displaystyle\bar{k}_{\textstyle{{}_{l}}}(\chi)\;=\;\frac{3}{2\,(l-1)}\;\frac{\left(\;{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\;\right)^{2}\;+\;\left(\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]\;\right)^{2}\;+\;A_{\textstyle{{}_{l}}}\;J\;{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\;+i\;A_{\textstyle{{}_{l}}}\;J\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]}{\left(\;{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\;+\;A_{\textstyle{{}_{l}}}\;J\;\right)^{2}\;+\;\left(\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]\;\right)^{2}}~{}~{}~{},~{}~{}~{}$
(262)
whence
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\;\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\;=\;-\;{\cal{I}}{\it{m}}\left[\bar{k}_{\textstyle{{}_{l}}}(\chi)\right]\;=\;\frac{3}{2\,(l-1)}\;\frac{-\;A_{\textstyle{{}_{l}}}\;J\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]}{\left(\;{\cal{R}}{\it{e}}\left[\bar{J}(\chi)\right]\;+\;A_{\textstyle{{}_{l}}}\;J\;\right)^{2}\;+\;\left(\;{\cal{I}}{\it{m}}\left[\bar{J}(\chi)\right]\;\right)^{2}}~{}~{}~{},~{}~{}~{}~{}~{}$
(263)
$J\,=\,J(0)\,\equiv\,1/\mu\,=\,1/\mu(0)\;$ being the unrelaxed compliance (the
inverse of the unrelaxed shear modulus $\,\mu\,$). For an Andrade material,
the compliance $\;\bar{J}\;$ in the frequency domain is rendered by (86). Its
imaginary and real parts are given by (87 \- 88). It is then easier to rewrite
(263) as
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\;\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\;=\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$
$\displaystyle\frac{3~{}A_{\textstyle{{}_{l}}}}{2\,(l-1)}~{}\frac{\zeta~{}z^{-1}~{}+~{}z^{-\alpha}~{}\sin\left(\frac{\textstyle\alpha\,\pi}{\textstyle
2}\right)~{}\Gamma(\alpha\,+\,1)}{\left[A_{\textstyle{{}_{l}}}+1+z^{-\alpha}\,\cos\left(\frac{\textstyle\alpha\,\pi}{\textstyle
2}\right)\,\Gamma(\alpha+1)\right]^{2}+\,\left[\zeta\,z^{-1}+\,z^{-\alpha}\,\sin\left(\frac{\textstyle\alpha\,\pi}{\textstyle
2}\right)\,\Gamma(1+\alpha)\right]^{2}}~{}~{}~{},\quad\quad$ (264)
where
$\displaystyle
z~{}\equiv~{}\chi~{}\tau_{{}_{A}}\,=~{}\chi~{}\tau_{{}_{M}}\,\zeta$ (265)
and
$\displaystyle\zeta~{}\equiv~{}\frac{\tau_{{}_{A}}}{\tau_{{}_{M}}}~{}~{}~{}.$
(266)
For $\,\beta\,\rightarrow\,0\,$, i.e., for
$\,\tau_{{}_{A}}\,\rightarrow\,\infty\,$, (264) coincides with the appropriate
expression for a spherical Maxwell body.
#### E.2.2 The high-frequency band
Within the upper band, the term with $\,z^{-\alpha}\,$ dominates the
numerator, while $\,A_{\textstyle{{}_{l}}}\,$ dominates the denominator. The
domination of $\,z^{-\alpha}\,$ in the numerator requires that
$\,z\gg\zeta^{\textstyle{{}^{\,{\textstyle\frac{1}{1-\alpha}}}}}\,$, which is
the same as
$\,\chi\gg\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}\,$.
The domination of $\,A_{\textstyle{{}_{l}}}\,$ in the denominator requires:
$\,z\gg\,A^{\textstyle{{}^{\,-1/\alpha}}}\,$, which is the same as
$\,\chi\gg\,\tau_{{}_{M}}^{-1}\,\zeta^{-1}\,A^{\textstyle{{}^{\,-\,1/\alpha}}}\,$.
It also demands that $\,\zeta\,z^{-1}\ll A_{\textstyle{{}_{l}}}\,$, which is:
$\,\chi\gg\tau^{-1}_{{}_{M}}\,A^{-1}_{\textstyle{{}_{l}}}\,$.
For realistic values of $\,A_{\textstyle{{}_{l}}}\,$ (say, $10^{3}$) and
$\,\alpha\,$ (say, 0.25), we have:
$\,A_{\textstyle{{}_{l}}}^{\textstyle{{}^{\,-\,1/\alpha}}}\sim\,10^{-12}\,$. At high frequencies,
anelasticity beats viscosity, so $\,\zeta\,$ is less than unity (or, at least,
is not much larger than unity). On these grounds, the requirement
$\,\chi\gg\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}\,$
is the strongest here. Its fulfilment guarantees that of both
$\,\chi\gg\,\tau_{{}_{M}}^{-1}\,\zeta^{-1}\,A_{\textstyle{{}_{l}}}^{\textstyle{{}^{\,-\,1/\alpha}}}\,$
and $\,\chi\gg\tau_{{}_{M}}^{-1}\,A^{-1}_{\textstyle{{}_{l}}}\,$. Thus we have:
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)$
$\displaystyle\approx$
$\displaystyle\frac{3}{2\,(l-1)}\;\frac{1}{A_{\textstyle{{}_{l}}}}\,\sin\left(\frac{\alpha\,\pi}{2}\right)\,\Gamma(\alpha+1)~{}\,\zeta^{-\alpha}~{}\left(\,\tau_{{}_{M}}\,\chi\,\right)^{-\alpha}~{}~{}\quad~{}\mbox{for}\quad\quad\chi\,\gg\,\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}~{}~{}.\quad\quad\quad~{}\quad$
(267a)
#### E.2.3 The intermediate band
Within the intermediate band, the term $\,\zeta\,z^{-1}\,$ takes over in the
numerator, while $\,A_{\textstyle{{}_{l}}}\,$ still dominates in the
denominator. The domination of $\,\zeta\,z^{-1}\,$ in the numerator
implies that
$\,z\ll\zeta^{\textstyle{{}^{\,{\textstyle\frac{1}{1-\alpha}}}}}\,$, which is
equivalent to
$\,\chi\ll\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}\,$.
The domination of $\,A_{\textstyle{{}_{l}}}\,$ in the denominator requires
$\,\chi\gg\,\tau_{{}_{M}}^{-1}\,\zeta^{-1}\,A_{\textstyle{{}_{l}}}^{\textstyle{{}^{\,-\,1/\alpha}}}\,$
and $\,\chi\gg\tau_{{}_{M}}^{-1}A^{-1}_{\textstyle{{}_{l}}}\,$, as we just saw
above.
As we are considering the band where viscosity takes over anelasticity, we may
expect that here $\,\zeta\,$ is about unity or, more likely, larger. Given the
large value of $\,A_{\textstyle{{}_{l}}}\,$, we see that the condition
$\,\chi\gg\tau_{{}_{M}}^{-1}A^{-1}_{\textstyle{{}_{l}}}\,$ is stronger. Its
fulfilment guarantees the fulfilment of
$\,\chi\gg\,\tau_{{}_{M}}^{-1}\,\zeta^{-1}\,A_{\textstyle{{}_{l}}}^{\textstyle{{}^{\,-\,1/\alpha}}}\,$.
This way, we obtain:
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)$
$\displaystyle\approx$
$\displaystyle\frac{3}{2\,(l-1)}~{}\frac{1}{A_{\textstyle{{}_{l}}}}~{}\left(\,\tau_{{}_{M}}\,\chi\,\right)^{-1}\quad\quad~{}\quad~{}\mbox{for}\quad\quad\tau_{{}_{M}}^{-1}\,\zeta^{\textstyle{{}^{\textstyle\,\frac{\alpha}{1-\alpha}}}}\,\gg\,\chi\,\gg\,\tau_{{}_{M}}^{-1}A_{\textstyle{{}_{l}}}^{-1}~{}~{}.\quad\quad\quad\quad\quad$
(267b)
#### E.2.4 The low-frequency band
For frequencies lower than $\,\tau_{{}_{M}}^{-1}A_{\textstyle{{}_{l}}}^{-1}\,$
the Andrade model renders the same frequency dependence as that given (at
frequencies below $\,\tau^{-1}_{{}_{M}}\,$) by the Maxwell model:
$\displaystyle|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\;\sin\epsilon_{\textstyle{{}_{l}}}(\chi)$
$\displaystyle\approx$
$\displaystyle\frac{3}{2\,(l-1)}~{}{A_{\textstyle{{}_{l}}}}~{}\,\tau_{{}_{M}}~{}\chi\quad\quad\quad~{}\quad~{}\quad\mbox{for}\quad\quad\tau_{{}_{M}}^{-1}\,A_{\textstyle{{}_{l}}}^{-1}\,\gg\,\chi~{}~{}~{}.\quad\,\quad\quad\quad\,\quad\quad\quad\,\quad\quad$
(267c)
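The three limiting regimes (267a - 267c) can be checked against the exact expression (264) numerically. The following sketch evaluates both at one frequency per band; the parameter values are illustrative choices of ours:

```python
import numpy as np
from math import gamma, pi, sin, cos

l, A_l, alpha, zeta, tau_M = 2, 1.0e3, 0.25, 1.0, 1.0
s = sin(alpha * pi / 2) * gamma(1 + alpha)
c = cos(alpha * pi / 2) * gamma(1 + alpha)
pref = 3.0 / (2 * (l - 1))

def exact(chi):                                            # Eq. (264)
    z = chi * tau_M * zeta
    num = zeta / z + z**(-alpha) * s
    return pref * A_l * num / ((A_l + 1 + z**(-alpha) * c)**2 + num**2)

high = lambda chi: pref / A_l * s * (zeta * tau_M * chi)**(-alpha)   # Eq. (267a)
mid  = lambda chi: pref / A_l / (tau_M * chi)                        # Eq. (267b)
low  = lambda chi: pref * A_l * tau_M * chi                          # Eq. (267c)

for chi, approx in [(1.0e3, high), (1.0e-2, mid), (1.0e-6, low)]:
    print(chi, exact(chi), approx(chi))    # exact and asymptotic values agree
```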
#### E.2.5 Interpretation
Formulae (267a) and (267b) render a frequency-dependence mimicking that of
$\,|\stackrel{{\scriptstyle\bf{-}}}{{J}}(\chi)|\,\sin\delta(\chi)$
$=\,-\,{\cal{I}}\textit{m}\left[\bar{J}(\chi)\right]~{}$ in the high- and low-
frequency bands. This can be seen from comparing (267a) and (267b) with (258).
In contrast, (267c) reveals a peculiar feature inherent in the tidal lagging,
and absent in the lagging in a sample.
For terrestrial bodies, the condition
$~{}\tau_{{}_{M}}^{-1}\,A_{\textstyle{{}_{l}}}^{-1}\,\gg\,\chi\;$ puts the
values of $\,\chi\,$ below $\,10^{-10}\,$Hz, give or take several orders of
magnitude. Hence
$\,|\bar{k}_{\textstyle{{}_{l}}}(\chi)|\,\sin\epsilon_{\textstyle{{}_{l}}}(\chi)\,$
follows the linear scaling law (267c) only in an extremely close vicinity of
the commensurability where the frequency $\,\chi\,$ vanishes. Nonetheless it
is very important that $~{}|\bar{k}_{2}(\chi)|\,\sin\epsilon(\chi)~{}$ first
reaches a finite maximum and then decreases continuously and vanishes, as the
frequency goes to zero. This confirms that neither the tidal torque nor the
tidal force becomes infinite in resonances.
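The statement about a finite maximum followed by a decay to zero is easy to visualise by scanning (264) over a broad frequency range. A minimal sketch (the parameter values are illustrative only):

```python
import numpy as np
from math import gamma, pi, sin, cos

l, A_l, alpha, zeta, tau_M = 2, 1.0e3, 0.25, 1.0, 1.0
s, c = sin(alpha * pi / 2) * gamma(1 + alpha), cos(alpha * pi / 2) * gamma(1 + alpha)

chi = np.logspace(-8, 2, 2001)              # frequencies in units of 1/tau_M
z = chi * tau_M * zeta
num = zeta / z + z**(-alpha) * s
val = 1.5 * A_l / (l - 1) * num / ((A_l + 1 + z**(-alpha) * c)**2 + num**2)   # Eq. (264)

i_pk = int(np.argmax(val))
print(chi[i_pk], val[i_pk])   # a finite peak, located near chi ~ 1/(tau_M A_l)
print(val[0])                 # the quality function vanishes as chi -> 0
```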
## Appendix F The behaviour of $\;k_{l}(\chi)\equiv|\bar{k}_{l}(\chi)|\;$ in the limit of vanishing tidal frequency $\;\chi\;$, within the Andrade and Maxwell models
From (4.3), we obtain:
$\displaystyle|\bar{k}_{l}(\chi)|^{2}=\left(\frac{3}{2(l-1)}\right)^{2}\frac{|\,\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\,|^{2}}{|\,A_{\textstyle{{}_{l}}}\,J\,+\,\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\,|^{2}}\,=\left(\frac{3}{2(l-1)}\right)^{2}\,\frac{\left(~{}{\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}\right)^{2}~{}+~{}\left(~{}{\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}\right)^{2}}{\left(\,{\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]+\,A_{\textstyle{{}_{l}}}\,J\,\right)^{2}+\left(\,{\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\,\right)^{2}}~{}~{}~{}.~{}~{}~{}$
(268)
Bringing in expressions for the imaginary and real parts of the compliance,
and introducing notations
$\displaystyle E~{}\equiv~{}\eta^{-1}~{}~{}~{},\quad
B~{}\equiv~{}\beta~{}\sin\left(\,\frac{\alpha\,\pi}{2}\,\right)~{}\Gamma(1\,+\,\alpha)~{}~{}~{},\quad
D~{}\equiv~{}\beta~{}\cos\left(\,\frac{\alpha\,\pi}{2}\,\right)~{}\Gamma(1\,+\,\alpha)~{}~{}~{},\quad\quad$
(269)
we can write:
$\displaystyle|k_{l}(\chi)|^{2}~{}=~{}\left(\frac{3}{2(l-1)}\right)^{2}~{}\left[\,1\,-~{}\frac{\textstyle
2\,A_{\textstyle{{}_{l}}}\,J\,D\,}{E^{2}}~{}\chi^{2-\alpha}\,-\,\frac{\textstyle
2\,A_{\textstyle{{}_{l}}}\,J^{2}\,+\,A_{\textstyle{{}_{l}}}^{2}\,J^{2}\,}{E^{2}}~{}\chi^{2}\,+\,O(\chi^{3-2\alpha})\,\right]~{}~{}~{}$
(270)
and
$\displaystyle|k_{l}(\chi)|^{-2}~{}=~{}\left(\frac{2(l-1)}{3}\right)^{2}~{}\left[\,1\,+~{}\frac{\textstyle
2\,A_{\textstyle{{}_{l}}}\,J\,D\,}{E^{2}}~{}\chi^{2-\alpha}\,+\,\frac{\textstyle
2\,A_{\textstyle{{}_{l}}}\,J^{2}\,+\,A_{\textstyle{{}_{l}}}^{2}\,J^{2}\,}{E^{2}}~{}\chi^{2}\,+\,O(\chi^{3-2\alpha})\,\right]~{}~{}~{},$
(271)
whence
$\displaystyle|k_{l}(\chi)|~{}=~{}\frac{3}{2(l-1)}~{}\left[\,1\,-\,A_{\textstyle{{}_{l}}}\,J~{}\frac{\textstyle
D\,}{E^{2}}~{}\chi^{2-\alpha}\,-\,A_{\textstyle{{}_{l}}}\,J\,\frac{\textstyle
J\,+\,A_{\textstyle{{}_{l}}}\,J/2\,}{E^{2}}~{}\chi^{2}\,+\,O(\chi^{3-2\alpha})\,\right]~{}~{}~{}$
(272)
and
$\displaystyle|k_{l}(\chi)|^{-1}~{}=~{}\frac{2(l-1)}{3}~{}\left[\,1\,+\,A_{\textstyle{{}_{l}}}\,J~{}\frac{\textstyle
D\,}{E^{2}}~{}\chi^{2-\alpha}\,+\,A_{\textstyle{{}_{l}}}\,J\,\frac{\textstyle
J\,+\,A_{\textstyle{{}_{l}}}\,J/2\,}{E^{2}}~{}\chi^{2}\,+\,O(\chi^{3-2\alpha})\,\right]~{}~{}~{},$
(273)
the expansions being valid for
$\,\chi\,J/E\,\ll\,1\,+\,A_{\textstyle{{}_{l}}}\,$, i.e., for
$\,\chi\,\tau_{{}_{M}}\,\ll\,1\,+\,A_{\textstyle{{}_{l}}}\,$.
Rewriting (4.3) as
$\displaystyle\bar{k}_{l}(\chi)\,=\,\frac{3}{2(l-1)}\,\frac{\left({\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}+~{}i~{}{\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\right)~{}\left({\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}+~{}A_{\textstyle{{}_{l}}}~{}J~{}-~{}i~{}{\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\right)}{\left({\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}+~{}A_{\textstyle{{}_{l}}}~{}J\right)^{2}~{}+~{}\left({\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\right)^{2}}~{}~{}~{},\quad\quad\quad$
(274)
we extract its real part:
$\displaystyle{\cal{R}}{\it{e}}\left[\bar{k}_{l}(\chi)\right]\,=\,\frac{3}{2(l-1)}\,\;\frac{\left({\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\right)^{2}~{}+~{}\left({\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\right)^{2}~{}+~{}A_{\textstyle{{}_{l}}}~{}J~{}{\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]}{\left({\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}+~{}A_{\textstyle{{}_{l}}}~{}J\right)^{2}~{}+~{}\left({\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\right)^{2}}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle=~{}\frac{3}{2(l-1)}~{}\left[~{}1~{}-~{}A_{\textstyle{{}_{l}}}~{}J~{}\frac{{\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}+~{}A_{\textstyle{{}_{l}}}~{}J}{\left({\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}+~{}A_{\textstyle{{}_{l}}}~{}J\right)^{2}~{}+~{}\left({\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]\right)^{2}}~{}~{}\right]~{}~{}~{}.$
(275)
Insertion of the expressions for
$~{}{\cal{R}}{\it{e}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}$
and
$~{}{\cal{I}}{\it{m}}\left[\stackrel{{\scriptstyle\bf{\\_}}}{{J}}(\chi)\right]~{}$
into the latter formula entails:
$\displaystyle{\cal{R}}{\it{e}}\left[\bar{k}_{l}(\chi)\right]\,=~{}\frac{3}{2(l-1)}~{}\left[\,1\,-\,A_{\textstyle{{}_{l}}}\,J~{}\frac{\textstyle
D\,}{E^{2}}~{}\chi^{2-\alpha}\,-\,A_{\textstyle{{}_{l}}}\,J~{}\frac{\textstyle
J\,+\,A_{\textstyle{{}_{l}}}\,J}{E^{2}}~{}\chi^{2}\,+\,O(\chi^{3-2\alpha})\,\right]~{}~{}~{}.$
(276)
Expressions (273) and (276) enable us to write down the cosine of the shape
lag:
$\displaystyle\cos\epsilon_{l}\;=\;\frac{{\cal R}{\it
e}[k_{l}(\chi)]}{|k_{l}(\chi)|}\;=\;1\;-\;\frac{1}{2}\;\left(\,\frac{\textstyle
A_{\textstyle{{}_{l}}}\;J}{\textstyle
E}\,\right)^{2}\;\chi^{2}\;+\;O(\chi^{3-2\alpha})\;=\;1\;-\;\frac{1}{2}\;A_{\textstyle{{}_{l}}}^{2}\;(\tau_{{}_{M}}\chi)^{2}\;+\;O(\chi^{3-2\alpha})\;\;.~{}~{}$
(277)
Comparing this expression with (272), we see that, for the Andrade
($\beta\neq 0$) model, the evolution of
$\,k_{l}(\chi)\equiv|\bar{k}_{l}(\chi)|\,$ in the limit of small $\,\chi\,$,
unfortunately, cannot be approximated with a convenient expression
$\;k_{l}(\chi)\,\approx\,k_{l}(0)\,\cos\epsilon(\chi)\;$, which is valid for
simpler models (like the one of Kelvin-Voigt or SAS).
However, for the Maxwell model ($\beta=0$) expression (272) becomes:
$\displaystyle|k_{l}(\chi)|=\frac{3}{2(l-1)}\left[1-\frac{\textstyle
A_{\textstyle{{}_{l}}}J}{E^{2}}\left(J+A_{\textstyle{{}_{l}}}J/2\right)\chi^{2}+O(\chi^{3})\right]=\frac{3}{2(l-1)}\left[1-\textstyle
A_{\textstyle{{}_{l}}}\left(1+A_{\textstyle{{}_{l}}}/2\right)(\tau_{{}_{M}}\chi)^{2}+O(\chi^{3})\right]~{}~{},~{}$
(278)
which can be written as
$\displaystyle|k_{l}(\chi)|~{}\approx~{}\frac{3}{2(l-1)}~{}\left[\,1\,-\;\frac{1}{2}\;\left(\,\frac{\textstyle
A_{\textstyle{{}_{l}}}\,J}{E}\,\right)^{2}~{}\chi^{2}\,+\,O(\chi^{3})\,\right]~{}=~{}\frac{3}{2(l-1)}~{}\left[\,1\,-\;\frac{1}{2}\;A_{\textstyle{{}_{l}}}^{2}\;(\tau_{{}_{M}}\chi)^{2}\,+\,O(\chi^{3})\,\right]~{}~{},~{}~{}$
(279)
insofar as $\;A_{\textstyle{{}_{l}}}\gg 1\;$. Comparing this with (277), we
see that, for small terrestrial moons and planets (but not for superearths
whose $\,A_{l}\,$ is small), the following convenient approximation is valid,
provided the Maxwell model is employed:
$\displaystyle
k_{l}(\chi)\,\approx\,k_{l}(0)\,\cos\epsilon(\chi)\quad,\quad\mbox{for}\quad\,\chi\,\tau_{{}_{M}}\,\ll\,1\,+\,A_{\textstyle{{}_{l}}}\quad\quad\mbox{where}\quad
A_{\textstyle{{}_{l}}}\gg 1~{}~{}~{}.$ (280)
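A quick numerical illustration of (280) for a Maxwell body follows; it uses the Maxwell compliance $\,\bar{J}(\chi)=J\,[1-{\it i}/(\tau_{{}_{M}}\chi)]\,$ and frequencies low enough for the quadratic corrections in (277) and (279) to stay small. The parameter values are ours and serve illustration only:

```python
import numpy as np

l, A_l, tau_M = 2, 1.0e3, 1.0
k_static = 3.0 / (2 * (l - 1))                 # k_l(0) = 3/(2(l-1))

for chi in (1.0e-4, 1.0e-3, 3.0e-3):
    Jbar = 1.0 - 1j / (tau_M * chi)            # Maxwell compliance, in units of J
    kbar = k_static * Jbar / (Jbar + A_l)      # Eq. (4.3) with beta = 0
    cos_eps = kbar.real / abs(kbar)
    print(chi, abs(kbar), k_static * cos_eps)  # the last two columns nearly coincide
```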
## Appendix G The eccentricity functions
In our development, we take into account the expansions for
$\,G^{2}_{\textstyle{{}_{lpq}}}(e)\,$ over the powers of eccentricity, keeping
the terms up to $\,e^{6}\,$ inclusive. The table of the eccentricity
functions presented in the book by Kaula (1966) is not sufficient for our
purposes, because some of the $\,G_{\textstyle{{}_{lpq}}}(e)\,$ functions in
that table are given with lower precision. For example, the $\,e^{6}\,$ term
is missing in the approximation for $\,G_{\textstyle{{}_{200}}}(e)\,$.
Besides, that table omits several functions which are of order $\,e^{3}\,$. So
here we provide a more comprehensive table. The table is based on the
information borrowed from Cayley (1861) who tabulated various expansions
employed in astronomy. Among those, were series
$\displaystyle\left(\,\frac{r}{a}\,\right)^{-(l+1)}\left[\begin{array}[]{c}\cos\\\
\sin\end{array}\right]j\nu~{}=~{}\sum_{i\,=\,-\,\infty}^{\infty}\left[\begin{array}[]{c}\left[\,\cos{}\right]^{i}~{}\cos\\\
\left[\,\sin{}\right]^{i}~{}\sin\end{array}\right]i{\cal{M}}~{}~{}~{},$ (285)
$\nu\,$ and $\,{\cal{M}}\,$ signifying the true and mean anomalies, while
$\,\left[\,\cos{}\right]^{i}\,$ and $\,\left[\,\sin{}\right]^{i}\,$ denote
the coefficients tabulated by Cayley. These coefficients are polynomials in
the eccentricity. Cayley’s integer indices $~{}i\,,\,j~{}$ are connected with
Kaula’s integers $~{}l\,,\,p\,,\,q~{}$ via
$\displaystyle
l\,-\,2\,p~{}=~{}j~{}\quad~{},\quad\quad~{}l\,-\,2\,p\,+\,q~{}=~{}i~{}~{}~{}.$
(286)
With the latter equalities kept in mind, the eccentricity functions, for
$\,i\,\geq\,0\,$, are related to Cayley’s coefficients by
$\displaystyle
G_{\textstyle{{}_{lpq}}}(e)~{}=~{}\left[\,\cos{}\right]^{i}\,+~{}\left[\,\sin{}\right]^{i}~{}\quad,\quad\mbox{for}\quad
i~{}\geq~{}0~{}~{}~{}.$ (287)
To obtain the eccentricity functions for $\,i\,<\,0\,$, one has to keep in
mind that $\,\left[\,\cos{}\right]^{-i}\,=~{}\left[\,\cos{}\right]^{i}\,$,
while $\,\left[\,\sin{}\right]^{-i}\,=~{}-~{}\left[\,\sin{}\right]^{i}\,$. It
is then possible to demonstrate that
$\displaystyle
G_{\textstyle{{}_{lpq}}}(e)~{}=~{}\left[\,\cos{}\right]^{i}\,-~{}\left[\,\sin{}\right]^{i}~{}\quad,\quad\mbox{for}\quad
i~{}<~{}0~{}~{}~{}.$ (288)
Then the following expressions, for $\,l=2\,$, ensue from Cayley’s tables:
$\displaystyle G_{\textstyle{{}_{20~{}-11}}}(e)$ $\displaystyle=$
$\displaystyle
G_{\textstyle{{}_{20~{}-10}}}(e)~{}=~{}G_{\textstyle{{}_{20~{}-9}}}(e)~{}=~{}G_{\textstyle{{}_{20~{}-8}}}(e)~{}=~{}0~{}~{}~{},$
(289a) $\displaystyle G_{\textstyle{{}_{20~{}-7}}}(e)$ $\displaystyle=$
$\displaystyle\frac{15625}{129024}~{}e^{7}~{}~{}~{},$ (289b) $\displaystyle
G_{\textstyle{{}_{20~{}-6}}}(e)$ $\displaystyle=$
$\displaystyle\frac{4}{45}~{}e^{6}~{}~{}~{},$ (289c) $\displaystyle
G_{\textstyle{{}_{20~{}-5}}}(e)$ $\displaystyle=$
$\displaystyle\frac{81}{1280}~{}e^{5}~{}+~{}\frac{81}{2048}~{}e^{7}~{}~{}~{},$
(289d) $\displaystyle G_{\textstyle{{}_{20~{}-4}}}(e)$ $\displaystyle=$
$\displaystyle\frac{1}{24}~{}e^{4}\,+~{}\frac{7}{240}~{}e^{6}~{}~{}~{},$
(289e) $\displaystyle G_{\textstyle{{}_{20~{}-3}}}(e)$ $\displaystyle=$
$\displaystyle\frac{1}{48}~{}e^{3}\,+~{}\frac{11}{768}~{}e^{5}\,+~{}\frac{313}{30720}~{}e^{7}~{}~{}~{},$
(289f) $\displaystyle G_{\textstyle{{}_{20~{}-2}}}(e)$ $\displaystyle=$
$\displaystyle 0~{}~{}~{},$ (289g) $\displaystyle
G_{\textstyle{{}_{20~{}-1}}}(e)$ $\displaystyle=$
$\displaystyle-~{}\frac{1}{2}~{}e~{}+~{}\frac{1}{16}~{}e^{3}\,-~{}\frac{5}{384}~{}e^{5}\,-~{}\frac{143}{18432}~{}e^{7}~{}~{}~{},$
(289h) $\displaystyle G_{\textstyle{{}_{200}}}(e)$ $\displaystyle=$
$\displaystyle
1~{}-~{}\frac{5}{2}~{}e^{2}\,+~{}\frac{13}{16}~{}e^{4}\,-~{}\frac{35}{288}~{}e^{6}~{}~{}~{},$
(289i) $\displaystyle G_{\textstyle{{}_{201}}}(e)$ $\displaystyle=$
$\displaystyle\frac{7}{2}~{}e\,-~{}\frac{123}{16}~{}e^{3}\,+~{}\frac{489}{128}~{}e^{5}\,-~{}\frac{1763}{2048}~{}e^{7}$
(289j) $\displaystyle G_{\textstyle{{}_{202}}}(e)$ $\displaystyle=$
$\displaystyle\frac{17}{2}~{}e^{2}\,-~{}\frac{115}{6}~{}e^{4}\,+~{}\frac{601}{48}~{}e^{6}~{}~{}~{},$
(289k) $\displaystyle G_{\textstyle{{}_{203}}}(e)$ $\displaystyle=$
$\displaystyle\frac{845}{48}~{}e^{3}\,-~{}\frac{32525}{768}~{}e^{5}\,+~{}\frac{208225}{6144}~{}e^{7}~{}~{}~{},$
(289l) $\displaystyle G_{\textstyle{{}_{204}}}(e)$ $\displaystyle=$
$\displaystyle\frac{533}{16}~{}e^{4}\,-~{}\frac{13827}{160}~{}e^{6}~{}~{}~{},$
(289m) $\displaystyle G_{\textstyle{{}_{205}}}(e)$ $\displaystyle=$
$\displaystyle\frac{228347}{3840}~{}e^{5}\,-~{}\frac{3071075}{18432}~{}e^{7}~{}~{}~{},$
(289n) $\displaystyle G_{\textstyle{{}_{206}}}(e)$ $\displaystyle=$
$\displaystyle\frac{73369}{720}~{}e^{6}~{}~{}~{},$ (289o) $\displaystyle
G_{\textstyle{{}_{207}}}(e)$ $\displaystyle=$
$\displaystyle\frac{12144273}{71680}~{}e^{7}~{}~{}~{},$ (289p)
the other values of $\,q\,$ generating polynomials
$\,G_{\textstyle{{}_{20q}}}(e)\,$ whose leading terms are of order $\,e^{8}\,$
and higher.
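The tabulated polynomials can be spot-checked numerically, since $\,G_{\textstyle{{}_{lpq}}}(e)\,$ coincides with the Hansen coefficient $\,X^{-(l+1),\;l-2p}_{l-2p+q}(e)\,$, obtainable by quadrature over the mean anomaly. A minimal sketch (the function names are ours; the quadrature and the Kepler solver are standard):

```python
import numpy as np

def kepler_E(M, e, iters=60):
    """Solve Kepler's equation  E - e sin E = M  by Newton iteration."""
    E = M.copy()
    for _ in range(iters):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def G_lpq(l, p, q, e, N=4096):
    """G_lpq(e) as the Fourier coefficient X^{-(l+1), l-2p}_{l-2p+q}(e)."""
    M = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    E = kepler_E(M, e)
    r_a = 1.0 - e * np.cos(E)                                          # r/a
    nu = np.arctan2(np.sqrt(1.0 - e * e) * np.sin(E), np.cos(E) - e)   # true anomaly
    m, k = l - 2 * p, l - 2 * p + q
    return np.mean(r_a**(-(l + 1)) * np.cos(m * nu - k * M))

e = 0.2
print(G_lpq(2, 0, 0, e), 1 - 2.5 * e**2 + 13 / 16 * e**4 - 35 / 288 * e**6)  # Eq. (289i)
print(G_lpq(2, 0, -1, e), -e / 2 + e**3 / 16 - 5 / 384 * e**5)               # Eq. (289h)
```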
Since in our study we intend to employ the squares of these functions, with
terms up to $\,e^{6}\,$ only, we may completely omit the eccentricity
functions with $\,|q|\,\geq\,4\,$. In our approximation, the squares of the
eccentricity functions read:
$\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{20~{}-3}}}}}(e)$
$\displaystyle=$ $\displaystyle\frac{1}{2304}~{}e^{6}~{}+\,O(e^{8})~{}~{}~{},$
(290a) $\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{20~{}-2}}}}}(e)$
$\displaystyle=$ $\displaystyle 0~{}~{}~{},$ (290b) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{20~{}-1}}}}}(e)$ $\displaystyle=$
$\displaystyle\frac{1}{4}~{}e^{2}-\,\frac{1}{16}~{}e^{4}~{}+\,\frac{13}{768}~{}e^{6}+\,O(e^{8})~{}~{}~{},$
(290c) $\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{200}}}}}(e)~{}$
$\displaystyle=$ $\displaystyle
1\,-\,5\,e^{2}\,+\;\frac{63}{8}\;e^{4}\;-\;\frac{155}{36}~{}e^{6}~{}+\;O(e^{8})\;\;\;\,,\;\;\;$
(290d) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{20{{1}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle\frac{49}{4}\;e^{2}\;-\;\frac{861}{16}\;e^{4}\;+~{}\frac{21975}{256}~{}e^{6}~{}+~{}O(e^{8})~{}~{}~{},$
(290e) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{20{{2}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle\frac{289}{4}\,e^{4}\,-\,\frac{1955}{6}~{}e^{6}\,+~{}\,O(e^{8})~{}~{}~{},$
(290f) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{20{{3}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle\frac{714025}{2304}\,e^{6}\,+~{}\,O(e^{8})~{}~{}~{},$ (290g)
the squares of the others being of the order of $\,e^{8}\,$ or higher.
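These truncated squares follow mechanically from the polynomials (289); for instance, (290d) and (290e) can be reproduced with a short symbolic computation (a sketch using sympy):

```python
import sympy as sp

e = sp.symbols('e')
# G_200(e) and G_201(e) as tabulated in Eqs. (289i) and (289j)
G200 = 1 - sp.Rational(5, 2)*e**2 + sp.Rational(13, 16)*e**4 - sp.Rational(35, 288)*e**6
G201 = sp.Rational(7, 2)*e - sp.Rational(123, 16)*e**3 + sp.Rational(489, 128)*e**5

# Squares truncated at e^6, to be compared with Eqs. (290d) and (290e)
print(sp.series(G200**2, e, 0, 7).removeO())
print(sp.series(G201**2, e, 0, 7).removeO())
```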
Be mindful that, for $\,l=2\,$ we considered only the functions with
$\,p=0\,$. This is dictated by the fact that the inclination functions
$\,F_{\textstyle{{}_{lmp}}}\,=\,F_{\textstyle{{}_{22p}}}\,$ are of order
$\,i\,$ (and, accordingly, their squares and cross-products are of order
$\,i^{2}\,$) for all the values of $\,p\,$ except zero.
For $\,l=3\,$, the situation changes. The inclination functions
$\,F_{lmp}\,=\,F_{310}\,,\,F_{312}\,,\,F_{313}\,,\,F_{320}\,,$
$F_{321}\,,\,F_{322}\,,\,F_{323}\,,\,F_{331}\,,\,F_{332}\,,\,F_{333}\,$ are of
the order $\,O({\it i})\,$ or higher. The terms containing the squares or
cross-products of these functions may thus be omitted. (Specifically, by
neglecting the cross-terms we get rid of the mixed-period part of the
$\,l=3\,$ component.) What is left are the terms with $\,lmp\,=\,311\,$ and
$\,lmp\,=\,330\,$. These terms contain the squares of functions
$\displaystyle F_{311}({\it i})\,=~{}-~{}\frac{3}{2}\,+\,O({\it
i}^{2})~{}~{}~{}~{}~{}~{}\mbox{and}~{}~{}~{}~{}~{}~{}F_{330}({\it
i})\,=\,15\,+\,O({\it i}^{2})\;\;\;~{}~{}~{},~{}~{}~{}~{}~{}$ (291)
accordingly. From here, we see that, for $\,l=3\,$, we shall need to employ
the eccentricity functions
$\,G_{\textstyle{{}_{lpq}}}(e)\,=\,G_{\textstyle{{}_{30q}}}(e)\,$ and
$\,G_{\textstyle{{}_{lpq}}}(e)\,=\,G_{\textstyle{{}_{31q}}}(e)\,$.
The following expressions, for $\,l=3\,$ and $\,p=0\,$, ensue from Cayley’s
tables:
$\displaystyle G_{\textstyle{{}_{30~{}-11}}}(e)$ $\displaystyle=$
$\displaystyle
G_{\textstyle{{}_{30~{}-10}}}(e)~{}=~{}G_{\textstyle{{}_{30~{}-9}}}(e)~{}=~{}G_{\textstyle{{}_{30~{}-8}}}(e)~{}=~{}0~{}~{}~{},$
(292a) $\displaystyle G_{\textstyle{{}_{30~{}-7}}}(e)$ $\displaystyle=$
$\displaystyle\frac{8}{315}~{}e^{7}~{}~{}~{},$ (292b) $\displaystyle
G_{\textstyle{{}_{30~{}-6}}}(e)$ $\displaystyle=$
$\displaystyle\frac{81}{5120}~{}e^{6}~{}~{}~{},$ (292c) $\displaystyle
G_{\textstyle{{}_{30~{}-5}}}(e)$ $\displaystyle=$
$\displaystyle\frac{1}{120}~{}e^{5}~{}+~{}\frac{13}{1440}~{}e^{7}~{}~{}~{},$
(292d) $\displaystyle G_{\textstyle{{}_{30~{}-4}}}(e)$ $\displaystyle=$
$\displaystyle\frac{1}{384}~{}e^{4}\,+~{}\frac{1}{384}~{}e^{6}~{}~{}~{},$
(292e) $\displaystyle G_{\textstyle{{}_{30~{}-3}}}(e)$ $\displaystyle=$
$\displaystyle 0~{}~{}~{},$ (292f) $\displaystyle
G_{\textstyle{{}_{30~{}-2}}}(e)$ $\displaystyle=$
$\displaystyle\frac{1}{8}~{}e^{2}\,+~{}\frac{1}{48}~{}e^{4}\,+~{}\frac{55}{3072}~{}e^{6}~{}~{}~{},$
(292g) $\displaystyle G_{\textstyle{{}_{30~{}-1}}}(e)$ $\displaystyle=$
$\displaystyle-~{}e~{}+~{}\frac{5}{4}~{}e^{3}\,-~{}\frac{7}{48}~{}e^{5}\,+~{}\frac{23}{288}~{}e^{7}~{}~{}~{},$
(292h) $\displaystyle G_{\textstyle{{}_{300}}}(e)$ $\displaystyle=$
$\displaystyle
1~{}-~{}6~{}e^{2}\,+~{}\frac{423}{64}~{}e^{4}\,-~{}\frac{125}{64}~{}e^{6}~{}~{}~{},$
(292i) $\displaystyle G_{\textstyle{{}_{301}}}(e)$ $\displaystyle=$
$\displaystyle
5~{}e\,-~{}22~{}e^{3}\,+~{}\frac{607}{24}~{}e^{5}\,-~{}\frac{98}{9}~{}e^{7}$
(292j) $\displaystyle G_{\textstyle{{}_{302}}}(e)$ $\displaystyle=$
$\displaystyle\frac{127}{8}~{}e^{2}\,-~{}\frac{3065}{48}~{}e^{4}\,+~{}\frac{243805}{3072}~{}e^{6}~{}~{}~{},$
(292k) $\displaystyle G_{\textstyle{{}_{303}}}(e)$ $\displaystyle=$
$\displaystyle\frac{163}{4}~{}e^{3}\,-~{}\frac{2577}{16}~{}e^{5}\,+~{}\frac{1089}{5}~{}e^{7}~{}~{}~{},$
(292l) $\displaystyle G_{\textstyle{{}_{304}}}(e)$ $\displaystyle=$
$\displaystyle\frac{35413}{384}~{}e^{4}\,-~{}\frac{709471}{1920}~{}e^{6}~{}~{}~{},$
(292m) $\displaystyle G_{\textstyle{{}_{305}}}(e)$ $\displaystyle=$
$\displaystyle\frac{23029}{120}~{}e^{5}\,-~{}\frac{35614}{45}~{}e^{7}~{}~{}~{},$
(292n) $\displaystyle G_{\textstyle{{}_{306}}}(e)$ $\displaystyle=$
$\displaystyle\frac{385095}{1024}~{}e^{6}~{}~{}~{},$ (292o) $\displaystyle
G_{\textstyle{{}_{307}}}(e)$ $\displaystyle=$
$\displaystyle\frac{44377}{63}~{}e^{7}~{}~{}~{},$ (292p)
the other values of $\,q\,$ generating polynomials
$\,G_{\textstyle{{}_{30q}}}(e)\,$ and $\,G_{\textstyle{{}_{31q}}}(e)\,$, whose
leading terms are of order $\,e^{8}\,$ and higher.
The squares of some of these functions read, up to $\,e^{6}\,$ terms
inclusive, as:
$\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{30~{}-3}}}}}(e)$
$\displaystyle=$ $\displaystyle 0~{}~{}~{},$ (293a) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{30~{}-2}}}}}(e)$ $\displaystyle=$
$\displaystyle\frac{1}{64}~{}e^{4}~{}+~{}\frac{1}{192}~{}e^{6}~{}+\,O(e^{8})~{}~{}~{},$
(293b) $\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{30~{}-1}}}}}(e)$
$\displaystyle=$ $\displaystyle
e^{2}-\,\frac{5}{2}~{}e^{4}~{}+\,\frac{89}{48}~{}e^{6}+\,O(e^{8})~{}~{}~{},$
(293c) $\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{300}}}}}(e)~{}$
$\displaystyle=$ $\displaystyle
1~{}-~{}12~{}e^{2}~{}+\;\frac{1575}{32}\;e^{4}\;-\;\frac{2663}{32}~{}e^{6}~{}+\;O(e^{8})\;\;\;\,,\;\;\;$
(293d) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{30{{1}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle
25\;e^{2}\;-\;220\;e^{4}\;+~{}\frac{8843}{12}~{}e^{6}~{}+~{}O(e^{8})~{}~{}~{},$
(293e) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{30{{2}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle\frac{16129}{64}\,e^{4}\,-\,\frac{389255}{192}~{}e^{6}\,+~{}\,O(e^{8})~{}~{}~{},$
(293f) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{30{{3}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle\frac{26569}{16}~{}e^{6}\,+~{}\,O(e^{8})~{}~{}~{},$ (293g)
the squares of the others being of the order of $\,e^{8}\,$ or higher.
Finally, we write down the expressions for the eccentricity functions with
$\,l=3\,$ and $\,p=1\,$:
$\displaystyle G_{\textstyle{{}_{31~{}-9}}}(e)$ $\displaystyle=$
$\displaystyle G_{\textstyle{{}_{31~{}-8}}}(e)~{}=~{}0~{}~{}~{},$ (294a)
$\displaystyle G_{\textstyle{{}_{31~{}-7}}}(e)$ $\displaystyle=$
$\displaystyle\frac{16337}{2240}~{}e^{7}~{}~{}~{},$ (294b) $\displaystyle
G_{\textstyle{{}_{31~{}-6}}}(e)$ $\displaystyle=$
$\displaystyle\frac{48203}{9216}~{}e^{6}~{}~{}~{},$ (294c) $\displaystyle
G_{\textstyle{{}_{31~{}-5}}}(e)$ $\displaystyle=$
$\displaystyle\frac{899}{240}~{}e^{5}~{}+~{}\frac{2441}{480}~{}e^{7}~{}~{}~{},$
(294d) $\displaystyle G_{\textstyle{{}_{31~{}-4}}}(e)$ $\displaystyle=$
$\displaystyle\frac{343}{128}~{}e^{4}\,+~{}\frac{2819}{640}~{}e^{6}~{}~{}~{},$
(294e) $\displaystyle G_{\textstyle{{}_{31~{}-3}}}(e)$ $\displaystyle=$
$\displaystyle\frac{23}{12}~{}e^{3}~{}+~{}\frac{89}{24}~{}e^{5}~{}+~{}\frac{5663}{960}~{}e^{7}~{}~{}~{},$
(294f) $\displaystyle G_{\textstyle{{}_{31~{}-2}}}(e)$ $\displaystyle=$
$\displaystyle\frac{11}{8}~{}e^{2}\,+~{}\frac{49}{16}~{}e^{4}\,+~{}\frac{15665}{3072}~{}e^{6}~{}~{}~{},$
(294g) $\displaystyle G_{\textstyle{{}_{31~{}-1}}}(e)$ $\displaystyle=$
$\displaystyle~{}e~{}+~{}\frac{5}{2}~{}e^{3}\,+~{}\frac{35}{8}~{}e^{5}\,+~{}\frac{105}{16}~{}e^{7}~{}~{}~{},$
(294h) $\displaystyle G_{\textstyle{{}_{310}}}(e)$ $\displaystyle=$
$\displaystyle
1~{}+~{}2~{}e^{2}\,+~{}\frac{239}{64}~{}e^{4}\,+~{}\frac{3323}{576}~{}e^{6}~{}~{}~{},$
(294i) $\displaystyle G_{\textstyle{{}_{311}}}(e)$ $\displaystyle=$
$\displaystyle
3~{}e\,+~{}\frac{11}{4}~{}e^{3}\,+~{}\frac{245}{48}~{}e^{5}\,+~{}\frac{463}{64}~{}e^{7}$
(294j) $\displaystyle G_{\textstyle{{}_{312}}}(e)$ $\displaystyle=$
$\displaystyle\frac{53}{8}~{}e^{2}\,+~{}\frac{39}{16}~{}e^{4}\,+~{}\frac{7041}{1024}~{}e^{6}~{}~{}~{},$
(294k) $\displaystyle G_{\textstyle{{}_{313}}}(e)$ $\displaystyle=$
$\displaystyle\frac{77}{6}~{}e^{3}\,-~{}\frac{25}{48}~{}e^{5}\,+~{}\frac{4751}{480}~{}e^{7}~{}~{}~{},$
(294l) $\displaystyle G_{\textstyle{{}_{314}}}(e)$ $\displaystyle=$
$\displaystyle\frac{2955}{128}~{}e^{4}\,-~{}\frac{3463}{384}~{}e^{6}~{}~{}~{},$
(294m) $\displaystyle G_{\textstyle{{}_{315}}}(e)$ $\displaystyle=$
$\displaystyle\frac{3167}{80}~{}e^{5}\,-~{}\frac{8999}{320}~{}e^{7}~{}~{}~{},$
(294n) $\displaystyle G_{\textstyle{{}_{316}}}(e)$ $\displaystyle=$
$\displaystyle\frac{3024637}{46080}~{}e^{6}~{}~{}~{},$ (294o) $\displaystyle
G_{\textstyle{{}_{317}}}(e)$ $\displaystyle=$
$\displaystyle\frac{178331}{1680}~{}e^{7}~{}~{}~{},$ (294p)
and the squares:
$\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{31~{}-3}}}}}(e)$
$\displaystyle=$
$\displaystyle\frac{529}{144}~{}e^{6}~{}+\,O(e^{8})~{}~{}~{},$ (295a)
$\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{31~{}-2}}}}}(e)$
$\displaystyle=$
$\displaystyle\frac{121}{64}~{}e^{4}~{}+~{}\frac{539}{64}~{}e^{6}~{}+\,O(e^{8})~{}~{}~{},$
(295b) $\displaystyle G^{\,2}_{\textstyle{{}_{\textstyle{{}_{31~{}-1}}}}}(e)$
$\displaystyle=$ $\displaystyle
e^{2}+\,5~{}e^{4}~{}+\,15~{}e^{6}+\,O(e^{8})~{}~{}~{},$ (295c) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{310}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle
1~{}+\;4\;e^{2}\;+\;\frac{367}{32}~{}e^{4}\,+~{}\frac{7625}{288}~{}e^{6}~{}+\;O(e^{8})\;\;\;\,,$
(295d) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{31{{1}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle
9\;e^{2}\;+\;\frac{33}{2}\;e^{4}\;+~{}\frac{611}{16}~{}e^{6}~{}+~{}O(e^{8})~{}~{}~{},$
(295e) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{31{{2}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle\frac{2809}{64}\,e^{4}\,+\,\frac{2067}{64}~{}e^{6}\,+~{}\,O(e^{8})~{}~{}~{},$
(295f) $\displaystyle
G^{\,2}_{\textstyle{{}_{\textstyle{{}_{31{{3}}}}}}}(e)~{}$ $\displaystyle=$
$\displaystyle\frac{5929}{36}~{}e^{6}\,+~{}\,O(e^{8})~{}~{}~{},$ (295g)
the squares of the other functions from this set being of the order
$\,e^{8}\,$ or higher.
## Appendix H The $\,l=2\,$ and $\,l=3\,$ terms of the secular part of the torque
### H.1 The $\,l=2\,$ terms of the secular torque
Extracting the $\,l=2\,$ input from (113), we recall that only the
$\,(lmpq)=(220q)\,$ terms matter. Out of these terms, we need only the ones up
to $\,e^{6}\,$. These are the terms with $\,|q|\,\leq\,3\,$. They are given by
formulae (290) from Appendix G. Employing those formulae, we arrive at
$\displaystyle\overline{\cal T}_{\textstyle{{}_{{}_{\textstyle{{}_{l=2}}}}}}$
$\displaystyle=$ $\displaystyle\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{(lmp)=(220)}}}}}}+\,O({\it
i}^{2}\,\epsilon)=\,\frac{3}{2}\,G\,M_{sec}^{2}\,R^{5}\,a^{-6}\sum_{q=-3}^{3}\,G^{\textstyle{{}^{\,2}}}_{\textstyle{{}_{\textstyle{{}_{20\mbox{\it{q}}}}}}}(e)~{}k_{2}\;\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{220\mbox{\it{q}}}}}}}+\,O(e^{8}\,\epsilon)\,+\,O({\it
i}^{2}\,\epsilon)\quad\quad\quad\quad$ $\displaystyle=$
$\displaystyle\frac{3}{2}\;G\;M_{sec}^{2}\,R^{5}\,a^{-6}\,\left[~{}\frac{1}{2304}~{}e^{6}~{}k_{2}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{220~{}-3}}}}}~{}+\left(~{}\frac{1}{4}~{}e^{2}-\,\frac{1}{16}~{}e^{4}~{}+\,\frac{13}{768}~{}e^{6}~{}\right)~{}k_{2}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{220~{}-1}}}}}\right.$
$\displaystyle+$
$\displaystyle\left(1\,-\,5\,e^{2}\,+~{}\frac{63}{8}\;e^{4}\,-~{}\frac{155}{36}~{}e^{6}\right)\,k_{2}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{2200}}}}}\,+\left(\frac{49}{4}~{}e^{2}-\;\frac{861}{16}\;e^{4}+~{}\frac{21975}{256}~{}e^{6}\right)\,k_{2}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{2201}}}}}$
$\displaystyle+$
$\displaystyle\left.\left(~{}\frac{289}{4}\,e^{4}\,-\,\frac{1955}{6}~{}e^{6}~{}\right)~{}k_{2}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{2202}}}}}\,+\frac{714025}{2304}\,e^{6}~{}k_{2}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{2203}}}}}\,\right]+\,O(e^{8}\,\epsilon)\,+\,O({\it
i}^{2}\,\epsilon)~{}~{}~{},\quad\quad\quad\quad$ (296b)
where the absolute error $\,O(e^{8}\,\epsilon)\,$ has emerged because of our
neglect of terms with $\,|q|\,\geq\,4\,\,$, while the absolute error
$\,O(i^{2}\,\epsilon)\,$ arose from the truncation of terms with
$\,p\,\geq\,1\,$.
Recalling expression (109b), we can rewrite (296b) in a form indicating
explicitly at which resonance each term changes its sign. To this end, each
$~{}k_{l}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}~{}$ will be
rewritten as:
$~{}k_{l}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}|\,\mbox{sgn}\,\left[\,({\it
l}-2p+q)\,n\,-\,m\,\dot{\theta}\,\right]~{}$. This will render:
$\displaystyle\overline{\cal T}_{\textstyle{{}_{{}_{\textstyle{{}_{l=2}}}}}}$
$\displaystyle=$
$\displaystyle\frac{3}{2}~{}G\;M_{sec}^{2}\,R^{5}\,a^{-6}\,\left[~{}\frac{1}{2304}~{}e^{6}~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{220~{}-3}}}}}|~{}~{}\mbox{sgn}\,\left(\,-\,n\,-\,2\,\dot{\theta}\,\right)\right.$
(297) $\displaystyle+$
$\displaystyle\left(~{}\frac{1}{4}~{}e^{2}-\,\frac{1}{16}~{}e^{4}~{}+\,\frac{13}{768}~{}e^{6}~{}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{220~{}-1}}}}}|~{}~{}\mbox{sgn}\,\left(\,n\,-\,2\,\dot{\theta}\,\right)~{}$
$\displaystyle+$
$\displaystyle\left(1\,-\,5\,e^{2}\,+~{}\frac{63}{8}\;e^{4}\,-~{}\frac{155}{36}~{}e^{6}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2200}}}}}|~{}~{}~{}~{}\mbox{sgn}\,\left(\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(\frac{49}{4}~{}e^{2}-\;\frac{861}{16}\;e^{4}+~{}\frac{21975}{256}~{}e^{6}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2201}}}}}|~{}~{}\mbox{sgn}\,\left(\,3\,n\,-\,2\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(~{}\frac{289}{4}\,e^{4}\,-\,\frac{1955}{6}~{}e^{6}~{}\right)~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2202}}}}}|~{}\mbox{sgn}\,\left(\,2\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left.\frac{714025}{2304}~{}e^{6}~{}k_{2}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2203}}}}}|~{}~{}\mbox{sgn}\,\left(\,5\,n\,-\,2\,\dot{\theta}\,\right)~{}\,\right]+\,O(e^{8}\,\epsilon)\,+\,O({\it
i}^{2}\,\epsilon)~{}~{}~{},\quad\quad\quad\quad$
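The sign arguments above are simply the tidal mode frequencies $\,({\it l}-2p+q)\,n\,-\,m\,\dot{\theta}\,$ evaluated for $\,(lmp)=(220)\,$. A short sketch listing the spin rates at which each $\,(220q)\,$ term of (297) changes sign (variable names are ours):

```python
def mode_frequency(n, theta_dot, l, m, p, q):
    """Sign argument (l - 2p + q) n - m theta_dot used in Eq. (297)."""
    return (l - 2 * p + q) * n - m * theta_dot

# Spin rates (in units of the mean motion n) at which each (220q) term flips:
for q in (-3, -1, 0, 1, 2, 3):
    print(q, (2 + q) / 2)        # theta_dot / n = (2 + q)/2
```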
### H.2 The $\,l=3\,,~{}m=1\,$ terms of the secular torque
Getting the $\,l=3\,,\,m=1\,$ input from (113) and leaving in it only the
terms up to $\,e^{6}\,$, we obtain, with the aid of formulae (295) from Appendix
G, the following expression:
$\displaystyle\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{(lmp)=(311)}}}}}}$ $\displaystyle=$
$\displaystyle\frac{3}{8}\,G\,M_{sec}^{2}\,R^{7}\,a^{-8}\sum_{q=-3}^{3}\,G^{\textstyle{{}^{\,2}}}_{\textstyle{{}_{\textstyle{{}_{31\mbox{\it{q}}}}}}}(e)~{}k_{3}\;\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{311\mbox{\it{q}}}}}}}~{}+\,O(e^{8}\,\epsilon)$
$\displaystyle=$
$\displaystyle\frac{3}{8}\;G\;M_{sec}^{2}\,R^{7}\,a^{-8}\,\left[~{}\,\frac{529}{144}~{}e^{6}~{}k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{311~{}-3}}}}}~{}+\left(~{}\frac{121}{64}~{}e^{4}~{}+~{}\frac{539}{64}~{}e^{6}~{}\right)~{}k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{311~{}-2}}}}}\right.$
$\displaystyle+$
$\displaystyle\left(e^{2}+\,5\,e^{4}+\,15\,e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{311~{}-1}}}}}+\,\left(1\,+\,4\,e^{2}+\;\frac{367}{32}~{}e^{4}+~{}\frac{7625}{288}~{}e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3110}}}}}$
$\displaystyle+$
$\displaystyle\left(9\;e^{2}\;+\;\frac{33}{2}\;e^{4}\;+~{}\frac{611}{16}~{}e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3111}}}}}~{}+~{}\left(\frac{2809}{64}\,e^{4}\,+\,\frac{2067}{64}~{}e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3112}}}}}$
$\displaystyle+$
$\displaystyle\left.\frac{5929}{36}\,e^{6}\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3113}}}}}\,\right]+\,O(e^{8}\,\epsilon)~{}~{}~{}.\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$
(298b)
With the signs shown explicitly, this reads:
$\displaystyle\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{(lmp)=(311)}}}}}}$ $\displaystyle=$
$\displaystyle\frac{3}{8}\;G\;M_{sec}^{2}\,R^{7}\,a^{-8}\,\left[~{}\,\frac{529}{144}~{}e^{6}~{}k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{311~{}-3}}}}}|~{}\,\mbox{sgn}\,\left(\,-\,2\,n\,-\,\dot{\theta}\,\right)\right.$
(299) $\displaystyle+$
$\displaystyle\left(~{}\frac{121}{64}~{}e^{4}~{}+~{}\frac{539}{64}~{}e^{6}~{}\right)~{}k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{311~{}-2}}}}}|~{}\,\mbox{sgn}\,\left(\,-\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(e^{2}+\,5\,e^{4}+\,15\,e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{311~{}-1}}}}}|~{}\,\mbox{sgn}\,\left(\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(1\,+\,4\,e^{2}+\;\frac{367}{32}~{}e^{4}+~{}\frac{7625}{288}~{}e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3110}}}}}|~{}\,\mbox{sgn}\,\left(\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(9\;e^{2}\;+\;\frac{33}{2}\;e^{4}\;+~{}\frac{611}{16}~{}e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3111}}}}}|~{}\,\mbox{sgn}\,\left(\,2\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(\frac{2809}{64}\,e^{4}\,+\,\frac{2067}{64}~{}e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3112}}}}}|~{}\,\mbox{sgn}\,\left(\,3\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left.\frac{5929}{36}~{}e^{6}\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3113}}}}}|~{}\,\mbox{sgn}\,\left(\,4\,n\,-\,\dot{\theta}\,\right)\,\right]+\,O(e^{8}\,\epsilon)~{}~{}~{}.\quad\quad\quad\quad\quad\quad\quad\quad$
### H.3 The $\,l=3\,,~{}m=3\,$ terms of the secular torque
The second relevant group of terms with $\,l=3\,$ will read:
$\displaystyle\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{(lmp)=(330)}}}}}}$ $\displaystyle=$
$\displaystyle\frac{15}{8}\;G\;M_{sec}^{2}\,R^{7}\,a^{-8}\,\sum_{q=-3}^{3}\,G^{\textstyle{{}^{\,2}}}_{\textstyle{{}_{\textstyle{{}_{30\mbox{\it{q}}}}}}}(e)~{}k_{3}\;\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{330\mbox{\it{q}}}}}}}~{}+\,O(e^{8}\,\epsilon)$
$\displaystyle=$
$\displaystyle\frac{15}{8}\;G\;M_{sec}^{2}\,R^{7}\,a^{-8}\,\left[~{}\,\left(~{}\frac{1}{64}~{}e^{4}~{}+~{}\frac{1}{192}~{}e^{6}~{}\right)~{}k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{330~{}-2}}}}}\right.$
$\displaystyle+$
$\displaystyle\left(e^{2}-\,\frac{5}{2}\,e^{4}+\,\frac{89}{48}\,e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{330~{}-1}}}}}+\,\left(1\,-\,12\,e^{2}+\;\frac{1575}{32}~{}e^{4}-~{}\frac{2663}{32}~{}e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3300}}}}}$
$\displaystyle+$
$\displaystyle\left(25~{}e^{2}\,-~{}220~{}e^{4}\,+~{}\frac{8843}{12}~{}e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3301}}}}}~{}+~{}\left(\frac{16129}{64}\,e^{4}\,-~{}\frac{389255}{192}~{}e^{6}\right)\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3302}}}}}$
$\displaystyle+$
$\displaystyle\left.\frac{26569}{16}\,e^{6}\,k_{3}~{}\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{3303}}}}}\,\right]+\,O(e^{8}\,\epsilon)\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$
(300b)
or, with the signs shown explicitly:
$\displaystyle\overline{\cal
T}_{\textstyle{{}_{{}_{\textstyle{{}_{(lmp)=(330)}}}}}}$ $\displaystyle=$
$\displaystyle\frac{15}{8}\;G\;M_{sec}^{2}\,R^{7}\,a^{-8}\,\left[~{}\,\left(~{}\frac{1}{64}~{}e^{4}~{}+~{}\frac{1}{192}~{}e^{6}~{}\right)~{}k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{330~{}-2}}}}}|~{}\,\mbox{sgn}\,\left(\,n\,-\,3\,\dot{\theta}\,\right)\right.$
(301) $\displaystyle+$
$\displaystyle\left(e^{2}-\,\frac{5}{2}\,e^{4}+\,\frac{89}{48}\,e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{330~{}-1}}}}}|~{}\,\mbox{sgn}\,\left(\,2\,n\,-\,3\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(1\,-\,12\,e^{2}+\;\frac{1575}{32}~{}e^{4}-~{}\frac{2663}{32}~{}e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3300}}}}}|~{}\,\mbox{sgn}\,\left(\,n\,-\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(25~{}e^{2}\,-~{}220~{}e^{4}\,+~{}\frac{8843}{12}~{}e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3301}}}}}|~{}\,\mbox{sgn}\,\left(\,4\,n\,-\,3\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left(\frac{16129}{64}\,e^{4}\,-\,\frac{389255}{192}~{}e^{6}\right)\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3302}}}}}|~{}\,\mbox{sgn}\,\left(\,5\,n\,-\,3\,\dot{\theta}\,\right)$
$\displaystyle+$
$\displaystyle\left.\frac{26569}{16}~{}e^{6}\,k_{3}~{}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3303}}}}}|~{}\,\mbox{sgn}\,\left(\,2\,n\,-\,\dot{\theta}\,\right)\,\right]+\,O(e^{8}\,\epsilon)~{}~{}~{}.\quad\quad\quad\quad\quad\quad\quad\quad$
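For a rough idea of how much the degree-3 part matters, one may compare the leading $\,(lmpq)=(3300)\,$ term of (301) with the leading $\,(lmpq)=(2200)\,$ term of (297). The sketch below assumes, purely for a scale estimate, comparable values of $\,k_{2}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{2200}}}}}|\,$ and $\,k_{3}\sin|\epsilon_{\textstyle{{}_{\textstyle{{}_{3300}}}}}|\,$; the Moon-like value of $\,R/a\,$ is for illustration only:

```python
def degree3_over_degree2(R_over_a, e=0.0):
    """Ratio of the leading (3300) and (2200) secular-torque terms, assuming
    equal Love-number/lag factors (a crude scale estimate only)."""
    lead2 = (3.0/2.0) * (1 - 5*e**2 + 63/8*e**4 - 155/36*e**6)        # from (297)
    lead3 = (15.0/8.0) * (1 - 12*e**2 + 1575/32*e**4 - 2663/32*e**6)  # from (301)
    return (lead3 / lead2) * R_over_a**2

print(degree3_over_degree2(1737.4 / 384400.0))   # Moon-like R/a  ->  about 2.6e-5
```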
## Appendix I The $\,l=2\,$ and $\,l=3\,$ terms of the short-period part of the torque
The short-period part of the torque may be approximated with terms of degrees
2 and 3:
$\displaystyle\widetilde{\cal T}$ $\displaystyle=$
$\displaystyle\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{l=2}}}}}\,+~{}\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{l=3}}}}}\,+~{}O\left(\,\epsilon\,(R/a)^{9}\,\right)~{}$
(302) $\displaystyle=$ $\displaystyle\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(220)}}}}}\,+\left[\,\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(311)}}}}}\,+~{}\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(330)}}}}}\right]~{}+~{}O(\epsilon\,i^{2})~{}+~{}O\left(\,\epsilon\,(R/a)^{9}\,\right)~{}~{}~{},\quad\quad$
where
$\displaystyle\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(220)}}}}}=~{}3\,{G}\,M_{sec}^{\textstyle{{}^{2}}}\,R^{\textstyle{{}^{5}}}\,a^{-6}\sum_{q=-3~{}}^{3}{\sum_{\stackrel{{\scriptstyle\textstyle{{}^{~{}j=-3}}}}{{\textstyle{{}^{j~{}<~{}q}}}}}^{3}}G_{20q}(e)~{}G_{20j}(e)~{}\left\\{~{}\cos\left[\,{\cal{M}}\,(q-j)\,\right]~{}\,k_{\textstyle{{}_{2}}}\,\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{220q}}}}}\right.\quad\quad\quad$
$\displaystyle\left.-~{}\sin\left[\,{\cal{M}}\,(q-j)\,\right]~{}\,k_{\textstyle{{}_{2}}}\,\cos\epsilon_{\textstyle{{}_{\textstyle{{}_{220q}}}}}~{}\right\\}~{}+\,O(i^{2}\,\epsilon)\,+\,O(e^{7}\,\epsilon)~{}~{},\quad\quad\quad\quad\quad$
(303a)
$\displaystyle\left.\quad\quad~{}\quad\quad\right.=\,-\,3{G}\,M_{sec}^{\textstyle{{}^{2}}}\,R^{\textstyle{{}^{5}}}a^{-6}\sum_{q=-3~{}}^{3}{\sum_{\stackrel{{\scriptstyle\textstyle{{}^{~{}j=-3}}}}{{\textstyle{{}^{j~{}<~{}q}}}}}^{3}}G_{20q}(e)\,G_{20j}(e)\,k_{\textstyle{{}_{2}}}\,\sin\left[{\cal{M}}\,(q-j)\right]+O(i^{2}\epsilon)+O(e\epsilon)~{}~{},\quad~{}\quad\quad$
(303b)
$\displaystyle\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(311)}}}}}=\,\frac{3}{4}\,{G}\,M_{sec}^{\textstyle{{}^{2}}}\,R^{\textstyle{{}^{7}}}\,a^{-8}\sum_{q=-3~{}}^{3}{\sum_{\stackrel{{\scriptstyle\textstyle{{}^{~{}j=-3}}}}{{\textstyle{{}^{j\;<\;q}}}}}^{3}}G_{31q}(e)\;G_{31j}(e)~{}\left\\{\cos\left[\,{\cal{M}}\,(q-j)\,\right]~{}\,k_{\textstyle{{}_{3}}}\,\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{311q}}}}}\right.\quad\quad\quad$
$\displaystyle\left.-~{}\sin\left[\,{\cal{M}}\,(q-j)\,\right]~{}\,k_{\textstyle{{}_{3}}}\,\cos\epsilon_{\textstyle{{}_{\textstyle{{}_{311q}}}}}~{}\right\\}+\,O(i^{2}\,\epsilon)\,+\,O(e^{7}\,\epsilon)~{}~{},\quad\quad\quad\quad\quad$
(304a)
$\displaystyle\left.\quad\quad~{}\quad\quad\right.=\,-\,\frac{3}{4}\,{G}\,M_{sec}^{\textstyle{{}^{2}}}\,R^{\textstyle{{}^{7}}}a^{-8}\sum_{q=-3~{}}^{3}{\sum_{\stackrel{{\scriptstyle\textstyle{{}^{~{}j=-3}}}}{{\textstyle{{}^{j~{}<~{}q}}}}}^{3}}G_{31q}(e)\,G_{31j}(e)\,k_{\textstyle{{}_{3}}}\,\sin\left[{\cal{M}}\,(q-j)\right]+O(i^{2}\epsilon)+O(e\epsilon)~{}~{},~{}~{}\quad~{}\quad$
(304b)
$\displaystyle\widetilde{\cal
T}_{\textstyle{{}_{\textstyle{{}_{(lmp)=(330)}}}}}=\,\frac{15}{4}\,{G}\,M_{sec}^{\textstyle{{}^{2}}}\,R^{\textstyle{{}^{7}}}\,a^{-8}\sum_{q=-3~{}}^{3}{\sum_{\stackrel{{\scriptstyle\textstyle{{}^{~{}j=-3}}}}{{\textstyle{{}^{j\;<\;q}}}}}^{3}}G_{30q}(e)\;G_{30j}(e)~{}\left\\{~{}\cos\left[\,{\cal{M}}\,(q-j)\,\right]~{}\,k_{\textstyle{{}_{3}}}\,\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{330q}}}}}\right.\quad\quad\quad$
$\displaystyle\left.-~{}\sin\left[\,{\cal{M}}\,(q-j)\,\right]~{}\,k_{\textstyle{{}_{3}}}\,\cos\epsilon_{\textstyle{{}_{\textstyle{{}_{330q}}}}}~{}\right\\}+\,O(i^{2}\,\epsilon)\,+\,O(e^{7}\,\epsilon)~{}~{},\quad\quad$
(305a)
$\displaystyle\left.\quad\quad~{}\quad\quad\right.=\,-\,\frac{15}{4}\,{G}\,M_{sec}^{\textstyle{{}^{2}}}\,R^{\textstyle{{}^{7}}}a^{-8}\sum_{q=-3~{}}^{3}{\sum_{\stackrel{{\scriptstyle\textstyle{{}^{~{}j=-3}}}}{{\textstyle{{}^{j~{}<~{}q}}}}}^{3}}G_{30q}(e)\,G_{30j}(e)\,k_{\textstyle{{}_{3}}}\,\sin\left[{\cal{M}}\,(q-j)\right]+O(i^{2}\epsilon)+O(e\epsilon)~{}~{},~{}\quad~{}\quad$
(305b)
the expressions for the eccentricity functions being provided in Appendix G.
The overall numerical factors in (303 - 305) are twice the numerical factors
in (114), because in (303 - 305) we have $\,j<q\,$ and not $\,j\neq\,q\,$.
The right-hand sides of (303 - 305) contain $\,O(e\epsilon)\,$ instead of
$\,O(e^{7}\epsilon)\,$, because at the final step we approximated
$~{}\cos\left[\,{\cal{M}}\,(q-j)\,\right]\,k_{\textstyle{{}_{l}}}\,\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}-\,\sin\left[\,{\cal{M}}\,(q-j)\,\right]\,k_{\textstyle{{}_{l}}}\,\cos\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}~{}$
simply with
$~{}-\sin\left[\,{\cal{M}}\,(q-j)\,\right]\,k_{\textstyle{{}_{l}}}~{}$. Doing
so, we replaced the cosine of the lag, $\,\cos\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\,$, with unity, because the entire Darwin-Kaula
formalism is a linear approximation in the lags. We also neglected
$\,k_{\textstyle{{}_{l}}}\,\sin\epsilon_{\textstyle{{}_{\textstyle{{}_{lmpq}}}}}\,$
and kept only the leading term with $\,k_{\textstyle{{}_{l}}}\,$. This neglect
would be illegitimate in the secular part of the torque, but is probably
acceptable in the purely short-period part, because the latter part has a zero
average and therefore should be regarded as a small correction even in its
leading order. The latter circumstance also justifies the approximation of
$\,k_{\textstyle{{}_{l}}}=\,k_{\textstyle{{}_{l}}}(\chi)\,$ with
$\,k_{\textstyle{{}_{l}}}(0)\,$ in (303 - 305).
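For completeness, the dimensionless double sum entering (303b) can be evaluated as a function of the mean anomaly; as expected, it averages to zero over an orbit. A minimal sketch, with the $\,G_{\textstyle{{}_{20q}}}(e)\,$ truncated as tabulated in Appendix G (the function names are ours):

```python
import numpy as np

def G20(q, e):
    """G_20q(e), truncated as in Eqs. (289f)-(289l)."""
    table = {
        -3: e**3/48 + 11*e**5/768,
        -2: 0.0,
        -1: -e/2 + e**3/16 - 5*e**5/384,
         0: 1 - 5*e**2/2 + 13*e**4/16 - 35*e**6/288,
         1: 7*e/2 - 123*e**3/16 + 489*e**5/128,
         2: 17*e**2/2 - 115*e**4/6 + 601*e**6/48,
         3: 845*e**3/48 - 32525*e**5/768,
    }
    return table[q]

def short_period_factor(M, e):
    """Double sum of Eq. (303b); the torque is -3 G M_sec^2 R^5 a^{-6} k_2 times this."""
    total = np.zeros_like(M)
    for q in range(-3, 4):
        for j in range(-3, q):                      # j < q, as in (303b)
            total += G20(q, e) * G20(j, e) * np.sin(M * (q - j))
    return total

M = np.linspace(0.0, 2.0*np.pi, 9)
print(short_period_factor(M, e=0.05))
```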
## References
* [1] Alterman, Z.; Jarosch, H.; and Pekeris, C. 1959. “Oscillations of the Earth.” Proceedings of the Royal Society of London, Series A, Vol. 252, pp. 80 - 95.
* [2] Andrade, E. N. da C. 1910. “On the Viscous Flow in Metals, and Allied Phenomena.” _Proceedings of the Royal Society of London. Series A._ Vol. 84, pp. 1 - 12
* [3] Benjamin, D.; Wahr, J. ; Ray, R. D.; Egbert, G. D.; and Desai, S. D. 2006. “Constraints on mantle anelasticity from geodetic observations, and implications for the $\,J_{2}\,$ anomaly.” _Geophysical Journal International_ , Vol. 165, pp. 3 - 16
* [4] Bills, B. G.; Neumann, G. A.; Smith, D.E.; and Zuber, M.T. 2005\. “Improved estimate of tidal dissipation within Mars from MOLA observations of the shadow of Phobos.” Journal of Geophysical Research, Vol. 110, pp. 2376 - 2406. doi:10.1029/2004JE002376, 2005
* [5] Biot, M. A. 1954. “Theory of Stress-Strain Relaxation in Anisotropic Viscoelasticity and Relaxation Phenomena.” Journal of Applied Physics, Vol. 25, pp. 1385 - 1391
http://www.pmi.ou.edu/Biot2005/papers/FILES/054.PDF
* [6] Biot, M. A. 1958. Linear thermodynamics and the mechanics of solids. Proceedings of the Third US National Congress of Applied Mechanics, held at Brown University, pp. 1 - 18. Published by ASME, NY, June 1958
http://www.pmi.ou.edu/Biot2005/papers/FILES/076.PDF
* [7] Birger, B. I. 2007. “Attenuation of Seismic Waves and the Universal Rheological Model of the Earth’s Mantle.” _Izvestiya. Physics of the Solid Earth._ Vol. 49, pp. 635 - 641
* [8] Castillo-Rogez, J. 2009. “New Approach to Icy Satellite Tidal Response Modeling.” American Astronomical Society, DPS meeting 41, 61.07.
* [9] Castillo-Rogez, J. C.; Efroimsky, M., and Lainey, V. 2011. “The tidal history of Iapetus. Dissipative spin dynamics in the light of a refined geophysical model”. Journal of Geophysical Research – Planets, Vol. 116, p. E09008
doi:10.1029/2010JE003664
* [10] Castillo-Rogez, J. C., and Choukroun, M. 2010. “Mars’ Low Dissipation Factor at 11-h. Interpretation from an Anelasticity-Based Dissipation Model.” American Astronomical Society, DPS Meeting No 42, Abstract 51.02. Bulletin of the American Astronomical Society, Vol. 42, p. 1069
* [11] Churkin, V. A. 1998. “The Love numbers for the models of inelastic Earth.” Preprint No 121. Institute of Applied Astronomy. St.Petersburg, Russia. /in Russian/
* [12] Correia, A. C. M., and Laskar, J. 2009. “Mercury’s capture into the $\,3/2\,$ spin-orbit resonance including the effect of core-mantle friction.” _Icarus_ , Vol. 201, pp. 1 - 11
* [13] Correia, A. C. M., and Laskar, J. 2004. “Mercury’s capture into the $\,3/2\,$ spin-orbit resonance as a result of its chaotic dynamics.” _Nature_ , Vol. 429, pp. 848 - 850
* [14] Cottrell, A. H., and Aytekin, V. 1947. “Andrade’s creep law and the flow of zinc crystalls.” Nature, Vol. 160, pp. 328 - 329
* [15] Dahlen, F. A. 1976. “The passive influence of the oceans upon rotation of the Earth.” Geophys. Journal of the Royal Astronomical Society. Vol. 46, pp. 363 - 406
* [16] Darwin, G. H. 1879. “On the precession of a viscous spheroid and on the remote history of the Earth.” Philosophical Transactions of the Royal Society of London, Vol. 170, pp. 447 - 530
http://www.jstor.org/view/02610523/ap000081/00a00010/
* [17] Darwin, G. H. 1880. “On the secular change in the elements of the orbit of a satellite revolving about a tidally distorted planet.” Philosophical Transactions of the Royal Society of London, Vol. 171, pp. 713 - 891
http://www.jstor.org/view/02610523/ap000082/00a00200
* [18] Defraigne, P., and Smits, I. 1999. “Length of day variations due to zonal tides for an inelastic earth in non-hydrostatic equilibrium.” Geophysical J. International, Vol. 139, pp. 563 - 572
* [19] Dehant V. 1987a. “Tidal parameters for an inelastic Earth.” Physics of the Earth and Planetary Interiors, Vol. 49, pp. 97 - 116
* [20] Dehant V. 1987b. “Integration of the gravitational motion equations for an elliptical uniformly rotating Earth with an inelastic mantle.” Physics of the Earth and Planetary Interiors, Vol. 49, pp. 242 - 258
* [21] Duval, P. 1976. “Temporary or permanent creep laws of polycrystalline ice for different stress conditions.” Annales de Geophysique, Vol. 32, pp. 335 - 350
* [22] Eanes, R. J. 1995. _A study of temporal variations in Earth’s gravitational field using LAGEOS-1 laser ranging observations_. PhD thesis, University of Texas at Austin
* [23] Eanes, R. J., and Bettadpur, S. V. 1996. “Temporal variability of Earth’s gravitational field from laser ranging.” In: Rapp, R. H., Cazenave, A. A., and Nerem, R. S. (Eds.) _Global gravity field and its variations. Proceedings of the International Association of Geodesy Symposium No 116 held in Boulder CO in July 1995._ IAG Symposium Series. Springer 1997
ISBN: 978-3-540-60882-0
* [24] Efroimsky, M., and V. Lainey. 2007. “The Physics of Bodily Tides in Terrestrial Planets, and the Appropriate Scales of Dynamical Evolution.” _Journal of Geophysical Research – Planets_ , Vol. 112, p. E12003. doi:10.1029/2007JE002908
* [25] Efroimsky, M., and Williams, J. G. 2009. “Tidal torques. A critical review of some techniques.” _Celestial mechanics and Dynamical Astronomy,_ Vol. 104, pp. 257 - 289
arXiv:0803.3299
* [26] Efroimsky, M. 2012. “Tidal dissipation compared to seismic dissipation: in small bodies, earths, and superearths.” the Astrophysical Journal, Vol. 746, No 2, p. 150
doi:10.1088/0004-637X/746/2/150
arXiv:1105.3936
* [27] Ferraz-Mello, S.; Rodríguez, A.; and Hussmann, H. 2008. “Tidal friction in close-in satellites and exoplanets: The Darwin theory re-visited.” _Celestial mechanics and Dynamical Astronomy,_ Vol. 101, pp. 171 - 201
* [28] Fontaine, F. R.; Ildefonse, B.; and Bagdassarov, N. 2005. “Temperature dependence of shear wave attenuation in partially molten gabbronorite at seismic frequencies.” Geophysical Journal International, Vol. 163, pp. 1025 - 1038
* [29] Goldreich, P. 1963. “On the eccentricity of the satellite orbits in the Solar System.” Monthly Notices of the Royal Astronomical Society of London, Vol. 126, pp. 257 - 268
* [30] Gooding, R.H., and Wagner, C.A. 2008. “On the inclination functions and a rapid stable procedure for their evaluation together with derivatives.” Celestial Mechanics and Dynamical Astronomy, Vol. 101, pp. 247 - 272
* [31] Gribb, T.T., and Cooper, R.F. 1998. “Low-frequency shear attenuation in polycrystalline olivine: Grain boundary diffusion and the physical significance of the Andrade model for viscoelastic rheology.” Journal of Geophysical Research – Solid Earth, Vol. 103, pp. 27267 - 27279
* [32] Haddad, Y. M. 1995. Viscoelasticity of Engineering Materials. Chapman and Hall, London UK, p. 279
* [33] Hut, P. 1981. “Tidal evolution in close binary systems.” _Astronomy and Astrophysics_ , Vol. 99, pp. 126 - 140
* [34] Karato, S.-i. 2008. Deformation of Earth Materials. An Introduction to the Rheology of Solid Earth. Cambridge University Press, UK.
* [35] Karato, S.-i., and Spetzler, H. A. 1990. “Defect Microdynamics in Minerals and Solid-State Mechanisms of Seismic Wave Attenuation and Velocity Dispersion in the Mantle.” Reviews of Geophysics, Vol. 28, pp. 399 - 423
* [36] Kaula, W. M. 1961. “Analysis of gravitational and geometric aspects of geodetic utilisation of satellites.” The Geophysical Journal, Vol. 5, pp. 104 - 133
* [37] Kaula, W. M. 1964. “Tidal Dissipation by Solid Friction and the Resulting Orbital Evolution.” Reviews of Geophysics, Vol. 2, pp. 661 - 684
* [38] Kaula, W. M. 1966. _Theory of Satellite Geodesy: Applications of Satellites to Geodesy._ Blaisdell Publishing Co, Waltham MA. (Re-published in 2006 by Dover. ISBN: 0486414655.)
* [39] Landau, L., and Lifshitz, E. M. 1986. The Theory of Elasticity. Pergamon Press, Oxford 1986.
* [40] Landau, L., and Lifshitz, E. M. 1987. Fluid Mechanics. Pergamon Press, Oxford 1987.
* [41] Legros, H.; Greff, M.; and Tokieda, T. 2006 “Physics inside the Earth. Deformation and Rotation.” Lecture Notes in Physics, Vol. 682, pp. 23 - 66. Springer, Heidelberg
* [42] Love, A. E. H. 1909. “The Yielding of the Earth to Disturbing Forces.” Proceedings of the Royal Society of London. Series A, Vol. 82, pp. 73 - 88
* [43] Love, A. E. H. 1911. Some problems of geodynamics. Cambridge University Press, London. Reprinted by: Dover, NY 1967.
* [44] MacDonald, G. J. F. 1964. “Tidal Friction.” Reviews of Geophysics. Vol. 2, pp. 467 - 541
* [45] Matsuyama, I., and Bills, B. G. 2010. “Global contraction of planetary bodies due to despinning. Application to Mercury and Iapetus.” Icarus, Vol. 209, pp. 271 - 279
* [46] McCarthy, C.; Goldsby, D. L.; and Cooper, R. F. 2007. “Transient and Steady-State Creep Responses of Ice-I/Magnesium Sulfate Hydrate Eutectic Aggregates.” _38th Lunar and Planetary Science Conference XXXVIII_ , held on 12 - 16 March 2007 in League City, TX. LPI Contribution No 1338, p. 2429
* [47] Mignard, F. 1979. “The Evolution of the Lunar Orbit Revisited. I.” The Moon and the Planets. Vol. 20, pp. 301 - 315.
* [48] Mignard, F. 1980. “The Evolution of the Lunar Orbit Revisited. II.” The Moon and the Planets. Vol. 23, pp. 185 - 201
* [49] Miguel, M.-C.; Vespignani, A.; Zaiser, M.; and Zapperi, S. 2002. “Dislocation Jamming and Andrade Creep.” _Physical Review Letters_ , Vol. 89, pp. 165501 - 1655
* [50] Mitchell, B. J. 1995. “Anelastic structure and evolution of the continental crust and upper mantle from seismic surface wave attenuation.” _Reviews of Geophysics_ , Vol. 33, No 4, pp. 441 - 462.
|
arxiv-papers
| 2011-05-30T19:53:03 |
2024-09-04T02:49:19.180871
|
{
"license": "Public Domain",
"authors": "Michael Efroimsky",
"submitter": "Michael Efroimsky",
"url": "https://arxiv.org/abs/1105.6086"
}
|
1105.6238
|
# Universal contact and collective excitations of a strongly interacting Fermi
gas
Yun Li Sandro Stringari Dipartimento di Fisica, Università di Trento and
INO-CNR BEC Center, I-38123 Povo, Italy
###### Abstract
We study the relationship between Tan’s contact parameter and the macroscopic
dynamic properties of an ultracold trapped gas, such as the frequencies of the
collective oscillations and the propagation of sound in one-dimensional (1D)
configurations. We find that the value of the contact, extracted from the most
recent low-temperature measurements of the equation of state near unitarity,
reproduces with accuracy the experimental values of the collective frequencies
of the radial breathing mode at the lowest temperatures. The available
experimental results for the 1D sound velocities near unitarity are also
investigated.
###### pacs:
03.75.Kk, 03.75.Ss, 05.30.Fk, 67.85.-d
## I Introduction
Significant experimental and theoretical work has been devoted in recent years
to understanding the universal properties of interacting Fermi gases along the
BEC-BCS crossover (for a review, see, for example, Giorgini2008 ). More
recently, Tan has introduced a new concept for investigating universality
based on the so-called contact parameter, which relates the short-range
features of these systems to their thermodynamic properties Tan2008 ;
Werner2009 . The universality of Tan’s relations has been proven in a
series of experiments based on the measurement of the molecular fraction
Partridge2005 , the momentum distribution, the RF spectroscopic rate at high
frequencies, the adiabatic sweep and virial theorems Stewart2010 , the spin
structure factor Kuhnle2010 and the equation of state Navon2010 . The
temperature dependence of the contact parameter has been the object of recent
theoretical Hu2011 and experimental Kuhnle2011 work.
In this paper we discuss the relationship between Tan’s contact parameter and
the frequencies of the collective oscillations of a harmonically trapped Fermi
gas near unitarity. We also investigate the relationship with the sound
velocity in highly elongated configurations. The study of the collective
oscillations along the BEC-BCS crossover in terms of the contact parameter has
already been addressed in DelloStritto2010 , where upper bounds to the
collective frequencies were calculated using a sum rule approach. However, the
sum rule method developed in this work significantly overestimates the
hydrodynamic frequencies of trapped Fermi gases, being consequently
ineffective for a useful quantitative comparison with experimental data in the
superfluid hydrodynamic regime of low temperatures.
The present approach is based on a perturbative solution of the hydrodynamic
equations of superfluids near unitarity Bulgac2005 . This allows for an exact
analytic relationship between Tan’s contact parameter and the deviations of the
frequencies of the collective oscillations as well as of the one-dimensional
(1D) sound velocity from their values at unitarity. The high precision
achievable in the frequency measurements is in particular expected to provide
an alternative accurate determination of the contact parameter and to further
confirm the universality of this physical quantity.
## II Contact, equation of state and hydrodynamic equations
We start from the following definition of the contact parameter, based on the
so-called adiabatic sweep theorem Tan2008 :
$\left[\frac{dE}{d(1/a)}\right]_{N}=-\frac{\hbar^{2}\mathcal{I}}{4\pi M},$ (1)
where $E$ is the total energy of the system, $\mathcal{I}$ is the contact
parameter, $M$ is the atomic mass, and $a$ is the $s$-wave scattering length.
By using the local density approximation (LDA), the total energy can be
calculated as $E=\int d^{3}r(\epsilon+nV_{\text{ext}})$, where
$V_{\text{ext}}$ is the trapping potential and $\epsilon$ is the energy
density of a uniform gas. The equilibrium profile, in the LDA, is determined
by the equation
$\mu(n)+V_{\text{ext}}=\bar{\mu},$ (2)
where
$\mu(n)=\frac{\partial\epsilon(n,a)}{\partial n}$ (3)
is the chemical potential of uniform matter, providing the equation of state
of the gas, while $\bar{\mu}$ is the chemical potential of the trapped system,
fixed by the normalization condition. The derivative of the total energy with
respect to $1/a$ in Eq.(1) can then be conveniently written as
$\left[\frac{dE}{d(1/a)}\right]_{N}=\int
d^{3}r\,\left[\frac{\partial\epsilon(n,a)}{\partial(1/a)}\right]_{n},$ (4)
exploiting the link between the contact parameter and the equation of state.
On the other hand the chemical potential of uniform matter $\mu(n)$ is a
crucial ingredient of the hydrodynamic equations of superfluids. At zero
temperature, these equations actually read Pitaevskii2003 :
$\displaystyle\frac{\partial n}{\partial t}+\nabla\left(\mathbf{v}n\right)=0,$
(5) $\displaystyle\frac{\partial\mathbf{v}}{\partial
t}+\frac{1}{M}\nabla\left[\frac{1}{2}M\mathbf{v}^{2}+\mu(n)+V_{\text{ext}}\right]=0,$
so that an insightful question is to understand the link between the contact
parameter and the behavior of the collective modes emerging from the solutions
of Eq.(5). In the following we will discuss the problem near unitarity, where
we can expand the chemical potential in the form Tan2008 ; Bulgac2005
$\displaystyle\mu(n)=\frac{\hbar^{2}}{2M}\xi\left(3\pi^{2}n\right)^{2/3}-\frac{2\hbar^{2}}{5M}\frac{\zeta}{a}\left(3\pi^{2}n\right)^{1/3}$
(6)
with $\xi$ and $\zeta$ being two universal dimensionless parameters accounting
for the effects of the interactions. The first term in (6) exhibits the same
density dependence as the ideal Fermi gas with the renormalization factor
$\xi$. The second term (first-order correction in $1/a$) is directly related
to Tan’s parameter calculated at unitarity. Indeed, using Eqs.(1-4), one
easily finds that the contact, for a harmonically trapped system, is given by
$\frac{\mathcal{I}}{Nk_{F}}=\frac{512}{175}\zeta,$ (7)
where we have defined the Fermi wave vector $k_{F}=[3\pi^{2}n(0)]^{1/3}$
depending on the density in the center of the trap.
The small-amplitude oscillations of the gas can be studied by solving the
linearized hydrodynamic equations (5),
$-\omega^{2}\delta
n=\frac{1}{M}\nabla\cdot\left\\{n\nabla\left[\frac{\partial\mu(n)}{\partial
n}\delta n\right]\right\\},$ (8)
where $\delta n$ is the amplitude of the density oscillations around the
equilibrium value $n$. At unitarity $\mu(n)\propto n^{2/3}$, and the solutions
of Eq.(8), in the presence of harmonic trapping with axial symmetry, exhibit
the analytic form
$\omega^{2}(\lambda)=\frac{\omega_{\perp}^{2}}{3}\left[\left(4\lambda^{2}+5\right)\pm\sqrt{16\lambda^{4}-32\lambda^{2}+25}\,\right],$
(9)
with $\lambda=\omega_{z}/\omega_{\perp}$, which holds for the lowest $m=0$ modes
and includes the coupling between the monopole and quadrupole oscillations
caused by the non-spherical shape of the potential (here $m$ is the $z$
component of the angular momentum carried by the excitation). Notice that,
remarkably, result (9) does not depend on the parameter $\xi$ characterizing
the equation of state at unitarity. This relation for the collective
frequencies was actually first obtained in contexts different from the unitary
Fermi gas, such as the ideal Bose gas above $T_{c}$ Kagan1997 ; Griffin1997
and the ideal Fermi gas Amoruso1999 in the hydrodynamic regime. In both cases
the equation of state [$\mu(n,s)$ actually exhibits the same $n^{2/3}$
dependence for fixed entropy per particle $s$] and the hydrodynamic equations
yield the same dispersion law (9) for the scaling solutions of coupled
quadrupole monopole type in the presence of harmonic trapping. The prediction
(9) for the collective frequencies was checked experimentally at unitarity
providing a direct confirmation of the universality exhibited by the unitary
Fermi gas Kinast2004 ; Altmeyer2007 . The same oscillations were investigated
out of unitarity along the BEC-BCS regime, confirming the predictions of
theory Stringari2004 ; Astrakharchik2005 and in particular the fine details
of the equation of state accounted for by quantum Monte Carlo simulations.
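As a quick numerical illustration (a minimal sketch, not part of the original analysis), Eq. (9) can be evaluated directly for the two $m=0$ modes; for a spherical trap it gives the monopole frequency $2\omega_{\perp}$ and the quadrupole frequency $\sqrt{2}\,\omega_{\perp}$, while for a highly elongated trap the upper branch tends to $\sqrt{10/3}\,\omega_{\perp}$.

```python
# Minimal sketch (illustrative only): the two m = 0 hydrodynamic mode
# frequencies of Eq. (9), in units of omega_perp, as functions of the trap
# deformation lambda = omega_z / omega_perp.
import numpy as np

def m0_mode_frequencies(lam):
    """Return (omega_plus, omega_minus) / omega_perp from Eq. (9)."""
    root = np.sqrt(16.0 * lam**4 - 32.0 * lam**2 + 25.0)
    omega_plus = np.sqrt((4.0 * lam**2 + 5.0 + root) / 3.0)
    omega_minus = np.sqrt((4.0 * lam**2 + 5.0 - root) / 3.0)
    return omega_plus, omega_minus

print(m0_mode_frequencies(1.0))     # spherical trap: (2.0, 1.414...)
print(m0_mode_frequencies(0.038))   # elongated trap: upper branch -> sqrt(10/3)
```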
## III Collective frequencies and sound velocity shifts near unitarity
In the following we calculate the deviations of the collective frequencies
from the unitary value (9) holding for small values of the dimensionless
parameter $1/k_{F}a$. To this purpose we solve the hydrodynamic equations
using a perturbative procedure, which is generally applicable to any equation
of state having the form $\mu(n)=\mu_{0}(n)+\mu_{1}(n)$, where
$\mu_{1}(n)\ll\mu_{0}(n)$ represents the first-order correction. The density
profile $n(\mathbf{r})=n_{0}(\mathbf{r})+n_{1}(\mathbf{r})$, including the
first-order correction, can be obtained starting from the expansion of
$\mu(n)$ around the zero-order ground-state density profile
$n_{0}(\mathbf{r})$ and using the equilibrium condition (2) in LDA. This gives
$n_{1}=\left[\delta\bar{\mu}-\mu_{1}(n_{0})\right]\left/\frac{\partial\mu_{0}(n_{0})}{\partial
n_{0}}\right.,$ (10)
where $\delta\bar{\mu}$ is the first-order correction to the chemical
potential $\bar{\mu}$. Solving the linearized hydrodynamic equation (8)
perturbatively, we find the following expression for the frequency shift:
$\frac{\delta\omega}{\omega}=-\frac{\int
d^{3}r\left(\nabla^{2}f_{0}^{\ast}\right)\left[n_{1}-n_{0}\left(\partial
n_{1}/\partial n_{0}\right)\right]f_{0}}{2\omega^{2}M\int
d^{3}r\,f_{0}^{\ast}\;\delta n_{0}}$ (11)
for the collective oscillations caused by the perturbation $\mu_{1}(n)$ in the
chemical potential. In Eq.(11) $f_{0}=[\partial\mu_{0}(n_{0})/\partial
n_{0}]\delta n_{0}$ is the zero-order eigenfunction of (8), and $n_{1}$ is
given by (10). According to Eq.(11), one always has $\delta\omega=0$ for the
surface modes satisfying the condition $\nabla^{2}f_{0}=0$, as expected, due
to their independence of the equation of state. For compression modes one
instead expects a correction due to the changes in the equation of state. In
general, one can show that the first term in (10) proportional to
$\delta\bar{\mu}$ gives no contribution to the frequency shift and will be
consequently neglected in the following Note_02 .
Result (11) is valid for both Bose and Fermi systems. In the case of weakly
interacting Bose-Einstein condensed gas, it allows for the calculation of the
frequency shifts caused by the Lee-Huang-Yang corrections in the equation of
state. In this case, the corresponding density correction, neglecting the term
proportional to $\delta\bar{\mu}$, can be written as
$n_{1}=-32(n_{0}a)^{3/2}/3\sqrt{\pi}$, and using Eq.(11), one finds the
frequency shift of the compression mode as derived in Pitaevskii1998 . For the
Fermi gas near unitarity, we instead employ the expansion of the equation of
state (6) around the density profile $n_{0}(\mathbf{r})$ calculated at
unitarity. Ignoring also in this case the irrelevant term proportional to
$\delta\bar{\mu}$, we find
$n(\mathbf{r})=n_{0}(\mathbf{r})+n_{1}(\mathbf{r})=n_{0}(\mathbf{r})-\frac{3\beta}{2\alpha}n_{0}^{2/3}(\mathbf{r}),$
(12)
where $\alpha=\hbar^{2}\xi(3\pi^{2})^{2/3}/2M$,
$\beta=-2\hbar^{2}\zeta(3\pi^{2})^{1/3}/(5Ma)$, and
$n_{0}=[(\bar{\mu}_{0}-V_{\text{ext}})/\alpha]^{3/2}$, with $\bar{\mu}_{0}$
being the chemical potential evaluated for a trapped system at unitarity.
After some straightforward algebra, one finds the following expression for the
frequency shift of the collective oscillations near unitarity:
$\frac{\delta\omega}{\omega}=\frac{\beta}{6\omega^{2}M}\frac{\int
d^{3}r\left(\nabla^{2}f_{0}^{\ast}\right)n_{0}^{2/3}\,f_{0}}{\int
d^{3}r\,f_{0}^{\ast}n_{0}^{1/3}\,f_{0}}.$ (13)
One can check that this result is equivalent to the results of Eqs. (24) and (25)
in Bulgac2005 , where a similar expansion was carried out near unitarity.
Figure 1: (Color online) Functions $\eta_{+}$ (solid blue) and $\eta_{-}$
(dashed red) relative to the higher and lower $m=0$ modes as a function of the
deformation parameter $\lambda=\omega_{z}/\omega_{\perp}$.
The eigenfunctions for the $m=0$ modes (9) have the form $f_{0}\sim
a+br_{\perp}^{2}+cz^{2}$, where
$\displaystyle\frac{a}{b}$
$\displaystyle=-\frac{\bar{\mu}_{0}[4\lambda^{2}+3(\omega/\omega_{\perp})^{2}-10]}{6M\omega_{z}^{2}},$
(14) $\displaystyle\frac{c}{b}$
$\displaystyle=\frac{3(\omega/\omega_{\perp})^{2}-10}{2}.$
After some lengthy but straightforward algebra, one finally obtains the
following result for the frequency shift:
$\frac{\delta\omega}{\omega}=\left[\frac{128\zeta}{525\pi\xi}\,\eta_{\pm}(\lambda)\right]\left(k_{F}a\right)^{-1}=\left[\frac{\mathcal{I}/Nk^{0}_{F}}{12\pi\xi^{1/2}}\,\eta_{\pm}(\lambda)\right]\left(k^{0}_{F}a\right)^{-1},$
(15)
where
$\eta_{\pm}(\lambda)=\frac{1}{2}\pm\frac{3}{2\sqrt{16\lambda^{4}-32\lambda^{2}+25}},$
(16)
with the index $\pm$ referring to the higher $(+)$ and lower $(-)$ frequencies
of (9). In the second equality of (15), we have used relation (7) for the
contact parameter calculated for a harmonically trapped atomic cloud, and we
have expressed the Fermi momentum $k_{F}$ in terms of the ideal Fermi gas wave
vector $k^{0}_{F}=(24N)^{1/6}a_{ho}^{-1}$, with $a_{ho}$ being the geometrical
average of the harmonic oscillator lengths. For the same total number of
atoms, the density $n(0)$ in the center of the trap for the unitary gas is
$\xi^{-3/4}$ times larger than for the ideal gas, yielding
$k_{F}=k^{0}_{F}\xi^{-1/4}$.
Eq.(15) represents the main result of the present paper. It relates Tan’s
contact, a central quantity for the universality relations holding in
interacting systems, with the low-energy macroscopic dynamics of the system,
namely, the frequencies of the collective oscillations. These equations can be
used either to predict theoretically the frequency shifts, once the
dimensionless parameters $\xi$ and $\zeta$ or Tan’s contact are known, or to
determine experimental constraints on the value of $\mathcal{I}$. In Fig.1 we
plot $\eta_{\pm}$ as a function of the deformation parameter $\lambda$. For a
spherical trap ($\lambda=1$) one obtains $\eta_{+}=1$ for the monopole mode
and $\eta_{-}=0$, confirming, as already anticipated, that there is no
frequency shift for the surface quadrupole mode. When $\lambda\neq 1$, the two
modes are coupled. In both the spherical-trap limit and the highly elongated-trap
limit ($\lambda\ll 1$), Eq. (15) reproduces the results of Bulgac2005 .
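The following minimal sketch (not the authors’ code; the parameter values $\xi=0.41$, $\zeta=0.93$ and the trap deformation $\lambda=0.038$ are those quoted in the text) evaluates Eq. (15) for the slope of the radial breathing mode versus $1/(k^{0}_{F}a)$, using $k_{F}=\xi^{-1/4}k^{0}_{F}$.

```python
# Minimal sketch: first-order frequency shift of Eq. (15) for the coupled
# m = 0 modes, expressed versus the interaction parameter 1/(k_F^0 a).
import numpy as np

def eta_pm(lam):
    """Eq. (16): (eta_plus, eta_minus) as functions of lambda."""
    root = np.sqrt(16.0 * lam**4 - 32.0 * lam**2 + 25.0)
    return 0.5 + 1.5 / root, 0.5 - 1.5 / root

def frequency_shift(inv_kF0a, lam, xi=0.41, zeta=0.93, mode="+"):
    """delta_omega / omega from Eq. (15), with k_F = xi^(-1/4) k_F^0."""
    eta_plus, eta_minus = eta_pm(lam)
    eta = eta_plus if mode == "+" else eta_minus
    return 128.0 * zeta / (525.0 * np.pi * xi) * eta * xi**0.25 * inv_kF0a

# Slope of the radial breathing mode (upper branch) for lambda = 0.038:
print(frequency_shift(1.0, lam=0.038))   # ~0.11, cf. the black line in Fig. 2
```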
Figure 2: (Color online) Frequency of the radial compression mode for an
elongated Fermi gas in units of the radial frequency ($\omega_{\perp}$)
Altmeyer2007 . The deformation of the trap is $\lambda=0.038$. The dashed red
curve refers to the equation of state based on Monte Carlo simulations. Open
and solid circles correspond to experimental measurements. The black straight
line shows the slope of the frequency shifts around unitarity obtained from
Eq.(15) using $\xi=0.41$ and $\zeta=0.93$ as extracted from the direct
measurement of the equation of state Navon2010 (corresponding to
$\mathcal{I}/Nk^{0}_{F}\simeq 3.4$). The green (light gray) straight line
instead corresponds to the values $\xi=0.41$ and $\zeta=0.74$ yielding
$\mathcal{I}/Nk^{0}_{F}\simeq 2.7$ (see text).
In Fig.2 we show the prediction of Eq.(15) in the trapping conditions of the
experiment of Altmeyer2007 , using the values $\xi=0.41$, $\zeta=0.93$
extracted from the direct measurement of the equation of state Navon2010
carried out at the lowest temperatures and yielding the value
$\mathcal{I}/Nk_{F}^{0}\simeq 3.4$ for the contact parameter. These values
differ by a few percent from the most recent theoretical predictions based on
Monte Carlo simulations at $T=0$ (see, for example, Gandolfi2011 and
references therein). The predicted slope $\delta\omega/\omega\sim
0.11/(k^{0}_{F}a)$ (black line) of the collective frequencies of the radial
breathing mode at unitarity turns out to be in very good agreement with
experiments ($\delta\omega/\omega\sim 0.12/(k_{F}^{0}a)$ Note_01 ).
In order to appreciate the sensitivity of the slope to the choice of the
values of $\xi$ and $\zeta$, in Fig.2, we also show the predictions [green
(light gray) line] for the frequency shifts using the values $\xi=0.41$ and
$\zeta=0.74$, yielding the smaller value $\mathcal{I}/Nk^{0}_{F}\simeq 2.7$
for the contact. This value is closer to the measurement of the contact
carried out in Stewart2010 and Kuhnle2011 at slightly higher values of
temperature. The resulting slope $\delta\omega/\omega\sim 0.09/(k_{F}^{0}a)$
provides a worse description of the experimental data for the collective
frequencies.
The collective oscillations discussed above represent the discretized values
of the usual sound waves described by hydrodynamics. It is actually useful to
calculate also the changes of the sound velocity of a uniform sample near
unitarity in terms of the dimensionless parameters $\xi$ and $\zeta$ or,
equivalently, Tan’s contact parameter. For bulk Fermi gases, one finds $\delta
c/c=-\zeta/(5\xi k_{F}a)$.
Figure 3: (Color online) Normalized 1D sound velocity $c_{1}/v_{F}$ vs the
interaction parameter $1/k^{0}_{F}a$ in the unitary regime, where $v_{F}=\hbar
k_{F}^{0}/M$ is the Fermi velocity for ideal Fermi gas and
$k_{F}^{0}=k_{F}\xi^{1/4}$ is the corresponding Fermi wave vector. Circles
with error bars correspond to the experimental measurements Joseph2007 . The
dashed red curve is a quadratic fit Thomas_private , and the black
straight line is the slope of $\delta c_{1}/c_{1}$ calculated from Eq.(18)
using the values $\xi=0.41$ and $\zeta=0.93$ extracted from Navon2010 .
In the case of cylindrical geometry with radial harmonic trapping the sound
velocities can be also calculated starting from the 1D hydrodynamic expression
Capuzzi2006
$c_{1}=\left\\{\left.\frac{1}{M}\int d^{2}r_{\perp}\,n\right/\int
d^{2}r_{\perp}\,\left[\frac{\partial\mu(n)}{\partial
n}\right]^{-1}\right\\}^{1/2},$ (17)
where the density profile along the radial direction should be evaluated
in LDA. The above results hold for sound waves characterized by wavelengths
significantly larger than the radial size of the gas. Carrying out a
perturbative development around unitarity, similar to the one employed above
for the calculation of the excitation frequencies, we obtain the result
$\frac{\delta c_{1}}{c_{1}}=-\frac{3\zeta}{20\xi}\left(k_{F}a\right)^{-1},$
(18)
where $c_{1}=(\xi/5)^{1/2}(\hbar k_{F}/M)$ is the 1D sound velocity depending
on the density in the center of the trap via $k_{F}$. Notice that $c_{1}$
differs from the sound velocity calculated at the central value of the density
by the factor $\sqrt{3/5}$. In Fig.3 we show the experimental values of
$c_{1}$ in the unitary regime measured in Joseph2007 together with the slope
evaluated from Eq.(18) using for $\xi$ and $\zeta$ the $T\simeq 0$ values
obtained from the experiment Navon2010 . A quadratic fit is applied to the
experimental data (dashed red curve) Thomas_private . The figure shows that the
value of the slope at unitarity [black curve, $\delta
c_{1}/c_{1}\sim-0.27/(k_{F}^{0}a)$] overestimates the experimental linear
changes in a visible way, suggesting that these measurements were carried
out at relatively higher temperatures, such that the data cannot be accurately
reproduced by employing the $T=0$ values of the contact parameter. Another
source of disagreement might be due to the fact that the conditions of
applicability of the 1D expression (17) for the sound velocity are not fully
satisfied in the experiment of Joseph2007 .
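As a simple numerical check (not from the paper), rewriting Eq. (18) in terms of $k^{0}_{F}=\xi^{1/4}k_{F}$ with the values $\xi=0.41$ and $\zeta=0.93$ of Navon2010 indeed reproduces the slope quoted above:

```python
# Minimal check: slope of the 1D sound-velocity shift, Eq. (18), expressed
# versus 1/(k_F^0 a) using k_F = xi^(-1/4) k_F^0 (values assumed from the text).
xi, zeta = 0.41, 0.93
slope = -3.0 * zeta / (20.0 * xi) * xi**0.25
print(round(slope, 3))   # -0.272, i.e. delta c1 / c1 ~ -0.27 / (k_F^0 a)
```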
## IV Conclusion
In conclusion our analysis reveals consistency between the experimental
results for the contact, obtained through the measurement of the equation of
state carried out at $T\simeq 0$ in Navon2010 , and the measurements of the
collective frequencies carried out in Altmeyer2007 at the lowest
temperatures. It would be interesting to extend our analysis of the frequency
shifts to finite temperature. The analysis could be simplified by the fact
that the scaling modes of monopole and quadrupole types have a universal
behavior at unitarity and their frequencies, calculated in the hydrodynamic
regime, are independent of temperature. Furthermore they are not coupled to
second sound. The proper calculation of the resulting slope at finite
temperature and the explicit connection with Tan’s contact parameter at finite
temperature will be the object of a future work.
###### Acknowledgements.
Useful discussions with R. Combescot and L. P. Pitaevskii are acknowledged.
This work has been supported by ERC through the QGBE grant.
## References
* (1) S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys., 80, 1215 (2008).
* (2) S. Tan, Ann. Phys. 323, 2952 (2008); ibid 323, 2971 (2008); 323, 2987 (2008).
* (3) E. Braaten and L. Platter, Phys. Rev. Lett. 100, 205301 (2008); R. Combescot, F. Alzetto, and X. Leyronas, Phys. Rev. A 79, 053640 (2009); F. Werner, L. Tarruell, and Y. Castin, Eur. Phys. J. B 68, 401 (2009).
* (4) G. B. Partridge, K. E. Strecker, R. I. Kamar, M. W. Jack, and R. G. Hulet, Phys. Rev. Lett., 95, 020404 (2005).
* (5) J. T. Stewart, J. P. Gaebler, T. E. Drake, and D. S. Jin, Phys. Rev. Lett., 104, 235301 (2010).
* (6) E. D. Kuhnle, H. Hu, X.-J. Liu, P. Dyke, M. Mark, P. D. Drummond, P. Hannaford, and C. J. Vale, Phys. Rev. Lett., 105, 070402 (2010).
* (7) N. Navon, S. Nascimbène, F. Chevy, and C. Salomon, Science, 328, 729 (2010).
* (8) H. Hu, X.-J. Liu, and P. D. Drummond, New Journal of Physics, 13, 035007 (2011).
* (9) E. D. Kuhnle, S. Hoinka, P. Dyke, H. Hu, P. Hannaford, and C. J. Vale, Phys. Rev. Lett., 106, 170402 (2011).
* (10) M. DelloStritto, and T. M. De Silva, arXiv:1012.2329.
* (11) A. Bulgac, and G. F. Bertsch, Phys. Rev. Lett., 94, 070401 (2005).
* (12) L. P. Pitaevskii, and S. Stringari, 2003, _Bose-Einstein Condensation_ (Clarendon, Oxford).
* (13) Y. Kagan, E. L. Surkov, and G. V. Shlyapnikov, Phys. Rev. A, 55, R18 (1997).
* (14) A. Griffin, W. C. Wu, and S. Stringari, Phys. Rev. Lett., 78, 1838 (1997).
* (15) M. Amoruso, I. Meccoli, A. Minguzzi, and M. P. Tosi, Eur. Phys. J. D, 7, 441 (1999).
* (16) J. Kinast, S. L. Hemmer, M. E. Gehm, A. Turlapov, and J. E. Thomas, Phys. Rev. Lett., 92, 150402 (2004).
* (17) A. Altmeyer, S. Riedl, C. Kohstall, M. J. Wright, R. Geursen, M. Bartenstein, C. Chin, J. H. Denschlag, and R. Grimm, Phys. Rev. Lett., 98, 040401 (2007).
* (18) S. Stringari, Europhys. Lett., 65, 749 (2004).
* (19) G. E. Astrakharchik, R. Combescot, X. Leyronas, and S. Stringari, Phys. Rev. Lett., 95, 030404 (2005).
* (20) This follows from the fact that the unperturbed frequencies [see for example Eq.(9)] do not depend on the value of the chemical potential $\bar{\mu}$.
* (21) L. Pitaevskii, and S. Stringari, Phys. Rev. Lett., 81, 4541 (1998).
* (22) S. Gandolfi, K. E. Schmidt, J. Carlson, Phys. Rev. A 83, 041601 (2011).
* (23) R. Grimm (private communication).
* (24) P. Capuzzi, P. Vignolo, F. Federici, and M. P. Tosi, Phys. Rev. A, 73, 021603 (2006).
* (25) J. Joseph, B. Clancy, L. Luo, J. Kinast, A. Turlapov, and J. E. Thomas, Phys. Rev. Lett., 98, 170401 (2007).
* (26) J. Joseph and J. E. Thomas (private communication).
|
arxiv-papers
| 2011-05-31T10:57:50 |
2024-09-04T02:49:19.212655
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yun Li and Sandro Stringari",
"submitter": "Yun Li",
"url": "https://arxiv.org/abs/1105.6238"
}
|
1106.0008
|
# A Model of Opinion Dynamics with Bounded Confidence and Noise
P. Nyczka Institute of Theoretical Physics, University of Wrocław, pl. Maxa
Borna 9, 50-204 Wrocław, Poland
###### Abstract
This paper introduces a new model of continuous opinion dynamics with random
noise. The model belongs to the broad class of so-called bounded confidence
models. It differs from other popular bounded confidence models by its update
rule, since it is intended to describe how a single person can simultaneously
influence a group of several listeners. Moreover, opinion noise is
introduced to the model. Due to this noise, in some specific cases,
spontaneous transitions occur between two states with different numbers of large
opinion clusters. A detailed analysis of these transitions is provided, based on
Monte Carlo (MC) simulations and numerical integration of the master equation (ME).
## 1 Introduction
Models of opinion dynamics are very popular in modern sociophysics (see recent
reviews [1, 2, 3]). An interesting subset of them are the models with Bounded
Confidence (BC models) [4, 5, 6, 7]. In these models, the opinion exchange
takes place only when the difference between two opinions is below the
confidence bound (also called the threshold or tolerance). This is a reasonable
consideration because if the minds of two people are very different, it is
difficult for them to convince each other of something. Sometimes it is even
hard for them to talk to each other. As a result of opinion exchange, one
agent can change the opinion of another agent, or they can convince each
other. There are many different types of opinion exchanges, including
exchanges between more than two agents. BC models are commonly used to
simulate the evolution of opinion distribution in a set of agents. Depending
on tolerance, the simulation results can give consensus (one big cluster in an
opinion space) for big Tolerance, polarization (two big clusters) for smaller
Tolerance, or clusterization (three or more clusters) for even smaller
Tolerance.
Although several extensions based on continuous opinions and bounded
confidence have been proposed and analyzed [4], little attention has been paid
to the instability of opinions caused by influences from outside [12, 13, 14].
Hence, the simulation in most BC models ends when a static configuration is
reached (consensus, polarization, or clusterization) and nothing else happens.
It is noticeable that even in countries with a stable democracy we can observe
oscillations in the opinion distribution, some of them very strong. This
behavior cannot be obtained in simulations if only the interactions between
agents are taken into account. The model described in this paper tries to
incorporate influence from outside.
I understand that many external factors can influence an agent’s opinion, such
as dramatic events, mass media, or others. This influence generates some
unpredictable opinion changes. It may also be regarded as free will opposing
the conformist character of opinion exchange. I assumed that the opinion of
such an influenced agent may change to a completely different one, as in
models with discrete opinions, where such agents are known as ”contrarians” [8, 9, 10,
11]. To simulate these various unpredictable changes, noise was added to the
model [12, 13, 14]. As in [13, 14], I decided to change, from time to time, the
opinion of one randomly chosen agent to a new opinion chosen from the uniform
random distribution between 0 and 1. The probability of this change is
described by the noise parameter $\rho$ [15, 16, 17]. Another type of noise was
introduced in BC models with continuous opinions [18], but it did not
significantly affect the simulation results.
In the next section, a new BC model with continuous opinions and random
changes in an agent’s opinion (noise parameter $\rho$) will be introduced.
## 2 Model
Consider a set of $N$ agents. Each agent is connected with all the others (such a
structure can be described by a complete graph) and has its own opinion, which
is represented by a real number between 0 and 1.
1. 1.
Randomly choose one agent from the set $A=\left\\{a_{1},...,a_{N}\right\\}$,
denoting its opinion by $S^{*}$.
2. 2.
Randomly choose $L$ agents from the rest of the set; these agents will be the
listeners, and their opinions form the subset $\left\\{S_{i}\right\\}$,
$i=1,...,L$.
3. 3.
For each $i\in\left\\{1,...,L\right\\}$, if $|S_{i}-S^{*}|\leq T$ then
$S_{i}^{\prime}=\frac{1}{2}(S^{*}+S_{i})$.
4. 4.
With probability $\rho$, randomly choose one and only one agent from the set $A$
and change its opinion value to a new value drawn uniformly from the interval
$\left\langle 0,1\right\rangle$.
5. 5.
Go back to step 1.
As described, the parameters of the simulation are: $L$ (number of
listeners), $\rho$ (noise parameter), $T$ (tolerance), and $N$ (number of
agents).
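A minimal Monte Carlo sketch of one step of this update rule is given below (it is only an illustration under the assumptions noted in the comments, not the author’s code).

```python
# Minimal sketch of one MC step: a randomly chosen speaker influences L random
# listeners within the confidence bound T, and with probability rho one random
# agent gets a fresh uniform opinion.  Parameter names follow the text; the
# seed and the chosen parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mc_step(opinions, L, T, rho):
    N = opinions.size
    speaker = rng.integers(N)
    s_star = opinions[speaker]
    # choose L listeners among the remaining agents
    others = np.delete(np.arange(N), speaker)
    listeners = rng.choice(others, size=L, replace=False)
    # listeners within the confidence bound move halfway towards the speaker
    close = np.abs(opinions[listeners] - s_star) <= T
    opinions[listeners[close]] = 0.5 * (opinions[listeners[close]] + s_star)
    # noise: with probability rho, one random agent gets a random opinion
    if rng.random() < rho:
        opinions[rng.integers(N)] = rng.random()
    return opinions

opinions = rng.random(1600)            # N = 1600 agents, uniform initial opinions
for _ in range(10**4):                 # relax towards the dynamic equilibrium
    mc_step(opinions, L=16, T=0.3, rho=0.16)
```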
The model proposed here is quite similar to the most popular BC models, i.e.,
the Deffuant (D) model [5] and that of Hegselmann and Krause (HK) [6].
Opinions take real values in the interval $[0,1]$ and each agent, with opinion
$S^{*}$, interacts with agents whose opinions lie in the range
$[S^{*}-T,S^{*}+T]$. The difference lies in the update rule: the chosen agent
does not interact with one of its neighbors, as in the D model, but with $L$
compatible neighbors at once, similarly to the HK model. However, contrary to the
HK model, the opinions of the $L$ neighbors are changed, instead of the
opinion of the chosen agent. This means that one agent simultaneously influences
$L$ compatible neighbors, instead of being influenced by them. The differences
between these three models can be summarized briefly in the following way (see
also [1]):
* 1.
Deffuant’s model describes the opinion dynamics of large populations, where
people meet in small groups, like pairs.
* 2.
The HK model describes formal meetings, where there is an effective interaction
involving many people at the same time.
* 3.
The model proposed in this paper is intended to describe how a single person
can influence, at the same time (during a formal meeting), a group of several
listeners.
As mentioned in the Introduction, to describe the various unpredictable
changes, noise was introduced to the model just as Pineda et al. [14] did
with the Deffuant model.
It is important that the results obtained with the Deffuant model with noise are
the same as with this model for $L=2$. There is a slight difference
between the update rules, but it has no impact on the MC results and does not
affect the analytical approach. Hence, the model described in this paper can be
treated as a generalisation of the Deffuant model, with the number of persons
interacting at one time as a parameter.
Due to the noise, the system never reaches a final fixed point, but rather a
dynamic equilibrium. Moreover, after some time the opinion distribution is
independent of the initial conditions, unlike in the noiseless BC model case – if
there are two different initial distributions, for example $Q$ (e.g.
uniform) and $R$ (e.g. normal), and $\rho_{Q}=\rho_{R}>0$, $L_{Q}=L_{R}$,
$T_{Q}=T_{R}$, $N_{Q}=N_{R}$, after some number of steps we cannot distinguish
between the two systems. Surely their distributions at any given moment will
be different, but over a sufficiently long timespan their statistical properties
will be the same. All the simulations were made after the system reached
dynamic equilibrium.
## 3 Results
### 3.1 Monte Carlo simulations
Figure 1: Opinion distribution $n(O)$ can be approximated by
$f\left(O\right)\approx\alpha e^{\left|-\beta O\right|-\gamma}$, where
$\alpha$, $\beta$ and $\gamma$ are factors.
Figure 2: Different shapes of the opinion distribution $n(O)$ for different
$\frac{\rho}{L}$. For smaller $\frac{\rho}{L}$ the maximum is higher and the
standard deviation of the distribution is greater than for greater
$\frac{\rho}{L}$ ($\rho$ – noise parameter, $L$ – number of listeners).
Figure 3: Standard deviation of the opinion distribution $SD(\frac{\rho}{L})$
for $N=1600$ and $T=1$.
Figure 4: Number of clusters in opinion space (in other words, number of modes
in the opinion distribution) for several values of the tolerance factor $T$,
$L=16$, $\rho=0.16$, $N=1600$.
Figure 5: Number of opinion clusters for $L=64$, $\rho=0.64$, $N=1600$,
$10^{3}MCS$, $10$ simulations, as a function of the inverse opinion threshold
$X=\frac{1}{2T}$, and the $C\approx\left[AX+B\right]$ approximation.
Figure 6: Number of large opinion clusters $C(X)$ for different numbers of
agents, first unstable region only, $L=16$, $\rho=0.16$, $10^{4}MCS$, $10$
simulations.
Figure 7: $|1>\rightarrow|2>$ transition, $L=16$, $\rho=0.16$, $X=1.66$,
$N=1600$. Points a, b, c, d, e, f on the upper panel correspond to the six
bottom panels, respectively.
Figure 8: $|2>\rightarrow|1>$ transition, $L=16$, $\rho=0.16$, $X=1.66$,
$N=1600$. Points a, b, c, d, e, f on the upper panel correspond to the six
bottom panels, respectively.
Figure 9: Fragment of a typical simulation outcome: $|2>\rightarrow|1>$
transition at points a, b and $|1>\rightarrow|2>$ transition at points c, d,
$L=16$, $\rho=0.16$, $X=1.66$, $N=1600$.
Figure 10: Number of transitions $t(X)$ per $10^{3}MCS$ as a function of the
inverse opinion threshold $X=\frac{1}{2T}$, $L=64$, $\rho=0.64$, $N=1600$,
$10^{3}MCS$, 10 simulations.
* 1.
If $L>0$, $\rho=0$, and $T=1$, consensus is reached very quickly.
* 2.
If $L=0$, $\rho>0$, and $T\in\langle 0,1\rangle$, there is no information exchange and
a uniform random distribution of opinions appears.
* 3.
If $L>0$, $\rho>0$, and $T=1$, the set of agents does not reach consensus but
rather stays in a dynamic equilibrium with one big opinion cluster.
For $T=1$, the distribution of opinions can be approximated by the rule
$f\left(O\right)\approx\alpha e^{\left|-\beta O\right|-\gamma}$ (1)
where $\alpha$, $\beta$ and $\gamma$ are factors that depend on the
model parameters (see fig. 1). For $T=1$ and $N=const.$, the standard deviation
($SD$) of the opinions depends only on $\frac{\rho}{L}$ – the greater the factor
$\frac{\rho}{L}$, the greater the $SD$ (see figs. 2 and 3). As can be
seen, regardless of the force attracting the agents to the center, there are
still some agents spread over the whole opinion space.
For $T<1$, as in other BC models, opinion fragmentation occurs. The smaller
$T$ is, the greater the number of clusters that occur (see fig. 4). Defining
$X=\frac{1}{2T}$, the number of large opinion clusters $C$ follows the rule
$C\approx\left[AX+B\right]$ (2)
(see fig. 5), which is more accurate than the rule
$C\approx\left[X\right]$ (3)
proposed by Deffuant [5]. Using $X=\frac{1}{2T}$ rather than $T$ is
more suitable for the presentation of the simulation results, so $X$ is used more
often in this paper. Most importantly, the number of clusters $C$ changes
continuously, not discretely, but there are evident steps. There are regions
where $C$ is far from an integer. These regions in the space of $X$ are the most
interesting, because criticality appears there, corresponding to the bifurcation
points in [14]. Due to the instability in these regions, spontaneous transitions
between states with different numbers of big opinion clusters appear.
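The cluster counts reported in figs. 4–6 can, for instance, be estimated from a snapshot of the opinions with a simple histogram criterion; the sketch below is only an illustrative assumption (the paper does not specify the exact criterion used).

```python
# Minimal sketch (assumed criterion, not the author's): count the large
# opinion clusters in one configuration by histogramming the opinions and
# counting contiguous runs of bins that each hold more than a small
# fraction of all agents.
import numpy as np

def count_large_clusters(opinions, bins=50, min_fraction=0.02):
    hist, _ = np.histogram(opinions, bins=bins, range=(0.0, 1.0))
    occupied = (hist > min_fraction * opinions.size).astype(int)
    rising_edges = np.diff(np.concatenate(([0], occupied))) == 1
    return int(rising_edges.sum())      # number of contiguous occupied runs

# Averaging this count over many MC snapshots gives an estimate of C(X).
```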
Figure 11: Number of transitions per $10^{3}MCS$, $t(X)$, for different numbers
of listeners $L$, first unstable region only, $\rho=0.08$, $N=1600$,
$10^{4}MCS$, 10 simulations.
Figure 12: Number of transitions per $10^{3}MCS$, $t(X)$, for different noise
parameters $\rho$, first unstable region only, $L=8$, $N=1600$, $10^{4}MCS$,
10 simulations.
Figure 13: Number of transitions per $10^{3}MCS$, $t(X)$, for different numbers
of agents, first unstable region only, $L=16$, $\rho=0.16$, $10^{4}MCS$, 10
simulations.
Figure 14: Distribution of transitions for parameters $L=4$, $\rho=0.04$,
$N=1600$. Results for only one simulation are presented and in this case the
observation time is $10^{7}MCS$. The disc line is the $\tau_{|1>}$ distribution,
where $\tau_{|1>}$ is the $|1>$ state lifetime, or the time between the
$|2>\rightarrow|1>$ and $|1>\rightarrow|2>$ transitions; the circle line is the
$\tau_{|2>}$ distribution, where $\tau_{|2>}$ is the $|2>$ state lifetime, or
the time between the $|1>\rightarrow|2>$ and $|2>\rightarrow|1>$ transitions.
The Deffuant model for continuous opinion dynamics in the presence of noise
has been studied recently [14]. The authors were able to derive a master equation
for the probability density function which determines the individuals’ density, or
distribution, in the opinion space. Moreover, they also found that in the
noisy case the asymptotic steady-state probability distributions reached by
Monte Carlo simulations might not coincide with the ones obtained from the
master equation [14]. This takes place for finite systems and is caused by the
perturbation introduced by the noise. The observed deviations were more
pronounced close to a bifurcation point.
In this paper we study a new model, which differs slightly from the Deffuant
model, yet belongs to the class of bounded confidence models. It turns out that
different BC models without noise exhibit very similar behavior [1, 4].
Therefore, one could expect similar behavior in the presence of noise.
Indeed, the results obtained using Monte Carlo simulations for the D model with
noise [14] agree with the results obtained in this paper. Moreover, due to the
similarities between BC models, the results obtained here suggest that the
spontaneous transitions appearing at a bifurcation point might be responsible for
the inconsistency between analytical results and simulations observed in [1, 4].
Let us now describe the steady states of our system more carefully. For the
description of the system’s state, the $|k>$ notation will be used, where $k$
denotes the number of big opinion clusters.
* 1.
For $X\in\langle 1,X_{1}-\Delta\rangle$, the system has one big opinion cluster,
i.e., it is in the $|1>$ state, where $\Delta=\Delta(N,L,\rho)$ is a monotonically
decreasing function of the total number of agents $N$. For
$N\rightarrow\infty$, $\Delta\rightarrow 0$, which is the usual behavior at a
critical point (see fig. 6).
* 2.
For $X\in\langle X_{1}-\Delta,X_{1}+\Delta\rangle$ (first unstable region, see fig.
6), spontaneous transitions between one and two big clusters occur (see figs.
7, 8 and 9). They can be denoted as $|1>\rightarrow|2>$ for one-to-two
cluster transitions and $|2>\rightarrow|1>$ for the opposite. Finally, there is a
$|1>\rightarrow|2>\rightarrow|1>$ cycle, where the time intervals between
transitions are unpredictable.
* 3.
For $X\in\langle X_{1}+\Delta,X_{2}-\Delta\rangle$ the system has two big clusters,
where $X_{2}$ is the second critical point.
* 4.
For $X\in\langle X_{2}-\Delta,X_{2}+\Delta\rangle$ (second unstable region),
spontaneous transitions between two and three big clusters are observed, and a
$|2>\rightarrow|3>\rightarrow|2>$ cycle occurs.
* 5.
Generally, for $X\in\langle X_{k-1}+\Delta,X_{k}-\Delta\rangle$ the opinions are
fragmented into $k$ clusters ($k$-modal opinion distribution).
* 6.
For $X\in\langle X_{k}-\Delta,X_{k}+\Delta\rangle$, spontaneous transitions between
$k$ and $k+1$ big clusters are observed and a
$|k>\rightarrow|k+1>\rightarrow|k>$ cycle appears.
It is surprising that such a simple model can simulate such complex
behavior. Once again it should be mentioned that spontaneous transitions occur
only in the critical regions around the bifurcation points,
$X\in\langle X_{k}-\Delta,X_{k}+\Delta\rangle$, and that for $N\rightarrow\infty$
$\Delta\rightarrow 0$, which is usual at a critical point. However, it
should be noted that in social systems $N$ can rarely be treated as infinite
and thus $\Delta>0$. Within the proposed model, ’real life’ takes place in the
critical region.
Let me now examine the mechanism of the spontaneous transitions.
* 1.
In the $|1>$ state in the first unstable region there is one big cluster in the
center and two small clusters near 0 and 1 (see the panel denoted by (a) in fig.
7). The position of the central cluster is about 0.5, but it oscillates very
strongly; the oscillations are larger for smaller $N$ and vanish
for $N\rightarrow\infty$. Sometimes, when it moves far to one of the sides, the
opposite small cluster grows very fast, immediately becomes a second big cluster,
and the $|1>\rightarrow|2>$ transition takes place (see figs. 7 and 9).
* 2.
The $|2>\rightarrow|1>$ transition (see figs. 8 and 9) is different and more
rapid. If the set is in the $|2>$ state and in the first unstable region,
there are two large clusters whose positions oscillate slightly. Sometimes
they get so close to each other that their tails begin to interact and attract
each other. As a result they become closer and closer, and finally join
into one big cluster.
* 3.
In the next unstable regions the mechanism is similar. In the
$|k>\rightarrow|k+1>$ transition, a new cluster is created between two other
clusters. In the $|k+1>\rightarrow|k>$ transition, two adjacent clusters join into
one.
Although the exact moment of a spontaneous transition cannot be predicted, its
frequency $t$ (average number of transitions per $10^{3}MCS$) can be measured.
There are several maxima of the transition frequency, located exactly in the
centres of the unstable regions (fractional part of $C(X)\approx 0.5$, see fig.
10). This is logical, because the instability of the opinion distribution is
greatest in such places, so even a small perturbation can cause a spontaneous
reorganization of the opinions in the whole set. In each unstable region the
shape of $t(X)$ can alternatively be approximated by a Gaussian distribution,
and it also follows the rule
$t(X)\approx\mu\frac{dC(X)}{dX},$ (4)
where $\mu=\mu(L,\rho,k,N)$. It can be seen (fig. 10) that $\mu$ is a
decreasing function of $k$. This is understandable, due to the fact that for
greater $X$ (greater $k$ and smaller $T$) an agent can interact with fewer
listeners than in the case of smaller $X$ (smaller $k$ and greater
$T$), because more of them are outside the confidence bound. One can say that the
effective $L$ is smaller, analogously to fig. 11: the smaller $L$, the fewer
transitions.
As mentioned above, the average number of transitions per time unit depends
not only on $X$ (or $T$), but also on the parameters $\rho$, $L$ and $N$,
$t(L,\rho,X,N)$. It has already been shown that the factor $\frac{\rho}{L}$
determines the shape of $n(O)$. On the other hand, for larger $\rho$ or $L$,
the transitions occur much more often (see figs. 11 and 12). It is easy to
understand why. For greater $\rho$, $\frac{\rho}{L}$ is also greater, hence
the clusters are wider and it is easier for them to interact. When $L$ is
greater, the fluctuations are also bigger. This is because when one agent
whose opinion is quite rare speaks to many other agents, it can convince many
of them of its rare opinion. As a result this rare opinion gets stronger and
begins to attract many other agents. So the fluctuation grows and can cause
a transition very easily. It is also worth noticing that for greater $\rho$ the
critical region shifts towards greater $X$, and for greater $L$ it shifts in the
opposite direction. This phenomenon occurs due to changes in the shape of $n(O)$,
and for constant $\frac{\rho}{L}$ there is no shift. Of course, it should be
mentioned here once again that for $L=2$ this model behaves identically to the
Deffuant model with noise.
Of course, as mentioned above, spontaneous transitions occur only in the critical
regions $X\in\langle X_{k}-\Delta,X_{k}+\Delta\rangle$, and $\Delta=\Delta(N,L,\rho)$
is a monotonically decreasing function of the system size $N$ (see fig. 6).
Such behavior is typical for critical phase transitions. The dependence on $N$ is
also visible in $t(X)$ (see fig. 13). The more agents present, the fewer
transitions occur. Again, this can be easily understood. With more agents, the
distribution of opinions is more stable, because it is hard to obtain such big
fluctuations as with a small number of agents. For an infinite system
there would not be any transitions, identically as in [14], but social systems
are finite and transitions may occur.
An analysis of the distribution of the time between transitions (i.e., the state’s
lifetime) $\tau$ (see fig. 14) gives more interesting details. The parameters
of the system were set to $L=4$, $\rho=0.04$, $N=1600$, and the observation
time was $10^{7}MCS$ with $1$ simulation. Simulations were performed for
several different $X$ values in the first unstable region. The results are
presented for $X={1.61,1.63,1.64,1.65}$. It is clear that each state has its
own characteristic lifetime scale – there are maxima in the lifetime
histograms. The position of the maximum depends on $X$. Where the probabilities
of the two states are similar, the maxima of the $t([\log_{2}(\tau)])$ distribution
for the higher and lower state are also close to each other (see fig. 14 b, c).
When these probabilities differ, the distributions are further apart and the
characteristic lifetimes differ more (see fig. 14 a, d).
### 3.2 Master Equation results
The master equation for this model when $L=1$ is:
$\displaystyle\frac{\delta P(O,t)}{\delta t}$ $\displaystyle=$
$\displaystyle(1-\rho)[2\int_{|O-O^{\prime}|<T/2}dO^{\prime}P(2O-O^{\prime},t)P(O^{\prime},t)$
(5) $\displaystyle-$ $\displaystyle
P(O,t)\int_{|O-O^{\prime}|<T}dO^{\prime}P(O^{\prime},t)]+\rho[P_{a}(O)-P(O,t)],$
and is almost identical to the one for the Deffuant system with noise derived by
Pineda [14]. In fact, to get this equation we need to divide the first part of
Pineda’s equation by two. This is due to the fact that in the Deffuant model two
agents change their opinions in one step, but in this model for $L=1$
only one does.
To investigate the behavior of this equation I numerically integrated it using the
fourth-order Runge-Kutta method. The stability analysis shows that for
some values of $X$ the solution of this equation is unstable. I always started
from the uniform opinion distribution $P(O,0)=1$ and, after reaching an asymptotic
solution, I introduced a perturbation $\varrho P_{p}(O)$, where
$P^{\prime}(O,t)=(1-\varrho)P(O,t)+\varrho P_{p}(O)$ was the perturbed
distribution. There were two types of perturbation: symmetric,
$P_{s}(O)=2(1-|2O-1|)$, and asymmetric, $P_{a}(O)=2O$; both of them are
normalized. It turned out that the $|1>$ state is immune to the symmetric
perturbation and needs an asymmetric one to make the $|1>\rightarrow|2>$
transition. In the $|2>$ state the situation is opposite, and a symmetric
perturbation is needed to make the $|2>\rightarrow|1>$ transition.
Therefore, in the case of $|1>$ I introduced the asymmetric perturbation
$P_{a}$ and checked how big $\varrho$ has to be to make the
$|1>\rightarrow|2>$ transition. If the solution was $|2>$, I introduced the
symmetric perturbation $P_{s}$ and checked how big $\varrho$ has to be
to make the $|2>\rightarrow|1>$ transition. The dependence between the amount of
perturbation $\varrho$ needed for a transition and the parameter $X$ is presented
in Fig. 15.
Figure 15: Amount of perturbation $\varrho$ needed to make a transition.
It can be seen that for some values of $X$ the stability of the solution is very
low and it can switch to another solution very easily, while the opposite switch
requires a much greater perturbation. However, there are values of $X$ where this
switch requires a perturbation of the same strength (the same $\varrho$) in both
directions, though one has to remember that in the direction $|1>\rightarrow|2>$
an asymmetric perturbation is needed, and for $|2>\rightarrow|1>$ a symmetric one.
Because those two types of transitions are equally easy at the points of equal
$\varrho$, these points should be analogous to the maxima of $t(X)$ from the MC
simulations, and as can be seen in fig. 15 they are indeed very close.
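For completeness, a minimal discretization sketch of the right-hand side of Eq. (5) is shown below (the grid size, time step, parameter values, and the uniform injected-opinion distribution $P_{a}(O)=1$ are illustrative assumptions, not the author’s actual code); it can be advanced in time with the fourth-order Runge-Kutta rule mentioned above.

```python
# Minimal sketch: right-hand side of the L = 1 master equation, Eq. (5),
# on a uniform grid over [0, 1], plus one RK4 time step.  The injected
# opinions are assumed uniform, so their density is 1.
import numpy as np

M = 201                                  # grid points (assumed)
O = np.linspace(0.0, 1.0, M)
dO = O[1] - O[0]

def rhs(P, T, rho):
    dPdt = np.empty_like(P)
    for i, o in enumerate(O):
        # gain: pairs (2o - o', o') whose original distance 2|o - o'| is below T
        mask_gain = np.abs(o - O) < T / 2.0
        partner = 2.0 * o - O
        inside = (partner >= 0.0) & (partner <= 1.0) & mask_gain
        gain = 2.0 * np.sum(np.interp(partner[inside], O, P) * P[inside]) * dO
        # loss: the agent at o interacts with anyone closer than T
        loss = P[i] * np.sum(P[np.abs(o - O) < T]) * dO
        dPdt[i] = (1.0 - rho) * (gain - loss) + rho * (1.0 - P[i])
    return dPdt

def rk4_step(P, dt, T, rho):
    k1 = rhs(P, T, rho)
    k2 = rhs(P + 0.5 * dt * k1, T, rho)
    k3 = rhs(P + 0.5 * dt * k2, T, rho)
    k4 = rhs(P + dt * k3, T, rho)
    return P + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

P = np.ones(M)                           # uniform initial condition P(O, 0) = 1
for _ in range(2000):                    # integrate towards an asymptotic solution
    P = rk4_step(P, dt=0.01, T=0.3, rho=0.02)
```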
## 4 Discussion
The model proposed in this paper is a simple bounded confidence (BC) model on
a complete graph with noise $\rho$ added to simulate outside influences as
well as the free will and unpredictability of individual agents. One randomly
chosen agent can communicate with $L$ other randomly chosen agents according
to the BC rule. Then, with probability $\rho$, the opinion of one randomly
chosen agent is changed to a random number between $0$ and $1$. As in the
other BC models, clusterization of opinions occurs. The new feature that
appears due to the introduced noise is the presence of spontaneous transitions
between states with different numbers of clusters. They take place for some
specific values of $X=\frac{1}{2T}$, where $T$ is the tolerance.
As is usual in the case of critical phenomena, in the proposed model a
competition between two opposite forces is present. Due to the competition between
these forces, the system is in dynamic equilibrium. However, in some cases
(for specific $T$ values, close to the bifurcation points) the system can be
in a critical state and can spontaneously switch between two different kinds
of order (i.e., states). Spontaneous transitions occur only in the critical
regions $T\in\langle T_{k}-\Delta,T_{k}+\Delta\rangle$, and for
$N\rightarrow\infty$, $\Delta\rightarrow 0$, which is usual at a
critical point. However, it should be noted that in social systems $N$ can
rarely be treated as infinite and thus $\Delta>0$. Within the proposed model,
’real life’ takes place in the critical region.
The occurrence of spontaneous transitions has also been observed in [14] for the
Deffuant model with noise, and some analytical results were obtained; however, no
detailed analysis of this phenomenon was provided. In contrast, in this
paper an analysis of the distribution of the time between transitions (i.e., the
state’s lifetime) and of the influence of $L$, $\rho$ and $N$ on $t(X)$ has been
presented. Also, the influence of the perturbation on the ME solutions was
investigated. I hope this will shed some more light on the analysis of BC models.
## 5 Appendix
The derivation of the master equation for the time evolution of $P(O)$ for the
model described in this paper is almost the same as in [14], except for one small
change in the number of opinions updated in one step. $P_{n}(O)$ is the probability
density function of the opinions at step $n$ and is constructed from the histogram
of the individual opinions $O_{n}^{i}$. Let us choose two agents $i,j$ to update at
step $n$; their opinions are $O_{n}^{i},O_{n}^{j}$. The probability that agent $i$
at step $n+1$ will adopt opinion $O$ is $P_{n+1}^{i}(O)$ and is given by the
formula below.
$\displaystyle P_{n+1}^{i}(O)$ $\displaystyle=$
$\displaystyle\int_{|O^{i}_{n}-O^{j}_{n}|<T/2}dO^{i}_{n}dO^{j}_{n}P_{n}(O^{i}_{n})P_{n}(O^{j}_{n})\delta\left(O-\frac{O^{i}_{n}+O^{j}_{n}}{2}\right)$
(6) $\displaystyle+$
$\displaystyle\int_{|O^{i}_{n}-O^{j}_{n}|<T/2}dO^{i}_{n}dO^{j}_{n}P_{n}(O^{i}_{n})P_{n}(O^{j}_{n})\delta(O-O^{i}_{n}),$
The independence approximation for the variables $O^{i}_{n},O^{j}_{n}$ has
been assumed, which means that
$P_{n}(O_{n}^{i},O_{n}^{j})=P_{n}(O_{n}^{i})P_{n}(O_{n}^{j})$. To obtain
$P_{n+1}(O)$ we have to include the interaction between agents with probability
$(1-\rho)$ as well as the random change of the opinion of a randomly chosen agent
with probability $\rho$. For the Deffuant model the equation has the form given
below.
$\displaystyle P_{n+1}(O)$ $\displaystyle=$
$\displaystyle(1-\rho)\left[\frac{N-2}{N}P_{n}(O)+\frac{1}{N}P^{i}_{n+1}(O)+\frac{1}{N}P^{j}_{n+1}(O)\right]$
(7) $\displaystyle+$
$\displaystyle\rho\left[\frac{N-1}{N}P_{n}(O)+\frac{1}{N}P_{a}(O)\right],$
Now we want to consider this equation for the model described in this paper in the
case of $L=1$. Because there is only one agent changing its opinion, the part
$\frac{1}{N}P^{j}_{n+1}(O)$ should be neglected and $N-2$ should be replaced
by $N-1$.
$\displaystyle P_{n+1}(O)$ $\displaystyle=$
$\displaystyle(1-\rho)\left[\frac{N-1}{N}P_{n}(O)+\frac{1}{N}P^{i}_{n+1}(O)\right]$
(8) $\displaystyle+$
$\displaystyle\rho\left[\frac{N-1}{N}P_{n}(O)+\frac{1}{N}P_{a}(O)\right],$
After substituting $P_{n+1}^{i}$ from Eq. 6 and performing simple transformations,
one obtains:
$\displaystyle
P_{n+1}(O)=P_{n}(O)+\frac{(1-\rho)}{N}[2\int_{|O-O^{\prime}|<T/2}dO^{\prime}P_{n}(2O-O^{\prime})P_{n}(O^{\prime})-$
$\displaystyle
P_{n}(O)\int_{|O-O^{\prime}|<T}dO^{\prime}P_{n}(O^{\prime})]+\frac{\rho}{N}[P_{a}(O)-P_{n}(O)],$
(9)
In the continuum limit $P_{n}(O)\rightarrow P(O,t)$, with time $t=n\delta t$ and
$\delta t=1/N\rightarrow 0$ as $N\rightarrow\infty$, one gets:
$\displaystyle\frac{\delta P(O,t)}{\delta t}$ $\displaystyle=$
$\displaystyle(1-\rho)[2\int_{|O-O^{\prime}|<T/2}dO^{\prime}P(2O-O^{\prime},t)P(O^{\prime},t)$
(10) $\displaystyle-$ $\displaystyle
P(O,t)\int_{|O-O^{\prime}|<T}dO^{\prime}P(O^{\prime},t)]+\rho[P_{a}(O)-P(O,t)],$
This is the master equation for the model described above in the $L=1$ case, where
$\rho$ denotes the noise intensity.
## References
* [1] C. Castellano, S. Fortunato, V. Loreto, Rev. Mod. Phys. 81, 591 (2009)
* [2] S. Galam, Int. J. Mod. Phys. C 19, 409 (2008)
* [3] K. Sznajd-Weron, Acta Phys. Pol. B 36 2537 (2005)
* [4] J. Lorenz, Int. J. Mod. Phys. C 18, 1819 (2007)
* [5] G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch, Adv. Compl. Sys. 3, 87 (2000)
* [6] R. Hegselmann and U. Krause, JASSS 5(3) (2002)
* [7] G. Weisbuch, G. Deffuant and F. Amblard, Physica A 353 555 (2005)
* [8] S. Galam, Physica A 333, 453 (2004)
* [9] M. S. de la Lama, J. M. Lopez and H. S. Wio, Europhys. Lett. 72, 851 (2005)
* [10] C. Borghesi and S. Galam, Physical Review E 73, 066118 (2006)
* [11] H. S. Wio, M. S. de la Lama, J. M. Lopez, Physica A 371, 108 (2006)
* [12] B. Edmonds, Assessing the Safety of (Numerical) Representation in Social Simulation, pp. 195-214 in: Agent-based computational modelling, edited by F.C. Billari, T. Fent, A. Prskawetz and J. Scheffran, Physica Verlag, Heidelberg (2006).
* [13] T. Carletti, D. Fanelli, A. Guarino, F. Bagnoli, A. Guazzini, Eur. Phys. J. B 64, 285 (2008)
* [14] M. Pineda, R. Toral and E. Hernandez-Garcia, J. Stat. Mech. P08001 (2009)
* [15] K. Sznajd-Weron, J. Sznajd, Int. J. Mod. Phys. C 11, 1157 (2000)
* [16] F. Schweitzer and J. A. Hołyst, Eur. Phys. J. B 15, 723 (2000)
* [17] Medeiros, Nazareno G. F.; Silva, Ana T. C.; Moreira, F. G. Brady, Phys. Rev. E 73, 046120 (2006)
* [18] G. Deffuant, JASSS 9(3) (2006)
* [19] D. Stauffer, A.O. Sousa, arXiv:cond-mat/0310243v2 [cond-mat.stat-mech] (2004)
* [20] G. Weisbuch G. Deffuant , F. Amblard and J. P. Nadal, arXiv:cond-mat/0111494v1 [cond-mat.dis-nn] (2001)
* [21] R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002)
|
arxiv-papers
| 2011-05-31T20:00:17 |
2024-09-04T02:49:19.224606
|
{
"license": "Public Domain",
"authors": "P. Nyczka",
"submitter": "Piotr Nyczka",
"url": "https://arxiv.org/abs/1106.0008"
}
|
1106.0010
|
# Stable marriage problem under Monte Carlo simulations: influence of
preference correlation on relaxation time
P. Nyczka, J. Cisło Institute of Theoretical Physics, University of Wrocław,
pl. Maxa Borna 9, 50-204 Wrocław, Poland
###### Abstract
In this paper we consider the stable marriage problem under Monte Carlo
simulations. We investigate how correlations in the lists of preferences can
affect simulation results such as the relaxation time and the distribution of
relaxation times. We take into account the attractiveness of individuals and
its different types, as well as personal taste.
## 1 Introduction
We revisit a problem well known from game theory: the stable marriage
problem [11], which is also widely used in economics. In this problem two
sets of agents (e.g. men and women) must be matched pairwise in accordance with
their mutual preferences. These preferences can be in conflict, and in addition
the agents are egoistic. That means each of them tries to maximise its own
satisfaction (find the best partner) without respecting the rest. Although it is
virtually impossible to make all of the agents absolutely happy,
there are states where they are more or less satisfied with their partners
and cannot change them anymore. These states are known as stable states or Nash
equilibria. Precisely, such an equilibrium is a situation where there are no
two agents from opposite sets who would both prefer to be together instead
of staying with their actual partners; in other words, there are no unstable
pairs. Usually there are several possible stable states in one system.
Most simulations of the matching problem use deterministic algorithms to find the stable states. These work quickly and elegantly, but we want to focus on another aspect of the stable marriage problem. We need a more realistic model, because we want to investigate the system’s relaxation time, i.e. the time needed to reach the stable state in a ”real life” situation, and not only to find the stable state itself. We also want to know how this time is affected by the system size and by correlations between the preference lists.
In the deterministic ”classical” case each agent from one set knows all the agents from the opposite set and has a list of them in order of preference. The algorithm chooses an optimal order of encounters so as to minimise the time needed to reach the stable state. In real life, people do not know all of their potential partners from the beginning, and we can assume that people meet each other randomly. To simulate this we use Monte Carlo simulations, in which the encounters between agents are random.
In real populations the attractiveness of individuals differs from person to person. This is an important fact, and we take it into account by introducing beauty factors that induce correlations between the preference lists. Our assumption is similar to the one presented in [12], but we introduce beauty in a different way. We note that people can also have different tastes and different types of beauty, and we model this by introducing attractiveness and taste vectors, which seems more realistic and more detailed than in the above paper.
In our study we compare different correlation strengths (different dimensions of the attractiveness and taste vectors: the greater the dimension, the weaker the correlation) as well as the uncorrelated case. The article focuses on the influence of such correlations on the relaxation time.
## 2 Model
The model described here is a simple model of the stable marriage problem mentioned above. The goal is to reach the stable state but, as stated above, we use Monte Carlo simulations instead of the commonly used deterministic algorithm.
There are sets of $N$ men $M=\\{m_{1},m_{2},m_{3},...,m_{N}\\}$ and $N$ women $W=\\{w_{1},w_{2},w_{3},...,w_{N}\\}$. Each agent has its own preference list of opposite-sex representatives. This is simply a subjective ranking of the attractiveness of the potential partners from the opposite sex. Let us define two matrices, $P_{m}$ for men and $P_{w}$ for women, where the element $P_{m}(m_{i},w_{j})$ denotes the rank of the woman $w_{j}$ on the ranking list of the man $m_{i}$, and $P_{w}(w_{j},m_{i})$ denotes the rank of the man $m_{i}$ on the ranking list of the woman $w_{j}$. The lower the rank, the higher the position. Agents from opposite sets must be matched pairwise to create a collection of relationships $R=\\{(m,w)_{i}\\}_{i=1,...,N}$; to do this one has to arrange encounters between them. There are two different ways of making this arrangement: in the deterministic case the order of encounters is determined by the preference lists, while in the MC case the order is random.
### 2.1 Dynamics
In each time step we arrange a random encounter between representatives of the opposite sets.
During each encounter both potential partners check their mutual attractiveness and may declare a will to commit to a new partnership. An agent declares such a will when it is free (has no partner), or when the potential new partner is higher on its preference list ($P$ is lower) than its current partner. If both of them declare their willingness, they break up their previous relationships (if they have any) and a new one between them is created.
More precisely, a new relationship is created only if $P_{m}(m_{i}^{\prime},w_{j}^{\prime})<P_{m}(m_{i}^{\prime},w_{i})$ or $m_{i}^{\prime}$ has no partner, and $P_{w}(w_{j}^{\prime},m_{i}^{\prime})<P_{w}(w_{j}^{\prime},m_{j})$ or $w_{j}^{\prime}$ has no partner. Here the primed agents are the ones taking part in the meeting and the unprimed ones are their current partners ($m_{i}^{\prime}$ is in a relationship with $w_{i}$, and $w_{j}^{\prime}$ with $m_{j}$).
The simulation ends when there is no pair of agents that could change their partners. This state is called a stable state, or a Nash equilibrium. Usually there are several possible stable states for one set of preference lists.
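The encounter rule can be summarised by the following minimal sketch (ours, not the authors’ code); 0-indexed agents, rank 0 as the most preferred and $-1$ marking a single agent are illustrative conventions.

```python
import numpy as np

def mc_encounter(Pm, Pw, partner_of_man, partner_of_woman, rng):
    """Attempt one random encounter; rewire the matching if both agents agree.

    Pm[i, j] : rank of woman j on man i's list (lower = preferred)
    Pw[j, i] : rank of man i on woman j's list
    partner_of_man[i] / partner_of_woman[j] : current partner index, or -1 if single"""
    N = Pm.shape[0]
    i, j = rng.integers(N), rng.integers(N)          # man i meets woman j
    wi, mj = partner_of_man[i], partner_of_woman[j]  # their current partners

    man_agrees = (wi == -1) or (Pm[i, j] < Pm[i, wi])
    woman_agrees = (mj == -1) or (Pw[j, i] < Pw[j, mj])
    if man_agrees and woman_agrees:
        if wi != -1:                                 # break the old relationships
            partner_of_woman[wi] = -1
        if mj != -1:
            partner_of_man[mj] = -1
        partner_of_man[i], partner_of_woman[j] = j, i
        return True
    return False
```

Starting from all agents single (partner arrays filled with $-1$), the relaxation time $\tau$ is the number of such MC steps performed until no encounter can change the matching any more.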
### 2.2 Preference lists
We have performed simulations for two common types of preference lists. The first is the most classical case, random lists, and the second, more realistic one, correlated lists.
#### 2.2.1 Random lists
The construction of a random list is very simple: each agent has a list of the opposite-sex representatives in random order. Although simple, this seems quite unrealistic [12]. As we know, people differ in their attractiveness from person to person, and random lists do not take this fact into account.
#### 2.2.2 Correlated lists
The construction of correlated lists is a little more complicated than in the random case. People are different: they have different attractiveness as well as different tastes. Although this was already pointed out in [12], we implement it in a somewhat different way. Our approach takes into account different tastes and different types of beauty. Attractiveness can be described by a single number, but we can also use several numbers (a vector) to describe its different aspects. Let $A=(a_{1},a_{2},a_{3},...,a_{n})$ be a vector describing an agent’s attractiveness, and let $T=(t_{1},t_{2},t_{3},...,t_{n})$ be another vector describing an individual’s taste, where $n$ is the number of those aspects. The entries of $A$ are real numbers chosen at random from the interval $[0,1]$ and can describe different qualities of attractiveness, such as beauty, intelligence, etc. The entries of $T$ describe the attention paid to a given quality when evaluating a potential mate, in other words the ”weights”, and they are different for different agents. Initially they are also real numbers chosen at random from $[0,1]$, but this vector is subsequently normalised, so that $|T|=1$.
Once the $A$ and $T$ vectors have been created for all the agents, we have to build the preference lists. The whole procedure is quite simple: an agent $i$ evaluates all agents from the opposite set one by one and sorts them by the obtained score $S$; the higher the score, the higher the place on the agent’s preference list. The evaluation of agent $j$ by agent $i$ is just a simple scalar product, $S=T_{i}\cdotp A_{j}$. When the preference lists are ready, the simulation starts and runs until a stable state is reached.
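A minimal sketch of this construction for one side of the market (ours; the function name, the use of numpy and drawing a fresh attractiveness set per call are simplifying assumptions):

```python
import numpy as np

def build_preference_matrix(n_qualities, N, rng):
    """Rank matrix of one set of agents evaluating the opposite set.

    Each evaluated agent gets an attractiveness vector A in [0, 1]^n and each
    evaluating agent a normalised taste vector T (|T| = 1); the score is the
    scalar product S = T_i . A_j and ranks are assigned by decreasing score
    (rank 0 = most preferred)."""
    A = rng.random((N, n_qualities))                 # attractiveness of the evaluated set
    T = rng.random((N, n_qualities))                 # taste of the evaluating set
    T /= np.linalg.norm(T, axis=1, keepdims=True)    # normalise so that |T| = 1
    scores = T @ A.T                                 # scores[i, j] = T_i . A_j
    order = np.argsort(-scores, axis=1)              # best candidate first
    return np.argsort(order, axis=1)                 # ranks[i, j] = rank of j on i's list

rng = np.random.default_rng(1)
Pm = build_preference_matrix(n_qualities=2, N=100, rng=rng)   # men ranking women
Pw = build_preference_matrix(n_qualities=2, N=100, rng=rng)   # women ranking men
```

For $n=1$ the normalised taste reduces to $T=1$ for every agent, so all lists on a given side are identical (the strongly correlated limit); increasing $n$ weakens the correlation.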
## 3 Results
We investigated the relaxation time $\tau$, defined as the number of MC steps needed to reach a stable state. This was done for different numbers of agents and for different types of preference lists. In general there were two common types of lists: correlated and random ones. As we found, $\tau$ strongly depends on the list type (see fig. 1).
For strongly correlated preference lists $\tau$ is very small. The extreme case of correlation is the situation in which all agents of one sex have identical lists, which happens when $n=1$. For random lists the agents have different lists and $\tau$ is much greater.
In the case of correlated lists, the greater $n$ is, the greater $\tau$ becomes, but for higher $n$ the differences are smaller. Even for $n>1000$ there is still a huge gap between $\tau$ for correlated and for random lists. In the correlated case the function $\tau(N)$ follows a power law for $N<N_{c}$ and then grows much faster; for $n=1$ the power law appears to hold for all $N$. A surprising fact is that for a given system size there is an optimal value $n=n_{o}$ for which the relaxation time is minimal: for $N>350$, $n_{o}=1$, while for $50<N<350$, $n_{o}=2$. It is also surprising that even for $N$ as small as $N=20$ the gap in $\tau$ between the uncorrelated and correlated cases is about $10^{3}$, which is a really spectacular difference.
Figure 1: Median of $\tau(N)$ for different types of preference lists on a log-log scale, where $n$ is the number of qualities describing attractiveness. Each result is an average over 100 MC simulations. Figure 2: Median of $\tau(N)$ for two different $n$ values on a log-log scale, where $n$ is the number of qualities describing attractiveness. Each result is an average over 100 MC simulations.
## 4 Discussion
As has been shown, for $N>350$ the more correlated the lists are, the shorter the relaxation time $\tau$.
In other words: the more common the tastes, or the more the individuals differ, the shorter the times. One can put forward the hypothesis that shorter mating times also make reproduction more effective. Under the assumption that only stable marriages can have children, a younger couple can have more children than an older one. Therefore natural selection could favour some kind of common taste and diversity of individuals, so as to maximise the offspring count and minimise the cost of sexual selection.
For smaller $N$ there are some other differences. Surprisingly, for smaller groups this time is minimal for $n>1$ (see fig. 2), which means weaker correlation and stronger personalisation. For some reason, in small groups a certain weakening of the correlation, i.e. more individualisation, causes quicker matching.
## References
* [1] B. Aldershof, O.M. Carducci, Stable matchings with couples, Discrete Applied Mathematics, 68 (1996), 203-207.
* [2] A. Alkan, D. Gale, Stable schedule matching under revealed preference, Journal of Economic Theory, 112 (2003), 289-306.
* [3] C.T. Bergstrom, L.A. Real, Toward a theory of mutual mate choice: Lessons from two-sided matching, Evolutionary Ecology Research, 2 (2000), 493-508.
* [4] T. Fleiner, A fixed-point approach to stable matchings and some applications, Mathematics of Operations Research, 28 (2003), 103-126.
* [5] M. Fafchamps, A. Quisumbing, Assets at marriage in rural Ethiopia, Journal of Development Economics, 77 (2005), 1-25.
* [6] D. Gale, The two-sided matching problem. Origin, development and current issues, International Game Theory Review, 3 (2001), 237-252.
* [7] D. Gusfield, R.W. Irving, The Stable Marriage Problem: Structure and Algorithms, MIT Press, Cambridge, MA, 1989.
* [8] D. Gusfield, R.W. Irving, The Stable Marriage Problem: Structure and Algorithms, MIT Press, Cambridge, MA, 1989.
* [9] B. Klaus, F. Klijn, Stable matchings and preferences of couples, Journal of Economic Theory, 121 (2005), 75-106.
* [10] A.E. Roth, M.A. Sotomayor, Two-sided matching. A study in game-theoretic modeling and analysis, Cambridge University Press, 1992.
* [11] G. Caldarelli, A. Capocci, P. Laureti, Physica A 299 (2001) 268-272.
* [12] G. Caldarelli, A. Capocci, Physica A 300 (2001) 325-331.
* [13] P. Laureti, Y.-C. Zhang, Physica A 324 (2003) 49-65.
* [14] S. Sprecher, E. Hatfield (in press/2009).
|
arxiv-papers
| 2011-05-31T20:00:53 |
2024-09-04T02:49:19.230401
|
{
"license": "Public Domain",
"authors": "P. Nyczka, J. Cis{\\l}o",
"submitter": "Piotr Nyczka",
"url": "https://arxiv.org/abs/1106.0010"
}
|
1106.0035
|
# Multiband Transport in Bilayer Graphene at High Carrier Densities
Dmitri K. Efetov Patrick Maher Simas Glinskis Philip Kim Department of
Physics, Columbia University New York, NY 10027
###### Abstract
We report a multiband transport study of bilayer graphene at high carrier
densities. Employing a poly(ethylene)oxide-CsClO4 solid polymer electrolyte
gate we demonstrate the filling of the high energy subbands in bilayer
graphene samples at carrier densities $|n|\geq 2.4\times 10^{13}$ cm-2. We
observe a sudden increase of resistance and the onset of a second family of
Shubnikov de Haas (SdH) oscillations as these high energy subbands are
populated. From simultaneous Hall and magnetoresistance measurements together
with SdH oscillations in the multiband conduction regime, we deduce the
carrier densities and mobilities for the higher energy bands separately and
find the mobilities to be at least a factor of two higher than those in the
low energy bands.
###### pacs:
73.63.b, 73.22.f, 73.23.b
Multiband transport is common for many complex metals where different types of
carriers on different pieces of the Fermi Surface (FS) carry electrical
currents. Conduction in this regime is controlled by the properties of the
individual subbands, each of which can have distinct mobilities, band masses,
and carrier densities. Other changes to the single-band conduction model
include inter-band scattering processes and mutual electrostatic screening of
carriers in different subbands, which alters the effective strength of the
Coulomb potential and hence adjusts the strength of electron-electron and
electron-charged impurity interactions.
To understand electronic conduction in this regime, it is desirable to study
the properties of the individual bands separately and compare these to the
properties in the multiband regime. This was achieved in 2-dimensional
electron gases (2DEGs) formed in GaAs quantum wells GaAs82 , where the
subbands can be continuously populated and depopulated by inducing parallel
magnetic fields. In these 2DEGs, an increased overall scattering rate due to
interband scattering was observed upon the single- to multiband transition,
GaAs90 ; GaAs97 , along with changes in the effective Coulomb potential which
led to the observation of new filling factors in the fractional quantum Hall
effect Shayegan10 .
Figure 1: (a) The tight-binding band structure of bilayer graphene for
interlayer asymmetries $\Delta=0$ eV (gray) and $\Delta=0.6$ eV (black). (b)
Schematic view of the double gated device, consisting of the SiO2/Si back gate
and the electrolytic top gate. Debye layers of Cs+ or ClO${}_{4}^{-}$ ions are
formed $d\sim 1$ nm above the bilayer and the gate electrode, respectively.
(c) Longitudinal resistivity and Hall resistance of the bilayer graphene
device at $T=2$ K as a function of back gate voltage $V_{bg}$ for 3 different
fixed electrolyte gate voltages $V_{eg}=$ -1.7, -0.4, and 1 V from left to
right, corresponding to predoping levels of $n_{H}=$ (-2.9, 0, 2.9)$\times
10^{13}$ cm-2. Inset shows an optical microscope image of a typical Hall Bar
device (the scale bar corresponds to 5 $\mu$m). Figure 2: Landau fan diagram
of the differential longitudinal resistivity
$\mathrm{d}\rho_{xx}/\mathrm{d}n_{H}$ for 3 different density ranges at $T=2$
K as a function of the Hall density $n_{H}$ and the magnetic field. (center)
The SdH oscillations in the LES converge at the CNP and flatten out at higher
$n_{H}$ due to decreasing LL separation. (left and right) For
$|n_{H}|>2.6\times 10^{13}$ cm-2 additional SdH oscillations appear,
originating at the resistivity spikes (red regions) that mark the onset of the
HES.
Bilayer graphene (BLG)Geim07 ; Geim08 ; McCann06 ; Rotenberg06 ; Novoselov07 ,
with its multiband structure and strong electrostatic tunability, offers a
unique model system to investigate multiple band transport phenomena. BLG’s
four-atom unit cell yields a band structure described by a pair of low energy
subbands (LESs) touching at the charge neutrality point (CNP) and a pair of
high energy subbands (HESs) whose onset is $\sim\pm$0.4 eV away from the CNP
(Fig.1(a)). Specifically, the tight binding model yields the energy dispersion
McCann06 :
$\epsilon_{1,2}^{\pm}(k)=\pm\sqrt{\frac{\gamma_{1}^{2}}{2}+\frac{\Delta^{2}}{4}+v_{F}^{2}{k}^{2}\pm\sqrt{\frac{\gamma_{1}^{4}}{4}+v_{F}^{2}{k}^{2}(\gamma_{1}^{2}+\Delta^{2})}},$
(1)
where the upper index indicates the conduction (+) and valence (-) bands and the lower index the LES (1) and HES (2), $k$ is the wave vector measured from the Brillouin
zone corner, $v_{F}\approx$106 m/s is the Fermi velocity in single layer
graphene, $\gamma_{1}\approx 0.4$ eV is the interlayer binding energy, and
$\Delta$ is the interlayer potential asymmetry. Interestingly, since a
perpendicular electric field $E$ across the sample gives rise to an interlayer
potential difference $\Delta$, it opens up a gap in the spectrum of the LES
Rotenberg06 ; band gap ; Fai09 and is furthermore predicted to adjust the
onset energy of the HES. Whereas the LESs have been widely studied, the HESs,
with their expected onset density of $n\gtrsim 2.4\times 10^{13}$ cm-2
Rotenberg06 , have thus far not been accessed in transport experiments. This
can mainly be attributed to the carrier density limitations set by the
dielectric breakdown of the conventional SiO2/Si back gates, which do not
permit the tuning of carrier densities above $n\approx 0.7\times 10^{13}$ cm-2
($\epsilon_{F}\approx$ 0.2 eV).
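As a rough consistency check of this onset density (a sketch of ours, not taken from the paper; $\Delta$ is set to zero, $\hbar$ is restored explicitly, and a four-fold spin/valley degeneracy is assumed), Eq. (1) gives $\hbar v_{F}k^{*}=\sqrt{2}\,\gamma_{1}$ for the LES Fermi wave vector when the Fermi level reaches the HES edge:

```python
import numpy as np

hbar, e = 1.0546e-34, 1.602e-19     # J*s, C
vF = 1.0e6                          # m/s, single-layer Fermi velocity
gamma1 = 0.4 * e                    # J, interlayer coupling gamma_1

def bands(k, Delta=0.0):
    """Conduction-band branches of the Eq. (1) dispersion; k in 1/m, energies in J."""
    x = (hbar * vF * k) ** 2
    root = np.sqrt(gamma1**4 / 4 + x * (gamma1**2 + Delta**2))
    common = gamma1**2 / 2 + Delta**2 / 4 + x
    return np.sqrt(common - root), np.sqrt(common + root)   # (LES, HES)

# LES Fermi wave vector k* at which eps_F reaches the HES band edge gamma_1 (Delta = 0)
k_star = np.sqrt(2.0) * gamma1 / (hbar * vF)
assert np.isclose(bands(k_star)[0], gamma1)
n_onset = 4 * np.pi * k_star**2 / (2 * np.pi) ** 2    # 4-fold spin/valley degeneracy
print(f"HES onset density ~ {n_onset * 1e-4:.1e} cm^-2")  # ~ 2.4e13 cm^-2
```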
In this letter, we report multiband transport in bilayer graphene. Using an
electrolytic gate, we were able to populate the HES of bilayer graphene,
allowing for both the LES and HES to be occupied simultaneously. The onset of
these subbands is marked by an abrupt increase of the sample resistivity, most
likely due to the opening of an interband scattering channel, along with the
appearance of a new family of Shubnikov-de Haas (SdH) oscillations associated
with the HES. A detailed analysis of the magneto- and Hall resistivities in
combination with the HES SdH oscillations in this regime enables us to
estimate the carrier mobilities in each subband separately, where we observe a
two-fold enhanced mobility of the HES carriers as compared to the LES carriers
at the same band densities.
Bilayer graphene devices were fabricated by mechanical exfoliation of Kish
graphite onto 300 nm thick SiO2 substrates, which are backed by degenerately
doped Si to form a back gate. The samples were etched into a Hall bar shape
with a typical channel size of $\sim$ 5 $\mu$m and then contacted with Cr/Au
(0.5/30 nm) electrodes through electron-beam lithography (Fig. 1(c) inset). In order to
access the HES we utilized a recently developed solid polymer electrolyte
gating techniqueelectrolyte ; Iwasa08 ; Fai09 ; me10 , which was recently
shown to induce carrier densities beyond values of $n>10^{14}$ cm-2 me10 in
single layer graphene. The working principle of the solid polymer electrolyte
gate is shown in Fig.1(b). Cs+ and ClO${}_{4}^{-}$ ions are mobile in the
solid matrix formed by the polymer poly(ethylene)oxide (PEO). Upon applying a
gate voltage $V_{eg}$ to the electrolyte gate electrode, the ions form a thin
Debye layer a distance $d\sim 1$ nm away from the graphene surface. The
proximity of these layers to the graphene surface results in huge capacitances
per unit area $C_{eg}$, enabling extremely high carrier densities in the
samples. While CsClO4 has almost the same properties as the typically used
LiClO4 salt, we find a reduced sample degradation upon application of the
electrolyte on top of the sample, resulting in considerably higher sample
mobilities.
One major drawback of the electrolyte gate for low temperature studies is that it cannot be tuned at temperatures below $T\approx 250$ K, where the ions start to freeze out in the
polymer and become immobile (though leaving the Debye layers on the bilayer
surface intact) Iwasa08 ; me10 . A detailed study of the density dependent
transport properties at low temperatures can therefore be quite challenging.
In order to overcome this issue, we employ the electrolyte gate just to coarsely tune the density to high values ($|n|<10^{14}$ cm-2) at $T=300$ K,
followed by an immediate cooldown to $T=2$ K. We then use the standard SiO2/Si
back gate to map out the detailed density dependence of the longitudinal sheet
resistivity $\rho_{xx}$ and the Hall resistance $R_{H}$, from which we extract
the total carrier density of the sample $n_{H}=B/eR_{H}$, with $B$ the
magnetic field and $e$ the electron charge. Here we find the back gate
capacitance to be $C_{bg}=141$ aF/$\mu$m2, almost unaltered by the presence of
the Debye layers on top of the sample.
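In this scheme the total low-temperature density is simply the frozen-in electrolyte predoping plus the back-gate contribution; a minimal sketch (ours; the charge-neutrality-point offset $V_{cnp}$ is an assumed parameter):

```python
C_bg = 141e-18 / 1e-12     # back-gate capacitance: 141 aF/um^2 expressed in F/m^2
e = 1.602e-19              # C

def total_density(V_bg, n_predope, V_cnp=0.0):
    """Carrier density (1/m^2) from the electrolyte predoping plus back-gate tuning;
    V_cnp is the (assumed) back-gate voltage of the charge-neutrality point."""
    return n_predope + C_bg * (V_bg - V_cnp) / e
```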
In this experiment, we have measured $\rho_{xx}$ and $R_{H}$ of more than 3
BLG devices as a function of the back gate voltage $V_{bg}$ at various fixed
$V_{eg}$ corresponding to the wide density range of $n_{H}\sim\pm 8\times
10^{13}$ cm-2. Fig.1(c) shows $\rho_{xx}$ and $R_{H}$ for a representative
device for 3 selected cool-downs at $V_{eg}=$ -1.7, -0.4, 1 V from left to
right, corresponding to a pre-doping level of $n_{H}=$(-2.9, 0, 2.9)$\times
10^{13}$ cm-2. For low doping levels ($V_{eg}=$-0.4 V, Fig.1(c) middle) we
observe the expected Dirac Peak in $\rho_{xx}$ and the ambipolar transition of
$R_{H}$ as $V_{bg}$ sweeps through the CNP. Away from the CNP, $\rho_{xx}$ and
$R_{H}$ decrease as $|n_{H}|$ increases, as was observed before in BLG samples
Novoselov07 . For the strongly pre-doped gate sweeps however (Fig.1(c) left
and right), we observe a rather unexpected non-monotonic feature in the sample
resistivity. Instead of a monotonic decrease of $\rho_{xx}$ with increasing
$|n_{H}|$, it exhibits an abrupt increase by $\sim 10\%$ symmetrically at both
electron and hole sides at $n^{*}\sim|n_{H}|=2.6\times 10^{13}$ cm-2, a
carrier density which is consistent with theoretical expectations for the
onset density of the HES McCann06 ; Rotenberg06 . A similarly increasing
resistivity at the opening of a new subband was previously observed in 2D
electron systems formed in wide GaAs quantum wells GaAs82 ; GaAs90 ; GaAs97 ,
where it was attributed to a decreased overall scattering time $\tau$ due to
the opening of an additional inter-band scattering channel as the new subbands
are populated. Such an inter-band scattering mechanism between the LES and the
HES is also expected to give rise to a resistivity increase upon filling of
the HES in BLG samples, making it a likely candidate for the origin of the
observed resistivity increase. However, considering the strong differences
between the 2D electron gases in GaAs quantum wells and in BLG, including
vastly different densities of states, mobilities, and electron energies, more
theoretical work needs to be done to conclusively determine the origin of this
resistivity increase.
The electronic structure of the LES and HES can be further investigated by
studying the effect of the magnetic field $B$ on the longitudinal resistivity
$\rho_{xx}(B)$ in the various density ranges. Fig.2 shows the Landau fan
diagram of the differential sheet resistivity
$\mathrm{d}\rho_{xx}/\mathrm{d}n_{H}$ as a function of $B$ and $n_{H}$. Close
to the CNP (Fig.2 center) the SdH oscillations in the two LES are quite
pronounced, but with increasing density their amplitude quickly decays as the
energy separation of the Landau Levels (LL) decreases. Above the onset of the
HES (Fig.2 left and right), marked by the “spikes” of increased resistivity
(here the red regions) however, we observe another set of SdH oscillations
which form LL fans converging into the onset point of the HES.
In order to analyze the SdH oscillations, we now plot the $\rho_{xx}(B)$
traces for various fixed $n_{H}$ as a function of the inverse magnetic field
$B^{-1}$. Fig. 3(a) displays three exemplary traces at different $n_{H}$ above
the onset density of the HES. All traces show periodic oscillations in
$B^{-1}$ allowing us to obtain the SdH density,
$n_{SdH}=\frac{4e}{h}\Delta(B^{-1})$, assuming that each LL is both spin and
valley degenerate. Whereas for all $|n_{H}|<n^{*}$ we find that the obtained
$n_{SdH}\approx n_{H}$, indicating that the SdH oscillations are solely from a
single band (i.e., the LES), for $|n_{H}|>n^{*}$ the obtained $n_{SdH}$ values
are much smaller than the simultaneously measured $n_{H}$ values. This
behavior can be well explained by assuming that these SdH oscillations reflect
only the small fraction of charge carriers lying in the HES. For
$|n_{H}|>n^{*}$ we hence are able to extract the occupation densities of the
LES ($n_{LES}$) and HES ($n_{HES}$) from $n_{LES}=n_{H}-n_{SdH}$ and
$n_{HES}=n_{SdH}$. Fig. 3(b) shows the $|n_{LES}|$ and $|n_{HES}|$ in this
regime as a function of the total carrier density $|n_{H}|$. For each fixed
$V_{eg}$, the obtained $|n_{LES}|$ and $|n_{HES}|$ increase as $|n_{H}|$
increases (adjusted by $V_{bg}$), for both electrons and holes. Interestingly,
we notice that the $|n_{LES}(n_{H})|$ are slightly larger for larger
$|V_{eg}|$ while the trend is opposite for the HES, i.e. $|n_{HES}(n_{H})|$
are smaller for larger $|V_{eg}|$, even though their $n_{H}$ values are in
similar ranges. These general trends can be explained by an increase of the
interlayer potential difference $\Delta$ for increased values of $|V_{eg}|$,
which are predicted by the tight-binding model in Eq.1 to result in an
increase of the onset density (energy) of the HES.
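A minimal sketch of this bookkeeping (ours; the peak-finding input and the 90% threshold separating the single-band from the two-band regime are assumptions made for the sketch):

```python
import numpy as np

def band_densities(B_inv_peaks, n_H):
    """Split the total Hall density n_H into LES and HES occupations (SI units).

    B_inv_peaks : positions (in 1/T) of consecutive SdH maxima of one family.
    Assumes spin- and valley-degenerate Landau levels, so the oscillating band
    holds n_SdH = (4e/h) / Delta(1/B) carriers, with Delta(1/B) the period."""
    e, h = 1.602e-19, 6.626e-34
    period = np.mean(np.diff(np.sort(B_inv_peaks)))
    n_SdH = 4 * e / (h * period)
    if n_SdH < 0.9 * n_H:           # oscillations come from the small HES pocket
        return n_H - n_SdH, n_SdH   # (n_LES, n_HES)
    return n_H, 0.0                 # single-band regime: all carriers in the LES
```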
Figure 3: (a) Exemplary traces of the longitudinal resistivity as a function
of inverse magnetic field at fixed values of $n_{H}$ beyond the onset of the
HES. (b) Carrier densities inferred from the SdH oscillations vs. the overall
Hall densities $n_{H}$, from 4 cooldowns at different set electrolyte gate
voltages $V_{eg}=$ -2 V (red), -1.7 V (orange), 1 V (purple), 1.4 V (blue).
(bottom) $|n_{HES}|$ vs. $n_{H}$, fitted with theoretical expectations for the
HES. (top) $|n_{LES}|$ vs. $n_{H}$, fitted with theoretical expectations for
the LES. Line traces correspond to theoretical fits for different values of
$\Delta=$ 0.31 eV (red), 0.17 eV (orange), 0.13 eV (purple), 0.26 eV (blue).
While a precise quantitative determination of the expected shift in the onset
density of the HES as a function of $V_{eg}$ and $V_{bg}$ requires a self-
consistent calculation of $\Delta(V_{eg},V_{bg})$ and would go beyond the
scope of this paper, we can still qualitatively test the above prediction.
This is possible since $\Delta$ is mostly controlled by $V_{eg}$, which has a
much stronger coupling to the BLG sample than the $V_{bg}$, thus allowing us
to approximately treat $\Delta$ as a constant for fixed $V_{eg}$. Since the
experimental traces displayed in Fig.3(b) correspond to different values of
$V_{eg}$ but the same ranges of $V_{bg}$, $\Delta$ is different for each trace
and can be extracted from the theoretical fits from Eq. 1, with $\Delta$ as
the only fitting parameter. Indeed for all 4 traces we find good agreement
with the theoretical fits; we clearly observe an enhanced onset density
(energy) for the traces with larger set potential differences across the
sample, which is in good qualitative agreement with theoretical predictions.
We now turn our attention to the transport properties of BLG in the limit of
$n_{H}>n^{*}$. The filling of these sub-bands creates a parallel transport
channel in addition to the one in the LES, thus defining the transport
properties in this regime by two types of carriers with distinct mobilities
$\mu_{1,2}$, effective masses $m^{*}_{1,2}$ and subband densities $n_{1,2}$
(here the index corresponds to the LES(1) and HES(2)) Fuhrer08 ; vanHouten88 .
In sharp contrast to a single band Drude model, where $\rho_{xx}(B)$ does not
depend on the $B$ field, in a two-carrier Drude theory it is expected to
become strongly modified, resulting in a pronounced $B$ field dependence
Ashcroft76 :
$\rho_{xx}(B)=\frac{n_{1}\mu_{1}+n_{2}\mu_{2}+(n_{1}\mu_{1}\mu_{2}^{2}+n_{2}\mu_{2}\mu_{1}^{2})B^{2}}{e((n_{1}\mu_{1}+n_{2}\mu_{2})^{2}+\mu_{1}^{2}\mu_{2}^{2}(n_{1}+n_{2})^{2}B^{2})},$
(2)
Fig.4(a) shows magnetoresistance traces for different fixed Hall densities
$n_{H}$. Close to the CNP, where only the LES are populated (Fig.4(a) black
trace), the $\rho_{xx}(B)$ traces are nearly flat as expected from the one-
fluid Drude theory. When the density is increased and the HES starts to fill
up, however, we observe a smooth transition to an approximately parabolic $B$
field dependence, resulting in a strong increase of $\rho_{xx}$ of up to
25$\%$ from 0 T to 8 T. Using the previously extracted carrier densities in
the two bands $n_{1,2}$ we can now fit the $\rho_{xx}(B)$ traces with the two-
carrier Drude model in Eq.2, with the mobilities of the two subbands
$\mu_{1,2}$ as the only fitting parameters. As shown in Fig.4(b) the
experimental findings are in excellent agreement with the theory, allowing us
to deduce the values of $\mu_{1,2}$ with good accuracy. Moreover, the ability
to extract the mobilities of the HES allows us now to characterize the HES in
more detail.
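A minimal sketch of such a two-carrier fit (ours; scipy.optimize.curve_fit and the initial guesses are assumptions, not the authors’ procedure), with the band densities held fixed at the values obtained from the SdH analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

e = 1.602e-19  # C

def rho_xx_two_band(B, mu1, mu2, n1, n2):
    """Two-carrier Drude longitudinal resistivity of Eq. (2), SI units."""
    num = n1 * mu1 + n2 * mu2 + (n1 * mu1 * mu2**2 + n2 * mu2 * mu1**2) * B**2
    den = e * ((n1 * mu1 + n2 * mu2) ** 2
               + mu1**2 * mu2**2 * (n1 + n2) ** 2 * B**2)
    return num / den

def fit_mobilities(B, rho_xx, n_LES, n_HES):
    """Fit mu1 (LES) and mu2 (HES) with the band densities held fixed."""
    model = lambda B, mu1, mu2: rho_xx_two_band(B, mu1, mu2, n_LES, n_HES)
    p0 = (0.05, 0.2)                  # initial guesses in m^2/Vs (assumed)
    (mu1, mu2), _ = curve_fit(model, B, rho_xx, p0=p0)
    return mu1, mu2
```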
In Fig.4(c) we plot the extracted mobilities of the HES $\mu_{2}$ against the
carrier density in the HES $n_{2}$ and compare it to the mobilities $\mu_{1}$
of the LES at a similar range of subband densities in the LES $n_{1}$. We find
that the mobilities in the HES are at least a factor of two higher than those
in the LES. Considering that the effective carrier masses are similar for the
LES and the HES, this feature of the HES may be due to the enhanced screening
of charged impurity scatterers at higher carrier densities, effectively
reducing the scattering rate of the HES carriers on these scatterers. A more
detailed theoretical study is required, however, to undertake a quantitative
analysis of this problem.
Figure 4: (a) Longitudinal resistivity $\rho_{xx}(B)$ as a function of
magnetic field for $n_{H}=$ (0.84, 2.88, 3.00, 3.31, 3.83, 4.21, 6.96)$\times
10^{13}$ cm-2, from top to bottom. The $\rho_{xx}(B)$ traces undergo a smooth
transition from a nearly $B$ independent behavior when the HES is empty (below
$n_{H}<n^{*}\sim 2.6\times 10^{13}$ cm-2), to a strong, non-trivial $B$
dependence when the HES is occupied. (b) An exemplary $\rho_{xx}(B)$ trace at
$n_{H}=$6.96$\times 10^{13}$ cm-2 and $n_{SdH}=$1.42$\times 10^{13}$ cm-2 with
accompanying fit (dashed line) from Eq.2, using the mobilities in the LES
$\mu_{1}=541$ cm2/Vs and the HES $\mu_{2}=2428$ cm2/Vs as fitting parameters.
(c) The mobilities $\mu_{1,2}(n)$ as extracted from $\rho_{xx}(B)$ traces at
various fixed $n_{H}$ as a function of the density in the individual subbands.
In conclusion, using a polymer electrolyte gate we have achieved two-band
conduction in bilayer graphene. We have found that the filling of these bands
above a Hall density of $|n_{H}|>2.4\times 10^{13}$ cm-2 is marked by an
increase of the sample resistivity by $\sim 10\%$ along with the onset of SdH
oscillations. From simultaneous Hall and magnetoresistivity measurements, as
well as the analysis of the SdH oscillations in the two carrier conduction
regime, we have characterized the distinct carrier densities and mobilities of
the individual subbands, where we have found a strongly enhanced carrier
mobility in the HES of bilayer graphene.
The authors thank I.L. Aleiner, E. Hwang and K.F. Mak for helpful discussion.
This work is supported by the AFOSR MURI, FENA, and DARPA CERA. Sample
preparation was supported by the DOE (DE-FG02-05ER46215).
## References
* (1) H. L. Stormer et al., Sol. St. Com. 41, 707-709 (1982).
* (2) G. R. Facer et al., Phys. Rev. B 56, 10036-10039 (1997).
* (3) D. R. Leadley et al., Semi. Sci. Tech. 5, 1081-1087 (1990).
* (4) J. Shabani et al., arXiv:1004.0979v2 (2010).
* (5) A. K. Geim and K. S. Novoselov, Nat. Mater. 6, 183 (2007).
* (6) A. K. Geim and P. Kim, Scientific American 298, 68 (2008).
* (7) E. McCann et al., Phys. Rev. B 74, 161403 (2006).
* (8) T. Ohta et al., Science 313, 951 (2006).
* (9) K. S. Novoselov et al., Nature Physics 2, 177-180 (2006).
* (10) E. V. Castro et al., Phys. Rev. Lett. 99, 216802 (2007); J. B. Oostinga et al., Nature Mater. 7, 151 (2008); Y. Zhang et al., Nature (London) 459, 820 (2009).
* (11) M. J. Panzer et al., Advanced Materials 20, 3177-3180 (2008); A. Das et al., Nature Nanotechnology 3, 210-215 (2008); J. Yan et al., Phys. Rev. B 80, 241417 (2009).
* (12) K. Ueno et al., Nature Materials 7, 855-858 (2008).
* (13) K. F. Mak et al., Phys. Rev. Lett. 102, 256405 (2009).
* (14) D. K. Efetov and P. Kim, Phys. Rev. Lett. 105, 256805 (2010).
* (15) S. Cho and M. S. Fuhrer, Phys. Rev. B 77, 084102 (2008).
* (16) H. van Houten et al., Phys. Rev. B 37, 2756-2758 (1988).
* (17) N. W. Ashcroft and N. D. Mermin, Solid State Physics, Thomson Learning Inc. (1976).
|
arxiv-papers
| 2011-05-31T21:47:28 |
2024-09-04T02:49:19.235379
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Dmitri K. Efetov, Patrick Maher, Simas Glinskis and Philip Kim",
"submitter": "Dmitri K. Efetov",
"url": "https://arxiv.org/abs/1106.0035"
}
|
1106.0036
|
# Riemann–Hilbert problems, matrix orthogonal polynomials
and discrete matrix equations with singularity confinement
Giovanni A. Cassatella-Contra† and Manuel Mañas‡
Departamento de Física Teórica II
(Métodos Matemáticos de la Física)
Universidad Complutense de Madrid
28040-Madrid, Spain
(†gaccontra@fis.ucm.es, ‡manuel.manas@fis.ucm.es)
###### Abstract
In this paper matrix orthogonal polynomials in the real line are described in
terms of a Riemann–Hilbert problem. This approach provides an easy derivation
of discrete equations for the corresponding matrix recursion coefficients. The
discrete equation is explicitly derived in the matrix Freud case, associated
with matrix quartic potentials. It is shown that, when the initial condition
and the measure are simultaneously triangularizable, this matrix discrete
equation possesses the singularity confinement property, independently if the
solution under consideration is given by recursion coefficients to quartic
Freud matrix orthogonal polynomials or not.
## 1 Introduction
The study of singularities of the solutions of nonlinear ordinary differential equations and, in particular, the quest for equations whose solutions are free of movable critical points, the so-called Painlevé property, led, more than 110 years ago, to the Painlevé transcendents; see [1] (and [2] for a recent account of the state of the art in this subject). The Painlevé equations are relevant in a diversity of fields, not only in Mathematics but also, for example, in Theoretical Physics, in particular in 2D Quantum Gravity and Topological Field Theory; see for example [2].
A discrete version of the Painlevé property, the singularity confinement
property, was introduced for the first time by Grammaticos, Ramani and
Papageorgiou in 1991 [3], when they studied some discrete equations, including
the dPI equation (discrete version of the first Painlevé equation), see also
the contribution of these authors to [2]. For this equation they realized that
if eventually a singularity could appear at some specific value of the
discrete independent variable it would disappear after performing few steps or
iterations in the equation. This property, as mentioned previously, is
considered by these authors as the equivalent of the Painlevé property [1] for
discrete equations. Hietarinta and Grammaticos also derived some discrete
versions of the other five Painlevé equations [4, 5]. See also the interesting
papers [6] and [7].
Freud orthogonal polynomials in the real line [8] are associated to the weight
$\displaystyle w_{\rho}(x)$ $\displaystyle=|x|^{\rho}e^{-|x|^{m}},$
$\displaystyle\rho$ $\displaystyle>-1,\quad m>0.$
Interestingly, for $m=2,4,6$ it has been shown [9] that from the recursion
relation
$\displaystyle xp_{n}=a_{n+1}p_{n+1}(x)+b_{n}p_{n}(x)+a_{n}p_{n-1}(x),$
the orthogonality of the polynomials leads to a recursion relation satisfied
by the recursion coefficients $a_{n}$. In particular, for $m=4$ Van Assche
obtains for $a_{n}$ the discrete Painlevé I equations, and therefore its
singularities are confined. For related results see also [10]. For a modern
and comprehensive account of this subject see the survey [11].
In 1992 it was found [12] that the solution of a $2\times 2$ Riemann-Hilbert
problem can be expressed in terms of orthogonal polynomials in the real line
and its Cauchy transforms. Later on this property has been used in the study
of certain properties of asymptotic analysis of orthogonal polynomials and
extended to other contexts, for example for the multiple orthogonal
polynomials of mixed type [13].
Orthogonal polynomials with matrix coefficients on the real line were first considered in detail by Krein [14, 15] in 1949, and then studied sporadically until the last decade of the 20th century. Some papers on this subject are: Berezanskii (1968) [16], Geronimo (1982) [17], and Aptekarev and Nikishin (1984) [18]. In the last paper they solved the scattering problem for
a kind of discrete Sturm–Liouville operators that are equivalent to the
recurrence equation for scalar orthogonal polynomials. They found that
polynomials that satisfy a recurrence relation of the form
$\displaystyle xP_{k}(x)$
$\displaystyle=A_{k}P_{k+1}(x)+B_{k}P_{k}(x)+A_{k-1}^{*}P_{k-1}(x),$
$\displaystyle k$ $\displaystyle=0,1,....$
are orthogonal with respect to a positive definite measure. This is a matrix version of Favard’s theorem for scalar orthogonal polynomials. Then, in the 1990s and the 2000s some authors found that matrix orthogonal polynomials (MOP) satisfy, in certain cases, properties analogous to those of scalar-valued orthogonal polynomials such as the Laguerre, Hermite and Jacobi polynomials, e.g. a scalar-type Rodrigues’ formula [19, 20, 21] and a second order differential equation [22, 23, 24].
Later on, it was proven [25] that operators of the form $D={\partial}^{2}F_{2}(t)+{\partial}^{1}F_{1}(t)+{\partial}^{0}F_{0}$
have as eigenfunctions different infinite families of MOP’s. Moreover, in [24]
a new family of MOP’s satisfying second order differential equations whose
coefficients do not behave asymptotically as the identity matrix was found.
See also [26].
The aim of this paper is to explore the singularity confinement property in
the realm of matrix orthogonal polynomials. For that aim following [12] we
formulate the matrix Riemann–Hilbert problem associated with the MOP’s. From
the Riemann–Hilbert problem it follows not only the recursion relations but
also, for a type of matrix Freud weight with $m=4$, a nonlinear recursion
relation (58) for the matrix recursion coefficients, that might be considered
a matrix version –non Abelian– of the discrete Painlevé I. Finally, we prove
that this matrix equation possesses the singularity confinement property, and
that after a maximum of 4 steps the singularity disappears. This happens when
the quartic potential $V$ and the initial recursion coefficient are
simultaneously triangularizable. It is important to notice that the recursion
coefficients for the matrix orthogonal Freud polynomials provide solutions to
(58) and therefore the singularities are confined. A relevant fact for this
solution is that the collection of all recursion coefficients is an Abelian
set of matrices. However, not all solutions of (58) define a commutative set;
nevertheless, the singularity confinement still holds. In this respect we must
stress that our singularity confinement proof does not rely on matrix orthogonal polynomial theory but only on the analysis of the discrete equation. This
special feature is not present in the scalar case previously studied
elsewhere.
The layout of this paper is as follows. In section 2 the Riemann–Hilbert
problem for matrix orthogonal polynomials is derived and some of its
consequences studied. In §3 a discrete matrix equation, for which the
recursion coefficients of the Freud MOP’s are solutions, is derived and it is
also proven that its singularities are confined. Therefore, it might be
considered as a matrix discrete Painlevé I equation.
## 2 Riemann–Hilbert problems and matrix orthogonal polynomials in the real
line
### 2.1 Preliminaries on monic matrix orthogonal polynomials in the real line
A family of matrix orthogonal polynomials (MOP’s) in the real line [11] is
associated with a matrix-valued measure $\mu$ on $\mathbb{R}$; i.e., a countably additive assignment of a positive semi-definite $N\times N$ Hermitian matrix $\mu(X)$ to every Borel set $X\subset\mathbb{R}$. However, in this paper we restrict ourselves to the following case: given an $N\times N$ Hermitian matrix $V(x)=(V_{i,j}(x))$, we choose $\mathrm{d}\mu=\rho(x)\mathrm{d}x$, where $\mathrm{d}x$ is the Lebesgue measure in $\mathbb{R}$, with the weight function specified by $\rho=\exp(-V(x))$
(thus $\rho$ is a positive semi-definite Hermitian matrix). Moreover, we will
consider only even functions in $x$, $V(x)=V(-x)$; in this situation the
finiteness of the measure $\mathrm{d}\mu$ is achieved for any set of
polynomials $V_{i,j}(x)$ in $x^{2}$. Associated with this measure we have a
unique family $\\{P_{n}(x)\\}_{n=0}^{\infty}$ of monic matrix orthogonal
polynomials
$\displaystyle
P_{n}(z)={\mathbb{I}}_{N}z^{n}+\gamma_{n}^{(1)}z^{n-1}+\cdots+\gamma_{n}^{(n)}\in{\mathbb{C}}^{N\times{N}},$
such that
$\displaystyle\int_{\mathbb{R}}P_{n}(x)x^{j}\rho(x)\text{d}x$
$\displaystyle=0,$ $\displaystyle j$ $\displaystyle=0,\dots,n-1.$ (1)
Here $\mathbb{I}_{N}$ denotes the identity matrix in $\mathbb{C}^{N\times N}$.
In terms of the moments of the measure $\mathrm{d}\mu$,
$\displaystyle m_{j}$
$\displaystyle:=\int_{\mathbb{R}}x^{j}\rho(x)\text{d}x\in{\mathbb{C}}^{N{\times}N},$
$\displaystyle j$ $\displaystyle=0,1,\dots$
we define the truncated moment matrix
$\displaystyle m^{(n)}:=(m_{i,j})\in\mathbb{C}^{nN\times nN},$
with $m_{i,j}=m_{i+j}$ and $0\leq{i,j}\leq{n-1}$. Invertibility of
$m^{(n)}$, i.e. $\det m^{(n)}\neq 0$, is equivalent to the existence of a
unique family of monic matrix orthogonal polynomials. In fact, we can write
(1) as
$\displaystyle\begin{pmatrix}m_{0}&\cdots&m_{n-1}\\\ \vdots&&\vdots\\\
m_{n-1}&\cdots&m_{2n-2}\end{pmatrix}\begin{pmatrix}\gamma_{n}^{(n)}\\\
\vdots\\\ \gamma_{n}^{(1)}\end{pmatrix}=\begin{pmatrix}-m_{n}\\\ \vdots\\\
-m_{2n-1}\end{pmatrix},$ (2)
and hence uniqueness is equivalent to $\det m^{(n)}\neq 0$. From the
uniqueness and evenness we deduce that
$\displaystyle
P_{n}(z)={\mathbb{I}}_{N}z^{n}+\gamma_{n}^{(2)}z^{n-2}+\gamma_{n}^{(4)}z^{n-4}+\cdots+\gamma_{n}^{(n)},$
(3)
where $\gamma_{n}^{(n)}=0$ if $n$ is odd.
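For illustration, a minimal numerical sketch of (1)-(2) in the scalar case $N=1$ (ours; the quartic weight $\rho(x)=\exp(-x^{4})$ is an assumed example), building the truncated moment matrix and solving for the coefficients of the monic orthogonal polynomial:

```python
import numpy as np
from scipy.integrate import quad

def moment(j, V=lambda x: x**4):
    """j-th moment of the weight rho(x) = exp(-V(x)) on the real line."""
    val, _ = quad(lambda x: x**j * np.exp(-V(x)), -np.inf, np.inf)
    return val

def monic_op_coeffs(n):
    """Solve the linear system (2) for (gamma_n^(n), ..., gamma_n^(1)),
    the coefficients of the monic orthogonal polynomial P_n (scalar case N = 1)."""
    m = np.array([[moment(i + j) for j in range(n)] for i in range(n)])  # truncated moment matrix
    rhs = -np.array([moment(n + i) for i in range(n)])
    return np.linalg.solve(m, rhs)

print(monic_op_coeffs(4))   # odd coefficients vanish by evenness, in agreement with (3)
```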
The Cauchy transform of $P_{n}(z)$ is defined by
$\displaystyle
Q_{n}(z):=\frac{1}{2{\pi}\operatorname{i}}\int_{\mathbb{R}}\frac{P_{n}(x)}{x-z}\rho(x)\text{d}x,$
(4)
which is analytic for $z\in\mathbb{C}\backslash\mathbb{R}$. Recalling
$\frac{1}{z-x}=\frac{1}{z}\sum_{j=0}^{n-1}\frac{x^{j}}{z^{j}}+\frac{1}{z}\frac{(\frac{x}{z})^{n}}{1-\frac{x}{z}}$
and (1) we get
$\displaystyle
Q_{n}(z)=-\frac{1}{2\pi\operatorname{i}}\frac{1}{z^{n+1}}\int_{\mathbb{R}}\frac{P_{n}(x)x^{n}}{1-\frac{x}{z}}\rho(x)\text{d}x,$
(5)
and consequently
$\displaystyle Q_{n}(z)$ $\displaystyle=c_{n}^{-1}z^{-n-1}+O(z^{-n-2}),$
$\displaystyle z$ $\displaystyle\rightarrow\infty,$ (6)
where we have introduced the coefficients
$\displaystyle
c_{n}:=\Big{(}-\frac{1}{2\pi\operatorname{i}}\int_{\mathbb{R}}P_{n}(x)\rho(x)x^{n}\mathrm{d}x\Big{)}^{-1},$
(7)
relevant in the sequel of the paper.
###### Proposition 1.
We have that $c_{n}$ satisfies
$\displaystyle{\det}c_{n}=-2\pi\operatorname{i}\frac{{\det}(m^{(n)})}{{\det}(m^{(n+1)})}.$
(8)
###### Proof.
To prove it just define $\boldsymbol{m}:=(m_{n},...,m_{2n-1})$, consider the
identity
$\displaystyle\begin{pmatrix}{m^{(n)}}^{-1}&0\\\
0&{\mathbb{I}}_{N}\end{pmatrix}m^{(n+1)}=\begin{pmatrix}{\mathbb{I}}_{nN}&{m^{(n)}}^{-1}\boldsymbol{m}\\\
\boldsymbol{m}^{t}&m_{2n}\end{pmatrix},$
and apply the Gauss elimination method to get
$\displaystyle\frac{{\det}(m^{(n+1)})}{{\det}(m^{(n)})}=\det(m_{2n}-{\boldsymbol{m}^{t}}{m^{(n)}}^{-1}{\boldsymbol{m}})\neq
0;$
from (2) we conclude
$m_{2n}-{\boldsymbol{m}^{t}}{m^{(n)}}^{-1}{\boldsymbol{m}}=\int_{\mathbb{R}}P_{n}(x)x^{n}\rho(x)\text{d}x$.
∎
The evenness of $V$ leads to $Q_{n}(z)=(-1)^{n+1}Q_{n}(-z)$, so that
$\displaystyle Q_{n}(z)$
$\displaystyle=c_{n}^{-1}z^{-n-1}+\sum_{j=2}^{\infty}a_{n}^{(2j-1)}z^{-n-2j+1},$
$\displaystyle z$ $\displaystyle\rightarrow\infty.$ (9)
In particular,
$\displaystyle Q_{0}(z)$
$\displaystyle=c_{0}^{-1}z^{-1}+c_{1}^{-1}z^{-3}+O(z^{-5}),$ $\displaystyle z$
$\displaystyle\rightarrow\infty.$ (10)
Finally, if we assume that $V_{i,j}$ are Hölder continuous we get the Plemelj
formulae
$\displaystyle\Big{(}Q_{n}(z)_{+}-Q_{n}(z)_{-}\Big{)}\Big{|}_{x{\in}\mathbb{R}}=P_{n}(x)\rho(x),$
(11)
with $Q_{n}(x)_{+}=Q_{n}(z)|_{z=x+i0^{+}}$ and
$Q_{n}(x)_{-}=Q_{n}(z)|_{z=x+i0^{-}}$.
### 2.2 Riemann–Hilbert problem
###### Definition 1.
The Riemann–Hilbert problem to consider here is the finding of a $2N\times 2N$
matrix function $Y_{n}(z)\in\mathbb{C}^{2N\times 2N}$ such that
1. 1.
$Y_{n}(z)$ is analytic in $z\in\mathbb{C}\backslash\mathbb{R}$.
2. 2.
Asymptotically behaves as
$\displaystyle Y_{n}(z)$
$\displaystyle=({\mathbb{I}}_{2N}+O(z^{-1}))\begin{pmatrix}{\mathbb{I}}_{N}z^{n}&0\\\
0&{\mathbb{I}}_{N}z^{-n}\end{pmatrix},$ $\displaystyle z$
$\displaystyle\to\infty.$ (12)
3. 3.
On $\mathbb{R}$ we have the jump
$\displaystyle
Y_{n}(x)_{+}=Y_{n}(x)_{-}\begin{pmatrix}{\mathbb{I}}_{N}&\rho(x)\\\\[8.0pt]
0&{\mathbb{I}}_{N}\end{pmatrix}.$ (13)
An easy extension of the connection among orthogonal polynomials in the real
line with a particular Riemann–Hilbert problem discovered in [12] can be
proven in this matrix context.
###### Proposition 2.
The unique solution to the Riemann–Hilbert problem specified in Definition 1
is given in terms of monic matrix orthogonal polynomials with respect to
$\rho(x)\mathrm{d}x$ and its Cauchy transforms:
$\displaystyle Y_{n}(z)$
$\displaystyle=\begin{pmatrix}P_{n}(z)&Q_{n}(z)\\\\[8.0pt]
c_{n-1}P_{n-1}(z)&c_{n-1}Q_{n-1}(z)\end{pmatrix},$ $\displaystyle n$
$\displaystyle\geq 1.$ (14)
###### Proof.
In the first place let us show that
$\Big{(}\begin{smallmatrix}P_{n}(z)&Q_{n}(z)\\\\[8.0pt]
c_{n-1}P_{n-1}(z)&c_{n-1}Q_{n-1}(z)\end{smallmatrix}\Big{)}$ does satisfy the
three conditions requested by Definition 1.
1. 1.
As the matrix orthogonal polynomials $P_{n}$ are analytic in $\mathbb{C}$ and
its Cauchy transforms are analytic in $\mathbb{C}\backslash\mathbb{R}$, the
proposed solution is analytic in $\mathbb{C}\backslash\mathbb{R}$.
2. 2.
Replacing the asymptotics of the matrix orthogonal polynomials and its Cauchy
transforms we get $\Big{(}\begin{smallmatrix}P_{n}(z)&Q_{n}(z)\\\
c_{n-1}P_{n-1}(z)&c_{n-1}Q_{n-1}(z)\end{smallmatrix}\Big{)}\to\Big{(}\begin{smallmatrix}z^{n}+O(z^{n-1})&O(z^{-n-1})\\\
O(z^{n-1})&z^{-n}+O(z^{-n-1})\end{smallmatrix}\Big{)}=({\mathbb{I}}_{2N}+O(z^{-1}))\Big{(}\begin{smallmatrix}{\mathbb{I}}_{N}z^{n}&0\\\
0&{\mathbb{I}}_{N}z^{-n}\end{smallmatrix}\Big{)}$ when $z\to\infty$.
3. 3.
From (11) we get
$(\begin{smallmatrix}P_{n}(x+\operatorname{i}0)&Q_{n}(x+\operatorname{i}0)\\\
c_{n-1}P_{n-1}(x+\operatorname{i}0)&c_{n-1}Q_{n-1}(x+\operatorname{i}0)\end{smallmatrix}\Big{)}-(\begin{smallmatrix}P_{n}(x-\operatorname{i}0)&Q_{n}(x-\operatorname{i}0)\\\
c_{n-1}P_{n-1}(x-\operatorname{i}0)&c_{n-1}Q_{n-1}(x-\operatorname{i}0)\end{smallmatrix}\Big{)}=(\begin{smallmatrix}0&P_{n}(x)\rho(x)\\\
0&c_{n-1}P_{n-1}(x)\rho(x)\end{smallmatrix}\Big{)}.$
Then, a solution to the RH problem is
$Y_{n}=(\begin{smallmatrix}P_{n}(z)&Q_{n}(z)\\\
c_{n-1}P_{n-1}(z)&c_{n-1}Q_{n-1}(z)\end{smallmatrix}\Big{)}$. But the solution
is unique, as we will show now. Given any solution $Y_{n}$, its determinant
$\det Y_{n}(z)$ is analytic in $\mathbb{C}\backslash\mathbb{R}$ and satisfies
$\displaystyle{\det}Y_{n}(x)_{+}$
$\displaystyle={\det}\Bigg{(}Y_{n}(x)_{-}\begin{pmatrix}{\mathbb{I}}_{N}&\rho(x)\\\
0&{\mathbb{I}}_{N}\end{pmatrix}\Bigg{)}={\det}Y_{n}(x)_{-}{\det}\begin{pmatrix}{\mathbb{I}}_{N}&\rho(x)\\\
0&{\mathbb{I}}_{N}\end{pmatrix}$ $\displaystyle={\det}Y_{n}(x)_{-}.$
Hence, $\det Y_{n}(z)$ is analytic in $\mathbb{C}$. Moreover, Definition 1
implies that
$\displaystyle\det Y_{n}(z)$ $\displaystyle=1+O(z^{-1}),$ $\displaystyle z$
$\displaystyle\to\infty,$
and Liouville theorem ensures that
$\displaystyle{\det}Y_{n}(z)$ $\displaystyle=1,$ $\displaystyle\forall
z\in\mathbb{C}.$ (15)
From (15) we conclude that $Y_{n}^{-1}$ is analytic in
$\mathbb{C}\backslash\mathbb{R}$. Given two solutions $Y_{n}$ and
$\tilde{Y}_{n}$ of the RH problem we consider the matrix
$\tilde{Y}_{n}Y_{n}^{-1}$, and observe that from property 3 of Definition 1 we
have $(\tilde{Y}_{n}Y_{n}^{-1})_{+}=(\tilde{Y}_{n}Y_{n}^{-1})_{-}$, and
consequently $\tilde{Y}_{n}Y_{n}^{-1}$ is analytic in $\mathbb{C}$. From
Definition 1 we get $\tilde{Y}_{n}Y_{n}^{-1}\to\mathbb{I}_{2N}$ as
$z\to\infty$, and Liouville theorem implies that
$\tilde{Y}_{n}Y_{n}^{-1}=\mathbb{I}_{2N}$; i.e., $\tilde{Y}_{n}=Y_{n}$ and the
solution is unique. ∎
###### Definition 2.
Given the matrix $Y_{n}$ we define
$\displaystyle S_{n}(z):=Y_{n}(z)\begin{pmatrix}{\mathbb{I}}_{N}z^{-n}&0\\\
0&{\mathbb{I}}_{N}z^{n}\end{pmatrix}.$ (16)
###### Proposition 3.
1. 1.
The matrix $S_{n}$ has unit determinant:
$\displaystyle\det S_{n}(z)=1.$ (17)
2. 2.
It has the special form
$\displaystyle
S_{n}(z)=\begin{pmatrix}A_{n}(z^{2})&z^{-1}B_{n}(z^{2})\\\\[8.0pt]
z^{-1}C_{n}(z^{2})&D_{n}(z^{2})\end{pmatrix}.$ (18)
3. 3.
The coefficients of $S_{n}$ admit the asymptotic expansions
$\displaystyle\begin{aligned}
A_{n}(z^{2})&={\mathbb{I}}_{N}+S^{(2)}_{n,11}z^{-2}+O(z^{-4}),&B_{n}(z^{2})&=S^{(1)}_{n,12}+S^{(3)}_{n,12}z^{-2}+O(z^{-4}),\\\
C_{n}(z^{2})&=S^{(1)}_{n,21}+S^{(3)}_{n,21}z^{-2}+O(z^{-4}),&D_{n}(z^{2})&={\mathbb{I}}_{N}+S^{(2)}_{n,22}z^{-2}+O(z^{-4}),\end{aligned}$
(19)
for $z\to\infty$.
###### Proof.
1. 1.
Is a consequence of (15) and (16).
2. 2.
It follows from the parity of $P_{n}$ and $Q_{n}$.
3. 3.
(12) implies the following asymptotic behaviour
$\displaystyle S_{n}(z)$
$\displaystyle={\mathbb{I}}_{2N}+S_{n}^{(1)}z^{-1}+O(z^{-2}),$ $\displaystyle
z$ $\displaystyle\to\infty,$ (20)
and (18) gives
$\displaystyle S_{n}^{(2i)}$
$\displaystyle=\begin{pmatrix}S_{n,11}^{(2i)}&0\\\
0&S_{n,22}^{(2i)}\end{pmatrix},$ $\displaystyle S_{n}^{(2i-1)}$
$\displaystyle=\begin{pmatrix}0&S_{n,12}^{(2i-1)}\\\
S_{n,21}^{(2i-1)}&0\end{pmatrix},$
and the result follows.
∎
Observe that from (18) we get
$\displaystyle
S_{n}^{-1}(z)=\begin{pmatrix}\tilde{A}_{n}(z^{2})&z^{-1}\tilde{B}_{n}(z^{2})\\\
z^{-1}\tilde{C}_{n}(z^{2})&\tilde{D}_{n}(z^{2})\end{pmatrix},$ (21)
with the asymptotic expansions for $z\rightarrow\infty$
$\displaystyle\tilde{A}_{n}(z^{2})$
$\displaystyle={\mathbb{I}}_{N}+(S^{(1)}_{n,12}S^{(1)}_{n,21}-S^{(2)}_{n,11})z^{-2}+O(z^{-4}),$
$\displaystyle\tilde{B}_{n}(z^{2})$
$\displaystyle=-S^{(1)}_{n,12}-\big{(}S^{(3)}_{n,12}-S^{(2)}_{n,11}S^{(1)}_{n,12}+S^{(1)}_{n,12}(S^{(1)}_{n,21}S^{(1)}_{n,12}-S^{(2)}_{n,22})\big{)}z^{-2}+O(z^{-4}),$
$\displaystyle\tilde{C}_{n}(z^{2})$
$\displaystyle=-S^{(1)}_{n,21}+\big{(}-S^{(3)}_{n,21}+S^{(1)}_{n,21}S^{(2)}_{n,11}+(S^{(2)}_{n,22}-S^{(1)}_{n,21}S^{(1)}_{n,12})S^{(1)}_{n,21}\big{)}z^{-2}+O(z^{-4}),$
$\displaystyle\tilde{D}_{n}(z^{2})$
$\displaystyle={\mathbb{I}}_{N}+(S^{(1)}_{n,21}S^{(1)}_{n,12}-S^{(2)}_{n,22})z^{-2}+O(z^{-4}).$
#### 2.2.1 Recursion relations
We now introduce the necessary elements, within the Riemann–Hilbert problem
approach, to derive the recursion relations and properties of the recursion
coefficients in the context of matrix orthogonal polynomials.
###### Definition 3.
We introduce the matrix
$\displaystyle Z_{n}(z):=Y_{n}(z)\begin{pmatrix}\rho(z)&0\\\
0&{\mathbb{I}}_{N}\end{pmatrix}=\begin{pmatrix}P_{n}(z)\rho(z)&Q_{n}(z)\\\\[8.0pt]
c_{n-1}P_{n-1}(z)\rho(z)&c_{n-1}Q_{n-1}(z)\end{pmatrix}.$ (22)
###### Proposition 4.
1. 1.
$Z_{n}(z)$ is analytic on $\mathbb{C}\backslash\mathbb{R}$,
2. 2.
for $z\to\infty$ it holds that
$\displaystyle
Z_{n}(z)=({\mathbb{I}}_{2N}+O(z^{-1}))\begin{pmatrix}z^{n}\rho(z)&0\\\
0&z^{-n}{\mathbb{I}}_{N}\end{pmatrix},$ (23)
3. 3.
over $\mathbb{R}$ it is satisfied
$\displaystyle
Z_{n}(x)_{+}=Z_{n}(x)_{-}\begin{pmatrix}{\mathbb{I}}_{N}&{\mathbb{I}}_{N}\\\
0&{\mathbb{I}}_{N}\end{pmatrix}.$ (24)
###### Definition 4.
We introduce
$\displaystyle M_{n}(z)$
$\displaystyle:=\frac{\mathrm{d}{Z}_{n}(z)}{\mathrm{d}{z}}Z_{n}^{-1}(z),$ (25)
$\displaystyle R_{n}(z)$
$\displaystyle:=Z_{n+1}(z){Z_{n}}^{-1}(z)=Y_{n+1}(z){Y_{n}}^{-1}(z).$ (26)
We can easily show that
###### Proposition 5.
The matrices $M_{n}$ and $R_{n}$ satisfy
$\displaystyle
M_{n+1}(z)R_{n}(z)=\frac{\mathrm{d}}{\mathrm{d}{z}}R_{n}(z)+R_{n}(z)M_{n}(z).$
(27)
###### Proof.
It follows from the compatibility condition
$\displaystyle
T\frac{\text{d}Z_{n}(z)}{\text{d}z}=\frac{\text{d}}{\text{d}z}TZ_{n}(z),$
where $TF_{n}:=F_{n+1}$. ∎
We can also show that
###### Proposition 6.
For the functions $R_{n}(z)$ and $M_{n}(z)$ we have the expressions
$\displaystyle R_{n}(z)$
$\displaystyle=\begin{pmatrix}z{\mathbb{I}}_{N}&-S_{n,12}^{(1)}\\\
S_{n+1,21}^{(1)}&0\end{pmatrix},$ (28) $\displaystyle M_{n}(z)$
$\displaystyle=\Bigg{[}\begin{pmatrix}A_{n}(z^{2})\frac{\mathrm{d}\rho(z)}{\mathrm{d}{z}}\rho^{-1}(z)\tilde{A}_{n}(z^{2})&A_{n}(z^{2})z^{-1}\frac{\mathrm{d}\rho(z)}{\mathrm{d}{z}}\rho^{-1}(z)\tilde{B}_{n}(z^{2})\\\\[8.0pt]
z^{-1}C_{n}(z^{2})\frac{\mathrm{d}\rho(z)}{\mathrm{d}{z}}\rho^{-1}(z)\tilde{A}_{n}(z^{2})&z^{-2}C_{n}(z^{2})\frac{\mathrm{d}\rho(z)}{\mathrm{d}{z}}\rho^{-1}(z)\tilde{B}_{n}(z^{2})\end{pmatrix}\Bigg{]}_{+},$
(29)
where $[{\cdot}]_{+}$ denotes the part in positive powers of $z$.
###### Proof.
The expression for $R_{n}$ is a consequence of the following reasoning:
1. 1.
In the first place notice that $R_{n}(z)$ is analytic for
$z\in\mathbb{C}\backslash\mathbb{R}$.
2. 2.
Moreover, denoting
$\displaystyle{R_{n}}_{+}(x)$
$\displaystyle:={Y_{n+1}}_{+}(x)({Y_{n}}_{+}(x))^{-1},$ (30)
$\displaystyle{R_{n}}_{-}(x)$
$\displaystyle:={Y_{n+1}}_{-}(x)({Y_{n}}_{-}(x))^{-1},$ (31)
and substituting (13) in (30) we get ${R_{n}}_{+}(x)={R_{n}}_{-}(x)$ and
therefore $R_{n}(z)$ is analytic in $\mathbb{C}$.
3. 3.
Finally, if we substitute (16) in (26) we deduce that
$\displaystyle R_{n}(z)$ $\displaystyle=Y_{n+1}(z){Y_{n}}^{-1}(z)$
$\displaystyle=S_{n+1}(z)\begin{pmatrix}z{\mathbb{I}}_{N}&0\\\
0&z^{-1}{\mathbb{I}}_{N}\end{pmatrix}S_{n}^{-1}(z)$
$\displaystyle=\begin{pmatrix}z{\mathbb{I}}_{N}&0\\\
0&0\end{pmatrix}+S_{n+1}^{(1)}\begin{pmatrix}{\mathbb{I}}_{N}&0\\\
0&z^{-1}{\mathbb{I}}_{N}\end{pmatrix}-\begin{pmatrix}{\mathbb{I}}_{N}&0\\\
0&0\end{pmatrix}S_{n}^{(1)}+O(z^{-1}),$ $\displaystyle z$
$\displaystyle\rightarrow\infty,$
and the analyticity of $R_{n}$ in $\mathbb{C}$ leads to the desired result.
For the expression for $M_{n}$ we have the argumentation
1. 1.
$M_{n}(z)$ is analytic for $z\in\mathbb{C}\backslash\mathbb{R}$.
2. 2.
Given
$\displaystyle{M_{n}}_{+}(x)$
$\displaystyle:=\frac{{\text{d}}{Z_{n}}_{+}(x)}{{\text{d}}z}({Z_{n}}_{+}(x))^{-1},$
(32) $\displaystyle{M_{n}}_{-}(x)$
$\displaystyle:=\frac{{\text{d}}{Z_{n}}_{-}(x)}{{\text{d}}z}({Z_{n}}_{-}(x))^{-1}.$
(33)
Substituting (24) in (32) we get
$\displaystyle{M_{n}}_{+}(x)={M_{n}}_{-}(x),$
and therefore $M_{n}(z)$ is analytic over $\mathbb{C}$.
3. 3.
From (16) and (22) we see that $Z_{n}(z)$ is
$\displaystyle Z_{n}(z)=S_{n}(z)\begin{pmatrix}z^{n}\rho(z)&0\\\
0&z^{-n}{\mathbb{I}}_{N}\end{pmatrix},$ (34)
so that
$\displaystyle\frac{\text{d}Z_{n}(z)}{\text{d}z}Z_{n}^{-1}(z)=\frac{\text{d}S_{n}(z)}{\text{d}z}S_{n}(z)^{-1}+S_{n}(z)K_{n}(z)S_{n}^{-1}(z),$
(35)
where
$\displaystyle
K_{n}(z):=\begin{pmatrix}{n}z^{-1}{\mathbb{I}}_{N}+\dfrac{\text{d}\rho(z)}{\text{d}z}\rho^{-1}(z)&0\\\\[8.0pt]
0&-{n}z^{-1}{\mathbb{I}}_{N}\end{pmatrix}.$
Finally, as $M_{n}(z)$ is analytic over $\mathbb{C}$, (35) leads to
$\displaystyle
M_{n}(z)=\frac{\text{d}Z_{n}(z)}{\text{d}z}Z_{n}^{-1}(z)=\Bigg{[}S_{n}(z)\begin{pmatrix}\frac{\text{d}\rho(z)}{\text{d}z}\rho^{-1}(z)&0\\\
0&0\end{pmatrix}S_{n}^{-1}(z)\Bigg{]}_{+}.$ (36)
∎
Observe that the diagonal terms of $M_{n}$ are odd functions of $z$ while the
off diagonal are even functions of $z$. Now we give a parametrization of the
first coefficients of $S$ in terms of $c_{n}$.
###### Proposition 7.
The following formulae hold true
$\displaystyle S^{(1)}_{n,12}$ $\displaystyle=c_{n}^{-1},$ $\displaystyle
S^{(1)}_{n,21}$ $\displaystyle=c_{n-1},$ $\displaystyle S^{(2)}_{n,11}$
$\displaystyle=-\sum_{i=1}^{n}c_{i}^{-1}c_{i-1}+c_{n}^{-1}c_{n-1},$
$\displaystyle S^{(2)}_{n,22}$
$\displaystyle=\sum_{i=1}^{n}c_{i-1}c_{i}^{-1},$ $\displaystyle
S^{(3)}_{n,21}$
$\displaystyle=-c_{n-1}\sum_{i=1}^{n-1}c_{i}^{-1}c_{i-1}+c_{n-2},$
$\displaystyle S^{(3)}_{n,12}$
$\displaystyle=c_{n}^{-1}\sum_{i=1}^{n+1}c_{i-1}c_{i}^{-1}.$
###### Proof.
Equating the expressions for $Y_{n}(z)$ provided by (14) and (16) we get
$\displaystyle Y_{n}(z)$ $\displaystyle=\begin{pmatrix}P_{n}(z)&Q_{n}(z)\\\
c_{n-1}P_{n-1}(z)&c_{n-1}Q_{n-1}(z)\end{pmatrix}$
$\displaystyle=\begin{pmatrix}z^{n}{\mathbb{I}}_{N}&0\\\
0&z^{-n}{\mathbb{I}}_{N}\end{pmatrix}\big{(}{\mathbb{I}}_{2N}+S^{(1)}_{n}z^{-1}+S^{(2)}_{n}z^{-2}+S^{(3)}_{n}z^{-3}+O(z^{-4})\big{)},$
$\displaystyle z$ $\displaystyle\rightarrow\infty.$
Expanding the r.h.s. we get
$\displaystyle\begin{aligned}
S^{(1)}_{n,21}&=c_{n-1},&S^{(1)}_{n,12}&=c_{n}^{-1},&\\\ S^{(2)}_{1,11}&=0,\\\
S^{(3)}_{1,21}&=S^{(3)}_{2,21}=0,&S^{(3)}_{n,21}&=c_{n-1}S^{(2)}_{n-1,11},&n\geq
2,\\\ S^{(3)}_{n,12}&=c_{n}^{-1}S^{(2)}_{n+1,22},\end{aligned}$ (37)
where we have used that
$\displaystyle S^{(2)}_{1,22}=c_{0}c_{1}^{-1},$ (38)
which can be proved from (10). Introducing (37) into (28) we get
$\displaystyle
R_{n}(z)=\begin{pmatrix}z{\mathbb{I}}_{N}&-c_{n}^{-1}\\\\[8.0pt]
c_{n}&0\end{pmatrix},$ (39)
and (16) and (39) lead to
$\displaystyle S_{n+1}(z)$
$\displaystyle=\begin{pmatrix}z{\mathbb{I}}_{N}&-c_{n}^{-1}\\\\[8.0pt]
c_{n}&0\end{pmatrix}S_{n}(z)\begin{pmatrix}z^{-1}{\mathbb{I}}_{N}&0\\\\[8.0pt]
0&z{\mathbb{I}}_{N}\end{pmatrix},$
so that
$\displaystyle S^{(2)}_{n+1,11}-S^{(2)}_{n,11}$
$\displaystyle=-c_{n}^{-1}c_{n-1},$ (40) $\displaystyle
S^{(3)}_{n,12}-c_{n}^{-1}S^{(2)}_{n,22}$ $\displaystyle=c_{n+1}^{-1},$ (41)
where we have used (37). From (37) and (41) we get
$\displaystyle S^{(2)}_{n+1,22}-S^{(2)}_{n,22}=c_{n}c_{n+1}^{-1}.$ (42)
Summing up in $n$ in (40) and (42) we deduce
$\displaystyle\sum_{i=1}^{n-1}(S^{(2)}_{i+1,11}-S^{(2)}_{i,11})$
$\displaystyle=-\sum_{i=1}^{n-1}c_{i}^{-1}c_{i-1},$
$\displaystyle\sum_{i=1}^{n-1}(S^{(2)}_{i+1,22}-S^{(2)}_{i,22})$
$\displaystyle=\sum_{i=1}^{n-1}c_{i}c_{i+1}^{-1},$
which leads to
$\displaystyle S^{(2)}_{n,11}$
$\displaystyle=-\sum_{i=1}^{n}c_{i}^{-1}c_{i-1}+c_{n}^{-1}c_{n-1},$ (43)
$\displaystyle S^{(2)}_{n,22}$
$\displaystyle=\sum_{i=1}^{n}c_{i-1}c_{i}^{-1},$ (44)
where we have used (37) and (38). Finally (37), (43) and (44) give
$\displaystyle
S^{(3)}_{n,21}=-c_{n-1}\sum_{i=1}^{n-1}c_{i}^{-1}c_{i-1}+c_{n-2},$ (45)
valid for $n\geq 2$, and
$\displaystyle S^{(3)}_{n,12}=c_{n}^{-1}\sum_{i=1}^{n+1}c_{i-1}c_{i}^{-1}.$
(46)
∎
Notice that (37) gives
$\displaystyle S^{(3)}_{1,21}=0.$ (47)
###### Proposition 8.
Matrix orthogonal polynomials $P_{n}$ (and its Cauchy transforms $Q_{n}$) are
subject to the following recursion relations
$\displaystyle P_{n+1}(z)=zP_{n}(z)-\frac{1}{2}\beta_{n}P_{n-1}(z),$ (48)
with the recursion coefficients $\beta_{n}$ given by
$\displaystyle\beta_{n}$ $\displaystyle:=2c_{n}^{-1}c_{n-1},$ $\displaystyle
n$ $\displaystyle\geq 1,$ $\displaystyle\beta_{0}:=0.$ (49)
###### Proof.
Observe that (26) can be written as
$\displaystyle Y_{n+1}(z)=R_{n}(z)Y_{n}(z).$ (50)
Then, if we replace (14) and (39) into (50) we get the result. ∎
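As an illustration, the recursion (48)-(49) can be iterated numerically once the coefficients $c_{n}$ are known; the following is a minimal sketch (NumPy, with arbitrary toy matrices $c_{n}$ and the illustrative choice $P_{1}(z)=z{\mathbb{I}}_{N}$, neither of which is prescribed by the text).

```python
import numpy as np

def matrix_polynomials(z, c, n_max):
    """Iterate P_{n+1}(z) = z P_n(z) - (1/2) beta_n P_{n-1}(z), Eq. (48),
    with beta_n = 2 c_n^{-1} c_{n-1}, Eq. (49); c[n] are invertible N x N matrices."""
    N = c[0].shape[0]
    P = [np.eye(N), z * np.eye(N)]  # P_0 = I; P_1 = z I is only an illustrative choice
    for n in range(1, n_max):
        beta_n = 2.0 * np.linalg.inv(c[n]) @ c[n - 1]
        P.append(z * P[n] - 0.5 * beta_n @ P[n - 1])
    return P

rng = np.random.default_rng(0)
c = [np.eye(2) + 0.1 * rng.standard_normal((2, 2)) for _ in range(10)]  # toy data
print(matrix_polynomials(1.5, c, 8)[3])
```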
We now show some commutative properties of the polynomials and the recursion
coefficients.
###### Proposition 9.
Let $f(z):\mathbb{C}\to\mathbb{C}^{N\times N}$ be such that $[V(x),f(z)]=0$
$\forall(x,z)\in\mathbb{R}\times\mathbb{C}$. Then
$\displaystyle[c_{n},f(z)]=[\beta_{n},f(z)]$ $\displaystyle=0,$ $\displaystyle
n$ $\displaystyle\geq 0,$ $\displaystyle\forall z$
$\displaystyle\in\mathbb{C},$ $\displaystyle[P_{n}(z^{\prime}),f(z)]$
$\displaystyle=0,$ $\displaystyle n$ $\displaystyle\geq 0,$
$\displaystyle\forall z,z^{\prime}$ $\displaystyle\in\mathbb{C}.$
###### Proof.
Let us suppose that for a given $m\geq 0$ we have
$\displaystyle[P_{m}(x),f(z)]=[P_{m-1}(x),f(z)]=0.$ (51)
Then, recalling (7) these expressions give
$\displaystyle[c_{m},f(z)]=[c_{m-1},f(z)]=0,$ (52)
respectively. Therefore, using the recursion relations (48) and (49) we obtain
$\displaystyle[P_{m+1}(x),f(z)]=x[P_{m}(x),f(z)]-[c_{m}^{-1}c_{m-1}P_{m-1}(x),f(z)]=0.$
This means that
$\displaystyle[c_{m+1},f(z)]=0.$ (53)
Hypothesis (51) holds for $m$=1, consequently $[c_{n},f(z)]=0$ for
$n=0,1,\dots$ and (49) implies $[\beta_{n},f(z)]=0$. Finally, as the
coefficients of the matrix orthogonal $P_{n}(z)$ are polynomials in the
$\beta$’s we conclude that $[P_{n}(z^{\prime}),f(z)]=0$ for all
$z,z^{\prime}\in\mathbb{C}$. ∎
###### Corollary 1.
Suppose that $[V(x),V(z)]=0$ for all $x\in\mathbb{R}$ and $z\in\mathbb{C}$,
then
$\displaystyle[P_{n}(z),P_{m}(z^{\prime})]$ $\displaystyle=0,$
$\displaystyle\forall n,m$ $\displaystyle\geq 0,$ $\displaystyle z,z^{\prime}$
$\displaystyle\in\mathbb{C},$ (54) $\displaystyle[c_{n},c_{m}]$
$\displaystyle=0,$ (55) $\displaystyle[\beta_{n},\beta_{m}]$
$\displaystyle=0.$ (56)
###### Proof.
Applying Proposition 9 to $f=V$ we deduce that $[P_{n}(z^{\prime}),V(z)]=0$,
which allows us to apply Proposition 9 again, now with $f=P_{n}$, and obtain
the stated result. From (7) and (54) we deduce (55), and using (49) we get
(56). ∎
## 3 A discrete matrix equation, related to Freud matrix orthogonal
polynomials, with singularity confinement
We will consider the particular case when
$\displaystyle V(z)$ $\displaystyle=\alpha z^{2}+\mathbb{I}_{N}z^{4},$
$\displaystyle\alpha=\alpha^{\dagger}.$ (57)
Observe that $[V(z),V(z^{\prime})]=0$ for any pair of complex numbers
$z,z^{\prime}$. Hence, in this case the corresponding set of matrix orthogonal
polynomials $\\{P_{n}\\}_{n=0}^{\infty}$, which we refer to as matrix Freud
polynomials, is an Abelian set. Moreover, we have
$\displaystyle[c_{n},c_{m}]=[\beta_{n},\beta_{m}]=[c_{n},\alpha]=[\beta_{n},\alpha]$
$\displaystyle=0,$ $\displaystyle\forall n,m=0,1,\dots.$
In this situation we have
###### Theorem 1.
The recursion coefficients $\beta_{n}$ (49) for the Freud matrix orthogonal
polynomials determined by (57) satisfy
$\displaystyle\beta_{n+1}$
$\displaystyle=n\beta_{n}^{-1}-\beta_{n-1}-\beta_{n}-\alpha,$ $\displaystyle
n=1,2,\dots$ (58)
with $\beta_{0}:=0$.
###### Proof.
We compute now the matrix $M_{n}$, for which we have
$\displaystyle
M_{n}(z)=\Bigg{[}\begin{pmatrix}-A_{n}(z^{2})(2{\alpha}z+4z^{3}{\mathbb{I}}_{N})\tilde{A}_{n}(z^{2})&-A_{n}(z^{2})(2{\alpha}+4z^{2}{\mathbb{I}}_{N})\tilde{B}_{n}(z^{2})\\\
-C_{n}(z^{2})(2{\alpha}+4z^{2}{\mathbb{I}}_{N})\tilde{A}_{n}(z^{2})&-C_{n}(z^{2})(2{\alpha}z^{-1}+4z{\mathbb{I}}_{N})\tilde{B}_{n}(z^{2})\end{pmatrix}\Bigg{]}_{+},$
(59)
and it is clear that
$\displaystyle
M_{n}(z)=M_{n}^{(3)}z^{3}+M_{n}^{(2)}z^{2}+M_{n}^{(1)}z+M_{n}^{(0)},$ (60)
with
$\displaystyle M_{n}^{(3)}$
$\displaystyle=\begin{pmatrix}-4{\mathbb{I}}_{N}&0\\\
0&0\end{pmatrix},M_{n}^{(2)}=\begin{pmatrix}0&4S^{(1)}_{n,12}\\\
-4S^{(1)}_{n,21}&0\end{pmatrix},M_{n}^{(1)}=\begin{pmatrix}-2\alpha-4S^{(1)}_{n,12}S^{(1)}_{n,21}&0\\\
0&4S^{(1)}_{n,21}S^{(1)}_{n,12}\end{pmatrix},$ $\displaystyle M_{n}^{(0)}$
$\displaystyle=\begin{pmatrix}0&2{\alpha}S^{(1)}_{n,12}+4S^{(3)}_{n,12}+4S^{(1)}_{n,12}(S^{(1)}_{n,21}S^{(1)}_{n,12}-S^{(2)}_{n,22})\\\
-2S^{(1)}_{n,21}{\alpha}-4S^{(3)}_{n,21}+4S^{(1)}_{n,21}S^{(2)}_{n,11}-4S^{(1)}_{n,21}S^{(1)}_{n,12}S^{(1)}_{n,21}&0\end{pmatrix}.$
Replacing (37)-(46) into (60) we get
$\displaystyle M_{n}^{(3)}$
$\displaystyle=\begin{pmatrix}-4{\mathbb{I}}_{N}&0\\\ 0&0\end{pmatrix},\quad
M_{n}^{(2)}=\begin{pmatrix}0&4c_{n}^{-1}\\\ -4c_{n-1}&0\end{pmatrix},\quad
M_{n}^{(1)}=\begin{pmatrix}-2\alpha-4c_{n}^{-1}c_{n-1}&0\\\
0&4c_{n-1}c_{n}^{-1}\end{pmatrix},$ (61) $\displaystyle M_{1}^{(0)}$
$\displaystyle=\begin{pmatrix}0&4c_{2}^{-1}+4c_{1}^{-1}c_{0}c_{1}^{-1}+2{\alpha}c_{1}^{-1}\\\\[8.0pt]
-4c_{0}c_{1}^{-1}c_{0}-2c_{0}\alpha&0\end{pmatrix},$ (62) $\displaystyle
M_{n}^{(0)}$
$\displaystyle=\begin{pmatrix}0&4c_{n+1}^{-1}+4c_{n}^{-1}c_{n-1}c_{n}^{-1}+2{\alpha}c_{n}^{-1}\\\
-4c_{n-2}-4c_{n-1}c_{n}^{-1}c_{n-1}-2c_{n-1}\alpha&0\end{pmatrix},$
$\displaystyle n\geq 2.$ (63)
The compatibility condition (27) together with (39), (60), (61), (62) and (63)
gives
$\displaystyle
4(c_{n+2}^{-1}c_{n}+c_{n+1}^{-1}c_{n}c_{n+1}^{-1}c_{n}-c_{n}^{-1}c_{n-1}c_{n}^{-1}c_{n-1}-c_{n}^{-1}c_{n-2})+2{\alpha}c_{n+1}^{-1}c_{n}-2c_{n}^{-1}c_{n-1}\alpha={\mathbb{I}}_{N},$
for $n\geq 2$ and
$\displaystyle
4(c_{3}^{-1}c_{1}+c_{2}^{-1}c_{1}c_{2}^{-1}c_{1}-c_{1}^{-1}c_{0}c_{1}^{-1}c_{0})+2{\alpha}c_{2}^{-1}c_{1}-2c_{1}^{-1}c_{0}\alpha={\mathbb{I}}_{N},$
which can be written as
$\displaystyle\beta_{n+2}\beta_{n+1}+\beta_{n+1}^{2}-\beta_{n}^{2}-\beta_{n}\beta_{n-1}+\alpha\beta_{n+1}-\beta_{n}\alpha={\mathbb{I}}_{N}$
(64)
for $n\geq 2$ and
$\displaystyle\beta_{3}\beta_{2}+\beta_{2}^{2}-\beta_{1}^{2}+\alpha\beta_{2}-\beta_{1}\alpha={\mathbb{I}}_{N},$
(65)
respectively. Using the Abelian character of the set of $\beta$’s we arrive at
$\displaystyle\beta_{n+2}\beta_{n+1}+\beta_{n+1}^{2}-\beta_{n}^{2}-\beta_{n}\beta_{n-1}+\alpha(\beta_{n+1}-\beta_{n})$
$\displaystyle={\mathbb{I}}_{N},$ $\displaystyle n$ $\displaystyle=2,3,\dots,$
(66)
$\displaystyle\beta_{3}\beta_{2}+\beta_{2}^{2}-\beta_{1}^{2}+\alpha(\beta_{2}-\beta_{1})$
$\displaystyle={\mathbb{I}}_{N}.$ (67)
Summing up in (66) from $i$=2 up to $i$=$n$ we obtain
$\displaystyle\sum_{i=2}^{n}[\beta_{i+2}\beta_{i+1}+\beta_{i+1}^{2}-\beta_{i}^{2}-\beta_{i}\beta_{i-1}+\alpha(\beta_{i+1}-\beta_{i})]=\sum_{i=2}^{n}{\mathbb{I}}_{N},$
(68)
and consequently we conclude
$\displaystyle\beta_{n+2}\beta_{n+1}+\beta_{n+1}\beta_{n}+{\beta_{n+1}}^{2}+\alpha\beta_{n+1}$
$\displaystyle=n{\mathbb{I}}_{N}+k,$ $\displaystyle n$ $\displaystyle\geq 1,$
(69)
where
$\displaystyle
k:=\beta_{2}\beta_{1}+\beta_{3}\beta_{2}+\beta_{2}^{2}+\alpha\beta_{2}-{\mathbb{I}}_{N}=\beta_{2}\beta_{1}+\beta_{1}^{2}+\beta_{1}\alpha,$
(70)
where we have used (67). We now show that $k={\mathbb{I}}_{N}$. Equation
(25) implies, for $n=1$ and $z=0$,
$\displaystyle Z^{\prime}_{1}(0)=M_{1}^{(0)}Z_{1}(0),$ (71)
with $M_{1}^{(0)}$ given in (62). This leads to
$\displaystyle\begin{pmatrix}P^{\prime}_{1}(0)\\\
c_{0}P^{\prime}_{0}(0)\end{pmatrix}=M_{1}^{(0)}\begin{pmatrix}P_{1}(0)\\\
c_{0}P_{0}(0)\end{pmatrix}.$ (72)
Now, using (3) we deduce that
$\displaystyle\begin{pmatrix}{\mathbb{I}}_{N}\\\
0\end{pmatrix}=M_{1}^{(0)}\begin{pmatrix}0\\\ c_{0}\end{pmatrix},$ (73)
which allows us to immediately deduce that
$\displaystyle\beta_{2}\beta_{1}+\beta_{1}^{2}+\beta_{1}\alpha={\mathbb{I}}_{N},$
(74)
and consequently $k={\mathbb{I}}_{N}$. Hence, we get
$\displaystyle\beta_{n+2}\beta_{n+1}+\beta_{n+1}\beta_{n}+{\beta_{n+1}}^{2}+\alpha\beta_{n+1}=n{\mathbb{I}}_{N}+{\mathbb{I}}_{N}.$
(75)
Multiplying (75) on the right by $\beta_{n+1}^{-1}$, which commutes with the other $\beta$’s, gives (58) for $n\geq 2$. Finally, notice that (74) reads
$\displaystyle\beta_{2}=\beta_{1}^{-1}-\beta_{1}-\alpha,$ (76)
which is (58) for $n=1$, since $\beta_{0}=0$.
∎
This theorem ensures that $\beta_{1}$ fixes $\beta_{n}$ for all $n\geq 2$, and
therefore $\beta_{n}=\beta_{n}(\beta_{1},\alpha)$. Moreover, we will see now
that the solutions $\beta_{n}$ not only commute with each other but also that
they can be simultaneously conjugated to lower triangular matrices. This result is
relevant to our analysis of the confinement of singularities.
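Since $\beta_{0}=0$ and $\beta_{1}$, $\alpha$ determine the whole chain, (58) can be iterated directly; below is a minimal numerical sketch (NumPy, with a commuting pair $\alpha,\beta_{1}$ chosen purely for illustration).

```python
import numpy as np

def freud_chain(beta1, alpha, n_max):
    """Iterate beta_{n+1} = n beta_n^{-1} - beta_{n-1} - beta_n - alpha, Eq. (58),
    starting from beta_0 = 0 and the given beta_1."""
    N = beta1.shape[0]
    beta = [np.zeros((N, N)), beta1]
    for n in range(1, n_max):
        beta.append(n * np.linalg.inv(beta[n]) - beta[n - 1] - beta[n] - alpha)
    return beta

alpha = np.diag([0.3, -0.2])   # illustrative data only
beta1 = np.diag([0.8, 1.1])    # commutes with alpha, as in the orthogonal-polynomial case
print(freud_chain(beta1, alpha, 10)[5])
```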
### 3.1 On singularity confinement
The study of the singularities of the discrete matrix equation (58) reveals,
as we will show, that they are confined; i.e., singularities may eventually appear,
but they disappear within a few steps, no more than four. The
singularities of (58) appear when for some $n$ the matrix
$\beta_{n}$ is not invertible, that is $\det\beta_{n}=0$, and we cannot
continue with the sequence. However, for a better understanding of this
situation in the discrete case we only request that $\det\beta_{n}$ be a small
quantity, so that $\beta_{n}^{-1}$ and $\beta_{n+1}$ exist but are very
“large” matrices in some appropriate sense. To be more precise, we will
consider a small parameter $\epsilon$ and suppose that at a given step $m$ of
the sequence we have
$\displaystyle\beta_{m-1}$ $\displaystyle=O(1),$
$\displaystyle\det\beta_{m-1}$ $\displaystyle=O(1),$ (77)
$\displaystyle\beta_{m}$ $\displaystyle=O(1),$ $\displaystyle\det\beta_{m}$
$\displaystyle=O(\epsilon^{r}),$ (78)
for $\epsilon\to 0$ and with $r\leq N-1$. In other words, we have the
asymptotic expansions
$\displaystyle\beta_{m-1}$
$\displaystyle=\beta_{m-1,0}+\beta_{m-1,1}\epsilon+O(\epsilon^{2}),$
$\displaystyle\epsilon$ $\displaystyle\rightarrow 0,$
$\displaystyle\det\beta_{m-1,0}$ $\displaystyle\neq 0,$ (79)
$\displaystyle\beta_{m}$
$\displaystyle=\beta_{m,0}+\beta_{m,1}\epsilon+O(\epsilon^{2}),$
$\displaystyle\epsilon$ $\displaystyle\rightarrow 0,$
$\displaystyle\dim\operatorname{Ran}\beta_{m,0}$ $\displaystyle=N-r.$ (80)
We now proceed with some preliminary material. In particular, we show that we
can restrict the study to the triangular case.
###### Proposition 10.
Let us suppose that $\beta_{1}$ and $\alpha$ are simultaneously
triangularizable matrices; i.e., there exists an invertible matrix $M$ such
that $\beta_{1}=M\phi_{1}M^{-1}$ and $\alpha=M\gamma M^{-1}$ with $\phi_{1}$
and $\gamma$ lower triangular matrices. Then, the solutions $\beta_{n}$ of
(58) can be written as
$\displaystyle\beta_{n}$ $\displaystyle=M\phi_{n}M^{-1},$ $\displaystyle n\geq
0,$
where $\phi_{n}$, $n=0,1,\dots$, are lower triangular matrices satisfying
$\displaystyle\phi_{n+1}=n\phi_{n}^{-1}-\phi_{n-1}-\phi_{n}-\gamma.$
Moreover, let us suppose that for some integer $m$ the matrices $\beta_{m+1}$,
$\beta_{m}$ and $\alpha$ are simultaneously triangularizable, then all the
sequence $\\{\beta_{n}\\}_{n=0}^{\infty}$ is simultaneously triangularizable.
###### Proof.
On the one hand, from (58) we conclude that $M^{-1}\beta_{2}M$ is lower
triangular and in fact that $\\{M^{-1}\beta_{n}M\\}_{n\geq 0}$ is a sequence
of lower triangular matrices. On the other hand, if for some integer $m$ the
matrices $\beta_{m+1}$, $\beta_{m}$ and $\alpha$ are simultaneously
triangularizable, we have
$\displaystyle\beta_{m+1}$
$\displaystyle=m\beta_{m}^{-1}-\beta_{m}-\beta_{m-1}-\alpha,$
$\displaystyle\beta_{m}$
$\displaystyle=(m-1)\beta_{m-1}^{-1}-\beta_{m-1}-\beta_{m-2}-\alpha,$
which implies that $\beta_{m-1},\beta_{m-2}$ are triangularized by the same
transformation that triangularizes $\beta_{m+1}$, $\beta_{m}$ and $\alpha$. ∎
The simultaneous triangularizability can be achieved, for example, when
$[\beta_{1},\alpha]=0$, as in this case we can always find an invertible
matrix $M$ such that $\beta_{1}=M\phi_{1}M^{-1}$ and $\alpha=M\gamma M^{-1}$
where $\phi_{1}$ and $\gamma$ are lower triangular matrices, for example by
finding the Jordan form of these two commuting matrices. This is precisely the
situation for the solutions related with matrix orthogonal polynomials.
Obviously, this is just a sufficient condition. From now on, and following
Proposition 10, we will assume that the simultaneous triangularizability of
$\alpha$ and $\beta_{1}$ holds and study the case where $\alpha$ and all
the $\beta$’s are lower triangular matrices. Thus, we will use the splitting
$\displaystyle\beta_{n}$ $\displaystyle=D_{n}+N_{n},$ (81)
$\displaystyle\alpha$ $\displaystyle=\alpha_{D}+\alpha_{N},$ (82)
where $D_{n}=\operatorname{diag}(D_{n;1},\dots,D_{n;N})$ and
$\alpha_{D}=\operatorname{diag}(\alpha_{D,1},\dots,\alpha_{D,N})$ are the
diagonal parts of $\beta_{n}$ and $\alpha$, respectively and $N_{n}$ and
$\alpha_{N}$ are the strictly lower parts of $\beta_{n}$ and $\alpha$,
respectively. Then, (58) splits into
$\displaystyle\begin{aligned}
D_{n+1}+N_{n+1}={n}D_{n}^{-1}-D_{n-1}-D_{n}-\alpha_{D}\\\
+n\bar{N}_{n}-N_{n-1}-N_{n}-\alpha_{N},\end{aligned}$ (83)
where $\bar{N}_{n}$ denotes the strictly lower triangular part
$\beta_{n}^{-1}$; i.e.,
$\displaystyle\beta_{n}^{-1}=D_{n}^{-1}+\bar{N}_{n}.$
Hence, (58) decouples into
$\displaystyle D_{n+1}={n}D_{n}^{-1}-D_{n-1}-D_{n}-\alpha_{D},$ (84)
$\displaystyle N_{n+1}=n\bar{N}_{n}-N_{n-1}-N_{n}-\alpha_{N}.$ (85)
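The decoupling works because, for an invertible lower triangular $\beta_{n}$, the diagonal of $\beta_{n}^{-1}$ is exactly $D_{n}^{-1}$; a quick numerical check of this fact (NumPy, with an arbitrary example matrix):

```python
import numpy as np

beta = np.array([[2.0, 0.0, 0.0],
                 [1.0, 3.0, 0.0],
                 [0.5, -1.0, 4.0]])  # an arbitrary invertible lower triangular matrix
inv = np.linalg.inv(beta)
# the inverse is again lower triangular, and its diagonal is the inverse of D_n
print(np.allclose(np.triu(inv, k=1), 0.0), np.allclose(np.diag(inv), 1.0 / np.diag(beta)))
```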
In this context it is easy to see that there always exists a
transformation leading to the situation where
$\displaystyle\beta_{m,0}=\begin{pmatrix}0&0&\cdots&0&0&\cdots&0\\\
0&0&\cdots&0&0&\cdots&0\\\ \vdots&\vdots&&\vdots&\vdots&&\vdots\\\
\beta_{m,0;r+1,1}&\beta_{m,0;r+1,2}&\cdots&\beta_{m,0;r+1,r+1}&0&\cdots&0\\\
\beta_{m,0;r+2,1}&\beta_{m,0;r+2,2}&\cdots&\beta_{m,0;r+2,r+1}&\beta_{m,0;r+2,r+2}&\cdots&0\\\
\vdots&\vdots&&\vdots&\vdots&&\vdots\\\
\beta_{m,0;N,1}&\beta_{m,0;N,2}&\cdots&\beta_{m,0;N,r+1}&\beta_{m,0;N,r+2}&\cdots&\beta_{m,0;N,N}\\\
\end{pmatrix}.$ (86)
###### Proposition 11.
The singularities of the diagonal part are confined. More explicitly, if we
assume that (79), (80) and (86) hold true at a given step $m$ then
$\displaystyle D_{m+1;i}$
$\displaystyle=\frac{m}{\beta_{m,1;i,i}}\epsilon^{-1}-\beta_{m-1,0;i,i}-\frac{\beta_{m,2;i,i}m}{\beta_{m,1;i,i}^{2}}-\alpha_{D,i}+O(\epsilon),$
$\displaystyle D_{m+2;i}$
$\displaystyle=-\frac{m}{\beta_{m,1;i,i}}\epsilon^{-1}+\beta_{m-1,0;i,i}+\frac{\beta_{m,2;i,i}m}{\beta_{m,1;i,i}^{2}}+O(\epsilon),$
$\displaystyle D_{m+3;i}$
$\displaystyle=-\beta_{m,1;i,i}\frac{m+3}{m}\epsilon+O(\epsilon^{2}),$ (87)
$\displaystyle D_{m+4;i}$
$\displaystyle=\frac{m\beta_{m-1,0;i,i}-2\alpha_{D,i}}{m+3}+O(\epsilon),$ (88)
when $\epsilon\to 0$.
###### Proof.
From (79), (80) and (86) we deduce
$\displaystyle D_{m-1,i}$
$\displaystyle=\beta_{m-1,0;i,i}+\beta_{m-1,1;i,i}\epsilon+O(\epsilon^{2}),$
$\displaystyle D_{m,i}$
$\displaystyle=\beta_{m,1;i,i}\epsilon+O(\epsilon^{2}),$
for $\epsilon\to 0$, with $i\geq r+1$. Substitution of these expressions in
(84) leads to the stated formulae. For $i\leq r$ the coefficients $D_{m-1;i}$
and $D_{m;i}$ are $O(1)$ as $\epsilon\to 0$, thus they do not vanish, and
consequently there is confinement of singularities for the diagonal part
$D_{n}$. ∎
In what follows we will consider asymptotic expansions taking values in the
set of lower triangular matrices
$\displaystyle\mathbb{T}$
$\displaystyle:=\\{T_{0}+T_{1}\epsilon+O(\epsilon^{2}),\;\epsilon\to 0,\quad
T_{i}\in\mathfrak{t}_{N}\\},$ $\displaystyle\mathfrak{t}_{N}$
$\displaystyle:=\\{T=(T_{i,j})\in\mathbb{C}^{N\times N},\quad T_{i,j}=0\text{
when $i<j$}\\},$ (89)
where $\mathfrak{t}_{N}$ is the set of lower triangular $N\times N$ matrices.
The reader should notice that this set
$\mathbb{T}=\mathfrak{t}_{N}[[\epsilon]]$ is a subring of the ring of
$\mathbb{C}^{N\times N}$-valued asymptotic expansions; in fact it is a subring
with identity, the matrix $\mathbb{I}_{N}$. We will use the notation
$\displaystyle T_{i}$ $\displaystyle:=\begin{pmatrix}T_{i,11}&0\\\
T_{i,21}&T_{i,22}\end{pmatrix},$ $\displaystyle i$ $\displaystyle\geq 1,$ (90)
where $T_{i,11}\in\mathfrak{t}_{r}$, $T_{i,22}\in\mathfrak{t}_{N-r}$ and
$T_{i,21}\in\mathbb{C}^{(N-r)\times r}$. We consider two sets of matrices
determined by (86), namely
$\displaystyle\mathfrak{k}$
$\displaystyle:=\Big{\\{}K_{0}=\begin{pmatrix}0&0\\\
K_{0,21}&K_{0,22}\end{pmatrix},K_{0,21}\in\mathbb{C}^{(N-r)\times
r},K_{0,22}\in\mathfrak{t}_{N-r}\Big{\\}},$ $\displaystyle\mathfrak{l}$
$\displaystyle:=\\{L_{-1}=\begin{pmatrix}L_{-1,11}&0\\\
L_{-1,21}&0\end{pmatrix},\;L_{-1,11}\in\mathfrak{t}_{r},L_{-1,21}\in\mathbb{C}^{(N-r)\times
r}\Big{\\}},$
and the related sets
$\displaystyle\mathbb{K}$
$\displaystyle:=\Big{\\{}K=K_{0}+K_{1}\epsilon+O(\epsilon^{2})\in\mathbb{T},\quad
K_{0}\in\mathfrak{k}\Big{\\}},$ (91) $\displaystyle\mathbb{L}$
$\displaystyle:=\Big{\\{}L=L_{-1}\epsilon^{-1}+L_{0}+L_{1}\epsilon+O(\epsilon^{2})\in\epsilon^{-1}\mathbb{T},\quad
L_{-1}\in\mathfrak{l}\Big{\\}},$ (92)
which fulfill the following important properties.
###### Proposition 12.
1. 1.
Both $\mathbb{K}$ and $\epsilon\mathbb{L}$ are subrings of the ring with
identity $\mathbb{T}$, however these two subrings have no identity.
2. 2.
If an element $X\in\mathbb{K}$ is an invertible matrix, then
$X^{-1}\in\mathbb{L}$, and reciprocally if $X\in\mathbb{L}$ is invertible,
then $X^{-1}\in\mathbb{K}$.
3. 3.
The subrings $\epsilon\mathbb{L}$ and $\mathbb{K}$ are bilateral ideals of
$\mathbb{T}$; i.e., $\mathbb{L}\cdot\mathbb{T}\subset\mathbb{L}$,
$\mathbb{T}\cdot\mathbb{L}\subset\mathbb{L}$,
$\mathbb{T}\cdot\mathbb{K}\subset\mathbb{K}$ and
$\mathbb{K}\cdot\mathbb{T}\subset\mathbb{K}$.
4. 4.
We have $\mathbb{L}\cdot\mathbb{K}\subset\mathbb{T}$.
###### Theorem 2.
If $\beta_{1}$ and $\alpha$ are simultaneously triangularizable matrices then
the singularities of (58) are confined. More explicitly, if for a given step
$m$ the conditions (79), (80) and (86) are satisfied then
$\displaystyle\beta_{m+1},\beta_{m+2}$ $\displaystyle\in\mathbb{L},$
$\displaystyle\beta_{m+3}$ $\displaystyle\in\mathbb{K},$
$\displaystyle\beta_{m+4}$ $\displaystyle\in\mathbb{T},$
$\displaystyle\det\beta_{m+4}$ $\displaystyle=O(1),\quad\epsilon\to 0.$
###### Proof.
From (80) and (86) we conclude that $\beta_{m}\in\mathbb{K}$ and consequently
$\beta_{m}^{-1}\in\mathbb{L}$. Taking into account this fact, (58) implies
that $\beta_{m+1}\in\mathbb{L}$. Therefore, $\beta_{m+1}^{-1}\in\mathbb{K}$
and (58), as $\beta_{m+1}\in\mathbb{L}$, give $\beta_{m+2}\in\mathbb{L}$ and
consequently $\beta_{m+2}^{-1}\in\mathbb{K}$. Iterating (58) we get
$\displaystyle\beta_{m+3}=\beta_{m}-(m+1)\beta_{m+1}^{-1}+(m+2)\beta_{m+2}^{-1}.$
(93)
Using the just derived facts,
$\beta_{m+1}^{-1},\beta_{m+2}^{-1}\in\mathbb{K}$, and that
$\beta_{m}\in\mathbb{K}$, we deduce $\beta_{m+3}\in\mathbb{K}$ which implies
$\beta_{m+3}^{-1}\in\mathbb{L}$. Finally, (58) gives $\beta_{m+4}$ as
$\displaystyle\beta_{m+4}=(m+3)\beta_{m+3}^{-1}-\beta_{m+2}-\beta_{m+3}-\alpha.$
(94)
We conclude that there are only two possibilities:
1. 1.
$\beta_{m+4}=O(1)$ for $\epsilon\to 0$, or
2. 2.
$\beta_{m+4}\in\mathbb{L}$.
Let us consider both possibilities separately.
1. 1.
Recalling that the diagonal part has singularity confinement, see Proposition
11, in the first case we see that $\det\beta_{m+4}=O(1)$ when $\epsilon\to 0$,
as desired.
2. 2.
In this second case we write $\beta_{m+4}$ as
$\displaystyle\beta_{m+4}$ $\displaystyle=\beta_{m+3}^{-1}A+O(1),$
$\displaystyle\epsilon$ $\displaystyle\rightarrow 0,$ $\displaystyle
A:=(m+3){\mathbb{I}}-\beta_{m+3}\beta_{m+2}.$ (95)
Observe that repeated use of (58) leads to the following expressions:
$\displaystyle A$
$\displaystyle={\mathbb{I}}+[(m+1)\beta_{m+1}^{-1}-\beta_{m}]\beta_{m+2}$
$\displaystyle=k+{\mathbb{I}}-[(m+1)\beta_{m+1}^{-1}-\beta_{m}]\beta_{m+1}$
$\displaystyle=k-m{\mathbb{I}}+\beta_{m}\beta_{m+1}$
$\displaystyle=k-\beta_{m}(\beta_{m}+\beta_{m-1}+\alpha),$
where
$\displaystyle
k:=[(m+1)\beta_{m+1}^{-1}-\beta_{m}][(m+1)\beta_{m+1}^{-1}-\beta_{m}-\alpha].$
From these formulae, as $\beta_{m+1}^{-1},\beta_{m}\in\mathbb{K}$ we deduce
that $k\in\mathbb{K}$ and also that
$\beta_{m}(\beta_{m}+\beta_{m-1}+\alpha)\in\mathbb{K}$. Hence, we conclude
that $A\in\mathbb{K}$ and from (95) and Proposition 12 we deduce that
$\beta_{m+4}=O(1)$ when $\epsilon\to 0$. Consequently, we arrive at a
contradiction, and only possibility 1) remains.
∎
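The confinement pattern of Theorem 2 is easy to observe numerically: feeding (58) with $\beta_{m-1}=O(1)$ and a nearly singular $\beta_{m}=O(\epsilon)$, the next two iterates blow up as $\epsilon^{-1}$, $\beta_{m+3}$ shrinks to $O(\epsilon)$, and $\beta_{m+4}$ returns to $O(1)$. A minimal sketch in the scalar case $N=1$ (the initial data below are illustrative and not taken from the text):

```python
def step(n, b_prev, b_cur, alpha):
    """One step of Eq. (58) in the scalar (N = 1) case: returns beta_{n+1}."""
    return n / b_cur - b_prev - b_cur - alpha

m, alpha, eps = 5, 0.25, 1e-4           # pretend step m is nearly singular
b = {m - 1: 1.0, m: eps}                # beta_{m-1} = O(1), beta_m = eps
for n in range(m, m + 4):
    b[n + 1] = step(n, b[n - 1], b[n], alpha)

for n in range(m - 1, m + 5):
    print(n, b[n])
# beta_{m+1}, beta_{m+2} ~ 1/eps, beta_{m+3} ~ eps, and beta_{m+4} is O(1),
# close to (m*beta_{m-1} - 2*alpha)/(m+3), as in Proposition 11.
```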
## Acknowledgements
The authors thank the Spanish Ministerio de Ciencia e Innovación for financial
support through the research project FIS2008-00200. GAC acknowledges the support of
a grant of the Universidad Complutense de Madrid. Finally, MM acknowledges illuminating
discussions with Dr. Mattia Cafasso in relation to orthogonality and
singularity confinement, and both authors are grateful to Prof. Gabriel
Álvarez Galindo for several discussions and for the experimental confirmation,
via Mathematica, of the existence of the confinement of singularities in the
$2\times 2$ case.
|
arxiv-papers
| 2011-05-31T21:47:35 |
2024-09-04T02:49:19.240530
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Giovanni A. Cassatella-Contra and Manuel Manas",
"submitter": "Manuel Manas",
"url": "https://arxiv.org/abs/1106.0036"
}
|
1106.0047
|
# Spatial Inhomogeneity in RFeAsO1-xFx($R=$Pr, Nd) Determined from Rare Earth
Crystal Field Excitations
E. A. Goremychkin Argonne National Laboratory, Argonne, IL 60439, USA ISIS
Facility, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 OQX,
United Kingdom R. Osborn Argonne National Laboratory, Argonne, IL 60439, USA
C. H. Wang M. D. Lumsden Oak Ridge National Laboratory, Oak Ridge, TN 37831,
USA M. A. McGuire A. S. Sefat B. C. Sales D. Mandrus Oak Ridge National
Laboratory, Oak Ridge, TN 37831, USA H. M. Rønnow Laboratory for Quantum
Magnetism, Ecole Polytechnique Fédérale de Lausanne, CH-1015, Switzerland Y.
Su Jülich Centre for Neutron Science, FZ Jülich, Outstation at FRM II,
D-85747 Garching, Germany A. D. Christianson Oak Ridge National Laboratory,
Oak Ridge, TN 37831, USA ROsborn@anl.gov
###### Abstract
We report inelastic neutron scattering measurements of crystal field
transitions in PrFeAsO, PrFeAsO0.87F0.13, and NdFeAsO0.85F0.15. Doping with
fluorine produces additional crystal field excitations, providing evidence
that there are two distinct charge environments around the rare earth ions,
with probabilities that are consistent with a random distribution of dopants
on the oxygen sites. The 4$f$ electrons of the Pr3+ and Nd3+ ions have non-
magnetic and magnetic ground states, respectively, indicating that the
enhancement of Tc compared to LaFeAsO1-xFx is not due to rare earth magnetism.
Tuning materials to enhance their properties is the driving force behind much
of modern condensed matter physics. Chemical doping is often the most
practical means of accomplishing this goal but a thorough microscopic
understanding of the effect of doping is a major challenge. For example, the
role of phase separation in the high-temperature superconducting
cupratesMuller:1993p1 and colossal magnetoresistive manganitesMoreo:1999p2841
is still being debated. In the recently discovered iron-based superconductors,
chemical doping is the primary method of inducing superconductivity, but there
is conflicting evidence whether this is due to chemical
pressureTakahashi:2008p7382 , the change in carrier concentrationWen:2008p7965
, disorderWadati:2010p34986 , or a combination of all three. Moreover, the
question of phase separation in the superconducting phase itself has not been
conclusively answered and may be material-dependent, with evidence of both
phase separation into antiferromagnetic and superconducting
regionsPark:2009p20194 and phase coexistenceFernandes:2010p33898 .
The RFeAsO1-xFx ($R=$La, Ce, Pr, Nd, Sm, Gd) seriesKamihara:2008p7994 ;
Ren:2008p7759 ; Ren:2008p35719 ; Ren:2008p35740 ; Chen:2008p8161 ;
ChengPeng:2008p35965 ; Kadowaki:2009p36063 ; Khlybov:2009p36031 was the first
family of iron-based superconductors to be discovered. Replacement of
lanthanum with other rare earths increases Tc up to $\sim 55$ K in the
optimally doped regime close to $x=0.15$. Doping with fluorine adds electrons
into iron $d$-bands but the substitution of trivalent rare earths for
lanthanum does not change the carrier concentration, so any changes are either
due to the influence of local 4$f$ magnetism or the effect of chemical
pressure from the well-known lanthanide contraction. Understanding which of
these are responsible for the enhancement of Tc has not been resolved. Recent
neutron diffraction, muon spin relaxation ($\mu$SR), and Mössbauer studies of
the $R$FeAsO parent compounds have shown evidence for strong coupling between
rare earth and iron magnetismKimber:2008p35766 ; Zhao:2008p13387 ;
Maeter:2009p30898 ; McGuire:2009p33606 . Assuming that this coupling persists
in fluorine-doped systems, we might expect the 4$f$ moments to influence the
superconducting properties as well. Indeed, a recent 19F NMR study of
SmFeAsO1-xFx has inferred a non-negligible coupling between the 4$f$ and
conduction electrons Prando:2010p35776 .
Measurements of rare earth crystal field (CF) excitations using inelastic
neutron scattering provide unique insight into the effects both of
substituting magnetic rare earth ions for lanthanum and of doping fluorine
onto the oxygen sublattice. This is because it is both a local probe, since
the CF potential acting on the 4$f$ electrons is determined by the
electrostatic environment produced predominantly by the nearest-neighbor
oxygen/fluorine ions, and a bulk probe, since the CF transition intensities
are a true thermodynamic average of the whole sample. Relative changes in the
CF peak intensities can be directly related to the volume fraction of rare
earth ions affected by a particular configuration of neighboring ligands.
In this article, we present a comparison of CF excitations measured in
PrFeAsO1-xFxwith $x=0.0$ and 0.13, which shows that there are two distinct
charge environments in the superconducting compound, similar to the
conclusions of a recent 75As NQR studyLang:2010p32185 . However, the measured
reduction in CF intensity with $x$ is consistent with a random distribution of
fluorine ions, showing that the charge environments are produced by the dopant
ions and not electronic phase separation in the iron layers, as proposed in
the earlier work. Similarly, our measurements on NdFeAsO0.85F0.15 reveal more
crystal field excitations than would be allowed due to either tetragonal or
orthorhombic symmetry, also indicating the presence of inequivalent rare earth
sites. Moreover, we are able to eliminate or severely constrain the relevance
of rare earth magnetism to the pairing mechanism since superconducting
PrFeAsO0.87F0.13 has a singlet ground state while superconducting
NdFeAsO0.85F0.15 has a magnetic Kramer’s doublet ground state, even though
both the superconducting transition temperatures are similarRen:2008p7759 ;
Ren:2008p35719 .
Powder samples of RFeAsO1-xFx were synthesized following the method described
in Ref. McGuire:2009p33606 . Superconducting transition temperatures
determined by the onset of diamagnetism in an applied field of 20 Oe are 41 K
and 49 K for PrFeAsO1-xFx ($x=0.13\pm 0.01$ determined from the phase diagram
of Ref. Rotundu:2009p36139 ), and NdFeAsO0.85F0.15 (nominal composition),
respectively. Structural characterization by neutron diffraction was performed
on GEM at the ISIS Facility and HB2A at the High Flux Isotope Reactor.
Inelastic neutron scattering (INS) studies were conducted on time-of-flight
spectrometers Merlin at ISIS and IN4 and IN6 at the Institut Laue Langevin.
The INS data have been placed on an absolute scale by normalization to a
vanadium standard.
Figure 1: Neutron powder diffraction in PrFeAsO0.87F0.13 measured on HB2A at 4
K (circles) compared to the Rietveld refinement (line). The line below the
plot shows the difference between the data and the refinement.
Fig. 1 shows the neutron diffraction pattern for PrFeAsO0.87F0.13 measured at
4 K, compared to a Rietveld fit using FullProfRodriguezCarvajal:1993p35834 .
Structural refinements confirmed the orthorhombic (Cmma) structure in the
parent compound and the tetragonal ($P4/nmm$) structure in the superconducting
sample, with evidence for $4.3\pm 0.2$% of FeAs impurity in both the $x=0.0$
and 0.13 compounds. A PrOF impurity phase ($\sim 3.8$%) is also detectable in
PrFeAsO0.87F0.13, but is too small to produce the CF excitations discussed
below. There is no evidence of structural inhomogeneity in the primary phases
of PrFeAsO and PrFeAsO0.87F0.13. No impurity phase was observed in
NdFeAsO0.85F0.15.
Fig. 2 shows the CF excitations in PrFeAsO and PrFeAsO0.87F0.13 below 14 meV.
The crystal field excitations are evident as the peaks at nonzero energy
transfers that are not present in LaFeAsO, which has no 4$f$ electrons. There
are additional CF excitations between 30 and 40 meV, which are not shown. In
the parent compound, PrFeAsO, the CF excitations centered at $\sim 3.5$ meV
are split above TN(Pr)=12 K. We assume that this is due to the internal
molecular field created by the Fe sublattice below TN(Fe) = 127
K Zhao:2008p13387 , although we cannot confirm that they become degenerate
above TN(Fe) because of thermal broadening. We have insufficient information to
solve the CF potential, so we cannot construct a microscopic model to explain
the collapse of the splitting when the rare earth sublattice magnetically
orders as seen in the inset of Fig. 2(a). However, it may be due to a spin
reorientation, similar to what has been observed in NdFeAsOTian:2010p34291 ,
and is consistent with other evidence of an interplay between iron and rare
earth magnetism in PrFeAsO, such as a reduction in the intensity of the iron
magnetic Bragg peakKimber:2008p35766 and a reduction in the $\mu$SR
frequencyMaeter:2009p30898 approaching TN(Pr).
Figure 2: (Color) Inelastic neutron scattering spectra of (a) PrFeAsO and (b)
PrFeAsO0.87F0.13(circles) measured on IN4 with an incident energy of 17 meV
and an average elastic wavevector, Q, of 0.85 Å-1. The non-magnetic background
is approximately given by the spectra of LaFeAsO (triangles). The solid lines
in the panels (a) and (b) are fits to Gaussian lineshapes convolved with the
instrumental resolution. (a) In PrFeAsO, the CF excitations from the ground
state are centered at $3.58\pm 0.01$ meV at 1.5 K. The inset shows the
temperature evolution of the CF excitation energies. (b) In PrFeAsO0.87F0.13,
A, B and C label CF excitations from the ground state, with energies of
$2.78\pm 0.01$ meV, $9.72\pm 0.05$ meV, and $11.8\pm 0.1$ meV, respectively,
measured at 1.5 K. The inset shows the probabilities as a function of doping
of the rare earth site having zero (solid line), one (dashed line), or two
(dotted line) fluorine ions as nearest neighbors on the oxygen sublattice
assuming a random distribution. The solid circle is the ratio of the
intensities of the $\sim 3$ meV CF excitation in PrFeAsO0.87F0.13 and PrFeAsO.
In PrFeAsO0.87F0.13, there is also a CF peak at $\sim 3$ meV (labeled A) but
there is no evidence of any splitting, which is consistent with the absence of
long range magnetic order of the iron sublattice. However, the most striking
observation is the appearance of two extra CF peaks at $9.72\pm 0.05$ meV and
$11.8\pm 0.1$ (labelled B and C, respectively) meV in the superconducting
compound. There are no structural or magnetic phase transitions and the amount
of PrOF impurity phase is far too small to explain them. The temperature
dependence of their intensities confirm that all the transitions represent
transitions from the ground state. However, the intensity of peak A decreases
much faster than the intensities of B and C. As seen in Fig. 2(b), between 1.5
K and 50 K, the intensity of peak A falls by a factor of four whereas the
intensities of peaks B and C remain almost the same.
In order to quantify this observation, we have fit the measured data with a
set of three Gaussian peaks and converted them to static susceptibilities in
absolute units, using the Kramers-Kronig relations. The results of the
temperature dependence of these fits are presented in Fig. 3. The
susceptibilities of both the A and B transitions show typical Van Vleck
behavior but their different temperature dependencies unambiguously indicate
that these transitions belongs to two different rare-earth sites with
different CF potential or charge environments.
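The conversion itself is a standard Kramers-Kronig step; a minimal sketch is given below, assuming the fitted Gaussian peak has already been put on an absolute scale for $\chi''(\omega)$ (detailed-balance and form-factor corrections, and the exact unit convention, are omitted).

```python
import numpy as np

def static_susceptibility(center, fwhm, area, n_pts=20001):
    """Kramers-Kronig estimate chi'(0) = (2/pi) * int_0^inf chi''(w)/w dw for a
    single inelastic peak modeled as a Gaussian in chi''(w); center and fwhm in meV,
    'area' is the integrated spectral weight in whatever absolute units are used."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    w = np.linspace(1e-3, center + 10.0 * sigma, n_pts)
    chi2 = area * np.exp(-(w - center) ** 2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    dw = w[1] - w[0]
    return (2.0 / np.pi) * np.sum(chi2 / w) * dw

# hypothetical inputs, roughly in the range of peaks A and B discussed above
print(static_susceptibility(center=3.5, fwhm=1.5, area=1.0))
print(static_susceptibility(center=9.7, fwhm=1.5, area=1.0))
```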
Figure 3: Temperature dependence of the Van Vleck susceptibilities derived
from the inelastic neutron scattering data for the crystal field transitions
at $\sim 3.5$ meV (A) and $\sim 9.7$ meV (B). The temperature dependence of
the intensity of peak C is similar to peak B.
The wide separation between the peaks at A and B show that fluorine doping
strongly affects the local electrostatic potential felt by the rare earth
ions. Given that there are four oxygen nearest neighbors to the rare earth, up
to five different crystal field spectra are possible due to configurations
ranging from zero to four fluorine nearest neighbors. Assuming a random
distribution of fluorine atoms at this doping level would result in 57% of the
rare earth sites with no fluorine nearest neighbors. Additionally 34% of the
rare earth sites would have one fluorine nearest neighbor. The remaining 9% of
the rare earth sites would have two, three, or four fluorine nearest neighbors
with decreasing probability. Any tendency of the fluorine ions to cluster
would alter these ratios. In particular, it would increase the fraction of
rare earth sites with no fluorine ions and redistribute the remaining ratios.
In fact, a comparison of the spectral weight of the peak at $\sim 3.5$ meV
between the superconducting compound and the parent compound yields a ratio of
$58\pm 4$%, which is consistent with a random distribution of fluorine dopants.
Although it should be confirmed by measuring this ratio as a function of $x$,
this value is sufficiently precise to be strong evidence against any
substantial clustering of fluorine ions on the oxygen sublattice.
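The quoted fractions follow from binomial statistics over the four nearest-neighbor oxygen sites; a minimal sketch reproducing them (the doping level is the only input):

```python
from math import comb

def neighbor_probabilities(x, n_sites=4):
    """P(k of the n_sites nearest-neighbor oxygen positions host a fluorine ion),
    assuming a random (binomial) distribution of dopants."""
    return [comb(n_sites, k) * x**k * (1.0 - x) ** (n_sites - k) for k in range(n_sites + 1)]

print([round(p, 3) for p in neighbor_probabilities(0.13)])
# ~ [0.573, 0.342, 0.077, 0.008, 0.000]: 57% with no F neighbor, 34% with one, ~9% with two or more
```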
The widths of the CF transitions in both the parent and superconducting
compounds are quite small and have a Gaussian line shape. The $\sim 3$ meV
peak width (FWHM) at 1.5 K is $0.61\pm 0.02$ meV in PrFeAsO and $1.54\pm 0.03$
meV in PrFeAsO0.87F0.13, after correction for the instrumental resolution.
This indicates that there is some broadening from chemical disorder, probably
produced by lattice strains due to longer-range fluctuations in the
configuration of fluorine neighbors. The fact that we are seeing well defined,
sharp CF transitions in the superconducting compound is evidence that the
distribution of structural or electronic defects around the praseodymium sites
is small and that we are dealing with two well-defined charge environments.
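For Gaussian line shapes the resolution correction quoted above amounts to subtracting widths in quadrature; a minimal sketch (the numbers below are hypothetical inputs, not values taken from the measurement):

```python
import numpy as np

def intrinsic_fwhm(observed_fwhm, resolution_fwhm):
    """Deconvolve a Gaussian instrumental resolution from a Gaussian peak:
    FWHM_intrinsic = sqrt(FWHM_observed^2 - FWHM_resolution^2)."""
    return np.sqrt(observed_fwhm**2 - resolution_fwhm**2)

print(intrinsic_fwhm(observed_fwhm=0.75, resolution_fwhm=0.44))  # illustrative: ~0.61 meV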
Figure 4: (Color) (a) Inelastic neutron scattering spectra of
NdFeAsO0.85F0.15measured on Merlin with incident energies of 15 meV (circles)
and 35 meV (diamonds) at an average Q of 1 Å-1 and temperatures of 7 K (blue
symbols) and 50 K (red symbols). Open symbols are non-magnetic scattering from
LaFeAsO0.85F0.15. In the panels (b)-(e) the inelastic neutron scattering data
measured on the IN6 with an incident energy of 3.1 meV and an average elastic
Q of 0.65 Å-1 for NdFeAsO0.85F0.15(b), (d) and PrFeAsO0.87F0.13(c), (e) at 10
K (b), (c) and 30 K (d), (e). The dotted line in (b), (d) and solid line in
(c), (e) is the elastic nuclear scattering. In (b) and (d), the dashed line is
the quasielastic Gaussian line shape fit and the solid line is the sum of
elastic nuclear, magnetic and small linear background contributions.
Another example where fluorine doping produces multiple rare earth
environments in a RFeAsO1-xFx material is for $R=$Nd. The INS data for
NdFeAsO0.85F0.15 measured on the Merlin spectrometer at 7 K and 50 K and at
incident energies of 15 meV and 30 meV are shown in Fig. 4(a). For Nd3+ ions in a
point group symmetry lower than cubic, i.e. tetragonal, orthorhombic, or
monoclinic, the $J=9/2$ ground state multiplet breaks up into five Kramers
doublets, so there can be at most four CF transitions from the ground state.
There are clearly five peaks due to ground state CF transitions and thus,
while it is difficult to assign each peak to a particular rare earth site, the
data are consistent with the picture presented above, in which the local
electrostatic potential of the rare earth ion is modified by fluorine doping.
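The level counting used above is elementary: for a Kramers ion with half-integer $J$ in symmetry lower than cubic, the $2J+1$ states split into $(2J+1)/2$ doublets, giving at most $(2J+1)/2-1$ ground-state transitions. A trivial sketch:

```python
def cf_transition_count(two_J):
    """Maximum number of ground-state CF transitions in symmetry lower than cubic:
    Kramers ions (half-integer J) split into (2J+1)/2 doublets; non-Kramers ions
    (integer J) can split into up to 2J+1 singlets."""
    n_levels = (two_J + 1) // 2 if two_J % 2 == 1 else two_J + 1
    return n_levels - 1

print(cf_transition_count(two_J=9))  # Nd3+, J = 9/2 -> 4 allowed ground-state transitions
print(cf_transition_count(two_J=8))  # Pr3+, J = 4  -> up to 8
```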
These observations bear some similarity to recent NQR measurements showing the
existence of two charge environments in underdoped RFeAsO1-xFxfor $R=$La and
SmLang:2010p32185 . 75As NQR is sensitive to the the electric field gradient,
i.e., the second-degree CF potential, acting on the arsenic sites, which is
also affected by changes in the local fluorine ion distribution. However, Lang
et al. argue that the two spectral components they observe in the
superconducting phase are due to local electronic order on the iron layer,
because of the $x$-dependence of the spectral weights. It is more difficult to
model the effect of different fluorine ion configurations because of the
greater distance of the arsenic ions from the oxygen/fluorine sublattice and
the longer-range of the second-degree CF potential, at least in ionic
environments. It is clear that their data are more affected by inhomogeneous
broadening than the neutron data, making the spectral weight ratios more
difficult to determine, so we cannot rule out their interpretation, but it
seems unlikely that two such similar probes should both find evidence of two
charge environments with completely different origins.
The final issue we wish to comment on is the effect that rare earth
substitution has on superconductivity. As stated earlier, the fact that the
rare earth substituents are isovalent to lanthanum means that the increase in
Tc is either due to chemical pressure from their smaller ionic size or due to
the 4$f$ magnetic moments coupling to the iron magnetism and influencing the
pairing interaction. The high resolution neutron scattering data shown in Fig.
4(b-e) show that the low energy magnetic fluctuations are very different in
PrFeAsO0.87F0.13 and NdFeAsO0.85F0.15. In the case of neodymium, there is
strong quasielastic scattering indicating a magnetic ground state as expected
for a system of Kramer’s doublets. On the other hand, the praseodymium sample
exhibits no magnetic signal in this energy window, indicating unambiguously a
nonmagnetic singlet ground state. Therefore, even though the superconducting
transition temperature is nearly the same for both materials at optimal
doping, the rare earth ground states are very different in both materials.
This makes it unlikely that the 4$f$ magnetic moment is involved in the
superconductivity in these materials. Thus, we conclude that it is the effect
of chemical pressure due to the lanthanide contraction that is responsible for
enhancing Tc when lanthanum is replaced by another rare earth element.
In conclusion, we have measured the crystal field excitations in RFeAsO1-xFx
using inelastic neutron scattering and established the existence of two charge
environments for rare earth sites in nearly optimally-doped PrFeAsO1-xFx and
NdFeAsO1-xFx compounds that are due to a random distribution of fluorine ions,
rather than electronic phase separation on the iron layers as proposed in an
earlier NQR studyLang:2010p32185 . Measurements of low-energy magnetic
fluctuations reveal that the Pr3+ and Nd3+ 4$f$ electrons have nonmagnetic and
magnetic CF ground states, respectively, from which we infer that the 4$f$
magnetic moments are not responsible for the nearly identical enhancement of
superconductivity in these compounds compared to LaFeAsO1-xFx and conclude
that a more likely candidate is chemical pressure produced by the lanthanide
contraction.
We acknowledge useful discussions with E. Dagotto and assistance in the
neutron scattering experiments from O. Garlea (ORNL), T. Guidi (ISIS), A.
Orecchini (ILL) and M. Koza (ILL). Research at Argonne and Oak Ridge is
supported by the U.S. Department of Energy, Office of Science, Office of Basic
Energy Sciences, Materials Sciences and Engineering Division and Scientific
User Facilities Division.
## References
* (1) Phase Separation in Cuprate Superconductors, edited by K. Müller and G. Benedek (World Scientific Pub Co Inc., Italy, 1993).
* (2) A. Moreo, S. Yunoki, and E. Dagotto, Science 283, 2034 (1999).
* (3) H. Takahashi et al., Nature 453, 376 (2008).
* (4) H.-H. Wen et al., EPL 82, 17009 (2008).
* (5) H. Wadati, I. Elfimov, and G. A. Sawatzky, Phys. Rev. Lett. 105, 157004 (2010).
* (6) J. T. Park et al., Phys. Rev. Lett. 102, 117006 (2009).
* (7) R. M. Fernandes and J. Schmalian, Phys. Rev. B 82, 014521 (2010).
* (8) Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. 130, 3296 (2008).
* (9) Z.-A. Ren et al., EPL 82, 57002 (2008).
* (10) Z.-A. Ren et al., Mater. Res. Innov. 12, 105 (2008).
* (11) Z.-A. Ren et al., Chinese Phys. Lett. 25, 2215 (2008).
* (12) G. F. Chen et al., Phys. Rev. Lett. 100, 247002 (2008).
* (13) C. Peng et al., Sci. China Ser. G 51, 719 (2008).
* (14) K. Kadowaki, A. Goya, T. Mochiji, and S. Chong, J. Phys.: Conf. Ser. 150, 052088 (2009).
* (15) E. P. Khlybov et al., JETP Lett. 90, 387 (2009).
* (16) S. A. J. Kimber et al., Phys. Rev. B 78, 140503 (2008).
* (17) J. Zhao et al., Phys. Rev. B 78, 132504 (2008).
* (18) H. Maeter et al., Phys. Rev. B 80, 094524 (2009).
* (19) M. A. McGuire et al., New J. Phys. 11, 025011 (2009).
* (20) G. Prando et al., Phys. Rev. B 81, 100508 (2010).
* (21) G. Lang et al., Phys. Rev. Lett. 104, 097001 (2010).
* (22) C. Rotundu et al., Phys. Rev. B 80, 144517 (2009).
* (23) J. Rodríguez-Carvajal, Physica B 192, 55 (1993).
* (24) W. Tian et al., Phys. Rev. B 82, 060514 (2010).
|
arxiv-papers
| 2011-05-31T22:36:54 |
2024-09-04T02:49:19.248537
|
{
"license": "Public Domain",
"authors": "E. A. Goremychkin, R. Osborn, C. H. Wang, M. D. Lumsden, M. A.\n McGuire, A. S. Sefat, B. C. Sales, D. Mandrus, H. M. R{\\o}nnow, Y. Su, and A.\n D. Christianson",
"submitter": "Ray Osborn",
"url": "https://arxiv.org/abs/1106.0047"
}
|
1106.0103
|
# Branching Ratio and CP Asymmetry of $B_{s}\to
K^{*}_{0}(1430)\rho(\omega,\phi)$ Decays in the PQCD Approach
Zhi-Qing Zhang 111Electronic address: zhangzhiqing@haut.edu.cn Department of
Physics, Henan University of Technology, Zhengzhou, Henan 450052, P.R.China
###### Abstract
In the two-quark model supposition for $K_{0}^{*}(1430)$, which can be viewed
as either the first excited state (scenario I) or the lowest lying state
(scenario II), the branching ratios and the direct CP-violating asymmetries
for decays $\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\phi,K^{*0}_{0}(1430)\omega,K^{*0}_{0}(1430)\rho^{0},K^{*+}_{0}(1430)\rho^{-}$
are studied by employing the perturbative QCD factorization approach. We find
the following results: (a) Enhanced by the color allowed tree amplitude with
large Wilson coefficients $a_{1}=C_{2}+C_{1}/3$, the branching ratio of
$\bar{B}_{s}^{0}\to K^{*+}_{0}(1430)\rho^{-}$ is much larger than those of the
other three decays and arrives at $(3.4^{+0.8}_{-0.7})\times 10^{-5}$ in
scenario I, even $10^{-4}$ order in scenario II, and its direct CP violating
asymmetry is the smallest, around $10\%$, so this channel might be measurable
in the current LHC-b experiments, where a large number (about $10^{12}$) of
$B$ mesons will be produced per year. This high statistics will make the
measurement possible. (b) For the decay modes $\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\omega,K^{*0}_{0}(1430)\rho^{0}$, their direct CP-violating
asymmetries are large, but it might be difficult to measure them, because
their branching ratios are small and less than (or near) $10^{-6}$ in both
scenarios. For example, in scenario I, these values are ${\cal
B}(\bar{B}_{s}^{0}\to K^{*}_{0}(1430)\omega)=(8.2^{+1.8}_{-1.7})\times
10^{-7},{\cal B}(\bar{B}_{s}^{0}\to
K^{*}_{0}(1430)\rho^{0})=(9.9^{+2.1}_{-2.0})\times 10^{-7},{\cal
A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\omega)=-24.1^{+2.8}_{-2.5},{\cal
A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\rho^{0})=26.6^{+2.5}_{-2.5}.$ (c) For the decay
$\bar{B}^{0}_{s}\to K^{*}_{0}(1430)\phi$, the predicted branching ratios are
also small and a few times $10^{-7}$ in both scenarios; there is no tree
contribution at the leading order, so its direct CP-violating asymmetry is
naturally zero.
###### pacs:
13.25.Hw, 12.38.Bx, 14.40.Nd
## I Introduction
Along with many scalar mesons found in experiments, more and more efforts have
been made to study the scalar meson spectrum theoretically nato ; jaffe ; jwei
; baru ; celenza ; stro ; close1 . Today, it is still a difficult but
interesting topic. Our most important task is to uncover the mysterious
structures of the scalar mesons. There are two typical schemes for their
classification nato ; jaffe . Scenario I: the nonet mesons below 1 GeV,
including $f_{0}(600),f_{0}(980),K^{*}_{0}(800)$, and $a_{0}(980)$, are
usually viewed as the lowest lying $q\bar{q}$ states, while the nonet ones
near 1.5 GeV, including $f_{0}(1370),f_{0}(1500)/f_{0}(1700),K^{*}_{0}(1430)$,
and $a_{0}(1450)$, are suggested as the first excited states. In scenario II,
the nonet mesons near 1.5 GeV are treated as $q\bar{q}$ ground states, while
the nonet mesons below 1 GeV are exotic states beyond the quark model, such as
four-quark bound states.
In order to uncover the inner structures of these scalar mesons, many
factorization approaches are also used to research the $B$ meson decay modes
with a final state scalar meson, such as the generalized factorization
approach GMM , QCD factorization approach CYf0K ; ccysp ; ccysv , and
perturbative QCD (PQCD) approach zqzhang1 ; zqzhang2 ; zqzhang3 ; zqzhang4 ;
zqzhang5 . On the experimental side, along with the running of the Large
Hadron Collider beauty (LHC-b) experiments, some of $B_{s}$ decays with a
scalar meson in the final state might be observed in the current lhc1 ; lhc2 .
In order to make precise measurements of rare decay rates and CP violating
observables in the $B$-meson systems, the LHC-b detector is designed to
exploit the large number of $b$-hadrons produced. LHC-b will produce up to
$10^{12}$ $b\bar{b}$ pairs per year $(10^{7}s)$. Furthermore, it can
reconstruct a $B$-decay vertex with very good resolution, which is essential
for studying the rapidly oscillating $B_{s}$ mesons. In a word, $B_{s}$ decays
with a scalar in the final state can also serve as an ideal platform to probe
the nature of these scalar mesons, so studies of these $B_{s}$ decay modes
will be needed in the next few years.
Here $K^{*}_{0}(1430)$ can be treated as a $q\bar{q}$ state in both scenario I
and scenario II, so it is easy to make quantitative predictions in the two-quark
model supposition; we therefore use the PQCD approach to calculate the
branching ratios and the CP-violating asymmetries for the decays
$\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\phi,K^{*0}_{0}(1430)\omega,K^{*0}_{0}(1430)\rho^{0},K^{*+}_{0}(1430)\rho^{-}$
in two scenarios. In the following, $K^{*}_{0}(1430)$ is denoted as
$K^{*}_{0}$ in some places for convenience. The layout of this paper is as
follows. In Sec. II, the decay constants and light-cone distribution
amplitudes of relevant mesons are introduced. In Sec. III, we then analyze
these decay channels using the PQCD approach. The numerical results and the
discussions are given in section IV. The conclusions are presented in the
final part.
## II decay constants and distribution amplitudes
In general, the $B_{s}$ meson is treated as a heavy-light system, and its
Lorentz structure can be written as grozin ; kawa
$\displaystyle\Phi_{B_{s}}=\frac{1}{\sqrt{2N_{c}}}(P/_{B_{s}}+M_{B_{s}})\gamma_{5}\phi_{B_{s}}(k_{1}).$
(1)
The contribution of $\bar{\phi}_{B_{s}}$ is numerically small caidianlv and
has been neglected. For the distribution amplitude $\phi_{B_{s}}(x,b)$ in
Eq.(1), we adopt the following model:
$\displaystyle\phi_{B_{s}}(x,b)=N_{B_{s}}x^{2}(1-x)^{2}\exp[-\frac{M^{2}_{B_{s}}x^{2}}{2\omega^{2}_{b_{s}}}-\frac{1}{2}(\omega_{b_{s}}b)^{2}],$
(2)
where $\omega_{b_{s}}$ is a free parameter; we take $\omega_{b_{s}}=0.5\pm
0.05$ GeV in the numerical calculations, and $N_{B_{s}}=63.67$ is the
normalization factor for $\omega_{b_{s}}=0.5$.
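A small sketch of the model (2) is given below (GeV units, with the $B_{s}$ mass taken as $\simeq 5.37$ GeV; the printed number is the integral $\int_{0}^{1}\phi_{B_{s}}(x,b=0)\,dx$ to which $N_{B_{s}}=63.67$ corresponds, an integral that in PQCD conventions is typically fixed by the $B_{s}$ decay constant, whose value is not quoted here).

```python
import numpy as np

def phi_Bs(x, b, omega_b=0.5, M_Bs=5.37, N_Bs=63.67):
    """Model B_s distribution amplitude of Eq. (2); x is the momentum fraction,
    b the transverse separation in GeV^-1, omega_b and M_Bs in GeV."""
    return N_Bs * x**2 * (1.0 - x) ** 2 * np.exp(
        -(M_Bs**2) * x**2 / (2.0 * omega_b**2) - 0.5 * (omega_b * b) ** 2
    )

x = np.linspace(0.0, 1.0, 4001)
print(np.sum(phi_Bs(x, b=0.0)) * (x[1] - x[0]))  # the normalization integral for omega_b = 0.5
```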
In the two-quark picture, the vector decay constant $f_{K^{*}_{0}}$ and the
scalar decay constant $\bar{f}_{K^{*}_{0}}$ for the scalar meson $K^{*}_{0}$
can be defined as
$\displaystyle\langle K^{*}_{0}(p)|\bar{q}_{2}\gamma_{\mu}q_{1}|0\rangle$
$\displaystyle=$ $\displaystyle f_{K^{*}_{0}}p_{\mu},$ (3)
$\displaystyle\langle
K^{*}_{0}(p)|\bar{q}_{2}q_{1}|0\rangle=m_{K^{*}_{0}}\bar{f}_{K^{*}_{0}},$ (4)
where $m_{K^{*}_{0}}(p)$ is the mass (momentum) of the scalar meson
$K^{*}_{0}$. The relation between $f_{K^{*}_{0}}$ and $\bar{f}_{K^{*}_{0}}$ is
$\displaystyle\frac{m_{{K^{*}_{0}}}}{m_{2}(\mu)-m_{1}(\mu)}f_{{K^{*}_{0}}}=\bar{f}_{{K^{*}_{0}}},$
(5)
where $m_{1,2}$ are the running current quark masses. For the scalar meson
$K^{*}_{0}(1430)$, $f_{K^{*}_{0}}$ takes a very small value once
$SU(3)$ symmetry breaking is taken into account. The light-cone distribution
amplitudes for the scalar meson $K^{*}_{0}(1430)$ can be written as
$\displaystyle\langle K^{*}_{0}(p)|\bar{q}_{1}(z)_{l}q_{2}(0)_{j}|0\rangle$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2N_{c}}}\int^{1}_{0}dx\;e^{ixp\cdot z}$ (6)
$\displaystyle\times\\{p\\!\\!\\!/\Phi_{K^{*}_{0}}(x)+m_{K^{*}_{0}}\Phi^{S}_{K^{*}_{0}}(x)+m_{K^{*}_{0}}(n\\!\\!\\!/_{+}n\\!\\!\\!/_{-}-1)\Phi^{T}_{K^{*}_{0}}(x)\\}_{jl}.\quad\quad$
Here $n_{+}$ and $n_{-}$ are lightlike vectors:
$n_{+}=(1,0,0_{T}),n_{-}=(0,1,0_{T})$, and $n_{+}$ is parallel with the moving
direction of the scalar meson. The normalization can be related to the decay
constants:
$\displaystyle\int^{1}_{0}dx\Phi_{K^{*}_{0}}(x)=\int^{1}_{0}dx\Phi^{T}_{K^{*}_{0}}(x)=0,\,\,\,\,\,\,\,\int^{1}_{0}dx\Phi^{S}_{K^{*}_{0}}(x)=\frac{\bar{f}_{K^{*}_{0}}}{2\sqrt{2N_{c}}}\;.$
(7)
The twist-2 light-cone distribution amplitude $\Phi_{K^{*}_{0}}$ can be
expanded in the Gegenbauer polynomials:
$\displaystyle\Phi_{K^{*}_{0}}(x,\mu)$ $\displaystyle=$
$\displaystyle\frac{\bar{f}_{K^{*}_{0}}(\mu)}{2\sqrt{2N_{c}}}6x(1-x)\left[B_{0}(\mu)+\sum_{m=1}^{\infty}B_{m}(\mu)C^{3/2}_{m}(2x-1)\right],$
(8)
where the decay constants and the Gegenbauer moments $B_{1},B_{3}$ of the
distribution amplitudes for $K^{*}_{0}(1430)$ have been calculated with QCD
sum rules [ccysp]. These values are all scale dependent and are specified below:
$\displaystyle{\rm scenario\ I:}\quad B_{1}$ $\displaystyle=$ $\displaystyle 0.58\pm
0.07,\quad B_{3}=-1.2\pm 0.08,\quad\bar{f}_{K^{*}_{0}}=-(300\pm 30)\ {\rm MeV},$ (9)
$\displaystyle{\rm scenario\ II:}\quad B_{1}$ $\displaystyle=$ $\displaystyle-0.57\pm
0.13,\quad B_{3}=-0.42\pm 0.22,\quad\bar{f}_{K^{*}_{0}}=(445\pm 50)\ {\rm MeV},\quad$ (10)
which are quoted at the scale $\mu=1$ GeV.
As for the twist-3 distribution amplitudes $\Phi_{K^{*}_{0}}^{S}$ and
$\Phi_{K^{*}_{0}}^{T}$, we adopt the asymptotic form:
$\displaystyle\Phi^{S}_{K^{*}_{0}}$ $\displaystyle=$
$\displaystyle\frac{1}{2\sqrt{2N_{c}}}\bar{f}_{K^{*}_{0}},\,\,\,\,\,\,\,\Phi_{K^{*}_{0}}^{T}=\frac{1}{2\sqrt{2N_{c}}}\bar{f}_{K^{*}_{0}}(1-2x).$
(11)
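The normalization conditions in Eq. (7) can be verified numerically for these model distribution amplitudes. The short sketch below uses the scenario-I moments of Eq. (9) and sets the $B_{0}$ term of Eq. (8) to zero (it is proportional to the small vector decay constant $f_{K^{*}_{0}}$ discussed above); this is an assumption of the illustration rather than a statement taken from the text.

```python
# Sketch: normalization checks of Eq. (7) for the K*_0(1430) distribution amplitudes,
# using the scenario-I moments of Eq. (9).  The B_0 term of Eq. (8) is set to zero here
# (it is proportional to the small vector decay constant f_{K*_0}); this is an
# assumption of the illustration, not a statement taken from the text.
import numpy as np
from scipy.integrate import quad

N_c = 3
B1, B3, fbar = 0.58, -1.2, -0.300                # scenario I, mu = 1 GeV
C1 = lambda t: 3.0 * t                            # Gegenbauer C^{3/2}_1
C3 = lambda t: 2.5 * t * (7.0 * t**2 - 3.0)       # Gegenbauer C^{3/2}_3

def Phi(x):       # twist-2 LCDA, Eq. (8) with B_0 = 0
    t = 2 * x - 1
    return fbar / (2 * np.sqrt(2 * N_c)) * 6 * x * (1 - x) * (B1 * C1(t) + B3 * C3(t))

def PhiT(x):      # twist-3 LCDA Phi^T of Eq. (11)
    return fbar / (2 * np.sqrt(2 * N_c)) * (1 - 2 * x)

print(quad(Phi, 0, 1)[0])             # ~0, as required by Eq. (7)
print(quad(PhiT, 0, 1)[0])            # ~0, as required by Eq. (7)
print(fbar / (2 * np.sqrt(2 * N_c)))  # the value of int_0^1 dx Phi^S in Eq. (7)
```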
The distribution amplitudes up to twist-3 of the vector mesons are
$\displaystyle\langle V(P,\epsilon^{*}_{L})|\bar{q}_{2\beta}(z)q_{1\alpha}(0)|0\rangle=\frac{1}{2N_{C}}\int^{1}_{0}dx\,e^{ixP\cdot z}[M_{V}\not\epsilon^{*}_{L}\Phi_{V}(x)+\not\epsilon^{*}_{L}\not P\,\Phi_{V}^{t}(x)+M_{V}\Phi^{s}_{V}(x)]_{\alpha\beta},\quad$
(12)
for longitudinal polarization. The distribution amplitudes can be parametrized
as
$\displaystyle\Phi_{V}(x)$ $\displaystyle=$
$\displaystyle\frac{2f_{V}}{\sqrt{2N_{C}}}[1+a^{\|}_{2}C^{\frac{3}{2}}_{2}(2x-1)],$
(13) $\displaystyle\Phi_{V}^{t}(x)$ $\displaystyle=$
$\displaystyle\frac{3f^{T}_{V}}{2\sqrt{2N_{C}}}(2x-1)^{2},\quad\Phi_{V}^{s}(x)=-\frac{3f^{T}_{V}}{2\sqrt{2N_{C}}}(2x-1),$
(14)
where the decay constant $f_{V}$ [yao] and the transverse decay constant
$f^{T}_{V}$ [pball] take the following values:
$\displaystyle f_{\rho}$ $\displaystyle=$ $\displaystyle 209\pm 2{\rm
MeV},f_{\omega}=195\pm 3{\rm MeV},f_{\phi}=231\pm 4{\rm MeV},$ (15)
$\displaystyle f^{T}_{\rho}$ $\displaystyle=$ $\displaystyle 165\pm 9{\rm
MeV},f^{T}_{\omega}=151\pm 9{\rm MeV},f^{T}_{\phi}=186\pm 9{\rm MeV}.$ (16)
Here the Gegenbauer polynomial is defined as
$C^{\frac{3}{2}}_{2}(t)=\frac{3}{2}(5t^{2}-1)$. For the Gegenbauer moments, we
quote the numerical results of [pball1]:
$\displaystyle a^{\|}_{2\rho}=a^{\|}_{2\omega}=0.15\pm
0.07,a^{\|}_{2\phi}=0.18\pm 0.08.$ (17)
## III The perturbative QCD calculation
Under the assumption of the two-quark model for the scalar meson $K^{*}_{0}$, the
decay amplitude for $\bar{B}^{0}_{s}\to VK^{*}_{0}$, where $V$ represents
$\rho,\omega,\phi$, can be written conceptually as the convolution
$\displaystyle{\cal A}(\bar{B}^{0}_{s}\to
VK^{*}_{0})\sim\int\\!\\!d^{4}k_{1}d^{4}k_{2}d^{4}k_{3}\
\mathrm{Tr}\left[C(t)\Phi_{B_{s}}(k_{1})\Phi_{V}(k_{2})\Phi_{K^{*}_{0}}(k_{3})H(k_{1},k_{2},k_{3},t)\right],$
(18)
where $k_{i}$’s are momenta of the antiquarks included in each meson, and
$\mathrm{Tr}$ denotes the trace over Dirac and color indices. $C(t)$ is the
Wilson coefficient which results from the radiative corrections at short
distance. In the above convolution, $C(t)$ includes the hard dynamics at
scales higher than the $M_{B_{s}}$ scale and describes the evolution of the local
four-quark operators from $m_{W}$ (the $W$ boson mass) down to the scale
$t\sim\mathcal{O}(\sqrt{\bar{\Lambda}M_{B_{s}}})$, where
$\bar{\Lambda}\equiv M_{B_{s}}-m_{b}$. The function $H(k_{1},k_{2},k_{3},t)$
describes the four-quark operator and the spectator quark connected by a hard
gluon whose $q^{2}$ is of the order of $\bar{\Lambda}M_{B_{s}}$, and thus includes
the $\mathcal{O}(\sqrt{\bar{\Lambda}M_{B_{s}}})$ hard dynamics. Therefore,
this hard part $H$ can be calculated perturbatively. The functions
$\Phi_{(V,K^{*}_{0})}$ are the wave functions of the vector meson $V$ and the
scalar meson $K^{*}_{0}$, respectively.
Since the $b$ quark is rather heavy, we consider the $B_{s}$ meson at rest for
simplicity. It is convenient to use the light-cone coordinate
$(p^{+},p^{-},{\bf p}_{T})$ to describe the meson’s momenta,
$\displaystyle p^{\pm}=\frac{1}{\sqrt{2}}(p^{0}\pm p^{3}),\quad{\rm
and}\quad{\bf p}_{T}=(p^{1},p^{2}).$ (19)
Using these coordinates, the $B_{s}$ meson and the two final state meson
momenta can be written as
$\displaystyle P_{B_{s}}=\frac{M_{B_{s}}}{\sqrt{2}}(1,1,{\bf 0}_{T}),\quad
P_{2}=\frac{M_{B_{s}}}{\sqrt{2}}(1-r^{2}_{K^{*}_{0}},r^{2}_{V},{\bf
0}_{T}),\quad
P_{3}=\frac{M_{B_{s}}}{\sqrt{2}}(r^{2}_{K^{*}_{0}},1-r^{2}_{V},{\bf 0}_{T}),$
(20)
respectively, where the ratio $r_{K^{*}_{0}(V)}=m_{K^{*}_{0}(V)}/M_{B_{s}}$,
and $m_{K^{*}_{0}(V)}$ is the scalar meson $K^{*}_{0}$ (the vector meson $V$)
mass. Putting the antiquark momenta in $B_{s}$, $V$, and $K^{*}_{0}$ mesons as
$k_{1}$, $k_{2}$, and $k_{3}$, respectively, we can choose
$\displaystyle k_{1}=(x_{1}P_{1}^{+},0,{\bf k}_{1T}),\quad
k_{2}=(x_{2}P_{2}^{+},0,{\bf k}_{2T}),\quad k_{3}=(0,x_{3}P_{3}^{-},{\bf
k}_{3T}).$ (21)
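A short numerical sketch of this kinematics follows (the meson masses below are illustrative values, not parameters fixed by the text): in the parameterization of Eq. (20), momentum conservation $P_{B_{s}}=P_{2}+P_{3}$ holds exactly, while the light-meson invariant masses are reproduced only up to ${\cal O}(r^{2})$ corrections.

```python
# Sketch of the kinematics in Eqs. (19)-(21).  The light-meson masses are
# illustrative values (roughly the K*_0(1430) and rho masses), not inputs of the paper.
import numpy as np

M_Bs, m_K, m_V = 5.37, 1.425, 0.775              # GeV
rK, rV = m_K / M_Bs, m_V / M_Bs

P_Bs = M_Bs / np.sqrt(2) * np.array([1.0, 1.0, 0.0])
P2   = M_Bs / np.sqrt(2) * np.array([1 - rK**2, rV**2, 0.0])
P3   = M_Bs / np.sqrt(2) * np.array([rK**2, 1 - rV**2, 0.0])

inv_mass_sq = lambda p: 2 * p[0] * p[1] - p[2]**2   # p^2 = 2 p^+ p^- - p_T^2
print(np.allclose(P_Bs, P2 + P3))                   # True: exact momentum conservation
print(inv_mass_sq(P2), m_V**2)                      # equal up to O(r^2) corrections
```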
For these considered decay channels, the integration over $k_{1}^{-}$,
$k_{2}^{-}$, and $k_{3}^{+}$ in Eq.(18) will lead to
$\displaystyle{\cal A}(B_{s}\to VK^{*}_{0})$ $\displaystyle\sim$
$\displaystyle\int\\!\\!dx_{1}dx_{2}dx_{3}b_{1}db_{1}b_{2}db_{2}b_{3}db_{3}$
(22)
$\displaystyle\cdot\mathrm{Tr}\left[C(t)\Phi_{B_{s}}(x_{1},b_{1})\Phi_{V}(x_{2},b_{2})\Phi_{K^{*}_{0}}(x_{3},b_{3})H(x_{i},b_{i},t)S_{t}(x_{i})\,e^{-S(t)}\right],\quad$
where $b_{i}$ is the conjugate space coordinate of $k_{iT}$, and $t$ is the
largest energy scale in the function $H(x_{i},b_{i},t)$. In order to smear the
end-point singularity in $x_{i}$, the jet function $S_{t}(x)$ [li02], which
comes from the resummation of the double logarithms $\ln^{2}x_{i}$, is used.
The last term $e^{-S(t)}$ in Eq. (22) is the Sudakov form factor, which
effectively suppresses the soft dynamics [soft].
For the considered decays, the related weak effective Hamiltonian $H_{eff}$
can be written as [buras96]
$\displaystyle{\cal
H}_{eff}=\frac{G_{F}}{\sqrt{2}}\,\left[\sum_{p=u,c}V_{pb}V_{pd}^{*}\left(C_{1}(\mu)O_{1}^{p}(\mu)+C_{2}(\mu)O_{2}^{p}(\mu)\right)-V_{tb}V_{td}^{*}\sum_{i=3}^{10}C_{i}(\mu)\,O_{i}(\mu)\right].$
(23)
Here the Fermi constant is $G_{F}=1.16639\times 10^{-5}\ {\rm GeV}^{-2}$, and the
functions $O_{i}$ $(i=1,\ldots,10)$ are the local four-quark operators. We specify
below the operators in ${\cal H}_{eff}$ for the $b\to d$ transition:
$\displaystyle\begin{array}[]{llllll}O_{1}^{u}&=&\bar{d}_{\alpha}\gamma^{\mu}Lu_{\beta}\cdot\bar{u}_{\beta}\gamma_{\mu}Lb_{\alpha}\
,&O_{2}^{u}&=&\bar{d}_{\alpha}\gamma^{\mu}Lu_{\alpha}\cdot\bar{u}_{\beta}\gamma_{\mu}Lb_{\beta}\
,\\\
O_{3}&=&\bar{d}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\beta}^{\prime}\
,&O_{4}&=&\bar{d}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\alpha}^{\prime}\
,\\\
O_{5}&=&\bar{d}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\beta}^{\prime}\
,&O_{6}&=&\bar{d}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\alpha}^{\prime}\
,\\\
O_{7}&=&\frac{3}{2}\bar{d}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\beta}^{\prime}\
,&O_{8}&=&\frac{3}{2}\bar{d}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\alpha}^{\prime}\
,\\\
O_{9}&=&\frac{3}{2}\bar{d}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\beta}^{\prime}\
,&O_{10}&=&\frac{3}{2}\bar{d}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\alpha}^{\prime}\
,\end{array}$ (29)
where $\alpha$ and $\beta$ are the $SU(3)$ color indices; $L$ and $R$ are the
left- and right-handed projection operators with $L=(1-\gamma_{5})$,
$R=(1+\gamma_{5})$. The sum over $q^{\prime}$ runs over the quark fields that
are active at the scale $\mu=O(m_{b})$, i.e.,
$q^{\prime}\in\{u,d,s,c,b\}$.
Figure 1: Diagrams contributing to the decay
$\bar{B}_{s}^{0}\to\rho^{0}K^{*0}_{0}(1430)$ .
In Fig. 1, we give the leading order Feynman diagrams for the channel
$\bar{B}_{s}^{0}\to\rho^{0}K^{*0}_{0}(1430)$ as an example. The Feynman
diagrams for the other decays are similar and are not shown. The analytic formulas
for each of the considered decays are similar to those of $B\to f_{0}(980)K^{*}$
[zqzhang4] and $B\to K^{*}_{0}(1430)\rho(\omega)$ [zqzhang5]; we need only
replace the corresponding wave functions, Wilson coefficients, and parameters,
so we do not reproduce these formulas here.
Combining the contributions from different diagrams, the total decay
amplitudes for these decays can be written as
$\displaystyle\sqrt{2}{\cal M}(K^{*0}_{0}\rho^{0})$ $\displaystyle=$
$\displaystyle\xi_{u}\left[M_{eK^{*}_{0}}C_{2}+F_{eK^{*}_{0}}a_{2}\right]-\xi_{t}\left[F_{eK^{*}_{0}}\left(-a_{4}+\frac{1}{2}(3C_{7}+C_{8})+\frac{5}{3}C_{9}+C_{10}\right)\right.$
(30)
$\displaystyle\left.+M_{eK^{*}_{0}}(-\frac{C_{3}}{3}+\frac{C_{9}}{6}+\frac{3C_{10}}{2})-(M^{P1}_{eK^{*}_{0}}+M^{P1}_{aK^{*}_{0}})(C_{5}-\frac{C_{7}}{2})+M^{P2}_{eK^{*}_{0}}\frac{3C_{8}}{2}\right.$
$\displaystyle\left.-M_{aK^{*}_{0}}(C_{3}-\frac{1}{2}C_{9})-F_{aK^{*}_{0}}(a_{4}-\frac{1}{2}a_{10})-F^{P2}_{aK^{*}_{0}}(a_{6}-\frac{1}{2}a_{8})\right],$
$\displaystyle\sqrt{2}{\cal M}(K^{*0}_{0}\omega)$ $\displaystyle=$
$\displaystyle\xi_{u}\left[M_{eK^{*}_{0}}C_{2}+F_{eK^{*}_{0}}a_{2}\right]-\xi_{t}\left[F_{eK^{*}_{0}}\left(\frac{7C_{3}}{3}+\frac{5C_{4}}{3}+2a_{5}+\frac{a_{7}}{2}+\frac{C_{9}}{3}-\frac{C_{10}}{3}\right)\right.$
(31)
$\displaystyle\left.+M_{eK^{*}_{0}}(\frac{C_{3}}{3}+2C_{4}-\frac{C_{9}}{6}+\frac{C_{10}}{2})+(M^{P1}_{eK^{*}_{0}}+M^{P1}_{aK^{*}_{0}})(C_{5}-\frac{C_{7}}{2})\right.$
$\displaystyle\left.+M^{P2}_{eK^{*}_{0}}(2C_{6}+\frac{C_{8}}{2})+M_{aK^{*}_{0}}(C_{3}-\frac{1}{2}C_{9})+F_{aK^{*}_{0}}(a_{4}-\frac{1}{2}a_{10})\right.$
$\displaystyle\left.+F^{P2}_{aK^{*}_{0}}(a_{6}-\frac{1}{2}a_{8})\right],$
$\displaystyle{\cal M}(K^{*+}_{0}\rho^{-})$ $\displaystyle=$
$\displaystyle\xi_{u}\left[M_{eK^{*}_{0}}C_{1}+F_{eK^{*}_{0}}a_{1}\right]-\xi_{t}\left[F_{eK^{*}_{0}}\left(a_{4}+a_{10}\right)+M_{eK^{*}_{0}}(C_{3}+C_{9})\right.$
(32)
$\displaystyle\left.+M^{P1}_{eK^{*}_{0}}(C_{5}+C_{7})+M_{aK^{*}_{0}}(C_{3}-\frac{1}{2}C_{9})+M^{P1}_{aK^{*}_{0}}(C_{5}-\frac{1}{2}C_{7})\right.$
$\displaystyle\left.+F_{aK^{*}_{0}}(a_{4}-\frac{1}{2}a_{10})+F^{P2}_{aK^{*}_{0}}(a_{6}-\frac{1}{2}a_{8})\right],$
$\displaystyle{\cal M}(K^{*0}_{0}\phi)$ $\displaystyle=$
$\displaystyle-\xi_{t}\left[F^{P2}_{e\phi}(a_{6}-\frac{a_{8}}{2})+M_{e\phi}(C_{3}-\frac{C_{9}}{2})+(M^{P1}_{e\phi}+M^{P1}_{a\phi})(C_{5}-\frac{C_{7}}{2})\right.$
(33)
$\displaystyle\left.+M_{a\phi}(C_{3}-\frac{1}{2}C_{9})+F_{a\phi}(a_{4}-\frac{1}{2}a_{10})+F^{P2}_{a\phi}(a_{6}-\frac{1}{2}a_{8})\right.$
$\displaystyle\left.+F_{eK^{*}_{0}}\left(a_{3}+a_{5}-\frac{1}{2}a_{7}-\frac{1}{2}a_{9}\right)+M_{eK^{*}_{0}}(C_{4}-\frac{1}{2}C_{10})\right.$
$\displaystyle\left.+M^{P2}_{eK^{*}_{0}}(C_{6}-\frac{1}{2}C_{8})\right].$
The combinations of the Wilson coefficients are defined as usual [zjxiao]:
$\displaystyle a_{1}(\mu)$ $\displaystyle=$ $\displaystyle
C_{2}(\mu)+\frac{C_{1}(\mu)}{3},\quad
a_{2}(\mu)=C_{1}(\mu)+\frac{C_{2}(\mu)}{3},$ $\displaystyle a_{i}(\mu)$
$\displaystyle=$ $\displaystyle C_{i}(\mu)+\frac{C_{i+1}(\mu)}{3},\quad
i=3,5,7,9,$ $\displaystyle a_{i}(\mu)$ $\displaystyle=$ $\displaystyle
C_{i}(\mu)+\frac{C_{i-1}(\mu)}{3},\quad i=4,6,8,10.$ (34)
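For completeness, Eq. (34) is straightforward to implement; the sketch below uses placeholder Wilson coefficients purely for illustration (the paper does not list the $C_{i}(\mu)$ values it uses).

```python
# Sketch of Eq. (34).  The Wilson coefficients below are placeholders for illustration
# only; the paper does not list the C_i(mu) values it uses.
def effective_coefficients(C):
    """C is a dict mapping i = 1..10 to C_i(mu); returns the a_i(mu) of Eq. (34)."""
    a = {1: C[2] + C[1] / 3.0, 2: C[1] + C[2] / 3.0}
    for i in (3, 5, 7, 9):
        a[i] = C[i] + C[i + 1] / 3.0
    for i in (4, 6, 8, 10):
        a[i] = C[i] + C[i - 1] / 3.0
    return a

C = {i: 0.0 for i in range(1, 11)}
C[1], C[2] = -0.27, 1.12          # hypothetical tree coefficients, for illustration only
print(effective_coefficients(C)[1], effective_coefficients(C)[2])
```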
## IV Numerical results and discussions
We use the following input parameters in the numerical calculations [pdg08]:
$\displaystyle f_{B_{s}}$ $\displaystyle=$ $\displaystyle
230\ {\rm MeV},\quad M_{B_{s}}=5.37\ {\rm GeV},\quad M_{W}=80.41\ {\rm GeV},$ (35) $\displaystyle V_{ub}$
$\displaystyle=$ $\displaystyle|V_{ub}|e^{-i\gamma}=3.93\times
10^{-3}e^{-i68^{\circ}},\quad V_{ud}=0.974,$ (36) $\displaystyle V_{td}$
$\displaystyle=$ $\displaystyle|V_{td}|e^{-i\beta}=8.1\times
10^{-3}e^{-i21.6^{\circ}},\quad V_{tb}=1.0,$ (37) $\displaystyle\alpha$
$\displaystyle=$ $\displaystyle 100^{\circ}\pm
20^{\circ},\quad\tau_{B_{s}}=1.470\times 10^{-12}\ {\rm s}.$ (38)
Using these wave functions and the values of the relevant input parameters, we
obtain the numerical values of the $\bar{B}^{0}_{s}\to\phi$ and
$\bar{B}^{0}_{s}\to K^{*}_{0}(1430)$ form factors at zero momentum transfer:
$\displaystyle A^{\bar{B}^{0}_{s}\to\phi}_{0}(q^{2}=0)$ $\displaystyle=$
$\displaystyle 0.29^{+0.05+0.01}_{-0.04-0.01},$ (39) $\displaystyle
F^{\bar{B}^{0}_{s}\to K^{*}_{0}}_{0}(q^{2}=0)$ $\displaystyle=$
$\displaystyle-0.30^{+0.03+0.01+0.01}_{-0.03-0.01-0.01},\quad\mbox{ scenario
I},$ (40) $\displaystyle F^{\bar{B}^{0}_{s}\to K^{*}_{0}}_{0}(q^{2}=0)$
$\displaystyle=$ $\displaystyle
0.56^{+0.05+0.03+0.04}_{-0.07-0.04-0.05},\quad\;\;\;\mbox{ scenario II},$ (41)
where the uncertainties are from $\omega_{b_{s}}=0.5\pm 0.05$ GeV of the $B_{s}$
meson and the Gegenbauer moment $a^{\|}_{2\phi}=0.18\pm 0.08$ of the vector meson
$\phi$ for $A^{\bar{B}^{0}_{s}\to\phi}_{0}$, and from the decay constant and the
Gegenbauer moments $B_{1}$ and $B_{3}$ of the scalar meson $K^{*}_{0}$ for
$F^{\bar{B}^{0}_{s}\to K^{*}_{0}}_{0}$. The $\bar{B}_{s}\to\phi$ transition
form factor is about $0.30$, which is favored by many model
calculations [wuyl, lucd, chenghy], while a larger value
$A^{\bar{B}_{s}\to\phi}_{0}=0.474$ is obtained by the light-cone sum-rule
method [pball1]. The discrepancy can be clarified by the current LHC-b
experiments. As for the form factors $F^{\bar{B}^{0}_{s}\to K^{*}_{0}}_{0}$ in
the two scenarios, they agree well with those given in [lirh].
In the $B_{s}$-rest frame, the decay rates of $\bar{B}^{0}_{s}\to
K^{*}_{0}(1430)\rho(\omega,\phi)$ can be written as
$\displaystyle\Gamma=\frac{G_{F}^{2}}{32\pi m_{B_{s}}}|{\cal
M}|^{2}(1-r^{2}_{K^{*}_{0}}),$ (42)
where ${\cal M}$ is the total decay amplitude of each considered decay and
$r_{K^{*}_{0}}$ is the mass ratio, both of which have been given in Sec. III.
The ${\cal M}$ can be rewritten as
$\displaystyle{\cal
M}=V_{ub}V^{*}_{ud}T-V_{tb}V^{*}_{td}P=V_{ub}V^{*}_{ud}T\left[1+ze^{i(\alpha+\delta)}\right],$
(43)
where $\alpha$ is the Cabibbo-Kobayashi-Maskawa weak phase angle, and $\delta$
is the relative strong phase between the tree and the penguin amplitudes,
which are denoted as "T" and "P", respectively. The term $z$ describes the
ratio of the penguin to tree contributions and is defined as
$\displaystyle
z=\left|\frac{V_{tb}V^{*}_{td}}{V_{ub}V^{*}_{ud}}\right|\left|\frac{P}{T}\right|.$
(44)
From Eq. (43), it is easy to write the decay amplitude $\overline{\cal M}$ for the
corresponding CP-conjugate decay mode, so the CP-averaged branching ratio for
each considered decay is defined as
$\displaystyle{\cal B}=(|{\cal M}|^{2}+|\overline{\cal
M}|^{2})/2=|V_{ub}V^{*}_{ud}T|^{2}\left[1+2z\cos\alpha\cos\delta+z^{2}\right].$
(45)
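Equations (44) and (45) can be evaluated directly once $|P/T|$, $\alpha$, and $\delta$ are specified; a minimal sketch is given below. The CKM moduli are taken from Eqs. (36)-(37), while the value of $|P/T|$ in the example call is hypothetical.

```python
# Sketch of Eqs. (44)-(45): the penguin-to-tree ratio z and the CP-averaged factor
# 1 + 2 z cos(alpha) cos(delta) + z^2.  CKM moduli are taken from Eqs. (36)-(37);
# the |P/T| and delta used in the example call are hypothetical.
import numpy as np

Vub, Vud, Vtd, Vtb = 3.93e-3, 0.974, 8.1e-3, 1.0

def z_ratio(P_over_T_abs):
    return abs(Vtb * Vtd / (Vub * Vud)) * P_over_T_abs        # Eq. (44)

def cp_averaged_factor(z, alpha, delta):
    return 1 + 2 * z * np.cos(alpha) * np.cos(delta) + z**2    # bracket of Eq. (45)

z = z_ratio(0.07)                                              # hypothetical |P/T|
print(z, cp_averaged_factor(z, np.radians(100), np.radians(60)))
```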
Figure 2: The dependence of the branching ratios for $\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\omega$ (solid curve), $\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\rho^{0}$ (dashed curve) on the Cabibbo-Kobayashi-Maskawa
angle $\alpha$. The left (right) panel is plotted in scenario I (II).
Using the input parameters and the wave functions as specified in this section
and Sec. II, we can calculate the branching ratios of the considered modes
$\displaystyle{\cal B}(\bar{B}_{s}^{0}\to K^{*0}_{0}(1430)\phi)$
$\displaystyle=$ $\displaystyle(2.9^{+0.6+0.2+0.6}_{-0.5-0.1-0.5})\times
10^{-7},\mbox{ scenario I},$ (46) $\displaystyle{\cal B}(\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\omega)$ $\displaystyle=$
$\displaystyle(8.2^{+1.7+0.0+0.6}_{-1.6-0.1-0.6})\times 10^{-7},\mbox{
scenario I},$ (47) $\displaystyle{\cal B}(\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\rho^{0})$ $\displaystyle=$
$\displaystyle(9.9^{+2.0+0.0+0.7}_{-1.9-0.1-0.7})\times 10^{-7},\mbox{
scenario I},$ (48) $\displaystyle{\cal B}(\bar{B}_{s}^{0}\to
K^{*+}_{0}(1430)\rho^{-})$ $\displaystyle=$
$\displaystyle(3.4^{+0.7+0.3+0.3}_{-0.6-0.2-0.2})\times 10^{-5},\mbox{
scenario I},$ (49) $\displaystyle{\cal B}(\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\phi)$ $\displaystyle=$
$\displaystyle(9.5^{+2.5+2.8+3.1}_{-1.7-1.9-1.4})\times 10^{-7},\mbox{
scenario II},$ (50) $\displaystyle{\cal B}(\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\omega)$ $\displaystyle=$
$\displaystyle(8.6^{+2.1+0.6+2.2}_{-1.8-0.5-1.5})\times 10^{-7},\mbox{
scenario II},$ (51) $\displaystyle{\cal B}(\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\rho^{0})$ $\displaystyle=$
$\displaystyle(9.6^{+2.2+0.4+2.0}_{-2.0-0.4-2.1})\times 10^{-7},\mbox{
scenario II},$ (52) $\displaystyle{\cal B}(\bar{B}_{s}^{0}\to
K^{*+}_{0}(1430)\rho^{-})$ $\displaystyle=$
$\displaystyle(10.8^{+2.5+1.2+1.9}_{-2.3-1.1-1.7})\times 10^{-5},\mbox{
scenario II},$ (53)
where the uncertainties are mainly from the decay constant, the Gegenbauer
moments $B_{1}$ and $B_{3}$ of the scalar meson $K^{*}_{0}$. From the results,
one can find that the branching ratios of $\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\phi,K^{*+}_{0}(1430)\rho^{-}$ in scenario II are about
$3.2\sim 3.3$ times larger than those in scenario I, while for the decays
$\bar{B}_{s}^{0}\to K^{*0}_{0}(1430)\omega(\rho^{0})$ the branching ratios in the
two scenarios are very close to each other. Among these four decay channels, the
branching ratio of $\bar{B}_{s}^{0}\to K^{*+}_{0}(1430)\rho^{-}$ is the largest.
This is not a surprise: one can recall that the channel
$\bar{B}_{s}^{0}\to K^{+}\rho^{-}$ also has a large branching ratio, about
$(2.45^{+1.52}_{-1.29})\times 10^{-5}$ as predicted by the QCD factorization
approach [Beneke] and about $(1.78^{+0.78}_{-0.59})\times 10^{-5}$ as predicted by
the PQCD approach [lucd0]. For the other three decays $\bar{B}_{s}^{0}\to
K^{*0}_{0}(1430)\phi,K^{*0}_{0}(1430)\omega(\rho^{0})$, the branching ratios are of
the same order as those of the decays $\bar{B}_{s}^{0}\to
K^{0}\phi,K^{0}\omega(\rho^{0})$, which are listed in Table I. One therefore
concludes that the branching ratios of the decays $\bar{B}_{s}^{0}\to
K^{*}_{0}(1430)V$ are not far from those of $\bar{B}_{s}^{0}\to KV$, where $V$
represents $\rho,\omega,\phi$. The same conclusion is also obtained in
Ref. [zqzhang1].
Table 1: Comparison of the branching ratios of $\bar{B}_{s}^{0}\to K^{0}\phi,K^{0}\omega,K^{0}\rho^{0},K^{+}\rho^{-}$ predicted in [Beneke] with those of $\bar{B}_{s}^{0}\to K^{*0}_{0}(1430)\phi,K^{*0}_{0}(1430)\omega,K^{*0}_{0}(1430)\rho^{0},K^{*+}_{0}(1430)\rho^{-}$ predicted in this work in scenario I.

Mode | Br($\times 10^{-6}$)
---|---
$\bar{B}_{s}^{0}\to K^{*0}_{0}(1430)\phi$ | $0.29^{+0.06+0.02+0.06}_{-0.05-0.01-0.05}$
$\bar{B}_{s}^{0}\to K^{0}\phi$ | $0.27^{+0.09+0.28+0.09+0.67}_{-0.08-0.14-0.06-0.18}$
$\bar{B}_{s}^{0}\to K^{*0}_{0}(1430)\omega$ | $0.82^{+0.17+0.00+0.06}_{-0.16-0.01-0.06}$
$\bar{B}_{s}^{0}\to K^{0}\omega$ | $0.51^{+0.20+0.15+0.68+0.40}_{-0.18-0.11-0.23-0.25}$
$\bar{B}_{s}^{0}\to K^{*0}_{0}(1430)\rho^{0}$ | $0.99^{+0.20+0.00+0.07}_{-0.19-0.01-0.07}$
$\bar{B}_{s}^{0}\to K^{0}\rho^{0}$ | $0.61^{+0.33+0.21+1.06+0.56}_{-0.26-0.15-0.38-0.36}$
$\bar{B}_{s}^{0}\to K^{*+}_{0}(1430)\rho^{-}$ | $34.0^{+7.0+3.0+3.0}_{-6.0-2.0-2.0}$
$\bar{B}_{s}^{0}\to K^{+}\rho^{-}$ | $24.5^{+11.9+9.2+1.8+1.6}_{-9.7-7.8-3.0-1.6}$
Figure 3: The dependence of the branching ratio for $\bar{B}^{0}_{s}\to K^{*+}_{0}(1430)\rho^{-}$ on the Cabibbo-Kobayashi-Maskawa angle $\alpha$. The left (right) panel is plotted in scenario I (II).

Table 2: Decay amplitudes for the decays $\bar{B}^{0}_{s}\to K^{*+}_{0}(1430)\rho^{-},K^{*0}_{0}(1430)\rho^{0}$ ($\times 10^{-2}\ \mbox{GeV}^{3}$).

Mode | $F^{T}_{eK^{*}_{0}}$ | $F_{eK^{*}_{0}}$ | $M^{T}_{eK^{*}_{0}}$ | $M_{eK^{*}_{0}}$ | $M_{aK^{*}_{0}}$ | $F_{aK^{*}_{0}}$
---|---|---|---|---|---|---
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\rho^{0}$ (SI) | $-22.4$ | $4.9$ | $-11.7+8.2i$ | $-0.15+0.24i$ | $-0.14+0.11i$ | $-4.1-2.9i$
$\bar{B}^{0}_{s}\to K^{*+}_{0}(1430)\rho^{-}$ (SI) | $203$ | $-8.6$ | $6.5-5.2i$ | $0.08-0.28i$ | $0.16-0.09i$ | $5.3+4.4i$
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\rho^{0}$ (SII) | $27.6$ | $-7.8$ | $0.7+5.9i$ | $0.36-0.20i$ | $0.40+0.20i$ | $3.1+8.1i$
$\bar{B}^{0}_{s}\to K^{*+}_{0}(1430)\rho^{-}$ (SII) | $-371$ | $14.3$ | $-0.04-4.6i$ | $0.58+0.41i$ | $-0.58-0.28i$ | $-4.1-11.5i$
In Table II, we list the values of the factorizable and nonfactorizable
amplitudes from the emission and annihilation topology diagrams of the decays
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\rho^{0}$ and $\bar{B}^{0}_{s}\to
K^{*+}_{0}(1430)\rho^{-}$. Here $F_{e(a)K^{*}_{0}}$ and $M_{e(a)K^{*}_{0}}$ are the
factorizable and nonfactorizable $\rho$-meson emission (annihilation) contributions
from the penguin operators, respectively, and the superscript $T$ denotes the
contributions from the tree operators. For the decay
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\rho^{0}$, there are no diagrams obtained
by exchanging the positions of $K^{*0}_{0}$ and $\rho^{0}$ in Fig. 1, so there
are no contributions from $F_{e(a)\rho}$ and $M_{e(a)\rho}$; the same holds for
the decay $\bar{B}^{0}_{s}\to K^{*+}_{0}(1430)\rho^{-}$. From Table II, one
can see that, because of the large Wilson coefficient $a_{1}=C_{2}+C_{1}/3$,
the tree-dominated channel $\bar{B}^{0}_{s}\to K^{*+}_{0}(1430)\rho^{-}$
receives a much larger branching ratio in both scenarios than
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\rho^{0}$.
The dependence of the branching ratios for the decays $\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\rho^{0},K^{*0}_{0}(1430)\omega,K^{*+}_{0}(1430)\rho^{-}$ on
the Cabibbo-Kobayashi-Maskawa angle $\alpha$ is displayed in Figs. 2 and 3.
The branching ratios of the $K^{*0}_{0}(1430)\rho^{0}$ and
$K^{*+}_{0}(1430)\rho^{-}$ modes increase with $\alpha$, while that of the
$K^{*0}_{0}(1430)\omega$ mode decreases with $\alpha$. The values of
$\cos\delta$ [entering Eq. (45)] for the decay modes $\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\rho^{0},K^{*+}_{0}(1430)\rho^{-}$ are opposite in sign to
that of $\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\omega$, and as a result the
behavior of the branching ratios with the Cabibbo-Kobayashi-Maskawa angle
$\alpha$ for the former is very different from that of the latter. We also
find that the branching ratio of the decay $\bar{B}^{0}_{s}\to
K^{*+}_{0}(1430)\rho^{-}$ is insensitive to the variation of $\alpha$ in
scenario I. For the decay $\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\phi$, there are
only penguin operator contributions at leading order, so its branching ratio
does not depend on the angle $\alpha$ at this order.
Now, we turn to the evaluations of the direct CP-violating asymmetries of the
considered decays in the PQCD approach. The direct CP-violating asymmetry can
be defined as
$\displaystyle{\cal A}_{CP}^{dir}=\frac{|\overline{\cal M}|^{2}-|{\cal
M}|^{2}}{|{\cal M}|^{2}+|\overline{\cal
M}|^{2}}=\frac{2z\sin\alpha\sin\delta}{1+2z\cos\alpha\cos\delta+z^{2}}\;.$
(54)
Here the ratio $z$ and the strong phase $\delta$ are calculable in the PQCD
approach, so it is straightforward to obtain the numerical values of ${\cal A}_{CP}^{dir}$
(in units of $10^{-2}$) for the considered decays in the two scenarios, using
the input parameters listed above:
$\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\omega)=-24.1^{+0.0+2.7+0.6}_{-0.0-2.5-0.2},\mbox{ scenario
I},$ (55) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\rho^{0})=26.6^{+0.0+2.5+0.3}_{-0.0-2.5-0.5},\mbox{ scenario
I},$ (56) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*+}_{0}(1430)\rho^{-})=7.7^{+0.0+0.2+0.2}_{-0.0-0.3-0.2},\mbox{ scenario
I},$ (57) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\omega)=-86.7^{+0.1+7.1+1.3}_{-0.1-5.3-2.8},\mbox{ scenario
II},$ (58) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\rho^{0})=84.5^{+0.1+4.9+1.0}_{-0.1-6.3-3.8},\mbox{ scenario
II},$ (59) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}_{s}\to
K^{*+}_{0}(1430)\rho^{-})=12.6^{+0.0+0.2+0.8}_{-0.0-0.2-0.6},\mbox{ scenario
II},$ (60)
where the uncertainties are mainly from the decay constant, the Gegenbauer
moments $B_{1}$ and $B_{3}$ of the scalar meson $K^{*}_{0}$. Compared with the
branching ratios, the direct CP-violating asymmetries are sensitive to some
parameters to which the branching ratios are insensitive, for example the decay
constant of $K^{*}_{0}$. For the decays
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\omega,K^{*0}_{0}(1430)\rho^{0}$,
the direct CP-violating asymmetries in scenario II are more than three times
larger than those in scenario I. In both scenarios, the direct CP-violating
asymmetries of these two decay channels are close to each other in size, while
they are opposite in sign. The reason is the following: the mesons
$\rho^{0}$ and $\omega$ have very similar masses, decay constants, and distribution
amplitudes, differing only in the sign of the $d\bar{d}$ component of their quark
content, and this difference appears only in the penguin contributions. From our
numerical results, the tree contributions to these two channels
(denoted $T_{K^{*0}_{0}\rho^{0}}$ and $T_{K^{*0}_{0}\omega}$) are indeed
very close, while the penguin contributions (denoted
$P_{K^{*0}_{0}\rho^{0}}$ and $P_{K^{*0}_{0}\omega}$) are opposite in sign.
Furthermore, the real parts of $P_{K^{*0}_{0}\rho^{0}}$ and
$P_{K^{*0}_{0}\omega}$ differ considerably in size in each scenario.
$\displaystyle T_{K^{*0}_{0}\rho^{0}}$ $\displaystyle=$
$\displaystyle(-34.1+i8.2)\times
10^{-2},P_{K^{*0}_{0}\rho^{0}}=(0.49-i2.5)\times 10^{-2},$ (61) $\displaystyle
T_{K^{*0}_{0}\omega}$ $\displaystyle=$ $\displaystyle(-31.8+i7.6)\times
10^{-2},P_{K^{*0}_{0}\omega}=(-3.7+i2.8)\times 10^{-2},\mbox{ scenario I},$
(62) $\displaystyle T_{K^{*0}_{0}\rho^{0}}$ $\displaystyle=$
$\displaystyle(28.3+i5.9)\times
10^{-2},P_{K^{*0}_{0}\rho^{0}}=(-3.9+i8.1)\times 10^{-2},$ (63) $\displaystyle
T_{K^{*0}_{0}\omega}$ $\displaystyle=$ $\displaystyle(26.4+i5.5)\times
10^{-2},P_{K^{*0}_{0}\omega}=(8.1-i7.2)\times 10^{-2},\mbox{ scenario II}.$
(64)
These values explain why the two channels have CP-violating asymmetries of
similar size (and, for the same reason, similar branching ratios). Using the
above results, we can calculate $\sin\delta$ [entering Eq. (54)] in the two
scenarios:
$\displaystyle\sin\delta_{K^{*0}_{0}\rho^{0}}$ $\displaystyle=$ $\displaystyle
0.91,\sin\delta_{K^{*0}_{0}\omega}=-0.40,\mbox{ scenario I},$ (65)
$\displaystyle\sin\delta_{K^{*0}_{0}\rho^{0}}$ $\displaystyle=$ $\displaystyle
0.97,\sin\delta_{K^{*0}_{0}\omega}=-0.80,\mbox{ scenario II}.$ (66)
These values can explain why the CP-violating asymmetries of these two decays
have opposite signs.
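The statements above can be checked numerically. A minimal sketch, taking the strong phase as $\delta=\arg(P/T)$ (which reproduces the quoted $\sin\delta$ values) and using the scenario-I amplitudes of Eqs. (61)-(62) together with the CKM inputs of Eqs. (36)-(38), recovers both the signs and, approximately, the magnitudes of the direct CP asymmetries in Eqs. (55)-(56).

```python
# Sketch: numerical cross-check of Eqs. (54)-(56) and (65) in scenario I,
# taking delta = arg(P/T) with the amplitudes of Eqs. (61)-(62) and the
# CKM inputs of Eqs. (36)-(38).
import numpy as np

Vub, Vud, Vtd, Vtb = 3.93e-3, 0.974, 8.1e-3, 1.0
ckm = abs(Vtb * Vtd / (Vub * Vud))          # |V_tb V_td^* / (V_ub V_ud^*)|
alpha = np.radians(100)                      # central value of Eq. (38)

amps = {
    "K*0(1430) rho0":  ((-34.1 + 8.2j) * 1e-2, (0.49 - 2.5j) * 1e-2),
    "K*0(1430) omega": ((-31.8 + 7.6j) * 1e-2, (-3.7 + 2.8j) * 1e-2),
}

for mode, (T, P) in amps.items():
    delta = np.angle(P / T)                  # relative strong phase
    z = ckm * abs(P / T)                     # Eq. (44)
    acp = (2 * z * np.sin(alpha) * np.sin(delta)
           / (1 + 2 * z * np.cos(alpha) * np.cos(delta) + z**2))   # Eq. (54)
    print(mode, round(np.sin(delta), 2), round(acp, 3))
# sin(delta) ~ 0.91 and -0.40 [Eq. (65)]; A_CP ~ +0.26 and -0.24, close to Eqs. (56), (55)
```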
The direct CP-violating asymmetry of $\bar{B}^{0}_{s}\to
K^{*+}_{0}(1430)\rho^{-}$ is the smallest among these decays, about $10\%$, but
its branching ratio is the largest, about $3.4\times 10^{-5}$ in scenario I and
even of order $10^{-4}$ in scenario II. This channel might therefore be measured
easily in the LHC-b experiments.
Figure 4: The dependence of the direct CP asymmetries for $\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\omega$ (solid curve), $\bar{B}^{0}_{s}\to
K^{*+}_{0}(1430)\rho^{-}$ (dotted curve), $\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\rho^{0}$ (dashed curve) on the Cabibbo-Kobayashi-Maskawa
angle $\alpha$. The left (right) panel is plotted in scenario I (II)
From Figs. 4(a) and 4(b), one can see that although the direct CP asymmetries of
each decay differ considerably in size between the two scenarios, they show
similar trends as functions of the Cabibbo-Kobayashi-Maskawa angle $\alpha$. As
for the decay $\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\phi$, there is no tree
contribution at leading order, so the direct CP-violating asymmetry is naturally zero.
## V Conclusion
In this paper, we have calculated the branching ratios and the CP-violating
asymmetries of the decays $\bar{B}_{s}^{0}\to K^{*}_{0}(1430)\rho(\omega,\phi)$ in
the PQCD factorization approach. Using the decay constants and light-cone
distribution amplitudes derived from the QCD sum-rule method, we find the following:
* •
We predict the form factor
$A^{\bar{B}^{0}_{s}\to\phi}_{0}(q^{2}=0)=0.29^{+0.05+0.01}_{-0.04-0.01}$ for
$\omega_{b_{s}}=0.5\pm 0.05$ GeV and the Gegenbauer moment $a^{\|}_{2\phi}=0.18\pm
0.08$, which agrees well with the values calculated in many other approaches but
disagrees with the value $A^{\bar{B}^{0}_{s}\to\phi}_{0}=0.474$ obtained by
the light-cone sum-rule method. The discrepancy can be clarified by the LHC-b
experiments. The form factors $F^{\bar{B}^{0}_{s}\to K^{*}_{0}}_{0}(q^{2}=0)$ in
the two scenarios are given as
$\displaystyle F^{\bar{B}^{0}_{s}\to K^{*}_{0}}_{0}(q^{2}=0)$ $\displaystyle=$
$\displaystyle-0.30^{+0.03+0.01+0.01}_{-0.03-0.01-0.01},\quad\mbox{ scenario
I},$ (67) $\displaystyle F^{\bar{B}^{0}_{s}\to K^{*}_{0}}_{0}(q^{2}=0)$
$\displaystyle=$ $\displaystyle
0.56^{+0.05+0.03+0.04}_{-0.07-0.04-0.05},\quad\;\;\;\mbox{ scenario II},$ (68)
where the uncertainties are from the decay constant, the Gegenbauer moments
$B_{1}$ and $B_{3}$ of the scalar meson $K^{*}_{0}$.
* •
Because of the large Wilson coefficient $a_{1}=C_{2}+C_{1}/3$, the branching
ratio of $\bar{B}^{0}_{s}\to K^{*+}_{0}(1430)\rho^{-}$ is much larger than
those of the other three decays in both scenarios, reaching a few times
$10^{-5}$ in scenario I and even the $10^{-4}$ level in scenario II, while its
direct CP-violating asymmetry is the smallest one, around $10\%$. This channel
might be measured by the current LHC-b experiments.
* •
For the decays $\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\omega,K^{*0}_{0}(1430)\rho^{0}$, their direct CP-violating
asymmetries are large, but it might be difficult to measure them, because
their branching ratios are small and less than (or near) $10^{-6}$ in both
scenarios.
* •
The values of $\cos\delta$ for the decays $\bar{B}^{0}_{s}\to
K^{*0}_{0}(1430)\rho^{0},K^{*+}_{0}(1430)\rho^{-}$ are opposite in sign to
that for $\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\omega$; as a result, the
behavior of the branching ratios of the former with the Cabibbo-Kobayashi-Maskawa
angle $\alpha$ is very different from that of the latter. Because the values
of $\sin\delta$ are also opposite in sign, the direct CP-violating asymmetries of
the former have the opposite sign to that of the latter. Here $\delta$ is the
relative strong phase between the tree and the penguin amplitudes.
* •
The mesons $\rho^{0}$ and $\omega$ have very similar masses, decay constants,
and distribution amplitudes, differing only in the sign of the $d\bar{d}$
component of their quark content, and this difference appears only in the
penguin operators; therefore, the two tree-dominated decays
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\rho^{0}$ and
$\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\omega$ should have similar branching
ratios and CP-violating asymmetries.
* •
As for the decay $\bar{B}^{0}_{s}\to K^{*0}_{0}(1430)\phi$, although there are
large differences between the two scenarios, the predicted branching ratios
are small, a few times $10^{-7}$ in both scenarios. There is no tree
contribution at leading order, so the direct CP-violating asymmetry is
naturally zero.
## Acknowledgment
This work is partly supported by the National Natural Science Foundation of
China under Grant No. 11047158, and by the Foundation of Henan University of
Technology under Grant No. 150374. The author would like to thank Cai-Dian Lü
for helpful discussions.
## References
* (1) N. A. Tornqvist, Phys. Rev. Lett. 49, 624 (1982).
* (2) R.L. Jaffe, Phys. Rev. D 15, 267 (1977); Phys. Rev. D 15, 281 (1977); A. L. Kataev, Phys. Atom. Nucl. 68, 567 (2005), Yad. Fiz. 68, 597 (2005); A. Vijande, A. Valcarce, F. Fernandez and B. Silvestre-Brac, Phys. Rev. D 72, 034025 (2005).
* (3) J. Weinstein , N. Isgur , Phys. Rev. Lett. 48, 659 (1982); Phys. Rev. D 27, 588 (1983); 41, 2236 (1990); M. P. Locher, et al., Eur. Phys. J. C 4, 317 (1998).
* (4) V. Baru, et al., Phys. Lett. B 586, 53 (2004).
* (5) L. Celenza, et al., Phys. Rev. C 61 (2000) 035201.
* (6) M. Strohmeier-Presicek, et al., Phys. Rev. D 60 054010 (1999).
* (7) F. E. Close, A. Kirk, Phys. Lett. B 483 345 (2000).
* (8) A. K. Giri , B. Mawlong, R. Mohanta Phys. Rev. D 74, 114001 (2006).
* (9) H. Y. Cheng, K. C. Yang Phys. Rev. D 71, 054020 (2005).
* (10) H. Y. Cheng , C. K. Chua , K. C. Yang Phys. Rev. D 73, 014017 (2006).
* (11) H. Y. Cheng, C. K. Chua and K. C. Yang, Phys. Rev. D 77, 014034 (2008).
* (12) Z. Q. Zhang and Z.J. Xiao, Chin. Phys. C 33(07), 508 (2009).
* (13) Z. Q. Zhang and Z.J. Xiao, Chin. Phys. C 34(05), 528 (2010).
* (14) Z. Q. Zhang, J. Phys. G 37, 085012 (2010).
* (15) Z. Q. Zhang, J.D. Zhang, Eur. Phys. J. C 67, 163 (2010).
* (16) Z. Q. Zhang, Phys. Rev. D 82, 034036 (2010).
* (17) N. Brambilla, et al., (Quarkonium Working Group), CERN-2005-005, hep-ph/0412158; M. P. Altarelli and F. Teubert, Int. J. Mod.Phys. A 23, 5117 (2008).
* (18) M. Artuso, et al., ”B, D and K decays”, Report of Working Group 2 of the CERN workshop on Flavor in the Era of the LHC, Eur. Phys. J. C 57, 309 (2008).
* (19) A. G. Grozin and M. Neubert, Phys. Rev. D 55, 272 (1997); M. Beneke and T. Feldmann, Nucl. Phys. B 592, 3 (2001).
* (20) H. Kawamura, et al., Phys.Lett.B 523, 111 (2001); Mod. Phys. Lett. A 18, 799 (2003).
* (21) C. D. Lu, M. Z. Yang, Eur.Phys.J.C 28, 515 (2003).
* (22) Particle Data Group, W. M. Yao, et al., J. Phys. G 33, 1 (2006).
* (23) P. Ball, G. W. Jones and R. Zwicky, Phys. Rev. D 75, 054004 (2007).
* (24) P. Ball and R. Zwicky, Phys. Rev. D 71, 014029 (2005); P. Ball and R. Zwicky, JHEP 0604, 046 (2006); P. Ball and G. W. Jones, JHEP 0703, 069 (2007).
* (25) H. N. Li, Phys. Rev. D 66, 094010 (2002).
* (26) H.N. Li and B. Tseng, Phys. Rev. D 57, 443 (1998).
* (27) G. Buchalla , A. J. Buras , M. E. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996).
* (28) Z. J. Xiao, Z. Q. Zhang, X. Liu, L. B. Guo, Phys. Rev. D 78, 114001 (2008).
* (29) Particle Data Group, C. Amsler, et al., Phys. Lett. B 667, 1 (2008);
* (30) Y. L. Wu, M. Zhong and Y. B. Zuo, Int. J. Mod. Phys. A 21, 6125 (2006).
* (31) C. D. Lu, W. Wang and Z. T. Wei, Phys. Rev. D 76, 014013 (2007).
* (32) H. Y. Cheng, C. K. Chua, and C. W. Hwang, Phys. Rev. D 69, 074025 (2004).
* (33) R. H. Li, et al., Phys. Rev. D 79, 014013, (2009).
* (34) M. Beneke and M. Neubert, Nucl. Phys. B 675, 333 (2003).
* (35) A. Ali, et al., Phys. Rev. D 76, 074018 (2007).
|
arxiv-papers
| 2011-06-01T06:45:00 |
2024-09-04T02:49:19.254383
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhi-Qing Zhang",
"submitter": "Zhi-Qing Zhang",
"url": "https://arxiv.org/abs/1106.0103"
}
|
1106.0118
|
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-15"> <title>1st
International Workshop on Distributed Evolutionary Computation in Informal
Environments</title> </head>
<body> <h1>1st International Workshop on Distributed Evolutionary Computation
in Informal Environments - Proceedings </h1>
<p><a href='http://geneura.ugr.es/ iwdecie'>The first IWDECIE will take place
in June 5th</a>, in conjunction with <a href='http://cec2011.org'>the Congress
on Evolutionary Computation, the big EC</a>. This is the first edition, and we
had six submissions, of which five were accepted. Every extended abstract
received at least 2 reviews; these are the full-length versions of those
abstracts</a>.
<h2>List of accepted papers</h2>
LIST:1106.0190
<!--dl> <dt><b>Evolution of Things</b></dt> <dd>A.E. Eiben, N. Ferreira, M.C.
Schut and S. Kernbach<br /> </dd-->
LIST:1105.4971
<!-- <dt><b>Distributed Evolutionary Computation using REST </b></dt> <dd>Pedro
A. Castillo, J.J. Merelo, Antonio Mora, Gustavo Romero and Victor Rivas<br />
(paper REPORT-NO:IWDECIE/2011/3 ) </dd> -->
LIST:1105.5900
<!--dt><b> HydroCM: A Heterogeneous Model for a Heterogeneous Environment
</b></dt> <dd>Julián J. Domínguez and Enrique Alba<br /> (paper
REPORT-NO:IWDECIE/2011/6 ) </dd-->
LIST:1105.4978
<!--dt><b>SOAP vs REST: Comparing a master-slave GA implementation </b></dt>
<dd>Pedro A. Castillo, Jose L. Bernier, J.J. Merelo and Pablo
García-Sánchez <br /> (paper REPORT-NO:IWDECIE/2011/2 ) </dd-->
LIST:1105.6205
<!--dt><b>Cloud-based EAs: An algorithmic study</b></dt> <dd>JJ Merelo,
María Isabel García Arenas, Antonio Mora, Pedro Castillo, Gustavo
Romero López and Juan Luis Jiménez <br /> (paper REPORT-NO:IWDECIE/2011/4 ) </dd>
</dl>-->
</body> </html>
|
arxiv-papers
| 2011-06-01T08:29:28 |
2024-09-04T02:49:19.260034
|
{
"license": "Public Domain",
"authors": "Juan-J. Merelo, Maribel Garc\\'ia-Arenas, Juan-Luis J. Laredo,\n Francisco Fern\\'andez de la Vega (editors)",
"submitter": "Juan Juli\\'an Merelo-Guerv\\'os Pr.",
"url": "https://arxiv.org/abs/1106.0118"
}
|
1106.0195
|
# Scaling theory of continuum dislocation dynamics in three dimensions: Self-
organized fractal pattern formation
Yong S. Chen yc355@cornell.edu Woosong Choi Laboratory of Atomic and Solid
State Physics (LASSP), Clark Hall, Cornell University, Ithaca, New York
14853-2501, USA Stefanos Papanikolaou Department of Mechanical Engineering
and Materials Science & Department of Physics, Yale University, New Haven,
Connecticut, 06520-8286, USA Matthew Bierbaum James P. Sethna
sethna@lassp.cornell.edu Laboratory of Atomic and Solid State Physics (LASSP),
Clark Hall, Cornell University, Ithaca, New York 14853-2501, USA
###### Abstract
We focus on mesoscopic dislocation patterning via a continuum dislocation
dynamics theory (CDD) in three dimensions (3D). We study three distinct
physically motivated dynamics which consistently lead to fractal formation in
3D with rather similar morphologies, and therefore we suggest that this is a
general feature of the 3D collective behavior of geometrically necessary
dislocation (GND) ensembles. The striking self-similar features are measured
in terms of correlation functions of physical observables, such as the GND
density, the plastic distortion, and the crystalline orientation. Remarkably,
all these correlation functions exhibit spatial power-law behaviors, sharing a
single underlying universal critical exponent for each type of dynamics.
###### pacs:
61.72.Bb, 61.72.Lk, 05.45.Df, 05.45.Pq
## I Introduction
Dislocations in plastically deformed crystals, driven by their long-range
interactions, collectively evolve into complex heterogeneous structures where
dislocation-rich cell walls or boundaries surround dislocation-depleted cell
interiors. These have been observed both in single crystals Kawasaki and
Takeuchi (1980); Mughrabi et al. (1986); Schwink (1992) and polycrystals Ungár
et al. (1986) using transmission electron microscopy (TEM). The mesoscopic
cellular structures have been recognized as scale-free patterns through
fractal analysis of TEM micrographs Gil Sevillano et al. (1991); Gil Sevillano
(1993); Hähner et al. (1998); Zaiser et al. (1999). The complex collective
behavior of dislocations has been a challenge for understanding the underlying
physical mechanisms responsible for the development of emergent dislocation
morphologies.
Complex dislocation microstructures, as an emergent mesoscale phenomenon, have
been previously modeled using various theoretical and numerical approaches
Ananthakrishna (2007). Discrete dislocation dynamics (DDD) models have
provided insights into the dislocation pattern formations: parallel edge
dislocations in a two-dimensional system evolve into ‘matrix structures’
during single slip Bakó and Groma (1999), and ‘fractal and cell structures’
during multiple slip Bakó et al. (2007); Bakó and Hoffelner (2007); random
dislocations in a three-dimensional system self-organize themselves into
microstructures through junction formation, cross-slip, and short-range
interactions Madec et al. (2002); Gomez-Garcia et al. (2006). However, DDD
simulations are limited by the computational challenges on the relevant scales
of length and strain. Beyond these micro-scale descriptions, CDD has also been
used to study complex dislocation structures. Simplified reaction-diffusion
models have described persistent slip bands Walgraef and Aifantis (1985),
dislocation cellular structures during multiple slip Hähner (1996), and
dislocation vein structures Saxlová et al. (1997). Stochasticity in CDD models
Hähner et al. (1998); Bakó and Groma (1999); Groma and Bakó (2000) or in the
splittings and rotations of the macroscopic cells Pantleon (1996, 1998);
Sethna et al. (2003) have been suggested as an explanation for the formation
of organized dislocation structures. The source of the noise in these
stochastic theories is derived from either extrinsic disorder or short-length-
scale fluctuations.
Figure 1: Experimental and simulated dislocation cellular structures. In (a),
a typical TEM micrograph at a micron scale is taken from a Cu single crystal
after $[100]$ tensile deformation to a stress of $76.5$ MPa Hähner et al.
(1998); in (b), a simulated GND density plot is shown. Note the striking
morphological similarity between theory and experiment.
In a recent manuscript Chen et al. (2010), we analyzed the behavior of a
grossly simplified continuum dislocation model for plasticity Acharya (2001);
Roy and Acharya (2005); Acharya and Roy (2006); Limkumnerd and Sethna (2006);
Chen et al. (2010) – a physicist’s ‘spherical cow’ approximation designed to
explore the minimal ingredients necessary to explain key features of the
dynamics of deformation. Our simplified model ignores many features known to
be important for cell boundary morphology and evolution, including slip
systems and crystalline anisotropy, dislocation nucleation, lock formation and
entanglement, line tension, geometrically unnecessary forest dislocations,
etc. However, our model does encompass a realistic order parameter field (the
Nye-Kröner dislocation density tensor Nye (1953); Kröner (1958) embodying the
GNDs), which allows detailed comparisons of local rotations and deformations,
stress, and strain. It is not a realistic model of a real material, but it is
a model material with a physically sensible evolution law. Given these
simplifications, our model exhibited a surprisingly realistic evolution of
cellular structures. We analyzed these structures in two-dimensional
simulations (full three-dimensional rotations and deformations, but uniform
along the $z$-axis) using both the fractal box counting method Gil Sevillano
et al. (1991); Gil Sevillano (1993); Hähner et al. (1998); Zaiser et al.
(1999) and the single-length-scale scaling methods Hughes et al. (1997, 1998);
Mika and Dawson (1999); Hughes and Hansen (2001) used in previous theoretical
analyses of experimental data. Our model qualitatively reproduced the self-
similar, fractal patterns found in the former, and the scaling behavior of the
cell sizes and misorientations under strain found in the latter (power-law
refinement of the cell sizes, power-law increases in misorientations, and
scaling collapses of the distributions).
There are many features of real materials which are not explained by our
model. We do not observe distinctions between ‘geometrically necessary’ and
‘incidental’ boundaries, which appear experimentally to scale in different
ways. The fractal scaling observed in our model may well be cut off or
modified by entanglement, slip-system physics, quantization of Burgers vector
Kuhlmann-Wilsdorf (1985) or anisotropy – we cannot predict that real materials
should have fractal cellular structures; we only observe that our model
material does so naturally. Our spherically symmetric model obviously cannot
reproduce the dependence of morphological evolution on the axis of applied
strain (and hence the number of activated slip systems); indeed, the fractal
patterns observed in some experiments Hähner et al. (1998); Zaiser et al.
(1999) could be associated with the high-symmetry geometry they studied Wert
et al. (2007); Hansen et al. (2011). While many realistic features of
materials that we ignore may be important for cell-structure formation and
evolution, our model gives clear evidence that these features are not
essential to the formation of cellular structures when crystals undergo
plastic deformation.
In this longer manuscript, we provide an in-depth analysis of three plasticity
models. We show how they (and more traditional models) can be derived from the
structures of the broken symmetries and order parameters. We extend our
simulations to 3D, where the behavior is qualitatively similar with a few
important changes. Here we focus our attention on relaxation (rather than
strain), and on correlation functions (rather than fractal box counting or
cell sizes and misorientations).
Studying simplified ‘spherical cow’ models such as ours is justified if they
capture some key phenomenon, providing a perspective or explanation for the
emergent behavior. Under some circumstances, these simplified models can
capture the long-wavelength behavior precisely – the model is said to be in
the same universality class as the observed behavior (Sethna, 2006, Chapter
12). The Ising model for magnetism, two-fluid criticality, and order-disorder
transitions; self-organized critical models for magnetic Barkhausen noise
Sethna et al. (2001); Durin and Zapperi (2006) and dislocation avalanches
Zaiser (2006) all exhibit the same type of emergent scale-invariant behavior
as observed in some experimental cellular structures Hähner et al. (1998). For
all of these systems, ‘spherical cow’ models provide quantitative experimental
predictions of all phenomena on long length and time scales, up to overall
amplitudes, relevant perturbations, and corrections to scaling. Other
experimental cellular structures Hughes et al. (1998) have been interpreted in
terms of scaling functions with a characteristic scale, analogous to those
seen in crystalline grain growth. Crystalline grain growth also has a
‘universal’ description, albeit one which depends upon the entire anisotropic
interfacial energy and mobility Rutenberg and Vollmayr-Lee (1999) (and not
just temperature and field). [See, however, Kacher et al. (2011) for
experimental observations of bursty grain growth that are incompatible with
these theories.] We are cautiously optimistic that a model like ours (but with
metastability and crystalline anisotropy) could indeed describe the emergent complex
dislocation structures and dynamics in real materials. Indeed, recent work on
dislocation avalanches suggests that even the yield stress may be a universal
critical point Friedman et al. (2012).
Despite universality, we must justify and explain the form of the CDD model we
study. In Sec. II we take the continuum, ‘hydrodynamic’ limit approach,
traditionally originating with Landau in the study of systems near thermal
equilibrium (clearly not true of deformed metals!). All degrees of freedom are
assumed slaves to the order parameter, which is systematically constructed
from conserved quantities and broken symmetries Martin (1968); Forster (1975);
Hohenberg and Halperin (1977) – this is the fundamental tool used in the
physics community to derive the diffusion equation, the Navier-Stokes
equation, and continuum equations for superconductors, superfluids, liquid
crystals, etc. Rickman and Viñals (1997) utilized this general
approach to generate CDD theories, and in Sec. II we explain how our approach
differs from theirs.
In Sec. III we explore the validity of several approximations in our model,
starting in the engineering language of state variables. Here local
equilibration is not presumed; the state of the system depends in some
arbitrarily complex way on the history. Conserved quantities and broken
symmetries can be supplemented by internal state variables – statistically
stored dislocations (SSDs), yield surfaces, void fractions, etc., whose
evolution laws are judged according to their success in matching experimental
observations. (Eddy viscosity theories of turbulence are particularly successful
examples of this framework.) The ‘single-velocity’ models we use were
originally developed by Acharya et al. Acharya (2001); Roy and Acharya (2005),
and we discuss their microscopic derivation Acharya (2001) and the correction
term $L^{p}$ resulting from coarse-graining and multiple microscopic
velocities Acharya and Roy (2006). This term is usually modeled by the effects
of SSDs using crystal plasticity models. We analyze experiments to suggest
that ignoring SSDs may be justified on the length-scales needed in our
modeling. However, we acknowledge the near certainty that Acharya’s $L^{p}$
will be important – the true coarse-grained evolution laws will incorporate
multiple velocities. Our model should be viewed as a physically sensible model
material, not a rigorous continuum limit of a real material.
In this manuscript, we study fractal cell structures that form upon relaxation
from randomly deformed initial conditions (Sec. IV.2). One might be concerned
that relaxation of a randomly imposed high-stress dislocation structure (an
instantaneous hammer blow) could yield qualitatively different behavior from
realistic deformations, where the dislocation structures evolve continuously
as the deformation is imposed. In Sec. IV.2 we note that this alternative
‘slow hammering’ gives qualitatively the same fractal dislocation patterns.
Also, the resulting cellular structures are qualitatively very similar to
those we observe under subsequent uniform external strain Chen et al. (2010,
2012a), except that the relaxed structures are statistically isotropic. We
also find that cellular structures form immediately at small deformations.
Cellular structures in real materials emerge only after significant
deformation; presumably this feature is missing in our model because our model
has no impediment to cross-slip or multiple slip, and no entanglement of
dislocations. This initial relaxation should not be viewed as annealing or
dislocation creep. A proper description of annealing must include dislocation
line tension effects, since the driving force for annealing is the reduction
in total dislocation density – our dislocations annihilate when their Nye
Burgers vector density cancels under evolution, not because of the dislocation
core energies. Creep involves dislocation climb, which (for two of our three
models) is forbidden.
We focus here on correlation functions, rather than the methods used in
previous analyses of experiments. Correlation functions have a long, dignified
history in the study of systems exhibiting emergent scale invariance –
materials at continuous thermodynamic phase transitions Chaikin and Lubensky
(1995), fully developed turbulence L’vov (1991); Choi et al. (2012a); Salman
and Truskinovsky (2012), and crackling noise and self-organized criticality
Sethna et al. (2001). We study not only numerical simulations of these
correlations, but provide also extensive analysis of the relations between the
correlation functions for different physical quantities and their (possibly
universal) power-law exponents. The decomposition of the system into cells
(needed for the cell-size and misorientation distribution analyses Hughes et
al. (1997, 1998); Mika and Dawson (1999); Hughes and Hansen (2001)) demands
the introduction of an artificial cutoff misorientation angle, and demands
either laborious human work or rather sophisticated numerical algorithms Chen
et al. (2012b). These sections of the current manuscript may be viewed both as
a full characterization of the behavior of our simple model, and as an
illustration of how one can use correlation functions to analyze the complex
morphologies in more realistic models and in experiments providing 2D or 3D
real-space data. We believe that analyses that explicitly decompose structures
into cells remain important for systems with a single changing length-scale:
grain boundary coarsening should be studied both with correlation functions
and with explicit studies of grain shape and geometry evolution, and the same
should apply to cell-structure models and experiments that are not fractal.
But our model, without such an intermediate length-scale, is best analyzed
using correlation functions.
Our earlier work Chen et al. (2010) focused on 2D. How different are our
predictions in 3D? In this paper, we explore three different CDDs that display
similar dislocation fractal formation in 3D and confirm analytically that
correlation functions of the GND density, the plastic distortion, and the
crystalline orientation, all share a single underlying critical exponent, up
to exponent relations, dependent only on the type of dynamics. Unlike our 2D
simulations, where forbidding climb led to rather distinct critical exponents,
all three dynamics in 3D share quite similar scaling behaviors.
We begin our discussion in Sec. II.1 by defining the various dislocation,
distortion, and orientation fields. In Sec. II.2, we derive standard local
dynamical evolution laws using traditional condensed matter approaches,
starting from both the non-conserved plastic distortion and the conserved GND
densities as order parameters. Here, we also explain why these resulting
dynamical laws are inappropriate at the mesoscale. In Sec. II.3, we show how
to extend this approach by defining appropriate constitutive laws for the
dislocation flow velocity to build novel dynamics Landau and Lifshitz (1970).
There are three different dynamics we study: i) isotropic climb-and-glide
dynamics (CGD) Acharya (2001, 2003, 2004); Roy and Acharya (2005); Limkumnerd
and Sethna (2006), ii) isotropic glide-only dynamics, where we define the part
of the local dislocation density that participates in the local mobile
dislocation population, keeping the local volume conserved at all times (GOD-
MDP) Chen et al. (2010), iii) isotropic glide-only dynamics, where glide is
enforced by a local vacancy pressure due to a co-existing background of
vacancies that have an infinite energy cost (GOD-LVP) Acharya and Roy (2006).
All three types of dynamics present physically valid alternative approaches
for deriving a coarse-grained continuum model for GNDs. In Sec. III, we
explore the effects of coarse-graining, explain our rationale for ignoring
SSDs at the mesoscale, and discuss the single-velocity approximation we use.
In Sec. IV, we discuss the details of numerical simulations in both two and
three dimensions, and characterize the self-organized critical complex
patterns in terms of correlation functions of the order parameter fields. In
Sec. V, we provide a scaling theory, derive relations among the critical
exponents of these related correlation functions, study the correlation
function as a scaling function of coarse-graining length scale, and conclude
in Sec. VI.
In addition, we provide extensive details of our study in Appendices. In A, we
collect useful formulas from the literature relating different physical
quantities within traditional plasticity, while in B we show how functional
derivatives and the dissipation rate can be calculated using this formalism,
leading to our proof that our CDDs are strictly dissipative (lowering the
appropriate free energy with time). In C, we show the flexibility of our CDDs
by extending our dynamics: In particular, we show how to add vacancy diffusion
in the structure of CDD, and also, how external disorder can be in principle
incorporated (to be explored in future work). In D, we elaborate on numerical
details – we demonstrate the statistical convergence of our simulation method
and also we explain how we construct the Gaussian random initial conditions.
Finally, in E, we discuss the scaling properties of several correlation
functions in real and Fourier spaces, including the strain-history-dependent
plastic deformation and distortion fields, the stress-stress correlation
functions, the elastic energy density spectrum, and the stressful part of GND
density.
## II Continuum models
### II.1 Order parameter fields
#### II.1.1 Conserved order parameter field
A dislocation is a topological defect of a crystal lattice. In a continuum
theory, it can be described by a coarse-grained variable, the GND density
(also called the net dislocation density or the Nye-Kröner dislocation density)
[dislocations which cancel at the macroscale may be geometrically necessary at
the mesoscale; see Sec. III for our rationale for not including the effects of
SSDs, whose Burgers vectors cancel in the coarse-graining process], which can
be defined by the GND density tensor
$\rho(\mathbf{x})=\sum_{\alpha}(\hat{\mathbf{t}}^{\alpha}\\!\\!\cdot\hat{\mathbf{n}})\hat{\mathbf{n}}\otimes\mathbf{b}^{\alpha}\delta(\mathbf{x}-{\bm{\xi}}^{\alpha}),$
(1)
so
$\rho_{km}(\mathbf{x})=\sum_{\alpha}\hat{t}_{k}^{\alpha}b_{m}^{\alpha}\delta(\mathbf{x}-{\bm{\xi}}^{\alpha}),$
(2)
measuring the sum of the net flux of dislocations $\alpha$ located at
$\bm{\xi}$, tangent to $\hat{\mathbf{t}}$, with Burgers vector $\mathbf{b}$,
in the neighborhood of $\mathbf{x}$, through an infinitesimal plane with the
normal direction along $\hat{\mathbf{n}}$, as illustrated in Fig. 2. In the continuum,
the discrete sum of line singularities in Eqs. (1) and (2) is smeared into a
continuous (nine-component) field, just as the continuum density of a liquid
is at root a sum of point contributions from atomic nuclei.
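To make Eq. (2) concrete, the following sketch (not part of the original formulation; the function name, grid size, and midpoint binning are our own choices) accumulates the segment contributions $\hat{t}_{k}b_{m}$ of a discretized dislocation line onto a periodic grid, smearing the $\delta$ function over one grid cell:

```python
import numpy as np

def gnd_density(segments, burgers, L=1.0, N=32):
    """Accumulate the two-index GND density rho_km of Eq. (2) on an N^3 grid.

    segments : (M, 2, 3) array of line-segment endpoints (a discretized
               dislocation line alpha); burgers : (M, 3) Burgers vectors.
    Each segment contributes (length) * t_hat_k b_m to the cell containing
    its midpoint, i.e. the delta function is smeared over one grid cell.
    """
    rho = np.zeros((N, N, N, 3, 3))
    cell_vol = (L / N) ** 3
    for (r0, r1), b in zip(segments, burgers):
        t = r1 - r0
        length = np.linalg.norm(t)
        if length == 0.0:
            continue
        t_hat = t / length
        mid = 0.5 * (r0 + r1)
        idx = tuple(np.floor(mid / L * N).astype(int) % N)   # periodic box
        rho[idx] += length * np.outer(t_hat, b) / cell_vol   # t_k b_m of Eq. (2)
    return rho

# Example: one straight dislocation line along z with Burgers vector along x.
z = np.linspace(0.0, 1.0, 65)
pts = np.stack([0.5 * np.ones_like(z), 0.5 * np.ones_like(z), z], axis=1)
segs = np.stack([pts[:-1], pts[1:]], axis=1)
bs = np.tile([2.5e-4, 0.0, 0.0], (len(segs), 1))
rho = gnd_density(segs, bs)         # only the rho_{zx} component is populated
```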
Figure 2: (Color online) Representation of the crystalline line defect —
dislocation. Each curved line represents a dislocation line with the tangent
direction $\hat{\mathbf{t}}$, and the Burgers vector $\mathbf{b}$ which
characterizes the magnitude and direction of the distortion to the lattice.
The two-index GND density $\rho_{km}$ Nye (1953); Kröner (1958) (Eqs. 1 and 2)
is the net flux of the Burgers vector density $\mathbf{b}$ along
$\hat{\mathbf{e}}^{(m)}$ through an infinitesimal piece of a plane with normal
direction $\hat{\mathbf{n}}$ along $\hat{\mathbf{e}}^{(k)}$. The three-index
version $\varrho_{ijm}$ (Eqs. 3 and 4) is the flux density through the plane
along the axes $\hat{\mathbf{e}}^{(i)}$ and $\hat{\mathbf{e}}^{(j)}$, with the
unit bivector $\hat{E}=\hat{\mathbf{e}}^{(i)}\wedge\hat{\mathbf{e}}^{(j)}$.
Since the normal unit pseudo-vector $\hat{\mathbf{n}}$ is equivalent to an
antisymmetric unit bivector $\hat{E}$,
$\hat{E}_{ij}=\varepsilon_{ijk}\hat{n}_{k}$, we can reformulate the GND
density as a three-index tensor
$\varrho(\mathbf{x})=\sum_{\alpha}(\hat{\mathbf{t}}^{\alpha}\\!\\!\cdot\hat{\mathbf{n}})\hat{E}\otimes\mathbf{b}^{\alpha}\delta(\mathbf{x}-{\bm{\xi}}^{\alpha}),$
(3)
so
$\varrho_{ijm}(\mathbf{x})=\sum_{\alpha}(\hat{\mathbf{t}}^{\alpha}\\!\\!\cdot\hat{\mathbf{n}})\hat{E}_{ij}b_{m}^{\alpha}\delta(\mathbf{x}-{\bm{\xi}}^{\alpha}),$
(4)
measuring the same sum of the net flux of dislocations in the neighborhood of
$\mathbf{x}$, through the infinitesimal plane indicated by the unit bivector
$\hat{E}$. This three-index variant will be useful in Sec. II.3.2, where we
adapt the equations of Refs. Roy and Acharya, 2005 and Limkumnerd and Sethna,
2006 to forbid dislocation climb (GOD-MDP).
According to the definition of $\hat{E}$, we can find the relation between
$\rho$ and $\varrho$
$\varrho_{ijm}(\mathbf{x})=\sum_{\alpha}(\hat{t}_{l}^{\alpha}\hat{n}_{l})\varepsilon_{ijk}\hat{n}_{k}b_{m}^{\alpha}\delta(\mathbf{x}-{\bm{\xi}}^{\alpha})=\varepsilon_{ijk}\rho_{km}(\mathbf{x}).$
(5)
It should be noted here that dislocations cannot terminate within the crystal,
implying that
$\partial_{i}\rho_{ij}(\mathbf{x})=0,$ (6)
or
$\varepsilon_{ijk}\partial_{k}\varrho_{ijl}(\mathbf{x})=0.$ (7)
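A minimal numerical check (ours, not part of the derivation; the helper names and the $8^{3}$ grid are arbitrary) of the two-index/three-index relation Eq. (5) and of the no-termination condition Eq. (6), for a GND density built as the curl of a plastic distortion as in Eq. (9) below:

```python
import numpy as np

# Levi-Civita tensor
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

def two_to_three(rho):
    """varrho_ijm = eps_ijk rho_km, Eq. (5); rho has shape (..., 3, 3)."""
    return np.einsum('ijk,...km->...ijm', eps, rho)

def three_to_two(varrho):
    """Invert Eq. (5): rho_km = (1/2) eps_ijk varrho_ijm."""
    return 0.5 * np.einsum('ijk,...ijm->...km', eps, varrho)

# Round-trip check of Eq. (5)
rho = np.random.randn(8, 8, 8, 3, 3)
assert np.allclose(three_to_two(two_to_three(rho)), rho)

# Eq. (6): a GND density built as a curl of a plastic distortion (Eq. 9)
# is automatically divergence-free; check this in Fourier space.
k1 = 2 * np.pi * np.fft.fftfreq(8)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
kvec = np.stack([kx, ky, kz], axis=-1)                              # (8, 8, 8, 3)
beta_k = np.fft.fftn(np.random.randn(8, 8, 8, 3, 3), axes=(0, 1, 2))
rho_k = -1j * np.einsum('ilm,...l,...mj->...ij', eps, kvec, beta_k) # Eq. (9)
div = np.einsum('...i,...ij->...j', kvec, rho_k)                    # ~ d_i rho_ij
assert np.abs(div).max() < 1e-8
```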
Within plasticity theories, the gradient of the total displacement field
$\mathbf{u}$ represents the compatible total distortion field Kröner (1958,
1981) $\beta_{ij}=\partial_{i}u_{j}$, which is the sum of the elastic and the
plastic distortion fields Kröner (1958, 1981), $\beta=\beta^{\rm p}+\beta^{\rm
e}$. Due to the presence of dislocation lines, both $\beta^{\rm p}$ and
$\beta^{\rm e}$ are incompatible, characterized by the GND density $\rho$
$\rho_{ij}=\epsilon_{ilm}\partial_{l}\beta^{\rm e}_{mj},$ (8)
$\rho_{ij}=-\epsilon_{ilm}\partial_{l}\beta^{\rm p}_{mj}.$ (9)
The elastic distortion field $\beta^{\rm e}$ is the sum of its symmetric
strain and antisymmetric rotation fields,
$\beta^{\rm e}=\epsilon^{\rm e}+\omega^{\rm e},$ (10)
where we assume linear elasticity, ignoring the ‘geometric nonlinearity’ in
these tensors. Substituting the sum of two tensor fields into the
incompatibility relation Eq. (8) gives
$\rho_{ij}=\varepsilon_{ikl}\partial_{k}\omega_{lj}^{\rm
e}+\varepsilon_{ikl}\partial_{k}\epsilon_{lj}^{\rm e}.$ (11)
The elastic rotation tensor $\omega^{\rm e}$ can be rewritten as an axial
vector, the crystalline orientation vector $\mathbf{\Lambda}$
$\Lambda_{k}=\frac{1}{2}\varepsilon_{ijk}\omega^{\rm e}_{ij},$ (12)
or
$\omega^{\rm e}_{ij}=\varepsilon_{ijk}\Lambda_{k}.$ (13)
Thus we can substitute Eq. (13) into Eq. (11)
$\rho_{ij}=(\delta_{ij}\partial_{k}\Lambda_{k}-\partial_{j}\Lambda_{i})+\varepsilon_{ikl}\partial_{k}\epsilon_{lj}^{\rm
e}.$ (14)
For a system without residual elastic stress, the GND density thus depends
only on the varying crystalline orientation Limkumnerd and Sethna (2007).
Dynamically, the time evolution law of the GND density emerges from the
conservation of the Burgers vector Kosevich (1979); Lazar (2011)
$\frac{\partial}{\partial t}\rho_{ik}=-\varepsilon_{ijq}\partial_{j}J_{qk},$
(15)
or
$\frac{\partial}{\partial
t}\varrho_{ijk}=-\varepsilon_{ijm}\varepsilon_{mpq}\partial_{p}J_{qk}=-g_{ijpq}\partial_{p}J_{qk},$
(16)
where $J$ represents the Burgers vector flux, and $g_{ijpq}$ denotes the contraction
$\varepsilon_{ijm}\varepsilon_{mpq}=\delta_{ip}\delta_{jq}-\delta_{iq}\delta_{jp}$.
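The contracted-epsilon identity defining $g_{ijpq}$ can be verified numerically in a few lines; this sanity check is our own illustration, not part of the derivation:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

g = np.einsum('ijm,mpq->ijpq', eps, eps)              # g_ijpq = eps_ijm eps_mpq

delta = np.eye(3)
identity = (np.einsum('ip,jq->ijpq', delta, delta)
            - np.einsum('iq,jp->ijpq', delta, delta))  # delta_ip delta_jq - delta_iq delta_jp
assert np.allclose(g, identity)
```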
#### II.1.2 Non-conserved order parameter field
The natural physicist’s order parameter field $\varrho$, characterizing the
incompatibility, can be written in terms of the plastic distortion field
$\beta^{\rm p}$
$\varrho_{ijk}=\varepsilon_{ijm}\rho_{mk}=-g_{ijls}\partial_{l}\beta^{\rm
p}_{sk}.$ (17)
In the linear approximation, the alternative order parameter field $\beta^{\rm
p}$ fully specifies the local deformation $\bm{u}$ of the material, the
elastic distortion $\beta^{\rm e}$, the internal long-range stress field
$\sigma^{\rm int}$ and the crystalline orientation (the Rodrigues vector
$\mathbf{\Lambda}$ giving the axis and angle of rotation), as summarized in A.
It is natural, given Eq. (9) and Eq. (15), to use the flux $J$ of the Burgers
vector density to define the dynamics of the plastic distortion tensor
$\beta^{\rm p}$ Kosevich (1979); Limkumnerd and Sethna (2006); Lazar (2011):
$\frac{\partial\beta^{\rm p}_{ij}}{\partial t}=J_{ij}.$ (18)
As noted by Ref. Acharya, 2004, Eq. (9) and Eq. (15) equate a curl of
$\beta^{\rm p}$ to a curl of $J$, so an arbitrary divergence may be added to
Eq. (18): the evolution of the plastic distortion $\beta^{\rm p}$ is not
determined by the evolution of the GND density. Ref. Acharya, 2004 resolves
this ambiguity using a Stokes-Helmholtz decomposition of $\beta^{\rm p}$. In
our notation, $\beta^{\rm p}=\beta^{\rm p,I}+\beta^{\rm p,H}$. The ‘intrinsic’
plastic distortion $\beta^{\rm p,I}$ is divergence-free
($\partial_{i}\beta^{\rm p,I}_{ij}=0$, i.e., $k_{i}\widetilde{\beta}^{\rm p,I}_{ij}=0$)
and determined by the GND density $\rho$. The ‘history-dependent’ $\beta^{\rm p,H}$
is curl-free ($\epsilon_{\ell ij}\partial_{\ell}\beta^{\rm p,H}_{ij}=0$, i.e.,
$\epsilon_{\ell ij}k_{\ell}\widetilde{\beta}^{\rm p,H}_{ij}=0$). (Changing the
initial reference state through a curl-free plastic distortion, leaving behind no
dislocations, will change $\beta^{\rm p,H}$ but not $\beta^{\rm p,I}$; the former
depends on the history of the material and not just the current state, motivating
our nomenclature.) In Fourier space, we can do this decomposition explicitly, as
$\widetilde{\beta}^{\rm p}_{ij}(\mathbf{k})=-i\varepsilon_{ilm}\frac{k_{l}}{k^{2}}\widetilde{\rho}_{mj}(\mathbf{k})+ik_{i}\widetilde{\psi}_{j}(\mathbf{k})\equiv\widetilde{\beta}^{\rm p,I}_{ij}(\mathbf{k})+\widetilde{\beta}^{\rm p,H}_{ij}(\mathbf{k}).$ (19)
This decomposition will become important to us in Sec. IV.3.3, where the
correlation functions of $\beta^{\rm p,I}$ and $\beta^{\rm p,H}$ will scale
differently with distance.
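The decomposition of Eq. (19) is straightforward to carry out on a periodic grid with FFTs. The sketch below is our own (the function name and the handling of the $\mathbf{k}=0$ mode are assumptions consistent with the periodic-boundary discussion below); it reconstructs $\beta^{\rm p,I}$ from the curl of $\beta^{\rm p}$ and takes $\beta^{\rm p,H}$ as the remainder:

```python
import numpy as np

def decompose_beta_p(beta_p, L=1.0):
    """Split beta^p into the intrinsic (divergence-free) and history-dependent
    (curl-free) parts of Eq. (19) on a periodic cubic grid.
    beta_p : real array of shape (N, N, N, 3, 3)."""
    N = beta_p.shape[0]
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[j, i, k] = 1.0, -1.0

    k1 = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    kvec = np.stack([kx, ky, kz], axis=-1)
    k2 = np.einsum('...l,...l->...', kvec, kvec)
    k2[0, 0, 0] = 1.0                                   # avoid 0/0; k = 0 handled below

    beta_k = np.fft.fftn(beta_p, axes=(0, 1, 2))
    # Eq. (9): rho_mj = -eps_mlr d_l beta^p_rj  ->  -i eps_mlr k_l beta^p_rj
    rho_k = -1j * np.einsum('mlr,...l,...rj->...mj', eps, kvec, beta_k)
    # Eq. (19): beta^{p,I}_ij(k) = -i eps_ilm (k_l / k^2) rho_mj(k)
    beta_I_k = (-1j * np.einsum('ilm,...l,...mj->...ij', eps, kvec, rho_k)
                / k2[..., None, None])
    beta_I_k[0, 0, 0] = 0.0                             # the uniform mode carries no GNDs
    beta_I = np.real(np.fft.ifftn(beta_I_k, axes=(0, 1, 2)))
    return beta_I, beta_p - beta_I                      # (intrinsic, history-dependent)
```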
Ref. Acharya, 2004 treats the evolution of the two components $\beta^{\rm
p,I}$ and $\beta^{\rm p,H}$ separately. Because our simulations have periodic
boundary conditions, the evolution of $\beta^{\rm p,H}$ does not affect the
evolution of $\rho$. As noted by Ref. Acharya, 2004, in more general
situations $\beta^{\rm p,H}$ will alter the shape of the body, and hence
interact with the boundary conditions. (For our simulations with external
shear Chen et al. (2010), the $\mathbf{k}=0$ mode of $\beta^{\rm p,H}$ couples to
the boundary condition, and we determine the plastic evolution of that mode
explicitly. For the correlation functions presented here, the $\mathbf{k}=0$ mode
is unimportant because we subtract $\beta^{\rm p}$ fields at different sites
before correlating.) Hence in the simulations presented here, we use Eq. (18),
with the warning that the plastic deformation fields shown in the figures are
arbitrary up to an overall divergence. The correlation functions we study of the
intrinsic plastic distortion $\beta^{\rm p,I}$ are independent of this ambiguity,
but the correlation functions of $\beta^{\rm p,H}$ discussed in Appendix E.1 will
depend on this choice.
In the presence of external loading, we can express the appropriate free
energy $\mathcal{F}$ as the sum of two terms: the elastic interaction energy
of GNDs, and the energy of interaction with the applied stress field. The free
energy functional is
$\mathcal{F}=\int d^{3}\mathbf{x}\biggl{(}\frac{1}{2}\sigma_{ij}^{\rm
int}\epsilon_{ij}^{\rm e}-\sigma_{ij}^{\rm ext}\epsilon_{ij}^{\rm
p}\biggr{)}.$ (20)
Alternatively, it can be rewritten in Fourier space
$\mathcal{F}=-\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\biggl{(}\frac{1}{2}M_{ijmn}(\mathbf{k})\widetilde{\beta}^{\rm
p}_{ij}(\mathbf{k})\widetilde{\beta}^{\rm
p}_{mn}(-\mathbf{k})+\widetilde{\sigma}_{ij}^{\rm
ext}(\mathbf{k})\widetilde{\beta}^{\rm p}_{ij}(-\mathbf{k})\biggr{)},$ (21)
as discussed in B.1.
### II.2 Traditional dissipative continuum dynamics
There are well known approaches for deriving continuum equations of motion for
dissipative systems, which in this case produce a traditional von Mises-style
theory Rickman and Viñals (1997), useful at longer scales. We begin by
reproducing these standard equations.
For the sake of simplicity, we ignore external stress ($\sigma_{ij}$
simplified to $\sigma^{\rm int}_{ij}$) in the following three subsections. We
start by using the standard methods applied to the non-conserved order
parameter $\beta^{\rm p}$, and then turn to the conserved order parameter
$\varrho$.
#### II.2.1 Dissipative dynamics built from the non-conserved order parameter
field $\beta^{\rm p}$
The plastic distortion $\beta^{\rm p}$ is a non-conserved order parameter
field, which is utilized by the engineering community to study texture
evolution and plasticity of mechanically deformed structural materials. The
simplest dissipative dynamics in terms of $\beta^{\rm p}$ minimizes the free
energy by steepest descents
$\frac{\partial}{\partial t}\beta^{\rm
p}_{ij}=-\Gamma\frac{\delta\mathcal{F}}{\delta\beta^{\rm p}_{ij}},$ (22)
where $\Gamma$ is a positive material-dependent constant. We may rewrite it in
Fourier space, giving
$\frac{\partial}{\partial t}\widetilde{\beta}^{\rm
p}_{ij}(\mathbf{k})=-\Gamma\frac{\delta\mathcal{F}}{\delta\widetilde{\beta}^{\rm
p}_{ij}(-\mathbf{k})}.$ (23)
The functional derivative $\delta\mathcal{F}/\delta\widetilde{\beta}^{\rm
p}_{ij}(-\mathbf{k})$ is the negative of the long-range stress
$\frac{\delta\mathcal{F}}{\delta\widetilde{\beta}^{\rm
p}_{ij}(-\mathbf{k})}=-M_{ijmn}(\mathbf{k})\widetilde{\beta}^{\rm
p}_{mn}(\mathbf{k})\equiv-\widetilde{\sigma}_{ij}(\mathbf{k}).$ (24)
This dynamics implies a simplified version of von Mises plasticity
$\frac{\partial}{\partial t}\widetilde{\beta}^{\rm
p}_{ij}(\mathbf{k})=\Gamma\widetilde{\sigma}_{ij}(\mathbf{k}).$ (25)
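As an illustration only, here is a forward-Euler sketch of the steepest-descent dynamics of Eqs. (23)-(25) in Fourier space. The elastic kernel $M_{ijmn}(\mathbf{k})$ is left as a hypothetical user-supplied callable (its explicit form in terms of the elastic constants is the one summarized in A and B.1), and the time step and iteration count are arbitrary:

```python
import numpy as np

def relax_von_mises(beta_p, M_of_k, kgrid, Gamma=1.0, dt=1e-3, steps=1000):
    """Forward-Euler integration of Eq. (25): d beta^p/dt = Gamma * sigma,
    with sigma_ij(k) = M_ijmn(k) beta^p_mn(k) (Eq. 24).

    beta_p : real (N, N, N, 3, 3) plastic distortion field
    M_of_k : hypothetical callable, kgrid -> (N, N, N, 3, 3, 3, 3) kernel M_ijmn(k)
    kgrid  : (N, N, N, 3) array of wave vectors
    """
    M = M_of_k(kgrid)
    beta_k = np.fft.fftn(beta_p, axes=(0, 1, 2))
    for _ in range(steps):
        sigma_k = np.einsum('...ijmn,...mn->...ij', M, beta_k)  # Eq. (24)
        beta_k = beta_k + dt * Gamma * sigma_k                  # Eq. (25)
    return np.real(np.fft.ifftn(beta_k, axes=(0, 1, 2)))
```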
#### II.2.2 Dissipative dynamics built from the conserved order parameter
field $\varrho$
We can also derive an equation of motion starting from the GND density
$\varrho$, as was done by Ref. Rickman and Viñals, 1997. For this dissipative
dynamics Eq. (16), the simplest expression for $J$ is
$J_{qk}=-\Gamma^{\prime}_{ablq}\partial_{l}\frac{\delta\mathcal{F}}{\delta\varrho_{abk}},$
(26)
where the material-dependent constant tensor $\Gamma^{\prime}$ must be chosen
to guarantee a decrease of the free energy with time.
The infinitesimal change of $\mathcal{F}$ with respect to the GND density
$\varrho$ is
$\delta\mathcal{F}[\varrho]=\int\\!d^{3}\mathbf{x}\,\,\frac{\delta\mathcal{F}}{\delta\varrho_{ijk}}\delta\varrho_{ijk}.$
(27)
The free energy dissipation rate is thus $\delta\mathcal{F}/\delta t$ for
$\delta\varrho=\frac{\partial\varrho}{\partial t}\delta t$, hence
$\frac{\partial}{\partial
t}\mathcal{F}[\varrho]=\int\\!d^{3}\mathbf{x}\,\,\frac{\delta\mathcal{F}}{\delta\varrho_{ijk}}\frac{\partial\varrho_{ijk}}{\partial
t}.$ (28)
Substituting Eq. (16) into Eq. (28) and integrating by parts gives
$\frac{\partial}{\partial
t}\mathcal{F}[\varrho]=\int\\!d^{3}\mathbf{x}\,\,\biggl{(}g_{ijpq}\partial_{p}\frac{\delta\mathcal{F}}{\delta\varrho_{ijk}}\biggr{)}J_{qk}.$
(29)
Substituting Eq. (26) into Eq. (29) gives
$\frac{\partial}{\partial
t}\mathcal{F}[\varrho]=-\int\\!d^{3}\mathbf{x}\,\,\biggl{(}g_{ijpq}\partial_{p}\frac{\delta\mathcal{F}}{\delta\varrho_{ijk}}\biggr{)}\biggl{(}\Gamma^{\prime}_{ablq}\partial_{l}\frac{\delta\mathcal{F}}{\delta\varrho_{abk}}\biggr{)}.$
(30)
Now, to guarantee that the energy never increases, we choose
$\Gamma^{\prime}_{ablq}=\Gamma g_{ablq}$ (where $\Gamma$ is a positive material-
dependent constant), which yields the rate of change of energy as the negative
of a perfect square
$\frac{\partial}{\partial
t}\mathcal{F}[\varrho]=-\int\\!d^{3}\mathbf{x}\,\,\Gamma\sum_{q,k}\biggl{(}g_{ablq}\partial_{l}\frac{\delta\mathcal{F}}{\delta\varrho_{abk}}\biggr{)}^{2}.$
(31)
Using Eqs. (16) and (26), we can write the dynamics in terms of $\varrho$
$\frac{\partial}{\partial t}\varrho_{ijk}=\Gamma
g_{ijpq}g_{ablq}\partial_{p}\partial_{l}\frac{\delta\mathcal{F}}{\delta\varrho_{abk}}.$
(32)
Substituting the functional derivative
$\delta\mathcal{F}/\delta\varrho_{abk}$, Eq. (114), derived in B.2, into Eq.
(32) and comparing to Eq. (16) tells us
$\frac{\partial}{\partial t}\varrho_{ijk}(\mathbf{x})=-\Gamma
g_{ijpq}\partial_{p}\sigma_{qk}(\mathbf{x})=-g_{ijpq}\partial_{p}J_{qk}(\mathbf{x}),$
(33)
where
$J_{qk}=\Gamma\sigma_{qk}$ (34)
duplicating the von Mises law (Eq. 25) of the previous subsection. The
simplest dissipative dynamics of either non-conserved or conserved order
parameter fields thus turns out to be the traditional linear dynamics, a
simplified von Mises law.
The problem with this law for us is that it allows for plastic deformation in
the absence of dislocations, i.e., the Burgers vector flux can be induced
through the elastic loading on the boundaries, even in a defect-free medium.
This is appropriate on engineering length scales above or around a micron,
where SSDs dominate the plastic deformation. (Methods to incorporate their
effects into a theory like ours have been provided by Acharya et al. Acharya
and Roy (2006); Roy and Acharya (2006) and Varadhan et al. Varadhan et al.
(2006).)
By ignoring the SSDs, our theory assumes that there is an intermediate coarse-
grain length scale, large compared to the distance between dislocations and
small compared to the distance where the cancelling of dislocations with
different Burgers vectors dominates the dynamics, discussed in Sec. III. We
believe this latter length scale is given by the distance between cell walls
(as discussed in Sec. IV.2). The cell wall misorientations are geometrically
necessary. On the one hand, it is known Kuhlmann-Wilsdorf and Hansen (1991);
Hughes and Hansen (1993) that neighboring cell walls often have
misorientations of alternating signs, so that on coarse-grain length scales
just above the cell wall separation one would expect that explicit treatment of
the SSDs would be necessary. On the other hand, the density of dislocations in
cell walls is high, so that a coarse-grain length much smaller than the
interesting structures (and hence where we believe SSDs are unimportant)
should be possible Kiener et al. (2011). (Our cell structures are fractal,
with no characteristic ‘cell size’; this coarse-grain length sets the minimum
cutoff scale of the fractal, and the grain size or inhomogeneity length will
set the maximum scale.) With this assumption, to treat the formation of
cellular structures, we turn to theories of the form given in Eq. (15),
defined in terms of dislocation currents $J$ that depend directly on the local
GND density.
### II.3 Our CDD model
The microscopic motion of a dislocation under external strain depends upon
temperature. In general, it moves quickly along the glide direction, and
slowly (or not at all) along the climb direction where vacancy diffusion must
carry away the atoms. The glide speed can be limited by phonon drag at higher
temperatures, or can accelerate to nearly the speed of sound at low
temperatures Hirth and Lothe (1982). It is traditional to assume that the
dislocation velocity is over-damped, and proportional to the component of the
force per unit dislocation length in the glide plane. (In real materials the
dislocation dynamics is intermittent, as dislocations bow out or depin from
junctions and disorder, and engage in complex dislocation avalanches.)
To coarse-grain these microscopic dynamics, for reasons described in Sec. III, we
choose a CDD model whose dislocation currents vanish when the GND density
vanishes, without considering SSDs. Ref. Limkumnerd and Sethna, 2006 derived a
dislocation current $J$ for this case using a closure approximation of the
underlying microscopics. Their work reproduced (in the case of both glide and
climb) an earlier dynamical model proposed by Acharya et al. Acharya (2001);
Roy and Acharya (2005); Acharya and Roy (2006), who also incorporate the
effects of SSDs. We follow the general approach of Acharya and collaborators
Acharya (2001, 2003, 2004); Roy and Acharya (2005); Varadhan et al. (2006);
Acharya and Roy (2006) in Sec. II.3.1 to derive an evolution law for
dislocations allowed both to glide and climb, and then modify it to remove
climb in Sec. II.3.2. We derive a second variant of glide-only dynamics in
Sec. II.3.3 by coupling climb to vacancies and then taking the limit of
infinite vacancy energy, which reproduces a model proposed earlier by Ref.
Acharya and Roy, 2006.
In our CGD and GOD-LVP dynamics (Sections II.3.1 and II.3.3 below), all
dislocations in the infinitesimal volume at $\mathbf{x}$ are moving with a
common velocity $\bm{v}(\mathbf{x})$. We discuss the validity of this single-
velocity form for the equations of motion at length in Sec. III, together with
a discussion of the coarse-graining and the emergence of SSDs. We view our
simulations as physically sensible ‘model materials’ – perhaps not the correct
theory for any particular material, but a sensible framework to generate
theories of plastic deformation and explain generic features common to many
materials.
#### II.3.1 Climb-glide dynamics (CGD)
We start with a model presuming (perhaps unphysically) that vacancy diffusion
is so fast that dislocations climb and glide with equal mobility. The elastic
Peach-Koehler force due to the stress $\sigma(\mathbf{x})$ on the local GND
density is given by $f^{PK}_{u}=\sigma_{mk}\varrho_{umk}$. We assume that the
velocity $\bm{v}\propto\bm{f}^{PK}$, giving a local constitutive relation
$v_{u}\propto\sigma_{mk}\varrho_{umk}.$ (35)
How should we determine the proportionality constant between velocity and
force? In experimental systems, this is complicated by dislocation
entanglement and short-range forces between dislocations. Ignoring these
features, the velocity of each dislocation should depend only on the stress
induced by the other dislocations, not the local density of dislocations
Zapperi and Zaiser (2011). We can incorporate this in an approximate way by
making the proportionality factor in Eq. (35) inversely proportional to the
GND density. We measure the latter by summing the square of all components of
$\varrho$, hence $|\varrho|=\sqrt{\varrho_{ijk}\varrho_{ijk}/2}$ and
$v_{u}=\frac{D}{|\varrho|}\sigma_{mk}\varrho_{umk}$, where $D$ is a positive
material-dependent constant. This choice has the additional important feature
that the evolution of a sharp domain wall whose width is limited by the
lattice cutoff is unchanged when the lattice cutoff is reduced.
The flux $J$ of the Burgers vector is thus Kosevich (1979)
$J_{ij}=v_{u}\varrho_{uij}=\frac{D}{|\varrho|}\sigma_{mk}\varrho_{umk}\varrho_{uij}.$
(36)
Notice that this dynamics satisfies our criterion that $J=0$ when there are no
GNDs (i.e., $\varrho=0$). Notice also that we do not incorporate the effects
of SSDs (Acharya’s $L^{p}$ Acharya and Roy (2006)); we discuss this further in
Sec. III.
Substituting this flux $J$ (Eq. 36) into the free energy dissipation rate (Eq.
120) gives
$\frac{\partial\mathcal{F}}{\partial
t}=-\int\\!d^{3}\mathbf{x}\,\,\sigma_{ij}J_{ij}=-\int\\!d^{3}\mathbf{x}\,\,\frac{|\varrho|}{D}v^{2}\leq
0.$ (37)
Details are given in B.3.
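For concreteness, here is a sketch of the CGD current of Eqs. (35)-(36) using numpy's einsum; the small floor regularizing $1/|\varrho|$ in dislocation-free cells is our own numerical choice, not part of the model:

```python
import numpy as np

def cgd_current(varrho, sigma, D=1.0, floor=1e-12):
    """Climb-glide (CGD) Burgers-vector flux, Eqs. (35)-(36).

    varrho : (..., 3, 3, 3) three-index GND density
    sigma  : (..., 3, 3) stress field
    Returns J_ij = v_u varrho_uij with v_u = (D/|varrho|) sigma_mk varrho_umk.
    """
    f_pk = np.einsum('...mk,...umk->...u', sigma, varrho)            # Peach-Koehler force
    norm = np.sqrt(0.5 * np.einsum('...ijk,...ijk->...', varrho, varrho))
    v = D * f_pk / np.maximum(norm, floor)[..., None]                # Eq. (35) with 1/|varrho|
    return np.einsum('...u,...uij->...ij', v, varrho)                # Eq. (36)
```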
#### II.3.2 Glide-only dynamics: mobile dislocation population (GOD-MDP)
When the temperature is low enough, dislocation climb is negligible, i.e.,
dislocations can only move in their glide planes. Fundamentally, dislocation
glide conserves the total number of atoms, which leads to an unchanged local
volume. Since the local volume change in time is represented by the trace
$J_{ii}$ of the flux of the Burgers vector, conservative motion of GNDs
demands $J_{ii}=0$. Ref. Limkumnerd and Sethna, 2006 derived the equation of
motion for dislocation glide only, by removing the trace of $J$ from Eq. (36).
However, their dynamics fails to guarantee that the free energy monotonically
decreases. Here we present an alternative approach.
We can remove the trace of $J$ by modifying the first equality in Eq. (36),
$J^{\prime}_{ij}=v^{\prime}_{u}\biggl{(}\varrho_{uij}-\frac{1}{3}\delta_{ij}\varrho_{ukk}\biggr{)},$
(38)
where
$\varrho^{\prime}_{uij}=\varrho_{uij}-\frac{1}{3}\delta_{ij}\varrho_{ukk}$ can
be viewed as a subset of ‘mobile’ dislocations moving with velocity
$\bm{v}^{\prime}$.
Substituting the current (Eq. 38) into the free energy dissipation rate (Eq.
120) gives
$\frac{\partial\mathcal{F}}{\partial
t}=-\int\\!d^{3}\mathbf{x}\,\,\sigma_{ij}\bigl{(}v^{\prime}_{u}\varrho^{\prime}_{uij}\bigr{)}.$
(39)
If we choose the velocity
$v^{\prime}_{u}\propto\sigma_{ij}\varrho^{\prime}_{uij}$, the appropriate free
energy monotonically decreases in time. We thus express
$v^{\prime}_{u}=\frac{D}{|\varrho|}\varrho^{\prime}_{uij}\sigma_{ij}$, where
$D$ is a positive material-dependent constant, and the prefactor $1/|\varrho|$
is added for the same reasons, as discussed in the second paragraph of Sec.
II.3.1.
The current $J^{\prime}$ of the Burgers vector is thus written Chen et al.
(2010)
$J^{\prime}_{ij}=v^{\prime}_{u}\varrho^{\prime}_{uij}=\frac{D}{|\varrho|}\sigma_{mn}\biggl{(}\varrho_{umn}-\frac{1}{3}\delta_{mn}\varrho_{ull}\biggr{)}\biggl{(}\varrho_{uij}-\frac{1}{3}\delta_{ij}\varrho_{ukk}\biggr{)}.$
(40)
This natural evolution law becomes much less self-evident when expressed in
terms of the traditional two-index version $\rho$ (Eqs. 1&2)
$J^{\prime}_{ij}=\frac{D}{|\varrho|}\biggl{(}\sigma_{in}\rho_{mn}\rho_{mj}-\sigma_{mn}\rho_{in}\rho_{mj}-\frac{1}{3}\sigma_{mm}\rho_{ni}\rho_{nj}+\frac{1}{3}\sigma_{mm}\rho_{in}\rho_{nj}-\frac{\delta_{ij}}{3}\Bigl{(}\sigma_{kn}\rho_{mn}\rho_{mk}-\sigma_{mn}\rho_{kn}\rho_{mk}-\frac{1}{3}\sigma_{mm}\rho_{nk}\rho_{nk}+\frac{1}{3}\sigma_{mm}\rho_{kn}\rho_{nk}\Bigr{)}\biggr{)},$ (41)
(which is why we introduce the three-index variant $\varrho$).
This current $J^{\prime}$ makes the free energy dissipation rate the negative
of a perfect square in Eq. (122). Details are given in B.3.
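The corresponding sketch for the GOD-MDP current of Eqs. (38)-(40), again with an assumed numerical floor of our own; the trace of the returned current vanishes identically, reflecting volume conservation under glide:

```python
import numpy as np

def god_mdp_current(varrho, sigma, D=1.0, floor=1e-12):
    """Glide-only (GOD-MDP) flux, Eqs. (38)-(40), built from the traceless
    'mobile' density varrho'_uij = varrho_uij - (1/3) delta_ij varrho_ukk."""
    delta = np.eye(3)
    trace = np.einsum('...ukk->...u', varrho)
    varrho_m = varrho - trace[..., None, None] * delta / 3.0         # mobile population
    norm = np.sqrt(0.5 * np.einsum('...ijk,...ijk->...', varrho, varrho))
    v = (D * np.einsum('...mn,...umn->...u', sigma, varrho_m)
         / np.maximum(norm, floor)[..., None])                       # v'_u
    J = np.einsum('...u,...uij->...ij', v, varrho_m)                 # Eq. (40)
    return J                                                         # J_ii = 0 by construction
```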
#### II.3.3 Glide-only dynamics: local vacancy-induced pressure (GOD-LVP)
At high temperature, the fast vacancy diffusion leads to dislocation climb out
of the glide direction. As the temperature decreases, vacancies are frozen out
so that dislocations only slip in the glide planes. In C.1, we present a
dynamical model coupling the vacancy diffusion to our CDD model. Here we
consider the limit of frozen-out vacancies with infinite energy costs, which
leads to another version of glide-only dynamics.
According to the coupling dynamics Eq. (130), we write down the general form
of dislocation current
$J^{\prime\prime}_{ij}=\frac{D}{|\varrho|}\biggl{(}\sigma_{mn}-\delta_{mn}p\biggr{)}\varrho_{umn}\varrho_{uij},$
(42)
where $p$ is the local pressure due to vacancies.
The limit of infinitely costly vacancies ($\alpha\to\infty$ in C.1) leads to
the traceless current, $J^{\prime\prime}_{ii}=0$. Solving this equation gives
a critical local pressure $p^{c}$
$p^{c}=\frac{\sigma_{pq}\varrho_{spq}\varrho_{skk}}{\varrho_{uaa}\varrho_{ubb}}.$
(43)
Figure 3: (Color online) Relaxation of various CDD models. The blue dot
represents the initial random plastically-deformed state; the red dots
indicate the equilibrated stress-free states driven by different dynamics.
Curve A: steepest descent dynamics leads to the trivial homogeneous
equilibrated state, discussed in Sec. II.2. Curve B: our CDD models settle the
system into non-trivial stress-free states with wall-like singularities of the
GND density, discussed in Sec. II.3.
The corresponding current $J^{\prime\prime}$ of the Burgers vector in this
limit is thus written
$\displaystyle
J^{\prime\prime}_{ij}=\frac{D}{|\varrho|}\biggl{(}\sigma_{mn}-\frac{\sigma_{pq}\varrho_{spq}\varrho_{skk}}{\varrho_{uaa}\varrho_{ubb}}\delta_{mn}\biggr{)}\varrho_{umn}\varrho_{uij},$
(44)
reproducing the glide-only dynamics proposed by Ref. Acharya and Roy, 2006.
Substituting the current (Eq. 44) into the free energy dissipation rate (Eq.
120) gives
$\frac{\partial\mathcal{F}}{\partial
t}=-\int\\!d^{3}\mathbf{x}\frac{D}{|\varrho|}\biggl{[}f^{PK}_{i}f^{PK}_{i}-\biggl{(}\frac{d_{i}f_{i}^{PK}}{|\mathbf{d}|}\biggr{)}^{2}\biggr{]}\leq
0,$ (45)
where $f_{i}^{PK}=\sigma_{mn}\varrho_{imn}$ and $d_{i}=\varrho_{ikk}$. The
equality holds when the force ${\bf f}^{PK}$ is parallel to ${\bf d}$.
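And a sketch of the GOD-LVP current, implementing the critical pressure of Eq. (43) and the resulting traceless flux of Eq. (44); the regularization of the denominators is again our own numerical choice:

```python
import numpy as np

def god_lvp_current(varrho, sigma, D=1.0, floor=1e-12):
    """Glide-only flux via a local vacancy pressure (GOD-LVP), Eqs. (42)-(44)."""
    d = np.einsum('...ukk->...u', varrho)                            # d_u = varrho_ukk
    f_pk = np.einsum('...pq,...upq->...u', sigma, varrho)            # f^PK_u
    p_c = (np.einsum('...u,...u->...', f_pk, d)
           / np.maximum(np.einsum('...u,...u->...', d, d), floor))   # Eq. (43)
    sigma_eff = sigma - p_c[..., None, None] * np.eye(3)             # sigma_mn - p^c delta_mn
    norm = np.sqrt(0.5 * np.einsum('...ijk,...ijk->...', varrho, varrho))
    v = (D * np.einsum('...mn,...umn->...u', sigma_eff, varrho)
         / np.maximum(norm, floor)[..., None])
    return np.einsum('...u,...uij->...ij', v, varrho)                # Eq. (44), trace-free
```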
Unlike the traditional linear dissipative models, our CDD model, coarse-grained
from microscopic interactions, drives the random plastic distortion to
non-trivial stress-free states with dislocation wall singularities, as
schematically illustrated in Fig. 3.
Our minimal CDD model, consisting of GNDs evolving under the long-range
interaction, provides a framework for understanding dislocation morphologies
at the mesoscale. Eventually, it can be extended to include vacancies by
coupling them to the dislocation current (as discussed in C.1), or extended to
include disorder, dislocation pinning, and entanglement by adding appropriate
interactions to the free energy functional and refining the effective stress
field (as discussed in C.2). It has already been extended to include SSDs
incorporating traditional crystal plasticity theories Varadhan et al. (2006);
Acharya and Roy (2006); Roy and Acharya (2006).
## III Coarse Graining
The discussion in Sec. II uses the language and conceptual framework of the
condensed matter physics of systems close to equilibrium – the generalized
“hydrodynamics” used to derive equations of motion for liquids and gases,
liquid crystals, superfluids and superconductors, magnetic materials, etc. In
these subjects, one takes the broken symmetries and conserved quantities, and
systematically writes the most general evolution laws allowed by symmetry,
presuming that these quantities determine the state of the material. In that
framework, the Burgers vector flux $J$ of Eqs. (15) and (16) would normally be
written as a general function of $\rho$ and its gradients, constrained by
symmetries and the necessity that the net energy decreases with time. Indeed,
this was the approach Limkumnerd originally took Limkumnerd (2006), but the
complexity of the resulting theory and the multiplicity of terms allowed by
symmetry led them to specialize Limkumnerd and Sethna (2006) to a particular
choice motivated by the Peach-Koehler force — leading to the equation of
motion previously developed by Acharya et al. Acharya (2001); Roy and Acharya
(2005).
The assumption that the net continuum dislocation density determines the
evolution, however, is an uncontrolled and probably invalid assumption. (There
are two uncontrolled approximations we make. Here we assume that the continuum,
coarse-grained dislocation density $\rho=\rho^{\Sigma}$ determines the
evolution: we ignore SSDs as unimportant on the sub-cellular length-scales of
interest to us. Later, we shall further assume that the nine independent
components of $\rho_{ij}$ are all dragged by the stress with the same
velocity.) (Ref. Falk and Langer, 1998 have argued that the chaotic
motion of dislocations may lead to a statistical ensemble that could allow a
systematic theory of this type to be justified, but consensus has not been
reached on whether this will indeed be possible.) The situation is less
analogous to deriving the Navier-Stokes equation (where local equilibrium at
the viscous length is sensible) than to deriving theories of eddy viscosity in
fully developed turbulence (where unavoidable uncontrolled approximations are
needed to subsume swirls on smaller scales into an effective viscosity of the
coarse-grained system). Important features of how dislocations are arranged in
a local region will not be determined by the net Burgers vector density, and
extra state variables embodying their effects are needed. In the context of
dislocation dynamics, these state variables are usually added as SSDs and
yield surfaces – although far more complex memory effects could in principle
be envisioned.
Let us write $\rho^{0}$ as the microscopic dislocation density (the sum of
line-$\delta$ functions along individual dislocations, as in Eq. (1) and
following equations). For the microscopic density, allowing both glide and
climb, the dislocation current $J^{0}$ is directly given by the velocity
$\bm{v}^{0}(\mathbf{x})$ of the individual dislocation passing through
$\mathbf{x}$ (see Eq. 36):
$J^{0}_{ij}=v^{0}_{u}\varrho^{0}_{uij}.$ (46)
Let $F^{\Sigma}$ be the microscopic quantity $F^{0}$ coarse-grained over a
length scale $\Sigma$,
$F^{\Sigma}_{ij}(\mathbf{x})=\int
d^{3}\mathbf{y}F^{0}_{ij}(\mathbf{x}+\mathbf{y})w^{\Sigma}(\mathbf{y}),$ (47)
where $w^{\Sigma}$ is a smoothing or blurring function. Typically, we use a
normal or Gaussian distribution $N^{\Sigma}$
$w^{\Sigma}(\mathbf{y})=N^{\Sigma}(\mathbf{y})=(2\pi\Sigma^{2})^{-3/2}e^{-y^{2}/(2\Sigma^{2})}.$
(48)
For our purposes, we can define the SSD density as the difference between the
coarse-grained density and the microscopic density Sandfeld et al. (2010):
$\rho^{SSD}(\mathbf{x})=\rho^{0}(\mathbf{x})-\rho^{\Sigma}(\mathbf{x})=\rho^{0}(\mathbf{x})-\int
d^{3}\mathbf{y}\rho^{0}(\mathbf{x}+\mathbf{y})w^{\Sigma}(\mathbf{y}).$ (49)
(Ref. Acharya, 2011 calls this quantity the dislocation fluctuation tensor
field.)
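On a periodic grid, the Gaussian coarse-graining of Eqs. (47)-(48) is most easily applied in Fourier space, where convolution with $w^{\Sigma}$ becomes multiplication by $e^{-\Sigma^{2}k^{2}/2}$. A minimal sketch under our own naming, assuming a cubic grid:

```python
import numpy as np

def coarse_grain(field, sigma_cells):
    """Gaussian coarse-graining of Eqs. (47)-(48) on a periodic cubic grid,
    done in Fourier space where convolution with w^Sigma is a multiplication.
    field : real array whose first three axes are the spatial grid;
    sigma_cells : the smoothing scale Sigma, in units of grid cells."""
    N = field.shape[0]
    k1 = 2 * np.pi * np.fft.fftfreq(N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    kernel = np.exp(-0.5 * sigma_cells**2 * (kx**2 + ky**2 + kz**2))  # FT of N^Sigma
    kernel = kernel.reshape(kernel.shape + (1,) * (field.ndim - 3))
    field_k = np.fft.fftn(field, axes=(0, 1, 2))
    return np.real(np.fft.ifftn(kernel * field_k, axes=(0, 1, 2)))

# Eq. (49): the SSD density is then the difference between the microscopic and
# coarse-grained densities,  rho_ssd = rho_0 - coarse_grain(rho_0, Sigma).
```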
First, we address the question of SSDs, which we do not include in our
simulations. In the past Chen et al. (2010), we have argued that they do not
contribute to the long-range stresses that drive the formation of the cell
walls, and that the successful generation of cellular structures in our
simplified model suggests that they are not crucial. Here we go further, and
suggest that their density may be small on the relevant length-scales for
cell-wall formation, and also that in a theory (like ours) with scale-
invariant structures it would not be consistent to add them separately.
What is the dependence of the SSD density on the coarse-graining scale?
Clearly $\rho^{0}$ contains all dislocations; clearly for a bent single
crystal of size $L$, $\rho^{L}$ contains only those dislocations necessary to
mediate the rotation across the crystal (usually a tiny fraction of the total
density of dislocations). As $\Sigma$ increases past the distance between
dislocations, cancelling pairs of Burgers vectors through the same grid face
will leave the GNDs and join the SSDs. If the dislocation densities were
smoothly varying, as is often envisioned on long length scales, the SSD
density would be roughly independent of $\Sigma$ except on microscopic scales.
But, for a cellular structure with gross inhomogeneities in dislocation
density, the SSD density on the mesoscale may be much lower than that on the
macroscale. Very tangibly, if alternating cell walls separated by $\ell$ have
opposite misorientations (as is quite commonly observed Kuhlmann-Wilsdorf and
Hansen (1991); Hughes and Hansen (1993)), then the SSD density for
$\Sigma>\ell$ will include most of the dislocations incorporated into these
cell walls, while for $\Sigma<\ell$ the cell walls will be viewed as
geometrically necessary.
How does the GND density within the cell walls compare with the total
dislocation density for a typical material? Is it possible that the GNDs
dominate over SSDs in the regime where these cell wall patterns form? Recent
simulations clearly suggest (see Ref. Kiener et al., 2011 [Figure 5]) that the
distinction between GNDs and SSDs is not clear at the length scale of a
micron, and with reasonable definitions GNDs dominate by at least an order of
magnitude over the residual average SSD density. But what about the
experiments? While more experiments are necessary to clarify this issue, the
existing evidence supports that at mesoscales, SSDs at least are not
necessarily dominant. In particular, Ref. Hughes et al., 1997 observes that
cell boundary structures exhibit $D_{av}\theta_{av}/b=C$ where $D_{av}$ is the
average wall spacing and $\theta_{av}$ is the average misorientation angle
with $C\sim 650$ for ‘geometrically necessary’ boundaries (GNBs) and $C\sim
80$ for ‘incidental dislocation’ boundaries (IDBs). The resulting dislocation
density should scale as
$\rho_{GND}=\frac{1}{D_{av}h}=\frac{\theta_{av}}{D_{av}b}\sim\frac{C}{D_{av}^{2}}=\frac{\theta^{2}_{av}}{b^{2}C},$
(50)
where $h$ is the average spacing between GNDs in the wall (only
misorientation-mediating dislocations are counted). There are some estimates
available from the literature. Reference Hughes et al. (1997) tells us for
pure aluminum that $D_{av}$ is often observed to be $D_{av}=1-5\mu m$ which
leads to roughly $\rho_{GND}^{GNB}\sim 10^{13}\times(2.6-65)/m^{2}$ and
$\rho^{IDB}_{GND}$ one order of magnitude smaller. Similar estimates in Ref.
Godfrey and Hughes, 2000 give $\rho_{GND}^{GNB}=10^{14}-6\times 10^{15}/m^{2}$
for aluminum at von Mises strains of $\epsilon=0.2$ and $0.6$ respectively.
The larger the von Mises strain, the higher the dislocation density. Typically, in
highly deformed aluminum ($\epsilon\sim 2.7$), the total dislocation density
is roughly $10^{16}/m^{2}$ (see Ref. Hughes et al., 1998). While SSDs within a
cell boundary may exist, it is clearly far from true that SSDs dominate the
dynamics in these experiments.
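As a quick arithmetic check of these numbers (our own script, using only the quoted $C\sim 650$ and $D_{av}=1$-$5\,\mu m$):

```python
# Check of Eq. (50) against the quoted range rho_GND^GNB ~ 10^13 x (2.6 - 65) / m^2.
C = 650.0                        # GNB constant of Hughes et al. (1997)
for D_av in (5e-6, 1e-6):        # average wall spacing, 1-5 microns
    rho_gnd = C / D_av**2        # Eq. (50): rho_GND ~ C / D_av^2
    print(f"D_av = {D_av*1e6:.0f} um  ->  rho_GND ~ {rho_gnd:.1e} per m^2")
# -> 2.6e+13 and 6.5e+14 per m^2, spanning the quoted range; with C ~ 80 the
#    IDB estimate comes out roughly an order of magnitude smaller, as stated above.
```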
These TEM analyses of cell boundary sizes and misorientations have a
misorientation cutoff $\theta_{0}\sim 2^{\circ}$ Liu (1994); they analyze the
cell boundaries using a single typical length scale $D_{av}$. Our model
behavior is formally much closer to the fractal scaling analysis that Ref.
Hähner et al., 1998 used. How does one identify a cutoff in a theory
exhibiting scale invariance (i.e., with no natural length scale)? Clearly our
simulations are cut off at the numerical grid spacing, and the scale invariant
theory applies after a few grid spacings. Similarly, if the real materials are
described by a scale-invariant morphology (still an open question), the cutoff
to the scale invariant regime will be where the granularity of the
dislocations becomes important – the dislocation spacing, or perhaps the
annihilation length. This is precisely the length scale at which the
dislocations are individually resolved, at which there are no separate
populations of SSDs and GNDs. Thus ignoring SSDs in our theory is at least
self-consistent.
So, not only are the SSDs unimportant for the long-range stresses and apparently
unnecessary for our (presumably successful) modeling of the formation of cell
walls, but they may also be rare on the sub-cellular coarse-graining scale we
use in our modeling, and it makes sense in our mesoscale theory for us to omit
their effects.
That we likely do not need to incorporate explicit SSDs in our equations of
motion does not mean that our equations are correct. The microscopic equation
of motion, Eq. (46), naively looks the same as the ‘single-velocity’ equation
of motion we use (e.g., Eq. 36). But, as derived in
Ref. Acharya and Roy, 2006, the coarse-graining procedure (Eq. 47) leads to a
correction term $L^{p}$ to the single-velocity equations:
$J^{\Sigma}_{ij}=(v^{0}_{s}\varrho^{0}_{sij})^{\Sigma}=v^{\Sigma}_{s}\varrho^{\Sigma}_{sij}+\bigl{[}(v^{0}_{s}\varrho^{0}_{sij})^{\Sigma}-v^{\Sigma}_{s}\varrho^{\Sigma}_{sij}\bigr{]}=v^{\Sigma}_{s}\varrho^{\Sigma}_{sij}+L^{p}_{ij}.$ (51)
Acharya interprets this correction term $L^{p}$ as the strain rate due to SSDs
Acharya and Roy (2006); Roy and Acharya (2006), and later Beaudoin Varadhan et
al. (2006) and others Mach et al. (2010) then use traditional crystal
plasticity SSD evolution laws for it. (More precisely, equation (4) of Ref.
Acharya and Roy, 2006 contains two different definitions for $L^{p}$: the one
in Eq. (51) and
${L^{p}_{ij}}^{\prime}=\bigl{[}(v^{0}_{s}(\varrho^{0}_{sij}-\varrho^{\Sigma}_{sij}))^{\Sigma}\bigr{]}=\bigl{[}(v^{0}_{s}\varrho^{SSD}_{sij})^{\Sigma}\bigr{]}$.
${L^{p}}^{\prime}$ is of course a strain rate due to SSDs, but since
$\varrho^{\Sigma}$ varies in space ${L^{p}}^{\prime}$ is not equal to $L^{p}$.
Ref. Acharya, 2012 suggests using a two-variable version of the SSD density,
$\check{\varrho}^{SSD}(\mathbf{x},\mathbf{x}^{\prime})=\varrho^{0}(\mathbf{x}^{\prime})-\varrho^{\Sigma}(\mathbf{x})$,
making the two definitions equivalent.) Their GNDs thus move
according to the same single-velocity laws as ours do, supplemented by SSDs
that evolve by crystal plasticity (and thereby contribute changes to the GND
density). This is entirely appropriate for scales large compared to the
cellular structures, where most of the dislocations are indeed SSDs.
Although we argue that SSDs are largely absent at the nanoscale where we are
using our continuum theory, this does not mean the single-velocity form of our
equations of motion can be trusted. Unlike fluid mixtures, where momentum
conservation and Galilean invariance lead to a shared mean velocity after a
few collision times, the microscopic dislocations are subject to different
resolved shear stresses and are mobile along different glide planes, so
neighboring dislocations may well move in a variety of directions Sandfeld et
al. (2010). If so, the microscopic velocity $\bm{v}^{0}$ will fluctuate in
concert with the microscopic Burgers vector density $\varrho^{0}$ on
microscopic scales, and the correction $L^{p}$ will be large. Hence Acharya’s
correction term $L^{p}$ also incorporates multiple velocities for the GND
density. Our single-velocity approximation (e.g., Eq. 36) must therefore be
viewed as a physically allowed equation of motion, but also as a second
uncontrolled approximation: the general evolution law for the coarse-grained
system will be more complex.
Let us be perfectly clear that our arguments, compelling on scales small
compared to the mesoscale cellular structures, should not be viewed as a
critique of the use of SSDs on larger scales. Much of our understanding of
yield stress and work hardening revolves around the macroscopic dislocation
density, which perforce is due to SSDs (since they dominate on macroscopic
scales). We also admire the work of Beaudoin, Acharya, and others which
supplements the GND equations we both study with crystal plasticity rules for
the SSDs motivated by Eq. (51). Surely on macroscales the SSDs dominate the
deformation, and using a single-velocity law for the GNDs is better than
ignoring them altogether, and we have no particular reason to believe that the
contribution of multiple GND velocities in the evolution laws through $L^{p}$
will be significant or dominant.
## IV Results
### IV.1 Two and three dimensional simulations
We perform simulations in 2D and 3D for the dislocation dynamics of Eq. (15)
and Eq. (18), with dynamical currents defined by CGD (Eq. 36), GOD-MDP (Eq.
40), and GOD-LVP (Eq. 44). We numerically observe that simulations of Eqs.
(15), (18) lead to the same results statistically (i.e., the numerical time
step approximations leave the physics invariant). We therefore focus our
presentation on the results of Eq. (18), where the evolving field variable
$\beta^{\rm p}$ is unconstrained. Our CGD and GOD-MDP models have been quite
extensively simulated in one and two dimensions and relevant results can be
found in Refs. Chen et al., 2010, Limkumnerd and Sethna, 2006, and Limkumnerd
and Sethna, 2008. In this paper, we concentrate on periodic grids of spatial
extent $L$ in both two Chen et al. (2010) and three dimensions. The numerical
approach we use is a second-order central upwind scheme designed for Hamilton-
Jacobi equations Kurganov et al. (2001) using finite differences. This method
is quite efficient in capturing $\delta-$shock singular structures Choi et al.
(2012a), even though it is flexible enough to allow for the use of approximate
solvers near the singularities.
Figure 4: (Color online) Complex dislocation structures in two dimensions
($1024^{2}$) for the relaxed states of an initially random distortion. Top:
Dislocation climb is allowed; Middle: Glide only using a mobile dislocation
population; Bottom: Glide only using a local vacancy pressure. Left: Net GND
density $|\varrho|$ plotted linearly in density with dark regions a factor
$\sim 10^{4}$ more dense than the lightest visible regions. (a) When climb is
allowed, the resulting morphologies are sharp, regular, and close to the
system scale. (c) When climb is forbidden using a mobile dislocation
population, there is a hierarchy of walls on a variety of length scales,
getting weaker on finer length scales. (e) When climb is removed using a local
vacancy pressure, the resulting morphologies are as sharp as those (a)
allowing climb. Right: Corresponding local crystalline orientation maps, with
the three components of the orientation vector $\mathbf{\Lambda}$ linearly
mapped onto a vector of RGB values. Notice that the fuzzier cell walls in (c)
and (d) suggest a larger fractal dimension. Figure 5: (Color online) Complex
dislocation structures in three dimensions ($128^{3}$) for the relaxed states
of an initially random distortion. Notice these textured views on the surface
of simulation cubes. Top: Dislocation climb is allowed; Middle: Glide only
using a mobile dislocation population; Bottom: Glide only using a local
vacancy pressure. Left: Net GND density $|\varrho|$ plotted linearly in
density with dark regions a factor $\sim 10^{3}$ more dense than the lightest
visible regions. The cellular structures in (a), (c), and (e) seem similarly
fuzzy; our theory in three dimensions generates fractal cell walls. Right:
Corresponding local crystalline maps, with the three components of the
orientation vector $\mathbf{\Lambda}$ linearly mapped onto a vector of RGB
values. Figure 6: (Color online) The elastic free energy decreases to zero as
a power law in time in both two and three dimensions. In both (a) and (b), we
show that the free energy $\mathcal{F}$ decays monotonically in time, and goes
to zero as a power law for CGD, GOD-MDP, and GOD-LVP simulations, as the
system relaxes in the absence of external strain.
Our numerical simulations show a close analogy to those of turbulent flows
Choi et al. (2012a). As in three-dimensional turbulence, defect structures
lead to intermittent transfer of morphology to short length scales. As
conjectured Pumir and Siggia (1992a, b) for the Euler equations or the
inviscid limit of Navier-Stokes equations, our simulations develop
singularities in finite time Limkumnerd and Sethna (2006); Chen et al. (2010).
Here these singularities are $\delta$-shocks representing grain-boundary-like
structures emerging from the mutual interactions among mobile dislocations
Choi et al. (2012b). In analogy with turbulence, where the viscosity serves to
smooth out the vortex-stretching singularities of the Euler equations, we have
explored the effects of adding an artificial viscosity term to our equations
of motion Choi et al. (2012a). In the presence of artificial viscosity, our
simulations exhibit nice numerical convergence in all dimensions Choi et al.
(2012b). However, in the limit of vanishing viscosity, the solutions of our
dynamics continue to depend on the lattice cutoff in higher dimensions (our
simulations only exhibit numerical convergence in one dimension). Actually,
the fact that the physical system is cut off by the atomic scale leads to the
conjecture that our equations are in some sense non-renormalizable in the
ultraviolet. These issues are discussed in detail in Refs. Choi et al., 2012a
and Choi et al., 2012b. See also Ref. Acharya and Tartar, 2011 for global
existence and uniqueness results from an alternative regularization for this
type of equations; it is not known whether these alternative regularizations
will continue to exhibit the fractal scaling we observe.
In the vanishing viscosity limit, our simulations exhibit fractal structure
down to the smallest scales. When varying the system size continuously, the
solutions of our dynamics exhibit a convergent set of correlation functions of
the various order parameter fields, which are used to characterize the
emergent self-similarity. This statistical convergence is numerically tested
in D.1.
In both two and three dimensional simulations, we relax the deformed system
with and without dislocation climb in the absence of external loading. Here,
the initial plastic distortion field $\beta^{\rm p}$ is still a Gaussian
random field with correlation length scale $\sqrt{2}L/5\sim 0.28L$ and initial
amplitude $\beta_{0}=1$. (In our earlier work Chen et al. (2010), we described
this length as $L/5$, using a non-standard definition of correlation length
scale; see D.2.) These random initial conditions are explained in D.2. In 2D,
Figure 4 shows that CGD and GOD-LVP simulations (top and bottom) exhibit much
sharper, flatter boundaries than GOD-MDP (middle). This difference is
quantitatively described by the large shift in the static critical exponent
$\eta$ in 2D for both CGD and GOD-LVP. In our earlier work Chen et al. (2010),
we announced this difference as providing a sharp distinction between high-
temperature, non-fractal grain boundaries (for CGD), and low-temperature,
fractal cell wall structures (for GOD-MDP). This appealing message did not
survive the transition to 3D; Figure 5 shows basically indistinguishable
complex cellular structures, for all three types of dynamics. Indeed, Table 1
shows only a small change in critical exponents, among CGD, GOD-MDP, and GOD-
LVP. During both two and three dimensional relaxations, their appropriate free
energies monotonically decay to zero as shown in Fig. 6.
Figure 7: (Color online) Relaxation with various initial length scales in two
dimensions. GNDs are not allowed to climb due to the constraint of a mobile
dislocation population in these simulations. (a), (b), and (c) are the net GND
density map $|\varrho|$, the net plastic distortion $|\beta^{\rm p}|$ (the
warmer color indicating the larger distortion), and the crystalline
orientation map in a fully-relaxed state evolved from an initial random
plastic distortion with correlated length scale $0.07L$. They are compared to
the same sequence of plots, (d), (e), and (f), which are in the relaxed state
with the initial length scale $0.21L$ three times as long. Notice the features
with the longest wave length reflecting the initial distortion length scales.
(g), (h), and (i) are the scalar forms (discussed in Sec. IV.3) of correlation
functions of the GND density $\rho$, the intrinsic plastic distortion
$\beta^{\rm p,I}$, and the crystalline orientation $\mathbf{\Lambda}$ for
well-relaxed states with initial length scales varying from $0.07L$ to
$0.28L$. They exhibit power laws independent of the initial length scales,
with cutoffs set by the initial lengths. (The scaling relation among their
critical exponents will be discussed in Sec. V.)
### IV.2 Self-similarity and initial conditions
Self-similar structures, as emergent collective phenomena, have been studied
in mesoscale crystals Chen et al. (2010), human-scale social networks Song et
al. (2005), and the astronomical-scale universe Vergassola et al. (1994). In
some models Vergassola et al. (1994), the self-similarity comes from scale-
free initial conditions with a power-law spectrum Peebles (1993); Coles and
Lucchin (1995). In our CDD model, our simulations start from a random plastic
distortion with a Gaussian distribution characterized by a single length
scale. The scale-free dislocation structure spontaneously emerges as a result
of the deterministic dynamics.
Our Gaussian random initial condition is analogous to hitting a bulk material
randomly with a hammer. The hammer head (the dent size scale) corresponds to
the correlated length. We need to generate inhomogeneous deformations like
random dents, because our theory is deterministic and hence uniform initial
conditions under uniform loading will not develop patterns.
We have considered alternatives to our imposition of Gaussian random
deformation fields as initial conditions. (a) As an alternative to random
initial deformations, we could have imposed a more regular (albeit nonuniform)
deformation – starting with our material bent into a sinusoidal arc, and then
letting it relax. Such simulations produce more symmetric versions of the
fractal patterns we see; indeed, our Gaussian random initial deformations have
correlation lengths (‘hammer sizes’) comparable to the system size, so our
starting deformations are almost sinusoidal (although different components
have different phases) – see D.2. (b) To explore the effects of multiple
uncorrelated random domains (multiple small dents), we reduce the Gaussian
correlation length as shown in Fig. 7. We find that the initial-scale
deformation determines the maximal cutoff for the fractal correlations in our
model. In other systems (such as two-dimensional turbulence) one can observe
an ‘inverse cascade’ with fractal structures propagating to long length
scales; we observe no evidence of these here. (c) As an alternative to
imposing an initial plastic deformation field and then relaxing, we have
explored deforming the material slowly and continuously in time. Our
preliminary ‘slow hammering’ explorations turn the Gaussian initial conditions
${\beta^{\rm p0}}$ into a source term, modifying Eq. (18) with an additional
term to give $\partial_{t}\beta^{\rm p}_{ij}=J_{ij}+\beta^{\rm p0}_{ij}/\tau$.
Our early explorations suggest that slow hammering simulations will be
qualitatively compatible with the relaxation of an initial rapid hammering. In
this paper, to avoid the introduction of the hammering time scale $\tau$, we
focus on the (admittedly less physically motivated) relaxation behavior.
In real materials, initial grain boundaries, impurities, or sample sizes can
be viewed as analogous to our initial dents, explaining the observation of
dislocation cellular structures both in single crystals and polycrystalline
materials.
Figure 7 shows relaxation without dislocation climb (due to the constraint of
a mobile dislocation population) at various initial length scales in 2D. From
Fig. 7(a) to (f), the net GND density, the net plastic distortion, and the
crystalline orientation map, measured at two well-relaxed states evolved from
different random distortions, all show fuzzy fractal structures, distinguished
only by their longest-length-scale features that originate from the initial
conditions. In Fig. 7(g), (h), and (i), the correlation functions of the GND
density $\rho$, the intrinsic plastic distortion $\beta^{\rm p,I}$, and the
crystalline orientation $\mathbf{\Lambda}$ are applied to characterize the
emergent self-similarity, as discussed in the following section IV.3. They all
exhibit the same power law, albeit with different cutoffs due to the initial
conditions.
### IV.3 Correlation functions
Hierarchical dislocation structures have been observed both experimentally
Kawasaki and Takeuchi (1980); Mughrabi et al. (1986); Ungár et al. (1986);
Schwink (1992) and in our simulations Chen et al. (2010). Early work analyzed
experimental cellular structures using the fractal box counting method Hähner
et al. (1998) or by separating the systems into cells and analyzing their
sizes and misorientations Hughes et al. (1997, 1998); Mika and Dawson (1999);
Hughes and Hansen (2001). In our previous publication, we analyzed our
simulated dislocation patterns using these two methods, and showed broad
agreement with these experimental analyses Chen et al. (2010). In fact, the
lack of measurements of physical order parameters leads to an incomplete
characterization of the emergent self-similarity. (In these analyses of TEM
micrographs, the authors must use an artificial cut-off to facilitate the
analysis; this arbitrary scale obscures the scale-free nature behind the
emergent dislocation patterns.) We will not pursue these methods here.
In our view, the emergent self-similarity should best be exhibited by the
correlation functions of the order parameter fields, such as the GND density
$\rho$, the plastic distortion $\beta^{\rm p}$, and the crystalline
orientation vector $\mathbf{\Lambda}$. Here we focus on scalar invariants of
the various tensor correlation functions.
For the vector correlation function
$\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x})$ (Eq. 52), only the sum
$\mathcal{C}^{\mathbf{\Lambda}}_{ii}(\mathbf{x})$ is a scalar invariant under
three dimensional rotations. For the tensor fields $\rho$ and $\beta^{\rm p}$,
their two-point correlation functions are measured in terms of a complete set
of three independent scalar invariants, which are indicated by ‘tot’ (total),
‘per’ (permutation), and ‘tr’ (trace). In searching for the explanation of the
lack of scaling Chen et al. (2010) for $\beta^{\rm p}$ (see Sec. IV.3.3 and
E.1), we checked whether these independent invariants might scale
independently. In fact, most of them share a single underlying critical
exponent, except for the trace-type scalar invariant of the correlation
function of $\beta^{\rm p,I}$, which goes to a constant in well-relaxed states,
as discussed in Sec. V.1.2.
Figure 8: (Color online) Correlation functions of $\mathbf{\Lambda}$ in both
two and three dimensions. In (a) and (b), red, blue, and green lines indicate
CGD, GOD-MDP, and GOD-LVP simulations, respectively. Left: Correlation
functions of $\mathbf{\Lambda}$ are measured in relaxed, unstrained $1024^{2}$
systems; Right: These correlation functions are measured in relaxed,
unstrained $128^{3}$ systems. All dashed lines show estimated power laws
quoted in Table 1. Figure 9: (Color online) Correlation functions of $\varrho$
in both two and three dimensions. Left: (a) is measured in relaxed, unstrained
$1024^{2}$ systems; Right: (b) is measured in relaxed, unstrained $128^{3}$
systems. All dashed lines show estimated power laws quoted in Table 1. Notice
all three scalar forms of the correlation functions of GND density share the
same power law.
#### IV.3.1 Correlation function of crystalline orientation field
As dislocations self-organize themselves into complex structures, the relative
differences of the crystalline orientations are correlated over a long length
scale.
For a vector field, like the crystalline orientation $\mathbf{\Lambda}$, the
natural two-point correlation function is
$\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x})=\langle(\Lambda_{i}(\mathbf{x})-\Lambda_{i}(0))(\Lambda_{j}(\mathbf{x})-\Lambda_{j}(0))\rangle=2\langle\Lambda_{i}\Lambda_{j}\rangle-2\langle\Lambda_{i}(\mathbf{x})\Lambda_{j}(0)\rangle.$ (52)
Note that we correlate changes in $\mathbf{\Lambda}$ between two points. Just
as for the height-height correlation function in surface growth Chaikin and
Lubensky (1995), adding a constant to $\mathbf{\Lambda}(\mathbf{x})$ (rotating
the sample) leads to an equivalent configuration, so only differences in
rotations can be meaningfully correlated.
It can also be written in Fourier space as
$\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k})=2\langle\Lambda_{i}\Lambda_{j}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{V}\widetilde{\Lambda}_{i}(\mathbf{k})\widetilde{\Lambda}_{j}(-\mathbf{k}).$
(53)
In an isotropic medium, we study the scalar invariant formed from
$\mathcal{C}^{\Lambda}_{ij}$
$\mathcal{C}^{\mathbf{\Lambda}}(\mathbf{x})=\mathcal{C}^{\mathbf{\Lambda}}_{ii}(\mathbf{x})=2\langle\Lambda^{2}\rangle-2\langle\Lambda_{i}(\mathbf{x})\Lambda_{i}(0)\rangle.$
(54)
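On a periodic grid, $\mathcal{C}^{\mathbf{\Lambda}}$ can be evaluated with a single FFT, since the cross term in Eq. (54) is an autocorrelation, as in Eq. (53). A minimal sketch under our own naming, assuming a cubic periodic grid; the same machinery, applied component by component without the difference construction, also yields the tensor invariants of Eqs. (55)-(57) below:

```python
import numpy as np

def orientation_correlation(Lam):
    """Scalar correlation C^Lambda(x) of Eq. (54) on a periodic cubic grid.
    Lam : real array of shape (N, N, N, 3), the orientation vector field.
    The cross term <Lambda_i(x) Lambda_i(0)> is a circular autocorrelation,
    evaluated via the FFT as in Eq. (53)."""
    n_sites = np.prod(Lam.shape[:3])
    mean_sq = np.einsum('...i,...i->', Lam, Lam) / n_sites           # <Lambda^2>
    Lam_k = np.fft.fftn(Lam, axes=(0, 1, 2))
    power = np.einsum('...i,...i->...', Lam_k, np.conj(Lam_k))       # sum_i |Lam_i(k)|^2
    auto = np.real(np.fft.ifftn(power, axes=(0, 1, 2))) / n_sites    # <Lambda_i(x) Lambda_i(0)>
    return 2.0 * mean_sq - 2.0 * auto                                # Eq. (54)
```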
Figure 8 shows the correlation functions of crystalline orientations in both
$1024^{2}$ and $128^{3}$ simulations. The large shift in critical exponents
seen in 2D (Fig. 8(a)) for both CGD and GOD-LVP is not observed in the fully
three dimensional simulations (Fig. 8(b)).
Figure 10: (Color online) Correlation functions of $\beta^{\rm p}$ in two
dimensions. Red, blue, and green lines indicate CGD, GOD-MDP, and GOD-LVP
simulations, respectively. None of these curves shows a convincing power law.
Figure 11: (Color online) Correlation functions of $\beta^{\rm p,I}$ in both
two and three dimensions. In (a) and (b), the correlation functions of the
intrinsic part of plastic distortion field are shown. Left: (a) is measured in
relaxed, unstrained $1024^{2}$ systems; Right: (b) is measured in relaxed,
unstrained $128^{3}$ systems. All dashed lines show estimated power laws
quoted in Table 1. Notice that we omit the correlation functions of
$\mathcal{C}^{\beta^{\rm p,I}}_{tr}$, which are independent of distance, and
unrelated to the emergent self-similarity, as shown in Sec. V.1.2.
#### IV.3.2 Correlation function of GND density field
As GNDs evolve into $\delta$-shock singularities, the critical fluctuations of
the GND density can be measured by its two-point correlation function
$\mathcal{C}^{\rho}(\mathbf{x})$, which decays as the separation between the
two sites increases. The complete set of rotational invariants of the
correlation function of $\rho$ includes three scalar forms
$\displaystyle\mathcal{C}^{\rho}_{tot}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle\rho_{ij}(\mathbf{x})\rho_{ij}(0)\rangle,$ (55)
$\displaystyle\mathcal{C}^{\rho}_{per}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle\rho_{ij}(\mathbf{x})\rho_{ji}(0)\rangle,$ (56)
$\displaystyle\mathcal{C}^{\rho}_{tr}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle\rho_{ii}(\mathbf{x})\rho_{jj}(0)\rangle.$ (57)
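As an illustration only, the three invariants of Eqs. (55)-(57) can be accumulated from FFT cross-correlations of the tensor components; the sketch below assumes a periodic field stored as a numpy array, with conventions chosen for clarity rather than taken from our production code.

```python
import numpy as np

def _xcorr(a, b):
    """<a(x + r) b(x)>_x for periodic scalar fields a, b (correlation theorem)."""
    fa, fb = np.fft.fftn(a), np.fft.fftn(b)
    return np.fft.ifftn(fa * np.conj(fb)).real / a.size

def rho_invariants(rho):
    """Rotational invariants of the GND-density correlation, Eqs. (55)-(57),
    for a periodic field rho of shape (3, 3, N, N, N)."""
    tot = sum(_xcorr(rho[i, j], rho[i, j]) for i in range(3) for j in range(3))
    per = sum(_xcorr(rho[i, j], rho[j, i]) for i in range(3) for j in range(3))
    trace = sum(rho[i, i] for i in range(3))
    return tot, per, _xcorr(trace, trace)
```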
Figure 9 shows all the correlation functions of GND density in both $1024^{2}$
and $128^{3}$ simulations. These three scalar forms of the correlation
functions of $\rho$ exhibit the same critical exponent $\eta$, as listed in
Table 1. Similar to the measurements of $\mathcal{C}^{\mathbf{\Lambda}}$, the
large shift in critical exponents seen in 2D (Fig. 9(a)) for both CGD and GOD-
LVP is not observed in the fully three dimensional simulations (Fig. 9(b)).
#### IV.3.3 Correlation function of plastic distortion field
The plastic distortion $\beta^{\rm p}$ is a mixture of both the divergence-
free $\beta^{\rm p,I}$ and the curl-free $\beta^{\rm p,H}$. Figure 10 shows
that $\beta^{\rm p}$ does not appear to be scale invariant, as observed in our
earlier work Chen et al. (2010). It is crucial to study the correlations of
the two physical fields, $\beta^{\rm p,I}$ and $\beta^{\rm p,H}$, separately.
Similarly to the crystalline orientation $\mathbf{\Lambda}$, we correlate the
differences between $\beta^{\rm p,I}$ at neighboring points. The complete set
of scalar invariants of correlation functions of $\beta^{\rm p,I}$ thus
includes the three scalar forms
$\displaystyle\mathcal{C}^{\beta^{\rm p,I}}_{tot}(\mathbf{x})$
$\displaystyle=$ $\displaystyle\langle(\beta^{\rm
p,I}_{ij}(\mathbf{x})-\beta^{\rm p,I}_{ij}(0))(\beta^{\rm
p,I}_{ij}(\mathbf{x})-\beta^{\rm p,I}_{ij}(0))\rangle$ (58) $\displaystyle=$
$\displaystyle 2\langle\beta^{\rm p,I}_{ij}\beta^{\rm
p,I}_{ij}\rangle-2\langle\beta^{\rm p,I}_{ij}(\mathbf{x})\beta^{\rm
p,I}_{ij}(0)\rangle;$ $\displaystyle\mathcal{C}^{\beta^{\rm
p,I}}_{per}(\mathbf{x})$ $\displaystyle=$ $\displaystyle-\langle(\beta^{\rm
p,I}_{ij}(\mathbf{x})-\beta^{\rm p,I}_{ij}(0))(\beta^{\rm
p,I}_{ji}(\mathbf{x})-\beta^{\rm p,I}_{ji}(0))\rangle$ (59) $\displaystyle=$
$\displaystyle-2\langle\beta^{\rm p,I}_{ij}\beta^{\rm
p,I}_{ji}\rangle+2\langle\beta^{\rm p,I}_{ij}(\mathbf{x})\beta^{\rm
p,I}_{ji}(0)\rangle;$ $\displaystyle\mathcal{C}^{\beta^{\rm
p,I}}_{tr}(\mathbf{x})$ $\displaystyle=$ $\displaystyle\langle(\beta^{\rm
p,I}_{ii}(\mathbf{x})-\beta^{\rm p,I}_{ii}(0))(\beta^{\rm
p,I}_{jj}(\mathbf{x})-\beta^{\rm p,I}_{jj}(0))\rangle$ (60) $\displaystyle=$
$\displaystyle 2\langle\beta^{\rm p,I}_{ii}\beta^{\rm
p,I}_{jj}\rangle-2\langle\beta^{\rm p,I}_{ii}(\mathbf{x})\beta^{\rm
p,I}_{jj}(0)\rangle;$
where an overall minus sign is added to $\mathcal{C}^{\beta^{\rm p,I}}_{per}$
so as to yield a positive measure.
In Fig. 11, the correlation functions of the intrinsic plastic distortion
$\beta^{\rm p,I}$ in both $1024^{2}$ and $128^{3}$ simulations exhibit a
critical exponent $\sigma^{\prime}$. These measured critical exponents are
shown in Table 1. We discuss the less physically relevant case of $\beta^{\rm
p,H}$ in E.1, Fig. 17.
## V Scaling theory
The emergent self-similar dislocation morphologies are characterized by the
rotational invariants of correlation functions of physical observables, such
as the GND density $\rho$, the crystalline orientation $\mathbf{\Lambda}$, and
the intrinsic plastic distortion $\beta^{\rm p,I}$. Here we derive the
relations expected between these correlation functions, and show that their
critical exponents collapse into a single underlying one through a generic
scaling theory.
In our model, the initial elastic stresses are relaxed via dislocation motion,
leading to the formation of cellular structures. In the limit of slow imposed
deformations, the elastic stress goes to zero. We will use the
absence of external stress to simplify our correlation function relations.
(Some relations can be valid regardless of the existence of residual stress.)
Those relations that hold only in stress-free states will be labeled ‘sf’;
they will be applicable in analyzing experiments only insofar as residual
stresses are small.
### V.1 Relations between correlation functions
#### V.1.1 $\mathcal{C}^{\rho}$ and $\mathcal{C}^{\mathbf{\Lambda}}$
For a stress-free state, we thus ignore the elastic strain term in Eq. (14)
and write in Fourier space
$\widetilde{\rho}_{ij}(\mathbf{k})\overset{sf}{=}-ik_{j}\widetilde{\Lambda}_{i}(\mathbf{k})+i\delta_{ij}k_{k}\widetilde{\Lambda}_{k}(\mathbf{k}).$
(61)
First, we can substitute Eq. (61) into the Fourier-transformed form of the
correlation function Eq. (55)
$\displaystyle\widetilde{\mathcal{C}}^{\rho}_{tot}(\mathbf{k})$
$\displaystyle\overset{sf}{=}$
$\displaystyle\frac{1}{V}\biggl{(}-ik_{j}\widetilde{\Lambda}_{i}(\mathbf{k})+i\delta_{ij}k_{k}\widetilde{\Lambda}_{k}(\mathbf{k})\biggr{)}\biggl{(}ik_{j}\widetilde{\Lambda}_{i}(-\mathbf{k})-i\delta_{ij}k_{m}\widetilde{\Lambda}_{m}(-\mathbf{k})\biggr{)}$
(62) $\displaystyle\overset{sf}{=}$
$\displaystyle\frac{1}{V}(\delta_{ij}k^{2}+k_{i}k_{j})\widetilde{\Lambda}_{i}(\mathbf{k})\widetilde{\Lambda}_{j}(-\mathbf{k}).$
Multiplying both sides of Eq. (53) by $(\delta_{ij}k^{2}+k_{i}k_{j})$ gives
$(\delta_{ij}k^{2}+k_{i}k_{j})\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k})\overset{sf}{=}-\frac{2}{V}(\delta_{ij}k^{2}+k_{i}k_{j})\widetilde{\Lambda}_{i}(\mathbf{k})\widetilde{\Lambda}_{j}(-\mathbf{k}).$
(63)
Comparing Eq. (63) and Eq. (62), we may write
$\widetilde{\mathcal{C}}^{\rho}_{tot}$ in terms of
$\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}$ as
$\widetilde{\mathcal{C}}^{\rho}_{tot}(\mathbf{k})\overset{sf}{=}-\frac{1}{2}(\delta_{ij}k^{2}+k_{i}k_{j})\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k}).$
(64)
Second, we can substitute Eq. (61) into the Fourier-transformed form of the
correlation function Eq. (56)
$\widetilde{\mathcal{C}}^{\rho}_{per}(\mathbf{k})\overset{sf}{=}\frac{2}{V}k_{i}k_{j}\widetilde{\Lambda}_{i}(\mathbf{k})\widetilde{\Lambda}_{j}(-\mathbf{k}).$
(65)
Multiplying both sides of Eq. (53) by $k_{i}k_{j}$ and comparing with Eq. (65)
gives
$\widetilde{\mathcal{C}}^{\rho}_{per}(\mathbf{k})\overset{sf}{=}-k_{i}k_{j}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k}).$
(66)
Finally, we substitute Eq. (61) into the Fourier-transformed form of the
correlation function Eq. (57)
$\widetilde{\mathcal{C}}^{\rho}_{tr}(\mathbf{k})\overset{sf}{=}\frac{4}{V}k_{i}k_{j}\widetilde{\Lambda}_{i}(\mathbf{k})\widetilde{\Lambda}_{j}(-\mathbf{k}).$
(67)
Repeating the same procedure of deriving
$\widetilde{\mathcal{C}}^{\rho}_{per}$, we write
$\widetilde{\mathcal{C}}^{\rho}_{tr}$ in terms of
$\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}$ as
$\widetilde{\mathcal{C}}^{\rho}_{tr}(\mathbf{k})\overset{sf}{=}-2k_{i}k_{j}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k}).$
(68)
Through an inverse Fourier transform, we convert Eq. (64), Eq. (66), and Eq.
(68) back to real space to find
$\displaystyle\mathcal{C}^{\rho}_{tot}(\mathbf{x})$
$\displaystyle\overset{sf}{=}$
$\displaystyle\frac{1}{2}\partial^{2}\mathcal{C}^{\mathbf{\Lambda}}(\mathbf{x})+\frac{1}{2}\partial_{i}\partial_{j}\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x}),$
(69) $\displaystyle\mathcal{C}^{\rho}_{per}(\mathbf{x})$
$\displaystyle\overset{sf}{=}$
$\displaystyle\partial_{i}\partial_{j}\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x}),$
(70) $\displaystyle\mathcal{C}^{\rho}_{tr}(\mathbf{x})$
$\displaystyle\overset{sf}{=}$ $\displaystyle
2\partial_{i}\partial_{j}\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x}).$
(71)
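As a concrete numerical cross-check of these stress-free relations, the right-hand side of Eq. (70) can be evaluated spectrally from a measured $\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x})$ array and compared with a direct measurement of $\mathcal{C}^{\rho}_{per}(\mathbf{x})$. The following Python sketch is illustrative only; the grid and box conventions are our own choices.

```python
import numpy as np

def rhs_of_eq70(C_Lam_ij, box=1.0):
    """Spectral evaluation of partial_i partial_j C^Lambda_ij(x), the right-hand
    side of Eq. (70), for a periodic array C_Lam_ij of shape (3, 3, N, N, N).
    In a stress-free state this should match the measured C^rho_per(x)."""
    dims = C_Lam_ij.shape[2:]
    k = np.meshgrid(*[2 * np.pi * np.fft.fftfreq(n, d=box / n) for n in dims],
                    indexing="ij")
    out = np.zeros(dims, dtype=complex)
    for i in range(3):
        for j in range(3):
            # each spatial derivative brings down i*k, so d_i d_j -> -k_i k_j
            out += -k[i] * k[j] * np.fft.fftn(C_Lam_ij[i, j])
    return np.fft.ifftn(out).real
```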
#### V.1.2 $\mathcal{C}^{\beta^{\rm p,I}}$ and
$\mathcal{C}^{\mathbf{\Lambda}}$
The intrinsic part of the plastic distortion field is directly related to the
GND density field. In stress-free states, the crystalline orientation vector
can fully describe the GND density. We thus can connect
$\mathcal{C}^{\beta^{\rm p,I}}$ to $\mathcal{C}^{\mathbf{\Lambda}}$.
First, substituting $\widetilde{\beta}^{\rm
p,I}_{ij}=-i\varepsilon_{ilm}k_{l}\widetilde{\rho}_{mj}/k^{2}$ into the
Fourier-transformed form of Eq. (58) gives
$\widetilde{\mathcal{C}}^{\beta^{\rm p,I}}_{tot}(\mathbf{k})=2\langle\beta^{\rm p,I}_{ij}\beta^{\rm p,I}_{ij}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{V}\biggl{(}-i\epsilon_{ilm}\frac{k_{l}}{k^{2}}\widetilde{\rho}_{mj}(\mathbf{k})\biggr{)}\biggl{(}i\epsilon_{ist}\frac{k_{s}}{k^{2}}\widetilde{\rho}_{tj}(-\mathbf{k})\biggr{)}=2\langle\beta^{\rm p,I}_{ij}\beta^{\rm p,I}_{ij}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{k^{2}}\biggl{(}\frac{1}{V}\widetilde{\rho}_{mj}(\mathbf{k})\widetilde{\rho}_{mj}(-\mathbf{k})\biggr{)}.$ (72)
During this derivation, some terms vanish due to the geometrical constraint on
$\rho$, Eq. (6). Multiplying both sides of Eq. (72) by $-k^{2}/2$ and applying
the Fourier-transformed form of Eq. (55) gives
$-\frac{k^{2}}{2}\widetilde{\mathcal{C}}^{\beta^{\rm
p,I}}_{tot}(\mathbf{k})=\widetilde{\mathcal{C}}^{\rho}_{tot}(\mathbf{k}).$
(73)
In stress-free states, we can substitute Eq. (64) into Eq. (73)
$-\frac{k^{2}}{2}\widetilde{\mathcal{C}}^{\beta^{\rm
p,I}}_{tot}(\mathbf{k})\overset{sf}{=}\widetilde{\mathcal{C}}^{\rho,sf}_{tot}(\mathbf{k})=-\frac{1}{2}\biggl{(}\delta_{ij}k^{2}+k_{i}k_{j}\biggr{)}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k}),$
(74)
which, after multiplying both sides by $-2/k^{2}$, becomes
$\widetilde{\mathcal{C}}^{\beta^{\rm
p,I}}_{tot}(\mathbf{k})\overset{sf}{=}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}(\mathbf{k})+\frac{k_{i}k_{j}}{k^{2}}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k}).$
(75)
Second, substituting $\widetilde{\beta}^{\rm
p,I}_{ij}=-i\varepsilon_{ilm}k_{l}\widetilde{\rho}_{mj}/k^{2}$ into the
Fourier-transformed form of Eq. (59) gives
$\widetilde{\mathcal{C}}^{\beta^{\rm p,I}}_{per}(\mathbf{k})=-2\langle\beta^{\rm p,I}_{ij}\beta^{\rm p,I}_{ji}\rangle(2\pi)^{3}\delta(\mathbf{k})+\frac{2}{V}\biggl{(}-i\epsilon_{ilm}\frac{k_{l}}{k^{2}}\widetilde{\rho}_{mj}(\mathbf{k})\biggr{)}\biggl{(}i\epsilon_{jst}\frac{k_{s}}{k^{2}}\widetilde{\rho}_{ti}(-\mathbf{k})\biggr{)}=-2\langle\beta^{\rm p,I}_{ij}\beta^{\rm p,I}_{ji}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{Vk^{4}}k_{i}k_{j}\widetilde{\rho}_{mj}(\mathbf{k})\widetilde{\rho}_{mi}(-\mathbf{k})+\frac{2}{k^{2}}\widetilde{\mathcal{C}}^{\rho}_{tot}(\mathbf{k})-\frac{2}{k^{2}}\widetilde{\mathcal{C}}^{\rho}_{tr}(\mathbf{k}),$ (76)
where we have skipped straightforward but tedious expansions and used the
geometrical constraint on $\rho$, Eq. (6). Notice that this relation holds even in
the presence of stress.
In stress-free states, we substitute Eqs. (61), (64), (68) into Eq. (76), and
ignore the constant zero wavelength term
$\widetilde{\mathcal{C}}^{\beta^{\rm p,I}}_{per}(\mathbf{k})\overset{sf}{=}-\frac{2k_{i}k_{j}}{Vk^{4}}\biggl{(}-ik_{j}\widetilde{\Lambda}_{m}(\mathbf{k})+i\delta_{mj}k_{k}\widetilde{\Lambda}_{k}(\mathbf{k})\biggr{)}\biggl{(}ik_{i}\widetilde{\Lambda}_{m}(-\mathbf{k})-i\delta_{mi}k_{n}\widetilde{\Lambda}_{n}(-\mathbf{k})\biggr{)}-\frac{1}{k^{2}}(k^{2}\delta_{ij}+k_{i}k_{j})\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k})+\frac{4}{k^{2}}k_{i}k_{j}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k})\overset{sf}{=}2\frac{k_{i}k_{j}}{k^{2}}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k}).$ (77)
Finally, substituting $\widetilde{\beta}^{\rm
p,I}_{ij}=-i\varepsilon_{ilm}k_{l}\widetilde{\rho}_{mj}/k^{2}$ into the
Fourier-transformed form of Eq. (60) gives
$\widetilde{\mathcal{C}}^{\beta^{\rm p,I}}_{tr}(\mathbf{k})=2\langle\beta^{\rm p,I}_{ii}\beta^{\rm p,I}_{jj}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{V}\biggl{(}-i\epsilon_{ilm}\frac{k_{l}}{k^{2}}\widetilde{\rho}_{mi}(\mathbf{k})\biggr{)}\biggl{(}i\epsilon_{jst}\frac{k_{s}}{k^{2}}\widetilde{\rho}_{tj}(-\mathbf{k})\biggr{)}=2\langle\beta^{\rm p,I}_{ii}\beta^{\rm p,I}_{jj}\rangle(2\pi)^{3}\delta(\mathbf{k})+\frac{2}{Vk^{4}}k_{i}k_{j}\widetilde{\rho}_{mi}(\mathbf{k})\widetilde{\rho}_{mj}(-\mathbf{k})-\frac{2}{k^{2}}\widetilde{\mathcal{C}}^{\rho}_{tot}(\mathbf{k})+\frac{2}{k^{2}}\widetilde{\mathcal{C}}^{\rho}_{per}(\mathbf{k}),$ (78)
valid in the presence of stress. Here we repeat a procedure similar to the one
used to derive Eq. (76).
In stress-free states, we substitute Eqs. (61), (64), (66) into Eq. (78)
$\widetilde{\mathcal{C}}^{\beta^{\rm p,I}}_{tr}(\mathbf{k})\overset{sf}{=}2\langle\beta^{\rm p,I}_{ii}\beta^{\rm p,I}_{jj}\rangle(2\pi)^{3}\delta(\mathbf{k})+\frac{1}{k^{2}}(k^{2}\delta_{ij}+k_{i}k_{j})\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k})-\frac{2}{k^{2}}k_{i}k_{j}\widetilde{\mathcal{C}}^{\mathbf{\Lambda}}_{ij}(\mathbf{k})+\frac{2k_{i}k_{j}}{Vk^{4}}\biggl{(}-ik_{i}\widetilde{\Lambda}_{m}(\mathbf{k})+i\delta_{mi}k_{k}\widetilde{\Lambda}_{k}(\mathbf{k})\biggr{)}\biggl{(}ik_{j}\widetilde{\Lambda}_{m}(-\mathbf{k})-i\delta_{mj}k_{n}\widetilde{\Lambda}_{n}(-\mathbf{k})\biggr{)}\overset{sf}{=}2\langle\beta^{\rm p,I}_{ii}\beta^{\rm p,I}_{jj}\rangle(2\pi)^{3}\delta(\mathbf{k}),$ (79)
which is a trivial constant in space.
Through an inverse Fourier transform, Eqs. (75), (77), and (79) can be
converted back to real space, giving
$\mathcal{C}^{\beta^{\rm p,I}}_{tot}(\mathbf{x})\overset{sf}{=}\mathcal{C}^{\mathbf{\Lambda}}(\mathbf{x})+\frac{1}{4\pi}\int d^{3}\mathbf{x}^{\prime}\biggl{(}\frac{\delta_{ij}}{R^{3}}-3\frac{R_{i}R_{j}}{R^{5}}\biggr{)}\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x}^{\prime}),$ (80)
$\mathcal{C}^{\beta^{\rm p,I}}_{per}(\mathbf{x})\overset{sf}{=}\frac{1}{2\pi}\int d^{3}\mathbf{x}^{\prime}\biggl{(}\frac{\delta_{ij}}{R^{3}}-3\frac{R_{i}R_{j}}{R^{5}}\biggr{)}\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x}^{\prime}),$ (81)
$\mathcal{C}^{\beta^{\rm p,I}}_{tr}(\mathbf{x})\overset{sf}{=}2\int d^{3}\mathbf{x}^{\prime}\beta^{\rm p,I}_{ii}(\mathbf{x}^{\prime})\beta^{\rm p,I}_{jj}(\mathbf{x}^{\prime})=2\langle\beta^{\rm p,I}_{ii}\beta^{\rm p,I}_{jj}\rangle,$ (82)
where $\mathbf{R}=\mathbf{x}^{\prime}-\mathbf{x}$. According to Eqs. (75) and
(77), we can extract a relation
$\mathcal{C}^{\beta^{\rm p,I}}_{per}(\mathbf{x})-2\mathcal{C}^{\beta^{\rm
p,I}}_{tot}(\mathbf{x})+2\mathcal{C}^{\mathbf{\Lambda}}(\mathbf{x})\overset{sf}{=}const.$
(83)
Table 1: Critical exponents for correlation functions at stress-free
states. (C.F. and S.T. represent ‘Correlation Functions’ and ‘Scaling Theory’,
respectively.)
C.F. | S.T. | Simulations
---|---|---
Climb&Glide | Glide Only (MDP) | Glide Only (LVP)
2D($1024^{2}$) | 3D($128^{3}$) | 2D($1024^{2}$) | 3D($128^{3}$) | 2D($1024^{2}$) | 3D($128^{3}$)
$\mathcal{C}^{\rho}_{tot}$ | $\eta$ | $0.80\pm 0.30$ | $0.55\pm 0.05$ | $0.45\pm 0.25$ | $0.60\pm 0.20$ | $0.80\pm 0.30$ | $0.55\pm 0.05$
$\mathcal{C}^{\rho}_{per}$ | $\eta$ | $0.80\pm 0.20$ | $0.55\pm 0.05$ | $0.45\pm 0.20$ | $0.60\pm 0.20$ | $0.70\pm 0.30$ | $0.50\pm 0.05$
$\mathcal{C}^{\rho}_{tr}$ | $\eta$ | $0.80\pm 0.20$ | $0.55\pm 0.05$ | $0.45\pm 0.20$ | $0.60\pm 0.10$ | $0.70\pm 0.30$ | $0.45\pm 0.05$
$\mathcal{C}^{\mathbf{\Lambda}}$ | $2-\eta$ | $1.10\pm 0.65$ | $1.45\pm 0.25$ | $1.50\pm 0.30$ | $1.35\pm 0.25$ | $1.10\pm 0.65$ | $1.50\pm 0.25$
$\mathcal{C}^{\beta^{\rm p,I}}_{tot}$ | $2-\eta$ | $1.10\pm 0.60$ | $1.45\pm 0.15$ | $1.45\pm 0.25$ | $1.30\pm 0.20$ | $1.10\pm 0.60$ | $1.50\pm 0.20$
$\mathcal{C}^{\beta^{\rm p,I}}_{per}$ | $2-\eta$ | $1.15\pm 0.45$ | $1.50\pm 0.25$ | $1.45\pm 0.25$ | $1.50\pm 0.50$ | $1.20\pm 0.45$ | $1.55\pm 0.25$
We can convert Eq. (73) through an inverse Fourier transform
$\mathcal{C}^{\rho}_{tot}(\mathbf{x})=\frac{1}{2}\partial^{2}\mathcal{C}^{\beta^{\rm
p,I}}_{tot}(\mathbf{x}),$ (84)
or
$\mathcal{C}^{\beta^{\rm p,I}}_{tot}(\mathbf{x})=-\frac{1}{2\pi}\int
d^{3}\mathbf{x}^{\prime}\frac{\mathcal{C}^{\rho}_{tot}(\mathbf{x}^{\prime})}{R},$
(85)
valid in the presence of residual stress.
### V.2 Critical exponent relations
When the self-similar dislocation structures emerge, the correlation functions
of all physical quantities are expected to exhibit scale-free power laws. We
consider the simplest possible scenario, in which single-variable scaling holds
and reveals the minimal number of underlying critical exponents.
First, we define the critical exponent $\eta$ as the power law describing the
asymptotic decay of
$\mathcal{C}^{\rho}_{tot}(\mathbf{x})\sim|\mathbf{x}|^{-\eta}$, one of the
correlation functions for the GND density tensor (summed over components). If
we rescale the spatial variable $\mathbf{x}$ by a factor $b$, the correlation
function $\mathcal{C}^{\rho}$ is rescaled by the power law as
$\mathcal{C}^{\rho}_{tot}(b\mathbf{x})=b^{-\eta}\mathcal{C}^{\rho}_{tot}(\mathbf{x}).$
(86)
Similarly, the correlation function of the crystalline orientation field
$\mathbf{\Lambda}$ is described by a power law,
$\mathcal{C}^{\mathbf{\Lambda}}(\mathbf{x})\sim|\mathbf{x}|^{\sigma}$, where
$\sigma$ is its critical exponent. We repeat the rescaling by the same factor
$b$
$\mathcal{C}^{\mathbf{\Lambda}}(b\mathbf{x})=b^{\sigma}\mathcal{C}^{\mathbf{\Lambda}}(\mathbf{x}).$
(87)
Since $\mathcal{C}^{\rho}_{tot}$ can be written in terms of
$\mathcal{C}^{\mathbf{\Lambda}}$, Eq. (69), we rescale this relation by the
same factor $b$
$\mathcal{C}^{\rho}_{tot}(b\mathbf{x})\overset{sf}{=}\frac{1}{2}\biggl{[}\frac{\partial}{b}\biggr{]}^{2}\mathcal{C}^{\mathbf{\Lambda}}(b\mathbf{x})+\frac{1}{2}\biggl{[}\frac{\partial_{i}}{b}\biggr{]}\biggl{[}\frac{\partial_{j}}{b}\biggr{]}\mathcal{C}^{\mathbf{\Lambda}}_{ij}(b\mathbf{x}).$
(88)
Substituting Eq. (87) into Eq. (88) gives
$\displaystyle\mathcal{C}^{\rho}_{tot}(b\mathbf{x})$
$\displaystyle\overset{sf}{=}$ $\displaystyle
b^{\sigma-2}\biggl{[}\frac{1}{2}\partial^{2}\mathcal{C}^{\mathbf{\Lambda}}(\mathbf{x})+\frac{1}{2}\partial_{i}\partial_{j}\mathcal{C}^{\mathbf{\Lambda}}_{ij}(\mathbf{x})\biggr{]}$
(89) $\displaystyle\overset{sf}{=}$ $\displaystyle
b^{\sigma-2}\mathcal{C}^{\rho}_{tot}(\mathbf{x}).$
Comparing with Eq. (86) gives a relation between $\sigma$ and $\eta$
$\sigma=2-\eta.$ (90)
We can repeat the same renormalization group procedure to analyze the critical
exponents of the other two scalar forms of the correlation functions of the
GND density field. Clearly, $\mathcal{C}^{\rho}_{per}$ and
$\mathcal{C}^{\rho}_{tr}$ share the same critical exponent $\eta$ with
$\mathcal{C}^{\rho}_{tot}$.
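In practice we extract the exponents from log-log fits over an intermediate window of distances, and Eq. (90) can then be checked directly on the measured curves. The sketch below is a minimal Python illustration; the variable names (r, C_rho, C_Lam, rmin, rmax) stand for hypothetical measured data and fitting bounds.

```python
import numpy as np

def fit_exponent(r, corr, rmin, rmax):
    """Least-squares slope of log(corr) versus log(r) over [rmin, rmax]:
    corr ~ r**slope, so the slope estimates -eta for C^rho_tot (Eq. 86)
    and sigma = 2 - eta for C^Lambda (Eqs. 87, 90)."""
    r, corr = np.asarray(r), np.asarray(corr)
    mask = (r >= rmin) & (r <= rmax) & (corr > 0)
    slope, _ = np.polyfit(np.log(r[mask]), np.log(corr[mask]), 1)
    return slope

# Consistency check of Eq. (90) on measured curves:
#   eta   = -fit_exponent(r, C_rho, rmin, rmax)
#   sigma =  fit_exponent(r, C_Lam, rmin, rmax)
# The scaling theory predicts sigma close to 2 - eta, within the error bars of Table 1.
```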
Also, we can define the critical exponent $\sigma^{\prime}$ as the power law
describing the asymptotic growth of $\mathcal{C}^{\beta^{\rm
p,I}}_{tot}(\mathbf{x})\sim|\mathbf{x}|^{\sigma^{\prime}}$, one of the
correlation functions for the intrinsic part of the plastic distortion field.
We can rescale the correlation function $\mathcal{C}^{\beta^{\rm p,I}}$
$\mathcal{C}^{\beta^{\rm
p,I}}_{tot}(b\mathbf{x})=b^{\sigma^{\prime}}\mathcal{C}^{\beta^{\rm
p,I}}_{tot}(\mathbf{x}).$ (91)
We rescale the relation Eq. (84) by the same factor $b$, and substitute Eq.
(91) into it
$\displaystyle\mathcal{C}^{\rho}_{tot}(b\mathbf{x})$ $\displaystyle=$
$\displaystyle\frac{1}{2}\biggl{[}\frac{\partial}{b}\biggr{]}^{2}\mathcal{C}^{\beta^{\rm
p,I}}_{tot}(b\mathbf{x})=b^{\sigma^{\prime}-2}\biggl{[}\frac{1}{2}\partial^{2}\mathcal{C}^{\beta^{\rm
p,I}}_{tot}(\mathbf{x})\biggr{]}$ (92) $\displaystyle=$ $\displaystyle
b^{\sigma^{\prime}-2}\mathcal{C}^{\rho}_{tot}(\mathbf{x}).$
Comparing with Eq. (86) also gives a relation between $\sigma^{\prime}$ and
$\eta$
$\sigma^{\prime}=2-\eta.$ (93)
Since both $\mathcal{C}^{\beta^{\rm p,I}}_{tot}$ and
$\mathcal{C}^{\mathbf{\Lambda}}$ share the same critical exponent $2-\eta$, it
is clear that $\mathcal{C}^{\beta^{\rm p,I}}_{per}$, the other scalar form of
the correlation functions of the intrinsic plastic distortion field, also
shares this critical exponent, according to Eq. (83).
Thus the correlation functions of three physical quantities (the GND density
$\rho$, the crystalline orientation $\mathbf{\Lambda}$, and the intrinsic
plastic distortion $\beta^{\rm p,I}$) all share the same underlying universal
critical exponent $\eta$ for self-similar morphologies in the case of zero
residual stress; this remains true in the limit of slow imposed deformations.
Table 1 verifies the existence of a single underlying critical exponent in both
two- and three-dimensional simulations for each type of dynamics. Imposed
strain, studied in Ref. Chen et al., 2010, could in principle change $\eta$,
but the scaling relations derived here should still apply. The strain, of
course, breaks the isotropic symmetry, so additional independent correlation
functions become available to measure.
### V.3 Coarse graining, correlation functions, and cutoffs
Our dislocation density $\rho$, as discussed in Sec. III, is a coarse-grained
average over some distance $\Sigma$ – taking the discrete microscopic
dislocations and yielding a continuum field expressing their flux in different
directions. Our power laws and scaling will be cut off in some way at this
coarse-graining scale. For our simulations, the correlation functions extend
down to a few times the numerical grid spacing (depending on the numerical
diffusion in the algorithm we use). For experiments, the correlation functions
will be cut off in ways that are determined by the instrumental resolution.
Since the process of coarse-graining is at the heart of the renormalization-
group methods we rely upon to explain the emergent scale invariance in our
model, we make an initial exploration here of how coarse-graining by the
Gaussian blur of Eq. (47) and Eq. (48) affects the
$\rho^{\Sigma}-\rho^{\Sigma}$ correlation function.
Following Eq. (47),
$\displaystyle\mathcal{C}^{\rho^{\Sigma}}_{tot}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle\rho_{ij}^{\Sigma}(\mathbf{x})\rho_{ij}^{\Sigma}(0)\rangle$
(94) $\displaystyle=$
$\displaystyle\frac{1}{V}\frac{1}{(2\pi\Sigma^{2})^{3}}\int
d^{3}\mathbf{y}\int
d^{3}\mathbf{z}\rho_{ij}^{0}(\mathbf{y}+\mathbf{z})e^{-z^{2}/(2\Sigma^{2})}$
$\displaystyle\times\int
d^{3}\mathbf{z}^{\prime}\rho_{ij}^{0}(\mathbf{y}+\mathbf{x}+\mathbf{z}^{\prime})e^{-z^{\prime
2}/(2\Sigma^{2})}.$
By changing variables $\mathbf{s}=\mathbf{y}+\mathbf{z}$ and
$\mathbf{\Delta}=\mathbf{z}^{\prime}-\mathbf{z}$, we integrate out the
variable $\mathbf{z}$ of Eq. (94)
$\displaystyle\mathcal{C}^{\rho^{\Sigma}}_{tot}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\frac{1}{8\pi^{3/2}\Sigma^{3}}\frac{1}{V}\int
d^{3}\mathbf{\Delta}\int
d^{3}\mathbf{s}\rho_{ij}^{0}(\mathbf{s})\rho_{ij}^{0}(\mathbf{s}+\mathbf{\Delta}+\mathbf{x})e^{-\Delta^{2}/(4\Sigma^{2})}$
(95) $\displaystyle=$ $\displaystyle\frac{1}{8\pi^{3/2}\Sigma^{3}}\int
d^{3}\mathbf{\Delta}\mathcal{C}^{\rho^{0}}_{tot}(\mathbf{x}+\mathbf{\Delta})e^{-\Delta^{2}/(4\Sigma^{2})}.$
In our simulations, the correlation functions of the GND density can be
described by a power law (Eq. 86),
$\mathcal{C}^{\rho}_{tot}(\mathbf{x})=g|\mathbf{x}|^{-\eta}$, where $g$ is a
constant. Thus, Eq. (95) becomes
$\mathcal{C}^{\rho^{\Sigma}}_{tot}(\mathbf{x})=\frac{g}{8\pi^{3/2}\Sigma^{3}}\int
d^{3}\mathbf{\Delta}|\mathbf{x}+\mathbf{\Delta}|^{-\eta}e^{-\Delta^{2}/(4\Sigma^{2})}.$
(96)
This correlation function of the coarse-grained GND density at the given scale
$\Sigma$ is a power-law smeared by a Gaussian distribution.
Figure 12: Scaling function of the correlation function of coarse-grained GND
density $\rho^{\Sigma}.$ We calculate the correlation function of the coarse-
grained GND density at the given scale $\Sigma$. Theoretically, the scaling
function flattens out to 1, leaving the bare power law unchanged, once the
separation is far larger than the coarse-graining scale, and it cuts the power
law off below that scale.
Since the coarse-grained correlation function is a rotationally invariant
scalar, we may take $\mathbf{x}$ along the $x$ axis, $\mathbf{x}=(x,0,0)$. We
can then evaluate the integral of Eq. (96) in cylindrical coordinates
$\mathbf{\Delta}=(X,r,\theta)$
$\mathcal{C}^{\rho^{\Sigma}}_{tot}(x,\Sigma)=\frac{g}{8\pi^{3/2}\Sigma^{3}}\int_{0}^{\infty}2\pi r\,dr\int_{-\infty}^{\infty}dX\,\bigl{[}(x+X)^{2}+r^{2}\bigr{]}^{-\eta/2}e^{-(X^{2}+r^{2})/(4\Sigma^{2})}=\frac{g}{2^{1+\eta}\pi^{1/2}}\Sigma^{-\eta-1}\int_{-\infty}^{\infty}dX\,e^{(x^{2}+2xX)/(4\Sigma^{2})}\Gamma\bigl{(}1-\eta/2,(x+X)^{2}/(4\Sigma^{2})\bigr{)}.$ (97)
We can rewrite this coarse-grained correlation function, Eq. (97), as a power
law multiplied by a scaling function
$\mathcal{C}^{\rho^{\Sigma}}_{tot}(x,\Sigma)=g|x|^{-\eta}\Psi(x/\Sigma),$ (98)
where the scaling function $\Psi(\cdot)$ (Fig. 12) equals
$\Psi(\phi)=\frac{1}{2^{1+\eta}\pi^{1/2}}|\phi|^{\eta}\int_{-\infty}^{\infty}ds\,e^{\phi(\phi+2s)/4}\Gamma\bigl{(}1-\eta/2,(s+\phi)^{2}/4\bigr{)}.$
(99)
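The scaling function can also be evaluated by brute force from the smearing integral of Eq. (96), without using the closed form of Eq. (99). The sketch below (Python with scipy, $\Sigma$ set to 1 so that $\phi=x/\Sigma$, and a representative three-dimensional $\eta$ from Table 1) is meant only to illustrate the crossover shown in Fig. 12: $\Psi$ flattens to 1 for $\phi\gg 1$ and cuts the power law off for $\phi\lesssim 1$.

```python
import numpy as np
from scipy.integrate import dblquad

ETA = 0.55  # representative 3D exponent from Table 1

def psi(phi, eta=ETA):
    """Scaling function Psi(phi) of Eq. (98), computed by performing the
    Gaussian smearing of Eq. (96) in cylindrical coordinates with Sigma = 1.
    A brute-force numerical sketch, not the closed form of Eq. (99)."""
    def integrand(r, s):
        return 2.0 * np.pi * r * ((phi + s)**2 + r**2)**(-eta / 2.0) \
               * np.exp(-(s**2 + r**2) / 4.0)
    val, _ = dblquad(integrand, -np.inf, np.inf, lambda s: 0.0, lambda s: np.inf)
    return abs(phi)**eta / (8.0 * np.pi**1.5) * val

# e.g. print([round(psi(p), 3) for p in (0.5, 1.0, 2.0, 5.0, 10.0)])
```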
## VI Conclusion
In our earlier works Limkumnerd and Sethna (2006); Chen et al. (2010); Choi et
al. (2012a), we have proposed a flexible framework of CDD to study complex
mesoscale phenomena of collective dislocation motion. Traditionally,
deterministic CDDs have missed the experimentally ubiquitous feature of
cellular pattern formation. Our CDD models have made progress in that respect.
In the beginning, we focused our efforts on describing coarse-grained
dislocations that naturally develop dislocation cellular structures in ways
that are consistent with experimental observations of scale invariance and
fractality, a target achieved in Ref. Chen et al., 2010. However, that paper
studied only 2D, instead of the more realistic 3D.
In this manuscript, we go further in many aspects of the theory, extending the
results of our previous work:
We provide a derivation of our theory that explains its differences from
traditional theories of plasticity. In addition to our previously studied
climb-glide (CGD) and glide-only (GOD-MDP) models, we extend our construction
to incorporate vacancies, and re-derive, following Acharya and Roy (2006), a
different glide-only dynamics (GOD-LVP), which we show exhibits 2D behavior
very similar to that of our CGD model. In this way, the GOD-LVP and CGD
dynamics become statistically similar in 2D, while the previously studied,
less physical GOD-MDP model shows rather different 2D behavior Chen et al.
(2010).
We present 3D simulation results here for the first time, showing
qualitatively different behavior from that of 2D. In 3D, all three types of
dynamics – CGD, GOD-MDP and GOD-LVP – show similar non-trivial fractal
patterns and scaling dimensions. Thus our 3D analysis shows that the flatter
‘grain boundaries’ we observe in the 2D simulations are not intrinsic to our
dynamics, but are an artifact of the artificial $z$-independent initial
conditions. Experimentally, grain boundaries are indeed flatter and cleaner
than cell walls, and our theory no longer provides a new explanation for this
distinction. We expect that the dislocation core energies left out of our
model would flatten the walls, and that adding disorder or entanglement would
prevent the low-temperature glide-only dynamics from flattening as much.
We also fully describe, in a statistical sense, multiple correlation functions
– the local orientation, the plastic distortion, the GND density – their
symmetries and their mutual scaling relations. Correlation functions of
important physical quantities are categorized and analytically shown to share
one stress-free exponent. The anomaly in the correlation functions of
$\beta^{\rm p}$, which was left as a question in our previous publication Chen
et al. (2010), has been discussed and explained. All of these correlation
functions and properties are verified with the numerical results of the
dynamics that we extensively discussed.
As discussed in Sec. I, our model is an immensely simplified caricature of the
deformation of real materials. How does it connect to reality?
First, we show that a model for which the dynamics is driven only by elastic
strain produces realistic cell wall structures even while ignoring slip
systems, crystalline anisotropy Hughes et al. (1998), pinning, junction
formation, and SSDs. The fact that low-energy dislocation structures (LEDS)
provide natural explanations for many properties of these structures has long
been emphasized by Ref. Kuhlmann-Wilsdorf, 1987. Intermittent flow, forest
interactions, and pinning will in general impede access to low energy states.
These real-world features, our model suggests, can be important for the
morphology of the cell wall structures but are not the root cause of their
formation nor of their evolution under stress (discussed in previous work Chen
et al. (2010)).
One must note, however, that strain energy minimization does not provide the
explanation for wall structures in our model material. Indeed, there is an
immense space of dislocation densities which make the strain energy zero
Limkumnerd and Sethna (2007), including many continuous densities. Our
dynamics relaxes into a small subset of these allowed structures – it is the
dynamics that leads to cell structure formation here, not purely the energy.
In discrete dislocation simulations and real materials, the quantization of
the Burgers vector leads to a weak logarithmic energetic preference for sharp
walls. This $-\mu{\mathbf{b}}/(4\pi(1-\nu))\theta\log\theta$ energy of low-
angle grain boundaries yields a $\log 2$ preference for one wall of angle
$\theta$ rather than two walls of angle $\theta/2$. This leads to a ‘zipping’
together of low angle grain boundaries. Since $\mathbf{b}\to 0$ in a continuum
theory, this preference is missing from our model. Yet we still find cell
walls forming, which suggests that such mechanisms are not central to cell wall
formation.
Second, how should we connect our fractal cell wall structures with those
(fractal or non-fractal) seen in experiments? Many qualitatively different
kinds of cellular structures are seen in experiments – variously termed cell
block structures, mosaic structures, ordinary cellular structures, …. Ref.
Hansen et al., 2011 recently categorized these structures into three types,
and argues that the orientation of the stress with respect to the crystalline
axes largely determines which morphology is exhibited. The cellular structures
in our model, which ignores crystalline anisotropy, likely are the theoretical
progenitors of all of these morphologies. In particular, Hansen’s type 1 and
type 3 structures incorporate both ‘geometrically necessary’ and ‘incidental
dislocation’ boundaries (GNBs and IDBs), while type 2 structures incorporate
only the latter. Our simulations cannot distinguish between these two types,
and indeed qualitatively look similar to Hansen’s type 2 structures. One
should note that the names of these boundaries are misleading – the
‘incidental’ boundaries do mediate geometrical rotations, with the type 2
boundaries at a given strain having similar average misorientations to the
geometrically necessary boundaries of type 1 structures (Hansen et al., 2011,
Figure 8). It is commonly asserted that the IDBs are formed by statistical
trapping of stored dislocations; our model suggests that stochasticity is not
necessary for their formation.
Third, how is our model compatible with traditional plasticity, which focuses
on the total density of dislocation lines? Our model evolves the net
dislocation density, ignoring the geometrically unnecessary or statistically
stored dislocations with cancelling Burgers vectors. These latter dislocations
are important for yield stress and work hardening on macroscales, but are
invisible to our theory (since they do not generate stress). Insofar as the
cancellation of Burgers vectors on the macroscale is due to cell walls of
opposing misorientations on the mesoscale, there need be no conflict here.
Also, our model remains agnostic about whether cell boundaries include
significant components of geometrically unnecessary dislocations. However, our
model does assume that the driving force for cell boundary formation is the
motion of GNDs, as opposed to (for example) inhomogeneous flows of SSDs.
There still remain many fascinating mesoscale experiments, such as dislocation
avalanches Miguel et al. (2001); Dimiduk et al. (2006), size-dependent
hardness (smaller is stronger) Uchic et al. (2004), and complex anisotropic
loading Schmitt et al. (1991); Lopes et al. (2003), that we hope to emulate.
We intend in the future to include several relevant additional ingredients to
our dynamics, such as vacancies (C.1), impurities (C.2), immobile
dislocations/SSDs and slip systems, to reflect real materials.
## Acknowledgement
We would like to thank A. Alemi, P. Dawson, R. LeVeque, M. Miller, E. Siggia,
A. Vladimirsky, D. Warner, M. Zaiser and S. Zapperi for helpful and inspiring
discussions on plasticity and numerical methods over the last three years. We
were supported by the Basic Energy Sciences (BES) program of DOE through DE-
FG02-07ER46393. Our work was partially supported by the National Center for
Supercomputing Applications under MSS090037 and utilized the Lincoln and Abe
clusters.
## Appendix A Physical quantities in terms of the plastic distortion tensor
$\beta^{\rm p}$
In an isotropic infinitely large medium, the local deformation $\bm{u}$, the
elastic distortion $\beta^{\rm e}$ and the internal long-range stress
$\sigma^{\rm int}$ can be expressed Mura (1991); Limkumnerd and Sethna (2006)
in terms of the plastic distortion field $\beta^{\rm p}$ in Fourier space:
$\displaystyle\widetilde{\bm{u}}_{i}(\mathbf{k})$ $\displaystyle=$
$\displaystyle N_{ikl}(\mathbf{k})\widetilde{\beta}^{\rm p}_{kl}(\mathbf{k}),$
$\displaystyle N_{ikl}(\mathbf{k})$ $\displaystyle=$
$\displaystyle-\frac{i}{k^{2}}(k_{k}\delta_{il}+k_{l}\delta_{ik})-i\frac{\nu
k_{i}\delta_{kl}}{(1-\nu)k^{2}}+i\frac{k_{i}k_{k}k_{l}}{(1-\nu)k^{4}};$ (100)
$\displaystyle\widetilde{\beta}_{ij}^{\rm e}(\mathbf{k})$ $\displaystyle=$
$\displaystyle T_{ijkl}(\mathbf{k})\widetilde{\beta}_{kl}^{\rm
p}(\mathbf{k}),$ $\displaystyle T_{ijkl}(\mathbf{k})$ $\displaystyle=$
$\displaystyle\frac{1}{k^{2}}(k_{i}k_{k}\delta_{jl}+k_{i}k_{l}\delta_{jk}-k^{2}\delta_{ik}\delta_{jl})$
(101) $\displaystyle+\frac{k_{i}k_{j}}{(1-\nu)k^{4}}(\nu
k^{2}\delta_{kl}-k_{k}k_{l});$ $\displaystyle\widetilde{\sigma}_{ij}^{\rm
int}(\mathbf{k})$ $\displaystyle=$ $\displaystyle
M_{ijmn}(\mathbf{k})\widetilde{\beta}_{mn}^{\rm p}(\mathbf{k}),$
$\displaystyle M_{ijmn}(\mathbf{k})$ $\displaystyle=$
$\displaystyle\frac{2u\nu}{1-\nu}\Bigl{(}\frac{k_{m}k_{n}\delta_{ij}+k_{i}k_{j}\delta_{mn}}{k^{2}}-\delta_{ij}\delta_{mn}\Bigr{)}$
(102)
$\displaystyle+u\Bigl{(}\frac{k_{i}k_{m}}{k^{2}}\delta_{jn}+\frac{k_{j}k_{n}}{k^{2}}\delta_{im}-\delta_{im}\delta_{jn}\Bigr{)}$
$\displaystyle+u\Bigl{(}\frac{k_{i}k_{n}}{k^{2}}\delta_{jm}+\frac{k_{j}k_{m}}{k^{2}}\delta_{in}-\delta_{in}\delta_{jm}\Bigr{)}$
$\displaystyle-\frac{2u}{1-\nu}\frac{k_{i}k_{j}k_{m}k_{n}}{k^{4}}.$
All these expressions are valid for systems with periodic boundary conditions.
According to the definition Eq. (12) of the crystalline orientation
$\mathbf{\Lambda}$, we can replace $\omega^{\rm e}$ with $\beta^{\rm
e}-\epsilon^{\rm e}$ by using the elastic distortion tensor decomposition Eq.
(10)
$\Lambda_{i}=\frac{1}{2}\varepsilon_{ijk}(\beta^{\rm
e}_{jk}-\epsilon_{jk}^{\rm e}).$ (103)
Here the permutation factor acting on the symmetric elastic strain tensor
gives zero. Hence we can express the crystalline orientation vector
$\mathbf{\Lambda}$ in terms of $\beta^{\rm p}$ by using Eq. (101)
$\displaystyle\widetilde{\Lambda}_{i}(\mathbf{k})$ $\displaystyle=$
$\displaystyle\frac{1}{2}\varepsilon_{ijk}\biggl{\\{}\frac{1}{k^{2}}(k_{j}k_{s}\delta_{kt}+k_{j}k_{t}\delta_{ks}-k^{2}\delta_{js}\delta_{kt})$
(104) $\displaystyle+\frac{k_{j}k_{k}}{(1-\nu)k^{4}}(\nu
k^{2}\delta_{st}-k_{s}k_{t})\biggr{\\}}\widetilde{\beta}^{\rm
p}_{st}(\mathbf{k})$ $\displaystyle=$
$\displaystyle\frac{1}{2k^{2}}(\varepsilon_{ijt}k_{j}k_{s}+\varepsilon_{ijs}k_{j}k_{t}-k^{2}\varepsilon_{ist})\widetilde{\beta}^{\rm
p}_{st}(\mathbf{k}).$
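As an illustration of how these kernels are used in practice, the sketch below reconstructs $\mathbf{\Lambda}$ from a periodic $\beta^{\rm p}$ field by applying Eq. (104) in Fourier space; the grid conventions and the choice to zero the undetermined $\mathbf{k}=0$ mode (the absolute orientation is arbitrary) are ours, for this example only.

```python
import numpy as np

def orientation_from_beta_p(beta_p, box=1.0):
    """Crystalline orientation Lambda from the plastic distortion beta^p via
    Eq. (104), for a periodic field beta_p of shape (3, 3, N, N, N)."""
    dims = beta_p.shape[2:]
    k = np.meshgrid(*[2 * np.pi * np.fft.fftfreq(n, d=box / n) for n in dims],
                    indexing="ij")
    k2 = sum(ki**2 for ki in k)
    k2[(0,) * len(dims)] = 1.0                      # avoid dividing by zero at k = 0
    eps = np.zeros((3, 3, 3))                       # Levi-Civita tensor
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    beta_k = np.fft.fftn(beta_p, axes=(-3, -2, -1))
    Lam_k = np.zeros((3,) + dims, dtype=complex)
    for i in range(3):
        for s in range(3):
            for t in range(3):
                kern = np.zeros(dims, dtype=complex)
                for j in range(3):
                    kern += eps[i, j, t] * k[j] * k[s] + eps[i, j, s] * k[j] * k[t]
                Lam_k[i] += (kern - k2 * eps[i, s, t]) / (2.0 * k2) * beta_k[s, t]
    Lam_k[(slice(None),) + (0,) * len(dims)] = 0.0  # absolute orientation is arbitrary
    return np.fft.ifftn(Lam_k, axes=(-3, -2, -1)).real
```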
## Appendix B Energy dissipation rate
### B.1 Free energy in Fourier space
In the absence of external stress, the free energy $\mathcal{F}$ is the
elastic energy caused by the internal long-range stress
$\mathcal{F}=\int\\!d^{3}\mathbf{x}\,\,\frac{1}{2}\sigma_{ij}^{\rm
int}\epsilon_{ij}^{\rm
e}=\int\\!d^{3}\mathbf{x}\,\,\frac{1}{2}C_{ijmn}\epsilon_{ij}^{\rm
e}\epsilon_{mn}^{\rm e},$ (105)
where the stress is $\sigma_{ij}^{\rm int}=C_{ijmn}\epsilon_{mn}^{\rm e}$,
with $C_{ijmn}$ the stiffness tensor.
Using the symmetry of $C_{ijmn}$ and ignoring large rotations,
$\epsilon_{ij}^{\rm e}=(\beta^{\rm e}_{ij}+\beta^{\rm e}_{ji})/2$, we can
rewrite the elastic energy $\mathcal{F}$ in terms of $\beta^{\rm e}$
$\mathcal{F}=\int\\!d^{3}\mathbf{x}\,\,\frac{1}{2}C_{ijmn}\beta^{\rm
e}_{ij}\beta^{\rm e}_{mn}.$ (106)
Performing a Fourier transform on both $\beta^{\rm e}_{ij}$ and $\beta^{\rm
e}_{mn}$ simultaneously gives
$\mathcal{F}=\int\\!d^{3}\mathbf{x}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\int\frac{d^{3}\mathbf{k}^{\prime}}{(2\pi)^{3}}\,\,e^{i(\mathbf{k}+\mathbf{k}^{\prime})\mathbf{x}}\biggl{(}\frac{1}{2}C_{ijmn}\widetilde{\beta}^{\rm
e}_{ij}(\mathbf{k})\widetilde{\beta}^{\rm
e}_{mn}(\mathbf{k}^{\prime})\biggr{)}.$ (107)
Integrating out the spatial variable $\mathbf{x}$ leaves a $\delta$-function
$\delta(\mathbf{k}+\mathbf{k}^{\prime})$ in Eq. (107). We hence integrate out
the $\mathbf{k}$-space variable $\mathbf{k}^{\prime}$
$\mathcal{F}=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\,\,\frac{1}{2}C_{ijmn}\widetilde{\beta}^{\rm
e}_{ij}(\mathbf{k})\widetilde{\beta}^{\rm e}_{mn}(-\mathbf{k}).$ (108)
Substituting Eq. (101) into Eq. (108) gives
$\mathcal{F}=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\,\,\frac{1}{2}\bigl{(}C_{ijmn}T_{ijpq}(\mathbf{k})T_{mnst}(-\mathbf{k})\bigr{)}\widetilde{\beta}^{\rm p}_{pq}(\mathbf{k})\widetilde{\beta}^{\rm p}_{st}(-\mathbf{k})=-\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\,\,\frac{1}{2}M_{pqst}(\mathbf{k})\widetilde{\beta}^{\rm p}_{pq}(\mathbf{k})\widetilde{\beta}^{\rm p}_{st}(-\mathbf{k}),$ (109)
where we skip straightforward but tedious simplifications.
When turning on the external stress, we repeat the same procedure used in Eq.
(107), yielding
$\mathcal{F}^{\rm ext}=-\int d^{3}\mathbf{x}\,\,\sigma_{ij}^{\rm
ext}\beta^{\rm
p}_{ij}=-\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\,\,\widetilde{\sigma}_{ij}^{\rm
ext}(\mathbf{k})\widetilde{\beta}^{\rm p}_{ij}(-\mathbf{k}).$ (110)
### B.2 Calculation of energy functional derivative with respect to the GND
density $\varrho$
According to Eq. (17), the infinitesimal change of the variable
$\delta\varrho$ is given in terms of $\delta\beta^{\rm p}$
$\delta\varrho_{ijk}=-g_{ijls}\partial_{l}\bigl{(}\delta\beta^{\rm
p}_{sk}\bigr{)}.$ (111)
Substituting Eq. (111) into Eq. (27) and applying integration by parts, the
infinitesimal change of $\mathcal{F}$ is hence rewritten in terms of
$\beta^{\rm p}$
$\delta\mathcal{F}[\beta^{\rm
p}]=\int\\!d^{3}\mathbf{x}\,\,g_{ijls}\partial_{l}\Biggl{(}\frac{\delta\mathcal{F}}{\delta\varrho_{ijk}}\Biggr{)}\delta\beta^{\rm
p}_{sk}.$ (112)
According to Eq. (24), on the other hand,
$\delta\mathcal{F}[\beta^{\rm
p}]=\int\\!d^{3}\mathbf{x}\,\,\frac{\delta\mathcal{F}}{\delta\beta^{\rm
p}_{sk}}\delta\beta^{\rm
p}_{sk}=\int\\!d^{3}\mathbf{x}\,\,(-\sigma_{sk})\delta\beta^{\rm p}_{sk}.$
(113)
Comparing Eq. (112) and Eq. (113) implies
$g_{ijls}\partial_{l}\Biggl{(}\frac{\delta\mathcal{F}}{\delta\varrho_{ijk}}\Biggr{)}=-\sigma_{sk},$
(114)
up to a total derivative which we ignore due to the use of periodic boundary
conditions.
### B.3 Derivation of energy dissipation rate
We can apply variational methods to calculate the dissipation rate of the free
energy. As is well known, the general elastic energy $\mathcal{E}$ in a
crystal can be expressed as
$\mathcal{E}=\frac{1}{2}\int\\!d^{3}\mathbf{x}\,\,\sigma_{ij}\epsilon_{ij}^{\rm
e}$, with $\epsilon_{ij}^{\rm e}$ the elastic strain. An infinitesimal change
of $\mathcal{E}$ is:
$\delta\mathcal{E}=\frac{1}{2}\int\\!d^{3}\mathbf{x}\,\,\sigma_{ij}\delta\epsilon_{ij}^{\rm
e}+\frac{1}{2}\int\\!d^{3}\mathbf{x}\,\,\delta\sigma_{ij}\epsilon_{ij}^{\rm
e}=\int\\!d^{3}\mathbf{x}\,\,\sigma_{ij}\delta\epsilon_{ij}^{\rm e},$ (115)
where we use $\sigma_{ij}\delta\epsilon_{ij}^{\rm
e}=C_{ijkl}\epsilon_{kl}^{\rm e}\delta\epsilon_{ij}^{\rm
e}=\delta\sigma_{ij}\epsilon_{ij}^{\rm e}$.
So the infinitesimal change of the free energy Eq. (20) is
$\delta\mathcal{F}=\int\\!d^{3}\mathbf{x}\,\,\biggl{(}\sigma_{ij}^{\rm
int}\delta\epsilon_{ij}^{\rm e}-\sigma_{ij}^{\rm ext}\delta\epsilon_{ij}^{\rm
p}\biggr{)}.$ (116)
We apply the relation $\epsilon^{\rm e}=\epsilon-\epsilon^{\rm p}$, where
$\epsilon^{\rm p}$ is the plastic strain and $\epsilon$ is the total strain:
$\delta\mathcal{F}=\int\\!d^{3}\mathbf{x}\,\,\biggl{(}\sigma_{ij}^{\rm int}\delta\epsilon_{ij}-\sigma_{ij}^{\rm int}\delta\epsilon^{\rm p}_{ij}-\sigma_{ij}^{\rm ext}\delta\epsilon^{\rm p}_{ij}\biggr{)}.$ (117)
Using the symmetry of $\sigma_{ij}$ and ignoring large rotations,
$\epsilon_{ij}=\frac{1}{2}(\partial_{i}u_{j}+\partial_{j}u_{i})$, we can
rewrite the first term of Eq. (117) as
$\int\\!d^{3}\mathbf{x}\,\,\sigma_{ij}^{\rm int}\delta(\partial_{i}u_{j})$.
Integrating by parts yields
$\int\\!d^{3}\mathbf{x}\,\,\bigl{(}\partial_{i}(\delta u_{j}\sigma_{ij}^{\rm
int})-\delta u_{j}\partial_{i}\sigma_{ij}^{\rm int}\bigr{)}$. We can convert
the first volume integral to a surface integral, which vanishes for an
infinitely large system. Hence
$\delta\mathcal{F}=\int\\!d^{3}\mathbf{x}\,\,\biggl{(}\partial_{i}\sigma_{ij}^{\rm
int}\delta u_{j}-(\sigma_{ij}^{\rm int}+\sigma_{ij}^{\rm
ext})\delta\epsilon^{\rm p}_{ij}\biggr{)}.$ (118)
The first term of Eq. (118) is zero assuming instantaneous elastic relaxation
due to the local force equilibrium condition,
$\delta\mathcal{F}=-\int\\!d^{3}\mathbf{x}\,\,(\sigma_{ij}^{\rm
int}+\sigma_{ij}^{\rm ext})\delta\beta_{ij}^{\rm p},$ (119)
using the symmetry of $\sigma_{ij}$ and $\epsilon^{\rm
p}_{ij}=\frac{1}{2}(\beta^{\rm p}_{ij}+\beta^{\rm p}_{ji})$.
The free energy dissipation rate is thus $\delta\mathcal{F}/\delta t$ for
$\delta\beta^{\rm p}_{ij}=\frac{\partial\beta^{\rm p}_{ij}}{\partial t}\delta t$,
hence
$\displaystyle\frac{\partial\mathcal{F}}{\partial t}$ $\displaystyle=$
$\displaystyle-\int\\!d^{3}\mathbf{x}\,\,(\sigma_{ij}^{\rm
int}+\sigma_{ij}^{\rm ext})\frac{\partial\beta^{\rm p}_{ij}}{\partial t}$
(120) $\displaystyle=$
$\displaystyle-\int\\!d^{3}\mathbf{x}\,\,(\sigma_{ij}^{\rm
int}+\sigma_{ij}^{\rm ext})J_{ij}.$
When dislocations are allowed to climb, substituting the CGD current Eq. (36)
into Eq. (120) implies that the free energy dissipation rate is strictly
negative
$\displaystyle\frac{\partial\mathcal{F}}{\partial t}$ $\displaystyle=$
$\displaystyle-\int\\!d^{3}\mathbf{x}\,\,(\sigma_{ij}^{\rm
int}+\sigma_{ij}^{\rm ext})\bigl{[}v_{l}\varrho_{lij}\bigr{]}$ (121)
$\displaystyle=$
$\displaystyle-\int\\!d^{3}\mathbf{x}\,\,\frac{|\varrho|}{D}v^{2}\leq 0.$
When dislocation climb is removed by considering the mobile dislocation
population, substituting Eq. (40) into Eq. (120) shows that the rate of change
of the free energy density is again the negative of a perfect square
$\displaystyle\frac{\partial\mathcal{F}}{\partial t}$ $\displaystyle=$
$\displaystyle-\int d^{3}\mathbf{x}(\sigma_{ij}^{\rm int}+\sigma_{ij}^{\rm
ext})\Biggl{[}v^{\prime}_{l}\bigl{(}\varrho_{lij}-\frac{1}{3}\delta_{ij}\varrho_{lkk}\bigr{)}\Biggr{]}$
(122) $\displaystyle=$
$\displaystyle-\int\\!d^{3}\mathbf{x}\,\,\frac{|\varrho|}{D}v^{\prime 2}\leq
0.$
## Appendix C Model Extensions: Adding vacancies and disorder to CDD
### C.1 Coupling vacancy diffusion to CDD
In plastically deformed crystals at low temperature, dislocations usually move
only in the glide plane because vacancy diffusion is almost frozen out. When
temperature increases, vacancy diffusion leads to dislocation climb out of the
glide plane. At intermediate temperatures, slow vacancy diffusion can enable
local creep. The resulting dynamics should couple the vacancy and dislocation
fields in non-trivial ways. Here we couple the vacancy diffusion to the
dislocation motion in our CDD model.
We introduce an order parameter field $c(\mathbf{x})$, indicating the vacancy
concentration density at the point $\mathbf{x}$. The free energy $\mathcal{F}$
is thus expressed
$\mathcal{F}=\mathcal{F}^{Dis}+\mathcal{F}^{Vac}=\int
d^{3}\mathbf{x}\biggl{(}\frac{1}{2}\sigma_{ij}\epsilon_{ij}^{\rm
e}+\frac{1}{2}\alpha(c-c_{0})^{2}\biggr{)},$ (123)
where $\alpha$ is a positive material parameter related to the vacancy
creation energy, and $c_{0}$ is the overall equilibrium vacancy concentration
density.
Assuming that GNDs share the velocity $\bm{v}$ in an infinitesimal volume, we
write the current $J$ for GNDs
$J_{ij}=v_{u}\varrho_{uij}.$ (124)
The current trace $J_{ii}$ describes the rate of volume change, which acts as
a source and sink of vacancies. The coupled dynamics for the vacancies is thus
given as
$\partial_{t}c=\gamma\nabla^{2}c+J_{ii},$ (125)
where $\gamma$ is a positive vacancy diffusion constant.
The infinitesimal change of the free energy $\mathcal{F}$ (Eq. 123) is
$\delta\mathcal{F}=\int
d^{3}\mathbf{x}\biggl{(}\frac{\delta\mathcal{F}^{Dis}}{\delta\beta^{\rm
p}_{ij}}\delta\beta^{\rm p}_{ij}+\frac{\delta\mathcal{F}^{Vac}}{\delta
c}\delta c\biggr{)}.$ (126)
We apply Eq. (119) and $\delta\mathcal{F}^{Vac}/\delta c=\alpha(c-c_{0})$
$\delta\mathcal{F}=\int d^{3}\mathbf{x}\biggl{(}-\sigma_{ij}\delta\beta^{\rm
p}_{ij}+\alpha(c-c_{0})\delta c\biggr{)}.$ (127)
The free energy dissipation rate is thus $\delta\mathcal{F}/\delta t$ for
$\delta\beta^{\rm p}_{ij}=\frac{\partial\beta^{\rm p}_{ij}}{\partial t}\delta t$
and $\delta c=\frac{\partial c}{\partial t}\delta t$, hence
$\frac{\partial\mathcal{F}}{\partial t}=-\int
d^{3}\mathbf{x}\biggl{(}\sigma_{ij}\frac{\partial\beta^{\rm p}_{ij}}{\partial
t}-\alpha(c-c_{0})\frac{\partial c}{\partial t}\biggr{)}.$ (128)
Substituting the current $J$ (Eq. 124) and Eq. (125) into Eq. (128) gives
$\displaystyle\frac{\partial\mathcal{F}}{\partial t}$ $\displaystyle=$
$\displaystyle-\int
d^{3}\mathbf{x}\bigl{(}\sigma_{ij}(v_{u}\varrho_{uij})-\alpha(c-c_{0})(\gamma\nabla^{2}c+v_{u}\varrho_{uii})\bigr{)}$
(129) $\displaystyle=$ $\displaystyle-\int
d^{3}\mathbf{x}\bigl{(}(\sigma_{ij}-\alpha(c-c_{0})\delta_{ij})\varrho_{uij}\bigr{)}v_{u}-\int
d^{3}\mathbf{x}\alpha\gamma(\nabla c)^{2},$
where we have integrated by parts, assuming an infinitely large system.
If we choose the velocity
$v_{u}=\frac{D}{|\varrho|}\bigl{(}\sigma_{ij}-\alpha(c-c_{0})\delta_{ij}\bigr{)}\varrho_{uij}$,
($D$ is a positive material dependent constant and $1/|\varrho|$ is added for
the same reasons as discussed in Sec. II.3.1), the free energy is guaranteed
to decrease monotonically. The coupled dynamics for both GNDs and vacancies
is thus
$\left\\{\begin{array}[]{l l}\partial_{t}\beta^{\rm
p}_{ij}=\frac{D}{|\varrho|}\bigl{(}\sigma_{mn}-\alpha(c-c_{0})\delta_{mn}\bigr{)}\varrho_{umn}\varrho_{uij},\\\
\partial_{t}c=\gamma\nabla^{2}c+\frac{D}{|\varrho|}\bigl{(}\sigma_{mn}-\alpha(c-c_{0})\delta_{mn}\bigr{)}\varrho_{umn}\varrho_{ukk}.\end{array}\right.$
(130)
This dynamics gives us a clear picture of the underlying physical mechanism:
the vacancies contribute an extra hydrostatic pressure $p=-\alpha(c-c_{0})$.
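A schematic explicit-Euler update of Eq. (130) could look as follows. This is a sketch only: the stress and the three-index density are assumed to be supplied by the rest of the code (e.g. via the kernels of Appendix A), the choice of norm for $|\varrho|$ and the naive finite-difference Laplacian are illustrative, and the production runs use more careful upwind-type schemes.

```python
import numpy as np

def coupled_euler_step(beta_p, c, varrho, sigma, dt,
                       D=1.0, alpha=1.0, gamma=1.0, c0=0.0, dx=1.0):
    """One forward-Euler step of the coupled GND/vacancy dynamics, Eq. (130),
    on a periodic N x N x N grid. varrho has shape (3, 3, 3, N, N, N) (the
    three-index density), sigma has shape (3, 3, N, N, N); both are assumed
    given. |varrho| is taken here as a Frobenius norm (an assumption)."""
    eff = sigma - alpha * (c - c0) * np.eye(3)[:, :, None, None, None]
    norm = np.sqrt(np.sum(varrho**2, axis=(0, 1, 2))) + 1e-12
    v = D / norm * np.einsum('mn...,umn...->u...', eff, varrho)   # velocity v_u
    J = np.einsum('u...,uij...->ij...', v, varrho)                # current J_ij
    lap_c = sum(np.roll(c, 1, ax) + np.roll(c, -1, ax) - 2.0 * c
                for ax in range(c.ndim)) / dx**2
    return beta_p + dt * J, c + dt * (gamma * lap_c + np.einsum('ii...->...', J))
```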
### C.2 Coupling disorder to CDD
In real crystals, the presence of precipitates or impurities results in a
force pinning nearby dislocations. We can mimic this effect by incorporating a
spatially varying random potential field $V(\mathbf{x})$.
In our CDD model, we can add the interaction energy between GNDs and random
disorder into the free energy $\mathcal{F}$ (Eq. 20)
$\mathcal{F}=\mathcal{F}_{E}+\mathcal{F}_{I}=\int
d^{3}\mathbf{x}\biggl{(}\frac{1}{2}\sigma_{ij}^{\rm int}\epsilon_{ij}^{\rm
e}-\sigma_{ij}^{\rm ext}\epsilon_{ij}^{\rm
p}+V(\mathbf{x})|\varrho|\biggr{)},$ (131)
where $\mathcal{F}_{E}$ indicates the elastic free energy corresponding to the
integral of the first two terms, and $\mathcal{F}_{I}$ indicates the
interaction energy, the integral of the last term.
An infinitesimal change of the free energy is written
$\delta\mathcal{F}=\delta\mathcal{F}_{E}+\delta\mathcal{F}_{I}=\int
d^{3}\mathbf{x}\biggl{(}\frac{\delta\mathcal{F}_{E}}{\delta\beta^{\rm
p}_{ij}}\delta\beta^{\rm p}_{ij}+\frac{\delta\mathcal{F}_{I}}{\delta\beta^{\rm
p}_{sk}}\delta\beta^{\rm p}_{sk}\biggr{)}.$ (132)
In an infinitely large system, Eq. (119) gives
$\frac{\delta\mathcal{F}_{E}}{\delta\beta^{\rm p}_{ij}}=-(\sigma_{ij}^{\rm
int}+\sigma_{ij}^{\rm ext}),$ (133)
and Eq. (112) implies
$\displaystyle\delta\mathcal{F}_{I}$ $\displaystyle=$ $\displaystyle\int
d^{3}\mathbf{x}g_{ijls}\partial_{l}\Bigl{(}\frac{\delta\mathcal{F}_{I}}{\delta\varrho_{ijk}}\Bigr{)}\delta\beta^{\rm
p}_{sk}$ (134) $\displaystyle=$ $\displaystyle\int
d^{3}\mathbf{x}g_{ijls}\partial_{l}\Bigl{(}V(\mathbf{x})\frac{\varrho_{ijk}}{|\varrho|}\Bigr{)}\delta\beta^{\rm
p}_{sk}.$
Substituting Eq. (133) and Eq. (134) into Eq. (132) gives
$\displaystyle\delta\mathcal{F}$ $\displaystyle=$ $\displaystyle-\int
d^{3}\mathbf{x}\biggl{(}\sigma_{ij}^{\rm int}+\sigma_{ij}^{\rm
ext}-g_{mnli}\partial_{l}\Bigl{(}V(\mathbf{x})\frac{\varrho_{mnj}}{|\varrho|}\Bigr{)}\biggr{)}\delta\beta^{\rm
p}_{ij}$ (135) $\displaystyle=$ $\displaystyle-\int d^{3}\mathbf{x}\sigma^{\rm
eff}_{ij}\delta\beta^{\rm p}_{ij},$
where the effective stress field is $\sigma^{\rm eff}_{ij}=\sigma_{ij}^{\rm
int}+\sigma_{ij}^{\rm
ext}-g_{mnli}\partial_{l}\Bigl{(}V(\mathbf{x})\frac{\varrho_{mnj}}{|\varrho|}\Bigr{)}$.
By replacing $\sigma_{ij}$ with $\sigma^{\rm eff}_{ij}$ in the equation of
motion, either allowing climb (Eq. 36) or removing climb (Eqs. 40 and 44), we
obtain a new CDD model of GNDs interacting with disorder.
Figure 13: (Color online) Statistical convergence of correlation functions of
$\Lambda$, $\rho$ and $\beta^{\rm p,I}$ by varying lattice sizes in two
dimensions. We compare correlation functions of relaxed glide-only states
(GOD-MDP) at resolutions from $128^{2}$ to $1024^{2}$ systems. Top: We see
that the correlation functions in all cases exhibit similar power laws in (a),
(b), and (c); Bottom: (d), (e), and (f) show a single underlying critical
exponent which appears to converge with increasing resolution, where $a$ is
the grid spacing. The black dashed lines are guides to the eye.

Figure 14: (Color online) Statistical convergence of correlation functions of
$\Lambda$, $\rho$ and $\beta^{\rm p,I}$ under varying initial length scales in
$1024^{2}$ simulations. We measure correlation functions of relaxed glide-only
states (GOD-MDP) at initial correlated lengths from $0.07L$ to $0.28L$. In
(a), (b), and (c), the radial distance $R$ is rescaled by the initial
correlation length, and the corresponding correlation functions are divided by
that length raised to the measured power. The curves roughly collapse onto the
scaling laws. Notice that the power laws measured in the state with initial
correlated length $0.07L$ are distorted by the small outer cutoff.
## Appendix D Details of the Simulations
### D.1 Finite size effects
Although we suspect that our simulations do not have weak solutions Choi et al.
(2012b), we can show that they converge statistically. We exhibit this
statistical convergence in two ways.
First, as we decrease the grid spacing toward zero (the continuum limit), the
correlation functions of $\rho$, $\mathbf{\Lambda}$, and $\beta^{\rm p,I}$
converge statistically, with a slow expected drift of the apparent exponents
with system size; see Fig. 13.
Second, we can decrease the initial correlated length scale in a large two-
dimensional simulation. Since the emergent self-similar structures always
develop below the initial correlated length, as discussed in Sec. IV.2,
reducing this length is similar to decreasing the system size. In Fig. 14, the
correlation functions of $\rho$, $\mathbf{\Lambda}$, and $\beta^{\rm p,I}$
collapse onto a single scaling curve under finite-size scaling.
### D.2 Gaussian random initial conditions
Gaussian random fields are extensively used in physical modeling to mimic
stochastic fluctuations with a correlated length scale. In our simulations, we
construct an initially random plastic distortion, a nine-component tensor
field, whose components are independent Gaussian random fields sharing an
underlying length scale.
We define a Gaussian random field $f$ with correlation length $\sigma_{0}$ by
convolving white noise
$\langle\xi(\mathbf{x})\xi(\mathbf{x}^{\prime})\rangle=\delta(\mathbf{x}-\mathbf{x}^{\prime})$
with a Gaussian of width $\sigma_{0}$:
$\displaystyle f(\mathbf{x})=\int
d^{3}\mathbf{x}^{\prime}\xi(\mathbf{x}^{\prime})e^{-(\mathbf{x}-\mathbf{x}^{\prime})^{2}/\sigma_{0}^{2}}.$
(136)
In Fourier space, this can be done as a multiplication:
$\widetilde{f}(\mathbf{k})=e^{-\sigma_{0}^{2}k^{2}/4}\widetilde{\xi}(\mathbf{k}).$
(137)
The square
$\widetilde{f}(\mathbf{k})\widetilde{f}(-\mathbf{k})=e^{-\sigma_{0}^{2}k^{2}/2}$
implies that the correlation function $\langle
f(\mathbf{x})f(\mathbf{x}^{\prime})\rangle=(2\pi\sigma^{2}_{0})^{-3/2}e^{-(\mathbf{x}-\mathbf{x}^{\prime})^{2}/(2\sigma_{0}^{2})}$.
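A minimal Python sketch of this construction, assuming a periodic box of side $L=1$ and unit noise amplitude (the normalization is our own choice), is:

```python
import numpy as np

def gaussian_random_field(shape, sigma0, box=1.0, seed=None):
    """White noise filtered in Fourier space by exp(-sigma0^2 k^2 / 4),
    Eq. (137), on a periodic grid."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=shape)
    k = np.meshgrid(*[2 * np.pi * np.fft.fftfreq(n, d=box / n) for n in shape],
                    indexing="ij")
    k2 = sum(ki**2 for ki in k)
    return np.fft.ifftn(np.exp(-sigma0**2 * k2 / 4.0) * np.fft.fftn(xi)).real

# Each of the nine components of the initial beta^p is drawn independently, Eq. (138):
# beta_p0 = np.array([[gaussian_random_field((128,) * 3, 0.28) for _ in range(3)]
#                     for _ in range(3)])
```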
Figure 15: (Color online) Gaussian random initial conditions with the
correlated length scale $0.28L$ in two dimensions. (a) shows the initial net
GND density map; (b) exhibits the correlation functions of $\rho$ under
various initial conditions, where we compare the Gaussian random field to both
a sinusoidal wave and a single periodic superposition of Gaussian peaks. The
kink arises due to the edges and corners of the square unit cell.
In our simulations, the initial plastic distortion tensor field $\beta^{\rm
p}$ is constructed in Fourier space
$\widetilde{\beta}^{\rm
p}_{ij}(\mathbf{k})=e^{-\sigma_{0}^{2}k^{2}/4}\widetilde{\zeta}_{ij}(\mathbf{k}),$
(138)
where the white noise signal $\zeta$ is characterized as
$\langle\zeta_{(i,j)}(\mathbf{x})\zeta_{(i,j)}(\mathbf{x}^{\prime})\rangle=A_{(i,j)}\delta(\mathbf{x}-\mathbf{x}^{\prime})$,
and in Fourier space
$\frac{1}{V}\widetilde{\zeta}_{(i,j)}(\mathbf{k})\widetilde{\zeta}_{(i,j)}(-\mathbf{k})=A_{(i,j)}$.
(We use $(i,j)$ to indicate a single component of the tensor field, so that the
Einstein summation convention is not implied.) The correlation function of each component of
$\beta^{\rm p,I}$ is thus expressed in Fourier space
$\displaystyle\widetilde{\mathcal{C}}^{\beta^{\rm p,I}}_{(i,j)}$
$\displaystyle=$ $\displaystyle 2\langle\beta^{\rm p,I}_{(i,j)}\beta^{\rm
p,I}_{(i,j)}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{V}\widetilde{\beta}^{\rm
p}_{(i,j)}(\mathbf{k})\widetilde{\beta}^{\rm p}_{(i,j)}(-\mathbf{k})$ (139)
$\displaystyle=$ $\displaystyle 2\langle\beta^{\rm p,I}_{(i,j)}\beta^{\rm
p,I}_{(i,j)}\rangle(2\pi)^{3}\delta(\mathbf{k})-2A_{(i,j)}e^{-\sigma_{0}^{2}k^{2}/2},$
where the Gaussian kernel width $\sigma_{0}$ serves as the standard length scale and defines the correlation length of our simulations. (In our earlier work, we used a non-standard definition for the correlation length, so our $\sigma_{0}$ equals the old length scale times $\sqrt{2}$.)
According to Eq. (9) and Eq. (138), we can express the initial GND density
field $\rho$ in Fourier space
$\widetilde{\rho}_{ij}(\mathbf{k})=-i\varepsilon_{ilm}e^{-\sigma_{0}^{2}k^{2}/4}k_{l}\widetilde{\zeta}_{mj}(\mathbf{k}).$
(140)
The scalar invariant $\mathcal{C}^{\rho}_{tot}$ of the correlation function of
$\rho$ is thus expressed in Fourier space
$\displaystyle\mathcal{C}^{\rho}_{tot}(\mathbf{k})$ $\displaystyle=$
$\displaystyle\frac{1}{V}\widetilde{\rho}_{ij}(\mathbf{k})\widetilde{\rho}_{ij}(-\mathbf{k})$
(141) $\displaystyle=$
$\displaystyle\frac{1}{V}e^{-\sigma_{0}^{2}k^{2}/2}\bigl{(}k^{2}\delta_{mn}-k_{m}k_{n}\bigr{)}\widetilde{\zeta}_{mj}(\mathbf{k})\widetilde{\zeta}_{nj}(-\mathbf{k}).$
The resulting initial GND density is not Gaussian correlated, unlike the
initial plastic distortion. Figure 15 exhibits the initial GND density map due
to the Gaussian random plastic distortions with the correlation length
$0.28L$, and its correlation function. We compare the latter to the
correlation functions of both a sinusoidal wave and a single periodic
superposition of Gaussian peaks. The similarity of the three curves shows that
our Gaussian random initial condition at $\sigma_{0}\sim 0.28L$ approaches the
largest effective correlation length possible for periodic boundary
conditions.
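A minimal three-dimensional sketch of Eqs. (138)–(141) (ours; the grid size, seed, and normalization are illustrative assumptions) builds the Gaussian-correlated plastic distortion in Fourier space, takes its curl to obtain the GND density, and forms the scalar invariant of its correlation function:

```python
import numpy as np

def levi_civita():
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return eps

def initial_fields(n=32, sigma0=4.0, seed=0):
    """Gaussian-correlated plastic distortion (Eq. 138), the GND density
    rho = curl beta^p in Fourier space (Eq. 140), and the scalar invariant
    C^rho_tot(k) of Eq. (141)."""
    rng = np.random.default_rng(seed)
    k1d = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kvec = np.stack([kx, ky, kz])                       # shape (3, n, n, n)
    kernel = np.exp(-sigma0**2 * (kx**2 + ky**2 + kz**2) / 4.0)

    zeta_k = np.fft.fftn(rng.standard_normal((3, 3, n, n, n)), axes=(-3, -2, -1))
    beta_k = kernel * zeta_k                            # Eq. (138)

    eps = levi_civita()
    # Eq. (140): rho_ij(k) = -i eps_ilm k_l beta^p_mj(k)  (kernel already folded in)
    rho_k = -1j * np.einsum("ilm,lxyz,mjxyz->ijxyz", eps, kvec, beta_k)

    # Eq. (141): C^rho_tot(k) = (1/V) rho_ij(k) rho_ij(-k) = (1/V) |rho_ij(k)|^2
    C_rho_tot = np.einsum("ijxyz,ijxyz->xyz", rho_k, rho_k.conj()).real / n**3
    return beta_k, rho_k, C_rho_tot

beta_k, rho_k, C_rho_tot = initial_fields()
```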
Figure 16: (Color online) Strain-history-dependent fields $\beta^{\rm p,H}$
and $\bm{\psi}$ in two dimensions for the relaxed states. Top: Dislocation
climb is allowed; Middle: Glide-only using a mobile dislocation population;
Bottom: Glide-only using a local vacancy pressure. Left: The strain-history-
dependent plastic distortion $|\beta^{\rm p,H}|$. (a), (c), and (e) exhibit
patterns reminiscent of self-similar dislocation structures. Right: The
strain-history-dependent plastic deformation $|\bm{\psi}|$. (b), (d), and (f)
exhibit smooth patterns with a little distortion, which are not fractal.
Figure 17: (Color online) Correlation functions of $\beta^{\rm p,H}$ in both two and three dimensions. In both (a) and (b), the correlation functions of the strain-history-dependent part of the plastic distortion $\beta^{\rm p,H}$ are shown. Left: (a) is measured in relaxed, unstrained $1024^{2}$ systems; Right: (b) is measured in relaxed, unstrained $128^{3}$ systems. All dashed lines show estimated power laws quoted in Table 2.
Figure 18: (Color online) Correlation functions of $\bm{\psi}$ in both two and three dimensions. In (a) and (b), the correlation functions of the strain-history-dependent deformation $\bm{\psi}$ are shown. Red, blue, and green lines indicate CGD, GOD-MDP, and GOD-LVP, respectively. Left: (a) is measured in relaxed, unstrained $1024^{2}$ systems; Right: (b) is measured in relaxed, unstrained $128^{3}$ systems. All dashed lines show estimated power laws quoted in Table 2.
Table 2: Critical exponents for correlation functions of strain-history-dependent fields at stress-free states. (C.F. and Exp. represent ‘Correlation Functions’ and ‘Exponents’, respectively.)
C.F. | Exp. | Climb&Glide 2D ($1024^{2}$) | Climb&Glide 3D ($128^{3}$) | Glide Only (MDP) 2D ($1024^{2}$) | Glide Only (MDP) 3D ($128^{3}$) | Glide Only (LVP) 2D ($1024^{2}$) | Glide Only (LVP) 3D ($128^{3}$)
---|---|---|---|---|---|---|---
$\mathcal{C}^{\beta^{\rm p,H}}_{tot}$ | $\tau$ | $0.65\pm 1.00$ | $1.05\pm 0.65$ | $1.25\pm 0.60$ | $1.20\pm 0.50$ | $0.55\pm 1.10$ | $1.05\pm 0.65$
$\mathcal{C}^{\beta^{\rm p,H}}_{per}$ | $\tau^{\prime}$ | $0.70\pm 0.95$ | $1.10\pm 0.60$ | $1.95\pm 0.05$ | $1.75\pm 0.15$ | $0.50\pm 1.15$ | $1.05\pm 0.70$
$\mathcal{C}^{\beta^{\rm p,H}}_{tr}$ | $\tau^{\prime}$ | $0.70\pm 0.95$ | $1.10\pm 0.60$ | $1.95\pm 0.05$ | $1.75\pm 0.15$ | $0.50\pm 1.15$ | $1.05\pm 0.70$
$\mathcal{C}^{\bm{\psi}}$ | $\tau^{\prime\prime}$ | $1.90\pm 0.10$ | $1.85\pm 0.15$ | $1.95\pm 0.05$ | $1.90\pm 0.10$ | $1.95\pm 0.05$ | $1.90\pm 0.10$
## Appendix E Other correlation functions unrelated to static scaling theory
### E.1 Correlation functions of the strain-history-dependent plastic
deformation and distortion fields
The curl-free strain-history-dependent part of the plastic distortion field,
as shown in Fig. 16(a), (c), and (e), exhibits structures reminiscent of self-
similar morphology. We correlate their differences at neighboring points
$\displaystyle\mathcal{C}^{\beta^{\rm p,H}}_{tot}(\mathbf{x})$
$\displaystyle=$ $\displaystyle\langle(\beta^{\rm
p,H}_{ij}(\mathbf{x})-\beta^{\rm p,H}_{ij}(0))(\beta^{\rm
p,H}_{ij}(\mathbf{x})-\beta^{\rm p,H}_{ij}(0))\rangle,$ (142)
$\displaystyle\mathcal{C}^{\beta^{\rm p,H}}_{per}(\mathbf{x})$
$\displaystyle=$ $\displaystyle\langle(\beta^{\rm
p,H}_{ij}(\mathbf{x})-\beta^{\rm p,H}_{ij}(0))(\beta^{\rm
p,H}_{ji}(\mathbf{x})-\beta^{\rm p,H}_{ji}(0))\rangle,$ (143)
$\displaystyle\mathcal{C}^{\beta^{\rm p,H}}_{tr}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle(\beta^{\rm p,H}_{ii}(\mathbf{x})-\beta^{\rm
p,H}_{ii}(0))(\beta^{\rm p,H}_{jj}(\mathbf{x})-\beta^{\rm
p,H}_{jj}(0))\rangle.$ (144)
Consider also the deformation field $\bm{\psi}$ (shown in Fig. 16(b), (d), and
(f)) of Eq. (19) whose gradient gives the strain-history-dependent plastic
deformation $\beta^{\rm p,H}$. Similarly to the crystalline orientation
$\mathbf{\Lambda}$, we correlate differences of $\bm{\psi}$. The unique
rotational invariant of its two-point correlation functions is written
$\mathcal{C}^{\bm{\psi}}(\mathbf{x})=2\langle\psi^{2}\rangle-2\langle\psi_{i}(\mathbf{x})\psi_{i}(0)\rangle.$
(145)
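For a periodic field, the difference correlations of Eqs. (142)–(145) can be evaluated with the FFT identity $\langle(f(\mathbf{r}+\mathbf{x})-f(\mathbf{r}))^{2}\rangle=2\langle f^{2}\rangle-2\langle f(\mathbf{r}+\mathbf{x})f(\mathbf{r})\rangle$; the sketch below (ours, with illustrative names) implements this for $\mathcal{C}^{\beta^{\rm p,H}}_{tot}$:

```python
import numpy as np

def difference_correlation(field):
    """C(x) = <(f(r + x) - f(r))^2>_r for a periodic scalar field,
    computed as 2<f^2> - 2<f(r + x) f(r)> with the FFT."""
    fk = np.fft.fftn(field)
    autocorr = np.fft.ifftn(fk * fk.conj()).real / field.size   # <f(r+x) f(r)>_r
    return 2.0 * np.mean(field**2) - 2.0 * autocorr

def C_beta_ph_tot(beta):
    """Eq. (142): sum of the difference correlations of the nine components
    beta^{p,H}_ij; `beta` has shape (3, 3, *grid)."""
    return sum(difference_correlation(beta[i, j])
               for i in range(3) for j in range(3))

# Usage sketch: radially average C_beta_ph_tot(beta) and fit the small-x
# power law C ~ x^tau to estimate the exponents quoted in Table 2.
```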
In Fig. 17, the correlation functions of the strain-history-dependent plastic distortion $\beta^{\rm p,H}$ in both the $1024^{2}$ and $128^{3}$ simulations exhibit critical exponents $\tau$ and $\tau^{\prime}$. Although apparently unrelated to the underlying critical exponent $\eta$ introduced earlier, these exponents $\tau$ and $\tau^{\prime}$ quantify the fractality of the strain-history-dependent plastic distortion. Figure 18 shows the correlation functions of the strain-history-dependent deformation $\bm{\psi}$, with a critical exponent $\tau^{\prime\prime}$ close to $2$, which implies a smooth, non-fractal field, as shown in Fig. 16(b), (d), and (f). All measured critical exponents are listed in Table 2.
Figure 17 shows the power-law dependence of the rotational invariants
$\mathcal{C}^{\beta^{\rm p,H}}_{per}$ and $\mathcal{C}^{\beta^{\rm p,H}}_{tr}$
(they overlap). According to the definition $\widetilde{\beta}^{\rm
p,H}_{ij}=ik_{i}\widetilde{\psi}_{j}$, we can write down the Fourier-
transformed forms of Eq. (143) and Eq. (144) respectively
$\displaystyle\widetilde{\mathcal{C}}^{\beta^{\rm
p,H}}_{per}(\mathbf{k})\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!2\langle\beta^{\rm p,H}_{ij}\beta^{\rm
p,H}_{ji}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{V}k_{i}k_{j}\widetilde{\psi}_{j}(\mathbf{k})\widetilde{\psi}_{i}(-\mathbf{k}),$
(146) $\displaystyle\widetilde{\mathcal{C}}^{\beta^{\rm
p,H}}_{tr}(\mathbf{k})\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!2\langle\beta^{\rm p,H}_{ii}\beta^{\rm
p,H}_{jj}\rangle(2\pi)^{3}\delta(\mathbf{k})-\frac{2}{V}k_{i}k_{j}\widetilde{\psi}_{i}(\mathbf{k})\widetilde{\psi}_{j}(-\mathbf{k}).$
(147)
Except for the zero-wavelength terms, these two rotational scalars share the same functional form, which explains the observed overlapping power laws.
### E.2 Stress-stress correlation functions
As the system relaxes to its final stress-free state, we can measure the
fluctuations of the internal elastic stress fields, using a complete set of
two rotational invariants of correlation functions
$\displaystyle\mathcal{C}^{\sigma}_{tot}(\mathbf{x})=\langle\sigma_{ij}^{\rm
int}(\mathbf{x})\sigma_{ij}^{\rm int}(0)\rangle,$ (148)
$\displaystyle\mathcal{C}^{\sigma}_{tr}(\mathbf{x})=\langle\sigma_{ii}^{\rm
int}(\mathbf{x})\sigma_{jj}^{\rm int}(0)\rangle;$ (149)
and in Fourier space
$\displaystyle\widetilde{\mathcal{C}}^{\sigma}_{tot}(\mathbf{k})=\frac{1}{V}\widetilde{\sigma}_{ij}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{ij}^{\rm int}(-\mathbf{k}),$ (150)
$\displaystyle\widetilde{\mathcal{C}}^{\sigma}_{tr}(\mathbf{k})=\frac{1}{V}\widetilde{\sigma}_{ii}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{jj}^{\rm int}(-\mathbf{k}).$ (151)
Because $\sigma_{ij}$ is symmetric, these two correlation functions form a
complete set of linear invariants under rotational transformations.
### E.3 Energy density spectrum
The average internal elastic energy $\mathcal{E}$ is written
$\displaystyle\mathcal{E}$ $\displaystyle=$ $\displaystyle\frac{1}{V}\int
d^{d}\mathbf{x}\biggl{[}\frac{1}{2}\sigma_{ij}^{\rm int}\epsilon_{ij}^{\rm
e}\biggr{]}$ (152) $\displaystyle=$ $\displaystyle\frac{1}{V}\int
d^{d}\mathbf{x}\frac{1}{4\mu}\biggl{[}\sigma_{ij}^{\rm int}\sigma_{ij}^{\rm
int}-\frac{\nu}{1+\nu}\sigma_{ii}^{\rm int}\sigma_{jj}^{\rm int}\biggr{]},$
where, in an isotropic bulk medium, the elastic strain $\epsilon^{\rm e}$ is
expressed in terms of $\sigma^{\rm int}$,
$\epsilon^{\rm e}_{ij}=\frac{1}{2\mu}\biggl{(}\sigma^{\rm
int}_{ij}-\frac{\nu}{1+\nu}\delta_{ij}\sigma^{\rm int}_{kk}\biggr{)}.$ (153)
Figure 19: (Color online) Stress-stress correlation functions $\widetilde{C}^{\sigma}(k)$, elastic energy spectrum $E(k)$, and correlation functions of the stressful part of the GND density $\widetilde{\mathcal{C}}^{\rho^{E}}(k)$. Red, blue, and green lines indicate CGD, GOD-MDP, and GOD-LVP, respectively. All dashed lines show estimated power laws quoted in Table 3.
Table 3: Power-law relations among $\widetilde{C}^{\sigma}(k)$, $E(k)$, and $\widetilde{\mathcal{C}}^{\rho^{E}}(k)$. ($d$ represents the dimension; P.Q. and S.T. represent ‘Physical Quantities’ and ‘Scaling Theory’, respectively.)

P.Q. | S.T. | Climb&Glide 2D ($1024^{2}$) | Climb&Glide 3D ($128^{3}$) | Glide Only (MDP) 2D ($1024^{2}$) | Glide Only (MDP) 3D ($128^{3}$) | Glide Only (LVP) 2D ($1024^{2}$) | Glide Only (LVP) 3D ($128^{3}$)
---|---|---|---|---|---|---|---
$\widetilde{C}^{\sigma}_{tot}(k)$ | $\gamma$ | $-2.65$ | $-3.1$ | $-1.65$ | $-3.0$ | $-1.95$ | $-3.1$
$\widetilde{C}^{\sigma}_{tr}(k)$ | $\gamma$ | $-2.65$ | $-2.9$ | $-1.65$ | $-3.0$ | $-1.95$ | $-2.9$
$E(k)$ | $\gamma+d-1$ | $-1.65$ | $-1.1$ | $-0.65$ | $-1.0$ | $-0.95$ | $-1.1$
$\widetilde{\mathcal{C}}^{\rho^{E}}_{tot}(k)$ | $\gamma+2$ | $-0.65$ | $-1.0$ | $0.45$ | $-1.0$ | $-0.05$ | $-1.0$
$\widetilde{\mathcal{C}}^{\rho^{E}}_{per}(k)$ | $\gamma+2$ | $-0.65$ | $-1.0$ | $0.45$ | $-1.0$ | $-0.05$ | $-0.9$
We can rewrite Eq. (152) in Fourier space
$\displaystyle\mathcal{E}$ $\displaystyle=$
$\displaystyle\frac{1}{V}\int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\frac{1}{4\mu}\biggl{[}\widetilde{\sigma}_{ij}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{ij}^{\rm
int}(-\mathbf{k})-\frac{\nu}{1+\nu}\widetilde{\sigma}_{ii}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{jj}^{\rm int}(-\mathbf{k})\biggr{]}.$
(154)
Substituting Eq. (150) and Eq. (151) into Eq. (154) gives
$\mathcal{E}=\int\frac{d^{d}\mathbf{k}}{2^{d+2}\pi^{d}}\frac{1}{\mu}\biggl{[}\widetilde{\mathcal{C}}^{\sigma}_{tot}(\mathbf{k})-\frac{\nu}{1+\nu}\widetilde{\mathcal{C}}^{\sigma}_{tr}(\mathbf{k})\biggr{]}$
(155)
If the stress-stress correlation functions are isotropic, we can integrate out
the angle variable of Eq. (155)
$\mathcal{E}=\int_{0}^{\infty}\\!\\!dk\frac{f(d)}{\mu}k^{d-1}\biggl{[}\widetilde{\mathcal{C}}^{\sigma}_{tot}(k)-\frac{\nu}{1+\nu}\widetilde{\mathcal{C}}^{\sigma}_{tr}(k)\biggr{]},$
(156)
where $f(d)$ is a numerical constant that depends only on the dimension $d$,
$f(d)=\begin{cases}1/(8\pi)&d=2,\\\ 1/(8\pi^{2})&d=3.\end{cases}$ (157)
Writing the elastic energy density in terms of the energy density spectrum
$\mathcal{E}(t)=\int_{0}^{\infty}E(k,t)dk$ implies
$E(k)=\frac{f(d)}{\mu}k^{d-1}\biggl{[}\widetilde{\mathcal{C}}^{\sigma}_{tot}(k)-\frac{\nu}{1+\nu}\widetilde{\mathcal{C}}^{\sigma}_{tr}(k)\biggr{]}.$
(158)
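A two-dimensional sketch of Eq. (158) (ours; the array layout, bin count, and material constants are illustrative) forms the invariants of Eqs. (150)–(151) from a given internal stress field and bins them radially, with $f(2)=1/(8\pi)$:

```python
import numpy as np

def energy_spectrum_2d(sigma, mu=1.0, nu=0.3, nbins=64):
    """Radially binned energy density spectrum E(k), Eq. (158) with d = 2,
    f(2) = 1/(8 pi). `sigma` is the internal stress field, shape (2, 2, N, N)."""
    N = sigma.shape[-1]
    sig_k = np.fft.fftn(sigma, axes=(-2, -1))

    # Rotational invariants of the stress-stress correlations, Eqs. (150)-(151)
    C_tot = np.einsum("ijxy,ijxy->xy", sig_k, sig_k.conj()).real / (N * N)
    trace_k = sig_k[0, 0] + sig_k[1, 1]
    C_tr = (trace_k * trace_k.conj()).real / (N * N)

    integrand = (C_tot - nu / (1.0 + nu) * C_tr) / (8.0 * np.pi * mu)

    # Radial binning over |k|; E(k) carries the k^{d-1} = k factor of Eq. (158)
    k1d = 2 * np.pi * np.fft.fftfreq(N)
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2).ravel()
    bins = np.linspace(0.0, kmag.max(), nbins + 1)
    which = np.digitize(kmag, bins) - 1
    Ek = np.zeros(nbins)
    for b in range(nbins):
        mask = which == b
        if mask.any():
            Ek[b] = 0.5 * (bins[b] + bins[b + 1]) * integrand.ravel()[mask].mean()
    return 0.5 * (bins[:-1] + bins[1:]), Ek

# Usage sketch: log-log plot Ek against k and read off the slope gamma + d - 1.
```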
### E.4 Correlation function of the stressful part of GND density
According to Eq. (14), the stressful part of GND density is defined as
$\rho^{E}_{ij}(\mathbf{x})=\varepsilon_{isl}\partial_{s}\epsilon_{lj}^{\rm
e}(\mathbf{x}).$ (159)
Substituting Eq. (153) into Eq. (159) gives
$\rho^{E}_{ij}=\frac{1}{2\mu}\varepsilon_{isl}\partial_{s}\biggl{(}\sigma_{lj}^{\rm
int}-\frac{\nu}{1+\nu}\delta_{lj}\sigma_{mm}^{\rm int}\biggl{)}.$ (160)
The complete set of rotational invariants of the correlation function of
$\rho^{E}$ includes three scalar forms
$\displaystyle\mathcal{C}^{\rho^{E}}_{tot}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle\rho_{ij}^{E}(\mathbf{x})\rho_{ij}^{E}(0)\rangle,$ (161)
$\displaystyle\mathcal{C}^{\rho^{E}}_{per}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle\rho_{ij}^{E}(\mathbf{x})\rho_{ji}^{E}(0)\rangle,$ (162)
$\displaystyle\mathcal{C}^{\rho^{E}}_{tr}(\mathbf{x})$ $\displaystyle=$
$\displaystyle\langle\rho_{ii}^{E}(\mathbf{x})\rho_{jj}^{E}(0)\rangle,$ (163)
where $\mathcal{C}^{\rho^{E}}_{tr}(\mathbf{x})$ is always zero due to
$\rho^{E}_{ii}=0$.
Substituting Eq. (160) into both Eqs. (161) and (162) and applying the Fourier
transform gives
$\displaystyle\widetilde{\mathcal{C}}^{\rho^{E}}_{tot}(\mathbf{k})$
$\displaystyle=$
$\displaystyle\frac{1}{4\mu^{2}V}\varepsilon_{isl}(ik_{s})\biggl{(}\widetilde{\sigma}_{lj}^{\rm
int}(\mathbf{k})-\frac{\nu}{1+\nu}\delta_{lj}\widetilde{\sigma}_{mm}^{\rm
int}(\mathbf{k})\biggl{)}$
$\displaystyle\times\varepsilon_{ipq}(-ik_{p})\biggl{(}\widetilde{\sigma}_{qj}^{\rm
int}(-\mathbf{k})-\frac{\nu}{1+\nu}\delta_{qj}\widetilde{\sigma}_{nn}^{\rm
int}(-\mathbf{k})\biggl{)}$ $\displaystyle=$
$\displaystyle\frac{k^{2}}{4\mu^{2}}\biggl{(}\frac{1}{V}\widetilde{\sigma}_{lj}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{lj}^{\rm
int}(-\mathbf{k})\biggr{)}-\frac{\nu
k^{2}}{2\mu^{2}(1+\nu)^{2}}\biggl{(}\frac{1}{V}\widetilde{\sigma}_{mm}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{nn}^{\rm int}(-\mathbf{k})\biggr{)},$
(164) $\displaystyle\widetilde{\mathcal{C}}^{\rho^{E}}_{per}(\mathbf{k})$
$\displaystyle=$
$\displaystyle\frac{1}{4\mu^{2}V}\varepsilon_{isl}(ik_{s})\biggl{(}\widetilde{\sigma}_{lj}^{\rm
int}(\mathbf{k})-\frac{\nu}{1+\nu}\delta_{lj}\widetilde{\sigma}_{mm}^{\rm
int}(\mathbf{k})\biggl{)}$
$\displaystyle\times\varepsilon_{jpq}(-ik_{p})\biggl{(}\widetilde{\sigma}_{qi}^{\rm
int}(-\mathbf{k})-\frac{\nu}{1+\nu}\delta_{qi}\widetilde{\sigma}_{nn}^{\rm
int}(-\mathbf{k})\biggl{)}$ $\displaystyle=$
$\displaystyle\frac{k^{2}}{4\mu^{2}}\biggl{(}\frac{1}{V}\widetilde{\sigma}_{lj}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{lj}^{\rm
int}(-\mathbf{k})\biggr{)}-\frac{(1+\nu^{2})k^{2}}{4\mu^{2}(1+\nu)^{2}}\biggl{(}\frac{1}{V}\widetilde{\sigma}_{mm}^{\rm
int}(\mathbf{k})\widetilde{\sigma}_{nn}^{\rm int}(-\mathbf{k})\biggr{)},$ (165)
where we make use of the equilibrium condition $\partial_{i}\sigma_{ij}=0$ and thus $k_{i}\widetilde{\sigma}_{ij}=0$. Substituting Eqs. (150) and (151) into Eqs. (164) and (165) gives
$\widetilde{\mathcal{C}}^{\rho^{E}}_{tot}(\mathbf{k})=\frac{k^{2}}{4\mu^{2}}\biggl{[}\widetilde{\mathcal{C}}^{\sigma}_{tot}(\mathbf{k})-\frac{2\nu}{(1+\nu)^{2}}\widetilde{\mathcal{C}}^{\sigma}_{tr}(\mathbf{k})\biggr{]},$
(166)
$\widetilde{\mathcal{C}}^{\rho^{E}}_{per}(\mathbf{k})=\frac{k^{2}}{4\mu^{2}}\biggl{[}\widetilde{\mathcal{C}}^{\sigma}_{tot}(\mathbf{k})-\frac{1+\nu^{2}}{(1+\nu)^{2}}\widetilde{\mathcal{C}}^{\sigma}_{tr}(\mathbf{k})\biggr{]}.$
(167)
Here we can ignore the angle dependence if the stress-stress correlation
functions are isotropic.
### E.5 Scaling relations
According to Eq. (158), the term $k^{d-1}$ suggests that the power-law
exponent relation between $E$ and $\widetilde{\mathcal{C}}^{\sigma}$ is
$\gamma^{\prime}=\gamma+d-1.$ (168)
Again, both Eqs. (164) and (165) imply that the power-law exponent relation between $\widetilde{\mathcal{C}}^{\rho^{E}}$ and $\widetilde{\mathcal{C}}^{\sigma}$ is
$\gamma^{\prime\prime}=\gamma+2,$ (169)
regardless of the dimension.
Table 3 shows good agreement between the predicted scaling and the numerically measured power-law exponents of $\widetilde{C}^{\sigma}$, $E$, and $\widetilde{\mathcal{C}}^{\rho^{E}}$. These relations are valid in the presence of residual stress.
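As a quick arithmetic check of Eqs. (168)–(169) (ours), the 2D climb-and-glide column of Table 3 gives $\gamma\approx-2.65$, which reproduces the measured exponents of $E(k)$ and $\widetilde{\mathcal{C}}^{\rho^{E}}(k)$:

```python
# Quick check (ours) of Eqs. (168)-(169) against the 2D climb-and-glide
# column of Table 3, where gamma is read off from C^sigma_tot(k).
gamma, d = -2.65, 2
print(gamma + d - 1)   # -1.65, the E(k) exponent in Table 3
print(gamma + 2)       # -0.65, the C^{rho^E} exponent in Table 3
```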
During the relaxation processes, the elastic free energy asymptotically follows a power-law decay in time, as seen in Fig. 6. All the correlation functions of elastic quantities measured above share the same power laws in Fourier space, albeit with magnitudes that decay in time.
## References
* Kawasaki and Takeuchi (1980) Y. Kawasaki and T. Takeuchi, Scr. Metall. 14, 183 (1980).
* Mughrabi et al. (1986) H. Mughrabi, T. Ungar, W. Kienle, and M. Wilkens, Philos. Mag. A 53, 793 (1986).
* Schwink (1992) C. Schwink, Scr. Metall. Mater. 27, 963 (1992).
* Ungár et al. (1986) T. Ungár, L. S. Tóth, J. Illy, and I. Kovács, Acta Metall. 34, 1257 (1986).
* Gil Sevillano et al. (1991) J. Gil Sevillano, E. Bouchaud, and L. P. Kubin, Scr. Metall. Mater. 25, 355 (1991).
* Gil Sevillano (1993) J. Gil Sevillano, Phys. Scr. T49B, 405 (1993).
* Hähner et al. (1998) P. Hähner, K. Bay, and M. Zaiser, Phys. Rev. Lett. 81, 2470 (1998).
* Zaiser et al. (1999) M. Zaiser, K. Bay, and P. Hähner, Acta Mater. 47, 2463 (1999).
* Ananthakrishna (2007) G. Ananthakrishna, Phys. Rep. 440, 113 (2007).
* Bakó and Groma (1999) B. Bakó and I. Groma, Phys. Rev. B 60, 9228 (1999).
* Bakó et al. (2007) B. Bakó, I. Groma, G. Gyorgyi, and G. T. Zimanyi, Phys. Rev. Lett. 98, 075701 (2007).
* Bakó and Hoffelner (2007) B. Bakó and W. Hoffelner, Phys. Rev. B 76, 214108 (2007).
* Madec et al. (2002) R. Madec, B. Devincre, and L. P. Kubin, Scr. Mater. 47, 689 (2002).
* Gomez-Garcia et al. (2006) D. Gomez-Garcia, B. Devincre, and L. P. Kubin, Phys. Rev. Lett. 96, 125503 (2006).
* Walgraef and Aifantis (1985) D. Walgraef and E. Aifantis, Int. J. Eng. Sci. 23, 1351 (1985).
* Hähner (1996) P. Hähner, Appl. Phys. A 62, 473 (1996).
* Saxlová et al. (1997) M. Saxlová, J. Kratochvil, and J. Zatloukal, Mater. Sci. Eng. A 234, 205 (1997).
* Groma and Bakó (2000) I. Groma and B. Bakó, Phys. Rev. Lett. 84, 1487 (2000).
* Pantleon (1996) W. Pantleon, Scr. Metall. 35, 511 (1996).
* Pantleon (1998) W. Pantleon, Acta Mater. 46, 451 (1998).
* Sethna et al. (2003) J. P. Sethna, V. R. Coffman, and E. Demler, Phys. Rev. B 67, 184107 (2003).
* Chen et al. (2010) Y. S. Chen, W. Choi, S. Papanikolaou, and J. P. Sethna, Phys. Rev. Lett. 105, 105501 (2010).
* Acharya (2001) A. Acharya, J. Mech. Phys. Solids 49, 761 (2001).
* Roy and Acharya (2005) A. Roy and A. Acharya, J. Mech. Phys. Solids 53, 143 (2005).
* Acharya and Roy (2006) A. Acharya and A. Roy, J. Mech. Phys. Solids 54, 1687 (2006).
* Limkumnerd and Sethna (2006) S. Limkumnerd and J. P. Sethna, Phys. Rev. Lett. 96, 095503 (2006).
* Nye (1953) J. F. Nye, Act. Metall. 1, 153 (1953).
* Kröner (1958) E. Kröner, _Kontinuumstheorie der Versetzungen und Eigenspannungen_ (Springer, Berlin, Germany, 1958).
* Hughes et al. (1997) D. A. Hughes, Q. Liu, D. C. Chrzan, and N. Hansen, Acta Mater. 45, 105 (1997).
* Hughes et al. (1998) D. A. Hughes, D. C. Chrzan, Q. Liu, and N. Hansen, Phys. Rev. Lett. 81, 4664 (1998).
* Mika and Dawson (1999) D. P. Mika and P. R. Dawson, Acta Mater. 47, 1355 (1999).
* Hughes and Hansen (2001) D. A. Hughes and N. Hansen, Phys. Rev. Lett. 87, 135503 (2001).
* Kuhlmann-Wilsdorf (1985) D. Kuhlmann-Wilsdorf, Metall. Mater. Trans. A 16, 2091 (1985).
* Wert et al. (2007) J. A. Wert, X. Huang, G. Winther, W. Pantleon, and H. F. Poulsen, Materials Today 10, 24 (2007).
* Hansen et al. (2011) N. Hansen, X. Huang, and G. Winther, Metall. Mater. Trans. A 42, 613 (2011).
* Sethna (2006) J. P. Sethna, _Statistical mechanics: Entropy, order parameters, and complexity_ (Oxford University Press, New York, 2006).
* Sethna et al. (2001) J. P. Sethna, K. A. Dahmen, and C. R. Myers, Nature 410, 242 (2001).
* Durin and Zapperi (2006) G. Durin and S. Zapperi, in _The Science of Hysteresis, Vol. II_ , edited by G. Bertotti and I. Mayergoyz (Elsevier, Amsterdam, Netherlands, 2006), pp. 181–267.
* Zaiser (2006) M. Zaiser, Adv. Phys. 55(1-2), 185 (2006).
* Rutenberg and Vollmayr-Lee (1999) A. D. Rutenberg and B. P. Vollmayr-Lee, Phys. Rev. Lett. 83, 3772 (1999).
* Friedman et al. (2012) N. Friedman, A. T. Jennings, G. Tsekenis, J. Y. Kim, M. Tao, J. T. Uhl, J. R. Greer, and K. A. Dahmen, Phys. Rev. Lett. 109, 095507 (2012).
* Martin (1968) P. C. Martin, in _Many-body physics_ , edited by C. de Witt and R. Balian (Gordon and Breach, New York, 1968), pp. 37–136.
* Forster (1975) D. Forster, _Hydrodynamic fluctuations, broken symmetry, and correlation functions_ (Benjamin-Cummings, Reading, MA, 1975).
* Hohenberg and Halperin (1977) P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. 49, 435 (1977).
* Rickman and Viñals (1997) J. M. Rickman and J. Viñals, Philos. Mag. A 75, 1251 (1997).
* Chen et al. (2012a) Y. S. Chen, W. Choi, S. Papanikolaou, and J. P. Sethna (2012a), (manuscript in preparation).
* Chaikin and Lubensky (1995) P. Chaikin and T. Lubensky, _Principles of Condensed Matter Physics_ (Cambridge University Press, Cambridge, England, 1995).
* L’vov (1991) V. L’vov, Phys. Rep. 207, 1 (1991).
* Choi et al. (2012a) W. Choi, Y. S. Chen, S. Papanikolaou, and J. P. Sethna, Comput. Sci. Eng. 14(1), 33 (2012a).
* Salman and Truskinovsky (2012) O. Salman and L. Truskinovsky, Int. J. Eng. Sci. 59, 219 (2012).
* Chen et al. (2012b) Y. S. Chen, W. Choi, S. Papanikolaou, M. Bierbaum, and J. P. Sethna (2012b), Plasticity Tools, http://www.lassp.cornell.edu/sethna/Plasticity/Tools/.
* Landau and Lifshitz (1970) L. D. Landau and E. M. Lifshitz, _Theory of Elasticity_ (Pergamon Press, New York, 1970), 2nd ed.
* Acharya (2003) A. Acharya, Proc. R. Soc. A 459, 1343 (2003).
* Acharya (2004) A. Acharya, J. Mech. Phys. Solids 52, 301 (2004).
* Kröner (1981) E. Kröner, in _Physics of Defects, Les Houches, Session XXXV, 1980_ , edited by R. Balian, M. Kleman, and J. P. Poirier (North-Holland: Amsterdam, 1981), pp. 215–315.
* Limkumnerd and Sethna (2007) S. Limkumnerd and J. P. Sethna, Phys. Rev. B 75, 224121 (2007).
* Kosevich (1979) A. M. Kosevich, in _Dislocations in Solids_ , edited by F. N. R. Nabarro and M. S. Duesbery (North-Holland: Amsterdam, 1979), vol. 1, pp. 33–165.
* Lazar (2011) M. Lazar, Math. Mech. Solids 16, 253 (2011).
* Roy and Acharya (2006) A. Roy and A. Acharya, J. Mech. Phys. Solids 54, 1711 (2006).
* Varadhan et al. (2006) S. Varadhan, A. Beaudoin, and C. Fressengeas, in _Proceedings of Science, SMPRI2005_ (2006), p. 004.
* Kuhlmann-Wilsdorf and Hansen (1991) D. Kuhlmann-Wilsdorf and N. Hansen, Scr. Metall. Mater. 25, 1557 (1991).
* Hughes and Hansen (1993) D. A. Hughes and N. Hansen, Metall. Trans. A 24, 2022 (1993).
* Kiener et al. (2011) D. Kiener, P. J. Guruprasad, S. M. Keralavarma, G. Dehm, and A. A. Benzerga, Acta Mater. 59, 3825 (2011).
* Hirth and Lothe (1982) J. P. Hirth and J. Lothe, _Theory of Dislocations_ (Wiley, New York, 1982).
* Zapperi and Zaiser (2011) S. Zapperi and M. Zaiser (2011), (private communication).
* Limkumnerd (2006) S. Limkumnerd, Ph.D. thesis, Cornell University (2006).
* Falk and Langer (1998) M. L. Falk and J. S. Langer, Phys. Rev. E 57, 7192 (1998).
* Sandfeld et al. (2010) S. Sandfeld, T. Hochrainer, P. Gumbsch, and M. Zaiser, Philos. Mag. 90, 3697–3728 (2010).
* Acharya (2011) A. Acharya, J. Elasticity 104, 23 (2011).
* Godfrey and Hughes (2000) A. Godfrey and D. A. Hughes, Acta Mater. 48, 1897 (2000).
* Liu (1994) Q. Liu, J. Appl. Cryst. 27, 762 (1994).
* Mach et al. (2010) J. Mach, A. J. Beaudoin, and A. Acharya, J. Mech. Phys. Solids 58, 105 (2010).
* Limkumnerd and Sethna (2008) S. Limkumnerd and J. P. Sethna, J. Mech. Phys. Solids 56, 1450 (2008).
* Kurganov et al. (2001) A. Kurganov, S. Noelle, and G. Petrova, SIAM J. Sci. Comput. 23, 707 (2001).
* Pumir and Siggia (1992a) A. Pumir and E. D. Siggia, Phys. Fluids A 4, 1472 (1992a).
* Pumir and Siggia (1992b) A. Pumir and E. D. Siggia, Phys. Rev. Lett. 68, 1511 (1992b).
* Choi et al. (2012b) W. Choi, Y. S. Chen, S. Papanikolaou, and J. P. Sethna (2012b), (manuscript in preparation).
* Acharya and Tartar (2011) A. Acharya and L. Tartar, Bulletin of the Italian Mathematical Union 9(IV), 409 (2011).
* Song et al. (2005) C. Song, S. Havlin, and H. A. Makse, Nature 433, 392 (2005).
* Vergassola et al. (1994) M. Vergassola, B. Dubrulle, U. Frisch, and A. Noullez, Astron. Astrophys. 289, 325 (1994).
* Peebles (1993) P. J. E. Peebles, _Principles of Physical Cosmology_ (Princeton University Press, Princeton, NJ, 1993).
* Coles and Lucchin (1995) P. Coles and F. Lucchin, _Cosmology: the Origin and Evolution of Cosmic Structures_ (Wiley, Chichester, England, 1995).
* Kuhlmann-Wilsdorf (1987) D. Kuhlmann-Wilsdorf, Mater. Sci. Eng. 86, 53 (1987).
* Miguel et al. (2001) M. C. Miguel, A. Vespignani, S. Zapperi, J. Weiss, and J. R. Grasso, Nature 410, 667 (2001).
* Dimiduk et al. (2006) D. M. Dimiduk, C. Woodward, R. LeSar, and M. D. Uchic, Science 312, 1188 (2006).
* Uchic et al. (2004) M. D. Uchic, D. M. Dimiduk, J. N. Florando, and W. D. Nix, Science 305, 986 (2004).
* Schmitt et al. (1991) J. H. Schmitt, J. V. Fernandes, J. J. Gracio, and M. F. Vieira, Mater. Sci. Eng. A 147, 143 (1991).
* Lopes et al. (2003) A. B. Lopes, F. Barlat, J. J. Gracio, J. F. Ferreira Duarte, and E. F. Rauch, Int. J. Plasticity 19, 1 (2003).
* Mura (1991) T. Mura, _Micromechanics of Defects in Solids_ (Kluwer Academic Publishers, Dordrecht, Netherlands, 1991), 2nd ed.
* Kacher et al. (2011) J. Kacher, I. Robertson, M. Nowell, J. Knapp, and K. Hattar, Mater. Sci. Eng. A 528, 1628 (2011).
* Acharya (2012) A. Acharya (2012), (private communication).
|
arxiv-papers
| 2011-06-01T14:54:49 |
2024-09-04T02:49:19.268242
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yong S. Chen, Woosong Choi, Stefanos Papanikolaou, Matthew Bierbaum\n and James P. Sethna",
"submitter": "Yong Chen",
"url": "https://arxiv.org/abs/1106.0195"
}
|
1106.0365
|
# Lower Bounds for Sparse Recovery

This research has been supported in part by David and Lucille Packard Fellowship, MADALGO (Center for Massive Data Algorithmics, funded by the Danish National Research Association) and NSF grant CCF-0728645. E. Price has been supported in part by Cisco Fellowship.

Khanh Do Ba (MIT CSAIL), Piotr Indyk (MIT CSAIL), Eric Price (MIT CSAIL), David P. Woodruff (IBM Almaden)
###### Abstract
We consider the following $k$-sparse recovery problem: design an $m\times n$
matrix $A$, such that for any signal $x$, given $Ax$ we can efficiently
recover $\hat{x}$ satisfying $\left\|x-\hat{x}\right\|_{1}\leq
C\min_{k\mbox{-sparse }x^{\prime}}\left\|x-x^{\prime}\right\|_{1}$. It is
known that there exist matrices $A$ with this property that have only
$O(k\log(n/k))$ rows.
In this paper we show that this bound is tight. Our bound holds even for the
more general randomized version of the problem, where $A$ is a random
variable, and the recovery algorithm is required to work for any fixed $x$
with constant probability (over $A$).
## 1 Introduction
In recent years, a new “linear” approach for obtaining a succinct approximate
representation of $n$-dimensional vectors (or signals) has been discovered.
For any signal $x$, the representation is equal to $Ax$, where $A$ is an
$m\times n$ matrix, or possibly a random variable chosen from some
distribution over such matrices. The vector $Ax$ is often referred to as the
measurement vector or sketch of $x$. Although $m$ is typically much smaller
than $n$, the sketch $Ax$ contains plenty of useful information about the
signal $x$. A particularly useful and well-studied problem is that of stable
sparse recovery: given $Ax$, recover a $k$-sparse vector $\hat{x}$ (i.e.,
having at most $k$ non-zero components) such that
(1) $\left\|x-\hat{x}\right\|_{p}\leq C\min_{k\mbox{-sparse
}x^{\prime}}\left\|x-x^{\prime}\right\|_{q}$
for some norm parameters $p$ and $q$ and an approximation factor $C=C(k)$. If
the matrix $A$ is random, then Equation (1) should hold for each $x$ with some
probability (say, 3/4). Sparse recovery has applications to numerous areas
such as data stream computing [Mut03, Ind07] and compressed sensing [CRT06,
Don06, DDT+08].
It is known that there exist matrices $A$ and associated recovery algorithms
that produce approximations $\hat{x}$ satisfying Equation (1) with $p=q=1$
(i.e., the “$\ell_{1}/\ell_{1}$ guarantee”), constant $C$ and sketch length
$m=O(k\log(n/k))$. In particular, a random Gaussian matrix [CRT06] (in fact, they even achieve a somewhat stronger $\ell_{2}/\ell_{1}$ guarantee; see Section 1.2) or a random sparse binary matrix ([BGI+08], building on [CCFC04, CM05]) has this property with overwhelming probability. In comparison, using a
non-linear approach, one can obtain a shorter sketch of length $O(k)$: it
suffices to store the $k$ coefficients with the largest absolute values,
together with their indices.
Surprisingly, it was not known whether the $O(k\log(n/k))$ bound for linear
sketching could be improved upon in general, although such lower bounds were
known to hold under certain restrictions (see section 1.2 for a more detailed
overview). This raised hope that the $O(k)$ bound might be achievable even for
general vectors $x$. Such a scheme would have been of major practical
interest, since the sketch length determines the compression ratio, and for
large $n$ any extra $\log n$ factor worsens that ratio tenfold.
In this paper we show that, unfortunately, such an improvement is not
possible. We address two types of recovery schemes:
* •
A deterministic one, which involves a fixed matrix $A$ and a recovery
algorithm which work for all signals $x$. The aforementioned results of
[CRT06] and others are examples of such schemes.
* •
A randomized one, where the matrix $A$ is chosen at random from some
distribution, and for each signal $x$ the recovery procedure is correct with
constant probability (say, $3/4$). Some of the early schemes proposed in the
data stream literature (e.g., [CCFC04, CM05]) belong to this category.
Our main result is that, even in the randomized case, the sketch length $m$
must be at least $\Omega(k\log(n/k))$. By the aforementioned result of [CRT06]
this bound is tight.
Thus, our results show that the linear compression is inherently more costly
than the simple non-linear approach.
### 1.1 Our techniques
On a high level, our approach is simple and natural, and utilizes the packing
approach: we show that any two “sufficiently” different vectors $x$ and
$x^{\prime}$ are mapped to images $Ax$ and $Ax^{\prime}$ that are
“sufficiently” different themselves, which requires that the image space is
“sufficiently” high-dimensional. However, the actual arguments are somewhat
subtle.
Consider first the (simpler) deterministic case. We focus on signals $x=y+z$,
where $y$ can be thought of as the “head” of the signal and $z$ as the “tail”.
The “head” vectors $y$ come from a set $Y$ that is a binary error-correcting
code, with a minimum distance $\Omega(k)$, where each codeword has weight $k$.
On the other hand, the “tail” vectors $z$ come from an $\ell_{1}$ ball (say
$B$) with a radius that is a small fraction of $k$. It can be seen that for
any two elements $y,y^{\prime}\in Y$, the balls $y+B$ and $y^{\prime}+B$, as
well as their images, must be disjoint. At the same time, since all vectors
$x$ live in a “large” $\ell_{1}$ ball $B^{\prime}$ of radius $O(k)$, all
images $Ax$ must live in a set $AB^{\prime}$. The key observation is that the
set $AB^{\prime}$ is a scaled version of $A(y+B)$ and therefore the ratios of
their volumes can be bounded by the scaling factor to the power of the
dimension $m$. Since the number of elements of $Y$ is large, this gives a
lower bound on $m$.
Unfortunately, the aforementioned approach does not seem to extend to the
randomized case. A natural approach would be to use Yao’s principle, and focus
on showing a lower bound for a scenario where the matrix $A$ is fixed while
the vectors $x=y+z$ are “random”. However, this approach fails, in a very
strong sense. Specifically, we are able to show that there is a distribution
over matrices $A$ with only $O(k)$ rows so that for a fixed $y\in Y$ and $z$
chosen uniformly at random from the small ball $B$, we can recover $y$ from
$A(y+z)$ with high probability. In a nutshell, the reason is that a random
vector from $B$ has an $\ell_{2}$ norm that is much smaller than the
$\ell_{2}$ norm of elements of $Y$ (even though the $\ell_{1}$ norms are
comparable). This means that the vector $x$ is “almost” $k$-sparse (in the
$\ell_{2}$ norm), which enables us to achieve the $O(k)$ measurement bound.
Instead, we resort to an altogether different approach, via communication
complexity [KN97]. We start by considering a “discrete” scenario where both
the matrix $A$ and the vectors $x$ have entries restricted to the polynomial
range $\\{-n^{c}\ldots n^{c}\\}$ for some $c=O(1)$. In other words, we assume
that the matrix and vector entries can be represented using $O(\log n)$ bits.
In this setting we show the following: there is a method for encoding a
sequence of $d=O(k\log(n/k)\log n)$ bits into a vector $x$, so that any sparse
recovery algorithm can recover that sequence given $Ax$. Since each entry of
$Ax$ conveys only $O(\log n)$ bits, it follows that the number $m$ of rows of
$A$ must be $\Omega(k\log(n/k))$.
The encoding is performed by taking
$x=\sum_{j=1}^{\log n}D^{j}x_{j},$
where $D=O(1)$ and the $x_{j}$’s are chosen from the error-correcting code $Y$
defined as in the deterministic case. The intuition behind this approach is
that a good $\ell_{1}/\ell_{1}$ approximation to $x$ reveals most of the bits
of $x_{\log n}$. This enables us to identify $x_{\log n}$ exactly using error
correction. We could then compute $Ax-Ax_{\log n}=A(\sum_{j=1}^{\log
n-1}D^{j}x_{j})$, and identify $x_{\log n-1}\ldots x_{1}$ in a recursive
manner. The only obstacle to completing this argument is that we would need
the recovery algorithm to work for all $x_{i}$, which would require lower
probability of algorithm failure (roughly $1/\log n$). To overcome this
problem, we replace the encoding argument by a reduction from a related
communication complexity problem called Augmented Indexing. This problem has
been used in the data stream literature [CW09, KNW10] to prove lower bounds
for linear algebra and norm estimation problems. Since the problem has
communication complexity of $\Omega(d)$, the conclusion follows.
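The digit-peeling idea behind this encoding can be illustrated with an idealized sketch (ours): we replace the actual sketch-and-recover step by exact coordinate thresholding, and we use random $k$-sparse binary vectors as stand-ins for the codewords of $Y$, so this shows only why a sufficiently accurate approximation of the residual reveals the top level and allows recursion, not the full reduction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, D, L = 64, 4, 8, 3          # L plays the role of the log n levels

def random_codeword():
    """Stand-in for a weight-k codeword of the code Y."""
    y = np.zeros(n)
    y[rng.choice(n, size=k, replace=False)] = 1.0
    return y

levels = [random_codeword() for _ in range(L)]        # x_1, ..., x_L
x = sum(D**(j + 1) * levels[j] for j in range(L))     # x = sum_j D^j x_j

# Idealized peeling: the top level dominates every coordinate it touches,
# so thresholding the residual at D^j / 2 reveals x_j; subtract and recurse.
residual, decoded = x.copy(), []
for j in reversed(range(L)):
    guess = (residual > D**(j + 1) / 2).astype(float)
    decoded.append(guess)
    residual -= D**(j + 1) * guess

for recovered, truth in zip(decoded, levels[::-1]):
    assert np.array_equal(recovered, truth)
print("all levels recovered")
```

In the actual reduction, the thresholding step is replaced by the output of the $\ell_{1}/\ell_{1}$ recovery algorithm followed by decoding to the nearest codeword of $Y$, and the failure-probability issue is handled via Augmented Indexing as described above.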
We apply the argument to arbitrary matrices $A$ by representing them as a sum
$A^{\prime}+A^{\prime\prime}$, where $A^{\prime}$ has $O(\log n)$ bits of
precision and $A^{\prime\prime}$ has “small” entries. We then show that
$A^{\prime}x=A(x+s)$ for some $s$ with
$\left\|s\right\|_{1}<n^{-\Omega(1)}\left\|x\right\|_{1}$. In the
communication game, this means we can transmit $A^{\prime}x$ and recover
$x_{\log n}$ from $A^{\prime}(\sum_{j=1}^{\log
n}D^{j}x_{j})=A(\sum_{j=1}^{\log n}D^{j}x_{j}+s)$.
One catch is that $s$ depends on $A$. The recovery algorithm is guaranteed to
work with probability $3/4$ for any $x$, so it works with probability $3/4$
over any distribution on $x$ independent of $A$. However, there is no
guarantee about recovery of $x+s$ when $s$ depends on $A$ (even if $s$ is
tiny). To deal with this, we choose a $u$ uniformly from the $\ell_{1}$ ball
of radius $k$. We can set $\left\|s\right\|_{1}\ll k/n$, so $x+u$ and $x+u+s$
are distributions with $o(1)$ statistical distance. Hence recovery from
$A(x+u+s)$ matches recovery from $A(x+u)$ with probability at least $1-o(1)$,
and $\left\|u\right\|_{1}$ is small enough that successful recovery from
$A(x+u)$ identifies $x_{\log n}$. Hence we can recover $x_{\log n}$ from
$A(x+u+s)=A^{\prime}x+Au$ with probability at least $3/4-o(1)>1/2$, which
means that the Augmented Indexing reduction applies to arbitrary matrices as
well.
### 1.2 Related Work
There have been a number of earlier works that have, directly or indirectly,
shown lower bounds for various models of sparse recovery and certain classes
of matrices and algorithms. Specifically, one of the most well-known recovery
algorithms used in compressed sensing is $\ell_{1}$-minimization, where a
signal $x\in\mathbb{R}^{n}$ measured by matrix $A$ is reconstructed as
$\hat{x}:=\operatorname*{arg\,min}_{x^{\prime}:\,Ax^{\prime}=Ax}\|x^{\prime}\|_{1}.$
Kashin and Temlyakov [KT07] (building on prior work on Gelfand width [GG84,
Glu84, Kas77], see also [Don06]) gave a characterization of matrices $A$ for
which the above recovery algorithm yields the $\ell_{2}/\ell_{1}$ guarantee,
i.e.,
$\|x-\hat{x}\|_{2}\leq Ck^{-1/2}\min_{k\mbox{-sparse
}x^{\prime}}\|x-x^{\prime}\|_{1}$
for some constant $C$, from which it can be shown that such an $A$ must have
$m=\Omega(k\log(n/k))$ rows.
Note that the $\ell_{2}/\ell_{1}$ guarantee is somewhat stronger than the
$\ell_{1}/\ell_{1}$ guarantee investigated in this paper. Specifically, it is
easy to observe that if the approximation $\hat{x}$ itself is required to be
$O(k)$-sparse, then the $\ell_{2}/\ell_{1}$ guarantee implies the
$\ell_{1}/\ell_{1}$ guarantee (with a somewhat higher approximation constant).
For the sake of simplicity, in this paper we focus mostly on the
$\ell_{1}/\ell_{1}$ guarantee. However, our lower bounds apply to the
$\ell_{2}/\ell_{1}$ guarantee as well: see footnote on page 3.
The results on Gelfand width can also be used to obtain lower bounds for general recovery algorithms (in the deterministic recovery case), as long as the sparsity parameter $k$ is larger than some constant. This was explicitly stated in [FPRU10]; see also [Don06].
On the other hand, instead of assuming a specific recovery algorithm,
Wainwright [Wai07] assumes a specific (randomized) measurement matrix. More
specifically, the author assumes a $k$-sparse binary signal
$x\in\\{0,\alpha\\}^{n}$, for some $\alpha>0$, to which is added i.i.d.
standard Gaussian noise in each component. The author then shows that with a
random Gaussian matrix $A$, with each entry also drawn i.i.d. from the
standard Gaussian, we cannot hope to recover $x$ from $Ax$ with any sub-
constant probability of error unless $A$ has
$m=\Omega(\frac{1}{\alpha^{2}}\log\frac{n}{k})$ rows. The author also shows
that for $\alpha=\sqrt{1/k}$, this is tight, i.e., that $m=\Theta(k\log(n/k))$
is both necessary and sufficient. Although this is only a lower bound for a
specific (random) matrix, it is a fairly powerful one and provides evidence
that the often observed upper bound of $O(k\log(n/k))$ is likely tight.
More recently, Dai and Milenkovic [DM08], extending on [EG88] and [FR99],
showed an upper bound on superimposed codes that translates to a lower bound
on the number of rows in a compressed sensing matrix that deals only with
$k$-sparse signals but can tolerate measurement noise. Specifically, if we
assume a $k$-sparse signal $x\in([-t,t]\cap\mathbb{Z})^{n}$, and that
arbitrary noise $\mu\in\mathbb{R}^{n}$ with $\|\mu\|_{1}<d$ is added to the
measurement vector $Ax$, then if exact recovery is still possible, $A$ must
have had $m\geq Ck\log n/\log k$ rows, for some constant $C=C(t,d)$ and
sufficiently large $n$ and $k$. (Here $A$ is assumed to have its columns normalized to have $\ell_{1}$-norm 1. This is natural since otherwise we could simply scale $A$ up to make the image points $Ax$ arbitrarily far apart, effectively nullifying the noise.)
## 2 Preliminaries
In this paper we focus on recovering sparse approximations $\hat{x}$ that
satisfy the following $C$-approximate $\ell_{1}/\ell_{1}$ guarantee with
sparsity parameter $k$:
(2) $\left\|x-\hat{x}\right\|_{1}\leq C\min_{k\mbox{-sparse
}x^{\prime}}\left\|x-x^{\prime}\right\|_{1}.$
We define a $C$-approximate deterministic $\ell_{1}/\ell_{1}$ recovery
algorithm to be a pair $(A,\mathscr{A})$ where $A$ is an $m\times n$
observation matrix and $\mathscr{A}$ is an algorithm that, for any $x$, maps
$Ax$ (called the sketch of $x$) to some $\hat{x}$ that satisfies Equation (2).
We define a $C$-approximate randomized $\ell_{1}/\ell_{1}$ recovery algorithm
to be a pair $(A,\mathscr{A})$ where $A$ is a random variable chosen from some
distribution over $m\times n$ measurement matrices, and $\mathscr{A}$ is an
algorithm which, for any $x$, maps a pair $(A,Ax)$ to some $\hat{x}$ that
satisfies Equation (2) with probability at least $3/4$.
We use $B^{n}_{p}(r)$ to denote the $\ell_{p}$ ball of radius $r$ in
$\mathbb{R}^{n}$; we skip the superscript $n$ if it is clear from the context.
For any vector $x$, we use $\|x\|_{0}$ to denote the “$\ell_{0}$ norm of $x$”,
i.e., the number of non-zero entries in $x$.
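Since the best $k$-sparse approximation error in $\ell_{1}$ is simply the sum of the magnitudes of all but the $k$ largest entries, the guarantee of Equation (2) is easy to evaluate for a candidate output; the helper below is our own illustration:

```python
import numpy as np

def l1_l1_ratio(x, xhat, k):
    """||x - xhat||_1 divided by the best k-sparse approximation error;
    Equation (2) asks that this ratio be at most C."""
    tail = np.sort(np.abs(x))[:-k].sum()     # min over k-sparse x'
    return np.abs(x - xhat).sum() / tail

# Example: keeping the k largest entries of x attains the minimum (ratio 1).
x = np.random.default_rng(0).standard_normal(1000)
xhat = np.zeros_like(x)
top = np.argsort(np.abs(x))[-10:]
xhat[top] = x[top]
print(l1_l1_ratio(x, xhat, k=10))   # -> 1.0
```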
## 3 Deterministic Lower Bound
We will prove a lower bound on $m$ for any $C$-approximate deterministic
recovery algorithm. First we use a discrete volume bound (Lemma 3.1) to find a
large set $Y$ of points that are at least $k$ apart from each other. Then we
use another volume bound (Lemma 3.2) on the images of small $\ell_{1}$ balls
around each point in $Y$. If $m$ is too small, some two images collide. But
the recovery algorithm, applied to a point in the collision, must yield an
answer close to two points in $Y$. This is impossible, so $m$ must be large.
###### Lemma 3.1.
(Gilbert-Varshamov) For any $q,k\in\mathbb{Z}^{+},\epsilon\in\mathbb{R}^{+}$
with $\epsilon<1-1/q$, there exists a set $Y\subset\\{0,1\\}^{qk}$ of binary
vectors with exactly $k$ ones, such that $Y$ has minimum Hamming distance
$2\epsilon k$ and
$\log\left|Y\right|>(1-H_{q}(\epsilon))k\log q$
where $H_{q}$ is the $q$-ary entropy function
$H_{q}(x)=-x\log_{q}\frac{x}{q-1}-(1-x)\log_{q}(1-x)$.
See appendix for proof.
###### Lemma 3.2.
Take an $m\times n$ real matrix $A$, positive reals $\epsilon,p,\lambda$, and
$Y\subset B_{p}^{n}(\lambda)$. If $\left|Y\right|>(1+1/\epsilon)^{m}$, then
there exist $z,\overline{z}\in B_{p}^{n}(\epsilon\lambda)$ and
$y,\overline{y}\in Y$ with $y\neq\overline{y}$ and
$A(y+z)=A(\overline{y}+\overline{z})$.
###### Proof.
If the statement is false, then the images of all $\left|Y\right|$ balls
$\\{y+B_{p}^{n}(\epsilon\lambda)\mid y\in Y\\}$ are disjoint. However, those
balls all lie within $B_{p}^{n}((1+\epsilon)\lambda)$, by the bound on the
norm of $Y$. A volume argument gives the result, as follows.
Let $S=AB_{p}^{n}(1)$ be the image of the $n$-dimensional ball of radius $1$
in $m$-dimensional space. This is a polytope with some volume $V$. The image
of $B_{p}^{n}(\epsilon\lambda)$ is a linearly scaled $S$ with volume
$(\epsilon\lambda)^{m}V$, and the volume of the image of
$B_{p}^{n}((1+\epsilon)\lambda)$ is similar with volume
$((1+\epsilon)\lambda)^{m}V$. If the images of the former are all disjoint and
lie inside the latter, we have
$\left|Y\right|(\epsilon\lambda)^{m}V\leq((1+\epsilon)\lambda)^{m}V$, or
$\left|Y\right|\leq(1+1/\epsilon)^{m}$. If $Y$ has more elements than this,
the images of some two balls $y+B_{p}^{n}(\epsilon\lambda)$ and
$\overline{y}+B_{p}^{n}(\epsilon\lambda)$ must intersect, implying the lemma.
∎
###### Theorem 3.1.
Any $C$-approximate deterministic recovery algorithm must have
$m\geq\frac{1-H_{\left\lfloor
n/k\right\rfloor}(1/2)}{\log(4+2C)}k\log\left\lfloor\frac{n}{k}\right\rfloor.$
###### Proof.
Let $Y$ be a maximal set of $k$-sparse $n$-dimensional binary vectors with
minimum Hamming distance $k$, and let $\gamma=\frac{1}{3+2C}$. By Lemma 3.1
with $q=\left\lfloor n/k\right\rfloor$ we have
$\log\left|Y\right|>(1-H_{\left\lfloor
n/k\right\rfloor}(1/2))k\log{\left\lfloor n/k\right\rfloor}$.
Suppose that the theorem is not true; then
$m<\log\left|Y\right|/\log(4+2C)=\log\left|Y\right|/\log(1+1/\gamma)$, or
$\left|Y\right|>(1+\frac{1}{\gamma})^{m}$. Hence Lemma 3.2 gives us some
$y,\overline{y}\in Y$ and $z,\overline{z}\in B_{1}(\gamma k)$ with
$A(y+z)=A(\overline{y}+\overline{z})$.
Let $w$ be the result of running the recovery algorithm on $A(y+z)$. By the
definition of a deterministic recovery algorithm, we have
$\displaystyle\left\|y+z-w\right\|_{1}\leq C\min_{k\mbox{-sparse
}y^{\prime}}\left\|y+z-y^{\prime}\right\|_{1}$
$\displaystyle\left\|y-w\right\|_{1}-\left\|z\right\|_{1}\leq
C\left\|z\right\|_{1}$
$\displaystyle\left\|y-w\right\|_{1}\leq(1+C)\left\|z\right\|_{1}\leq(1+C)\gamma
k=\tfrac{1+C}{3+2C}\,k,$
and similarly $\left\|\overline{y}-w\right\|_{1}\leq\frac{1+C}{3+2C}\,k$, so
$\displaystyle\left\|y-\overline{y}\right\|_{1}$
$\displaystyle\leq\left\|y-w\right\|_{1}+\left\|\overline{y}-w\right\|_{1}=\frac{2+2C}{3+2C}k<k.$
But this contradicts the definition of $Y$, so $m$ must be large enough for
the guarantee to hold. ∎
###### Corollary 3.1.
If $C$ is a constant bounded away from zero, then $m=\Omega(k\log(n/k))$.
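For concreteness, the bound of Theorem 3.1 can be evaluated numerically; the helper below (ours, with illustrative parameters) uses the $q$-ary entropy of Lemma 3.1 with $q=\lfloor n/k\rfloor$ and $\epsilon=1/2$:

```python
import math

def hq(x, q):
    """q-ary entropy H_q(x), as in Lemma 3.1."""
    return -x * math.log(x / (q - 1), q) - (1 - x) * math.log(1 - x, q)

def deterministic_lower_bound(n, k, C):
    """Lower bound on the number of rows m from Theorem 3.1."""
    q = n // k
    return (1 - hq(0.5, q)) / math.log2(4 + 2 * C) * k * math.log2(q)

# Example: n = 2**20, k = 2**6, C = 2 gives a bound of order k * log2(n/k).
print(deterministic_lower_bound(2**20, 2**6, C=2))
```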
## 4 Randomized Upper Bound for Uniform Noise
The standard way to prove a randomized lower bound is to find a distribution
of hard inputs, and to show that any deterministic algorithm is likely to fail
on that distribution. In our context, we would like to define a “head” random
variable $y$ from a distribution $Y$ and a “tail” random variable $z$ from a
distribution $Z$, such that any algorithm given the sketch of $y+z$ must
recover an incorrect $y$ with non-negligible probability.
Using our deterministic bound as inspiration, we could take $Y$ to be uniform
over a set of $k$-sparse binary vectors of minimum Hamming distance $k$ and
$Z$ to be uniform over the ball $B_{1}(\gamma k)$ for some constant
$\gamma>0$. Unfortunately, as the following theorem shows, one can actually
perform a recovery of such vectors using only $O(k)$ measurements; this is
because $\left\|z\right\|_{2}$ is very small (namely, $\tilde{O}(k/\sqrt{n})$)
with high probability.
###### Theorem 4.1.
Let $Y\subset\mathbb{R}^{n}$ be a set of signals with the property that for
every distinct $y_{1},y_{2}\in Y$, $\|y_{1}-y_{2}\|_{2}\geq r$, for some
parameter $r>0$. Consider “noisy signals” $x=y+z$, where $y\in Y$ and $z$ is a
“noise vector” chosen uniformly at random from $B_{1}(s)$, for another
parameter $s>0$. Then using an $m\times n$ Gaussian measurement matrix
$A=(1/\sqrt{m})(g_{ij})$, where $g_{ij}$’s are i.i.d. standard Gaussians, we
can recover $y\in Y$ from $A(y+z)$ with probability $1-1/n$ (where the
probability is over both $A$ and $z$), as long as
$s\leq O\left(\frac{rm^{1/2}n^{1/2-1/m}}{|Y|^{1/m}\log^{3/2}n}\right).$
To prove the theorem we will need the following two lemmas.
###### Lemma 4.1.
For any $\delta>0$, $y_{1},y_{2}\in Y$, $y_{1}\not=y_{2}$, and
$z\in\mathbb{R}^{n}$, each of the following holds with probability at least
$1-\delta$:
* •
$\|A(y_{1}-y_{2})\|_{2}\geq\frac{\delta^{1/m}}{3}\|y_{1}-y_{2}\|_{2}$, and
* •
$\|Az\|_{2}\leq(\sqrt{(8/m)\log(1/\delta)}+1)\|z\|_{2}$.
See the appendix for the proof.
###### Lemma 4.2.
A random vector $z$ chosen uniformly from $B_{1}(s)$ satisfies
$\Pr[\|z\|_{2}>\alpha s\log n/\sqrt{n}]<1/n^{\alpha-1}.$
See the appendix for the proof.
Proof of theorem. In words, Lemma 4.1 says that $A$ cannot bring faraway
signal points too close together, and cannot blow up a small noise vector too
much. Now, we already assumed the signals to be far apart, and Lemma 4.2 tells
us that the noise is indeed small (in $\ell_{2}$ distance). The result is that
in the image space, the noise is not enough to confuse different signals.
Quantitatively, applying the second part of Lemma 4.1 with $\delta=1/n^{2}$,
and Lemma 4.2 with $\alpha=3$, gives us
(3) $\|Az\|_{2}\leq O\left(\frac{\log^{1/2}n}{m^{1/2}}\right)\|z\|_{2}\leq
O\left(\frac{s\log^{3/2}n}{(mn)^{1/2}}\right)$
with probability $\geq 1-2/n^{2}$. On the other hand, given signal $y_{1}\in
Y$, we know that every other signal $y_{2}\in Y$ satisfies
$\|y_{1}-y_{2}\|_{2}\geq r$, so by the first part of Lemma 4.1 with
$\delta=1/(2n|Y|)$, together with a union bound over every $y_{2}\in Y$,
(4)
$\|A(y_{1}-y_{2})\|_{2}\geq\frac{\|y_{1}-y_{2}\|_{2}}{3(2n|Y|)^{1/m}}\geq\frac{r}{3(2n|Y|)^{1/m}}$
holds for every $y_{2}\in Y$, $y_{2}\not=y_{1}$, simultaneously with
probability $1-1/(2n)$.
Finally, observe that as long as $\|Az\|_{2}<\|A(y_{1}-y_{2})\|_{2}/2$ for
every competing signal $y_{2}\in Y$, we are guaranteed that
$\displaystyle\|A(y_{1}+z)-Ay_{1}\|_{2}$ $\displaystyle=$
$\displaystyle\|Az\|_{2}$ $\displaystyle<$
$\displaystyle\|A(y_{1}-y_{2})\|_{2}-\|Az\|_{2}$ $\displaystyle\leq$
$\displaystyle\|A(y_{1}+z)-Ay_{2}\|_{2}$
for every $y_{2}\not=y_{1}$, so we can recover $y_{1}$ by simply returning the
signal whose image is closest to our measurement point $A(y_{1}+z)$ in
$\ell_{2}$ distance. To achieve this, we can chain Equations (3) and (4)
together (with a factor of 2), to see that
$s\leq O\left(\frac{rm^{1/2}n^{1/2-1/m}}{|Y|^{1/m}\log^{3/2}n}\right)$
suffices. Our total probability of failure is at most $2/n^{2}+1/(2n)<1/n$.
The main consequence of this theorem is that for the setup we used in Section
3 to prove a deterministic lower bound of $\Omega(k\log(n/k))$, if we simply
draw the noise uniformly randomly from the same $\ell_{1}$ ball (in fact, even
one with a much larger radius, namely, polynomial in $n$), this “hard
distribution” can be defeated with just $O(k)$ measurements:
###### Corollary 4.1.
If $Y$ is a set of binary $k$-sparse vectors, as in Section 3, and noise $z$
is drawn uniformly at random from $B_{1}(s)$, then for any constant
$\epsilon>0$, $m=O(k/\epsilon)$ measurements suffice to recover any signal in
$Y$ with probability $1-1/n$, as long as
$s\leq O\left(\frac{k^{3/2+\epsilon}n^{1/2-\epsilon}}{\log^{3/2}n}\right).$
###### Proof.
The parameters in this case are $r=k$ and $|Y|\leq\binom{n}{k}\leq(ne/k)^{k}$,
so by Theorem 4.1, it suffices to have
$s\leq O\left(\frac{k^{3/2+k/m}n^{1/2-(k+1)/m}}{\log^{3/2}n}\right).$
Choosing $m=(k+1)/\epsilon$ yields the corollary. ∎
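A small numerical experiment in the spirit of Theorem 4.1 (ours; the code set, noise radius, and dimensions are illustrative rather than taken from the paper) samples noise uniformly from an $\ell_{1}$ ball, measures with a normalized Gaussian matrix, and recovers by returning the signal whose image is nearest in $\ell_{2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m, num_signals, trials = 500, 5, 40, 50, 200

# Random k-sparse binary "head" signals (a stand-in for the code Y of Section 3)
Y = np.zeros((num_signals, n))
for row in Y:
    row[rng.choice(n, size=k, replace=False)] = 1.0

def sample_l1_ball(dim, radius):
    """Uniform sample from the l1 ball B_1(radius) via exponential spacings."""
    e = rng.exponential(size=dim) * rng.choice([-1.0, 1.0], size=dim)
    return radius * rng.uniform() ** (1.0 / dim) * e / np.abs(e).sum()

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
s = 3.0                                        # noise radius (in l1)

correct = 0
for _ in range(trials):
    idx = rng.integers(num_signals)
    meas = A @ (Y[idx] + sample_l1_ball(n, s))
    # Recover by nearest image point, as in the proof of Theorem 4.1
    guess = np.argmin(np.linalg.norm(Y @ A.T - meas, axis=1))
    correct += int(guess == idx)
print(correct / trials)   # close to 1 for these (illustrative) parameters
```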
## 5 Randomized Lower Bound
Although it is possible to partially circumvent this obstacle by focusing our
noise distribution on “high” $\ell_{2}$ norm, sparse vectors, we are able to
obtain stronger results via a reduction from a communication game and the
corresponding lower bound.
The communication game will show that a message $Ax$ must have a large number
of bits. To show that this implies a lower bound on the number of rows of $A$,
we will need $A$ to be discrete. Hence we first show that discretizing $A$
does not change its recovery characteristics by much.
### 5.1 Discretizing Matrices
Before we discretize by rounding, we need to ensure that the matrix is well
conditioned. We show that without loss of generality, the rows of $A$ are
orthonormal.
We can multiply $A$ on the left by any invertible matrix to get another
measurement matrix with the same recovery characteristics. If we consider the
singular value decomposition $A=U\Sigma V^{*}$, where $U$ and $V$ are
orthonormal and $\Sigma$ is 0 off the diagonal, this means that we can
eliminate $U$ and make the entries of $\Sigma$ be either $0$ or $1$. The
result is a matrix consisting of $m$ orthonormal rows. For such matrices, we
prove the following:
###### Lemma 5.1.
Consider any $m\times n$ matrix $A$ with orthonormal rows. Let $A^{\prime}$ be
the result of rounding $A$ to $b$ bits per entry. Then for any
$v\in\mathbb{R}^{n}$ there exists an $s\in\mathbb{R}^{n}$ with
$A^{\prime}v=A(v-s)$ and
$\left\|s\right\|_{1}<n^{2}2^{-b}\left\|v\right\|_{1}$.
###### Proof.
Let $A^{\prime\prime}=A-A^{\prime}$ be the roundoff error when discretizing
$A$ to $b$ bits, so each entry of $A^{\prime\prime}$ is less than $2^{-b}$.
Then for any $v$ and $s=A^{T}A^{\prime\prime}v$, we have
$As=A^{\prime\prime}v$ and
$\displaystyle\left\|s\right\|_{1}$
$\displaystyle=\left\|A^{T}A^{\prime\prime}v\right\|_{1}\leq\sqrt{n}\left\|A^{\prime\prime}v\right\|_{1}$
$\displaystyle\leq m\sqrt{n}2^{-b}\left\|v\right\|_{1}\leq
n^{2}2^{-b}\left\|v\right\|_{1}.$
∎
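A quick numerical check of Lemma 5.1 (ours; the dimensions and bit depth are illustrative): round an orthonormal-row matrix to $b$ bits, set $s=A^{T}A^{\prime\prime}v$, and verify both the identity $A^{\prime}v=A(v-s)$ and the $\ell_{1}$ bound:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, b = 10, 50, 16

# Orthonormal rows via QR of a random Gaussian matrix
A = np.linalg.qr(rng.standard_normal((n, m)))[0].T

Ap = np.round(A * 2**b) / 2**b            # A': A rounded to b bits per entry
v = rng.standard_normal(n)

s = A.T @ (A - Ap) @ v                    # s = A^T A'' v, as in the proof
print(np.allclose(Ap @ v, A @ (v - s)))                      # A'v = A(v - s)
print(np.abs(s).sum() <= n**2 * 2**-b * np.abs(v).sum())     # l1 bound on s
```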
### 5.2 Communication Complexity
We use a few definitions and results from two-party communication complexity.
For further background see the book by Kushilevitz and Nisan [KN97]. Consider
the following communication game. There are two parties, Alice and Bob. Alice
is given a string $y\in\\{0,1\\}^{d}$. Bob is given an index $i\in[d]$,
together with $y_{i+1},y_{i+2},\ldots,y_{d}$. The parties also share an
arbitrarily long common random string $r$. Alice sends a single message
$M(y,r)$ to Bob, who must output $y_{i}$ with probability at least $2/3$,
where the probability is taken over $r$. We refer to this problem as Augmented
Indexing. The communication cost of Augmented Indexing is the minimum, over
all correct protocols, of the length of the message $M(y,r)$ on the worst-case
choice of $r$ and $y$.
The next theorem is well-known and follows from Lemma 13 of [MNSW98] (see also
Lemma 2 of [BYJKK04]).
###### Theorem 5.1.
The communication cost of Augmented Indexing is $\Omega(d)$.
###### Proof.
First, consider the private-coin version of the problem, in which both parties
can toss coins, but do not share a random string $r$ (i.e., there is no public
coin). Consider any correct protocol for this problem. We can assume the
probability of error of the protocol is an arbitrarily small positive constant
by increasing the length of Alice’s message by a constant factor (e.g., by
independent repetition and a majority vote). Applying Lemma 13 of [MNSW98]
(with, in their notation, $t=1$ and $a=c^{\prime}\cdot d$ for a sufficiently
small constant $c^{\prime}>0$), the communication cost of such a protocol must
be $\Omega(d)$. Indeed, otherwise there would be a protocol in which Bob could
output $y_{i}$ with probability greater than $1/2$ without any interaction
with Alice, contradicting that $\Pr[y_{i}=1]=1/2$ and that Bob has no
information about $y_{i}$. Our theorem now follows from Newman’s theorem (see,
e.g., Theorem 2.4 of [KNR99]), which shows that the communication cost of the
best public coin protocol is at least that of the private coin protocol minus
$O(\log d)$ (which also holds for one-round protocols). ∎
### 5.3 Randomized Lower Bound Theorem
###### Theorem 5.2.
For any randomized $\ell_{1}/\ell_{1}$ recovery algorithm $(A,\mathscr{A})$,
with approximation factor $C=O(1)$, $A$ must have $m=\Omega(k\log(n/k))$ rows.
###### Proof.
We shall assume, without loss of generality, that $n$ and $k$ are powers of
$2$, that $k$ divides $n$, and that the rows of $A$ are orthonormal. The proof
for the general case follows with minor modifications.
Let $(A,\mathscr{A})$ be such a recovery algorithm. We will show how to solve
the Augmented Indexing problem on instances of size $d=\Omega(k\log(n/k)\log
n)$ with communication cost $O(m\log n)$. The theorem will then follow by
Theorem 5.1.
Let $X$ be the maximal set of $k$-sparse $n$-dimensional binary vectors with
minimum Hamming distance $k$. From Lemma 3.1 we have
$\log\left|X\right|=\Omega(k\log(n/k))$. Let
$d=\left\lfloor\log\left|X\right|\right\rfloor\log n$, and define $D=2C+3$.
Alice is given a string $y\in\\{0,1\\}^{d}$, and Bob is given $i\in[d]$
together with $y_{i+1},y_{i+2},\ldots,y_{d}$, as in the setup for Augmented
Indexing.
Alice splits her string $y$ into $\log n$ contiguous chunks
$y^{1},y^{2},\ldots,y^{\log n}$, each containing
$\left\lfloor\log\left|X\right|\right\rfloor$ bits. She uses $y^{j}$ as an
index into $X$ to choose $x_{j}$. Alice defines
$x=D^{1}x_{1}+D^{2}x_{2}+\cdots+D^{\log n}x_{\log n}.$
Alice and Bob use the common randomness $r$ to agree upon a random matrix $A$
with orthonormal rows. Both Alice and Bob round $A$ to form $A^{\prime}$ with
$b=\left\lceil(4+2\log D)\log n\right\rceil=O(\log n)$ bits per entry. Alice
computes $A^{\prime}x$ and transmits it to Bob.
From Bob’s input $i$, he can compute the value $j=j(i)$ for which the bit
y_{i}$ occurs in $y^{j}$. Bob’s input also contains $y_{i+1},\ldots,y_{d}$,
from which he can reconstruct $x_{j+1},\ldots,x_{\log n}$, and in particular
can compute
$z=D^{j+1}x_{j+1}+D^{j+2}x_{j+2}+\cdots+D^{\log n}x_{\log n}.$
Set $w=x-z=\sum_{i=1}^{j}D^{i}x_{i}$. Bob then computes $A^{\prime}z$, and
using $A^{\prime}x$ and linearity, $A^{\prime}w$. Then
$\left\|w\right\|_{1}\leq\sum_{i=1}^{j}kD^{i}<k\frac{D^{1+j}}{D-1}<kD^{2\log
n}.$
So from Lemma 5.1, there exists some $s$ with $A^{\prime}w=A(w-s)$ and
$\left\|s\right\|_{1}<n^{2}2^{-3\log n-2\log D\log
n}\left\|w\right\|_{1}<k/n^{2}.$
Bob chooses another vector $u$ uniformly from $B_{1}^{n}(k)$, the $\ell_{1}$
ball of radius $k$, and computes $A(w-s-u)=A^{\prime}w-Au$.
Bob runs the estimation algorithm $\mathscr{A}$ on $A$ and $A(w-s-u)$,
obtaining $\hat{w}$. We have that $u$ is independent of $w$ and $s$, and that
$\left\|u\right\|_{1}\leq k(1-1/n^{2})\leq k-\left\|s\right\|_{1}$ with
probability
$\frac{\text{Vol}(B_{1}^{n}(k(1-1/n^{2})))}{\text{Vol}(B_{1}^{n}(k))}=(1-1/n^{2})^{n}>1-1/n$.
But $\\{w-u\mid\left\|u\right\|_{1}\leq
k-\left\|s\right\|_{1}\\}\subseteq\\{w-s-u\mid\left\|u\right\|_{1}\leq k\\}$,
so the ranges of the random variables $w-s-u$ and $w-u$ overlap in at least a
$1-1/n$ fraction of their volumes. Therefore $w-s-u$ and $w-u$ have
statistical distance at most $1/n$. The distribution of $w-u$ is independent
of $A$, so running the recovery algorithm on $A(w-u)$ would work with
probability at least $3/4$. Hence with probability at least $3/4-1/n\geq 2/3$
(for $n$ large enough), $\hat{w}$ satisfies the recovery criterion for $w-u$,
meaning
$\left\|w-u-\hat{w}\right\|_{1}\leq C\min_{k\mbox{-sparse
}w^{\prime}}\left\|w-u-w^{\prime}\right\|_{1}.$
Now,
$\displaystyle\left\|D^{j}x_{j}-\hat{w}\right\|_{1}$
$\displaystyle\leq\left\|w-u-D^{j}x_{j}\right\|_{1}+\left\|w-u-\hat{w}\right\|_{1}$
$\displaystyle\leq(1+C)\left\|w-u-D^{j}x_{j}\right\|_{1}$
$\displaystyle\leq(1+C)(\left\|u\right\|_{1}+\sum_{i=1}^{j-1}\left\|D^{i}x_{i}\right\|_{1})$
$\displaystyle\leq(1+C)k\sum_{i=0}^{j-1}D^{i}$
$\displaystyle<k\cdot\frac{(1+C)D^{j}}{D-1}$ $\displaystyle=kD^{j}/2.$
And since the minimum Hamming distance in $X$ is $k$, this means
$\left\|D^{j}x_{j}-\hat{w}\right\|_{1}<\left\|D^{j}x^{\prime}-\hat{w}\right\|_{1}$
for all $x^{\prime}\in X$, $x^{\prime}\neq x_{j}$ (these bounds would still
hold with minor modification if we replaced the $\ell_{1}/\ell_{1}$ guarantee
with the $\ell_{2}/\ell_{1}$ guarantee, so the same result holds in that
case). So Bob can correctly identify $x_{j}$ with probability at least
$2/3$. From $x_{j}$ he can recover $y^{j}$, and hence the bit $y_{i}$ that
occurs in $y^{j}$.
Hence, Bob solves Augmented Indexing with probability at least $2/3$ given the
message $A^{\prime}x$. The entries in $A^{\prime}$ and $x$ are polynomially
bounded integers (up to scaling of $A^{\prime}$), and so each entry of
$A^{\prime}x$ takes $O(\log n)$ bits to describe. Hence, the communication
cost of this protocol is $O(m\log n)$. By Theorem 5.1, $m\log
n=\Omega(k\log(n/k)\log n)$, or $m=\Omega(k\log(n/k))$. ∎
## References
* [BGI+08] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss. Combining geometry and combinatorics: a unified approach to sparse signal recovery. Allerton, 2008.
* [BYJKK04] Z. Bar-Yossef, T. S. Jayram, R. Krauthgamer, and R. Kumar. The sketching complexity of pattern matching. RANDOM, 2004.
* [CCFC04] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. Theor. Comput. Sci., 312(1):3–15, 2004.
* [CM05] G. Cormode and S. Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. J. Algorithms, 55(1):58–75, 2005.
* [CRT06] E. J. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1208–1223, 2006.
* [CW09] K. L. Clarkson and D. P. Woodruff. Numerical linear algebra in the streaming model. In STOC, pages 205–214, 2009.
* [DDT+08] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk. Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine, 2008.
* [DM08] W. Dai and O. Milenkovic. Weighted superimposed codes and constrained integer compressed sensing. Preprint, 2008.
* [Don06] D. L. Donoho. Compressed Sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, Apr. 2006.
* [EG88] T. Ericson and L. Györfi. Superimposed codes in $\mathbb{R}^{n}$. IEEE Trans. on Information Theory, 34(4):877–880, 1988.
* [FPRU10] S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich. The gelfand widths of lp-balls for $0<p\leq 1$. J. Complexity, 26:629–640, 2010.
* [FR99] Z. Füredi and M. Ruszinkó. An improved upper bound of the rate of euclidean superimposed codes. IEEE Trans. on Information Theory, 45(2):799–802, 1999.
* [GG84] A. Y. Garnaev and E. D. Gluskin. On widths of the Euclidean ball. Sov. Math., Dokl., pages 200–204, 1984.
* [Glu84] E. D. Gluskin. Norms of random matrices and widths of finite-dimensional sets. Math. USSR-Sb., 48:173–182, 1984.
* [IN07] P. Indyk and A. Naor. Nearest neighbor preserving embeddings. ACM Trans. on Algorithms, 3(3), Aug. 2007.
* [Ind07] P. Indyk. Sketching, streaming and sublinear-space algorithms. Graduate course notes, available at `http://stellar.mit.edu/S/course/6/fa07/6.895/`, 2007.
* [Kas77] B. S. Kashin. Diameters of some finite-dimensional sets and classes of smooth functions. Math. USSR, Izv., 11:317–333, 1977.
* [KN97] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
* [KNR99] I. Kremer, N. Nisan, and D. Ron. On randomized one-round communication complexity. Computational Complexity, 8(1):21–49, 1999.
* [KNW10] D. Kane, J. Nelson, and D. Woodruff. On the exact space complexity of sketching and streaming small norms. In SODA, 2010.
* [KT07] B. S. Kashin and V. N. Temlyakov. A remark on compressed sensing. Preprint, 2007.
* [MNSW98] P. B. Miltersen, N. Nisan, S. Safra, and A. Wigderson. On data structures and asymmetric communication complexity. J. Comput. Syst. Sci., 57(1):37–49, 1998.
* [Mut03] S. Muthukrishnan. Data streams: Algorithms and applications (invited talk at soda’03). Available at http://athos.rutgers.edu/$\sim$muthu/stream-1-1.ps, 2003.
* [vL98] J.H. van Lint. Introduction to coding theory. Springer, 1998.
* [Wai07] M. Wainwright. Information-theoretic bounds on sparsity recovery in the high-dimensional and noisy setting. IEEE Int’l Symp. on Information Theory, 2007.
## Appendix A Proof of Lemma 3.1
###### Proof.
We will construct a codebook $T$ of block length $k$, alphabet $q$, and
minimum Hamming distance $\epsilon k$. Replacing each character $i$ with the
$q$-long standard basis vector $e_{i}$ will create a binary $qk$-dimensional
codebook $S$ with minimum Hamming distance $2\epsilon k$ of the same size as
$T$, where each element of $S$ has exactly $k$ ones.
The Gilbert-Varshamov bound, based on volumes of Hamming balls, states that a
codebook of size $L$ exists for some
$L\geq\frac{q^{k}}{\sum_{i=0}^{\epsilon k-1}\binom{k}{i}(q-1)^{i}}.$
Using the claim (analogous to [vL98], p. 21, proven below) that for
$\epsilon<1-1/q$
$\sum_{i=0}^{\epsilon k}\binom{k}{i}(q-1)^{i}<q^{H_{q}(\epsilon)k},$
we have that $\log L>(1-H_{q}(\epsilon))k\log q$, as desired. ∎
###### Claim A.1.
For $0<\epsilon<1-1/q$,
$\sum_{i=0}^{\epsilon k}\binom{k}{i}(q-1)^{i}<q^{H_{q}(\epsilon)k}.$
###### Proof.
Note that
$q^{-H_{q}(\epsilon)}=\left(\frac{\epsilon}{(q-1)(1-\epsilon)}\right)^{\epsilon}(1-\epsilon)<(1-\epsilon).$
Then
$\displaystyle 1$ $\displaystyle=(\epsilon+(1-\epsilon))^{k}$
$\displaystyle>\sum_{i=0}^{\epsilon
k}\binom{k}{i}\epsilon^{i}(1-\epsilon)^{k-i}$
$\displaystyle=\sum_{i=0}^{\epsilon
k}\binom{k}{i}(q-1)^{i}\left(\frac{\epsilon}{(q-1)(1-\epsilon)}\right)^{i}(1-\epsilon)^{k}$
$\displaystyle>\sum_{i=0}^{\epsilon
k}\binom{k}{i}(q-1)^{i}\left(\frac{\epsilon}{(q-1)(1-\epsilon)}\right)^{\epsilon
k}(1-\epsilon)^{k}$ $\displaystyle=q^{-H_{q}(\epsilon)k}\sum_{i=0}^{\epsilon
k}\binom{k}{i}(q-1)^{i}$
∎
## Appendix B Proof of Lemma 4.1
###### Proof.
By standard arguments (see, e.g., [IN07]), for any $D>0$ we have
$\Pr\left[\|A(y_{1}-y_{2})\|_{2}\leq\frac{\|y_{1}-y_{2}\|_{2}}{D}\right]\leq\left(\frac{3}{D}\right)^{m}$
and
$\Pr[\|Az\|_{2}\geq D\|z\|_{2}]\leq e^{-m(D-1)^{2}/8}.$
Setting both right-hand sides to $\delta$ yields the lemma. ∎
## Appendix C Proof of Lemma 4.2
###### Proof.
Consider the distribution of a single coordinate of $z$, say, $z_{1}$. The
probability density of $|z_{1}|$ taking value $t\in[0,s]$ is proportional to
the $(n-1)$-dimensional volume of $B_{1}^{(n-1)}(s-t)$, which in turn is
proportional to $(s-t)^{n-1}$. Normalizing to ensure the probability
integrates to 1, we derive this probability as
$p(|z_{1}|=t)=\frac{n}{s^{n}}(s-t)^{n-1}.$
It follows that, for any $D\in[0,s]$,
$\Pr[|z_{1}|>D]=\int_{D}^{s}\frac{n}{s^{n}}(s-t)^{n-1}\;dt=(1-D/s)^{n}.$
In particular, for any $\alpha>1$,
$\Pr[|z_{1}|>\alpha s\log n/n]=(1-\alpha\log n/n)^{n}<e^{-\alpha\log n}=1/n^{\alpha}.$
Now, by symmetry this holds for every other coordinate $z_{i}$ of $z$ as well,
so by the union bound
$\Pr[\|z\|_{\infty}>\alpha s\log n/n]<1/n^{\alpha-1},$
and since $\|z\|_{2}\leq\sqrt{n}\cdot\|z\|_{\infty}$ for any vector $z$, the
lemma follows. ∎
|
arxiv-papers
| 2011-06-02T05:20:14 |
2024-09-04T02:49:19.285523
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Khanh Do Ba, Piotr Indyk, Eric Price, and David P. Woodruff",
"submitter": "Eric Price",
"url": "https://arxiv.org/abs/1106.0365"
}
|
1106.0368
|
# Study of scalar meson $a_{0}(1450)$ from $B\to a_{0}(1450)K^{*}$ Decays
Zhi-Qing Zhang 111Electronic address: zhangzhiqing@haut.edu.cn Department of
Physics, Henan University of Technology, Zhengzhou, Henan 450052, P.R.China
###### Abstract
In the two-quark model supposition for the meson $a_{0}(1450)$, which can be
viewed as either the first excited state (scenario I) or the lowest lying
state (scenario II), the branching ratios and the direct CP-violating
asymmetries for decays $B^{-}\to a^{0}_{0}(1450)K^{*-},a^{-}_{0}(1450)K^{*0}$
and $\bar{B}^{0}\to a^{+}_{0}(1450)K^{*-},a^{0}_{0}(1450)\bar{K}^{*0}$ are
studied by employing the perturbative QCD factorization approach. We find the
following results: (a) For the decays $B^{-}\to
a^{-}_{0}(1450)K^{*0},\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-},a^{0}_{0}(1450)\bar{K}^{*0}$, their branching ratios in
scenario II are larger than those in scenario I by about one order of
magnitude, so experiments can readily differentiate between scenarios I and II
for the meson $a_{0}(1450)$. (b) For the decay $B^{-}\to a^{0}_{0}(1450)K^{*-}$,
which does not receive the enhancement from the $K^{*}$-emission factorizable
diagrams, the penguin operator contributions are the smallest in scenario II,
which makes its branching ratio drop to the order of $10^{-6}$. Even so, its
branching ratio in scenario II is still about $2.5$ times larger than that in
scenario I. (c) Although our predictions are much larger than the QCD
factorization results, the two are still consistent with each other within the
large theoretical errors from the annihilation diagrams. (d) We predict that
the direct CP-violating asymmetry of the decay $B^{-}\to
a^{-}_{0}(1450)K^{*0}$ is small, only a few percent.
###### pacs:
13.25.Hw, 12.38.Bx, 14.40.Nd
## I Introduction
Along with many scalar mesons found in experiments, more and more efforts have
been made to study the scalar meson spectrum theoretically nato ; jaffe ; jwei
; baru ; celenza ; stro ; close1 . Unlike the pseudoscalar, vector, axial, and
tensor mesons constant of light quarks, which are reasonable in terms of their
$SU(3)$ classification and quark content, the scalar mesons are too many to
accommodate them in one nonet. In fact, the number of the current
experimentally known scalar mesons is more than 2 times that of a nonet. So it
is believed that there are at least two nonets below and above 1 GeV. Today,
it is still a difficult but interesting topic. Our most important task is to
uncover the mysterious structure of the scalar mesons. There are two typical
schemes for the classification to them nato ; jaffe . Scenario I (SI): the
nonet mesons below 1 GeV, including $f_{0}(600),f_{0}(980),K^{*}(800)$ and
$a_{0}(980)$, are usually viewed as the lowest lying $q\bar{q}$ states, while
the nonet ones near 1.5 GeV, including
$f_{0}(1370),f_{0}(1500)/f_{0}(1710),K^{*}(1430)$, and $a_{0}(1450)$, are
suggested as the first excited states. In scenario II (SII), the nonet mesons
near 1.5 GeV are treated as $q\bar{q}$ ground states, while the nonet mesons
below 1 GeV are exotic states beyond the quark model, such as four-quark bound
states. Each nonet should contain four distinct scalar mesons, but there are five
candidate mesons near 1.5 GeV. It is generally believed that
$K^{*}_{0}(1430)$, $a_{0}(1450)$ and two isosinglet scalar mesons compose one
nonet, which means that one of the three isosinglet scalars
$f_{0}(1370),f_{0}(1500),f_{0}(1710)$ cannot be explained as a $q\bar{q}$ state
and might be a scalar glueball. There have been many discussions amsler ; close2 ;
hexg ; ccl of which of these three scalar mesons is most likely to be the scalar
glueball, based on a flavor-mixing scheme, which leaves their inner structures
even more ambiguous. By contrast, the scalar mesons
$K^{*}_{0}(1430)$ and $a_{0}(1450)$ have been confirmed to be conventional $q\bar{q}$
mesons in many approaches yao ; mathur ; lee ; bardeen . So calculations
for $B$ decays involving either of these two scalar mesons in the final
state should be more trustworthy.
The production of scalar mesons in $B$-meson decays provides a unique
insight into the inner structure of these mesons, and gives the various
factorization approaches a new testing ground. Here we use the
perturbative QCD (PQCD) approach to study $a_{0}(1450)$ in decays $B^{-}\to
a^{0}_{0}(1450)K^{*-},a^{-}_{0}(1450)\bar{K}^{*0}$ and $\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-},a^{0}_{0}(1450)\bar{K}^{*0}$. Certainly, these decays
have been studied within the QCD factorization approach ccysv , in which the
factorizable annihilation diagrams are calculated through a phenomenological
parameter. So there are large theoretical errors for the QCD factorization
predictions. To make precise predictions of their branching ratios and CP-
violating asymmetries, it is necessary to make reliable calculations for the
contributions from the factorizable annihilation diagrams. By contrast, these
diagrams can be calculated effectively within the PQCD approach.
In the following, $a_{0}(1450)$ is denoted as $a_{0}$ in some places for
convenience. The layout of this paper is as follows. In Sec. II, the relevant
decay constants and light-cone distribution amplitudes of relevant mesons are
introduced. In Sec. III, we then analyze these decay channels using the PQCD
approach. The numerical results and the discussions are given in Sec. IV. The
conclusions are presented in the final part.
## II decay constants and distribution amplitudes
For the wave function of the heavy B meson, we take
$\displaystyle\Phi_{B}(x,b)=\frac{1}{\sqrt{2N_{c}}}(P\!\!\!/_{B}+m_{B})\gamma_{5}\phi_{B}(x,b).$
(1)
Here only the contribution of Lorentz structure $\phi_{B}(x,b)$ is taken into
account, since the contribution of the second Lorentz structure
$\bar{\phi}_{B}$ is numerically small cdlu and has been neglected. For the
distribution amplitude $\phi_{B}(x,b)$ in Eq.(1), we adopt the following
model:
$\displaystyle\phi_{B}(x,b)=N_{B}x^{2}(1-x)^{2}\exp[-\frac{M^{2}_{B}x^{2}}{2\omega^{2}_{b}}-\frac{1}{2}(\omega_{b}b)^{2}],$
(2)
where $\omega_{b}$ is a free parameter; we take $\omega_{b}=0.4\pm 0.04$ GeV
in numerical calculations, and $N_{B}=91.745$ is the normalization factor for
$\omega_{b}=0.4$.
In the two-quark picture, the vector decay constant $f_{a_{0}}$ and the scalar
decay constant $\bar{f}_{a_{0}}$ for the scalar meson $a_{0}$ can be defined
as
$\displaystyle\langle a_{0}(p)|\bar{q}_{2}\gamma_{\mu}q_{1}|0\rangle$
$\displaystyle=$ $\displaystyle f_{a_{0}}p_{\mu},$ (3) $\displaystyle\langle
a_{0}(p)|\bar{q}_{2}q_{1}|0\rangle=m_{a_{0}}\bar{f}_{a_{0}},$ (4)
where $m_{a_{0}}(p)$ is the mass (momentum) of the scalar meson $a_{0}(1450)$.
The relation between $f_{a_{0}}$ and $\bar{f}_{a_{0}}$ is
$\displaystyle\frac{m_{{a_{0}}}}{m_{2}(\mu)-m_{1}(\mu)}f_{{a_{0}}}=\bar{f}_{{a_{0}}},$
(5)
where $m_{1,2}$ are the running current quark masses. For the scalar meson
$a_{0}(1450)$, $f_{a_{0}}$ will get a very small value after the $SU(3)$
symmetry breaking is considered. The light-cone distribution amplitudes for
the scalar meson $a_{0}(1450)$ can be written as
$\displaystyle\langle a_{0}(p)|\bar{q}_{1}(z)_{l}q_{2}(0)_{j}|0\rangle$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2N_{c}}}\int^{1}_{0}dx\;e^{ixp\cdot z}$ (6)
$\displaystyle\times\\{p\\!\\!\\!/\Phi_{a_{0}}(x)+m_{a_{0}}\Phi^{S}_{a_{0}}(x)+m_{a_{0}}(n\\!\\!\\!/_{+}n\\!\\!\\!/_{-}-1)\Phi^{T}_{a_{0}}(x)\\}_{jl}.\quad\quad$
Here $n_{+}$ and $n_{-}$ are lightlike vectors:
$n_{+}=(1,0,0_{T}),n_{-}=(0,1,0_{T})$, and $n_{+}$ is parallel with the moving
direction of the scalar meson. The normalization can be related to the decay
constants:
$\displaystyle\int^{1}_{0}dx\Phi_{a_{0}}(x)=\int^{1}_{0}dx\Phi^{T}_{a_{0}}(x)=0,\,\,\,\,\,\,\,\int^{1}_{0}dx\Phi^{S}_{a_{0}}(x)=\frac{\bar{f}_{a_{0}}}{2\sqrt{2N_{c}}}\;.$
(7)
The twist-2 light-cone distribution amplitude $\Phi_{a_{0}}$ can be expanded
in the Gegenbauer polynomials:
$\displaystyle\Phi_{a_{0}}(x,\mu)$ $\displaystyle=$
$\displaystyle\frac{\bar{f}_{a_{0}}(\mu)}{2\sqrt{2N_{c}}}6x(1-x)\left[B_{0}(\mu)+\sum_{m=1}^{\infty}B_{m}(\mu)C^{3/2}_{m}(2x-1)\right],$
(8)
where the decay constants and the Gegenbauer moments $B_{1},B_{3}$ of
distribution amplitudes for $a_{0}(1450)$ have been calculated in the QCD sum
rules ccysp . These values are all scale dependent and specified below:
$\displaystyle{\rm scenarioI:}B_{1}$ $\displaystyle=$ $\displaystyle 0.89\pm
0.20,B_{3}=-1.38\pm 0.18,\bar{f}_{a_{0}}=-(280\pm 30){\rm MeV},$ (9)
$\displaystyle{\rm scenarioII:}B_{1}$ $\displaystyle=$ $\displaystyle-0.58\pm
0.12,B_{3}=-0.49\pm 0.15,\bar{f}_{a_{0}}=(460\pm 50){\rm MeV},\quad$ (10)
which are taken by fixing the scale at 1GeV.
As for the twist-3 distribution amplitudes $\Phi_{a_{0}}^{S}$ and
$\Phi_{a_{0}}^{T}$, we adopt the asymptotic form:
$\displaystyle\Phi^{S}_{a_{0}}$ $\displaystyle=$
$\displaystyle\frac{1}{2\sqrt{2N_{c}}}\bar{f}_{a_{0}},\,\,\,\,\,\,\,\Phi_{a_{0}}^{T}=\frac{1}{2\sqrt{2N_{c}}}\bar{f}_{a_{0}}(1-2x).$
(11)
For our considered decays, the vector meson $K^{*}$ is longitudinally
polarized. The longitudinal polarized component of the wave function is given
as
$\displaystyle\Phi_{K^{*}}=\frac{1}{\sqrt{2N_{c}}}\left\\{\epsilon\!\!\!/\left[m_{K^{*}}\Phi_{K^{*}}(x)+p\!\!\!/_{K^{*}}\Phi_{K^{*}}^{t}(x)\right]+m_{K^{*}}\Phi^{s}_{K^{*}}(x)\right\\},$
(12)
where the first term is the leading twist wave function (twist-2), while the
second and third term are subleading twist (twist-3) wave functions. They can
be parameterized as
$\displaystyle\Phi_{K^{*}}(x)$ $\displaystyle=$
$\displaystyle\frac{f_{K^{*}}}{2\sqrt{2N_{c}}}6x(1-x)\left[1+a_{1K^{*}}C^{3/2}_{1}(2x-1)+a_{2K^{*}}C^{3/2}_{2}(2x-1)\right],$
(13)
$\displaystyle\Phi^{t}_{K^{*}}(x)=\frac{3f^{T}_{K^{*}}}{2\sqrt{2N_{c}}}(1-2x),\quad\Phi^{s}_{K^{*}}(x)=\frac{3f^{T}_{K^{*}}}{2\sqrt{2N_{c}}}(2x-1)^{2},$
(14)
where the longitudinal decay constant $f_{K^{*}}=(217\pm 5)$ MeV and the
transverse decay constant $f^{T}_{K^{*}}=(185\pm 10)$ MeV, the Gegenbauer
moments $a_{1K^{*}}=0.03,a_{2K^{*}}=0.11$ pball and the Gegenbauer
polynomials $C^{\nu}_{n}(t)$ are given as
$\displaystyle C^{3/2}_{1}(t)$ $\displaystyle=$ $\displaystyle 3t,\qquad
C^{3/2}_{2}(t)=\frac{3}{2}(5t^{2}-1).$ (15)
## III the perturbative QCD calculation
Under the two-quark-model supposition for the scalar meson $a_{0}(1450)$, the
decay amplitude for $B\to a_{0}K^{*}$ can be conceptually written as the
convolution,
$\displaystyle{\cal A}(B\to
K^{*}a_{0})\sim\int\\!\\!d^{4}k_{1}d^{4}k_{2}d^{4}k_{3}\
\mathrm{Tr}\left[C(t)\Phi_{B}(k_{1})\Phi_{K^{*}}(k_{2})\Phi_{a_{0}}(k_{3})H(k_{1},k_{2},k_{3},t)\right],$
(16)
where $k_{i}$’s are momenta of the antiquarks included in each meson, and
$\mathrm{Tr}$ denotes the trace over Dirac and color indices. $C(t)$ is the
Wilson coefficient which results from the radiative corrections at a short
distance. In the above convolution, $C(t)$ includes the harder dynamics at a
larger scale than the $M_{B}$ scale and describes the evolution of local
$4$-Fermi operators from $m_{W}$ (the $W$ boson mass) down to the
$t\sim\mathcal{O}(\sqrt{\bar{\Lambda}M_{B}})$ scale, where
$\bar{\Lambda}\equiv M_{B}-m_{b}$. The function $H(k_{1},k_{2},k_{3},t)$
describes the four-quark operator and the spectator quark connected by a hard
gluon, whose $q^{2}$ is in the order of $\bar{\Lambda}M_{B}$ and includes the
$\mathcal{O}(\sqrt{\bar{\Lambda}M_{B}})$ hard dynamics. Therefore, this hard
part $H$ can be perturbatively calculated. The functions
$\Phi_{(B,K^{*},a_{0})}$ are the wave functions of the $B$ meson, the vector
meson $K^{*}$, and the scalar meson $a_{0}$, respectively.
Since the $b$ quark is rather heavy, we consider the $B$ meson at rest for
simplicity. It is convenient to use the light-cone coordinate
$(p^{+},p^{-},{\bf p}_{T})$ to describe the meson’s momenta,
$\displaystyle p^{\pm}=\frac{1}{\sqrt{2}}(p^{0}\pm p^{3}),\quad{\rm
and}\quad{\bf p}_{T}=(p^{1},p^{2}).$ (17)
Using these coordinates, the $B$ meson and the two final state meson momenta
can be written as
$\displaystyle P_{B}=\frac{M_{B}}{\sqrt{2}}(1,1,{\bf 0}_{T}),\quad
P_{2}=\frac{M_{B}}{\sqrt{2}}(1-r^{2}_{a_{0}},r^{2}_{K^{*}},{\bf 0}_{T}),\quad
P_{3}=\frac{M_{B}}{\sqrt{2}}(r^{2}_{a_{0}},1-r^{2}_{K^{*}},{\bf 0}_{T}),$ (18)
respectively, where the ratio $r_{a_{0}(K^{*})}=m_{a_{0}(K^{*})}/M_{B}$, and
$m_{a_{0}(K^{*})}$ is the scalar meson $a_{0}$ (the vector meson $K^{*}$)
mass. Putting the antiquark momenta in $B$, $K^{*}$, and $a_{0}$ mesons as
$k_{1}$, $k_{2}$, and $k_{3}$, respectively, we can choose
$\displaystyle k_{1}=(x_{1}P_{1}^{+},0,{\bf k}_{1T}),\quad
k_{2}=(x_{2}P_{2}^{+},0,{\bf k}_{2T}),\quad k_{3}=(0,x_{3}P_{3}^{-},{\bf
k}_{3T}).$ (19)
For these considered decay channels, the integration over $k_{1}^{-}$,
$k_{2}^{-}$, and $k_{3}^{+}$ in Eq.(16) will lead to
$\displaystyle{\cal A}(B\to K^{*}a_{0})$ $\displaystyle\sim$
$\displaystyle\int\\!\\!dx_{1}dx_{2}dx_{3}b_{1}db_{1}b_{2}db_{2}b_{3}db_{3}$
(20)
$\displaystyle\cdot\mathrm{Tr}\left[C(t)\Phi_{B}(x_{1},b_{1})\Phi_{K^{*}}(x_{2},b_{2})\Phi_{a_{0}}(x_{3},b_{3})H(x_{i},b_{i},t)S_{t}(x_{i})\,e^{-S(t)}\right],\quad$
where $b_{i}$ is the conjugate space coordinate of $k_{iT}$, and $t$ is the
largest energy scale in function $H(x_{i},b_{i},t)$. In order to smear the
end-point singularity on $x_{i}$, the jet function $S_{t}(x)$ li02 , which
comes from the resummation of the double logarithms $\ln^{2}x_{i}$, is used.
The last term $e^{-S(t)}$ in Eq.(20) is the Sudakov form factor which
suppresses the soft dynamics effectively soft .
For the considered decays, the related weak effective Hamiltonian $H_{eff}$
can be written as buras96
$\displaystyle{\cal
H}_{eff}=\frac{G_{F}}{\sqrt{2}}\,\left[\sum_{p=u,c}V_{pb}V_{ps}^{*}\left(C_{1}(\mu)O_{1}^{p}(\mu)+C_{2}(\mu)O_{2}^{p}(\mu)\right)-V_{tb}V_{ts}^{*}\sum_{i=3}^{10}C_{i}(\mu)\,O_{i}(\mu)\right]\;,$
(21)
where the Fermi constant $G_{F}=1.16639\times 10^{-5}\,{\rm GeV}^{-2}$ and the
operators $O_{i}\,(i=1,\ldots,10)$ are the local four-quark operators. We specify
below the operators in ${\cal H}_{eff}$ for $b\to s$ transition:
$\displaystyle\begin{array}[]{llllll}O_{1}^{u}&=&\bar{s}_{\alpha}\gamma^{\mu}Lu_{\beta}\cdot\bar{u}_{\beta}\gamma_{\mu}Lb_{\alpha}\
,&O_{2}^{u}&=&\bar{s}_{\alpha}\gamma^{\mu}Lu_{\alpha}\cdot\bar{u}_{\beta}\gamma_{\mu}Lb_{\beta}\
,\\\
O_{3}&=&\bar{s}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\beta}^{\prime}\
,&O_{4}&=&\bar{s}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\alpha}^{\prime}\
,\\\
O_{5}&=&\bar{s}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\beta}^{\prime}\
,&O_{6}&=&\bar{s}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\alpha}^{\prime}\
,\\\
O_{7}&=&\frac{3}{2}\bar{s}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\beta}^{\prime}\
,&O_{8}&=&\frac{3}{2}\bar{s}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Rq_{\alpha}^{\prime}\
,\\\
O_{9}&=&\frac{3}{2}\bar{s}_{\alpha}\gamma^{\mu}Lb_{\alpha}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\beta}^{\prime}\
,&O_{10}&=&\frac{3}{2}\bar{s}_{\alpha}\gamma^{\mu}Lb_{\beta}\cdot\sum_{q^{\prime}}e_{q^{\prime}}\bar{q}_{\beta}^{\prime}\gamma_{\mu}Lq_{\alpha}^{\prime}\
,\end{array}$ (27)
where $\alpha$ and $\beta$ are the $SU(3)$ color indices; $L$ and $R$ are the
left- and right-handed projection operators with $L=(1-\gamma_{5})$,
$R=(1+\gamma_{5})$. The sum over $q^{\prime}$ runs over the quark fields that
are active at the scale $\mu=O(m_{b})$, i.e.,
$q^{\prime}\in\\{u,d,s,c,b\\}$.
Figure 1: Diagrams contributing to the decay
$\bar{B}^{0}\to\bar{K}^{*0}a^{0}_{0}(1450)$ .
In Fig. 1, we give the leading order Feynman diagrams for the channel
$\bar{B}^{0}\to a^{0}_{0}(1450)\bar{K}^{*0}$ as an example. For the factorizable
and nonfactorizable emission diagrams, Figs. 1(a), 1(b) and 1(c), 1(d), the
corresponding diagrams with the $\bar{K}^{*0}$ and $a^{0}_{0}$ exchanged also exist.
But there are no such exchange diagrams for the factorizable and
nonfactorizable annihilation diagrams, Figs. 1(e), 1(f) and 1(g), 1(h).
If we replace the $\bar{d}$ quark in both $\bar{B}^{0}$ and $a^{0}_{0}$ with
$\bar{u}$ quark, we will get the Feynman diagrams for the decay $B^{-}\to
a^{-}_{0}(1450)K^{*0}$. If we replace the $d(\bar{d})$ quark in
$a_{0}^{0}(K^{*0})$ with $u(\bar{u})$, we will get the Feynman diagrams for
the decay $\bar{B}^{0}\to a^{+}_{0}(1450)K^{*-}$. There are no
diagrams obtained by exchanging the two final-state mesons for these two
channels. For the decay $B^{-}\to a^{0}_{0}(1450)K^{*-}$, the Feynman diagrams
are distinctive: the meson $a_{0}(1450)$ is the emitted (upper) meson in the
factorizable (nonfactorizable) emission diagrams, while the meson $K^{*}$ is
the upper meson in the factorizable (nonfactorizable) annihilation diagrams.
The detailed analytic formulae for the diagrams of each decay are not
presented and can be obtained from those of $B\to f_{0}(980)K^{*}$ zqzhang by
replacing corresponding wave functions and parameters.
Combining the contributions from different diagrams, the total decay
amplitudes for these decays can be written as
$\displaystyle\sqrt{2}{\cal M}(\bar{K}^{*0}a^{0}_{0})$ $\displaystyle=$
$\displaystyle\xi_{u}M_{eK^{*}}C_{2}-\xi_{t}\left[M_{eK^{*}}\frac{3C_{10}}{2}+M^{P2}_{eK^{*}}\frac{3C_{8}}{2}-(F_{ea_{0}}+F_{aa_{0}})\left(a_{4}-\frac{a_{10}}{2}\right)\right.$
(28)
$\displaystyle\left.-(M_{ea_{0}}+M_{aa_{0}})\left(C_{3}-\frac{1}{2}C_{9}\right)-(M^{P1}_{ea_{0}}+M^{P1}_{aa_{0}})\left(C_{5}-\frac{1}{2}C_{7}\right)\right.$
$\displaystyle\left.-F^{P2}_{aa_{0}}(a_{6}-\frac{1}{2}a_{8})\right],$
$\displaystyle{\cal M}(\bar{K}^{*0}a^{-}_{0})$ $\displaystyle=$
$\displaystyle\xi_{u}\left[M_{aa_{0}}C_{1}+F_{aa_{0}}a_{1}\right]-\xi_{t}\left[F_{ea_{0}}\left(a_{4}-\frac{a_{10}}{2}\right)+F_{aa_{0}}\left(a_{4}+a_{10}\right)\right.$
(29)
$\displaystyle\left.+M_{ea_{0}}\left(C_{3}-\frac{1}{2}C_{9}\right)+M_{aa_{0}}\left(C_{3}+C_{9}\right)+M^{P1}_{ea_{0}}\left(C_{5}-\frac{1}{2}C_{7}\right)\right.$
$\displaystyle\left.+M^{P1}_{aa_{0}}\left(C_{5}+C_{7}\right)+F^{P2}_{aa_{0}}(a_{6}+a_{8})\right],$
$\displaystyle\sqrt{2}{\cal M}(\bar{K}^{*-}a^{0}_{0})$ $\displaystyle=$
$\displaystyle\xi_{u}\left[M_{eK^{*}}C_{2}+M_{aa_{0}}C_{1}+F_{aa_{0}}a_{1}\right]-\xi_{t}\left[M_{eK^{*}}\frac{3}{2}C_{10}+M^{P2}_{eK^{*}}\frac{3}{2}C_{8}\right.$
(30)
$\displaystyle\left.+M_{aa_{0}}\left(C_{3}+C_{9}\right)+M^{P1}_{aa_{0}}\left(C_{5}+C_{7}\right)\right.$
$\displaystyle\left.+F_{aa_{0}}\left(a_{4}+a_{10}\right)+F^{P2}_{aa_{0}}(a_{6}+a_{8})\right],$
$\displaystyle{\cal M}(\bar{K}^{*-}a^{+}_{0})$ $\displaystyle=$
$\displaystyle\xi_{u}\left[F_{ea_{0}}a_{1}+M_{ea_{0}}C_{1}\right]-\xi_{t}\left[F_{ea_{0}}\left(a_{4}+a_{10}\right)+M_{ea_{0}}\left(C_{3}+C_{9}\right)\right.$
(31)
$\displaystyle\left.+M^{P1}_{ea_{0}}\left(C_{5}+C_{7}\right)+M_{aa_{0}}\left(C_{3}-\frac{1}{2}C_{9}\right)+M^{P1}_{aa_{0}}\left(C_{5}-\frac{1}{2}C_{7}\right)\right.$
$\displaystyle\left.+F_{aa_{0}}\left(a_{4}-\frac{1}{2}a_{10}\right)+F^{P2}_{aa_{0}}\left(a_{6}-\frac{1}{2}a_{8}\right)\right],$
The combinations of the Wilson coefficients are defined as usual zjxiao :
$\displaystyle a_{1}(\mu)$ $\displaystyle=$ $\displaystyle
C_{2}(\mu)+\frac{C_{1}(\mu)}{3},\quad
a_{2}(\mu)=C_{1}(\mu)+\frac{C_{2}(\mu)}{3},$ $\displaystyle a_{i}(\mu)$
$\displaystyle=$ $\displaystyle C_{i}(\mu)+\frac{C_{i+1}(\mu)}{3},\quad
i=3,5,7,9,$ $\displaystyle a_{i}(\mu)$ $\displaystyle=$ $\displaystyle
C_{i}(\mu)+\frac{C_{i-1}(\mu)}{3},\quad i=4,6,8,10.$ (32)
## IV Numerical results and discussions
We use the following input parameters in the numerical calculations pdg08 ;
barbar :
$\displaystyle f_{B}$ $\displaystyle=$ $\displaystyle
190MeV,M_{B}=5.28GeV,M_{W}=80.41GeV,$ (33) $\displaystyle V_{ub}$
$\displaystyle=$ $\displaystyle|V_{ub}|e^{-i\gamma}=3.93\times
10^{-3}e^{-i68^{\circ}},$ (34) $\displaystyle V_{us}$ $\displaystyle=$
$\displaystyle 0.2255,V_{tb}=1.0,V_{ts}=0.0387,$ (35)
$\displaystyle\tau_{B^{\pm}}$ $\displaystyle=$ $\displaystyle 1.638\times
10^{-12}s,\tau_{B^{0}}=1.530\times 10^{-12}s.$ (36)
Using the wave functions and the values of relevant input parameters, we find
the numerical values of the form factor $B\to a_{0}(1450)$ at zero momentum
transfer:
$\displaystyle F^{\bar{B}^{0}\to a_{0}}_{0}(q^{2}=0)$ $\displaystyle=$
$\displaystyle-0.42^{+0.04+0.04+0.05+0.06}_{-0.03-0.03-0.04-0.07},\quad\mbox{
scenario I},$ (37) $\displaystyle F^{\bar{B}^{0}\to a_{0}}_{0}(q^{2}=0)$
$\displaystyle=$ $\displaystyle
0.86^{+0.04+0.05+0.10+0.14}_{-0.03-0.04-0.09-0.11},\quad\;\;\;\mbox{ scenario
II},$ (38)
where the uncertainties are mainly from the Gegenbauer moments $B_{1}$,
$B_{3}$, the decay constant of the meson $a_{0}(1450)$, the $B$-meson shape
parameter $\omega_{b}=0.40\pm 0.04$ GeV. These predictions are larger than those
given in Ref. lirh , because different values are used for the threshold parameter $c$
in the jet function. They are nevertheless consistent with each other within errors.
In the B-rest frame, the decay rates of $B\to a_{0}(1450)K^{*}$ can be written
as
$\displaystyle\Gamma=\frac{G_{F}^{2}}{32\pi m_{B}}|{\cal
M}|^{2}(1-r^{2}_{a_{0}}),$ (39)
where ${\cal M}$ is the total decay amplitude of each considered decay and
$r_{a_{0}}$ the mass ratio, both of which have been given in Sec. III. ${\cal M}$ can
be rewritten as
$\displaystyle{\cal
M}=V_{ub}V^{*}_{us}T-V_{tb}V^{*}_{ts}P=V_{ub}V^{*}_{us}\left[1+ze^{i(\delta-\gamma)}\right],$
(40)
where $\gamma$ is the Cabibbo-Kobayashi-Maskawa weak phase angle, and $\delta$
is the relative strong phase between the tree and the penguin amplitudes,
which are denoted as "T" and "P", respectively. The term $z$ describes the
ratio of penguin to tree contributions and is defined as
$\displaystyle
z=\left|\frac{V_{tb}V^{*}_{ts}}{V_{ub}V^{*}_{us}}\right|\left|\frac{P}{T}\right|.$
(41)
From Eq.(40), it is easy to write decay amplitude $\overline{\cal M}$ for the
corresponding conjugated decay mode. So the CP-averaged branching ratio for
each considered decay is defined as
$\displaystyle{\cal B}=(|{\cal M}|^{2}+|\overline{\cal
M}|^{2})/2=|V_{ub}V^{*}_{us}T|^{2}\left[1+2z\cos\gamma\cos\delta+z^{2}\right].$
(42)
Using the input parameters and the wave functions as specified in this and
previous sections, it is easy to get the branching ratios in two scenarios:
$\displaystyle{\cal B}(B^{-}\to
a^{0}_{0}(1450)K^{*-})=(2.8^{+0.4+1.0+0.6+0.1}_{-0.4-0.0-0.6-0.1})\times
10^{-6},ScenarioI,$ (43) $\displaystyle{\cal B}(B^{-}\to
a^{-}_{0}(1450)\bar{K}^{*0})=(3.3^{+0.6+0.4+0.8+2.7}_{-0.4-0.3-0.7-1.5})\times
10^{-6},ScenarioI,$ (44) $\displaystyle{\cal B}(\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-})=(3.6^{+0.6+0.3+0.8+2.0}_{-0.6-0.1-0.7-1.1})\times
10^{-6},ScenarioI,$ (45) $\displaystyle{\cal B}(\bar{B}^{0}\to
a^{0}_{0}(1450)\bar{K}^{*0})=(1.2^{+0.1+0.1+0.2+1.0}_{-0.1-0.2-0.3-0.6})\times
10^{-6},ScenarioI;$ (46) $\displaystyle{\cal B}(B^{-}\to
a^{0}_{0}(1450)K^{*-})=(7.0^{+0.9+1.6+1.7+0.2}_{-0.7-1.1-1.4-0.0})\times
10^{-6},ScenarioII,$ (47) $\displaystyle{\cal B}(B^{-}\to
a^{-}_{0}(1450)\bar{K}^{*0})=(3.0^{+0.2+0.2+0.7+1.2}_{-0.1-0.1-0.6-0.7})\times
10^{-5},ScenarioII,$ (48) $\displaystyle{\cal B}(\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-})=(2.8^{+0.3+0.1+0.7+0.8}_{-0.3-0.0-0.5-0.6})\times
10^{-5},ScenarioII,$ (49) $\displaystyle{\cal B}(\bar{B}^{0}\to
a^{0}_{0}(1450)\bar{K}^{*0})=(1.4^{+0.1+0.0+0.3+0.5}_{-0.1-0.1-0.3-0.4})\times
10^{-5},ScenarioII.$ (50)
In the above results, the first two errors come from the uncertainties of the
Gegenbauer moments $B_{1}$, $B_{3}$ of the scalar meson, and the third one is
from the decay constant of $a_{0}(1450)$. The last one comes from the
uncertainty in the $B$ meson shape parameter $\omega_{b}=0.40\pm 0.04$ GeV. We
also show the dependence of the branching ratios for these considered decays
on the Cabibbo-Kobayashi-Maskawa angle $\gamma$ in Fig. 2 and Fig. 3.
The branching ratios predicted by QCD factorization approach for these
considered decays in scenario II are listed as ccysv
$\displaystyle{\cal B}(B^{-}\to a^{0}_{0}(1450)K^{*-})$ $\displaystyle=$
$\displaystyle(2.2^{+4.9+0.7+22.5}_{-4.0-0.6-8.3})\times 10^{-6},$ (51)
$\displaystyle{\cal B}(B^{-}\to a^{-}_{0}(1450)\bar{K}^{*0})$ $\displaystyle=$
$\displaystyle(7.8^{+14.3+0.9+23.4}_{-11.0-0.7-9.1})\times 10^{-6},$ (52)
$\displaystyle{\cal B}(\bar{B}^{0}\to a^{+}_{0}(1450)K^{*-})$ $\displaystyle=$
$\displaystyle(4.7^{+4.4+1.0+14.6}_{-3.7-0.8-5.3})\times 10^{-6},$ (53)
$\displaystyle{\cal B}(\bar{B}^{0}\to a^{0}_{0}(1450)\bar{K}^{*0})$
$\displaystyle=$ $\displaystyle(2.5^{+4.4+1.0+14.6}_{-3.7-0.8-5.3})\times
10^{-6}.$ (54)
Although it is well known that the annihilation diagram contributions to
charmless hadronic $B$ decays are power suppressed in the heavy-quark limit, as
emphasized in keum , these contributions may be important for some $B$ meson
decays, and the channels considered here are of exactly this kind. For such
decays, the factorizable annihilation diagrams largely determine the final
branching ratios, so it is important to calculate the amplitudes
from these diagrams correctly. In QCD factorization calculations the annihilation
amplitude has an endpoint divergence even at twist-2 level; it cannot be
computed in a self-consistent way, and the endpoint divergence has to be
parameterized phenomenologically, which inevitably brings many
uncertainties into the final results. In fact, the major uncertainties listed in
Eqs.(51)-(54) come precisely from the contributions of annihilation diagrams. Compared
with the QCD factorization approach, the PQCD approach can make a reliable calculation
of the factorizable annihilation diagrams in $k_{T}$ factorization yu . The
endpoint singularity that occurs in the QCD factorization approach is cured here by
the Sudakov factor. Because of the large uncertainties in the QCD factorization
approach, our predictions in scenario II are also in agreement with the QCD
factorization results within theoretical errors.
Figure 2: The dependence of the branching ratios for $\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-}$ (solid curve) and $\bar{B}^{0}\to
a^{0}_{0}(1450)\bar{K}^{*0}$ (dotted curve) on the Cabibbo-Kobayashi-Maskawa
angle $\gamma$. The left (right) panel is plotted in scenario I (II).
Figure 3: The dependence of the branching ratios for $B^{-}\to
a^{0}_{0}(1450)K^{*-}$ (solid curve) and $B^{-}\to
a^{-}_{0}(1450)\bar{K}^{*0}$ (dotted curve) on the Cabibbo-Kobayashi-Maskawa
angle $\gamma$. The left (right) panel is plotted in scenario I (II).
In Table 1, we list the values of the factorizable and nonfactorizable
amplitudes from the emission and annihilation topology diagrams of the
considered decays in both scenarios. $F_{e(a)a_{0}}$ and $M_{e(a)a_{0}}$ are
the $K^{*}$-meson emission (annihilation) factorizable and
nonfactorizable contributions from penguin operators, respectively. The upper
label $T$ denotes the contributions from tree operators. For the decays
$B^{-}\to a^{0}_{0}K^{*-}$ and $\bar{B}^{0}\to a^{0}_{0}\bar{K}^{*0}$, there
are also contributions from the $a_{0}$-emission nonfactorizable diagrams.
Table 1: Decay amplitudes for the decays $B^{-}\to a^{0}_{0}K^{*-},a^{-}_{0}\bar{K}^{*0}$, $\bar{B}^{0}\to a^{+}_{0}K^{*-},a^{0}_{0}\bar{K}^{*0}$ ($\times 10^{-2}\mbox{GeV}^{3}$) in the two scenarios.
Decay | | $F^{T}_{ea_{0}}$ | $F_{ea_{0}}$ | $M^{T}_{ea_{0}}+M^{T}_{eK^{*}}$ | $M_{ea_{0}}+M_{eK^{*}}$ | $M^{T}_{aa_{0}}$ | $M_{aa_{0}}$ | $F^{T}_{aa_{0}}$ | $F_{aa_{0}}$
---|---|---|---|---|---|---|---|---|---
$a^{0}_{0}K^{*-}$ (SI) | | … | … | $-32.6+59.5i$ | $-0.14+0.30i$ | $-1.8+3.2i$ | $0.11-0.06i$ | $-0.5-3.5i$ | $5.3+1.5i$
$a^{-}_{0}\bar{K}^{*0}$ (SI) | | … | -12.5 | $...$ | $-0.27+0.00i$ | $-2.5+4.5i$ | $0.16-0.08i$ | $-0.7-4.9i$ | $7.1+2.6i$
$a^{+}_{0}K^{*-}$(SI) | | 272.8 | -12.0 | $11.3-8.3i$ | $0.02-0.28i$ | … | $0.20-0.19i$ | … | $7.3+2.4i$
$a^{0}_{0}\bar{K}^{*0}$(SI) | | … | 8.9 | $-32.6+59.5i$ | $0.07+0.31i$ | … | $-0.14+0.13i$ | … | $-5.2-1.9i$
$a^{0}_{0}K^{*-}$ (SII) | | … | … | $47.9+39.0i$ | $0.23+0.18i$ | $6.7+1.6i$ | $-0.25-0.17i$ | $0.6+1.1i$ | $-5.2-7.9i$
$a^{-}_{0}\bar{K}^{*0}$ (SII) | | … | 22.9 | … | $-0.73+0.70i$ | $9.5+2.3i$ | $-0.36-0.25i$ | $1.0+1.5i$ | $-6.9-11.6i$
$a^{+}_{0}K^{*-}$(SII) | | -548.5 | 22.1 | $-1.5-6.4i$ | $-0.84+0.57i$ | … | $-0.56-0.30i$ | … | $-7.1-11.6i$
$a^{0}_{0}\bar{K}^{*0}$(SII) | | … | -16.2 | $47.9+39.0i$ | $0.72-0.33i$ | … | $0.40+0.21i$ | … | $5.0+8.2i$
In order to show the importance of the contributions from penguin operators,
we can show the branching ratio in another way:
$\displaystyle{\cal
B}=|V_{ub}V^{*}_{us}|^{2}(T_{r}^{2}+T_{i}^{2})+|V_{tb}V^{*}_{ts}|^{2}(P_{r}^{2}+P_{i}^{2})-|V_{ub}V^{*}_{us}V_{tb}V^{*}_{ts}|\cos\gamma(T_{r}P_{r}+T_{i}P_{i}).$
(55)
If both sides of the above equation are divided by the constant
$|V_{ub}V^{*}_{us}|^{2}$, one can get
$\displaystyle\frac{{\cal B}}{|V_{ub}V^{*}_{us}|^{2}}$ $\displaystyle=$
$\displaystyle(T_{r}^{2}+T_{i}^{2})+|\frac{V_{tb}V^{*}_{ts}}{V_{ub}V^{*}_{us}}|^{2}(P_{r}^{2}+P_{i}^{2})-|\frac{V_{tb}V^{*}_{ts}}{V_{ub}V^{*}_{us}}|\cos\gamma(T_{r}P_{r}+T_{i}P_{i})$
$\displaystyle=$
$\displaystyle(T_{r}^{2}+T_{i}^{2})+1936(P_{r}^{2}+P_{i}^{2})-16.3(T_{r}P_{r}+T_{i}P_{i}).$
(56)
From Eq.(56) one can see that the contributions from tree operators are strongly
CKM-suppressed compared with those from penguin operators, and the
contributions from the interference of tree and penguin operators are also
small. So, generally speaking, the branching ratios are proportional to
$(P_{r}^{2}+P_{i}^{2})$; that is, a decay with larger penguin operator contributions
has a larger branching ratio. The branching ratio of
$\bar{B}^{0}\to a^{+}_{0}(1450)K^{*-}$ in scenario I is an exception,
because the contributions from tree operators are enhanced greatly by the
large Wilson coefficient $a_{1}=C_{2}+C_{1}/3$, which makes them large
enough to survive the aforementioned suppression. Strictly speaking, the mode
$\bar{B}^{0}\to a^{+}_{0}(1450)K^{*-}$ is a tree-dominated decay in scenario
I. In addition, the interference between tree and penguin operators also
strengthens the final result. So, even though the contributions from the
penguin operators for this channel are the smallest in scenario I, it
nevertheless receives a larger branching ratio. Another unusual decay channel is
$B^{-}\to a^{0}_{0}(1450)K^{*-}$. In scenario II, the branching ratios of the
other three decays are of the order of $10^{-5}$, while the branching ratio of the
decay $B^{-}\to a^{0}_{0}(1450)K^{*-}$ is the smallest, only a few
times $10^{-6}$. The reason is that the contributions from penguin operators
in this decay are the smallest. Compared with the decay $B^{-}\to
a^{-}_{0}(1450)\bar{K}^{*0}$, the mode $a^{0}_{0}(1450)K^{*-}$
receives an extra tree contribution $M^{T}_{eK^{*}}$, which makes
its total tree contribution almost 7 times larger than that of the mode
$a^{-}_{0}(1450)\bar{K}^{*0}$; but, as mentioned above, the tree
contributions are strongly suppressed and do not help much to enhance the
branching ratio. Compared with the other three decays, the decay $B^{-}\to
a^{0}_{0}(1450)K^{*-}$ does not receive the enhancement from the
$K^{*}$-emission factorizable diagrams and gets the smallest contributions from
the penguin operators, which makes its branching ratio curve shown in Fig. 3
drop considerably. The mode $a^{0}_{0}(1450)K^{*-}$ does not receive this
kind of enhancement (that is, $F_{ea_{0}}$) in scenario I either. In fact,
$F_{ea_{0}}$ and $F_{aa_{0}}$ shown in Table 1 are destructive for the other
three decays in both scenarios. This destructive interference causes the mode
$a^{0}_{0}\bar{K}^{*0}$ to receive a smaller penguin amplitude than the
mode $a^{0}_{0}(1450)K^{*-}$ in scenario I.
Now we turn to the evaluations of the direct CP-violating asymmetries of the
considered decays in PQCD approach. The direct CP-violating asymmetry can be
defined as
$\displaystyle{\cal A}_{CP}^{dir}=\frac{|\overline{\cal M}|^{2}-|{\cal
M}|^{2}}{|{\cal M}|^{2}+|\overline{\cal
M}|^{2}}=\frac{2z\sin\gamma\sin\delta}{1+2z\cos\gamma\cos\delta+z^{2}}\;.$
(57)
Figure 4: Direct CP-violating asymmetries of $\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-}$ (solid curve) and $\bar{B}^{0}\to
a^{0}_{0}(1450)\bar{K}^{*0}$ (dotted curve), as functions of the Cabibbo-
Kobayashi-Maskawa angle $\gamma$. The left (right) panel is plotted in
scenario I (II).
Using the input parameters and the wave functions as specified in this and
previous sections, one can find the PQCD predictions (in units of $10^{-2}$)
for the direct CP-violating asymmetries of the considered decays
$\displaystyle{\cal A}_{CP}^{dir}(B^{-}\to
a^{0}_{0}(1450)K^{*-})=-50.1^{+8.4+4.7+0.5+2.6}_{-8.8-0.0-0.0-0.6},\mbox{
scenario I},$ (58) $\displaystyle{\cal A}_{CP}^{dir}(B^{-}\to
a^{-}_{0}(1450)\bar{K}^{*0})=2.0^{+1.5+0.4+0.2+0.8}_{-0.8-0.0-0.0-0.1},\mbox{
scenario I},$ (59) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-})=48.0^{+19.2+1.5+0.1+13.5}_{-20.5-6.1-0.0-12.8},\mbox{
scenario I},$ (60) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}\to
a^{0}_{0}(1450)\bar{K}^{*0})=-50.8^{+20.2+0.1+0.0+12.7}_{-20.5-0.1-0.6-11.9},\mbox{
scenario I},$ (61) $\displaystyle{\cal A}_{CP}^{dir}(B^{-}\to
a^{0}_{0}(1450)K^{*-})=-11.4^{+1.5+0.3+0.1+2.0}_{-1.7-0.3-0.0-1.1},\mbox{
scenario II},$ (62) $\displaystyle{\cal A}_{CP}^{dir}(B^{-}\to
a^{-}_{0}(1450)\bar{K}^{*0})=-1.8^{+0.3+0.2+0.0+0.2}_{-0.2-0.1-0.0-0.2},\mbox{
scenario II},$ (63) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-})=78.0^{+2.2+4.8+0.0+6.6}_{-2.5-5.4-0.1-8.8},\mbox{
scenario II},$ (64) $\displaystyle{\cal A}_{CP}^{dir}(\bar{B}^{0}\to
a^{0}_{0}(1450)\bar{K}^{*0})=20.0^{+2.5+1.7+0.0+0.8}_{-3.1-1.7-0.0-1.3},\mbox{
scenario II}.$ (65)
The main errors are induced by the uncertainties of the Gegenbauer moments
$B_{1}$ and $B_{3}$ of $a_{0}(1450)$, the decay constant $f_{a_{0}}$, and the
$B$ meson shape parameter $\omega_{b}$.
The direct CP-violating asymmetries of these considered decays are displayed
in Fig. 4 and Fig. 5. From these figures, one can see that the direct CP-violating
asymmetries of the decays $\bar{B}^{0}\to a^{+}_{0}(1450)K^{*-}$ and $B^{-}\to
a^{0}_{0}(1450)K^{*-}$ have the same sign in the two scenarios, while those of
the decays $B^{-}\to a^{-}_{0}(1450)\bar{K}^{*0}$ and $\bar{B}^{0}\to
a^{0}_{0}(1450)\bar{K}^{*0}$ have opposite signs in the two scenarios. If the
value of $z$ [defined in Eq.(41)] is very large, for example,
$z_{a^{-}_{0}\bar{K}^{*0}}=91.1$ (scenario I) and $78.8$ (scenario II), the
corresponding direct CP-violating asymmetry is very small, only a few
percent. If the value of $z$ is smaller, of order a few, for example,
$z_{a^{0}_{0}K^{*-}}=6.2$ (scenario II) and $z_{a^{0}_{0}\bar{K}^{*0}}=9.2$
(scenario II), the corresponding direct CP-violating asymmetry is large. If
the value of $z$ is not far from 1, the situation is more
delicate, because the direct CP-violating asymmetry is then very sensitive to the
relative strong phase angle $\delta$: for example, $z_{a^{+}_{0}K^{*-}}=0.88$
(scenario I) and $z_{a^{+}_{0}K^{*-}}=1.32$ (scenario II) are close to each
other, yet the corresponding direct CP-violating
asymmetries are very different.
Figure 5: Direct CP-violating asymmetries of $B^{-}\to a^{0}_{0}(1450)K^{*-}$
(solid curve) and $B^{-}\to a^{-}_{0}(1450)\bar{K}^{*0}$ (dotted curve) as
functions of the Cabibbo-Kobayashi-Maskawa angle $\gamma$. The left (right)
panel is plotted in scenario I (II).
In order to characterize the symmetry breaking effects and the contributions
from tree operators and the electroweak penguin, it is useful to define the
parameters below:
$\displaystyle R_{1}$ $\displaystyle=$ $\displaystyle\frac{{\cal
B}(\bar{B}^{0}\to a^{0}_{0}(1450)\bar{K}^{*0})}{{\cal B}(\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-})},$ (66) $\displaystyle R_{2}$ $\displaystyle=$
$\displaystyle\frac{{\cal B}(B^{-}\to a^{0}_{0}(1450)K^{*-})}{{\cal
B}(B^{-}\to a^{-}_{0}(1450)\bar{K}^{*0})},$ (67) $\displaystyle R_{3}$
$\displaystyle=$ $\displaystyle\frac{\tau(B^{0})}{\tau(B^{-})}\frac{{\cal
B}(B^{-}\to a^{-}_{0}(1450)\bar{K}^{*0})}{{\cal B}(\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-})}.$ (68)
The ratios of branching ratios provide a more transparent
comparison between the predictions and the data because they are less
sensitive to nonperturbative inputs, so a large deviation of these
ratios from the standard-model expectations could reveal a signal of new
physics. When the tree diagrams and electroweak penguins are ignored,
$R_{1},R_{2}$, and $R_{3}$ should be equal to $0.5,0.5$, and $1.0$. From our
calculations, their values are:
$\displaystyle R_{1}=0.33,R_{2}=0.85,R_{3}=0.86,\mbox{ scenario I},$ (69)
$\displaystyle R_{1}=0.50,R_{2}=0.23,R_{3}=1.00,\mbox{ scenario II}.$ (70)
One can see that the ratios $R_{1}$ and $R_{3}$ for scenario II agree
well with these expectations, while there is a large deviation for the ratio
$R_{2}$; the reason is the aforementioned smallest penguin operator
contributions for the channel $B^{-}\to a^{0}_{0}(1450)K^{*-}$. For scenario
I, there are large deviations for all three ratios, with the deviation for
$R_{2}$ being the largest. These ratios can be tested by future experiments.
## V Conclusion
In this paper, by using the decay constants and light-cone distribution
amplitudes derived from QCD sum-rule method, we calculate the branching ratios
and the direct CP-violating asymmetries of decays $B\to a_{0}(1450)K^{*}$ in
the PQCD factorization approach and find that
* •
For the decays $B^{-}\to a^{-}_{0}(1450)K^{*0},\bar{B}^{0}\to
a^{+}_{0}(1450)K^{*-},a^{0}_{0}(1450)\bar{K}^{*0}$, their branching ratios in
scenario II are larger than those in scenario I by about one order of magnitude,
so experiments can readily differentiate between the lowest lying state and the
first excited state for the meson $a_{0}(1450)$.
* •
For the decay $B^{-}\to a^{0}_{0}(1450)K^{*-}$, which does not receive the
enhancement from the $K^{*}$-emission factorizable diagrams, the penguin
operator contributions are the smallest in scenario II, which makes its
branching ratio drop to the order of $10^{-6}$. Even so, its branching ratio
in scenario II is still about $2.5$ times larger than that in scenario I.
* •
The PQCD predictions are much larger than the QCD factorization results.
Because the latter approach cannot calculate the factorizable annihilation
diagrams reliably and thus brings large uncertainties into the branching
ratios, the two sets of results are still consistent with each other within
the large theoretical errors.
* •
For the considered decays, the tree contributions are strongly CKM
suppressed, so these are penguin-dominated decay modes. The decay
$\bar{B}^{0}\to a^{+}_{0}(1450)K^{*-}$ is an exception in scenario I, for it
receives an enhancement from the large Wilson coefficient
$a_{1}=C_{2}+C_{1}/3$, which makes its tree contribution survive the
suppression.
* •
The direct CP-violating asymmetry is determined by the ratio of penguin to
tree contributions, that is, by $z$. Generally speaking, if the value of $z$ is
large, the corresponding direct CP-violating asymmetry is small, and vice
versa. If the value of $z$ is close to 1, the direct CP-violating asymmetry is
sensitive to the relative strong phase angle $\delta$.
## Acknowledgment
This work is partly supported by the National Natural Science Foundation of
China under Grant No. 11047158, and by Foundation of Henan University of
Technology under Grant No.150374.
## References
* (1) N.A. Tornqvist, Phys. Rev. Lett. 49, 624 (1982).
* (2) R.L. Jaffe, Phys. Rev. D 15, 267 (1977); Erratum-ibid. Phys. Rev. D 15, 281 (1977); A.L. Kataev, Phys. Atom. Nucl. 68, 567 (2005), Yad. Fiz. 68, 597 (2005); A. Vijande, A. Valcarce, F. Fernandez and B. Silvestre-Brac, Phys. Rev. D 72, 034025 (2005).
* (3) J. Weinstein , N. Isgur , Phys. Rev. Lett. 48, 659 (1982); Phys. Rev. D 27, 588 (1983); 41, 2236 (1990); M.P. Locher et al., Eur. Phys. J. C 4, 317 (1998).
* (4) V. Baru et al., Phys. Lett. B586, 53 (2004).
* (5) L. Celenza, et al., Phys. Rev. C 61, 035201 (2000) .
* (6) M. Strohmeier-Presicek, et al., Phys. Rev. D 60, 054010 (1999) .
* (7) F.E. Close, A. Kirk, Phys. Lett. B 483 345 (2000).
* (8) C.Amsler and F.E. Close, Phys. Lett. B 353, 385 (1995); Phys. Rev. D 53, 295 (1996).
* (9) F.E. Close, Q. Zhao, Phys. Rev. D 71, 094022 (2005).
* (10) X.G. He, X.Q. Li, X. Liu, and X.Q. Zeng, Phys. Rev. D 73, 051502 (2006).
* (11) H.Y. Cheng, C.K. Chua, K.F. Liu, Phys. Rev. D 74, 094005 (2006).
* (12) W.M. Yao et al., (Particle Data Group), J.Phys. G 33, 1 (2006).
* (13) N. Mathur, et al., Phys. Rev. D 76, 114505 (2007).
* (14) L.J. Lee and D. Weingarten, Phys. Rev. D 61, 014015 (2000); M. Gockeler et al., Phys. Rev. D 57, 5562 (1998); S. Kim and S. Ohta, Nucl. Phys. Proc. Suppl. B 53, 199 (1997); A. Hart, C. Mcneile, and C. Michael, Nucl. Phys. Proc. Suppl. B 119, 266 (2003); T. Burch et al., Phys. Rev. D 73, 094505 (2006).
* (15) W.A. Bardeen et al., Phys. Rev. D 65, 014509 (2002); T. Kunihiro et. al, Phys. Rev. D 70, 034504 (2004); S. Prelovsek et al., Phys. Rev. D 70, 094503 (2004).
* (16) H.Y. Cheng, C.K. Chua and K.C. Yang, Phys. Rev. D 77, 014034 (2008).
* (17) C.D. Lu and M.Z. Yang, Eur. Phys. J. C 28, 515 (2003).
* (18) H.Y. Cheng , C.K. Chua , K.C. Yang Phys. Rev. D 73, 014017 (2006).
* (19) P. Ball, G. W. Jones and R. Zwicky, Phys. Rev. D 75, 054004 (2007).
* (20) H.N. Li, Phys. Rev. D 66, 094010 (2002).
* (21) H.N. Li and B. Tseng, Phys. Rev. D 57, 443 (1998).
* (22) G. Buchalla , A.J. Buras , M.E. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996).
* (23) Z.Q. Zhang, J.D. Zhang, Eur. Phys. J. C 67, 163 (2010).
* (24) Z.J. Xiao, Z.Q. Zhang, X. Liu, L.B. Guo, Phys. Rev. D 78, 114001 (2008).
* (25) Particle Data Group, C. Amsler et al., Phys. Lett. B 667, 1 (2008).
* (26) BaBar Collaboration, P. del Amo Sanchez, et al., arXiv:hep-ex/1005.1096.
* (27) R.H. Li, C.D. Lu, W. Wang, X.X. Wang, Phys. Rev. D 79, 014013 (2009).
* (28) Y.Y. Keum, H.N. Li, A.I. Sanda, Phys. Rev. D 63, 054008 (2001); Y.Y. Keum, H.N. Li, Phys. Rev. D 63, 074006 (2001).
* (29) X.Q. Yu, Y. Li, C.D. Lu, Phys. Rev. D 73, 017501 (2006).
|
arxiv-papers
| 2011-06-02T06:09:18 |
2024-09-04T02:49:19.293049
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhi-Qing Zhang",
"submitter": "Zhi-Qing Zhang",
"url": "https://arxiv.org/abs/1106.0368"
}
|
1106.0411
|
11institutetext: University of Glasgow 22institutetext: University of
Cambridge
# QUANTUM-LIKE UNCERTAIN CONDITIONALS FOR TEXT ANALYSIS
Alvaro Francisco Huertas-Rosero and C. J. van Rijsbergen
{alvaro, keith}@dcs.gla.ac.uk
###### Abstract
Simple representations of documents based on the occurrences of terms are
ubiquitous in areas like Information Retrieval, and also frequent in Natural
Language Processing. In this work we propose a logical-probabilistic approach
to the analysis of natural language text based in the concept of Uncertain
Conditional, on top of a formulation of lexical measurements inspired in the
theoretical concept of ideal quantum measurements. The proposed concept can be
used for generating topic-specific representations of text, aiming to match in
a simple way the perception of a user with a pre-established idea of what the
usage of terms in the text should be. A simple example is developed with two
versions of a text in two languages, showing how regularities in the use of
terms are detected and easily represented.
## 1 Introduction
How do prior expectations/knowledge affect the way a user approaches a text,
and how they drive the user’s attention from one place of it to another? This
is a very important but tremendously complex question; it is indeed as complex
as human perception of text can be. Including such effects in the
representation of text may be a relatively easy way to enhance the power of a
text retrieval or processing system. In this work we will not address the
question, but assume a simple answer to it, and follow it while building
theoretical concepts that can be used to represent natural language text for
retrieval or similar processing tasks.
The key concept to be defined will be an Uncertain Conditional between
lexical measurements, which will allow us to exploit structures from both
Boolean and Quantum logics to include certain features in a text
representation.
Automatic procedures for acquiring information about term usage in natural
language text can be viewed as lexical measurements, and can be put as
statements such as [term $t$ appears in the text] (in this paper we use
the convention that anything between square brackets [ and ] is a proposition,
to which it is possible to assign true/false values). These statements can be
regarded as a set of propositions. Some relations between propositions have the properties
of an order relation $\sqsubseteq$: for example, when one is a particular case
of the other, e.g $P_{1}=$ [term “research” appears in this text] and $P_{2}=$
[term “research” appears in this text twice] we can say that $P_{2}\sqsubseteq
P_{1}$ or that $P_{2}$ is below $P_{1}$ according to this ordering.
The set of propositions ordered by relation $\sqsubseteq$ can be called a
lattice when two conditions are fulfilled [2]: 1) a proposition exists that is
above all the others (supremum), and 2) a proposition exists that is below all
the others (infimum). When any pair of elements of a set has an order
relation, the set is said to be totally ordered, as is the case with sets of
integer, rational or real numbers and the usual order “larger or equal/smaller
or equal than ” $\geqslant/\leqslant$. If there are pairs that are not
ordered, the set is partially ordered.
Two operations can be defined in a lattice: the meet [$A\land B$] is the
highest element that is below both $A$ and $B$, and the join [$A\lor B$] is the
lowest element that is above both $A$ and $B$. In this work, only lattices where
both the join and the meet always exist and are unique are considered. These
operations are sometimes also
called conjunction and disjunction, but we will avoid these denominations,
which are associated with more subtle considerations elsewhere [5].
In terms of ordering only, another concept can be defined: the complement. When
referring to propositions, this can also be called negation. For a given
proposition $P$, the complement is a proposition $\lnot P$ such that their
join is the supremum $sup$ and their meet is the infimum $inf$:
$[P\land\lnot P=inf]\land[P\lor\lnot P=sup]$ (1)
Correspondences between two ordered sets where orderings are not altered are
called valuations. A very useful valuation is that assigning “false” or “true”
to any lattice of propositions, where {“false”,“true”} is made an ordered set
by stating [“false” $\sqsubseteq$ “true”]. With the example above it can be checked
that any sensible assignment of truth values to a set of propositions ordered by
$\sqsubseteq$ will preserve the order. Formally, a valuation $V$ can be
defined:
$V:\\{P_{i}\\}\rightarrow\\{Q_{i}\\},\text{ such that
}(P_{i}\sqsubseteq_{P}P_{j})\Rightarrow(V(P_{i})\sqsubseteq_{Q}V(P_{j}))$ (2)
where $\sqsubseteq_{P}$ is an order relation defined in $\\{P_{i}\\}$ and
$\sqsubseteq_{Q}$ is an order relation defined in $\\{Q_{i}\\}$. Symbol
$\Rightarrow$ represents material implication: $[X\Rightarrow Y]$ is true
unless $X$ is true and $Y$ is false.
Another very important and useful kind of valuations is that of probability
measures: they assign a real number between 0 and 1 to every proposition.
Valuations allow for a different way of defining the negation or complement:
for a proposition $P$, the complement $\lnot P$ is such that in any valuation
$V$, when $P$ is mapped to one extreme of the lattice (supremum $sup$ or
infimum $inf$) then $\lnot P$ will be mapped to the other
$[[V(P)=sup]\iff[V(\lnot P)=inf]]\land[[V(\lnot P)=sup]\iff[V(P)=inf]]$ (3)
For Boolean algebras, this definition will be equivalent to that based on
order only (1), but this is not the case for quantum sets of propositions.
A lattice and a valuation can be completed with a way to assess whether a
process that uses some propositions to infer others is correct. The rules that
such processes have to fulfil are called rules of inference. In this work we do
not aim to assess the correctness of formulas, but instead define a
probability measure for relations [$A\hskip 3.0pt\boxed{R}\hskip 3.0ptB$]. So
we will not be defining a logic in the strict sense, but using something that
formally resembles one. The kind of logic it resembles is Quantum Logic,
which will be explained next.
### 1.1 Conditionals in Quantum Logics
The description of propositions about objects behaving according to Quantum
Mechanics has posed a challenge for Boolean logics, and it was suggested that
the logic itself should be modified to adequately deal with these propositions
[19]. Von Neumann’s proposal was to move from standard propositional systems
that are isomorphic to the lattice of subsets of a set (distributive lattice
[2]), to systems that are instead isomorphic to the lattice of subspaces of a
Hilbert space (orthomodular lattice [1]).
A concept that is at the core of the difference between Boolean and Quantum
structures is that of compatibility. Quantum propositions may be incompatible
with others, which means that, in principle, they cannot be simultaneously
verified. A photon, for example, can have various polarisation states, which
can be measured either as linear polarisation (horizontal and vertical) or
circular (left or right) but not both at a time: they are incompatible
measurements. The propositions about a particular polarisation measure can be
represented in a 2D space as two pairs of perpendicular lines
$\\{\\{[H],[V]\\},\\{[L],[R]\\}\\}$, as is shown in figure 1. The lattice of
propositions would be completed with the whole plane $[plane]$ and the point
where the lines intersect $[point]$. The order relation $\sqsubseteq$ is “to
be contained in”, so $[point]$ is contained in every line, and all the lines
are contained in the $[plane]$.
Figure 1: System of propositions regarding the polarisation of a photon. On
the left, spaces representing the propositions. On the right, order relations
between them, represented with arrows. Subsets of orthogonal (mutually
excluding) propositions are shown enclosed in dotted boxes.
The fact that the measurements are pairwise exclusive is not reflected in the
lattice itself, but in the kind of valuations that are allowed: when $[H]$ is
true, $[V]$ can only be false, but neither $[L]$ nor $[R]$ are determined.
This can be described with a valuation into a three-element totally ordered set
$\\{false\sqsubseteq non-determined\sqsubseteq true\\}$, together with two
rules: 1) [only one proposition can be mapped to “true” and only one to
“false”] and 2) [if one proposition from an orthogonal pair is mapped to “non-
determined”, the other has to be mapped to “non-determined” as well].
The rudimentary formulation of valuation rules given in the example can, of
course, be improved, which can be done using a geometrical probability measure.
According to Gleason’s theorem [9], this probability measure can be defined by
choosing a weighted set of orthogonal directions in space
$\\{w_{i},\vec{e}_{i}\\}$, with weights that sum up to one, and computing the
weighted average of the squared fraction of each of these vectors that lies
within the subspace considered (this is not the standard formulation of the
Quantum probability measure, but it is entirely equivalent):
$V(\Pi)=\sum_{i}w_{i}\frac{||\Pi\vec{e}_{i}||^{2}}{||\vec{e}_{i}||^{2}}$ (4)
The weighted orthogonal set $\\{w_{i},\vec{e}_{i}\\}$ is entirely equivalent
to what is called density operator $\rho$ and equation (4) is equivalent to
the trace formula $V_{\rho}(\Pi)=Tr(\Pi\rho)$.
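To make this equivalence concrete, the short numerical sketch below (ours, not part of the original formulation) builds a weighted orthogonal set, evaluates equation (4) for a projector onto a subspace, and checks agreement with the trace formula; the dimension, weights and subspace are arbitrary illustrative choices.

```python
# Numerical sketch of equation (4) and its equivalence to V_rho(Pi) = Tr(Pi rho),
# with rho built from the weighted orthogonal set {w_i, e_i}.

import numpy as np

rng = np.random.default_rng(0)

# A random orthonormal basis of R^3 and weights summing to one (assumed data).
e, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # columns are orthonormal e_i
w = np.array([0.5, 0.3, 0.2])

# Projector Pi onto the subspace spanned by two fixed directions.
basis = np.linalg.qr(rng.normal(size=(3, 2)))[0]
Pi = basis @ basis.T

# Equation (4): weighted average of the squared fraction of each e_i
# lying within the subspace.
V_eq4 = sum(wi * np.linalg.norm(Pi @ e[:, i]) ** 2 / np.linalg.norm(e[:, i]) ** 2
            for i, wi in enumerate(w))

# Trace formula with the density operator rho = sum_i w_i |e_i><e_i|.
rho = sum(wi * np.outer(e[:, i], e[:, i]) for i, wi in enumerate(w))
V_trace = np.trace(Pi @ rho)

assert np.isclose(V_eq4, V_trace)
print(V_eq4, V_trace)
```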
The valuations suggested in the example can be obtained by taking two of the
orthogonal polarisations as $\vec{e}_{1}$ and $\vec{e}_{2}$ and interpreting
probability 1 as “true”, probability 0 as “false” and intermediates as “non-
determined”.
Defining conditionals in an orthomodular lattice has been a much discussed
issue [8, 16], and this paper does not aim to contribute to the polemic;
however, we will consider two aspects of the problem from the perspective of
practical applicability: the role of valuation in the definition of a logic,
and the role of complement or negation.
#### 1.1.1 Conditionals and the Ramsey Test
Material implication $A\rightarrow B=\lnot A\lor B$ is known to be problematic
when requirements other than soundness are considered (like relevancy [15],
context [12], etc.), and other kinds of implication are preferred in areas like
Information Retrieval [17]. A key issue in the consideration of an implication
is the interpretation of $[A\rightarrow B]$ when $A$ is false. One
possible approach to this issue is to consider “what if it were true”, which
amounts to adopting a counterfactual conditional. If we are interested in a
probability measure rather than a true/false valuation, we may as well
evaluate how much imagination we need to put into the “what if” statement:
how far it is from the actual state of things. This is an informal description
of what is called the Ramsey test [7]. A simplified version of the Ramsey test
can be stated as follows:
> To assess how acceptable a conditional $A\rightarrow B$ is given a state of
> belief, we find the least we could augment this state of belief to make
> antecedent $A$ true, and then assess how acceptable the consequent $B$ is
> given this augmented state of belief.
In this work we will interpret a state of belief as a restriction of the set of
possible valuations (including probability measures) that we will use to
characterise a system of propositions: in the case of a purely Quantum
formulation, it would mean imposing conditions on the weighted orthogonal sets.
We will adopt a similar interpretation for lexical measurements in the next
section.
### 1.2 Uncertain Conditional and Text Analysis
It has been suggested that high-level properties of natural language text such
as topicality and relevance for a task can be described by means of
conditional (implication) relations [18, chapter 5], giving rise to a whole
branch of the area of Information Retrieval devoted to logic models [13], [20,
chapter 8]. In this work we will focus on the detection of patterns in the use
of words that can also be put as implication-like relations.
Here we treat lexical measurements as propositions, and adopt the concept of
the Selective Eraser (SE) as a model for lexical measurements [11]. A SE
$E(t,w)$ is a transformation on text documents that preserves the text
surrounding the occurrences of term $t$ within a distance of $w$ terms, and
erases the identity of tokens not falling within this distance.
A norm $|\cdot|$ for documents $D$ is also defined, which counts the number of
defined (unerased) tokens and can be interpreted as the remaining information.
Order relations, as well as Boolean operations, can be defined for these
transformations, and the resulting lattices are known to resemble those of
Quantum propositions.
Order relations between SEs are defined for a set of documents $\\{D_{i}\\}$
as:
$[E(t_{1},w_{1})\geqslant E(t_{2},w_{2})]\iff\forall
D\in\\{D_{i}\\},[E(t_{1},w_{1})E(t_{2},w_{2})D=E(t_{2},w_{2})D]$ (5)
Since a SE erases a fraction of the terms in a document, every document
defines a natural valuation for SEs on documents which is simply the count of
unerased terms in a transformed document. This will be represented with
vertical bars $|\cdot|$
$V_{D}(E(t,w))=|E(t,w)D|$ (6)
We can also define a formula analogous to (4), determined by a set of weights
and documents $\\{\omega_{i},D_{i}\\}$:
$V(E(t,w))=\sum\omega_{i}\frac{|E(t,w)D_{i}|}{|D_{i}|}$ (7)
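As an illustration of these definitions, the sketch below gives a minimal, hypothetical implementation of a SE acting on a tokenised document, of the norm $|\cdot|$, and of the per-document valuation of equation (6); the tokenisation, the toy sentence and the helper names are our own assumptions and are not taken from [11].

```python
# Illustrative sketch of a Selective Eraser E(t, w) acting on a tokenised
# document, the norm |.| counting unerased tokens, and the per-document
# valuation of equation (6).

ERASED = None  # erased tokens keep their position but lose their identity

def selective_eraser(term, width):
    def apply(tokens):
        keep = set()
        for i, tok in enumerate(tokens):
            if tok == term:
                keep.update(range(max(0, i - width), min(len(tokens), i + width + 1)))
        return [tok if i in keep else ERASED for i, tok in enumerate(tokens)]
    return apply

def norm(tokens):
    """Number of tokens whose identity is still defined."""
    return sum(tok is not ERASED for tok in tokens)

def valuation(eraser, tokens):
    """Equation (6): count of unerased terms in the transformed document."""
    return norm(eraser(tokens))

# Hypothetical toy document.
doc = "the knight raised his sword with his right hand and struck".split()

E_sword = selective_eraser("sword", 2)
E_hand = selective_eraser("hand", 3)

print(valuation(E_sword, doc))                    # |E(sword,2) D|
print(valuation(E_hand, doc) / len(doc))          # fraction preserved, as in (7)
```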
An intuition that will guide this work is that of the point-of-view-oriented
user. A user that is making a shallow reading of a text will expect only
familiar terms and patterns, and will have a diminished ability to discern
others that he or she does not expect. We will assume here that a topical
point of view will be associated with sets of lexical propositions that are both
likely and rich in order relations.
## 2 Conditionals for SEs
### 2.1 Material Implication
Using the concepts explained in the last section, we can start defining
conditionals for SEs. Material implication, for example, is defined as:
$(A\Rightarrow_{m}B)=(\lnot A)\lor B$ (8)
Two properties of probability measures can be used to evaluate a probability
measure for this implication:
$\begin{matrix}V(\lnot A)=1-V(A)\\\ V(A\lor B)=V(A)+V(B)-V(A\land
B)\end{matrix}$ (9)
Within a single document, the probability measure would then be:
$V_{D}(E(a,w_{a})\Rightarrow_{m}E(b,w_{b}))=1-V(E(a,w_{a}))+V(E(a,w_{a})\land
E(b,w_{b}))=\\\
=1-\frac{|E(a,w_{a})D|}{|D|}+\min_{[E(c,w_{c})\geqslant_{D}E(a,w_{a})]\land[E(c,w_{c})\geqslant_{D}E(b,w_{b})]}\frac{|E(c,w_{c})D|}{|D|}$
(10)
This formula has all the known problems of material implication, like that of
being 1 whenever $E(a,w_{a})$ annihilates the document completely, so it will
give probability 1 to documents without any occurrence of $a$ or $b$. We have
used a particular probability measure to avoid the cumbersome interpretation
of what a meet and a join of SEs are. Strictly speaking, a join $E_{1}\lor
E_{2}$ would be a transformation including both $E_{1}$ and $E_{2}$. Within a
single document a SE can always be found (even though it will very likely not
be unique), but for a set of documents, the existence of join and meet defined
in this way is not guaranteed.
### 2.2 Subjunctive Conditional
A much more useful probability is that of the subjunctive (Stalnaker)
conditional $\hskip 2.0pt\Box\rightarrow$. The basis for computing this is the
Ramsey test, which starts by assuming the antecedent to be true with a minimum
change of beliefs. In this work we interpret that as taking the document
transformed by the “antecedent” eraser, $E(a,w_{a})D$, as the whole document,
and then computing the fraction of it that would be preserved by the further
application of the “consequent” eraser, $E(b,w_{b})(E(a,w_{a})D)$. This
produces a formula resembling a conditional probability:
$V_{D}(E(a,w_{a})\hskip 2.0pt\Box\rightarrow
E(b,w_{b}))=P_{D}(E(b,w_{b})|E(a,w_{a}))=\frac{|E(a,w_{a})E(b,w_{b})D|}{|E(a,w_{a})D|}$
(11)
This number will be 1 when $E(b,w_{b})\geqslant E(a,w_{a})$, and will be
between 0 and 1 whenever $|E(a,w_{a})D|\neq 0$.
This formula still has problems when $a$ is not in the document, because in
that case both $|E(a,w_{a})E(b,w_{b})D|=0$ and $|E(a,w_{a})D|=0$. A standard
smoothing technique can be used in these cases, using averages over a whole
collection or estimates of them:
$\begin{matrix}|E(a,w_{a})E(b,w_{b})\tilde{D}_{0}|=|E(a,w_{a})E(b,w_{b})D_{0}|+\mu|E(a,w_{a})E(b,w_{b})D_{avg}|\\\
|E(a,w_{a})\tilde{D}_{0}|=|E(a,w_{a})D_{0}|+\mu|E(a,w_{a})D_{avg}|\end{matrix}$
(12)
The conditional probability when the terms are not present in an actual
document would then be
$\frac{|E(a,w_{a})E(b,w_{b})D_{avg}|}{|E(a,w_{a})D_{avg}|}$. This value should
be interpreted as “undetermined”.
The final formula proposed for the probability of implication is then:
$P_{D}(E(a,w_{a})\hskip 2.0pt\Box\rightarrow
E(b,w_{b}))=\frac{|E(a,w_{a})E(b,w_{b})D|+\mu|E(a,w_{a})E(b,w_{b})D_{avg}|}{|E(a,w_{a})D|+\mu|E(a,w_{a})D_{avg}|}$
(13)
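A minimal sketch of equation (13), reusing the hypothetical selective_eraser and norm helpers from the earlier sketch, could look as follows; the stand-in average document $D_{avg}$ and the value of $\mu$ are illustrative assumptions.

```python
# Sketch of the smoothed subjunctive conditional of equation (13). D_avg is a
# toy stand-in for a collection-average document, and mu is the smoothing weight.

def conditional_probability(a, w_a, b, w_b, doc, doc_avg, mu=1.0):
    E_a = selective_eraser(a, w_a)
    E_b = selective_eraser(b, w_b)
    num = norm(E_a(E_b(doc))) + mu * norm(E_a(E_b(doc_avg)))
    den = norm(E_a(doc)) + mu * norm(E_a(doc_avg))
    return num / den if den > 0 else float("nan")

doc = "the knight raised his sword with his right hand and struck".split()
doc_avg = "a sword in hand is worth two in the sheath".split()   # toy stand-in

print(conditional_probability("sword", 2, "hand", 3, doc, doc_avg))
```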
### 2.3 Topic-Specific Lattices
If we think of a user going through a text document in a hurried and shallow
way, we may assume that his or her attention will be caught by familiar terms,
and that he or she will then get an idea of the vocabulary involved that is
biased towards the distribution of terms around this familiar set.
Suppose we take a set of SEs with a fixed width centred on different (but
semantically related) terms. We will assume that the pieces of text preserved
by these can be thought of as a lexical representation of the topic. Within this
text, we can look for order relations between narrower SEs centred on the same
terms or others, as a representation of the document.
If a text is very long, or there are a large number of documents taken as a
corpus to characterise lexical relations in a topic, it is not convenient to
require strict conditions like $E(a,w_{a})E(b,w_{b})D=E(b,w_{b})D$ for a
large document $D$ or for all documents $D_{i}$ in a large set, because then
recognised order relations would be very scarce. A more sensible approach
would be to assess a probability within the text preserved by the SEs that
define the topic, which would be:
$P_{topic}(E(a,w_{a})\hskip 2.0pt\Box\rightarrow E(b,w_{b}))=\\\
=\max_{k_{i}}\left(P_{topic}([E(k_{i},w_{t})E(a,w_{a})]\hskip
2.0pt\Box\rightarrow[E(k_{i},w_{t})E(b,w_{b})])\right)$ (14)
Restricting ourselves to the set of keywords $\\{k_{i}\\}$, the maximum value
would always be for the topic-defining SE with the same central term as the
antecedent SE $E(a,w_{a})$ ($a=k_{i}$), which simplifies the formula to:
$P_{topic}(E(a,w_{a})\hskip 2.0pt\Box\rightarrow E(b,w_{b}))=\\\
=\frac{|E(a,w_{a})E(b,w_{b})E(a,w_{t})D|+\mu|E(a,w_{a})E(b,w_{b})E(a,w_{t})D_{avg}|}{|E(a,w_{a})D|+\mu|E(a,w_{a})D_{avg}|}$
(15)
for any $w_{a}<w_{t}$, where $w_{t}$ is the width of the SEs used to define
the topic. For large values of $w_{t}$ this would be equivalent to general
formula (13).
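Continuing the earlier sketches, the topic-restricted conditional of equation (15) can be illustrated by first applying the wide topic-defining eraser and then evaluating the conditional of the previous sketch on the reduced text; all names and widths below are illustrative assumptions.

```python
# Sketch of the topic-restricted conditional of equation (15): reduce the
# document with the wide topic-defining eraser E(a, w_t), then evaluate the
# smoothed conditional on the reduced text. Because w_a < w_t, the denominator
# equals |E(a,w_a)D| + mu |E(a,w_a)D_avg|, as in equation (15).

def topic_conditional(a, w_a, b, w_b, w_t, doc, doc_avg, mu=1.0):
    E_topic = selective_eraser(a, w_t)            # requires w_a < w_t
    return conditional_probability(a, w_a, b, w_b, E_topic(doc), E_topic(doc_avg), mu)

print(topic_conditional("sword", 2, "hand", 3, 10, doc, doc_avg))
```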
## 3 An Example
A particular topic might define its own particular sub-language; this is a
well-known fact, and an interesting matter for research [10]. The differences
between these sub-languages and the complete language have been studied for
very wide topics, such as scientific research domains [6]. In this work, we
will aim at much more fine-grained topics, which could be found dominating
different parts of a single document. Fine-grained sub-languages such as these
would not depart from the whole language of the document significantly enough
to be described grammatically or semantically as sub-languages in their own
right, but will rather amount to a preference for some lexical relations over others.
As an illustration of how SE-based Uncertain Conditionals can be used to
explore and describe the use of language characteristic of a particular, fine-
grained topic, we will use two versions of a single document in different
languages, and find the relations between terms chosen to define a topic. We
have chosen the classic novel Don Quixote as the subject for examining lexical
features. Two versions of this novel were used: the original in Spanish [3],
as made available by Project Gutenberg, and an English translation by John
Ormsby, obtained from the same site [4].
language | No. of tokens | No. of terms
---|---|---
Spanish | 387675 | 24144
English | 433493 | 15714
Table 1: Characteristics of the Spanish and English versions of Don Quixote as
plain text sequences
For this text, we define a topic by the keywords
$\\{sword,hand,arm,helmet,shield\\}$ and their Spanish equivalents
$\\{espada,mano,brazo,yelmo,adarga\\}$, and the width for the topic-defining
SEs was chosen to be 10. Co-occurrence studies have found that the most
meaningful distances between terms are from 2 to 5 [14], so we took twice the
highest recommended co-occurrence distance in order to also capture relations
between terms within non-erased windows. Information about the text and the
topics is given in table 1.
Order relations were tested with formula (15), and those implying the lower
values of $w_{a}$ and $w_{b}$ (widths of antecedent and consequent) were taken
as representative. The values can be seen in table 2.
| sword | hand | arm | helmet | shield
---|---|---|---|---|---
sword | trivial | P(2$\sqsupseteq$3)=87% | P(1$\sqsupseteq$3)= 93% | - | P(8$\sqsupseteq$3) = 59%
hand | P(2$\sqsupseteq$ 3) = 96% | trivial | P(2$\sqsupseteq$3)= 71% | - | -
arm | P(2$\sqsupseteq$1)=96% | P(2$\sqsupseteq$3)=87% | trivial | P(1$\sqsupseteq$3)=71% | P(3$\sqsupseteq$4) = 53%
helmet | - | - | - | trivial | -
shield | P(7 $\sqsupseteq$3)=88% | - | P(3 $\sqsupseteq$3)=87% | - | trivial
| espada | mano | brazo | yelmo | adarga
---|---|---|---|---|---
espada | trivial | P(4$\sqsupseteq$3)=67% | P(6$\sqsupseteq$3)= 85% | - | P(2$\sqsupseteq$7) = 52%
mano | P(2$\sqsupseteq$ 3) = 89% | trivial | P(4$\sqsupseteq$3)= 75% | - | P(4$\sqsupseteq$3)= 63%
brazo | P(5$\sqsupseteq$3)=89% | P(3$\sqsupseteq$3)=94% | trivial | - | P(1$\sqsupseteq$3) = 74%
yelmo | - | - | - | trivial | -
adarga | P(6 $\sqsupseteq$3)=94% | P(3 $\sqsupseteq$3)=94% | - | - | trivial
Table 2: Order relations between SEs with the lowest values of window width,
within a topic defined by a set of erasers of width 10 centred on the same 5
words, in both their English and Spanish versions. Entries ($N_{1}\sqsupseteq
N_{2}$) represent relations $E(t_{row},N_{1})\hskip 2.0pt\Box\rightarrow
E(t_{column},N_{2})$
### 3.1 Anomalies in the Ordering
Table 2 shows apparently paradoxical results. Relations $E(sword,2)\hskip
2.0pt\Box\rightarrow E(hand,3)$ and $E(hand,2)\hskip 2.0pt\Box\rightarrow
E(sword,3)$, both with probabilities above $87\%$, do not fulfill the
properties of an order relation when considered together with
$E(sword,3)\hskip 2.0pt\Box\rightarrow E(sword,2)$ and $E(hand,3)\hskip
2.0pt\Box\rightarrow E(hand,2)$ (see figure 2). This is a result of putting
together partially incompatible scenarios: $E(sword,2)\hskip
2.0pt\Box\rightarrow E(hand,3)$ is evaluated in the text preserved by
$E(sword,10)$ and $E(hand,2)\hskip 2.0pt\Box\rightarrow E(sword,3)$ is
evaluated in the text preserved by $E(hand,10)$.
Figure 2: Anomalous ordering of four SEs in the English topical lattice
Anomalies in the order can be resolved by simply choosing some of the
relations on the basis of their higher probability (in this case,
$E(hand,2)\sqsupseteq E(sword,3)$ with 96% over $E(sword,2)\hskip
2.0pt\Box\rightarrow E(hand,3)$ with 87%), or by collapsing the involved SEs
into an equivalence class, so that the inconsistency is removed.
### 3.2 Lattices for Two Languages
The sets of relations obtained are strikingly similar for the two languages,
with more differences for polysemic terms like “arm” (which appears in Spanish
with different terms for its noun meaning and for its verb meaning) and
“sword” (which corresponds to different kinds of weapons with their own names
in Spanish, of which “espada” is just the most frequent). Moreover, the anomaly
in the orderings of SEs centred on “sword” and “hand” does not appear between
their Spanish counterparts “espada” and “mano”, but is replaced by a very
similar pair of relations.
This kind of analysis provides a promising way of finding regularities between
different languages, or even analogies between different terms in the same
language. It is easy to isolate the transformations needed to go from the
English lattice to the Spanish one, as a lattice morphism. The differences
between the two could even suggest a valuation: a mapping to a simpler common lattice.
## 4 Discussion and Conclusion
In this work, we have shown how the framework of SEs provides a natural
platform to define logical relations resembling those employed in Boolean
logics, but also more complex ones, like the subjunctive conditional.
Quantitative implementation follows naturally from the parallel between
lexical measurements and quantum ideal measurements, producing a formula that
is both simple and easy to compute for concrete cases.
The proposed formula also allows relations to be restricted to only a chosen
part of the text, namely that surrounding the occurrences of keywords. This
makes it possible to extract relations between terms that can be expected to be
characteristic of text about a particular topic.
The proposed formula was applied to a simple example, with very interesting
results. Two main features can be observed in the results:
1. 1.
Anomalies can appear in the resulting order relation, coming from the
existence of transformations that are incompatible in the sense of quantum
incompatibility. These can be removed easily if a proper lattice-valued
representation is to be obtained, but they can also be studied as evidence of
useful patterns.
2. 2.
The relation structures between SEs make visible common features of the
representation of a text in different languages: terms that mean something
similar will be embedded into similar patterns of relations.
As a matter for future research, both observations can be explored further:
the causes and characteristics of the anomalies in order relations between SEs
as assessed by uncertain conditionals, and the possibility of putting the
multi-language representation in terms of morphisms between lattices of SEs.
In particular, having similar lattices for two versions of the same text in
different languages invites finding an optimal way of defining a common
valuation that would map both lattices to a simpler third lattice capturing
their common features. This is a very promising direction of research, and a
novel approach to multi-lingual text processing.
## Acknowledgements
This work was developed under the funding of the Renaissance Project “Towards
Context-Sensitive Information Retrieval Based on Quantum Theory: With
Applications to Cross-Media Search and Structured Document Access” EPSRC
EP/F014384/1, School of Computing Science of the University of Glasgow and
Yahoo! (funds managed by Prof. Mounia Lalmas).
## References
* [1] E. G. Beltrametti and G. Cassinelli. The logic of Quantum Mechanics. Addison Wesley, 1981.
* [2] S. Burris and H. P. Sankappanavar. A Course on Universal Algebra. Springer, 1981.
* [3] M. de Cervantes-Saavedra. Don Quijote. Project Gutenberg, 1615.
* [4] M. de Cervantes-Saavedra. The ingenious gentleman Don Quixote of La Mancha. Project Gutenberg, translation by John Ormsby (1885) edition, 1615.
* [5] B. D’Hooghe and J. Pykacz. On some new operations on orthomodular lattices. International Journal of Theoretical Physics, 39(3):641–652, 2000.
* [6] C. Friedman, P. Kra, and A. Rzhetsky. Two biomedical sublanguages: a description based on the theories of Zellig Harris. Journal of Biomedical Informatics, 35(4):222–235, 2002.
* [7] P. Gärdenfors. Belief revisions and the Ramsey test for conditionals. The Philosophical Review, 95(1):81–93, 1986.
* [8] M. Gardner. Is Quantum Logic Really Logic? Philosophy of Science, 38(4):508–529, 1971.
* [9] A. M. Gleason. Measures on the closed subspaces of a Hilbert space. Journal of Mathematics and Mechanics, 6:885–893, 1957.
* [10] Z. Harris. Discourse and Sublanguage, chapter 11, pages 231–236. Berlin and New York: de Gruyter, 1982.
* [11] A. Huertas-Rosero, L. Azzopardi, and C. van Rijsbergen. Characterising through erasing: A theoretical framework for representing documents inspired by quantum theory. In P. D. Bruza, W. Lawless, and C. J. van Rijsbergen, editors, Proc. 2nd AAAI Quantum Interaction Symposium, pages 160–163, Oxford, U. K., 2008. College Publications.
* [12] T. Huibers and P. Bruza. Situations, a general framework for studying information retrieval. Information Retrieval: New systems and current research, 2, 1994.
* [13] M. Lalmas. Logical models in information retrieval: Introduction and overview. Information Processing and Management, 34(1):19–33, 1998.
* [14] K. Lund and C. Burgess. Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior Research Methods, Instruments and Computers, 28(2):203–208, 1996.
* [15] E. D. Mares. Relevant Logic: A Philosophical Interpretation. Cambridge University Press, 2004.
* [16] M. Pavicic and N. Megill. Is Quantum Logic a Logic?, chapter 2, pages 23–47. Elsevier, 2004.
* [17] C. J. van Rijsbergen. A new theoretical framework for information retrieval. In SIGIR ’86: Proceedings of the 9th annual international ACM SIGIR conference on Research and development in information retrieval, pages 194–200, New York, NY, USA, 1986. ACM.
* [18] C. J. van Rijsbergen. The Geometry of Information Retrieval. Cambridge University Press, 2004.
* [19] J. von Neumann and G. Birkhoff. The logic of quantum mechanics. Annals of Mathematics, 43:298 – 331, 1936.
* [20] D. Widdows and P. Kanerva. Geometry and Meaning. Cambridge University Press, 2004.
|
arxiv-papers
| 2011-06-02T12:14:33 |
2024-09-04T02:49:19.303683
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Alvaro Francisco Huertas-Rosero and C. J. van Rijsbergen",
"submitter": "\\'Alvaro Francisco Huertas-Rosero",
"url": "https://arxiv.org/abs/1106.0411"
}
|
1106.0492
|
# Probing the Physical Properties of Directly Imaged Gas Giant Exoplanets
Through Polarization
Mark S. Marley1 and Sujan Sengupta2
1NASA Ames Research Center, MS-245-3, Moffett Field, CA 94035, U.S.A.
2Indian Institute of Astrophysics, Koramangala 2nd Block, Bangalore 560 034,
India
E-mail: Mark.S.Marley@NASA.gov; E-mail: sujan@iiap.res.in
(Submitted 2011 March 10.)
###### Abstract
It is becoming clear that the atmospheres of the young, self-luminous
extrasolar giant planets imaged to date are dusty. Planets with dusty
atmospheres may exhibit detectable amounts of linear polarization in the near-
infrared, as has been observed from some field L dwarfs. The asymmetry
required in the thermal radiation field to produce polarization may arise
either from the rotation-induced oblateness or from surface inhomogeneities,
such as partial cloudiness. While it is not possible at present to predict the
extent to which atmospheric dynamics on a given planet may produce surface
inhomogeneities substantial enough to produce net non-zero disk integrated
polarization, the contribution of rotation-induced oblateness can be
estimated. Using a self-consistent, spatially homogeneous atmospheric model
and a multiple scattering polarization formalism for this class of exoplanets,
we show that polarization on the order of 1% may arise due to the rotation-
induced oblateness of the planets. The degree of polarization for cloudy
planets should peak at the same wavelengths at which the planets are brightest
in the near-infrared. The observed polarization may be even higher if surface
inhomogeneities exist and play a significant role. Polarized radiation from
self-luminous gas giant exoplanets, if detected, provides an additional tool
to characterize these young planets and a new method to constrain their
surface gravity and masses.
###### keywords:
polarization – scattering – planets and satellites: atmospheres –
stars: atmospheres.
††pubyear: 2011
## 1 Introduction
Several young, self-luminous gas giant planets have been detected by direct
imaging (Chauvin et al., 2004; Marois et al., 2008, 2010; Lagrange et al.,
2010; Lafrenière et al., 2010) around nearby stars. These objects are now
being characterized by photometry and even spectroscopy (Bowler et al., 2010;
Patience et al., 2010; Currie et al., 2011; Barman et al., 2011) in an attempt
to characterize their atmospheres and constrain the planetary masses. In the
next few years many more such planets are almost certain to be detected by
ground-based adaptive optics coronagraphs, such as the P1640 coronagraph on
Palomar, the Gemini Planet Imager, and SPHERE on the VLT (Beichman et al.,
2010).
The characterization of the mass of a given directly imaged planet can be
problematic, since such planets typically lie at large star-planet
separations (tens of AU and greater) and are thus not amenable to detection by
radial-velocity methods. Instead masses must be estimated either by comparison
of photometry and spectroscopy to planetary evolutionary and atmospheric
models or by their gravitational influence on other planets or disks (e.g.
Kalas et al., 2005; Fabrycky & Murray-Clay, 2010). Model comparisons as a
method for constraining mass can be ambiguous, however. Evolution models which
predict luminosity as a function of age have yet to be fully tested in this
mass range for young planets and at very young ages ($<100$ Myr) the model
luminosity can depend on the unknown initial conditions (Marley et al., 2007;
Fortney et al., 2008). The masses of the planets around HR 8799 estimated by
cooling models are apparently inconsistent with standard model spectra (Bowler
et al., 2010; Barman et al., 2011; Currie et al., 2011) and can lead to rapid
orbital instabilities if circular, face-on orbits are assumed (Fabrycky &
Murray-Clay, 2010). Finally, the mass of the planetary-mass companion 2M1207 b
to the brown dwarf 2M1207 (Chauvin et al., 2004), inferred from fitting of spectral
models to observed near-infrared colors is discrepant with the mass inferred
from the companion’s luminosity and the age of the primary (Mohanty et al.,
2007).
Discrepancies such as these may arise because young exoplanets exist in a
gravity-effective temperature ($g,T_{\rm eff}$) regime in which both the
evolutionary and atmospheric models have yet to be validated. Fits of
photometry and spectroscopy to predictions of atmosphere models depend upon
the veracity of the models themselves, which—in the $T_{\rm eff}$ range of
interest—in turn sensitively depend upon model cloud profiles, which are as
yet uncertain. Extensive experience with fitting models to brown dwarf spectra
and photometry (Cushing et al., 2008; Stephens et al., 2009) reveals that
while effective temperature can be fairly tightly constrained, gravity
determinations are usually less precise, uncertain in some cases by almost an
order of magnitude in $g$. While there are low gravity spectral indicators
recognized from surveys of young objects (Cruz et al., 2009; Kirkpatrick et
al., 2006) these have yet to be calibrated by studies of binary objects which
allow independent measures of mass. Ideally for a single object with a given
radius $R$, evolution model luminosity, which (for a known parallax)
constrains $R^{2}T_{\rm eff}^{4}$ would be fully consistent with $(g,T_{\rm
eff})$ constraints from atmosphere model fitting. But, as noted above, this is
often not in fact the case, as the derived luminosity, mass, and radii of the
companion 2M1207 b as well as the HR 8799 planets are not fully internally
self-consistent with standard evolution models.
Given the likely future ubiquity of direct detections of young, hot Jupiters
and the clear need for additional independent methods to constrain planet
properties, we have explored the utility of polarization as an additional
method for characterizing self-luminous planets.
Polarization of close-in giant exoplanets, whose hot atmospheres favour the
presence of silicate condensates, is discussed by Seager, Whitney & Sasselov
(2000) and by Sengupta & Maiti (2006). While these authors considered the
polarization of the combined light from an unresolved system of star and
planet, Stam, Hovenier & Waters (2004) presented the polarization of the
reflected light of a resolved, directly-imaged Jupiter-like exoplanet. Since
the polarized light of a close-in exoplanet is combined with the unpolarized
continuum flux of the star, which cannot be resolved, the amount of observable
polarization in such cases is extremely low – of the order of the
planet-to-star flux ratio. Polarization measurements of directly-imaged
exoplanets in reflected light are also challenging. The removal of scattered
light from the primary star must be precise in both polarization channels so
that the planet’s intrinsic polarization (which is a differential measurement)
can be accurately determined. In any case no extrasolar planet has yet been
imaged in scattered light, an accomplishment that will likely require a space-
based coronagraph (e.g., Boccaletti et al., 2011). Measuring polarization of
thermally emitted radiation—as we propose here—is also difficult but does not
require a planet to be close to the star (where the starlight suppression is
most difficult) so that it is bright in reflected light. Furthermore
extrasolar planets have already been imaged which raises the possibility of
polarization observations.
It is clear from comparisons of model spectra to data that most of the
exoplanets directly imaged to date have dusty atmospheres (Marois et al.,
2008; Bowler et al., 2010; Lafrenière et al., 2010; Barman et al., 2011;
Currie et al., 2011; Skemer et al., 2011). Clear atmospheres lacking dust
grains can be polarized, but only at blue optical wavelengths where gaseous
Rayleigh scattering is important (Sengupta & Marley, 2009). Since even the
hottest young exoplanets will not emit significantly in the blue, grain
scattering must be present for there to be measurable polarization in the
near-infrared where warm giant planets are bright (Sengupta & Marley, 2010).
There are two temperature ranges within which we expect a gas giant exoplanet
to possess significant atmospheric condensates. The first is that of
L-dwarf-like planets (roughly $1000<T_{\rm eff}<2400\,\rm K$) where iron and silicate
grains condense in the observable atmosphere. The lower end of this range in
the planetary mass regime is as yet uncertain. The second temperature range
occurs in cool planets with atmospheric water clouds ($T_{\rm eff}<400\,\rm
K$). There have as yet been no confirmed detections of such planets. Here we will
focus on the first category since such objects are brighter, more easily
detectable, and the comparison to the field dwarfs is possible.
Although survey sizes are fairly small, linear polarization of field L dwarfs
has been detected. Menard et al. (2002) and Zapatero Osorio et al. (2005) both
report that a fraction of L dwarfs, particularly the later, dustier spectral
types, are intrinsically polarized. Sengupta & Marley (2010) find that the
observed polarization can plausibly arise from emission of cloudy, oblate
dwarfs, although to produce the required oblateness (20% or more) the dwarfs
must have fairly low gravity for a field dwarf ($g\sim 300\,\rm m\,s^{-2}$)
and rapid rotation. The required rotation periods are brisk, as little as 2
hours or less, but are compatible with observed rotational velocities in at
least some cases (see Sengupta & Marley (2010) for a discussion). Sengupta &
Marley (2010) further find that the near-infrared polarization is greatest at
$T_{\rm eff}\sim 1600\,\rm K$ where their model condensate clouds are both
optically thick and still prominent enough in the photosphere to maximally
affect the polarization.
Surface inhomogeneities can also give rise to a net polarization (Menard &
Delfosse, 2004) and experience from the solar system confirms that irregularly
spaced clouds are to be expected. Both Jupiter’s and Saturn’s thermal emission
in the five-micron spectral window is strongly modulated by horizontally
inhomogeneous cloud cover and it would not be surprising to find similar
morphology in the atmospheres of exoplanets. In the presence of surface
inhomogeneity, the asymmetry that produces the net non-zero disk-integrated
polarization would increase and hence a combination of oblate photosphere and
surface inhomogeneity can give rise to detectable levels of polarization.
Exoplanets are even better candidates than L dwarfs to have an oblate shape
and be polarized. With a lower mass and roughly the same radius as a brown
dwarf (and thus a lower gravity), a rapidly rotating planet can be
significantly oblate and consequently produce a polarization signal even
without surface inhomogeneities. Here we explore the conditions under which
the thermal emission from a warm, young exoplanet may be polarized and
consider the scientific value of measuring exoplanet polarization. We first
look at the issue of oblateness, then present a set of cloudy model atmosphere
calculations relevant to planets amenable to direct detection and discuss
under which conditions their thermal emission may be polarized. Finally we
discuss our findings and explore how the characterization of an extrasolar
planet may be enhanced by polarization observations.
## 2 Young Giant Exoplanets
### 2.1 Evolution
When giant planets are young they are thermally expanded and boast larger
radii and smaller gravity. Fortney et al. (2008) have computed evolution
models for gas giant planets with masses ranging from 1 to $10\,\rm M_{J}$ for
ages exceeding $10^{6}\,\rm yr$ and we can use their results to predict the
oblateness of thermally expanded young Jupiters with various rotation rates.
Those authors considered two types of evolution models. The first variety, termed
‘hot starts’, was most traditional and assumed the planets formed from hot,
distended envelopes of gas which rapidly collapsed. This calculation is
comparable to that of most other workers in the field. They also presented
calculations for planets formed by the core accretion planet formation process
(see Lissauer & Stevenson (2007)) which (depending on details of the assumed
boundary condition for the accretion shock) produces planets that are
initially much smaller and cooler than in the ‘hot start’ scenario.
For the calculations here we choose to use the ‘hot start’ evolutionary
calculation. We do this for several reasons. First, these models provide a
reasonable upper limit to the radius at young ages and thus bound the problem.
Second, at the large orbital separations that will, at least initially, be
probed by ground based adaptive optics coronagraphic imaging, the core
accretion mechanism may be inefficient at forming planets. Thus the gaseous
collapse scenario may be the more relevant choice. Finally, the three planets
observed around HR 8799 are all much brighter than predicted by the Fortney et
al. (2008) cold-start, but not the hot-start, cooling tracks.
Figure 1 presents model evolution tracks for non-irradiated giant exoplanets
from Fortney et al. (2008). On this figure planets age from the right to the
left as effective temperature falls, the planets contract, and their surface
gravity, $g$, increases. The dashed lines denote isochrones. This figure
guides our selection of atmosphere models to evaluate for polarization
studies. Groundbased coronagraphic searches for planets are expected to foucus
on stars younger than about 200 Myr (e.g., McBride et al., 2011). From the
figure we see that at ages of 10 to 200 Myr we expect exoplanets with masses
falling between 1 and $10\,\rm M_{J}$ to have $g$ roughly in the range of 15
to $200\,\rm m\,s^{-2}$.
### 2.2 Shape
Both Jupiter and Saturn are oblate. The fractional difference,
$f=1-R_{p}/R_{e}$, between their equatorial and polar radii, known as
oblateness, is 0.065 and 0.11, respectively. The extent to which their
equators bulge outwards depends on their surface gravity, $g$, and rotation
rate, $\Omega$, as well as their internal distribution of mass. The Darwin-
Radau relationship (Barnes & Fortney, 2003) connects these quantities of
interest:
$\displaystyle
f=\frac{\Omega^{2}R_{e}}{g}\left[\frac{5}{2}\left(1-\frac{3K}{2}\right)^{2}+\frac{2}{5}\right]^{-1}$
(1)
Here $K=I/(MR_{e}^{2})$, $I$ is the moment of inertia of the spherical
configuration, and $M$ and $R_{e}$ are the mass and equatorial radii.
The relationship for the oblateness $f$ of a stable polytropic gas
configuration under hydrostatic equilibrium is also derived by Chandrasekhar
(1933) and can be written as
$\displaystyle f=\frac{2}{3}C\frac{\Omega^{2}R_{e}}{g}$ (2)
where $C$ is a constant whose value depends on the polytropic index.
The above two relationships provide the same value of oblateness for any
polytropic configuration. Equating Eq. (1) and Eq. (2), we obtain
$\displaystyle
C=\frac{3}{2}\left[\frac{5}{2}\left(1-\frac{3K}{2}\right)^{2}+\frac{2}{5}\right]^{-1}.$
(3)
Substituting the value of $K$ for a polytrope of index $n$ gives the value of
the corresponding $C$. For example, $K=0.4,0.261,0.205,0.155,0.0754$ for
$n=0,1,1.5,2,3$, respectively. The corresponding values of $C$ derived by
Chandrasekhar (1933) (p. 553, Table 1) are 1.875, 1.1399, 0.9669, 0.8612, and
0.7716, respectively.
The interiors of gas giant planets can be well approximated as $n=1$
polytropes. For the observed masses, equatorial radii, and rotation rates of
Jupiter and Saturn, expression (2) predicts, with $n=1$, an oblateness of
0.064 and 0.11, in excellent agreement with the observed values. Figure 2
presents the oblateness computed employing Eq. (2) as applied to 1 and
$10\,\rm M_{J}$ planets at three different ages, 10, 100, and 1,000 Myr using
the Fortney et al. (2008) hot-start cooling tracks. Also shown is the
oblateness (0.44) at which a uniformly rotating $n=1.0$ polytrope becomes
unstable (James, 1964). Clearly for rotation rates comparable to those seen
among solar system planets we can expect a substantial degree ($f>0.10$) of
rotational flattening. As gas giants age and contract the same rotation rate
produces much less oblate planets. However for young, Jupiter mass planets
rotation rates of 7 to 10 hours can easily produce $f\sim 0.2$ even for
planets as old as 100 Myr. More rapid rotation rates may produce even greater
degrees of flattening. L dwarfs, with much higher surface gravity, must have
even more rapid rotation rates to exhibit even modest flattening (Sengupta &
Marley, 2010).
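As a rough numerical illustration of equations (1) and (2) (a sketch of our own, using approximate published values for Jupiter and an assumed parameter set for a young planet), the following Python snippet recovers oblateness values close to those quoted above.

```python
# Rough numerical sketch of the oblateness relations: the Darwin-Radau form of
# equation (1) and the polytropic form of equation (2) with the n = 1 value
# C = 1.1399. Planetary values below are approximate and assumed for illustration.

import math

def oblateness_darwin_radau(omega, r_eq, g, K):
    """Equation (1): f = (Omega^2 R_e / g) / [5/2 (1 - 3K/2)^2 + 2/5]."""
    return (omega**2 * r_eq / g) / (2.5 * (1.0 - 1.5 * K)**2 + 0.4)

def oblateness_polytrope(omega, r_eq, g, C=1.1399):
    """Equation (2): f = (2/3) C Omega^2 R_e / g."""
    return (2.0 / 3.0) * C * omega**2 * r_eq / g

# Jupiter (approximate values): rotation period ~9.925 h, R_e ~7.15e7 m,
# g ~24.8 m s^-2, moment-of-inertia factor K ~0.26.
omega_jup = 2.0 * math.pi / (9.925 * 3600.0)
print(oblateness_darwin_radau(omega_jup, 7.15e7, 24.8, 0.26))   # ~0.067 (observed 0.065)
print(oblateness_polytrope(omega_jup, 7.15e7, 24.8))            # ~0.068

# A hypothetical young ~2 M_J planet: g ~30 m s^-2, R_e ~1.3 R_Jup, rotation
# period 7 h. Gives f ~ 0.15; lower gravity or faster rotation pushes this
# toward the f ~ 0.2 regime discussed above.
omega_young = 2.0 * math.pi / (7.0 * 3600.0)
print(oblateness_polytrope(omega_young, 1.3 * 7.15e7, 30.0))
```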
## 3 Polarization of Young Exoplanets
To explore the degree of polarization expected for various planet masses and
ages we considered a selection of one-dimensional, plane-parallel,
hydrostatic, non-gray, radiative-convective equilibrium atmosphere models with
sixty vertical layers (Ackerman & Marley, 2001; Marley et al., 2002; Freedman
et al., 2008) for specified effective temperatures, $800<T_{\rm eff}<1200\,\rm
K$ and surface gravities $g=30$ and $100\,\rm m\,sec^{-2}$. We focus on this
apparently limited parameter range since all gas giant exoplanets with masses
below $10\,\rm M_{J}$ will have cooled below 1200 K by an age of 30 Myr (see
Figure 1 and also Fortney et al. (2008)). The median age for nearby ($<75\,\rm
pc$) young stars that are likely targets for planet imaging surveys is
$50\,\rm Myr$ (McBride et al., 2011). For our study we choose a lower limit of
800 K, well below the $T_{\rm eff}$ at which Sengupta & Marley (2010)
predicted maximal polarization for field L dwarfs. At such temperatures dust
clouds, if present globally across the disk, will lie at high optical depth
and we expect them to produce a smaller polarization signal than in warmer objects.
Indeed 800 K is well below the field dwarf L to T transition temperature of
1200 to 1400 K (Stephens et al. (2009) and references therein) by which point
most signs of clouds have departed. However there exists growing evidence that
there is a gravity dependence to the effective temperature at which clouds are
lost from the atmosphere and certainly the planets such as those orbiting HR
8799 are still dusty at effective temperatures near 1000 K (Bowler et al.,
2010). Observation of a polarization signal in a cooler exoplanet would
provide powerful evidence for atmospheric dust.
Some of the more massive young exoplanets ($M>8\,\rm M_{J}$) may have
gravities in excess of our $100\,\rm m\,sec^{-2}$ upper limit, but as we show
below little oblateness-induced polarization is expected at high gravity in
this $T_{\rm eff}$ range (see also Sengupta & Marley (2010)). For example a
surface gravity of $g=100\,\rm m\,sec^{-2}$ and $T_{\rm eff}=1000\,\rm K$
approximately describes an $8\,\rm M_{J}$ planet at an age of 100 Myr while
values of $30\,\rm m\,sec^{-2}$ and 800 K are expected for a $2\,\rm M_{J}$
planet at an age of 60 Myr. We choose these values and a few others to
illustrate the parameter space and the sensitivity of the results to
variations in gravity and effective temperature.
Each model includes atmospheric silicate and iron clouds computed with
sedimentation efficiency (Ackerman & Marley, 2001) $f_{\rm sed}=2$.
Preliminary studies by our group suggest that even dustier models with $f_{\rm
sed}\sim 1$ might be necessary to reproduce the HR 8799 planets. However our
previous work (Sengupta & Marley, 2010) has demonstrated that while $f_{\rm
sed}=1$ atmospheres do show greater polarization than $f_{\rm sed}=2$, the
difference is slight when integrated over the disk. Other cloud modeling
approaches are reviewed by Helling et al. (2008). Some of these alternative
cloud modeling formulations, such as those employed by Helling and
collaborators (e.g., Helling & Woitke, 2006; Helling et al., 2008), predict a
greater abundance of small particles high in the atmosphere than the Ackerman
& Marley approach. Such a haze of small particles could potentially produce a
larger polarization signal than we derive here. Polarization measurements may
thus help provide insight into the veracity of various approaches.
As in Sengupta & Marley (2010) we employ the gas and dust opacity, the
temperature-pressure profile and the dust scattering asymmetry function
averaged over each atmospheric pressure level derived by the atmosphere code
in a multiple scattering polarization code that solves the radiative transfer
equations in vector form to calculate the two Stokes parameters $I$ and $Q$ in
a locally plane-parallel medium (Sengupta & Marley, 2009). For each model
layer we fit a Henyey-Greenstein phase function to the particle scattering
phase curve predicted by a Mie scattering calculation. A combined Henyey-
Greenstein-Rayleigh phase matrix (Liu & Weng, 2006) is then used to calculate
the angular distribution of the photons before and after scattering. In the
near-infrared the contribution of Rayleigh scattering by the gas to the
overall scattering is negligible and the scattering is treated in the Henyey-
Greenstein limit with the particle phase function computed from Mie theory.
Specifically the off diagonal terms of the scattering phase matrix are
described by White (1979) and are very similar to the pure Rayleigh case. For
the diagonal elements the Henyey-Greenstein elements are used. In the limit of
the scattering asymmetry parameter approaching zero the matrix converges to
the Rayleigh scattering limit. Finally, the angle dependent $I$ and $Q$ are
integrated over the rotation-induced oblate disk of the object by using a
spherical harmonic expansion method and the degree of polarization is taken as
the ratio of the disk integrated polarized flux ($F_{Q}$) to the disk
integrated total flux ($F_{I}$). The detailed formalisms as well as the
numerical methods are provided in Sengupta & Marley (2009).
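As a small illustration of the Henyey-Greenstein step described above (a sketch under our own assumptions, not the authors' code), the snippet below fits the asymmetry parameter of a Henyey-Greenstein phase function to a tabulated phase curve by a simple grid search; the "Mie" curve used here is a synthetic stand-in rather than an actual Mie calculation.

```python
# Fit the asymmetry parameter g of a Henyey-Greenstein phase function to a
# tabulated scattering phase curve (here a synthetic stand-in for a Mie result).

import numpy as np

def henyey_greenstein(cos_theta, g):
    """HG phase function, normalised so its integral over solid angle is 1."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta)**1.5)

# Scattering angles and a stand-in "Mie" phase curve: a forward-peaked HG curve
# with g = 0.8 plus a little noise, used purely as assumed input data.
mu = np.linspace(-1.0, 1.0, 181)
rng = np.random.default_rng(1)
phase_mie = henyey_greenstein(mu, 0.8) * (1.0 + 0.02 * rng.normal(size=mu.size))

# Grid search for the asymmetry parameter that best matches the tabulated curve.
g_grid = np.linspace(-0.95, 0.95, 381)
errors = [np.sum((henyey_greenstein(mu, g) - phase_mie)**2) for g in g_grid]
g_fit = g_grid[int(np.argmin(errors))]
print(f"fitted asymmetry parameter: {g_fit:.3f}")   # close to 0.8
```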
Figures 3 and 4 illustrate typical input properties of the models employed
here. Figure 3 shows a model temperature-pressure profile along with iron and
silicate condensate grain sizes as computed by our cloud model. Figure 4 shows
the mean layer single scattering albedo, $\overline{\omega_{0}}$, and
scattering asymmetry parameter, $\overline{\cos\theta}$, as a function of
wavelength near the peak opacity of the cloud. Within strong molecular bands
the single scattering albedo approaches zero since gas absorption dominates
over the cloud opacity. Below the cloud base and far above the cloud both the
albedo and asymmetry parameters are essentially zero in the near infrared as
gaseous Rayleigh scattering makes little contribution to the opacity at those
wavelengths. Note that for the computed particle sizes the cloud is strongly
forward scattering at wavelengths where cloud opacity dominates molecular
absorption in agreement with a recent study by de Kok et al. (2011).
## 4 Results and Discussions
Figure 5 presents the computed thermal emission and polarization spectra of an
approximately 10 Myr old 2 Jupiter mass planet assuming rotation periods of 5
and 6 hrs. The striking dependence of polarization on the rotation rate arises
from the sensitivity of oblateness to rotation period as seen in Figure 2.
Generally speaking the degree of polarization is highest at those wavelengths
of low gaseous opacity where the cloud is visible while at other wavelengths,
inside of atomic and molecular absorption bands, flux emerges from higher in
the atmosphere and is less influenced by cloud scattering. While the degree of
polarization peaks at the shortest wavelengths shown (from the influence of
gaseous Rayleigh scattering), there is very little flux at optical
wavelengths. However in the near-infrared, where windows in the molecular
opacity allow flux to emerge from within the clouds, the computed degree of
polarization approaches 1%. In these spectral regions the planets will be
bright, the contrast with the primary star favorable, and thus the
polarization may be more easily detectable at this level. Beyond about
$2.2\,\rm\mu m$ thermal emission emerges from above the cloud tops and thus
there is no signature of the scattering and the net polarization is near zero.
This pattern of polarization is diagnostic of atmospheric clouds and is easily
distinguished from other sources of polarization, for example a
circumplanetary disk.
Figures 6 and 7 show warmer model cases for the same gravity with similar
behavior. These cases would apply to quite young planets at an age of less
than ten million years, but illustrate that the degree of polarization does
not dramatically increase at higher effective temperatures. Figure 8 shows the
variation with gravity. With a fixed rotation period of five hours, models
with $g$ of 56 and $100\,\rm m\,s^{-2}$ show very little polarization at any
wavelength. These models would correspond to approximately 4 to 6 Jupiter mass
planets at ages greater than 10 million years, perhaps typical of the planet
types that may be directly imaged. The sensitivity of polarization to gravity
seen in this figure illustrates the promise of polarization, in the right
circumstances, to provide a new constraint on exoplanet mass.
Figure 9 generalizes these trends, showing the predicted polarization in $I$
and $J$ bands as a function of the rotational period $P_{rot}$. For a fixed
surface gravity and viewing angle, $i$, the degree of polarization does not
vary substantially within the range of $\rm T_{eff}$ between 800 and 1200 K.
The polarization profiles in both bands increase with decreasing rotation
period and the polarization is generally greater in $J$ than in $I$ band. As
is the case for brown dwarfs (Sengupta & Marley, 2010), for a given rotation
period the polarization decreases with lower $i$.
All of the cases shown in Figure 9 have an oblateness less than 0.44, the
stability limit for an $n=1$ polytrope. For $g=30\,\rm m\,s^{-2}$ the
stability limit is reached at a rotation period of about 4 hours, slightly
less than the lower limit shown on the figure. Such short rotation periods may
in fact be a natural consequence of giant planet formation in a circumstellar
binary as the angular momentum of accreting gas naturally produces rapid
rotation rates (Ward & Canup, 2010).
We conclude that a self-luminous gas giant planet–even with a homogeneous
cloud distribution–will exhibit notable polarization (greater than a few
tenths of percent) in the near infrared if the planet is (1) cloudy, (2)
significantly oblate, and (3) viewed at a favorable geometry. An oblate shape
is the easiest to obtain at low masses and modest rotation rates or higher
masses and more rapid rotation rates. Higher effective temperatures, which
would produce more dust higher in the atmosphere and more polarization
(Sengupta & Marley, 2010), are generally excluded by the evolution for ages
greater than a few million years. More massive planets, which take longer to
cool, have higher gravity and thus a smaller oblateness and less polarization
(Figures 2 & 9) for a given rotation rate. Given these considerations we
believe the cases we have presented here are among the more favorable for
homogeneous cloud cover. While we have not considered every combination of
parameters, the models presented here along with perturbations of those models
we have also studied lead us to conclude that uniformly cloudy planets will
not present polarization greater than a few percent and polarization is most
likely to be found for young, low mass, rapidly rotating planets.
However inhomogeneous cloud cover, which we have not modeled, may also affect
the polarization spectra. Indeed an inhomogeneous distribution of atmospheric
dust (e.g., Jupiter-like banding) would not be unexpected. Such banding may
provide further asymmetry (Menard & Delfosse, 2004) and hence increase (or
even decrease) the net non-zero polarization. A non-uniform cloud distribution
may be the mechanism that underlies the L to T-type transition among brown
dwarfs (Ackerman & Marley, 2001; Burgasser et al., 2002; Marley et al., 2010)
and variability has been detected in some transition dwarfs (Artigau et al.,
2009; Radigan et al., 2011). Cloud patchiness is also observed in images of
thermal emission from Jupiter and Saturn taken in the M-band (five-micron)
spectral region (e.g. Westphal, 1969; Westphal et al., 1974; Orton et al.,
1996; Baines et al., 2005), so patchiness may indeed be common. Polarization
arising from patchy clouds still requires the presence of some clouds of
course, thus any polarization detection provides information on the presence
of condensates and–by extension–constrains the atmospheric temperature.
## 5 CONCLUSIONS
The next decade is expected to witness the discovery of a great many self-
luminous extrasolar giant planets (Beichman et al., 2010). The masses,
atmospheric composition and thermal structure of these planets will be
characterized by photometry and spectroscopy. For some systems, other
constraints, such as dynamical interactions with dust disks or potential
instabilities arising from mutual gravitational interactions (e.g., Fabrycky &
Murray-Clay, 2010) may also contribute. Here we demonstrate that measurable
linear polarization in $I$ or $J$ bands reveals the presence of atmospheric
condensates, thereby placing limits on atmospheric composition and
temperature. Polarization of thermal emission from a homogeneously cloudy
planet is most favored for young, low mass, and rapidly rotating planets. A
diagnostic characteristic of cloud-induced polarization is that the
polarization peaks in the same spectral bandpasses as the flux from the planet
because photons are emerging from within the cloud itself as opposed to higher
in the atmosphere (Figures 5 through 8).
Assuming that our atmospheric and condensate cloud models are reasonably
accurate, we conclude that any measured polarization greater than about 1%
likely can be attributed to inhomogeneities in the global cloud deck. While we
have not considered every possible model case, we find that our most favorable
plausible cases do not produce notably greater polarization. Other cloud
models (e.g., Helling & Woitke, 2006; Helling et al., 2008) which incorporate
more small particles high in the atmosphere may well produce a different
result, thus polarization may help to distinguish such cases. For a fixed
rotation period, the oblateness and thus the polarization increase with decreasing
surface gravity. In such situations polarization may provide a new constraint
on gravity and mass. However for gravity in excess of about 50 $\rm m\,s^{-2}$
and for $T_{\rm eff}<1200\,\rm K$ (corresponding to planet masses greater than
about $4\,\rm M_{J}$) we do not expect detectable amounts of polarization.
Warmer and higher gravity field L dwarfs can show measurable polarization
since their cloud decks are higher in the atmosphere. For directly imaged
exoplanets, however, we do not expect to encounter such high effective
temperatures. For exoplanets with plausible $T_{\rm eff}<1200\,\rm K$, Figure
9 shows that even if the rotation period is as rapid as 4.5 hrs and the
viewing angle is $90^{\circ}$, at which the polarization is maximum, the percentage
degree of polarization in thermal emission is not more than a few times
$10^{-2}$.
The aim of our study was to better understand the information conveyed by
polarization about the properties of extrasolar giant planets directly imaged
in their thermal emission. We have found that in some cases polarization can
provide additional constraints on planet mass, atmospheric structure and
cloudiness. Combined with other constraints, polarization adds to our
understanding, although there remain ambiguities. A study of the polarization
signature of partly cloudy planets would yield further insight into the value
of polarization measurements for constraining extrasolar giant planet
properties.
## 6 Acknowledgements
We thank the anonymous referee for helpful comments that improved the
manuscript. MSM recognizes the NASA Planetary Atmospheres Program for support
of this work.
## References
* Ackerman & Marley (2001) Ackerman, A. & Marley, M. S. 2001, ApJ, 556, 872.
* Artigau et al. (2009) Artigau, É., Bouchard, S., Doyon, R., & Lafrenière, D. 2009, ApJ, 701, 1534
* Baines et al. (2005) Baines, K. H., et al. 2005, Earth Moon and Planets, 96, 119
* Barman et al. (2011) Barman, T. S., Macintosh, B., Konopacky, Q. M., & Marois, C. 2011, Ap.J., 733, 65
* Barnes & Fortney (2003) Barnes, J. W. & Fortney, J. J. 2003, ApJ, 588, 545.
* Beichman et al. (2010) Beichman, C. A., et al. 2010, Proc. Astron. Soc. Pac., 122, 162
* Boccaletti et al. (2011) Boccaletti, A., et al. 2011, Experimental Astronomy, in press.
* Bowler et al. (2010) Bowler, B. P., Liu, M. C., Dupuy, T. J., & Cushing, M. C. 2010, arXiv:1008.4582
* Burgasser et al. (2002) Burgasser, A. J., Marley, M. S., Ackerman, A. S., Saumon, D., Lodders, K., Dahn, C. C., Harris, H. C., & Kirkpatrick, J. D. 2002, Ap.J.Lett., 571, L151
* Chandrasekhar (1933) Chandrasekhar, S. 1933, MNRAS, 93, 539
* Chauvin et al. (2004) Chauvin, G., Lagrange, A.-M., Dumas, C., Zuckerman, B., Mouillet, D., Song, I., Beuzit, J.-L., & Lowrance, P. 2004, A&A, 425, L29
* Cruz et al. (2009) Cruz, K. L., Kirkpatrick, J. D. & Burgasser, A. J. 2009, A.J., 137, 3345.
* Currie et al. (2011) Currie, T., et al. 2011, ApJ, 729, 128
* Cushing et al. (2008) Cushing, M. C. et al. 2008, ApJ, 678,1372.
* de Kok et al. (2011) de Kok, R., Helling, Ch., Stam, D., Woitke, P., & Witte, S. 2011, arXiv:1105.3062
* Fabrycky & Murray-Clay (2010) Fabrycky, D. C. & Murray-Clay, R. A. 2010, Ap.J., 710, 1408.
* Fortney et al. (2008) Fortney, J. J., Marley, M. S., Saumon, D., & Lodders, K. 2008, Ap.J., 683, 1104
* Freedman et al. (2008) Freedman, R. S. et al. 2008, ApJS, 174, 71.
* Helling et al. (2008) Helling, C., et al. 2008, MNRAS, 391, 1854
* Helling & Woitke (2006) Helling, C., & Woitke, P. 2006, A&Ap., 455, 325
* Helling et al. (2008) Helling, C., Dehn, M., Woitke, P., & Hauschildt, P. H. 2008, Ap.J.Let., 675, L105
* James (1964) James, R. 1964, ApJ, 140, 552
* Kalas et al. (2005) Kalas, P., Graham, J. R., & Clampin, M. 2005, Nature, 435, 1067
* Kirkpatrick et al. (2006) Kirkpatrick, J. D. et al. 2006, ApJ, 639, 1120.
* Lafrenière et al. (2010) Lafrenière, D., Jayawardhana, R., & van Kerkwijk, M. H. 2010, Ap.J., 719, 497
* Lagrange et al. (2010) Lagrange, A.-M., et al. 2010, Science, 329, 57
* Lissauer & Stevenson (2007) Lissauer, J. J., & Stevenson, D. J. 2007, Protostars and Planets V, 591
* Liu & Weng (2006) Liu,Q. & Weng, F. 2006, Applied Optics, 45, 7475.
* Marley et al. (2002) Marley, M. S. et al. 2002, ApJ, 568, 335.
* Marley et al. (2007) Marley, M. S., Fortney, J. J., Hubickyj, O., Bodenheimer, P., & Lissauer, J. J. 2007, Ap.J., 655, 541
* Marley et al. (2010) Marley, M. S., Saumon, D., & Goldblatt, C. 2010, Ap.J.Lett., 723, L117
* Marois et al. (2008) Marois, C., Macintosh, B., Barman, T., Zuckerman, B., Song, I., Patience, J., Lafreniere, D., & Doyon, R. 2008, Science, 322, 5906
* Marois et al. (2010) Marois, C., Zuckerman, B., Konopacky, Q. M., Macintosh, B., & Barman, T. 2010, Nature, 468, 1080
* McBride et al. (2011) McBride, J., Graham, J., Macintosh, B., Beckwith, S., Marois, C., Poyneer, L., Wiktorowicz, S. 2011, arXiv:1103.6085
* Menard et al. (2002) Ménard, F. et al. 2002, A&A, 396, L35.
* Menard & Delfosse (2004) Menard, F. & Delfosse, X. 2004, Semaine de l'Astrophysique Française, ed. F. Combes et al. (EDP-Sciences : Paris 2004) 305.
* Mohanty et al. (2007) Mohanty, S., Jayawardhana, R., Huélamo, N., & Mamajek, E. 2007, ApJ, 657, 1064
* Orton et al. (1996) Orton, G., et al. 1996, Science, 272, 839
* Patience et al. (2010) Patience, J., King, R. R., de Rosa, R. J., & Marois, C. 2010, A&A, 517, A76
* Radigan et al. (2011) Radigan, J., et al. 2011, ApJ in prep
* Seager, Whitney & Sasselov (2000) Seager, S., Whitney, B. A., & Sasselov, D. D. 2000, ApJ, 540, 504
* Sengupta & Maiti (2006) Sengupta, S. & Maiti, M. 2006, ApJ, 639, 1147
* Sengupta & Marley (2009) Sengupta, S. & Marley, M. S. 2009, ApJ, 707, 716
* Sengupta & Marley (2010) Sengupta, S. & Marley, M. S. 2010, ApJL, 722, L142
* Skemer et al. (2011) Skemer, A. J., Close, L. M., Szűcs, L., Apai, D., Pascucci, I., & Biller, B. A. 2011, ApJ, 732, 107
* Stam, Hovenier & Waters (2004) Stam, D. M., Hovenier, J. W., & Waters L. B. F. M. 2004, A & A, 428, 663
* Stephens et al. (2009) Stephens, D. C. et al. 2009, ApJ,702,154.
* Ward & Canup (2010) Ward, W. R., & Canup, R. M. 2010, Astron. J., 140, 1168
* Westphal (1969) Westphal, J. A. 1969, Ap.J.L., 157, L63
* Westphal et al. (1974) Westphal, J. A., Matthews, K., & Terrile, R. J. 1974, Ap.J.L., 188, L111
* White (1979) White, R. L. 1979, ApJ, 229, 954
* Zapatero Osorio et al. (2005) Zapatero Osorio, M. R. et al. 2005, ApJ, 621, 445.
Figure 1: Evolution through time in $T_{\rm eff}$ – $g$ space of non-irradiated, metallicity $[M/H]=0.0$ giant planets of various masses. Solid lines are evolution tracks at various fixed masses ranging from 1 to $10\,\rm M_{J}$. Dashed lines are isochrones for various fixed ages since an arbitrary ‘hot start’ initial model (Fortney et al., 2008).
Figure 2: Rotationally induced oblateness as a function of rotational period for 1 and $10\,\rm M_{J}$ planets with three different ages (10 - solid, 100 - dashed, and 1000 Myr - dotted). Horizontal line is the stability limit for $n=1$ polytropes assuming solid body rotation. More rapidly rotating planets would form a triaxial ellipsoidal shape and eventually bifurcate.
Figure 3: Atmospheric temperature ($T$, lower scale) and cloud particle size, $r_{\rm eff}$ (upper scale), as a function of pressure, $P$, for one adopted model atmosphere with $T_{\rm eff}=1000\,\rm K$, $g=30\,\rm m\,s^{-2}$, and $f_{\rm sed}=2$. From Figure 2 this corresponds to approximately a $2.5\,\rm M_{J}$ planet at an age of a few million years. Particle sizes are shown for iron and silicate (forsterite) grains, the two most significant contributors to cloud opacity. Shown is the “effective radius” ($r_{\rm eff}$) of the particles, a mean size precisely defined in Ackerman & Marley (2001).
Figure 4: Scattering properties as a function of wavelength, $\lambda$, of a model layer near the 1 bar pressure level for the model atmosphere shown in Figure 3. Shown are the layer single scattering albedo, $\overline{\omega_{0}}$ (solid), and the layer asymmetry parameter, $\overline{\cos\theta}$ (dashed). In strong molecular bands gaseous absorption dominates over scattering, thus lowering the mean layer albedo.
Figure 5: The emergent flux (A) and the disk-integrated degree of linear polarization $P(\%)$ (B) of non-irradiated exoplanets at different wavelengths at viewing angle $i=90^{\circ}$ (equatorial view). In (B), the top solid line represents the polarization profile for a rotational period $P_{\rm rot}=5\,\rm hr$ while the bottom solid line represents that for 6 hr. Note that while the polarization can be high at blue wavelengths there is very little flux there.
Figure 6: Same as Figure 5 but with $T_{\rm eff}=1000\,\rm K$. This is the result for the model characterized in Figures 3 and 4.
Figure 7: Same as Figure 5 but with $T_{\rm eff}=1200\,\rm K$.
Figure 8: The emergent flux (A) and the disk-integrated degree of linear polarization (B) of non-irradiated exoplanets for a fixed $T_{\rm eff}=1000\,\rm K$ but for different surface gravities. In (B), the solid lines from top to bottom represent the polarization profiles for surface gravity $g=30$, 56 and $100\,\rm m\,s^{-2}$ respectively. The difference in the emergent flux (A) for a fixed effective temperature but surface gravity varying over this range is not noticeable on this scale.
Figure 9: Scattering polarization profiles of non-irradiated exoplanets with different rotational periods. The solid lines represent the percentage degree of linear polarization in $J$-band while the broken lines represent that in $I$-band. A variety of model cases are shown, all assuming $f_{\rm sed}=2$. Cases are shown for viewing angle $i=90^{\circ}$ (equator view) and $i=60^{\circ}$ and $45^{\circ}$.
|
arxiv-papers
| 2011-06-02T19:40:01 |
2024-09-04T02:49:19.316372
|
{
"license": "Public Domain",
"authors": "Mark S. Marley and Sujan Sengupta",
"submitter": "Mark S. Marley",
"url": "https://arxiv.org/abs/1106.0492"
}
|
1106.0494
|
Universal ultracold collision rates for polar molecules of two alkali-metal
atoms
Paul S. Julienne,∗a Thomas M. Hanna,b and Zbigniew Idziaszekc
Submitted to Physical Chemistry Chemical Physics, themed issue on cold
molecules, 2011
Universal collision rate constants are calculated for ultracold collisions of
two like bosonic or fermionic heteronuclear alkali-metal dimers involving the
species Li, Na, K, Rb, or Cs. Universal collisions are those for which the
short range probability of a reactive or quenching collision is unity such
that a collision removes a pair of molecules from the sample. In this case,
the collision rates are determined by universal quantum dynamics at very long
range compared to the chemical bond length. We calculate the universal rate
constants for reaction of the reactive dimers in their ground vibrational
state $v=0$ and for vibrational quenching of non-reactive dimers with $v\geq
1$. Using the known dipole moments and estimated van der Waals coefficients of
each species, we calculate electric field dependent loss rate constants for
collisions of molecules tightly confined to quasi-two-dimensional geometry by
a one-dimensional optical lattice. A simple scaling relation of the quasi-two-
dimensional loss rate constants with dipole strength, trap frequency and
collision energy is given for like bosons or like fermions. It should be
possible to stabilize ultracold dimers of any of these species against
destructive collisions by confining them in a lattice and orienting them with
electric field of less than 20 kV/cm.
## 1 Introduction
a Joint Quantum Institute, NIST and the University of Maryland, Gaithersburg, Maryland 20899-8423 USA. Tel: +1-301-975-2596; E-mail: psj@umd.edu
b Joint Quantum Institute, NIST and the University of Maryland, Gaithersburg, Maryland 20899-8423 USA.
c Faculty of Physics, University of Warsaw, Hoża 69, 00-681 Warsaw, Poland.
Quite spectacular success has been achieved in recent years in working with
gases or lattices of ultracold atoms cooled to temperatures on the order of 1
$\mu$K or less. This has permitted the achievement of quantum degeneracy with
either bosonic or fermionic isotopes of various atomic species where the
thermal de Broglie wavelength becomes of the same order as or larger than the
mean distance between atoms. Quite precise control of the various properties
of such systems is possible through state selection, trap design, and magnetic
tuning of scattering resonances. A number of reviews or books covering this
work have appeared, including topics such as Bose-Einstein condensation, 1, 2
the quantum properties of fermionic gases, 3, 4 and magnetically tunable
Feshbach resonance control of collision properties. 5, 6, 7, 8, 9 In addition,
much work has been directed towards creating lattice structures of such cold
atoms using standing wave light patterns to make optical lattices of various
geometric configurations. 10, 11, 12, 13
Recent work has now succeeded in making ultracold molecules in their stable
vibrational and rotational ground electronic state with temperature $T<1$
$\mu$K by using magnetic and electromagnetic field control of ensembles of
ultracold atoms. This has been achieved for 40K87Rb fermions 14 and 133Cs2
bosons, 15 and for 87Rb2 bosons in the collisionally unstable $v=0$ level of
the lowest ${}^{3}\Sigma_{u}^{+}$ state. 16 These successful experiments used
a two-step process to make the molecules. First, magneto-association of two
very cold atoms makes a very weakly bound “Feshbach molecule”, 8 which is then
converted by a coherent STIRAP process to make a ground state molecule in its
$v=0$, $J=0$ ground state in a single state of nuclear spin, where $v$ and $J$
are the respective quantum numbers for vibration and rotation. This builds on
pioneering earlier work in much more dilute cold atomic gases around 100
$\mu$K to make ground state RbCs 17, Cs2, 18 or LiCs 19 molecules. Cold
molecules open up many new opportunities for study, 20, 21, 22 since they have
more complex internal structure and different long range potentials. If
dipolar, their properties and collisions can be controlled by an electric
field in addition to magnetic or electro-magnetic fields. 23, 24
Having ultracold molecules also introduces the possibility of chemistry and
reactions at ultralow energies, with precise control of the initial internal
states and translational energy of the reactants. 25, 22 Having such
collisions is good if one wishes to study such chemistry following the precise
preparation of the initial states. 26 Collisions are bad, however, if one
wishes to keep the molecules for simulating condensed matter systems or doing
complex control like quantum computing, since reactive collisions can rapidly
remove molecules from the gas in question. Dipolar molecules offer some
special features such as the possibility of orienting them by an electric
field. Reaction rates can be strongly suppressed if the molecules are oriented
to have repulsive dipolar interactions while confined to move in a two-
dimensional (2D) plane by an optical lattice wave guide. Such suppression of
reaction rates in this quasi-2D geometry has been both predicted 27, 28, 29
and demonstrated for 40K87Rb. 30 Controlling internal spin can also be used to
decrease reaction rates in the case of fermionic molecules like 40K87Rb, since
identical fermions can only collide via odd partial waves that have
centrifugal barriers to reaction. 31, 32, 33
Here we examine some basic aspects of ultracold chemistry of highly reactive
molecules. We restrict our considerations to the special case that the two
interacting molecules have a unit probability of a chemical reaction or an
inelastic quenching collision if they approach one another within typical
chemical interaction distances, on the order of 1 nm or less. Thus, we are
considering the quantum threshold limit to the standard Langevin model. 32, 34
In this highly reactive limit the long range potential between the molecules
controls how they get together subject to experimental control on an ultralow
energy scale. This paper will examine various universal aspects of reaction
rates that are solely governed by such long range interactions of two reactive
molecules. 34, 28 Such interactions are sensitive to the Bose/Fermi character
of the molecules, to the traps used to confine them, and to electric,
magnetic, and electromagnetic fields in the case of polar molecules. We will
consider, in particular, the ten different molecular dimers comprised of two
alkali metal atoms from the group Li, Na, K, Rb, and Cs.
The theory will be confined to the lowest temperatures where only the lowest
partial waves allowed by symmetry contribute. We will apply both analytic and
numerical approaches to molecules colliding in the 3D geometry of free space
or in the quasi-2D geometry of a confining optical lattice. The basic theory
related to ultracold molecules, their formation, states, collisions, dipolar
properties, and response to external fields is discussed in detail in the
introductory book by Krems et al. 35 and the Faraday Discussions 142. 36 We
will review this theory as needed here and apply it to dipolar mixed alkali
dimer molecules.
## 2 Collisions in free space
### 2.1 Ultracold polar molecules
While general cooling methods such as buffer gas cooling and Stark
deceleration to load a molecule trap are being developed for a variety of
molecules, 21, 22, 35 such schemes so far have been restricted to temperatures
significantly above 1 mK and very low phase space density, many orders of
magnitude removed from quantum degeneracy. To date, high phase space density
has been restricted to molecules that can be made directly by associating two
atoms that are at or near the quantum degenerate regime of temperature and
density. While initial proposals to do this involved photoassociation of the
atoms, 37, 38, 39 it has turned out that magnetoassociation 6, 8 provides an
efficient and effective tool to make near threshold bound states known as
Feshbach molecules, 8, 9 with binding energies $E/h$ less than 1 MHz. These
can then be optically converted via a coherent Raman process to much more
deeply bound vibrational levels, 40, 41 including the ground state. 14, 15
Typical temperatures are well below 1 $\mu$K and densities can be on the order
of $10^{12}$ molecules/cm3 or even larger. Since this can be done by coherent
quantum dynamics that causes no heating, the molecules have the same
temperature as the initial atoms. It is even possible to make an optical
lattice array of many single trapping cells, 41, 15 in which exactly two atoms
are present. Upon associating the atoms, one then has an array of molecules,
each of which is trapped in its own lattice cell, with confinement possible to
tens of nm and intercell spacing of hundreds of nm.
In principle any diatomic molecule could be made that is comprised of atoms
that can be cooled and trapped, including species such as Li, Na, K, Rb, Cs,
Ca, Sr, Yb, or Cr. In practice so far, ultracold molecule formation has been
concentrated on alkali-metal atom dimers. Among the alkali-metal species Li,
Na, K, Rb, and Cs, it is known 42 that five mixed dimers have exoergic
reactive collision channels when in their vibrational and rotational ground
state, $v=0$, $J=0$, namely, LiNa, LiK, LiRb, LiCs, and KRb. The other five,
NaK, NaRb, NaCs, KCs, and RbCs, have no reactive channels for $v=0$, $J=0$.
A molecule in a pure rotational eigenstate has no dipole moment. Calculating
the energy of a molecular dipole in an electric field $\bf{F}$ is explained in
Chapter 2 of the book by Krems et al. 35. We apply this method to the case of
a ${}^{1}\Sigma^{+}$ symmetric top rotor in vibrational level $v$ in its
ground rotational state $J=M=0$, where $M$ is the projection of total angular
momentum $J$ along the direction of $\bf{F}$. Upon expanding the wave function
in symmetric top basis states $|JM\Lambda v\rangle=|J00v\rangle$, where the
body-frame projection $\Lambda=0$, the energy $E_{g}(F)$ of the lowest energy
ground state $|g(F)\rangle$ of the manifold is found by diagonalizing the
Hamiltonian matrix $H^{\mathrm{mol}}$ with matrix elements:
$H_{JJ}^{\mathrm{mol}}=B_{v}J(J+1)\,,$ (1)
$H_{J,J+1}^{\mathrm{mol}}=H_{J+1,J}^{\mathrm{mol}}=\frac{J+1}{\sqrt{(2J+1)(2J+3)}}\,Fd_{m}\,,$ (2)
where $B_{v}=\hbar^{2}\langle 000v|1/r^{2}|000v\rangle/M$ is the rotational
constant for the molecule with mass $M$, $r$ is the interatomic separation,
and $d_{m}$ is the body-frame permanent molecular dipole moment. The energy
$E_{g}(F)$ approaches the energy of the $v,J=0$ molecular level as $F\to 0$.
The field-dependent dipole moment for the field-dressed ground state is
$d(F)=-\partial E_{g}(F)/\partial F$, which approaches $d(F)\to
d_{m}\,(Fd_{m})/(3B_{v})$ as $F\to 0$.
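To illustrate this construction, the following minimal Python sketch (an illustrative aid, not part of the original analysis; the basis cutoff $J_{\mathrm{max}}=40$ and the finite-difference step are arbitrary choices) builds the Hamiltonian matrix of Eqs. (1)-(2) in the dimensionless variable $x=Fd_{m}/B_{v}$, diagonalizes it, and obtains $d(F)/d_{m}$ by numerical differentiation of the lowest eigenvalue. The weak-field slope of $1/3$ quoted above, and the saturation of $d(F)$ toward $d_{m}$ at strong fields, both emerge directly.

```python
import numpy as np

def stark_matrix(x, jmax=40):
    """Hamiltonian of Eqs. (1)-(2) in units of B_v, with x = F*d_m/B_v."""
    J = np.arange(jmax + 1)
    H = np.diag(J * (J + 1.0))                      # diagonal: B_v J(J+1)
    off = (J[:-1] + 1) / np.sqrt((2*J[:-1] + 1.0) * (2*J[:-1] + 3.0)) * x
    return H + np.diag(off, 1) + np.diag(off, -1)   # Stark coupling J <-> J+1

def ground_energy(x):
    """E_g(F)/B_v: lowest eigenvalue of the field-dressed manifold."""
    return np.linalg.eigvalsh(stark_matrix(x))[0]

def dipole(x, dx=1e-3):
    """d(F)/d_m = -dE_g/dF in reduced units (central difference)."""
    return -(ground_energy(x + dx) - ground_energy(x - dx)) / (2 * dx)

for x in [0.1, 1.0, 3.0, 10.0, 30.0]:
    print(f"x = F d_m/B_v = {x:5.1f}   d(F)/d_m = {dipole(x):.3f}   weak-field limit x/3 = {x/3:.3f}")
# Small x reproduces d(F) -> d_m (F d_m)/(3 B_v); at large x, d(F)/d_m saturates toward 1.
```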
In order to make estimates for the dipolar collision properties of the ground
and lowest vibrational levels of the ten mixed-alkali-dimer species, we will
take the dipole moments $d_{m}=d_{e}$ and rotational constants $B_{e}$ for the
ground ${}^{1}\Sigma^{+}$ state evaluated at the equilibrium internuclear
distance $R_{e}$ from the calculations of Aymar and Dulieu. 43 Figure 1 shows
the field-dependent dipole moments $d(F)$ as a function of $F$ for the
respective reactive and non-reactive sets of species.
A key aspect to note from Fig. 1 is that for all species except LiNa and KRb,
dipole moments on the order of 0.4 au (1 D $=$ $3.336\times 10^{-30}$ Coulomb
meter) can be achieved at relatively modest electric fields of 10 kV/cm or
less. As $F$ increases, $d(F)$ approaches and eventually reaches the magnitude
of the molecular dipole $d_{m}$. This is not yet achieved at the 20 kV/cm
maximum in Fig. 1. The KRb experiments on collisions in an electric field were
carried out with $F$ up to 5 kV/cm and were able to reach dipole moments only
on the order of 0.08 au (0.2 D). 44, 30 Except for LiNa, which has an even
smaller dipole moment than KRb, all of the other species should be capable of
being used with dipole moments on the order of 0.4 au (1D) or more to enable
quite interesting experiments to control threshold collision dynamics.
Fig. 1: Dipole moment $d(F)/D$ versus $F$ for the five reactive mixed alkali-
metal-species (upper panel) and for the five non-reactive mixed alkali-metal-
species (lower panel), where $D=0.3934$ au $=$ $3.336\times 10^{-30}$ Cm.
### 2.2 Characteristic energy and length scales
In order to get a good understanding of the properties of ultracold molecule
interactions, it is important to understand the various length scales $L$ and
corresponding energy scales $\hbar^{2}/(2\mu L^{2})$ associated with ultracold
phenomena. These are the de Broglie wavelength $2\pi/k$ and the lengths
$R_{e}$, $\bar{a}$, $a_{h}$, and $a_{d}$ associated with the respective
chemical, van der Waals, harmonic, and dipolar interactions. For a relative
collision kinetic energy of $E=\hbar^{2}k^{2}/(2\mu)$, where $\mu=M/2$ is the
reduced mass of the molecule pair of mass $M$, we define a characteristic
energy-dependent length $a_{k}=1/k$. The characteristic van der Waals length
is $\bar{a}=[2\pi/\Gamma(1/4)^{2}]\,(2\mu C_{6}/\hbar^{2})^{1/4}$, where
$-C_{6}/R^{6}$ is the van der Waals dispersion potential, $R$ is the
intermolecular separation, and $2\pi/\Gamma(1/4)^{2}\approx 0.47799$. 39, 9
The harmonic length for trapping frequency $\Omega$ is the characteristic
length of ground state motion $a_{h}=\sqrt{\hbar/(\mu\Omega)}$. The potential
energy of interaction between two molecular dipoles separated by distance $R$
is $d(F)^{2}(1-3\cos^{2}\theta)/R^{3}$, where $\theta$ is the angle between
dipole orientation and the intermolecular axis. The dipole length
$a_{d}(F)=\mu d(F)^{2}/\hbar^{2}$ is defined to be the length where
$\hbar^{2}/(\mu a_{d}^{2})=d(F)^{2}/a_{d}^{3}$.
In order to estimate $\bar{a}$, we need the $C_{6}$ value for the various
species. For a polar molecule there are two contributions to the effective
$C_{6}$ interaction. One is the electronic contribution $C_{6}^{\mathrm{el}}$,
due to the second-order response through excited electronic states. A simple
Unsold approximation gives $C_{6}^{\mathrm{el}}=(3/4)U\alpha^{2}$, where
$U\approx 0.055(7)$ atomic units is a mean excitation energy and $\alpha$ is
the dipole polarizability. 45, 46 This gives a magnitude on the order of
$C_{6}^{\mathrm{el}}\approx 10^{4}$ au (1 au $=E_{h}a_{0}^{6}=9.573\times
10^{-26}$ J nm6) for the mixed alkali dimers. For most species, the much
larger and dominant contribution to $C_{6}$ is the rotational dipole part with
a magnitude $C_{6}^{\mathrm{rot}}=d_{m}^{4}/(6B_{v})$. 23 Thus, for estimation
purposes for the vibrational ground state, we use
$C_{6}\approx(3/4)U\alpha^{2}+d_{e}^{4}/(6B_{e})$. This approximation can be
compared to the calculations of Kotochigova 47 for KRb and RbCs, where our
estimates give respectively $10500+2800$ au and $15000+113000$ au for the two
contributing terms, giving $C_{6}$ values 10 and 20 per cent less than the
corresponding ab initio calculations. Since the length $\bar{a}$ is not very
sensitive to $C_{6}$, varying only as $C_{6}^{1/4}$, this approximation makes
small errors on the order of 5 per cent or less in our qualitative estimates
of $\bar{a}$. Consequently our estimates for universal van der Waals rate
constants are likely to be in error by 15 per cent or less for all species
except perhaps LiNa, which has the smallest dipole moment.
Fig. 2: Characteristic length scales $R_{e}$ (black dots), $\bar{a}$
(squares), $a_{h}$ at $\Omega=2\pi(30\,\mathrm{kHz})$ (blue dots), thermal
expectation value of $1/k$ at 200 nK (crosses), and $a_{d}$ (stars for $d=0.4$
au (1 D) and diamonds for $d=d_{m}$) for the ten mixed alkali-metal species.
Figure 2 illustrates the various characteristic lengths for the ten mixed
alkali dimers. The shortest distance is the chemical bond length $R_{e}$,
which is much less than 1 nm, 43 with a corresponding energy scale $E/h$ on
the order of the chemical bond energy of 100 THz. The next largest energy
scale, on the order of MHz, is that of the van der Waals length $\bar{a}$,
which ranges between 6 nm for KRb and 30 nm for LiCs. The harmonic confinement
by an optical lattice typically has $\Omega/(2\pi)$ on the order of tens of
kHz. The Figure shows that the confinement $a_{h}$ is on the order of 100 nm
for $\Omega=2\pi(30\,\mathrm{kHz})$. Figure 2 also shows the thermal average
of $1/k$ is on the order of a few hundred nm and is larger than either
$\bar{a}$ or $a_{h}$ when collision energy $T$ is 200 nK, or $k_{B}T/h=4$ kHz,
a typical value for ultracold systems. Note that the product
$ka_{h}=(2E/(\hbar\Omega))^{1/2}$ is independent of the mass of the species
for a given trap frequency $\Omega$. The trap-induced confinement $a_{h}$ is
less than $1/k$ when $E<\frac{1}{2}\hbar\Omega$, or $E/k_{B}<$ 700 nK, for an
$\Omega=2\pi(30\,\mathrm{kHz})$ trap.
The dipole length depends on the electric field $F$. Figure 2 shows $a_{d}(F)$
for two values of the dipole strength $d(F)$, namely, for 0.4 au (1 D) if this
magnitude is possible, and for its maximum allowed value $d_{m}$. These dipole
length scales, respectively on the order of 1000 nm and 10000 nm, are the
largest length scales in the problem, with the exception of LiNa, which has
the smallest dipole moment. At $d(F)=0.4$ au, the corresponding energy scales
are on the order of 100 Hz, but over 1 kHz for LiNa, LiK and NaK.
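To make these magnitudes concrete, the short Python sketch below evaluates the length scales of this subsection in SI units. It is illustrative only: the 40K87Rb mass and the $C_{6}\approx 13300$ au estimate quoted in the previous subsection are taken as assumed inputs, and the thermal average of $1/k$ is replaced by its value at $E=k_{B}T$.

```python
import numpy as np

# Physical constants (SI)
hbar, kB = 1.054571817e-34, 1.380649e-23
u, eps0 = 1.66053906660e-27, 8.8541878128e-12
E_h, a0, debye = 4.3597447222e-18, 5.29177210903e-11, 3.33564e-30

# Assumed inputs: a 40K87Rb pair with C6 ~ 10500 + 2800 au, as quoted above
M = 127.0 * u                  # molecular mass (approximate)
mu = M / 2.0                   # reduced mass of the colliding pair
C6 = 13300.0 * E_h * a0**6     # van der Waals coefficient in SI (J m^6)

# van der Waals length: abar = [2 pi / Gamma(1/4)^2] (2 mu C6 / hbar^2)^(1/4)
abar = (2*np.pi / 3.6256**2) * (2*mu*C6/hbar**2)**0.25

# Harmonic confinement length for a 2 pi x 30 kHz trap
Omega = 2*np.pi*30e3
a_h = np.sqrt(hbar/(mu*Omega))

# 1/k at a collision energy E/kB = 200 nK
k = np.sqrt(2*mu*kB*200e-9)/hbar

# Dipole length for d = 0.4 au (about 1 D); the SI form carries a 1/(4 pi eps0)
d = 0.4 * 2.5417 * debye
a_d = mu*d**2/(4*np.pi*eps0*hbar**2)

print(f"abar = {abar*1e9:6.1f} nm    (about 6 nm for KRb)")
print(f"a_h  = {a_h*1e9:6.1f} nm    (order 100 nm at 30 kHz)")
print(f"1/k  = {1e9/k:6.1f} nm    at 200 nK")
print(f"a_d  = {a_d*1e9:6.1f} nm    (order 1000 nm at 1 D)")
print(f"a_h < 1/k below E/kB = {0.5*hbar*Omega/kB*1e9:4.0f} nK (about 700 nK)")
```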
### 2.3 Universal collision rate constants
The basic theory of ultracold collisions is covered in Chapters 1 and 3 of
Krems et al. 35 and by Hutson 7, 48 and Chin et al. 9. We will summarize here
the essential points using the formulation with quantum defect theory (QDT) by
Idziaszek and Julienne. 34, 33 For the moment we will ignore the role of
internal spin structure in the ground molecular state, 49, 26 since we are
considering highly reactive collisions where this seems not to be relevant. 31
The essence of the QDT approach is the separation of energy and length scales,
as seen in Fig. 2. The overall effect of the short range “chemistry” zone with
$R\ll\bar{a}$ is summarized in the outer van der Waals zone by two
dimensionless QDT parameters $s$ and $0\leq y\leq 1$, which respectively
represent a short range phase and the chemical reactivity in the entrance
channel $j$; the scattering length of the entrance channel is parameterized by
$s$, which can take on any value, and $y=1$ represents unit probability of
short range loss from the entrance channel, whereas $y=0$ means no loss. In
the ultracold domain, we only need to consider the lowest partial wave $j$ to
index the channel, where $j=0$ for $s$-waves for like bosons or for unlike
bosons or fermions, and $j=1$ for $p$-waves for like fermions or for unlike
bosons or fermions.
The case of $y=1$ corresponds to a special “universal” case, where there is
unit probability of short range loss. In this case, the elastic and inelastic
or reactive collision rates depend only on the quantum scattering by the long
range potential at distances $R\gtrsim\bar{a}$. Any incoming scattering flux
that penetrates inside $\bar{a}$ experiences no reflection back into the
entrance channel since it is lost to reactive or inelastic channels. No
scattering resonances can exist in this case. The rate constant for loss from
the entrance channel $K_{j}^{\mathrm{ls}}(E)$ for a relative collision kinetic
energy $E$ for a van der Waals potential takes on the following very simple
universal form, independent of the details of the short range potential or
dynamics, at low collision energy where $k\bar{a}\ll 1$,
$K_{0}^{\mathrm{ls}}=g\frac{4\pi\hbar}{\mu}\bar{a}\,\,\,\mathrm{and}\,\,\,K_{1}^{\mathrm{ls}}(E)=3g\frac{4\pi\hbar}{\mu}\,(k\bar{a})^{2}\bar{a}_{1}\,,$
(3)
where the identical particle factor $g=2$ if the collision partners are
identical bosons or identical fermions, and
$\bar{a}_{1}=\bar{a}\Gamma(\frac{1}{4})^{6}/(144\pi^{2}\Gamma(\frac{3}{4})^{2})\approx
1.064\bar{a}$. 34 If the colliding particles are not identical, then the rate
constant is $K^{\mathrm{ls}}=K_{0}^{\mathrm{ls}}+K_{1}^{\mathrm{ls}}(E)$ with
$g=1$, and the second term becomes much smaller than the first as $k\to 0$.
According to the threshold law for a van der Waals potential, the thermal
average is independent of temperature $T$ for the $s$-wave but varies linearly
with $T$ for the $p$-wave,
$K_{1}^{\mathrm{ls}}(T)=1513\,\bar{a}^{3}\,k_{B}T/h\,,$ (4)
where the factor for identical fermions comes from
$\Gamma(\frac{1}{4})^{6}/\Gamma(\frac{3}{4})^{2}\approx 1513$. The density $n$
of a uniform gas of identical bosons or fermions varies as
$\dot{n}=-K_{j}^{\mathrm{ls}}(T)n^{2}$. In the case of a two-species gas, the
two densities $n_{1}$ and $n_{2}$ vary as
$\dot{n}_{1}=\dot{n}_{2}=-K^{\mathrm{ls}}n_{1}n_{2}$.
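A numerical sketch of Eqs. (3)-(5), again for illustration only, is given below; the KRb-like reduced mass and $\bar{a}\approx 6$ nm are assumed inputs, and $\langle 1/k\rangle_{T}$ is crudely replaced by $1/k$ at $E=k_{B}T$. It reproduces the $\sim 10^{-10}$ cm${}^{3}/$s bosonic and $\sim 10^{-12}$ cm${}^{3}/$s fermionic scales discussed below, along with the corresponding lifetimes at a density of $10^{12}$ cm${}^{-3}$.

```python
import numpy as np
from scipy.special import gamma

hbar, kB, h, u = 1.054571817e-34, 1.380649e-23, 6.62607015e-34, 1.66053906660e-27

# Assumed inputs: KRb-like pair, abar ~ 6 nm (Fig. 2), T = 200 nK
mu = 0.5 * 127.0 * u
abar = 6.0e-9
T = 200e-9

# Eq. (3): s-wave universal loss rate for identical bosons (g = 2)
K0 = 2 * (4*np.pi*hbar/mu) * abar
# Eq. (4): thermally averaged p-wave rate for identical fermions
K1 = gamma(0.25)**6/gamma(0.75)**2 * abar**3 * kB*T/h
# Eq. (5): s-wave unitarity bound, with <1/k>_T crudely replaced by 1/k at E = kB*T
k = np.sqrt(2*mu*kB*T)/hbar
K0u = 2 * (np.pi*hbar/mu) / k

to_cm3 = 1e6  # m^3/s -> cm^3/s
print(f"K0  (identical bosons)    = {K0*to_cm3:.2e} cm^3/s")
print(f"K1  (identical fermions)  = {K1*to_cm3:.2e} cm^3/s at 200 nK")
print(f"K0u (rough unitarity cap) = {K0u*to_cm3:.2e} cm^3/s")

# Lifetime tau = 1/(K n) at n = 1e12 cm^-3; the density decays as n(t) = n0/(1 + K n0 t)
n = 1e12 * 1e6  # cm^-3 -> m^-3
print(f"tau (bosons)   = {1e3/(K0*n):.1f} ms")
print(f"tau (fermions) = {1/(K1*n):.2f} s")
```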
Fig. 3: Universal rate constants for like bosons, $K_{0}^{\mathrm{ls}}$ (red
diamonds), like fermions $K_{1}^{\mathrm{ls}}$ (blue dots), and the unitarity
limit, $K_{ju}^{\mathrm{ls}}$ (green star for $j=0$, green plus for $j=1$),
for van der Waals collisions at $T=200$ nK for the ten mixed alkali-metal
species. These are expected to give the correct order of magnitude of
$K_{j}^{\mathrm{ls}}$ for reaction of $v=0,J=0$ for the five reactive species
LiNa, LiK, LiRb, LiCs, KRb and for vibrational quenching of low $v\geq 1$
levels for the five species that are not reactive for $v=0$, NaK, NaRb, NaCs,
KCs, and RbCs. As $T\to 0$, $K_{1}^{\mathrm{ls}}(T)$ varies linearly with $T$,
$K_{ju}^{\mathrm{ls}}(T)$ varies as $1/\sqrt{T}$, and $K_{0}^{\mathrm{ls}}$ is
independent of $T$. As $T$ increases the universal rate limits only apply as
long as $K_{j}^{\mathrm{ls}}(T)<K_{ju}^{\mathrm{ls}}(T)$.
The universal rate will only apply to species with $y=1$ and thus to all five
of the highly reactive species in the upper panel of Fig. 1. This has been
demonstrated for KRb for $T<1$ $\mu$K. 31 However, there is good evidence that
excited vibrational levels of alkali dimer molecules collisionally quench to
lower vibrational levels with near-universal rate constants on the order of
$10^{-10}$ cm${}^{3}/$s. This is suggested by theoretical calculations 50, 51
on collisions of alkali-metal atoms with alkali-metal dimers, and by
experimental measurements on Cs with Cs2 52, 53, Cs and Rb with RbCs 54, and
Cs with LiCs. 55 The experiments are at higher $T$ where more than the lowest
partial wave may contribute. The RbCs work 54 included a specific theoretical
calculation of the $T$-dependent universal rate constants summed over partial
waves that agreed with the measured quenching rate constants. Therefore, we
will calculate the universal rate constants for the five non-reactive species
in the lower panel of Fig. 1 and assume that they give the order of magnitude
of the vibrational quenching rate constant in the $T\to 0$ limit for alkali-
metal dimer molecules in states with $v\geq 1$ due to a collision with an
alkali-metal atom or another dimer molecule. Future experiments can check
whether this assumption gives good approximate magnitudes for the actual
quenching rate constants.
Figure 3 shows the universal rate constants $K_{0}^{\mathrm{ls}}$ and
$K_{1}^{\mathrm{ls}}(T=200\mathrm{nK})$ for identical bosons and fermions
respectively, as well as the $s$-wave unitarity limit
$K_{0u}(T)=g(\pi\hbar/\mu)\langle 1/k\rangle_{T}\,,$ (5)
where $\langle 1/k\rangle_{T}$ represents a thermal average of $1/k$. The
unitarity limit $K_{ju}(T)$ gives the upper bound on the rate constant.
Consequently the universal rate constants only apply if they are smaller than
this limit, requiring $k\bar{a}\lesssim 1/4$ for $j=0$ and
$k^{3}\bar{a}^{3}\lesssim 1/4$ for $j=1$. The corresponding $p$-wave unitarity
limit $K_{1u}(T)$ for like fermions is 3 times larger than $K_{0u}(T)$ for
like bosons.
All five alkali-metal species have bosonic isotopes, but only Li and K have
stable fermionic isotopes. Consequently stable NaRb, NaCs, and RbCs fermions
do not exist, but 22Na and 134Cs have half-lives longer than 2 years. The
fermionic molecules tend to have rate constants much less than the unitarity
limit at this $T$. The bosonic molecules are closer to their unitarity limit
for 200 nK, especially those with the highest dipole moments.
Since the decay rate per molecule for a gas of density $n$ is
$K_{j}^{\mathrm{ls}}n$, the lifetime of the molecule is
$\tau=(K_{j}^{\mathrm{ls}}n)^{-1}$. Thus, taking $n=10^{12}$ cm-3 as a typical
ultracold gas density, one gets lifetimes of 1 s and 1 ms for respective rate
constants of $10^{-12}$ cm${}^{3}/$s and $10^{-9}$ cm${}^{3}/$s. Thus reactive
bosons and unlike fermions will have typical lifetimes of a few ms and the
less reactive identical fermions will have typical lifetimes in the 10 ms to
100 ms range for $T$ around 200 nK, depending on species. Two exceptions are
KRb and LiNa fermions, which have relatively small dipole moments. A lifetime
on the order of 1 s has been achieved for like KRb fermions in the 200 nK
range. 31 The identical fermion lifetime will increase as $1/T$ as $T$
decreases.
### 2.4 Effect of an electric field
When an electric field is turned on, the reaction rate constant for like
fermions increases towards unitarity with increasing field strength. This has
been explained quite simply by Quéméner and Bohn, 32, 31 who adapted a
Langevin model to quantum threshold conditions. This can be done more
rigorously through a quantum defect model of the threshold barrier penetration
probability. 33 Detailed analysis of free space collisions of two $J=0$
${}^{1}\Sigma$ dipolar molecules have been treated quite well in these
references and others. 56, 57, 58, 59 Here, we discuss approximations suitable
for universal ultracold collisions of such dipolar molecules as the electric
field $F$ is turned on. As collision energy increases above the threshold
regime, universal aspects of dipolar collisions also appear. 59 Since more
than one partial wave can contribute to the collision rate as $F$ increases,
we include a sum over all partial waves. This is important for ultracold
elastic collisions, although ultracold reaction or loss collision rates tend
to be dominated by the contribution from the lowest partial wave in our range
of electric field strengths.
Since scattering flux is incoming only inside the van der Waals radius
$\bar{a}$ for such collisions and there are no scattering resonances, the
details of the anisotropic potential for $R\ll\bar{a}$ are not important. The
collision rates are determined by quantum scattering by the anisotropic long-
range potential, which needs to be fully treated in a numerical approach that
couples different $|JM0v\rangle$ rotational states through the electric field
and the anisotropic potential. For this purpose we use a coupled channels
expansion in a $|JM0v\rangle$ basis. For the present we consider only
collisions of the lowest Stark level that correlates with the $J=M=0$
molecular state for $F=0$. When $R$ is very large, the Stark shift in energy
levels is very small compared to the spacing $2B_{v}J$ to the $J=1$ rotational
level. The dominant contributions to the potential are the direct dipole-
dipole interaction, varying as $R^{-3}$, and an isotropic second-order
dispersion interaction given by a van der Waals constant $C_{6}(F)$ that
depends on $F$. When $R$ becomes small enough, at distances $R\ll\bar{a}$, the
strong dipole-dipole interaction becomes larger than $2B_{v}J$, and a
perturbation treatment of the dispersion interaction breaks down. Effectively,
the two interacting dipoles become more strongly coupled to one another than
to the imposed electric field $F$. A coupled channels expansion is neded to
account for this changing coupling during the course of the collision.
The interaction part of the Hamiltonian for the coupled channels expansion
reads
$H_{\mathrm{int}}=\sum_{j=1}^{2}H_{j}^{\mathrm{mol}}+V_{\mathrm{dd}}(\mathbf{R})+V_{\mathrm{el}}(\mathbf{R}),$
(6)
where
$H_{j}^{\mathrm{mol}}=B_{\nu}\mathbf{J}_{j}^{2}+\mathbf{F}\cdot\mathbf{d}_{j}$
is the Hamiltonian of a single molecule defined in Sec. 2.1, with
$\mathbf{J}_{j}$ and $\mathbf{d}_{j}$ denoting the angular momentum and the
dipole moment operators of molecule $j=1,2$ respectively. The dipole-dipole
interaction
$V_{\mathrm{dd}}(\mathbf{R})=\frac{\mathbf{d}_{1}\cdot\mathbf{d}_{2}-3(\mathbf{d}_{1}\cdot\mathbf{e}_{R})(\mathbf{e}_{R}\cdot\mathbf{d}_{2})}{R^{3}}$
(7)
depends on the projection of the dipole operator $\mathbf{d}_{j}$ on the axis
$\mathbf{e}_{R}=\mathbf{R}/R$ connecting the center of masses of the two
molecules. Finally, $V_{\mathrm{el}}(\mathbf{R})$ describes the electronic
contribution to the dispersion interaction between molecules, given in Section
2.2 as $V_{\mathrm{el}}(\mathbf{R})=-C_{6}^{\mathrm{el}}R^{-6}$ with
$C_{6}^{\mathrm{el}}$ given by an Unsold approximation.
The most complete approach is to expand the full Hamiltonian, including the
kinetic energy operator, in the basis
$|J_{1}M_{1}0v\rangle|J_{2}M_{2}0v\rangle|\ell m\rangle$ of eigenstates of the
symmetric top $|J_{j}M_{j}0v\rangle$ associated with a molecule $j$ in level
$v$ and the eigenstates of the angular momentum $\mathbf{L}$ of the relative
motion $|\ell m\rangle$. Here, $\ell$ is the partial wave quantum number and
$m$ denotes the projection of $\mathbf{L}$ on the quantization axis in the
laboratory frame. This Hamiltonian conserves the projection of the total
angular momentum on the electric field axis $M_{\mathrm{TOT}}=M_{1}+M_{2}+m$,
and in the absence of the electric field also the total angular momentum
$\mathbf{J}_{\mathrm{TOT}}=\mathbf{J}_{1}+\mathbf{J}_{2}+\mathbf{L}$. The
matrix elements of the $H_{j}$ and the $V_{\mathrm{dd}}$ terms have been
extensively discussed in the literature, 27, 60 and we do not need to give
explicit formulas here.
Typically the number of channels that have to be included in the quantum
dynamics in the full rotational basis is very large. For example, the number
of eigenstates with angular momentum $J_{i}\leq J_{\mathrm{max}}=5$ and
partial wave quantum numbers $\ell\leq\ell_{\mathrm{max}}=10$ with
$M_{\mathrm{TOT}}=0$ and a symmetry of the wave function corresponding to
bosons is approximately 5700. This makes the coupled channels calculations
highly computationally demanding. Hence, in our study of the effects of the
full rotational expansion we apply the adiabatic approximation, diagonalizing
the interaction part of the Hamiltonian at different values of the
intermolecular distance $R$ and electric field $F$. This is motivated by the
fact that the collisions of polar molecules in the universal $y=1$ regime
exhibit no resonances, and in free-space can be accurately modeled in the
framework of the adiabatic potentials. 33 Our calculations with $n$ coupled
adiabatic channels are designated $n$-channel adiabatic (rotational basis),
and here we consider only $n=1$.
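To give a sense of this channel count, the following sketch simply enumerates the product basis states $|J_{1}M_{1}0v\rangle|J_{2}M_{2}0v\rangle|\ell m\rangle$ with $J_{i}\leq 5$, $\ell\leq 10$ and $M_{\mathrm{TOT}}=0$; it does not perform the proper exchange symmetrization, which keeps roughly half of these combinations and is thus of the same order as the $\approx 5700$ symmetrized states quoted above.

```python
# Enumerate product channels |J1 M1>|J2 M2>|l m> with M1 + M2 + m = 0,
# J1, J2 <= Jmax and l <= Lmax.  This is the unsymmetrized count; projecting
# onto bosonic exchange-symmetric combinations keeps roughly half of them.
Jmax, Lmax = 5, 10

count = 0
for J1 in range(Jmax + 1):
    for M1 in range(-J1, J1 + 1):
        for J2 in range(Jmax + 1):
            for M2 in range(-J2, J2 + 1):
                m = -(M1 + M2)                      # fixed by M_TOT = 0
                # every l with |m| <= l <= Lmax gives one allowed channel
                count += max(0, Lmax - abs(m) + 1)

print("unsymmetrized channels with M_TOT = 0:", count)   # 10700
print("rough exchange-symmetrized estimate  :", count // 2)
```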
When the molecules are far apart, they interact very weakly with one another
and each molecule is described by a field-dressed state $|g(F)\rangle$ with
energy $E_{g}(F)$ and dipole $d(F)$ along the field direction, found by
diagonalizing the single molecule Hamiltonian described in Section 2.1. At
long range $H_{\mathrm{int}}$ in Eq. (6) can be replaced by
$V_{\mathrm{int}}(R,\theta)=-\frac{C_{6}(F)}{R^{6}}+d(F)^{2}\frac{1-3\cos^{2}\theta}{R^{3}}$
(8)
where $\theta$ is the angle between the axis of the electric field and the
intermolecular axis $\mathbf{R}$, and we take the zero of energy to be
$E_{g}(F)=0$ for a given field $F$. The dispersion coefficient
$C_{6}(F)=C_{6}^{\mathrm{el}}+C_{6}^{\mathrm{rot}}(F)$ includes a field-
dependent rotational contribution which can be evaluated from the second-order
degenerate perturbation theory formula (see, e.g., Ref. 47):
$-\frac{C_{6}^{\mathrm{rot}}(F)}{R^{6}}=-\sum_{e\ell^{\prime}m_{\ell}^{\prime}}\frac{\langle
g\ell m_{\ell}|V_{\mathrm{dd}}|e\ell^{\prime}m_{\ell}^{\prime}\rangle\langle
e\ell^{\prime}m_{\ell}^{\prime}|V_{\mathrm{dd}}|g\ell
m_{\ell}\rangle}{E_{e}-E_{g}}\,.$ (9)
Here, $g$ and $e$ respectively represent field-dependent ground and excited
product states of the two molecules from the spectrum of solutions to
$H_{j}^{\mathrm{mol}}$. Symmetrizing appropriately, we decompose the single-
molecule field dressed states into a $|JM0v\rangle$ basis, and then calculate
the matrix elements of $V_{\mathrm{dd}}$ using the approach of Chapter 2 of
Ref. 35. We evaluate the sum over rotational states $J$ and partial waves
$\ell$, and sum over projections such that $M_{\mathrm{TOT}}$ is conserved. We
neglect the other degrees of freedom, which contribute to the non-rotational
part of $C_{6}$, and have been calculated for some species through full
electronic structure calculations. 47 Figure 4 gives an example of
$C_{6}^{\mathrm{rot}}(F)$ for the $v=0,J=0$ ground state of 87Rb133Cs. The
Figure shows that $C_{6}^{\mathrm{rot}}(F)$ decreases appreciably with
increasing $F$, due mostly to increasing size of the energy denominators in
Eq. (9).
Fig. 4: Field-dependent rotational contribution $C_{6}^{\mathrm{rot}}(F)$ to
the van der Waals coefficient for the interaction of two ground state RbCs
molecules (solid blue line) as a function of electric field $F$. Also shown is
the dipole moment (dashed green line).
A description in terms of the dispersion potential in (9) breaks down at small
distances, when the dipole-dipole interaction becomes larger than $2B_{\nu}J$.
The characteristic length at which this occurs is
$a_{B}=\left(d_{m}^{2}/B_{\nu}\right)^{1/3}$. 27 This length is on the order
of $0.4\bar{a}$ to $0.6\bar{a}$ for the molecules we consider here.
Consequently, since $a_{B}$ is already effectively in the short range regime
where all incoming flux of particles is lost, a description of collisions
using Eq. (8) should be valid. We have carried out coupled channels (CC)
calculations to test this, by using a partial wave expansion in $|\ell
m\rangle$ states with the full Hamiltonian including the interaction term in
the form of Eq. (8). We call this method the CC (vdW $+$ dipole) calculation.
In addition, we have implemented an adiabatic approximation in this expansion,
designated either 1-channel adiabatic (vdw $+$ dipole) or 2-channel adiabatic
(vdw $+$ dipole).
Figure 5 shows the elastic and reactive rates calculated with several
approximations applied to free space collisions of bosonic and fermionic KRb
prepared in the $F$-dependent state that correlates with the $v=0,J=0$ state
as $F\to 0$. For simplicity, we assume the same reduced mass and the dipole
moment for bosons as for fermionic 40K87Rb. The approximations are: the CC
(vdw $+$ dipole), the 1- and 2-channel adiabatic (vdw $+$ dipole) and the
1-channel adiabatic (rotational basis). All approaches predict almost
identical rate constants, except for the elastic rates for bosons at high
electric fields. In this case the rates given by the CC (vdw $+$ dipole) are
larger than those calculated using the adiabatic approximations. The
discrepancy stems from the $\ell$-changing transitions induced by the off-
diagonal elements in the dipole-dipole interaction, which are absent in the
1-channel adiabatic approximation. At high values of the electric field the
elastic rates for bosons contain a significant contribution from the higher
partial waves. This contribution can be included by considering more than one
adiabatic potential. This is shown in Fig. 5 for bosons using the 2-channel
adiabatic (vdw $+$ dipole) approximation, which goes much of the way to
eliminating the difference between the adiabatic and coupled channels
approaches. Interestingly we have found numerically that calculated rates
change by an insignificant amount if we use $C_{6}(0)$ instead of $C_{6}(F)$.
Presumably this is because the van der Waals dispersion term has become very
small in comparison to the centrifugal and dipolar terms in the range that
determines the dynamics.
Fig. 5: Field-dependent rate constants $K_{0}^{\mathrm{ls}}(F)$ and
$K_{1}^{\mathrm{ls}}(F)$ at $E/k_{B}=200\mathrm{nK}$ versus electric field $F$
for free space collisions of KRb bosons (left panel) and fermions (right
panel). The figure shows the CC (vdW $+$ dipole) calculation (solid lines) and
approximations based on the 1-channel adiabatic (rotational basis) (points),
1-channel adiabatic (vdw $+$ dipole) (thin solid line), and the 2-channel
adiabatic (vdw $+$ dipole) approximations (dot-dash line).
## 3 Collisions in quasi-2D geometry
### 3.1 Optical lattices
The very low kinetic energy of ultracold atoms and molecules allows them to be
controlled and manipulated by very weak forces. One important control is
provided by optical lattices, by which two counter-propagating light beams in
the $z$ direction define a standing wave light pattern that defines a series
of lattice cells separated by distances $L$ in the $z$ direction on the order
of hundreds of nm to more than a $\mu$m. We consider a single isolated cell
with local harmonic confinement along $z$ with frequency $\Omega/(2\pi)$ on
the order of tens of kHz. If $k_{B}T\ll\hbar\Omega$, the atoms or molecules
are tightly confined to the ground state of the cell in the $z$ direction,
while they are free to move in the $x-y$ plane. Collisions under such
conditions are called quasi-2D collisions. On the other hand, two orthogonal
sets of counterpropagating light beams can provide tight confinement in the
$x$ and $y$ directions, with free motion in the $z$ direction. This gives rise
to quasi-1D collisions. The theory of quasi-2D and quasi-1D collisions, and
their threshold laws, have been worked out in some detail, 61, 62, 63, 64, 65,
66, 67, 68 including collisions of ultracold molecular dipoles in quasi-2D.24,
27, 28, 29, 69, 70
The big advantage of reduced dimensional collisions in an optical lattice is
the extra control one gets over collision rates with dipolar molecules, since
the dipole length $a_{d}$ can easily exceed the confinement length $a_{h}$.
Consequently, electric fields can produce oriented molecules and control their
approach at long range, making it either attractive or repulsive. In the
latter case, short range reactive or loss collisions can be effectively turned
off, thus greatly increasing the lifetime of the sample.
### 3.2 Collision rates in reduced dimension
Here we will give examples of the universal collision rate constants for
quasi-2D collisions, when the molecules collide in an electric field ${\bf F}$
in the direction $z$ of tight confinement by a 1D optical lattice. The species
40K87Rb has already been treated in some detail for such a case, 29, 28, 69
and the predicted suppression of reactive collision rates by the repulsive
dipole potential barrier has been verified experimentally. 44 Here we examine
the transition from van der Waals dominated to dipole-dominated collisions for
the other species of mixed alkali-metal dimers. The van der Waals limit
applies when $a_{d}\ll\bar{a}$, whereas the dipolar limit applies when
$a_{d}\gg a_{h}$. In the latter case, collision rates approach a universal
dipolar limit, as described in the literature. 59, 71, 72, 70
We use here the form of the elastic and inelastic collision rates from Micheli
et al. 28, since this makes it easy to relate collisional properties in
different dimensions $N$, where $N=1,2,3$ refers to quasi-1D, quasi-2D, and
free space collisions respectively. 65 The “core” of the collision at short
distances $R\ll\bar{a}$ is assumed to occur in full 3D geometry and to be
represented by a normal spherical harmonic expansion in partial waves. The
long-range part of the collision occurs in reduced dimensionality due to the
tight confinement in 1 or 2 dimensions with characteristic length
$a_{h}\gg\bar{a}$ (see Fig. 2). The kinetic energy
$E=\hbar^{2}\kappa^{2}/(2\mu)$ of free motion in the respective $(z)$,
$(x,y)$, or $(x,y,z)$ directions for $N=1,2,3$ is assumed to be
$\ll\hbar\Omega$, so the colliding molecules are confined to the ground state
of the harmonic trap. This assumption can be easily relaxed to permit
collisions of molecules in other trap levels. 44, 32
The wave function can be expanded in a set of partial waves $j$ suitable for
each dimension. 65 The contributions to the elastic and inelastic rate
constants for collisions with relative momentum $\hbar\kappa$ in dimension $N$
are 28
$K_{j}^{\mathrm{el}}(\kappa)=\frac{\pi\hbar}{\mu}g_{N}\frac{|1-S_{jj}(\kappa)|^{2}}{\kappa^{N-2}}\,,$ (10)
$K_{j}^{\mathrm{ls}}(\kappa)=\frac{\pi\hbar}{\mu}g_{N}\frac{1-|S_{jj}(\kappa)|^{2}}{\kappa^{N-2}}\,,$ (11)
where the factor $g_{N}=1/\pi,\,2/\pi,\,2$ for molecules colliding in
identical spin states in $N=1,2,3$ dimensions, respectively. The indices of
the lowest partial wave are $j=0$ for bosons and $j=1$ for fermions. In 3D,
$j=1$ has 3 components of its projection $M$ that have to be summed over. In
quasi-2D with $\bf{F}$ along the confined direction $z$, $j$ refers to the
projection $M$ of relative rotational angular momentum along $z$, with two
components $+1$ and $-1$ that have to be summed over for $j=1$; in quasi-1D,
it refers to the symmetric ($j=0)$ or antisymmetric ($j=1$) state propagating
along $z$. In the case of unlike species both $j=0$ and 1 contribute. The
contribution from the lowest index is dominant for ultracold universal
reactive collisions in an electric field, and any additional partial waves can
be summed over in other cases to get the total rate constants
$K^{\mathrm{ls}}$ or $K^{\mathrm{el}}$. The loss rate for a gas with
$N$-dimensional density n is $\dot{n}=-K^{\mathrm{ls}}n^{2}$, where $n$ has
units of cm-1, cm-2, and cm-3 for $N=1,2,3$ respectively.
Micheli et al. 28 worked out analytic expressions for any $N$ for the universal
$K^{\mathrm{ls}}$ or $K^{\mathrm{el}}$ for a van der Waals potential. The loss
rate constants for $N=2$ are $K_{0}^{\mathrm{ls}}=2(\hbar/\mu)P_{0}$ for $M=0$
bosons and $K_{1}^{\mathrm{ls}}=4(\hbar/\mu)P_{1}$ for fermions summed over
both $|M|=1$ components. The prefactor $2(\hbar/\mu)$ or $4(\hbar/\mu)$ gives
the unitarity upper bound, and the transmission function
$P_{j}=1-|S_{jj}|^{2}$ gives the dynamical probability of getting from the
asymptotically prepared entrance channel to small $R$, where loss occurs:
$P_{0}=4\sqrt{\pi}\frac{\bar{a}}{a_{h}}\,f_{0}(\kappa)\,,\qquad P_{1}=6\sqrt{\pi}\frac{\bar{a}_{1}}{a_{h}}\,(\kappa\bar{a})^{2}\,,$ (12)
where $f_{0}(\kappa)$ is a complicated function that has a logarithmic
singularity as $\kappa\to 0$. However, this singularity is not important in
the experimental nK range (it is only dominant at much lower energies for
typical trapping geometries). For practical purposes $f_{0}(\kappa)$ can be
taken to be on the order of unity and nearly independent of $\kappa$ for
likely current experiments.
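For orientation, the sketch below evaluates Eq. (12) with $f_{0}\approx 1$ and the corresponding zero-field quasi-2D loss rate constants $K_{0}^{\mathrm{ls}}=2(\hbar/\mu)P_{0}$ and $K_{1}^{\mathrm{ls}}=4(\hbar/\mu)P_{1}$ defined above. The KRb-like inputs ($\bar{a}\approx 6$ nm, $\Omega=2\pi\times 50$ kHz, $E/k_{B}=200$ nK) are assumed for illustration only.

```python
import numpy as np

hbar, kB, u = 1.054571817e-34, 1.380649e-23, 1.66053906660e-27

# Assumed KRb-like inputs
mu = 0.5 * 127.0 * u           # reduced mass of the molecule pair
abar = 6.0e-9                  # van der Waals length (Fig. 2)
abar1 = 1.064 * abar
Omega = 2*np.pi*50e3           # 1D lattice trap frequency
E = kB*200e-9                  # relative collision energy

a_h = np.sqrt(hbar/(mu*Omega))
kappa = np.sqrt(2*mu*E)/hbar

# Eq. (12), taking f0 ~ 1 (adequate in the experimental nK range)
P0 = 4*np.sqrt(np.pi) * (abar/a_h)
P1 = 6*np.sqrt(np.pi) * (abar1/a_h) * (kappa*abar)**2

K0 = 2*(hbar/mu)*P0            # identical bosons, M = 0
K1 = 4*(hbar/mu)*P1            # identical fermions, summed over |M| = 1

to_cm2 = 1e4                   # m^2/s -> cm^2/s
print(f"a_h = {a_h*1e9:.0f} nm,  kappa*abar = {kappa*abar:.3f}")
print(f"K0 (bosons)   = {K0*to_cm2:.2e} cm^2/s")   # close to the 2*hbar/mu unitarity cap
print(f"K1 (fermions) = {K1*to_cm2:.2e} cm^2/s")
```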
We calculate $K^{\mathrm{ls}}$ here for quasi-2D geometry as a function of
electric field $F$ using the numerical method described by Micheli et al., 28
similar to the method of Quéméner and Bohn. 29, 69 In our case of universal
collisions, we impose incoming-only boundary conditions on the wave function
for $R\ll\bar{a}$, which corresponds to loss of all scattering flux that
reaches this region from long range. 34, 28 The calculation thus requires only
the van der Waals, dipolar, trapping, and centrifugal potentials, with a
spherical basis expanded in spherical harmonics at short range, switching to a
cylindrical basis of confined states at long range. The details of short range
“chemistry” are irrelevant, since only loss from the entrance channel occurs
there, due to reaction or vibrational quenching. Since there is no back
reflection to long range in the entrance channel, scattering resonances do not
exist in this universal case.
Fig. 6: Universal loss rate constant $K_{0}^{\mathrm{ls}}$ for quasi-2D
collisions at $E/k_{B}=200$ nK in an $\Omega=2\pi(50\,\mathrm{kHz})$ trap for
bosonic ($j=0$) 6Li40K, 39K133Cs and 87Rb133Cs (dashed lines) and
$K_{1}^{\mathrm{ls}}$ for fermionic 6Li39K, 40K87Rb and 40K133Cs (solid
lines). In the case of the nonreactive species KCs and RbCs, these rate
constants are expected to apply to vibrational quenching for low vibrational
levels with $v\geq 1$. The black arrow shows the unitarity limit $2\hbar/\mu$
for bosonic 6Li40K. The solid points show the fit function in Eq. 13 when
$a_{d}>a_{h}$.
Figure 6 shows our calculations of $K_{j}^{\mathrm{ls}}$ as the electric field
$F$ is increased for a variety of species at a collision energy of $E/k_{B}=$
200 nK for confinement in an $\Omega=2\pi(50\,\mathrm{kHz})$ trap. These go to
the universal van der Waals limits as $F\to 0$. As we found for free space
collisions, our numerical calculations change by an insignificant amount if we
use $C_{6}(0)$ instead of $C_{6}(F)$ as $F$ increases. Consequently the van
der Waals potential becomes progressively less important as $F$ increases and
the dipole potential becomes dominant on distance scales larger than the
confinement length $a_{h}$. We also find that for the low energies and
moderate dipole strengths that we explore in quasi-2D geometry, the lowest
partial wave is the dominant contribution to the universal loss rate constant,
so little change would be seen in $K_{j}^{\mathrm{ls}}$ in Fig. 6 if the
contributions from higher M values were included. Elastic collisions, however,
will be affected more, but they can be treated by simple approximations.59, 71
The 40K87Rb case in Fig. 6 is the one reported and explained by Micheli et al.
28 and Quémenér and Bohn. 29, 69 The other cases are new. A useful figure of
merit is a rate constant of $10^{-7}$ cm${}^{2}/$s, which corresponds to a 1s
lifetime of a 2D gas with density $n=10^{7}$ cm-2, comparable to the number
achieved experimentally with 40K87Rb. 44 Smaller rate constants correspond to
lifetimes of longer than 1 s at this density. As the dipole strength $d(F)$
increases, $K_{j}^{\mathrm{ls}}$ decreases below $10^{-7}$ cm${}^{2}/$s when
$d(F)$ is large enough. Consequently, we predict that stabilization of the
heavier non-reactive species RbCs and KCs in their low vibrational levels
should be possible at a dipole strength of only 1 D or less, i.e., at electric fields
below 10 kV/cm (see Fig. 1). Of course, RbCs and KCs are nonreactive in $v=0$.
Even a light reactive species such as LiK with weaker confinement should be stabilized
in such a trap at a field on the order of 20 kV/cm.
Figure 6 also demonstrates the difference between the bosonic and fermionic
form of the same species LiK and KCs. As already shown for KRb, the fermionic
rate is suppressed at zero field relative to the bosonic rate, due to the
centrifugal barrier to the $p$-wave. At 200 nK the suppression ratio
$K_{1}^{\mathrm{ls}}/K_{0}^{\mathrm{ls}}\approx$ 10 and 100 for KCs and LiK
respectively, similar to the ratios in 3D (see Fig. 3). As field increases,
the fermionic and bosonic rate constants for the same species move away from
the van der Waals limit and are both suppressed, eventually becoming nearly
equal as $d(F)$ continues to increase. The figure also shows a comparison to
an approximate formula which applies to bosons and fermions in the strong
dipole limit where $a_{d}\gg a_{h}$,
$K_{\mathrm{D}}^{\mathrm{ls}}=2\frac{\hbar}{\mu}288(\kappa
a_{h})^{4}e^{-3.0314\,(a_{d}/a_{h})^{2/5}}\,,$ (13)
where the factor 288 comes from fitting our numerical calculations. The factor
$2^{8/5}\approx 3.0314$ in the exponent is replaced by 2 if $a_{d}$ and
$a_{h}$ are defined with the total molecular mass instead of the reduced mass.
The agreement with the calculations shows that this expression accurately
captures the scaling with mass and $d(F)$ for a wide range of species. It
seems especially good for the bosons even when $a_{d}$ is only slightly larger
than $a_{h}$. The quantity to the right of $2\hbar/\mu$ in Eq. (13) represents
the transmission function $P_{0}(\Omega,F,E)$ and is similar to expressions
for the semiclassical tunneling probability through the long range potential
barrier proportional to $\exp\left(-\mathrm{C}\,(a_{d}/a_{h})^{2/5}\right)$,
where $\mathrm{C}$ is a constant, as described by several authors. 24, 27, 28,
71, 72, 70 Since the rate constants $K_{0}^{\mathrm{ls}}$ and
$K_{1}^{\mathrm{ls}}$ are nearly equal in the strong dipole limit, this
implies $P_{1}\approx P_{0}/2$. Our fit for $P_{0}(\Omega,F,E)$ in Eq. (13) is
based on the calculated universal quantum dynamics and does not rely on any
semiclassical or adiabatic approximations. It gives realistic scaling with the
experimentally variable parameters: trap strength $\Omega$, electric field $F$
and temperature, which respectively control confinement $a_{h}$, dipole length
$a_{d}$ and the range of collision energy $E$.
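As an illustration of this scaling, the sketch below evaluates Eq. (13) over a range of dipole strengths for an assumed 7Li133Cs-like mass, $\Omega=2\pi\times 50$ kHz and $E/k_{B}=200$ nK (inputs chosen for illustration only, and meaningful only where $a_{d}$ well exceeds $a_{h}$); the strong exponential suppression with $(a_{d}/a_{h})^{2/5}$ is immediately visible.

```python
import numpy as np

hbar, kB, u = 1.054571817e-34, 1.380649e-23, 1.66053906660e-27
eps0, debye = 8.8541878128e-12, 3.33564e-30

def K_dipolar(d_debye, M_amu, Omega, E):
    """Eq. (13): universal quasi-2D loss rate in the strong dipole limit a_d >> a_h."""
    mu = 0.5 * M_amu * u                          # reduced mass of the pair
    d = d_debye * debye
    a_h = np.sqrt(hbar/(mu*Omega))                # confinement length
    a_d = mu*d**2/(4*np.pi*eps0*hbar**2)          # dipole length (SI form)
    kappa = np.sqrt(2*mu*E)/hbar
    return 2*(hbar/mu) * 288 * (kappa*a_h)**4 * np.exp(-3.0314*(a_d/a_h)**0.4)

# Assumed illustrative inputs: 7Li133Cs-like mass, 50 kHz trap, 200 nK collision energy
Omega, E = 2*np.pi*50e3, kB*200e-9
for d in [0.5, 1.0, 1.5, 2.0]:                    # dipole strength in debye
    print(f"d = {d:3.1f} D   K_D = {K_dipolar(d, 140.0, Omega, E)*1e4:.2e} cm^2/s")
# Suppression below the ~1e-7 cm^2/s figure of merit sets in once a_d well exceeds a_h.
```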
Fig. 7: Universal loss rate constant $K_{j}^{\mathrm{ls}}$ for quasi-2D collisions of bosonic $^{7}$Li$^{133}$Cs (dashed lines) and fermionic $^{6}$Li$^{133}$Cs (solid lines) at $E/k_{B}=200$ nK for a range of trap strengths $\Omega/(2\pi)$. The solid points show the fit $K_{\mathrm{D}}^{\mathrm{ls}}$ of Eq. (13) when $a_{d}>a_{h}$. The arrow indicates the unitarity limit for bosons, $2\hbar/\mu$.
Fig. 8: Universal loss rate constant $K_{0}^{\mathrm{ls}}$ (points) versus collision energy $E/k_{B}$ for bosonic $^{7}$Li$^{133}$Cs at a dipole strength of $d=0.4$ au (1 D) for $\Omega/(2\pi)=$ 30 kHz, 50 kHz, and 100 kHz. The solid line shows the fit function in Eq. (13).
Figure 7 shows our calculations for bosonic and fermionic isotopes of the
reactive species LiCs, which has the largest dipole moment of any of the mixed
alkali species. Since the dipole moment varies slowly with vibrational quantum
number, this rate should apply to a range of low vibrational levels.
Stabilization in quasi-2D geometry should be possible at relatively modest
electric fields. The formula in Eq. (13) is again very good for this species,
verifying that in the strong dipole limit the probability $P_{j}$ decreases
with increasing $\Omega$. This is unlike the van der Waals limit, where
$P_{j}$ from Eq. (12) scales as $\sqrt{\Omega}$. Consequently Fig. 7 shows the
switching between the two limits for dipole strengths where $a_{d}\approx
a_{h}$. Again, we find that the universal loss rates for bosons and fermions
are nearly the same in the strong dipole limit.
Figure 8 shows that the $K_{j}^{\mathrm{ls}}$ in the strong dipole limit
scales as $E^{2}$ for $j=0$ and 1 in the interesting experimental range of 50
nK to 500 nK, consistent with the $E^{2}$ scaling found by Ticknor. 72 Some
departure from this scaling is evident below 50 nK and above 500 nK in Fig. 8.
In contrast, Eq. (12) shows that in the van der Waals limit, as $F\to 0$, $K_{1}^{\mathrm{ls}}$ scales as $E$ while $K_{0}^{\mathrm{ls}}$ is nearly independent of $E$.
We have concentrated here on reactive or loss collisions in quasi-2D geometry,
instead of elastic collisions. We have also calculated the latter as a
function of field. Micheli et al. 28 found that a simple unitarized Born
approximation was accurate for the two lowest partial waves for ultracold KRb.
As the dipole increases it is necessary to include more than the lowest partial
wave. Simple universal expressions for the strong dipole limit can be worked
out. 71, 70 The only important aspect of quasi-2D elastic scattering to note
here is that the elastic rate constant continues to increase towards the
unitarity limit (of several partial waves) as the electric field increases, and
thus the ratio of elastic to loss collision rates becomes very large with
increasing field, due to the strong suppression of the latter. Thus, it is
hoped that fast elastic collisions can thermalize a quasi-2D gas of molecular
dipoles, which are stable with long lifetimes relative to loss on the time
scale of an experiment.
## 4 Conclusion
We have characterized the universal reactive and inelastic relaxation rate
constants in free space and in quasi-2D planar geometry of the ten mixed
alkali-metal dimer polar molecular species in the near-threshold limit of
ultracold collisions where the lowest partial waves contribute. We consider
collisions between molecules in a single state of vibration, rotation ($J=0$)
and internal spin, where the bosonic or fermionic character of the molecule
leads to different rate constants. The universal rate constants are
independent of the details of short range chemical interactions and depend
only on the long range approach of the colliding species, which is subject to
experimental control by electric and electromagnetic forces. The universal
rates apply as long as short range chemical dynamics has unit probability of
chemical reaction or inelastic quenching that results in loss of the initially
prepared ultracold molecules. The species LiNa, LiK, LiRb, LiCs, and KRb have
such unit short range chemical reaction probabilities when in their
vibrational ground state $v=0$, and are expected to have universal rate
constants in all $v$. The species NaK, NaRb, NaCs, KCs and RbCs are non-
reactive in $v=0$, but may have universal vibrational quenching rate constants
when in vibrationally excited states with $v\geq 1$. The non-universal
collisions of the $v=0$ level of these non-reactive species are predicted to
have electric field-dependent scattering resonances, 56, 73, 33, 70 which
remain to be more fully explored theoretically and experimentally.
In the absence of an electric field, the universal van der Waals rate
constants are determined only by the van der Waals $C_{6}$ coefficient, the
reduced mass of the pair, and the collision energy according to the known
threshold laws. The important parameter is the van der Waals length $\bar{a}$,
which scales as $(\mu C_{6})^{1/4}$ and thus is relatively insensitive to
$C_{6}$. We use simple estimates of $C_{6}$ based on the electronic and
rotational properties of the molecules at their equilibrium internuclear
separation $R_{e}$ that should be adequate approximations for low values of
vibrational quantum number $v$. Although the normal second-order expansion of
the dispersion energy breaks down when two molecules approach closely enough that
the dipolar interaction exceeds the rotational spacing, the distance at which
this happens is around half of $\bar{a}$, so it does not significantly
affect the reaction rate, which is governed by quantum scattering at distances on the order
of $\bar{a}$ or larger.
inelastic loss rate constants for bosonic mixed alkali-metal dimers should be
within a few factors of the unitarity upper bound to the rate constant,
whereas rate constants for fermionic species tend to be significantly smaller
by factors of 10 to 100 around 200 nK, scaling linearly with temperature.
When an electric field $F$ is present, the collision rates vary with $F$ and
switch from the van der Waals universal limit to a universal strong dipolar
limit. We show that the universal ultracold collision rates are not
significantly affected by the $F$-dependent changes in the second-order
dispersion interaction, but can be calculated in 3D from rotationally
adiabatic potentials. We consider in detail the $F$-dependent universal loss
rate constants in quasi-2D geometry when the molecules are oriented in the
$z$-direction by an electric field and tightly confined in that direction by a
harmonic trap such that $\kappa a_{h}\ll 1$, where $\hbar\kappa$ is the
collisional momentum and $a_{h}$ is the harmonic confinement length. Our fully
numerical calculations connect the wave function in the universal 3D short
range core of the collision with incoming boundary conditions to the reduced
dimensional quasi-2D wave function at long range. As in 3D, the collision
switches from the van der Waals limit to a strong dipole limit as $F$
increases, where the rate constants for bosons and fermions of the same
species are the same. We obtain a simple scaling formula that provides a good
approximation of the universal rate constants for all ten species in the
ultracold domain below 1 $\mu$K and for trapping frequencies in the tens of
kHz range that are experimentally feasible. The universal strong dipolar limit
of the rate constant has simple scaling with the mass, trapping frequency, and
molecular dipole strength. In this geometry the dipoles have long range
repulsive interactions, and the universal collision rate constants can be
decreased to several orders of magnitude below the unitarity limit. Gases of
oriented dipoles of even highly reactive species can therefore be stabilized
against loss collisions under practical experimental conditions, achievable
with current techniques for cooling, trapping, and optical lattice design and
with relatively modest electric fields on the order of 10 kV/cm. We expect
these considerations will also be valid for quasi-1D
geometry, or collisions in “tubes” with tight confinement in the two
directions orthogonal to the “tube.”
While we have considered molecular collisions for states with low vibrational
quantum numbers $v$, the van der Waals $C_{6}$ coefficient and, perhaps more
importantly, the molecular dipole moment will vary with $v$. Consequently,
there is some degree of control possible by “vibrational tuning” of these
parameters. Eventually, at very high $v$ near the dissociation limit, the
expectation value of the dipole moment will become very small, and electric
field control will be lost. But there is a wide range of vibrational levels
that should be accessible for experimental control, so that the $v=0$ level
need not be the target level of STIRAP, especially if higher $v$ levels are
easier to reach. Since the short range collision loss probability is maximal
for universal collisions, it is not necessary to have a non-reactive $v=0$
species to work with, if collisions can be shielded in quasi-2D geometry. One
could envision achieving a lattice of “preformed pairs” of unassociated atoms
in an array of fully confining 3D trapping cells (zero-dimensional collisions
of atoms bound to the cell), as has been proposed. 37, 74 Converting such atom
pairs to Feshbach molecules by magnetoassociation and moving the latter to a
desired $v$ by STIRAP should allow the study of controlled collisions upon
subsequent removal of the fully confining lattice by turning off some or all
of the lattice lasers, resulting in lattices with quasi-1D or quasi-2D
geometry or even a free space gas. A broad class of such collisions should
have rate constants approximated by the universal ones described here.
## 5 Acknowledgments
This work was supported in part by an AFOSR MURI grant on ultracold polar
molecules and in part by a Polish Government Research Grant for the years
2011-2014.
## References
* Dalfovo _et al._ 1999 F. Dalfovo, S. Giorgini, L. P. Pitaevskii and S. Stringari, _Rev. Mod. Phys._ , 1999, 71, 463–512.
* Stringari and Pitaevskii 2003 S. Stringari and L. Pitaevskii, _Bose-Einstein condensation_ , Oxford University Press, London, 2003.
* Giorgini _et al._ 2008 S. Giorgini, L. P. Pitaevskii and S. Stringari, _Rev. Mod. Phys._ , 2008, 80, 1215–1274.
* Pethick and Smith 2008 C. J. Pethick and H. Smith, _Bose-Einstein condensation in dilute gases_ , Cambridge University Press, 2008.
* Timmermans _et al._ 1999 E. Timmermans, P. Tommasini, M. Hussein and A. Kerman, _Phys. Rep._ , 1999, 315, 199–230.
* Hutson and Soldán 2006 J. Hutson and P. Soldán, _Int. Rev. Phys. Chem._ , 2006, 25, 497–526.
* Hutson 2007 J. M. Hutson, _New J. Phys._ , 2007, 9, 152.
* Köhler _et al._ 2006 T. Köhler, K. Góral and P. S. Julienne, _Rev. Mod. Phys._ , 2006, 78, 1311–1361.
* Chin _et al._ 2010 C. Chin, R. Grimm, P. S. Julienne and E. Tiesinga, _Rev. Mod. Phys._ , 2010, 82, 1225.
* Jessen and Deutsch 1996 P. S. Jessen and I. H. Deutsch, _Adv. At. Mol. Opt. Phys._ , 1996, 37, 95–136.
* Bloch 2005 I. Bloch, _Nature Phys._ , 2005, 1, 23–30.
* Bloch _et al._ 2008 I. Bloch, J. Dalibard and W. Zwerger, _Rev. Mod. Phys._ , 2008, 80, 885.
* Greiner and Fölling 2008 M. Greiner and S. Fölling, _Nature_ , 2008, 453, 736–738.
* Ni _et al._ 2008 K.-K. Ni, S. Ospelkaus, M. H. G. de Miranda, A. Pe’er, B. Neyenhuis, J. J. Zirbel, S. Kotochigova, P. S. Julienne, D. S. Jin and J. Ye, _Science_ , 2008, 322, 231–235.
* Danzl _et al._ 2010 J. G. Danzl, M. J. Mark, E. Haller, M. Gustavsson, R. Hart, J. Aldegunde, J. M. Hutson and H.-C. Nägerl, _Nature Phys._ , 2010, 6, 265–270.
* Lang _et al._ 2008 F. Lang, K. Winkler, C. Strauss, R. Grimm and J. Hecker Denschlag, _Phys. Rev. Lett._ , 2008, 101, 133005.
* Sage _et al._ 2005 J. M. Sage, S. Sainis, T. Bergeman and D. DeMille, _Phys. Rev. Lett._ , 2005, 94, 203001.
* Viteau _et al._ 2008 M. Viteau, A. Chotia, M. Allegrini, N. Bouloufa, O. Dulieu, D. Comparat and P. Pillet, _Science_ , 2008, 321, 232–234.
* Deiglmayr _et al._ 2008 J. Deiglmayr, A. Grochola, M. Repp, K. Mörtlbauer, C. Glück, J. Lange, O. Dulieu, R. Wester and M. Weidemüller, _Phys. Rev. Lett._ , 2008, 101, 133004.
* DeMille 2002 D. DeMille, _Phys. Rev. Lett._ , 2002, 88, 067901.
* Doyle _et al._ 2004 J. Doyle, B. Friedrich, R. Krems and F. Masnou-Seeuws, _Eur. Phys. J. D_ , 2004, 31, 149–164.
* Carr _et al._ 2009 L. D. Carr, D. DeMille, R. V. Krems and J. Ye, _New J. Phys_ , 2009, 11, 055049.
* Micheli _et al._ 2006 A. Micheli, G. K. Brennen and P. Zoller, _Nature Phys._ , 2006, 2, 341.
* Büchler _et al._ 2007 H. P. Büchler, E. Demler, M. Lukin, A. Micheli, N. Prokof’ev, G. Pupillo and P. Zoller, _Phys. Rev. Lett._ , 2007, 98, 060404.
* Krems 2008 R. V. Krems, _Phys. Chem. Chem. Phys._ , 2008, 10, 4079–4092.
* Julienne 2009 P. S. Julienne, _Faraday Discuss._ , 2009, 142, 361.
* Micheli _et al._ 2007 A. Micheli, G. Pupillo, H. P. Büchler and P. Zoller, _Phys. Rev. A_ , 2007, 76, 043604.
* Micheli _et al._ 2010 A. Micheli, Z. Idziaszek, G. Pupillo, M. A. Baranov, P. Zoller and P. S. Julienne, _Phys. Rev. Lett._ , 2010, 105, 073202.
* Quéméner and Bohn 2010 G. Quéméner and J. L. Bohn, _Phys. Rev. A_ , 2010, 81, 060701.
* de Miranda _et al._ 2010 M. H. G. de Miranda, A. Chotia, B. Neyenhuis, D. Wang, G. Quéméner, S. Ospelkaus, J. L. Bohn, J. Ye and D. S. Jin, _arXiv:1010.3731v1_ , 2010.
* Ospelkaus _et al._ 2010 S. Ospelkaus, K.-K. Ni, D. Wang, M. H. G. de Miranda, B. Neyenhuis, G. Quéméner, P. S. Julienne, J. L. Bohn, D. S. Jin and J. Ye, _Science_ , 2010, 327, 853–857.
* Quéméner and Bohn 2010 G. Quéméner and J. L. Bohn, _Phys. Rev. A_ , 2010, 81, 022702.
* Idziaszek _et al._ 2010 Z. Idziaszek, G. Quéméner, J. L. Bohn and P. S. Julienne, _Phys. Rev. A_ , 2010, 82, 020703.
* Idziaszek and Julienne 2010 Z. Idziaszek and P. S. Julienne, _Phys. Rev. Lett._ , 2010, 104, 113202.
* Krems _et al._ 2009 R. V. Krems, W. C. Stwalley and B. Friedrich, _Cold Molecules: Theory, Experiment, Applications_ , CRC Press, New York, 2009.
* Faraday Discussions 2009 _Faraday Discuss._ , 2009, 142: Cold and Ultracold Molecules.
* Damski _et al._ 2003 B. Damski, L. Santos, E. Tiemann, M. Lewenstein, S. Kotochigova, P. S. Julienne and P. Zoller, _Phys. Rev. Lett._ , 2003, 90, 110401.
* Jaksch _et al._ 2002 D. Jaksch, V. Venturi, J. I. Cirac, C. J. Williams and P. Zoller, _Phys. Rev. Lett._ , 2002, 89, 040402.
* Jones _et al._ 2006 K. M. Jones, E. Tiesinga, P. D. Lett and P. S. Julienne, _Rev. Mod. Phys._ , 2006, 78, 483–535.
* Ospelkaus _et al._ 2008 S. Ospelkaus, A. Pe’er, K.-K. Ni, J. J. Zirbel, B. Neyenhuis, S. Kotochigova, P. S. Julienne, J. Ye and D. S. Jin, _Nature Phys._ , 2008, 4, 622–626.
* Danzl _et al._ 2009 J. G. Danzl, M. J. Mark, E. Haller, M. Gustavsson, R. Hart, A. Liem, H. Zellmer and H.-C. Nägerl, _New J. Phys._ , 2009, 11, 055036.
* Zuchowski and Hutson 2010 P. S. Zuchowski and J. M. Hutson, _Phys. Rev. A_ , 2010, 81, 060703.
* Aymar and Dulieu 2005 M. Aymar and O. Dulieu, _J. Chem. Phys._ , 2005, 122, 204302.
* Ni _et al._ 2010 K.-K. Ni, S. Ospelkaus, D. Wang, G. Quéméner, B. Neyenhuis, M. H. G. de Miranda, J. L. Bohn, J. Ye and D. S. Jin, _Nature_ , 2010, 464, 1324–1328.
* Azizi and Dulieu 2004 S. Azizi and M. A. O. Dulieu, _Eur. Phys. J. D._ , 2004, 31, 195–203.
* Deiglmayr _et al._ 2008 J. Deiglmayr, M. Aymar, R. Wester, M. Weidemüller and O. Dulieu, _J. Chem. Phys._ , 2008, 129, 064309.
* Kotochigova 2010 S. Kotochigova, _New J. Phys._ , 2010, 12, 073041.
* Hutson and Soldán 2007 J. M. Hutson and P. Soldán, _Int. Rev. Phys. Chem._ , 2007, 26, 1–28.
* Aldegunde _et al._ 2008 J. Aldegunde, B. A. Rivington, P. S. Zuchowski and J. M. Hutson, _Phys. Rev. A_ , 2008, 78, 033434.
* Quéméner _et al._ 2005 G. Quéméner, P. Honvault, J.-M. Launay, P. Soldán, D. E. Potter and J. M. Hutson, _Phys. Rev. A_ , 2005, 71, 032722.
* Quéméner _et al._ 2007 G. Quéméner, J.-M. Launay and P. Honvault, _Phys. Rev. A_ , 2007, 75, 050701.
* Staanum _et al._ 2006 P. Staanum, S. D. Kraft, J. Lange, R. Wester and M. Weidemüller, _Phys. Rev. Lett._ , 2006, 96, 023201.
* Deiglmayr _et al._ 2011 J. Deiglmayr, M. Repp, A. Grochola, O. Dulieu, R. Wester and M. Weidemüller, _J. Phys. Conf. Series_ , 2011, 264, 012014.
* Hudson _et al._ 2008 E. R. Hudson, N. B. Gilfoy, S. Kotochigova, J. M. Sage and D. DeMille, _Phys. Rev. Lett._ , 2008, 100, 203201.
* Weidemüller 2010 M. Weidemüller, _Dipolar Effects in an Ultracold Gas of LiCs Molecules_ , 2010, talk at EuroQUAM Conference, 13 September 2010, Ischgl, Austria.
* Ticknor and Bohn 2005 C. Ticknor and J. L. Bohn, _Phys. Rev. A_ , 2005, 72, 032717.
* Ticknor 2007 C. Ticknor, _Phys. Rev. A_ , 2007, 76, 052703.
* Ticknor 2008 C. Ticknor, _Phys. Rev. Lett._ , 2008, 100, 133202.
* Bohn _et al._ 2009 J. L. Bohn, M. Cavagnero and C. Ticknor, _New Journal of Physics_ , 2009, 11, 055039.
* Bohn 2009 J. L. Bohn, in _Cold Molecules: Theory, Experiment, Applications_ , ed. R. V. Krems, W. C. Stwalley and B. Friedrich, CRC Press, New York, 2009, ch. 2, pp. 39–69.
* Olshanii 1998 M. Olshanii, _Phys. Rev. Lett._ , 1998, 81, 938–941.
* Sadeghpour _et al._ 2000 H. Sadeghpour, J. Bohn, M. Cavagnero, B. Esry, I. Fabrikant, J. Macek and A. Rau, _J. Phys. B_ , 2000, 33, R93–R140.
* Petrov and Shlyapnikov 2001 D. S. Petrov and G. V. Shlyapnikov, _Phys. Rev. A_ , 2001, 64, 012706.
* Bergeman _et al._ 2003 T. Bergeman, M. G. Moore and M. Olshanii, _Phys. Rev. Lett._ , 2003, 91, 163201.
* Naidon and Julienne 2006 P. Naidon and P. S. Julienne, _Phys. Rev. A_ , 2006, 74, 062713.
* Naidon _et al._ 2007 P. Naidon, E. Tiesinga, W. F. Mitchell and P. S. Julienne, _New J. Phys._ , 2007, 9, 19.
* Li _et al._ 2008 Z. Li, S. V. Alyabyshev and R. V. Krems, _Physical Review Letters_ , 2008, 100, 073202.
* Li and Krems 2009 Z. Li and R. V. Krems, _Phys. Rev. A_ , 2009, 79, 050701.
* Quéméner and Bohn 2011 G. Quéméner and J. L. Bohn, _Phys. Rev. A_ , 2011, 83, 012705.
* D’Incao and Greene 2011 J. P. D’Incao and C. Greene, _Phys. Rev. A_ , 2011, 83, 030702.
* Ticknor 2009 C. Ticknor, _Phys. Rev. A_ , 2009, 80, 052702.
* Ticknor 2010 C. Ticknor, _Phys. Rev. A_ , 2010, 81, 042708.
* Roudnev and Cavagnero 2009 V. Roudnev and M. Cavagnero, _J. Phys. B_ , 2009, 42, 044017.
* Freericks _et al._ 2010 J. K. Freericks, M. M. Máska, A. Hu, T. M. Hanna, C. J. Williams, P. S. Julienne and R. Lemanski, _Phys. Rev. A_ , 2010, 81, 011605.
1106.0698
# The data reduction pipeline for the Hi-GAL survey
A. Traficante1,9, L. Calzoletti2,8, M. Veneziani3,9, B. Ali5, G. de Gasperis1,
A.M. Di Giorgio6, F. Faustini2, D. Ikhenaode4, S. Molinari6, P. Natoli1,2,7,
M. Pestalozzi6, S. Pezzuto6, F. Piacentini3, L. Piazzo4, G. Polenta2,8 and E.
Schisano6
1Dipartimento di Fisica, Università di Roma “Tor Vergata”, Italy
2ASI Science Data Center, I-00044 Frascati (Rome)
3Dipartimento di Fisica, Università di Roma “La Sapienza”, Italy
4DIET - Dipartimento di Ingegneria dell’Informazione, Elettronica e
Telecomunicazioni, Università di Roma “La Sapienza”, Italy
5NASA Herschel Science Center, Caltech, Pasadena, CA
6INAF-Istituto Fisica Spazio Interplanetario I-00133 Rome
7INFN Sezione di Tor Vergata, Rome, Italy
8INAF, Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monte Porzio
Catone, Italy
9Spitzer Science Center, Caltech, Pasadena, CA E-mail:
alessio.traficante@roma2.infn.it
(Submitted)
###### Abstract
We present the data reduction pipeline for the Hi-GAL survey. Hi-GAL is a key
project of the Herschel satellite which is mapping the inner part of the
Galactic plane ($|l|\leqslant$ $70^{\circ}$ and $|b|\leqslant 1^{\circ}$),
using 2 PACS and 3 SPIRE frequency bands, from $70$$\mu$m to $500$$\mu$m. Our
pipeline relies only partially on the Herschel Interactive Processing
Environment (HIPE) and features several newly developed routines to perform
data reduction, including accurate data culling, noise estimation and minimum
variance map-making, the latter performed with the ROMAGAL algorithm, a deep
modification of the ROMA code already tested on cosmological surveys. We
discuss in depth the properties of the Hi-GAL Science Demonstration Phase
(SDP) data.
###### keywords:
instrumentation – data reduction
††pagerange: The data reduction pipeline for the Hi-GAL
survey–References††pubyear: 2002
## 1 Introduction
The Herschel Space Observatory was launched from Kourou in May 2009 aboard an
Ariane 5 rocket. Two of the three scientific instruments on the focal plane (PACS
and SPIRE) are capable of observing the infrared sky with unprecedented angular
resolution and sensitivity, providing photometric observations in 6 different
bands ($70$$\mu$m, $100$$\mu$m, $160$$\mu$m, $250$$\mu$m, $350$$\mu$m and
$500$$\mu$m: Pilbratt et al., 2010, and references therein).
The PACS photometer is composed of two bolometer arrays: a $64\times 32$ pixel
matrix, assembled from 8 monolithic subarrays of $16\times 16$ pixels each, for
the bands centered on $70\mu$m and $100\mu$m (blue and green bands), and
a $32\times 16$ pixel matrix organized in two subarrays for the band centered
on $160\mu$m (red band); see Poglitsch et al. (2010).
SPIRE comprises a three-band photometer operating in spectral bands centered
on $250\mu$m, $350\mu$m and $500\mu$m. Each band uses a matrix of germanium
bolometers (139, 88 and 43, respectively) coupled to hexagonally packed conical
feed horns (Griffin et al., 2010).
In order to handle science data provided by the Herschel instruments,
including the data retrieval from the Herschel Science Archive, the data
reduction through the standard pipelines and the scientific analysis, an
official software environment called HIPE (Herschel Interactive Processing
Environment, Ott et al., 2010) is available from ESA.
The raw data provided by the satellite are reduced in HIPE to generate
scientific data (so-called Level 2) and intermediate products of the data
reduction process (Level 0 and Level 1 data).
In this paper, we describe the dedicated pipeline created to obtain maps for
Hi-GAL (Herschel Infrared Galactic Plane Survey, Molinari et al., 2010a). Hi-GAL
aims to homogeneously cover a 2-degree-wide stripe of the Galactic plane between
$l=-70^{\circ}$ and $l=70^{\circ}$ with observations in 5 contiguous IR bands
between 70$\mu$m and 500$\mu$m.
The Galactic plane shows emission ranging from point-like sources to large-scale
structures, with intensity varying over a wide dynamic range. In this
work we show that the existing standard reduction strategy (based on
HIPE version 4.4.0, released on November 11th, 2010) is not optimized to
reduce Hi-GAL data and that a dedicated pipeline can enhance the quality of
the Level 2 products.
After Herschel successfully passed the Performance Verification Phase (PV
Phase), two fields of the Hi-GAL survey were acquired during the Science
Demonstration Phase (SDP): $2^{\circ}\times 2^{\circ}$ areas of the Galactic plane
centered on $30^{\circ}$ of longitude (hereafter, $l=30^{\circ}$) and on $59^{\circ}$
($l=59^{\circ}$).
We describe the data reduction tools used to obtain high quality maps from SDP
data, with the aim of providing a reliable environment for the Routine Phase
(RP) data. The maps produced by our pipeline have been successfully used in several
works, e.g. Molinari et al. (2010b), Martin et al. (2010) and Peretto et al. (2010).
The paper is organized as follows: in Section 2 we describe the acquisition
strategy for Hi-GAL data; in Section 3 we describe the pre-processing steps of
the data reduction pipeline, necessary to prepare data for the map making and
the tools that we have developed to that purpose. In Section 4 we describe the
ROMAGAL map making algorithm used in order to estimate the maps. ROMAGAL is
used in place of the MadMap code which is the map making algorithm offered in
HIPE. The quality of the ROMAGAL maps for both PACS and SPIRE instruments,
related to the SDP observations, will be analyzed in Section 5; in Section 6
we draw our conclusions.
## 2 The Hi-GAL acquisition strategy
Hi-GAL data are acquired in PACS/SPIRE Parallel
mode111http://herschel.esac.esa.int/Docs/PMODE/html/parallel_om.html, in which
the same sky region is observed by moving the satellite at a constant speed of
60$\arcsec$/sec and acquiring images simultaneously in five photometric bands:
$70$$\mu$m and $160$$\mu$m for PACS and $250$$\mu$m, $350$$\mu$m and $500\mu$m
for SPIRE.
The whole data acquisition is subdivided into $2^{\circ}\times 2^{\circ}$ fields
of sky centered on the Galactic plane. Every Hi-GAL field is composed of the
superposition of two orthogonal AORs (Astronomical Observation Requests). Each
of them is based on a series of consecutive, parallel and partly overlapping
scan legs covering a $2^{\circ}\times 2^{\circ}$ area. The scanning
strategy adopted for Hi-GAL is fully described in Molinari et al. (2010a).
The superposition is performed in order to obtain suitable data redundancy
and to better sample instrumental effects like the high-frequency
detector response (Molinari et al., 2010a).
The acquisition rate for the parallel mode is 40 Hz for PACS and 10 Hz for
SPIRE, although the PACS data are averaged on-board for an effective rate of 5
Hz and 10 Hz for the $70$$\mu$m and $160$$\mu$m array respectively. The
implications of the PACS data compression are detailed in Section 5.3.
An example of the scanning strategy of the Hi-GAL survey is shown in Figure 1.
The coverage map of the PACS blue array is shown in the left panel; in the
right panel we highlight the superposition of one scan leg with the following
ones, by enlarging the bottom right corner of the left image. Two calibration
blocks for each AOR, during which the instrument observes two internal
calibration sources located on the focal plane, were provided during the SDP
observations. They appear as higher-than-average coverage areas and are marked by
black and green circles for the 2 AORs in Figure 1. Higher coverage zones are
also clearly visible in the slewing region at the end of each scan leg, where
the telescope decelerates and then accelerates before initiating the next scan
leg.
Figure 1: Left: the coverage map of the PACS blue array. Right: a zoom of the
bottom right corner, where the effect of the superposition from one scan leg to
the next is clearly visible. Black and green circles highlight the calibration
blocks (two per AOR during the PV and SDP phases), during which the internal
calibration sources are observed.
## 3 Map making pre-processing
The aim of the pre-processing is to prepare Hi-GAL data for mapmaking.
While map making is performed using a parallel Fortran code borrowed from
cosmological observations (see Section 4), the preprocessing is done through a
series of IDL and Jython tools run on the data one after the other.
After having tried to map Hi-GAL data using the standard tools provided within
HIPE, we decided to develop our own routines that we tailored specifically for
the reduction of data affected by bright and irregular background, as in the
Galactic plane. In fact, high-pass filtering used in HIPE to cure the long-
time drift also removes a large part of the diffuse Galactic emission.
Furthermore, the standard deglitching embedded in HIPE, the MMT
(Multiresolution Median Transform, Starck et al., 1995), generates false
detections at the very bright sources when we apply this task to the PACS data.
On the other hand, the wavelet-based deglitching procedure used by SPIRE does
not harm the data, thanks also to the lower spatial resolution compared to the
PACS one. We therefore use the HIPE task for SPIRE data only.
Herschel data are stored in subsequent snapshots of the sky acquired by the
entire detector array, called frames. In a first processing step, the standard
modules within HIPE are used to generate Level 1 data for both PACS (except
the deglitching task) and SPIRE. Thus, the data are rearranged into one time
series per detector pixel, called Time Ordered Data (TOD), in which the
calibrated flux (Jy beam$^{-1}$ for SPIRE and Jy sr$^{-1}$ for PACS) and the celestial
coordinates are included. At the end of this step, TODs are exported outside
HIPE in FITS format. In the subsequent processing steps, TODs are managed by a
series of IDL tools, in order to produce final TODs free of systematic effects
due to the electronics and of glitches and corrupted chunks of data due to cosmic
rays. To each TOD a flag file is attached to keep track of any flagging done
during the data reduction steps.
Preprocessing includes identification of corrupted TODs (or part of them),
drift removal and deglitching. The subsequent steps are the Noise
Constrained Realization (NCR) and the ROMAGAL mapmaking; both will be
described in detail in the next Sections. A summary of the entire pipeline
is shown in the diagram in Figure 2.
Figure 2: Schematic representation of the Hi-GAL data reduction pipeline
### 3.1 Corrupted TODs
A TOD can be partially corrupted by the random hits of charged particles
(cosmic rays), which produce strong and spiky signal variations called
glitches. Two different effects on the TODs can be identified. In the most
common case the glitch corrupts a single sample or a few consecutive samples,
generating spiky changes along the TOD; in Section 3.3 we describe how to
detect and mask these data for PACS, as well as how to mask any possible residual
glitches for SPIRE. Very powerful glitches, on the other hand, make the
detector signal unstable for a considerable interval of time (see, e.g.,
Billot et al., 2010). This effect depends on the glitch amplitude and on the
bolometer time response. These events affect a much larger part of the TOD,
which cannot be used for mapmaking.
Their impact results in a bias on the bolometer timeline with a low-frequency
drift which can involve a considerable number of samples, as shown in Figure
3.
In that Figure, blue crosses represent the observed timeline of one bolometer
of the blue array.
Automatic identification of the (partially) corrupted TODs exploits the first
derivative of the TOD to detect extraordinary “jumps” in the signal. In order
to determine what portion of the TOD is to be flagged, the flagging tool
exploits the fact that the response of a detector pixel that has been hit by a
cosmic ray is mostly exponential. Data samples ranging from the jump to the
time sample at which an exponential fit recovers 98% of the signal level measured
before the event are identified as bad data and stored as such in the corresponding
flag file. If the exponential fit does not reach 98% of the starting
value before the end of the TOD, then all data from the hit onwards are
flagged, as is the case in Figure 3. This procedure is applied both in the
case of a change in responsivity and in the case of a detector offset alteration;
in the latter we fit with an exponent equal to 0. The described
procedure was adopted to process both PACS and SPIRE data.
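As an illustration of this step, the following Python sketch flags the exponential tail after a detected jump. It is a minimal sketch, assuming a single bolometer timeline, a nominal 10 Hz sampling rate and a hypothetical helper name; the actual pipeline is implemented in IDL.

```python
import numpy as np
from scipy.optimize import curve_fit

def flag_glitch_tail(tod, hit, recovery=0.98, fs=10.0):
    """Flag samples from a cosmic-ray hit at index `hit` until an exponential
    fit has recovered `recovery` (98%) of the pre-hit signal level."""
    baseline = np.median(tod[max(0, hit - 100):hit])    # signal level before the event
    t = np.arange(tod.size - hit) / fs                   # time after the hit [s]

    def model(t, amp, tau):                              # exponential relaxation to the baseline
        return baseline + amp * np.exp(-t / tau)

    try:
        (amp, tau), _ = curve_fit(model, t, tod[hit:], p0=(tod[hit] - baseline, 10.0))
    except RuntimeError:
        return np.arange(hit, tod.size)                  # fit failed: flag to the end of the TOD

    # number of samples needed for the exponential term to decay by `recovery`
    n_rec = int(np.ceil(-tau * np.log(1.0 - recovery) * fs))
    return np.arange(hit, min(hit + max(n_rec, 1), tod.size))
```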
Figure 3: Timeline of a PACS blue bolometer. The exponential decay illustrates
the change in responsivity after frame 40000 due to the impact of a powerful
glitch.
### 3.2 Drift removal
After having identified corrupted data we proceed to the elimination of
changes in responsivity over time. The procedures are in principle identical
for PACS and SPIRE data; the only differences arise from the different
composition of the detector arrays and from the data acquisition schemes of the
two instruments.
The signal in PACS TODs that exit HIPE does not represent the true sky but is
dominated by the telescope background and the (unknown) zero level of the
electronics. The electronics further introduce significant pixel-to-pixel
offsets. For each TOD, we mitigate the effect of pixel-to-pixel offset by
calculating and then subtracting the median level for each pixel from each
readout. This ensures that all pixels have a median level equal to 0. The median
value is preferred over the mean because the median is a much better
representation of the sky+telescope background flux, and is much less
sensitive to signal from astrophysical sources.
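In array form this step is a one-line operation; the sketch below (Python/NumPy, whereas the actual tools are written in IDL) assumes the TODs of one array are stored as an (n_pixels, n_frames) matrix and uses a hypothetical function name.

```python
import numpy as np

def subtract_pixel_medians(tods):
    """Remove pixel-to-pixel offsets: set every pixel timeline to zero median."""
    return tods - np.median(tods, axis=1, keepdims=True)
```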
Also, the subtraction of this offset from each TOD does not alter the signal
in the final map; it only introduces a global offset, constant over the
entire area covered by the observation. However, it should be kept in mind that
bolometers are inherently differential detectors which bear no knowledge of an
absolute signal value; besides, any optimized map making method like the one
we employ (see Section 4) produces maps with an unknown offset value which
needs to be independently calibrated. So it is important to reduce all the
bolometers to the same median value, regardless of its amount. The pixel-to-
pixel median subtraction has the effect seen in Figure 4. Diffuse emission and
compact sources are clearly visible in the frame.
Figure 4: Blue PACS frame after the median subtraction on each pixel. Diffuse
emission and compact sources are visible in the frame.
Still, when plotting a detector pixel timeline we see that the signal
decreases systematically from the start to the end of the observation. This
trend is likely due to a combination of small changes in the thermal bath of
the array and to small drifts in the electronics. The former affects the
entire array, while the latter affects subunits of the detector array (in
fact, PACS blue is divided into 8, and PACS red into 2, electronically
independent units; Poglitsch et al., 2008). The drift is then a combination
of these two effects: a drift of the entire array and a drift of each single
subunit. These effects are dominant with respect to the $1/f$ noise pattern,
which will be described in the next Sections.
Figure 5: Median behavior computed on the whole array for each frame. The
slow-drift behavior is due to the electronics and the thermal bath.
It is in principle not a trivial task to decide which drift has to be
subtracted first: the drift from the thermal bath (affecting the entire
detector array) or the drift from the readout electronics (affecting sub-arrays
differently). Ideally both should be subtracted, if only it were
possible to separate each component accurately, as the net effect in the data
is the sum of both.
Our methodology for removing the correlated signal drifts (on both the
bolometer module/unit level and the array level) is based on tracing the low
signal envelope of the unit or array median levels. In Figure 5, this envelope
is the curve defined by the lowest signal values, and it is estimated as follows:
* i
We compute the median value of the entire bolometer array/unit for each
bolometer readout. Figure 5 shows one example for the entire array.
* ii
The median values thus obtained are segmented and grouped by scan legs. Each
scan leg is composed of $\sim 1000$ frames and we observed 54 scan legs for
each $2^{\circ}\times 2^{\circ}$ Hi-GAL field.
* iii
For each scan leg we compute the minimum value of the array/unit medians.
* iv
The resulting set of minimum median values for all scan legs is fit with a
polynomial (a sketch of this procedure is given below).
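The following Python/NumPy sketch illustrates steps i-iv for one array or subunit; it is a simplified stand-in (the actual tools are IDL routines), and the function and variable names are hypothetical.

```python
import numpy as np

def estimate_drift(tods, n_legs=54, deg=2):
    """Minimum-median drift estimate for one array/unit.
    `tods`: (n_pixels, n_frames) array of timelines."""
    n_frames = tods.shape[1]
    med = np.median(tods, axis=0)                         # (i) median over the array/unit per readout
    legs = np.array_split(np.arange(n_frames), n_legs)    # (ii) group the frames by scan leg
    t_min = np.array([leg[np.argmin(med[leg])] for leg in legs])  # (iii) minimum median per leg
    coeffs = np.polyfit(t_min, med[t_min], deg)           # (iv) polynomial fit through the minima
    return np.polyval(coeffs, np.arange(n_frames))        # drift at the full time resolution

# the estimated drift is then subtracted from every pixel of the unit:
# tods -= estimate_drift(tods)
```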
The median value for each array/unit readout is chosen because it is closest
to the actual sky+telescope background. However, as clearly seen in Figure 5,
in the presence of strong astrophysical sources the median value is incorrect
for our purposes. The strong sources appear as signal spikes in Figure 5.
Hence, we take the next step of finding the minimum value from the set of
medians belonging to a single scan leg. The idea is that at some point during
the scan leg the median was indeed a true representation of the local
sky+telescope and relatively free of source emission. This step allows us to
reject the sources at the expense of degrading our time-resolution to scan-leg
duration ($\sim 240$ sec). The polynomial fit allows us to estimate the drift
behavior at the same time resolution as the PACS signal (5 Hz and 10 Hz for the
70$\mu$m and 160$\mu$m bands, respectively). We further note that the correlated
signal drift is relatively flat over a single scan leg; hence, the minimum
value is not significantly affected by the presence of the monotonic signal
drift itself in the first place.
The minimum median method discussed above removes background levels from
spatial emission structures that are of the order of, or larger than, the scan
leg length, yet preserves the spatial structures smaller than a scan leg. In essence,
information about the absolute calibration zero-point (Miville-Deschênes &
Lagache, 2005) is lost, but all spatial structures within our map boundaries
are preserved.
In Figure 6 we report the minimum median values of each subarray. The common
downward trend is due to the (common) thermal bath drift, while the different
readout electronics are responsible for the differences in the subarray
slopes.
We therefore decide to subtract the subarray drifts in order to consider both
the thermal bath and the readout electronics behaviors, but separately for
each subunit.
Figure 6: Interpolation of the minima of the median evaluated on every scan
leg. Each curve refers to a PACS subarray. The curves follow the same overall
behavior, with different slopes due to the different subarray electronics.
Once the individual subarray drift is removed, the remaining dispersion on the
whole array is only a residual scatter due to the intrinsic efficiency of the
removal tool, as shown in Figure 7.
Figure 7: Minima of the median evaluated on the whole PACS blue array after
the subarray drift subtraction. The dispersion is due to the intrinsic
efficiency of the drift subtraction tool. There is no residual trend and
the scatter is one order of magnitude below the value of the initial drift.
SPIRE array detectors are not divided into subarrays, so every procedure that
has to be run 8 or 2 times for PACS data is only performed once per SPIRE
band. SPIRE uses blind bolometers (5 in total) as thermistors to evaluate the
most relevant correlated noise component: the bath temperature fluctuations. A
standard pipeline module uses this information to perform an effective removal
of the common drift present along the scan observation. HIPE also corrects for
the delay between the detector data and the telescope position along the scan,
using an electrical low-pass filter response correction. But despite these
initial (and very effective) corrections, we apply the drift removal tool to
SPIRE data in the same way as for PACS data: we fit a polynomial to the
minima of the medians of each scan leg (calculated over the entire detector
array), which we then subtract from all TODs. Experience with the data shows that a
residual long-term drift is often present in SPIRE data.
Finally, when removing drifts it is important to know how the observational
scans are oriented. In fact, as the Galactic plane is very bright, scans
across the plane will give rise to an increase of signal, on top of the
general drift. On the other hand, when the scan is almost parallel to the
plane of the Galaxy, the signal can be dominated by its bright emission, which
also affects the evaluation of the minima of the medians.
In this case, the curve to fit is estimated only from the scan legs where the
median is not affected by the signal.
Since the procedure is not automatic, care has to be used when choosing which
polynomial to fit and subtract from the data, in order not to eliminate
genuine signal from the sky. From our experience the best choice is a first or
a second degree polynomial, depending on the signal behavior observed.
A higher polynomial degree can be necessary when part of the drift has to be
extrapolated in order to avoid signal contamination.
An example for one subarray is shown in Figure 8.
Figure 8: Blue curves: interpolation of the minima of the median for one
subarray of PACS blue channel when the scan direction is almost parallel to
the Galactic plane. Red line: the fit that we choose to evaluate the drift
without considering the minima affected by the Galactic emission (central
bump).
Each scan leg is larger than the overlapping region of the two scan directions
for the scanning strategy adopted by Hi-GAL (see Figure 1). Since the Hi-GAL
fields are square regions, the slowly-traversed direction of the AOR within
the overlapping region has a length comparable with the scan leg. Thus, we
assume that even if there is a signal gradient along the slowly-traversed
direction of the AOR, it is not filtered out by the array median subtraction.
### 3.3 Deglitching
To remove outliers in the time series of the bolometers we exploit the spatial
redundancy provided by the telescope movement which ensures that each sky
pixel of the final map is observed with different bolometers. Outlier
detection is done with the standard sigma-clipping algorithm: given a sample
of $N$ values, first estimates for the mean and for the standard deviation
are derived; then all the values that differ from the mean by more than $n$
standard deviations are considered outliers and removed.
For this algorithm the choice of $n$, the parameter that defines the threshold
above which a value is considered an outlier, is usually arbitrary: a certain
$n$ is chosen, very often equal to 3, without any statistical justification.
Recently Pezzuto (2010) has derived a formula that, starting from the
properties of the error function for a Gaussian distribution and exploiting
the discreteness of the number of available measurements, relates $n$ to the size
of the sample. The formula is
$n=-0.569+\sqrt{-0.072+4.99\log(N)}$ (1)
As a consequence, in the central region of the map, where the coverage (and so
$N$) is high, the number of standard deviations $n$ is larger than in the outskirts
of the map where the coverage is low. For instance, if a sky pixel has been
observed with 40 bolometers, the above formula gives $n=2.25$; so, once we
have estimated the mean $m$ and the standard deviation $\sigma$, all the
values $x_{i}$ such that $|x_{i}-m|>2.25\sigma$ are flagged as outliers. If
a pixel has been observed with 20 bolometers the threshold lowers to
1.96$\sigma$.
This procedure is automatically iterated until outliers are no longer found.
However, the procedure converges within 1 iteration in $\sim$98% of the cases
in which we have applied the analysis.
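A minimal Python sketch of this coverage-dependent sigma clipping for the samples falling into a single sky pixel follows; the function name is hypothetical, and the logarithm in Eq. (1) is assumed to be in base 10, consistent with the worked values quoted above ($n\simeq 2.25$ for 40 samples, $n\simeq 1.96$ for 20).

```python
import numpy as np

def sigma_clip_pixel(values):
    """Iteratively flag outliers among the samples of one sky pixel using the
    coverage-dependent threshold of Eq. (1). Returns a boolean glitch mask."""
    good = np.ones(values.size, dtype=bool)
    while good.sum() >= 3:
        n_good = good.sum()
        n = -0.569 + np.sqrt(-0.072 + 4.99 * np.log10(n_good))  # Eq. (1), base-10 log
        m, sigma = values[good].mean(), values[good].std()
        outliers = good & (np.abs(values - m) > n * sigma)
        if not outliers.any():          # converged: no new outliers found
            break
        good[outliers] = False
    return ~good                        # True where the sample is flagged

# e.g. a pixel observed by 40 bolometers uses n ~ 2.25, one observed by 20 uses n ~ 1.96
```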
Outlier detection is done in this way for both instruments; however, for
SPIRE, as explained before, we also make use of the standard wavelet-based
deglitching algorithm implemented in the official pipeline. Since we found
some weak glitches left in the SPIRE TODs, we decided to run our
deglitching algorithm on SPIRE data as well.
The number of glitches found is on average about 15%, a value which is
likely larger than the real percentage. For PACS we are now working on a
different way to associate each bolometer with the sky pixels, taking into
account the finite size of the sky pixels. For the first test cases we ran,
the percentage of detected glitches is around 5-6%.
## 4 The ROMAGAL Map Making algorithm
The ROMAGAL algorithm is based on a Generalized Least Squares (GLS) approach
(Lupton, 1993). Since the TOD is a linear combination of signal and noise, we
can model our dataset $\textbf{d}_{k}$ for each detector $k$ as (Wright,
1996):
$\textbf{d}_{k}=P\textbf{m}+\textbf{n}_{k}$ (2)
where $P$ is the pointing matrix, which associates to every sample of the
timeline a direction in the sky, m is our map estimator of the “true” sky and
$\textbf{n}_{k}$ is the noise vector.
The observed sky, $P\textbf{m}$, is the map estimator of the “true sky”
convolved with the instrumental transfer function and the optical beam.
However, in the case of a circularly symmetric beam profile, $\mathbf{m}$ is a
beam-smeared, pixelised image of the sky.
In this case the pointing matrix has only one non-zero entry per row,
corresponding to the sky pixel observed at a given time. Since the beam
profiles for PACS (Poglitsch et al., 2008) and SPIRE (Griffin et al., 2008)
are only weakly asymmetric we can put ourselves in this simple case. Note that
the transpose of the $P$ operator performs a data binning (without averaging)
into the sky pixels.
Equation 2 holds only if the noise vector of each detector $\textbf{n}_{k}$ is
composed of statistical random noise, with Gaussian distribution and null
average. All the relevant systematic effects (offset, glitches) have then to
be removed with an accurate data preprocessing before map production, as
explained in Section 3.
The formalism can be easily extended to the case of a multidetector analysis:
the vector d then contains the data of every detector, and one has to take
care to update the noise vector n accordingly, with the correct noise
properties for each detector.
The GLS algorithm produces minimum noise variance sky maps. Noise properties
for each detector have to be previously estimated and provided in input to the
algorithm as described in Section 4.1.
The GLS estimate for the sky, $\tilde{\textbf{m}}$, is (Natoli et al., 2001)
$\tilde{\textbf{m}}=(P^{T}\textbf{N}^{-1}P)^{-1}P^{T}\textbf{N}^{-1}\textbf{d}$
(3)
where $\textbf{N}=\langle\textbf{nn}^{T}\rangle$ is the noise covariance
matrix, which takes into account noise time correlation between different
samples. Such correlation is particularly high at low frequencies because of
the $1/f$ (or long memory) noise. In case of uncorrelated noise (or white
noise) the N matrix becomes diagonal and the problem is greatly simplified. If
we further assume stationary uncorrelated noise, Equation 3 reduces to:
$\tilde{\textbf{m}}=(P^{T}P)^{-1}P^{T}\textbf{d}.$ (4)
$P^{T}P$ is a diagonal matrix whose entries count the observations of each pixel
of the map, so we are simply averaging the different TOD values falling into each
pixel, assigning the same weight to every sample. We will refer to this map
estimate as “naive” or “binned” in the following.
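For concreteness, the binned estimate of Equation 4 can be written in a few lines of Python/NumPy (a sketch with hypothetical names; the real code operates on the full multidetector TODs):

```python
import numpy as np

def naive_map(tod, pix, n_pix):
    """Binned ("naive") map of Equation 4: average all samples falling in each pixel.
    `pix[i]` is the sky-pixel index observed by sample `tod[i]`."""
    hits = np.bincount(pix, minlength=n_pix)                # P^T P: per-pixel coverage
    sums = np.bincount(pix, weights=tod, minlength=n_pix)   # P^T d: per-pixel signal sum
    return np.where(hits > 0, sums / np.maximum(hits, 1), np.nan)
```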
When non negligible noise correlation is present, as in the case of PACS
(Poglitsch et al., 2008) and SPIRE (Schulz et al., 2008), Equation 3 must be
solved. This is a challenging computational task since it requires, in
principle, the inversion of the large matrix $P^{T}\textbf{N}^{-1}P$ (of order
the number of pixels in the map), whose inverse is the covariance matrix of the
GLS estimator (Lupton, 1993). One key simplifying assumption is to invoke that
the noise is stationary. In this case, the N matrix has a Toeplitz form which
can be approximately treated as circulant, ignoring boundary effects (Natoli
et al., 2001). A circulant matrix is diagonal in Fourier space and its inverse
is also circulant, so the product between $\textbf{N}^{-1}$ and a vector is a
convolution between the same vector and a filter provided by any of the rows
of the matrix. In the following we will refer to any of these rows as a noise
filter. Its Fourier transform is the inverse of the noise frequency power
spectrum.
Considering the conditions listed above, the GLS map making algorithm performs
the following operations, starting with rewriting the Equation 3 in the form
$(P^{T}\textbf{N}^{-1}P)\textbf{m}_{0}-P^{T}\textbf{N}^{-1}\textbf{d}=\textbf{r}$
(5)
where $\textbf{m}_{0}$ is the starting map used at the first iteration,
generally the naive map.
The first term is evaluated in three steps: the product $P\textbf{m}_{0}$ unrolls
the map into a timeline; the application of $\textbf{N}^{-1}$ is a convolution,
which can be performed in Fourier space; the application of $P^{T}$ projects the
convolved timeline back into a map.
The second term performs the convolution with the filter (applying
$\textbf{N}^{-1}$ to the data vector d in Fourier space) and then the
projection of the convolved timeline into a map (applying $P^{T}$ to the
product $\textbf{N}^{-1}\textbf{d}$).
Then, we need to evaluate the residual r. If the residual is higher than a
fixed threshold, it is used to produce a new map, $\textbf{m}_{1}$, as
described in Hestenes & Stiefel (1952). This map replaces $\textbf{m}_{0}$ in a
new evaluation of Equation 5, and so on until convergence. This is achieved by
running a Conjugate Gradient algorithm, an iterative method for the numerical
solution of a linear system (Hestenes & Stiefel, 1952), until the residual
falls below the threshold.
The algorithm outlined above is the same “unroll, convolve and bin” scheme
described in Natoli et al. (2001) and is implemented in the ROMAGAL code. The next section
explains the strategy employed to estimate the noise filters used by ROMAGAL
directly from in-flight data.
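The structure of the solver can be summarized by the following single-detector Python sketch (an illustrative stand-in with hypothetical names; the actual ROMAGAL code is a parallel Fortran 95 implementation that also handles multiple detectors, flags and gap filling):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gls_map(tod, pix, n_pix, inv_noise_spectrum):
    """Single-detector sketch of the GLS solver: the operator P^T N^{-1} P is
    applied by unrolling a map into a timeline (P), convolving it with the
    noise filter in Fourier space (N^{-1}), and binning it back (P^T).
    `inv_noise_spectrum` samples the inverse noise power spectrum on the
    rfft frequencies of the timeline."""
    def apply_Ninv(x):
        return np.fft.irfft(np.fft.rfft(x) * inv_noise_spectrum, n=x.size)

    def apply_A(m):
        return np.bincount(pix, weights=apply_Ninv(m[pix]), minlength=n_pix)

    A = LinearOperator((n_pix, n_pix), matvec=apply_A, dtype=float)
    b = np.bincount(pix, weights=apply_Ninv(tod), minlength=n_pix)
    m, info = cg(A, b, maxiter=500)       # conjugate gradient iterations
    return m
```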
### 4.1 Noise estimation
In order to estimate the noise filters for ROMAGAL we need to investigate the
noise statistical properties in the timelines. Data are mostly affected by two
kinds of statistical noise: $1/f$ noise, due both to the electronics and to the
thermal background radiation from the telescope or the instruments, and photon
noise (see Poglitsch et al. 2008; Schulz et al. 2008).
The detector $1/f$ noise arises in the electronic chain and it impacts
particularly regions with low signal-to-noise ratio (SNR), where only diffuse
emission is present. In those regions it can be of the same order of magnitude
as the signal or even higher. In these cases the GLS treatment is particularly
effective.
Photon noise is due to statistical fluctuations in the photon counts. This
process follows Poissonian statistics, so the SNR is proportional to the square
root of the counts. Since the Poisson distribution tends to a Gaussian for large
numbers, we can approximate photon noise as Gaussian on the map if the number
of counts is large enough.
Since bolometers are organized in matrices and sub-matrices, the signal of a
bolometer can be correlated with the signal of another, generally adjacent,
bolometer. These effects can be both statistical and deterministic. We
already described how to remove the deterministic common mode from the TODs (like
the thermal bath variations, see Section 3).
One possible source of statistical cross-correlated noise is the crosstalk
between bolometers: the signal of one pixel may contaminate the signal of its
neighbors through capacitive or inductive coupling, generating a common mode
called “electrical crosstalk”. In contrast, “optical crosstalk” is due to
diffraction or aberrations in the optical system, which can cause an
astronomical source to fall on the wrong detectors (Griffin et al., 2008).
We then analyze the residual contribution of the statistical component of the
correlated noise. We found that the residual correlated noise level in each
pixel is negligible with respect to the intrinsic detector noise level for
both PACS and SPIRE instruments, as described in the following.
In principle, the noise properties vary significantly across the array, and we
had to estimate the noise power spectrum for each bolometer. To do that we
have processed “blank sky” data (i.e. with negligible contribution from
sky signal) acquired during the PV Phase.
In Figure 9 we show a typical noise spectrum estimated for a pixel of the
$160\mu$m PACS band (black) and the cross-spectrum between two adjacent
bolometers (red). The cross-spectrum evaluates the impact of the cross-
correlated noise in the frequency domain between two different bolometers. The
level of the cross-correlated noise is at least 4 orders of magnitude below the
auto-correlated noise power spectrum of each pixel. Note that this means we do
not see any relevant cross-correlated noise, despite the fact that crosstalk
can be present in the timelines.
In Figure 10 we show noise power spectra of the 250$\mu$m SPIRE band
bolometers. Also in this case the cross-spectrum is negligible.
Noise spectra of both PACS and SPIRE display low-frequency noise excess
($1/f$). In the case of the SPIRE spectra (Figure 10) a high-frequency rise is also
evident, which is due to the deconvolution of the bolometric response
function. PACS spectra do not show this behavior because the bolometer
transfer function is not deconvolved by the standard pipeline.
Figure 9: Black line: typical noise spectrum of a PACS $160$$\mu$m detector,
estimated on blank sky data. Red line: cross spectrum between two detectors of
the same subarray. The level of the cross-correlated noise is significantly
below the noise level of each single bolometer, so we can reasonably neglect it.
Figure 10: Same as Figure 9 for a SPIRE $250$$\mu$m bolometer. For SPIRE as well,
the cross-spectrum level is negligible with respect to the auto-spectrum level.
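A simple way to obtain such estimates from the blank-sky timelines is via Welch periodograms; the sketch below uses SciPy routines (a sketch with assumed function names and segment length, not the actual pipeline code):

```python
import numpy as np
from scipy.signal import welch, csd

def noise_spectra(tod_a, tod_b, fs):
    """Auto power spectrum of one bolometer and cross-spectrum of two adjacent
    bolometers, estimated from blank-sky timelines sampled at `fs` Hz."""
    f, p_auto = welch(tod_a, fs=fs, nperseg=4096)
    _, p_cross = csd(tod_a, tod_b, fs=fs, nperseg=4096)
    return f, p_auto, np.abs(p_cross)

# the noise filter used by the map maker is the inverse of the (auto) power spectrum
```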
### 4.2 From ROMA to ROMAGAL
The ROMAGAL GLS code has been optimized to recover the features in the Hi-GAL
fields with high accuracy.
Hi-GAL observes the Galactic plane, where the dynamic range of sources spans
several orders of magnitude. This poses strong constraints on the map
making algorithm: both the weak diffuse emission and the bright compact
sources in, e.g., star forming regions have to be recovered with high accuracy.
The signal often exhibits steep gradients that are hard for the GLS solver to
follow, since the solver relies on the assumption that the sky signal does not
vary significantly within a resolution element (see Section 5.3 below). At the same time,
several systematics affect the dataset. As explained above, many of them are
cured at the preprocessing level. However, their removal generates a
conspicuous amount of transient flagging, that must be correctly handled by
the GLS code.
The core of ROMAGAL is the same as that of the ROMA code (de Gasperis et al., 2005),
whose input-output routines have been deeply modified to adapt to the HIPE-generated
dataset. The ROMAGAL inputs are the TODs generated by HIPE, the pointing and the
transient flagging; these have the same format for both PACS and SPIRE.
The ROMAGAL outputs are FITS files containing the optimal map in a standard
gnomonic projection. The code is written in Fortran 95 and relies on
the MPI library for parallelization. It runs on the Hi-GAL dedicated machine,
called BLADE, a cluster of 104 processors at 2.5 GHz each with 208 GB of RAM
in total; its nodes are interconnected with InfiniBand. The machine is
located at IFSI-Rome.
As explained in the previous Section, the computation of Equation 3 is
unfeasible due to the size of the map’s covariance matrix. However, we assume
the noise of each Hi-GAL field to be stationary to set up an FFT based solver
built upon a conjugate gradient iterative algorithm (see Section 4). Such a
scheme can estimate the final maps with a precision of order
$\epsilon=10^{-8}$ in $\sim$150 iterations for Hi-GAL. The ROMAGAL computational
time scales linearly with the size of the dataset and only weakly with the
number of pixels in the output maps. The scaling with the number of processors
is highly optimal in the range of cores required for the Hi-GAL analysis
($<50$). For the largest channel (the PACS blue band), a final GLS map of a
$2^{\circ}\times 2^{\circ}$ field requires about 16 GB of RAM and 1400 sec
on 8 cores. Due to the high number of array pixels (2048), this channel is the
largest dataset to analyze as well as the most demanding in terms of
computational resources. Further information on resource consumption can be
found in Table 1.
Band | Total Time (sec) | RAM (GB)
---|---|---
$70\mu$m | $\sim$ 1400 | 16
$160\mu$m | $\sim$ 1000 | 8
$250\mu$m | $\sim$ 180 | 4
$350\mu$m | $\sim$ 130 | 1
$500\mu$m | $\sim$ 100 | 1
Table 1: Time and minimum amount of RAM required by ROMAGAL for each PACS and
SPIRE band, using 8 BLADE processors.
### 4.3 Optimal treatment of transient flagging
As mentioned above, the timelines are inspected for bad data samples that must
be excluded from map making as part of the preprocessing pipeline. Bad data
can arise for a variety of reasons. They are generally caused by transient
events, either unexpected (e.g., glitches, anomalous hardware performance) or
expected (e.g., detectors saturating because of a bright source, observation
of a calibrator). Once identified, a flag record is generated and stored for
these anomalous events, so that their contribution can be safely excluded from
map making. Flags in the TOD pose a potential problem for ROMAGAL because its
solver is based on the FFT as discussed in the previous section. The FFT
requires the timeline to be continuous and uniformly sampled. Since noise in
the PACS and SPIRE data is correlated, just excising the flagged samples to
fictitiously create a continuous timeline would interfere with the noise
deconvolution, and is thus not a safe option. Instead, we advocate using a
suitable gap-filling technique. The rest of this section is mostly devoted to
defining which, among the various options for gap filling, is best suited for
the Hi-GAL maps.
It is important to realize that the output map will depend on the content of
the flagged sections of the TOD even if these values are not associated with any
(real) map pixel. This is due to the convolution performed within the solver
that “drags” out data from the flagged section of the timeline, even if, when
the data are summed back into a map, the $P$ operator is applied only to the
unflagged samples. Since one is not in control of the content of the flagged
section of the timeline, a kind of gap-filling must be performed in any case.
We have tested different recipes for gap-filling, making extensive use of
simulations. We have treated the signal and noise components of the timelines
separately, running noise-dominated and signal-dominated cases, because the
two components behave differently in the presence of flags, as will be shown
below.
The simplest form of gap filling is to remove the content of the flags
altogether, replacing the flagged sections with a nominal constant (usually
null) value. This works very well on a signal-only simulation of the Hi-GAL
field. However, it fails dramatically when noise is present, as evident from
Figure 11 (left panel), where a markedly striped pattern in the reconstructed
map is seen (in this simulation, the Hi-GAL noise has been amplified by a
factor 100 to make its pattern more evident). The reason for this failure is
readily understood: The GLS map making employed requires noise stationarity
(see Section 4.2 above), which is obviously not preserved by zeroing the gaps.
A less obvious effect is that even if gaps are filled with a noise realization
with the correct statistical properties, but unconstrained, the GLS map making
is bound to fail as well, as shown in the middle panel of Figure 11. A noise
realization is said to be constrained when it respects certain boundary
conditions (Hoffman & Ribak, 1991), which in our case are represented by the
noise behavior outside the gap. Unconstrained noise inside the gap, despite
having the correct statistical properties, creates a border discontinuity that
causes the GLS map maker to behave sub-optimally (Stompor et al., 2002). We
have employed a code to create constrained Gaussian noise realizations (NCR),
based on the algorithm set forth by Hoffman & Ribak (1991). The code takes as
input the noise pattern and statistical properties, as measured from the
timelines. The results on noisy simulations are excellent, as shown by the
third (rightmost) panel in Figure 11. Note, however, that Figure 11 refers to
a noise dominated simulation. We now turn to discuss the effect of a
non-negligible signal contribution (a far more realistic case for Hi-GAL).
Figure 11: Shown are the results obtained in a noise dominated regime (normal
Hi-GAL noise is amplified by a factor 100). Left panel is the map obtained
replacing flagged data samples with null values (clearly it does not work),
middle panel is the map obtained replacing data samples with unconstrained
noise realization (does not work either), right panel shows the map obtained
using our NCR code (does work).
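As an illustration of the constrained realization step, the sketch below fills a flagged gap in a one-dimensional stationary timeline following the Hoffman & Ribak (1991) recipe. The exponential covariance model and all names are illustrative assumptions; the actual pipeline measures the noise statistics from the timelines.

```python
import numpy as np

# Toy constrained Gaussian gap filling in the spirit of Hoffman & Ribak (1991):
# x_c = x_r + C_bg C_gg^{-1} (d_g - x_r,g), where g = good (unflagged) samples
# and b = flagged samples. Brute-force covariances; fine only for short streams.

def covariance(lags, sigma=1.0, corr_len=20.0):
    # assumed exponential covariance model for the stationary noise
    return sigma**2 * np.exp(-np.abs(lags) / corr_len)

def constrained_gap_fill(tod, flagged, seed=0):
    n = tod.size
    idx = np.arange(n)
    C = covariance(idx[:, None] - idx[None, :])
    good, bad = ~flagged, flagged

    # unconstrained realization with the same covariance
    rng = np.random.default_rng(seed)
    x_r = rng.multivariate_normal(np.zeros(n), C)

    # mean-field correction driven by the residual at the constrained samples
    Cgg_inv = np.linalg.inv(C[np.ix_(good, good)])
    corr = C[np.ix_(bad, good)] @ Cgg_inv @ (tod[good] - x_r[good])

    filled = tod.copy()
    filled[bad] = x_r[bad] + corr
    return filled
```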
We have verified that the presence of non-negligible signal in the timelines
does not affect the results, provided that the NCR is performed using the
underlying noise field as a baseline. Measuring the latter in the presence of
signal is, however, impractical. It would be significantly simpler if the NCR
could be run directly on the timelines themselves, thus constraining the
(fake) noise realization within the gap to the outside noise plus signal
(true) values. Unfortunately, this poses a problem for Hi-GAL data: large
signal jumps are present in the field and the resulting gap-filling
realizations are affected in a non-negligible manner by the boundary
conditions, at least with the present version of ROMAGAL. This behavior is
different from what happens for experiments aimed at the Cosmic Microwave
Background (see e.g. Masi et al. (2006)) where NCR codes are routinely run on
the timelines as they are. In order to find a workaround that would spare us
the inconvenience of estimating the underlying noise field to serve as a NCR
input, we have modified the flag treatment of the ROMAGAL code as explained in
the following.
The original version of ROMAGAL makes use of a single extra pixel (dubbed
“virtual pixel”) to serve as a junk bin where the contents of the gaps are sent
when applying the $P^{T}$ operator within the ROMAGAL solver. This approach,
as stated above, works excellently in the presence of both signal and noise,
irrespective of their relative amplitude, provided the NCR code assumes the
underlying noise field as a baseline to perform the realization. In order to
relax this assumption, we have modified ROMAGAL to take into account not a
single virtual pixel but an entire virtual map. In other words, we introduce a
virtual companion for each pixel of the map, and use it as a junk bin to
collect the output from the gaps that correspond to that pixel (a minimal
sketch of this accumulation is given below). The hope is to redistribute the
content of the flagged sections more evenly, preventing artifacts. This
approach yields satisfactory results when the NCR code is run on the original
(signal plus noise) timelines, as shown in Figure 12.
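The sketch below illustrates the accumulation just described: unflagged samples are summed into the real map by $P^{T}$, while each flagged sample is sent to the virtual companion of the pixel it points to. The function and variable names are assumptions, not the ROMAGAL interface.

```python
import numpy as np

# Illustrative "virtual map" flag treatment: flagged samples are binned into a
# companion map instead of a single junk pixel, so their content is spread over
# the same footprint as the real observations.

def pt_with_virtual_map(tod, pixel_index, flagged, n_pix):
    real = np.zeros(n_pix)      # P^T d restricted to unflagged samples
    virtual = np.zeros(n_pix)   # companion map collecting flagged content
    hits = np.zeros(n_pix)

    np.add.at(real, pixel_index[~flagged], tod[~flagged])
    np.add.at(hits, pixel_index[~flagged], 1)
    np.add.at(virtual, pixel_index[flagged], tod[flagged])

    naive = np.divide(real, hits, out=np.zeros(n_pix), where=hits > 0)
    return naive, virtual
```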
Figure 12: Top row: For a signal dominated case, shown are the relative
differences between the input simulated map and the output obtained with
ROMAGAL (in the “virtual map” mode) with NCR performed on the underlying noise
(left), on the original signal plus noise TOD (middle), and without NCR after
replacing the gaps with null values (right). The latter case is clearly striped, but
no signal related artifacts are present. Bottom row: For a noise dominated
case, shown again are the relative differences versus the input map obtained by
ROMAGAL with “virtual map”, assuming NCR on underlying noise only (left),
without NCR (middle) and with NCR on the signal plus noise TOD (right). As
expected, the left panel shows the best residuals, but the right one appears
as a good compromise (see also text).
To summarize our findings:
* •
Using an NCR code is always necessary in order to avoid artificial
striping effects in the map.
* •
If the NCR code is run on the underlying noise-only timelines (which are
cumbersome to estimate) we obtain the best quality output map, with no signal
related artifacts and a noise standard deviation which is lower (by a factor
$\sim 2$) with respect to the case in which no NCR is performed.
* •
Running the NCR on the original timelines is possible with the “virtual map”
approach. No signal artifacts are detected in the difference maps and the
advantage in terms of noise standard deviation with respect to the no NCR case
is still present, and of order 10% to 20% on average.
We have therefore chosen this latter approach as the baseline for the Hi-GAL
pipeline.
## 5 ROMAGAL maps
In this section we analyze the final maps obtained by running our dedicated
pipeline. We analyze the Point Spread Function (PSF) for the five bands in
order to fix the resolution of the ROMAGAL maps. We compare the final GLS map
with the naive map, point out the differences, and assess the capability of
the GLS map to recover the diffuse emission. Finally, we discuss the noise
residuals in the maps.
### 5.1 Point Spread Function and pixel size
The angular resolution (pixel size) of the final map is a free parameter. Its
choice is a compromise between competing requirements. A small pixel size
ensures a correct sampling of the PSF;
indeed, assuming a Gaussian profile for the PSF (which is reasonable, as
discussed in the following), the Nyquist theorem imposes that, to properly
sample a 2-d image, the pixel size must be at most one third of its FWHM
value.
On the other hand, a pixel size that is too small can cause a loss of redundancy,
which is useful to reduce the white noise level, and (even) leave some
non-observed pixels in the final map.
The diffraction limited beam of PACS at $70\mu$m is $5.2\arcsec$. Thus, we
should build the map with a pixel size no larger than $\sim 1.8\arcsec$. However, due
to the limited bandwidth available for transmission, especially in PACS/SPIRE
Parallel mode, PACS frames are coadded on-board the satellite before
transmission. For the $70$$\mu$m channel, a coaddition of 8 consecutive frames
is applied by the on-board software. Since the acquisition rate is 40 Hz and the
scanning speed for Hi-GAL is set to $60\arcsec/s$, two consecutive frames are
snapshots of the sky acquired $1.5\arcsec$ apart. Due to coaddition, the
satellite provides one averaged frame every $12\arcsec$; in spatial
coordinates, this is about twice the beam width of the PACS blue channel. The
measured PSF is therefore not the diffraction limited beam, but is elongated
along the scan direction due to the convolution of the coaddition window with
the beam. As shown in Lutz (2009), the observations of Vesta and
$\alpha$Tau with the blue channel yielded a FWHM of $5.86\arcsec\times
12.16\arcsec$ from a 2-d Gaussian fit, elongated in the scan direction.
PACS $160\mu$m is also affected by the on-board averaging, but only 4 frames
are coadded together. The nominal instrumental beam is $12.0\arcsec$, while
the measured one is $11.64\arcsec\times 15.65\arcsec$ (Lutz, 2009), elongated
along the scan direction. However, in this case we can sample the beam without
issues, and the effect of coaddition on the final map is negligible.
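A quick numerical check of the smearing quoted above, using only numbers from the text (the helper name is ours):

```python
# On-board coaddition smearing: sky distance covered by one averaged frame.
scan_speed = 60.0   # arcsec/s, Hi-GAL fast scan
acq_rate = 40.0     # Hz, PACS acquisition rate

def smearing(n_coadded):
    return n_coadded * scan_speed / acq_rate   # arcsec

print(smearing(8))  # 70 um: 12.0 arcsec, about twice the 5.2 arcsec beam
print(smearing(4))  # 160 um: 6.0 arcsec, half of the 12 arcsec nominal beam
```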
For the Hi-GAL fields the scanning strategy consists of two orthogonal AORs;
the redundancy therefore regularizes the PSF, resulting in an approximately
symmetric 2-d Gaussian profile, as shown in Table 2.
According to the values reported in Table 2, we observe quasi-symmetric beams
with an average ellipticity of less than $15\%$ for both the blue and red
channels, and the axes oriented randomly with respect to the scan direction.
We choose a pixel size of $3.2\arcsec$ for the PACS $70\mu$m band, which
samples the observed beam at the Nyquist frequency. Below this threshold, the
diffuse emission areas become too noisy due to the low SNR. Similarly, we can
choose a pixel size of $4.5\arcsec$ for the red band without losing redundancy.
SPIRE does not suffer from on-board coadding and the detectors were built to
reach the band diffraction limit. In-flight data show that the SPIRE beam is
well approximated by a 2-d asymmetric Gaussian with the major axis
orientation independent of the scan direction and an ellipticity no larger
than $10\%$ (see Sibthorpe et al. (2010)). We set the pixel size for each
SPIRE band equal to one third of the nominal beam. In Table 2 we also report
the beam measured in the SPIRE maps.
The average ellipticity we observe agrees with that found by Sibthorpe et al.
(2010). On the other hand, while the FWHM values for the two axes found by
Sibthorpe et al. (2010) are in agreement with the nominal ones (within the
errors), our measured beam has a FWHM larger than the nominal by $\sim 25\%$.
Band | Nominal Beam (arcsec) | Measured Beam (arcsec) | Ellipticity | Pixel Size (arcsec)
---|---|---|---|---
$70\mu$m | $5.2\times 5.2$ | $\sim 9.7\times\sim 10.7$ | 14.6% | 3.2
$160\mu$m | $12.0\times 12.0$ | $\sim 13.2\times\sim 13.9$ | 14.7% | 4.5
$250\mu$m | $18\times 18$ | $\sim 22.8\times\sim 23.9$ | 8.3% | 6.0
$350\mu$m | $24\times 24$ | $\sim 29.3\times 31.3$ | 8.8% | 8.0
$500\mu$m | $34.5\times 34.5$ | $\sim 41.1\times\sim 43.8$ | 9.7% | 11.5
Table 2: Nominal beam (2nd column), map-measured beam (two AORs, 3rd column),
ellipticity (4th column) and adopted pixel size (5th column) of each band.
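As a simple cross-check of Table 2, the SPIRE pixel sizes follow from one third of the nominal beams (values in arcsec; the PACS pixel sizes are instead driven by the measured, elongated beams):

```python
# One third of the SPIRE nominal beams reproduces the Table 2 pixel sizes.
for band, nominal_fwhm in [("250um", 18.0), ("350um", 24.0), ("500um", 34.5)]:
    print(band, round(nominal_fwhm / 3.0, 1))   # 6.0, 8.0, 11.5
```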
### 5.2 Hi-GAL SDP results
The quality of the outcome can be assessed by comparing the GLS (igls) maps
with the naive maps. In fact, the naive map is the simple average of the
signal recorded in every spatial pixel and represents “the least
compromised view of the sky”.
Since the TOD are created at the end of the preprocessing steps, when the data
are a combination of only signal and $1/f$ noise, we expect $1/f$
residuals in the naive map, whereas ROMAGAL should produce a “pure” sky map.
In Figure 13 a comparison between the naive map and the ROMAGAL map of the
$\textit{l}=30^{\circ}$ field at $70\mu$m is shown. The GLS code is capable of
removing the $1/f$ residuals without losing any signal, both on bright sources
and on diffuse emission.
Figure 13: Left: detail of the ROMAGAL map of the red PACS array,
$\textit{l}=30^{\circ}$ field. Right: detail of the naive map, same band
and field. The $1/f$ noise is evident in the naive map; its minimization in
the GLS map is equally evident, without losing the signal of the diffuse
component.
In particular, we choose three main proxies:
* •
the difference between naive and igls should show only a pattern due to the
$1/f$ noise residuals in the binned map. The pattern of this low-frequency
noise is recognizable as stripes superimposed on the sky signal in the naive
map. The stripes are the consequence of the $1/f$ noise due to the adopted
scanning strategy.
In Figure 14 we show the difference between the igls map and the naive map. The
$1/f$ noise is removed in the igls map but not in the naive one; the residual
stripes due to the low-frequency noise are clearly visible.
Figure 14: Detail of the map difference between the PACS $160\mu$m igls and
naive maps, same region as the previous Figure. The stripes due to the $1/f$
noise removed in the igls map are evident.
* •
the source fluxes should be the same in the igls and naive maps.
This item is quantified by the map difference, where the pattern is due only
to the noise without any residual signal, except across the very bright
sources. This last effect is discussed in the next Section.
* •
a statistical analysis of the background noise level should show a decrease of
the rms value in the igls map with respect to the naive map.
In Tables 3 and 4 we report the rms residuals of the PACS and SPIRE maps
respectively, calculated in a diffuse emission area of each map. Since the
flux between the maps is conserved, a decrease of the rms noise level
ensures an increase of the S/N ratio in the ROMAGAL maps (a minimal sketch of
this check is given after Table 4).
PACS $\textit{l}=30^{\circ}$ field
---
Band | rms igls (MJy/pixel) | rms naive(MJy/pixel) | ratio
$70\mu$m | 0.0085 | 0.026 | $\sim$ 3.1
$160\mu$m | 0.047 | 0.102 | $\sim$ 2.2
PACS $\textit{l}=59^{\circ}$ field
Band | rms igls (MJy/pixel) | rms naive(MJy/pixel) | ratio
$70\mu$m | 0.004545 | 0.02208 | $\sim$ 4.9
$160\mu$m | 0.01899 | 0.03586 | $\sim$ 1.9
Table 3: rms of the GLS and naive maps for both SDP observations, for the
PACS bands, measured on a background region of $50\times$50 pixels. The last
column reports the ratio between the naive and GLS rms.
The ratio between the naive and igls rms shows an improvement of a factor
$\sim 2-5$ in the PACS ROMAGAL maps, and a factor $\sim 1-2$ in the SPIRE
case. The difference is mostly due to an intrinsically different $1/f$ noise
level.
SPIRE $\textit{l}=30^{\circ}$ field
---
Band | rms igls (MJy/beam) | rms naive(MJy/beam) | ratio
$250\mu$m | 0.1749 | 0.2868 | $\sim$ 1.6
$350\mu$m | 0.1569 | 0.2302 | $\sim$ 1.5
$500\mu$m | 0.2659 | 0.4065 | $\sim$ 1.5
SPIRE $\textit{l}=59^{\circ}$ field
Band | rms igls (MJy/beam) | rms naive(MJy/beam) | ratio
$250\mu$m | 0.09857 | 0.1123 | $\sim$ 1.1
$350\mu$m | 0.0734 | 0.08164 | $\sim$ 1.1
$500\mu$m | 0.1073 | 0.2101 | $\sim$ 1.9
Table 4: rms of the GLS and naive maps for both SDP observations, for the
SPIRE bands, measured on a background region of $50\times$50 pixels. The last
column reports the ratio between the naive and GLS rms. The $1/f$ noise is
less evident in the SPIRE bolometers than in the PACS ones, but its effect is
still noticeable.
### 5.3 Consistency of data analysis assumptions
One of the assumptions of ROMAGAL, as well as of all Fourier based GLS map
making codes, is that the underlying sky signal is constant within a chosen
resolution element. If this is not the case, artifacts (stripes) will be
generated in the final map, contributing to the so called pixelization noise
(Poutanen et al., 2006). In the case of Hi-GAL the situation is complicated by
several effects:
* •
On-board coaddition of samples: each PACS 70$\mu$m (160 $\mu$m) frame is the
result of an on-board average of eight (four) consecutive frames, reducing the
effective sampling frequency of the instrument (see Section 5.1). Thus, the sky
signal is low-pass filtered by an only partially effective data-windowing,
rather than by the telescope response, leaving room for signal aliasing.
The map making code is quite sensitive to aliasing since it works in Fourier
space. The situation is worsened by the large dynamic range of the Hi-GAL
fields, especially when scanning across bright sources.
* •
The bolometer time response induces signal distortions along the scan. While
within HIPE the SPIRE detector response is deconvolved from the data (Griffin
et al., 2008), the same is not true for PACS. Redundancy in each pixel is
obtained from scans coming from different directions, thus the effect
contributes a further signal mismatch at the pixel level.
* •
Pointing error: as analyzed in detail in Garcia Lario et al. (2007), the
pointing performance of Herschel, which means the capability of assigning
coordinates to each sample in a given reference frame, can be affected by
several pointing error effects; the main contributor is the angular
separation between the desired direction and the actual instantaneous
direction, due to the position-dependent bias within the star-trackers.
The Herschel AOCS goal is that the mismatch between real and assigned
coordinates along the scan-leg is smaller than $1.2\arcsec$ at 1
sigma (Garcia Lario et al., 2007), so that a 2$\sigma$ event is already
significant compared to the PSF of the PACS blue band.
Figure 15: Zoom around a compact source in the PACS blue optimal map. The
stripes dragged along the scan directions are evident.
All of the above effects challenge the basic assumptions (no signal aliasing
and no sub-pixel effects) under which ROMAGAL works. Our simulations suggest
that signal aliasing contributes significantly more than the other two effects.
The net result on the maps is the striping of bright sources in the GLS maps.
An example is shown in Figure 15 for the PACS blue band.
It is important to notice that the stripes are present only around the point-
like sources (where, of course, the signal aliasing is more evident),
regardless of their magnitude. However, the magnitude influences the amplitude
of the stripes. For a source peak which is only 10 times higher than the rms
background, the intensity of the stripes is within $1\sigma$ of the
background dispersion. For the most intense sources, the stripe magnitude in
the pixels surrounding the PSF can be $100\sigma$ away from the
background value.
Since these stripes are produced within the GLS solver, which performs
repeated convolutions along the scan directions, but do not affect the naive
map, the obvious workaround is to implement a dedicated map making step with a
different treatment around the bright sources.
However, the detailed accounting of the above effects and the enhanced map
making strategies to address them will be the subject of a forthcoming paper
(Piazzo et al., 2011).
## 6 Summary and conclusions
This paper describes in detail all the steps of the processing of Herschel data,
from the originally downloaded frames to high-quality maps, used in the framework
of the Hi-GAL survey. Hi-GAL data are taken in fast scan mode ($60\arcsec$/s)
and simultaneously by PACS and SPIRE (Parallel mode). We test our pipeline by
reducing data from the Science Demonstration Phase and present the results,
taking as a proxy for the quality of the final images their comparison with the
naive maps.
We divided the data processing into two distinct phases: preprocessing and
mapmaking. Pre-processing aims to accurately remove systematic and random
effects from the data, in order to prepare them for the ROMAGAL map making
algorithm, which implements the minimum variance GLS approach in order to
minimize the noise in the data. It turns out that NCR is a fundamental step of
the pre-processing because ROMAGAL, as an FFT-based mapmaking code, needs
continuous and regularly sampled time series as input.
Noise residuals in the diffuse emission of the two test fields (SDP Hi-GAL
data, two 2${}^{\circ}\times 2^{\circ}$ tiles centered on the Galactic plane
at l = 30∘ and l = 59∘) show that we obtain optimal maps, getting rid of
glitches and systematic drifts, as well as minimizing the $1/f$ and white noise
components. The remaining effects, which do not affect the overall quality of
the maps except across the bright sources in the PACS 70$\mu$m maps, are under
investigation and will be addressed in a dedicated publication to appear
shortly.
## Acknowledgements
Special thanks to Göran Pilbratt for allowing the use of PV Phase blank
data.
## References
* Billot et al. (2010) Billot N., Sauvage M., Rodriguez L., Horeau B., Kiss C., Aussel H., Okumura K., Boulade O., Altieri B., Poglitsch A., Agnèse P., 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7741 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, CEA bolometer arrays: the first year in space
* de Gasperis et al. (2005) de Gasperis G., Balbi A., Cabella P., Natoli P., Vittorio N., 2005, A&A, 436, 1159
* Garcia Lario et al. (2007) Garcia Lario P., Heras A. M., Sanchez-Portal M., 2007, Herschel Observer Manual
* Griffin et al. (2008) Griffin M., Dowell C. D., Lim T., Bendo G., Bock J., Cara C., Castro-Rodriguez N., Chanial P., Clements D., Gastaud R., Guest S., Glenn J., Hristov V., et al. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7010 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, The Herschel-SPIRE photometer data processing pipeline
* Griffin et al. (2008) Griffin M., Swinyard B., Vigroux L., Abergel A., Ade P., André P., Baluteau J., Bock J., Franceschini A., Gear W., Glenn J., Huang M., Griffin D., et al. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7010 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Herschel-SPIRE: design, ground test results, and predicted performance
* Griffin et al. (2010) Griffin M. J., Abergel A., Abreu A., Ade P. A. R., André P., Augueres J., Babbedge T., Bae Y., Baillie T., Baluteau J., Barlow M. J., Bendo G., Benielli D., et al. 2010, A&A, 518, L3+
* Hestenes & Stiefel (1952) Hestenes M. R., Stiefel E., 1952, Journal of Research of the National Bureau of Standards, 49, 409
* Hoffman & Ribak (1991) Hoffman Y., Ribak E., 1991, ApJL, 380, L5
* Lupton (1993) Lupton R., 1993, Statistics in theory and practice. Princeton University Press, Princeton
* Lutz (2009) Lutz D., 2009, PACS photometer point spread function
* Martin et al. (2010) Martin P. G., Miville-Deschênes M., Roy A., Bernard J., Molinari S., Billot N., Brunt C., Calzoletti L., et al. 2010, A&A, 518, L105+
* Masi et al. (2006) Masi S., Ade P. A. R., Bock J. J., Bond J. R., Borrill J., Boscaleri A., Cabella P., Contaldi C. R., Crill B. P., de Bernardis P., de Gasperis G., de Oliveira-Costa A., de Troia G., et al. 2006, A&A, 458, 687
* Miville-Deschênes & Lagache (2005) Miville-Deschênes M., Lagache G., 2005, ApJS, 157, 302
* Molinari et al. (2010b) Molinari S., Swinyard B., Bally J., Barlow M., Bernard J., Martin P., Moore T., Noriega-Crespo A., Plume R., Testi L., Zavagno A., et al. 2010b, A&A, 518, L100+
* Molinari et al. (2010a) Molinari S., Swinyard B., Bally J., Barlow M., Bernard J., Martin P., Moore T., Noriega-Crespo A., Plume R., Testi L., Zavagno A., et al. 2010a, PASP, 122, 314
* Natoli et al. (2001) Natoli P., de Gasperis G., Gheller C., Vittorio N., 2001, A&A, 372, 346
* Ott et al. (2010) Ott S., Science Centre H., Space Agency E., 2010, ArXiv e-prints
* Peretto et al. (2010) Peretto N., Fuller G. A., Plume R., Anderson L. D., Bally J., Battersby C., Beltran M. T., Bernard J., Calzoletti L., Digiorgio A. M., et al. 2010, A&A, 518, L98+
* Pezzuto (2010) Pezzuto S., 2010, ”A new approach to the detection of outliers with the sigma-clipping method applied to PACS photometry observation”. Poster presented at ”The impact of Herschel surveys on ALMA Early Science” Garching, 16-19 November 2010
* Piazzo et al. (2011) Piazzo L., Ikhenaode D., Natoli P., Pestalozzi P., Piacentini F., Traficante A., 2011, ”Artifacts removal for GLS map makers”, in preparation
* Pilbratt et al. (2010) Pilbratt G. L., Riedinger J. R., Passvogel T., Crone G., Doyle D., Gageur U., Heras A. M., Jewell C., Metcalfe L., Ott S., Schmidt M., 2010, A&A, 518, L1+
* Poglitsch et al. (2008) Poglitsch A., Waelkens C., Bauer O. H., Cepa J., Feuchtgruber H., Henning T., van Hoof C., Kerschbaum F., Krause O., Renotte E., Rodriguez L., Saraceno P., Vandenbussche B., 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7010 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, The Photodetector Array Camera and Spectrometer (PACS) for the Herschel Space Observatory
* Poglitsch et al. (2010) Poglitsch A., Waelkens C., Geis N., Feuchtgruber H., Vandenbussche B., Rodriguez L., Krause O., Renotte E., van Hoof C., Saraceno P., Cepa J., Kerschbaum F., et al. 2010, A&A, 518, L2+
* Poutanen et al. (2006) Poutanen T., de Gasperis G., Hivon E., Kurki-Suonio H., Balbi A., Borrill J., Cantalupo C., Doré O., Keihänen E., et al. 2006, A&A, 449, 1311
* Schulz et al. (2008) Schulz B., Bock J. J., Lu N., Nguyen H. T., Xu C. K., Zhang L., Dowell C. D., Griffin M. J., Laurent G. T., Lim T. L., Swinyard B. M., 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7020 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Noise performance of the Herschel-SPIRE bolometers during instrument ground tests
* Sibthorpe et al. (2010) Sibthorpe B., Ferlet M., Schulz B., 2010, SPIRE Beam Model Release Note. ICC, London
* Starck et al. (1995) Starck J., Murtagh F., Louys M., 1995, in R. A. Shaw, H. E. Payne, & J. J. E. Hayes ed., Astronomical Data Analysis Software and Systems IV Vol. 77 of Astronomical Society of the Pacific Conference Series, Astronomical Image Compression Using the Pyramidal Median Transform. pp 268–+
* Stompor et al. (2002) Stompor R., Balbi A., Borrill J. D., Ferreira P. G., Hanany S., Jaffe A. H., Lee A. T., Oh S., Rabii B., Richards P. L., Smoot G. F., Winant C. D., Wu J., 2002, PRD, 65, 022003
* Wright (1996) Wright E. L., 1996, paper presented at the IAS CMB Data Analysis Workshop (astro-ph/9612006)
|
arxiv-papers
| 2011-06-03T16:05:12 |
2024-09-04T02:49:19.342746
|
{
"license": "Public Domain",
"authors": "A. Traficante, L. Calzoletti, M. Veneziani, B. Ali, G. de Gasperis,\n A.M. Di Giorgio, D. Ikhenaode, S. Molinari, P. Natoli, M. Pestalozzi, S.\n Pezzuto, F. Piacentini, L. Piazzo, G. Polenta, E. Schisano",
"submitter": "Alessio Traficante",
"url": "https://arxiv.org/abs/1106.0698"
}
|
1106.0739
|
# Highly covariant quantum lattice gas model of the Dirac equation
Jeffrey Yepez Air Force Research Laboratory, 29 Randolph Road, Hanscom AFB,
Massachusetts 01731, USA
(November 17, 2010)
###### Abstract
We revisit the quantum lattice gas model of a spinor quantum field theory—the
smallest scale particle dynamics is partitioned into unitary collide and
stream operations. The construction is covariant (on all scales down to a
small length $\ell$ and small time $\tau=c\,\ell$) with respect to Lorentz
transformations. The mass $m$ and momentum $p$ of the modeled Dirac particle
depend on $\ell$ according to newfound relations
$m=m_{\circ}\cos\frac{2\pi\ell}{\lambda}$ and
$p=\frac{\hbar}{\ell}\sin\frac{2\pi\ell}{\lambda}$, respectively, where
$\lambda$ is the Compton wavelength of the modeled particle. These relations
represent departures from a relativistically invariant mass and the de Broglie
relation—when taken as quantifying numerical errors the model is physically
accurate when $\ell\ll\lambda$. Calculating the vacuum energy in the special
case of a massless spinor field, we find that it vanishes (or can have a small
positive value) for a sufficiently large wave number cutoff. This is a marked
departure from the usual behavior of such a massless field.
quantum computation, quantum lattice gas, Dirac equation, discrete particle
dynamics, quantum field theory model, vacuum energy density, cosmological
constant
###### pacs:
03.67.Lx, 03.65.Pm, 04.25.Dm, 98.80.Es
††preprint: v1.0
## I Introduction
We consider the dynamics of a spinor quantum field where spacetime becomes
discrete at scales smaller than some fundamental length. In particular, we
revisit the quantum computational lattice representation known as the quantum
lattice gas model, a dynamical Feynman chessboard model of the Dirac equation
Feynman (1946); Feynman and Hibbs (1965). Variations, rediscoveries and
improvements of the Feynman chessboard model have appeared over the years,
including a model in 3+1 dimensions by Riazanov Riazanov (June 1958), an Ising
spin chain representation by Jacobson and Schulman Jacobson (1984); Jacobson
and Schulman (1984), a fundamental deterministic model by ’t Hooft ’t Hooft
(1988, 1997), a lattice Boltzmann model by Succi and Benzi quantum Succi and
Benzi (1993), a unitary model by Bialynicki-Birula Bialynicki-Birula (1994),
and quantum lattice gas models in 1+1 dimensions by Meyer Meyer (1996) and in
3+1 dimensions by this author Yepez (2005). We presently consider a
representation that retains 4-momentum conservation
$E^{2}=(cp)^{2}+(mc^{2})^{2}$ of special relativity down to a small length
scale $\ell$ and time scale $\tau$. The low-energy limit of the lattice model
is defined as the dynamical regime where the Compton wavelength $\lambda$ of
the quantum particle represented by an amplitude field $\psi(x)$ is much
larger than the small scale. $\psi(x)$ is treated as continuous for
$\lambda\ggg\ell$. Continuous derivatives emerge as effective quantum
operators and the particle physics may be represented by the Dirac Lagrangian
${\cal L}_{\text{\tiny
Dirac}}=\overline{\psi}(i\gamma^{\mu}\partial_{\mu}-m_{\circ})\psi$, where the
Dirac matrices are $\gamma^{\mu}=(\gamma^{0},\gamma^{i})$, the spacetime
derivatives are $\partial_{\mu}=(\partial_{t},\partial_{i})$ for $i=1,2,3$,
and $m_{\circ}$ is the “invariant” particle mass (here expressed in natural
units with $\hbar=1$ and $c=1$ for convenience).
This paper is organized as follows. We begin in Sec. II by formally
introducing the quantum lattice gas model as a Lagrangian based theory. Then,
in Sec. III, we present a mapping procedure whereby the discrete dynamics of a
quantum lattice gas model is made equivalent to the Dirac equation. This
procedure leads to an analytical form of the particle momentum that is a
modification of the de Broglie relation. In Sec. IV, we present a derivation of
the quantum lattice gas stream and collide operators that form the basis of
our quantum algorithm for the Dirac equation. In particular, we derive a
unitary expression for the collision operator that serves as a mechanism to
give the spinor field its mass. Our primary intent is to show that the quantum
lattice gas, taken as a numerical tool for this quantum computational physics
application, provides a high degree of numerical accuracy. Then, in Sec. V, we
examine the newfound requirements to have the dynamical equation of motion of
the quantum lattice gas model equal the Dirac equation at a selected small
scale and explore the consequences of these requirements. We present a
calculation of the vacuum energy density of a spinor field, treating the
special case of a massless spinor field. Following a detailed analysis of the
behavior of the error terms, one finds an alternate theoretical purpose of the
quantum lattice gas as a toy model. It provides an example where the vacuum
energy of a massless spinor field can vanish or be very small. That is, one
can take the quantum lattice gas as a toy model of Planckian scale physics and
thus set the small scale sizes $\ell$ and $\tau$ to the Planck length
$\ell_{\text{\tiny P}}=\sqrt{\hbar G/c^{3}}$ and Planck time
$\tau_{\text{\tiny P}}=\ell/c$, providing a route to a small positive
cosmological constant. Sec. VI contains a summary and conclusions.
## II Quantum lattice gas model
The proposed high-energy (small scale) quantum lattice gas representation may
be formally expressed by the Lagrangian density of the form
${\cal L}^{\text{\tiny
grid}}\\!\\!=\overline{\psi}\left[i\gamma^{0}\frac{e^{\tau\partial_{t}}-e^{-\tau(\gamma^{0}\cdot\gamma^{i}\partial_{i}+im\gamma^{0})}}{\tau}\right]\psi={\cal
L}_{\text{\tiny Dirac}}+{\cal O}(\tau^{2}).$ (1a) By the least action
principle, this Lagrangian density leads to the equation of motion of the form
$e^{\tau\partial_{t}}\psi(x)=e^{-\tau\gamma^{0}\cdot\gamma^{i}\partial_{i}}e^{-i\tau
m\gamma^{0}}\psi(x).$ (1b) Equation (1b) is the equation of motion of a
quantum lattice gas, a unitary model for a system of noninteracting Dirac
particles. On the right-hand side of (1b), free chiral particle motion is
given by a stream operator $U_{\text{\tiny
S}}=e^{i\tau\gamma^{0}\cdot\gamma^{i}p_{i}}$, with momentum operator
$p_{i}=-i\partial_{i}$. A mass-generating interaction that causes a left-handed
particle to flip into a right-handed particle (and vice versa) is given by a
unitary collision operator $U_{\text{\tiny C}}=e^{-i\tau m\gamma^{0}}$. The
product $U_{\text{\tiny QLG}}=U_{\text{\tiny S}}U_{\text{\tiny C}}$ is the
local evolution operator of a quantum lattice gas system acting on the spinor
field $\psi^{\text{\tiny T}}(x)={\begin{pmatrix}\psi_{\text{\tiny
L}}(x)&\psi_{\text{\tiny R}}(x)\end{pmatrix}}$. The lefthand side of (1b) is a
newly computed state $\psi^{\prime}(x)\equiv e^{\tau\partial_{t}}\psi(x)$ at
time $t+\tau$, so (1b) may be written as a quantum algorithmic map
$\psi^{\prime}(x)=U_{\text{\tiny S}}U_{\text{\tiny C}}\psi(x)\mapsto\psi(x),$
(1c)
taken to be homogeneously applied at all points $\bm{x}$ of space and at all
increments $t$ of time. In natural units ($\hbar=1$ and $c=1$), the quantum
lattice gas model (1c) is specified in $1+1$ dimensions by the following
unitary operators:
$\displaystyle U_{\text{\tiny S}}^{z}$ $\displaystyle=$ $\displaystyle
e^{i\ell p_{z}\sigma_{z}}$ (2a) $\displaystyle U_{\text{\tiny C}}$
$\displaystyle=$
$\displaystyle\sqrt{1-m_{\circ}^{2}\tau^{2}}-i\sigma_{x}e^{i\sigma_{z}\,\ell
p_{z}}\,m_{\circ}\tau,$ (2b)
where $m_{\circ}$ is the mass of the modeled Dirac particle in the low-energy
limit $\ell/\lambda\sim 0$.
In the low-energy limit, the ${\cal O}(\tau^{2})$ error terms on the righthand
side of (1a) become negligible, so ${\cal L}^{\text{\tiny grid}}\sim{\cal
L}_{\text{\tiny Dirac}}$ is covariant with respect to Lorentz transformations
in this limit. Yet, can ${\cal L}^{\text{\tiny grid}}$ be manifestly covariant
at high-energies $\ell/\lambda\sim 1$? We consider how to achieve the
covariance of (1a) at a small scale: it necessarily occurs when the high-
energy equation (1b)—or equivalently the quantum lattice gas equation (1c)—has
the form of the Dirac equation $(\gamma^{\mu}p_{\mu}+m)\psi(x)=0$.
The model is an expression of the simple idea of a spacetime manifold that
becomes discrete below a small scale $\ell$. The prescriptions needed to make
(1c) equivalent to the Dirac equation are derived in the following section.
## III Imposing covariance at the small scale
Here we show that (1c) in the high-energy limit can be made equivalent to the
Dirac equation. We begin with a local evolution operator as a composition of
“qubit rotations”
$U_{\hat{\bm{n}}_{2}}=e^{-i\frac{\beta_{2}}{2}\hat{\bm{n}}_{2}\cdot\bm{\sigma}}$
and
$U_{\hat{\bm{n}}_{1}}=e^{-i\frac{\beta_{1}}{2}\hat{\bm{n}}_{1}\cdot\bm{\sigma}}$:
$\displaystyle
U_{\hat{\bm{n}}_{2}}(\beta_{2})U_{\hat{\bm{n}}_{1}}(\beta_{1})\\!\\!$
$\displaystyle=$
$\displaystyle\\!\\!e^{-i\frac{\beta_{2}}{2}\hat{\bm{n}}_{2}\cdot\bm{\sigma}}e^{-i\frac{\beta_{1}}{2}\hat{\bm{n}}_{1}\cdot\bm{\sigma}}$
(3b) $\displaystyle=$
$\displaystyle\cos\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}-\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}\hat{\bm{n}}_{1}\cdot\hat{\bm{n}}_{2}$
$\displaystyle-$ $\displaystyle
i\Big{[}\sin\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}\hat{\bm{n}}_{1}+\cos\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}\hat{\bm{n}}_{2}$
$\displaystyle-\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}\hat{\bm{n}}_{1}\times\hat{\bm{n}}_{2}\Big{]}\cdot\bm{\sigma},$
where $\bm{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$ is a vector of Pauli
matrices, $\hat{\bm{n}}_{1}$ and $\hat{\bm{n}}_{2}$ are unit vectors
specifying the respective principal axes of rotation, and $\beta_{1}$ and
$\beta_{2}$ are real-valued rotation angles.111In (3b) we used the identity
$(\bm{a}\cdot\bm{\sigma})\cdot(\bm{b}\cdot\bm{\sigma})=\bm{a}\cdot\bm{b}+i\,\left(\bm{a}\times\bm{b}\right)\cdot\bm{\sigma}$.
Let us take $U_{\text{\tiny
S}}^{z}=e^{-i\frac{\beta_{2}}{2}\hat{\bm{n}}_{2}\cdot\bm{\sigma}}$ as our
stream operator and $U_{\text{\tiny
C}}=e^{-i\frac{\beta_{1}}{2}\hat{\bm{n}}_{1}\cdot\bm{\sigma}}$ as our
collision operator. Without loss of generality, we may choose the principal
axis of rotation along $\hat{\bm{z}}$ to generate streaming,
$U_{\text{\tiny S}}^{z}=e^{i\ell
p_{z}\sigma_{z}/\hbar}=e^{-i\frac{\beta_{2}}{2}\sigma_{z}},$ (4a) and treat
the quantum algorithmic map as if it were applied in 1+1 dimensions.222 The
reduction from 3+1 to 1+1 dimensions is allowed because the algorithm has the
product form $\psi^{\prime}(x)=U_{\text{\tiny S}}U_{\text{\tiny
C}}\psi(x)\mapsto\psi(x),$ where $U_{\text{\tiny
S}}=e^{-i\frac{\pi}{4}\sigma_{y}}U_{\text{\tiny
S}}^{x}e^{i\frac{\pi}{4}(\sigma_{y}+\sigma_{x})}U_{\text{\tiny
S}}^{y}e^{-i\frac{\pi}{4}\sigma_{x}}U_{\text{\tiny
S}}^{z}=e^{i\tau\gamma^{0}\cdot\gamma^{i}p_{i}}$, with Dirac matrices
$\gamma^{0}=\sigma_{x}\otimes\bm{1}$ and
$\gamma^{i}=i\sigma_{y}\otimes\sigma_{i}$ in the chiral representation Succi
and Benzi (1993); Yepez (2005). Streaming in each of the spatial directions
occurs independently, so for simplicity we can choose to consider a Dirac wave
moving along $\hat{\bm{z}}$. In this frame a general collision operator is
$U_{\text{\tiny
C}}=e^{-i\frac{\beta_{1}}{2}(\alpha\sigma_{x}+\beta\sigma_{y}+\gamma\sigma_{z})},$
(4b)
where $\alpha$, $\beta$, and $\gamma$ are real valued components subject to
the constraint $\alpha^{2}+\beta^{2}+\gamma^{2}=1$. The unitary operators (4)
are applied locally and homogeneously at all the points in the system. That
is, we consider a construction whereby the two principal unit vectors
specifying the axes of rotation are
$\displaystyle\hat{\bm{n}}_{1}$ $\displaystyle=$
$\displaystyle(\alpha,\beta,\gamma)\qquad\qquad\hat{\bm{n}}_{2}=(0,0,1).$ (5)
With this choice, $\hat{\bm{n}}_{1}\times\hat{\bm{n}}_{2}=(\beta,-\alpha,0)$
and $\hat{\bm{n}}_{1}\cdot\hat{\bm{n}}_{2}=\gamma$, so (3) is a quite general
representation of a quantum lattice gas evolution operator
$\displaystyle U_{\text{\tiny S}}^{z}\,U_{\text{\tiny C}}$
$\displaystyle\stackrel{{\scriptstyle(\ref{axes_of_2_qubit_rotations})}}{{=}}$
$\displaystyle\cos\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}-\gamma\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}$
$\displaystyle-$ $\displaystyle
i\left(\alpha\sin\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}-\beta\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}\right)\sigma_{x}$
$\displaystyle-$ $\displaystyle
i\left(\beta\sin\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}+\alpha\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}\right)\sigma_{y}$
$\displaystyle-$ $\displaystyle
i\left(\gamma\sin\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}+\cos\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}\right)\sigma_{z}.$
The Dirac equation for a relativistic quantum particle of mass $m_{\circ}$ may
be written as
$i\hbar\partial_{t}\psi=-c\,p_{z}\sigma_{z}\psi+m_{\circ}c^{2}\sigma_{x}\psi.$
(7)
Its time-difference form may be written as
$\psi^{\prime}=\left(1+\frac{ic\,p_{z}\tau}{\hbar}\sigma_{z}-\frac{im_{\circ}c^{2}\tau}{\hbar}\sigma_{x}\right)\psi,$
(8)
for small $\tau$ and for momentum operator $p_{z}=-i\hbar\partial_{z}$. We may
view the unitary operator acting on the right-hand side of (8) as the
effective low-energy operator obtained from the quantum lattice gas operator
(III)
$U_{\text{\tiny S}}^{z}\,U_{\text{\tiny
C}}\xrightarrow{\text{small}~{}\ell}1+\frac{ic\,p_{z}\tau}{\hbar}\sigma_{z}-\frac{im_{\circ}c^{2}\tau}{\hbar}\sigma_{x}.$
(9)
To establish a correspondence between (III) and (9), we simply choose the
real-valued components of $\hat{\bm{n}}_{1}$ to satisfy the following three
conditions:
$\displaystyle\alpha\sin\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}$
$\displaystyle-$
$\displaystyle\beta\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}=\frac{m_{\circ}c^{2}\tau}{\hbar}$
(10a) $\displaystyle\beta\sin\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}$
$\displaystyle+$
$\displaystyle\alpha\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}=0$ (10b)
$\displaystyle\gamma\sin\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}$
$\displaystyle+$ $\displaystyle\ \
\cos\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}=-\frac{c\,p_{z}\tau}{\hbar}.$
(10c) Additionally, we should respect the reality condition that
$\hat{\bm{n}}_{1}$ have unit norm333 Alternatively, instead of (10d), we could
impose the condition that
$\cos\frac{\beta_{1}}{2}\cos\frac{\beta_{2}}{2}-\gamma\sin\frac{\beta_{1}}{2}\sin\frac{\beta_{2}}{2}=1,$
forcing (III) to be identical to (9). However, in this case, the resulting
solution for components of $\hat{\bm{n}}_{1}$ has $\alpha$ imaginary, and this
breaks the unitarity of $U_{\text{\tiny C}}$. So, we impose (10d) to strictly
enforce unitarity. $\alpha^{2}+\beta^{2}+\gamma^{2}=1$ (10d)
that we established above with the collision operator (4b). For the sake of
simplicity, let us start with a specialized construction whereby
$\hat{\bm{n}}_{1}$ is perpendicular to $\hat{\bm{n}}_{2}$. The solution of
(10) in this special case is
$\alpha=\cos\frac{\beta_{2}}{2}\qquad\beta=-\sin\frac{\beta_{2}}{2}\qquad\gamma=0.$
(11)
Inserting (11) into (10a) gives
$\sin\frac{\beta_{1}}{2}=\frac{m_{\circ}c^{2}\tau}{\hbar},$ and in turn (10c)
is
$\sqrt{1-\left(\frac{m_{\circ}c^{2}\tau}{\hbar}\right)^{2}}\sin\frac{\beta_{2}}{2}=-\frac{c\,p_{z}\tau}{\hbar}.$
In (4a) we chose $-\ell\,p_{z}/\hbar={\beta_{2}}/{2}$, so in turn we have
$\sqrt{1-\left(\frac{m_{\circ}c^{2}\tau}{\hbar}\right)^{2}}\sin\frac{\ell
p_{z}}{\hbar}=\frac{c\,p_{z}\tau}{\hbar}.$ (12)
This is a grid equation that relates the cell sizes $\ell$ and $\tau$ to the
mass and momentum of the quantum particle in an intrinsic way. Equation (12)
can be interpreted as a rather fundamental relativistic relationship between
particles and points. In place of the theory of special relativity for
classical particle dynamics in a continuum, here we have constructed a
lattice-based version of special relativity for particle dynamics emerging at
a small scale where the spacetime foam has a regular structure.
Let us consider some implications of (12). Squaring (12) gives
$\left(\frac{\hbar}{\tau}\sin\frac{\ell
p_{z}}{\hbar}\right)^{2}-\left(m_{\circ}c^{2}\sin\frac{\ell
p_{z}}{\hbar}\right)^{2}=(cp_{z})^{2}.$ (13a)
Then adding $m_{\circ}^{2}c^{4}$ to both sides, we have
$\left(\frac{\hbar}{\tau}\right)^{2}\sin^{2}\frac{\ell
p_{z}}{\hbar}+\left(m_{\circ}c^{2}\right)^{2}\cos^{2}\frac{\ell
p_{z}}{\hbar}=(cp_{z})^{2}+(m_{\circ}c^{2})^{2}.$ (13b)
This is a candidate grid-level relativistic energy equation that leads us to
define a grid momentum $p_{z}^{\text{\tiny grid}}$ and a grid mass $m$
dependent on $\ell$ as follows:
$\displaystyle p_{z}^{\text{\tiny grid}}$ $\displaystyle\equiv$
$\displaystyle\frac{\hbar}{c\tau}\sin\frac{\ell p_{z}}{\hbar}\qquad\qquad
m\equiv m_{\circ}\cos\frac{\ell p_{z}}{\hbar}.$ (14)
Hence, the lefthand side of (13b) can be interpreted as a redefinition of the
Dirac particle’s kinetic and rest energies. Inserting the de Broglie relation
($p_{z}=h/\lambda$ momentum eigenvalue), the grid mass and momentum become
$\displaystyle p_{z}^{\text{\tiny grid}}$ $\displaystyle=$
$\displaystyle\frac{\hbar}{c\tau}\sin\frac{2\pi\ell}{\lambda}\qquad\qquad
m=m_{\circ}\,\cos\frac{2\pi\ell}{\lambda}.$ (15)
In the low-energy limit defined by $\lambda\ggg 2\pi\ell$, expanding (12) to
first order implies that the space and time cell sizes are linearly related by
the speed of light, $\ell=c\,\tau$, an intuitive relationship that we expect to
hold. In the low-energy limit, (15) reduces to
$\displaystyle p_{z}$ $\displaystyle=$
$\displaystyle\frac{h}{\lambda}\qquad\qquad m=m_{\circ}.$ (16)
That is, the low-energy limit of (15) corresponds to a usual quantum particle
with an invariant mass that is entirely independent of the particle’s
momentum, and the quantum particle acts like a wave according to standard
quantum mechanics. However, there is a marked departure from standard quantum
mechanics in the high-energy limit in the region $\lambda\lesssim 20\ell$ as
shown in Fig. 1.
## IV The algorithm in natural units
For expediency, let us now switch our dimensional convention to the natural
units, $\hbar=1$ and $c=1$.444In the natural units $\hbar=1$ and $c=1$, length
and time have like dimension of length (i.e. $[\ell]=[\tau]=L$) while mass,
momentum, and energy values have like dimension of inverse length (i.e.
$[m]=[p]=[E]=L^{-1}$). Any expression written in the natural units can be
converted back to an expression in the dimensionful $M,L,T$ units by simply
reinserting the speed of light and Planck’s constant by
$\ell\mapsto\frac{\ell}{c}$, $m\mapsto\frac{mc^{2}}{\hbar}$,
$p\mapsto\frac{pc}{\hbar}$, and $E\mapsto\frac{E}{\hbar}$. We can write (12)
as
$\displaystyle\sqrt{1-m_{\circ}^{2}\tau^{2}}\,\sin(\ell p_{z})$
$\displaystyle=$ $\displaystyle p_{z}\tau$ (17a)
$\displaystyle\sqrt{1-m_{\circ}^{2}\tau^{2}}\,\cos(\ell p_{z})$
$\displaystyle=$ $\displaystyle\sqrt{1-E^{2}\tau^{2}},$ (17b) or equivalently
as $e^{i\ell
p_{z}}=\exp\left[i\cos^{-1}\sqrt{\frac{1-E^{2}\tau^{2}}{1-m_{\circ}^{2}\tau^{2}}}\right].$
Furthermore, our solution (11) implies that the rotation axis for the
collision operator is $\hat{\bm{n}}_{1}=\hat{\bm{x}}\cos\ell
p_{z}+\hat{\bm{y}}\sin\ell p_{z},$ and in turn this implies that the hermitian
generator of (4b) is $\hat{\bm{n}}_{1}\cdot\bm{\sigma}=\sigma_{x}\cos\ell
p_{z}+\sigma_{y}\sin\ell p_{z}=\sigma_{x}e^{i\sigma_{z}\,\ell p_{z}}.$ Hence,
since $\sin\frac{\beta_{1}}{2}=m_{\circ}\tau$, we can explicitly represent the
collision operator $U_{\text{\tiny
C}}=e^{-i\frac{\beta_{1}}{2}\hat{\bm{n}}_{1}\cdot\bm{\sigma}}$ in terms of the
mass and momentum of the quantum particle as
$\displaystyle U_{\text{\tiny C}}$ $\displaystyle=$
$\displaystyle\sqrt{1-m_{\circ}^{2}\tau^{2}}-i\hat{\bm{n}}_{1}\cdot\bm{\sigma}\,m_{\circ}\tau$
(18a) $\displaystyle=$
$\displaystyle\sqrt{1-m_{\circ}^{2}\tau^{2}}-i\sigma_{x}e^{i\sigma_{z}\,\ell
p_{z}}\,m_{\circ}\tau.$ (18b)
Multiplying by the stream operator $U_{\text{\tiny S}}^{z}=e^{i\ell
p_{z}\sigma_{z}}$, the evolution operator (III) can now be explicitly
calculated
$\displaystyle U_{\text{\tiny S}}^{z}U_{\text{\tiny C}}\\!\\!\\!\\!\\!$
$\displaystyle=$ $\displaystyle\\!\\!\\!e^{i\ell
p_{z}\sigma_{z}}\sqrt{1-m_{\circ}^{2}\tau^{2}}-ie^{i\ell
p_{z}\sigma_{z}}\sigma_{x}e^{i\sigma_{z}\,\ell p_{z}}\,m_{\circ}\tau\qquad\ $
(19a) $\displaystyle=$ $\displaystyle e^{i\ell
p_{z}\sigma_{z}}\sqrt{1-m_{\circ}^{2}\tau^{2}}-i\sigma_{x}\,m_{\circ}\tau$
(19c) $\displaystyle=$ $\displaystyle\sqrt{1-m_{\circ}^{2}\tau^{2}}\cos
p_{z}\ell+i\sigma_{z}\sqrt{1-m_{\circ}^{2}\tau^{2}}\sin p_{z}\ell$
$\displaystyle-\ i\sigma_{x}\,m_{\circ}\tau$
$\displaystyle\stackrel{{\scriptstyle(\ref{grid_equation_natural_unit_component_form_a})}}{{\stackrel{{\scriptstyle(\ref{grid_equation_natural_unit_component_form_b})}}{{=}}}}$
$\displaystyle\sqrt{1-E^{2}\tau^{2}}+i\sigma_{z}p_{z}\tau-\
i\sigma_{x}\,m_{\circ}\tau$ (19d) $\displaystyle=$
$\displaystyle\sqrt{1-E^{2}\tau^{2}}+iE\tau\left(\sigma_{z}\frac{p_{z}}{E}-\sigma_{x}\,\frac{m_{\circ}}{E}\right).$
(19e)
This result leads us to define the rotation axis
$\hat{\bm{n}}_{12}\equiv-\frac{m_{\circ}}{E}\,\hat{\bm{x}}+\frac{p_{z}}{E}\,\hat{\bm{z}}.$
(20)
Since $(\hat{\bm{n}}_{12}\cdot\bm{\sigma})^{2}=1$ (an involution), we are free
to write (19e) in a manifestly unitary form
$e^{-i\frac{\beta_{12}}{2}\hat{\bm{n}}_{12}\cdot\bm{\sigma}}$ as follows:
$\displaystyle U_{\text{\tiny S}}^{z}U_{\text{\tiny C}}$ $\displaystyle=$
$\displaystyle\exp\left[i\cos^{-1}\\!\left(\sqrt{1-E^{2}\tau^{2}}\right)\hat{\bm{n}}_{12}\cdot\bm{\sigma}\right]\qquad$
(21a) $\displaystyle\stackrel{{\scriptstyle(\ref{n_12_m_p_E_form})}}{{=}}$
$\displaystyle\exp\left[i\,\frac{\cos^{-1}\sqrt{1-E^{2}\tau^{2}}}{E}\left(\sigma_{z}p_{z}-\sigma_{x}\,{m_{\circ}}\right)\right]\qquad$
(21b) $\displaystyle\cong$ $\displaystyle
e^{\left.-i\,\ell\middle(-\sigma_{z}p_{z}+\sigma_{x}\,{m_{\circ}}\right)},\qquad$
(21c)
where in the last line we made the identification
$\cos(E\ell)=\sqrt{1-E^{2}\tau^{2}},$ (22)
which is exact to one part in $10^{139}$ (i.e. accurate to 4th order in
$\ell$). This is equivalent to identifying the gate angle with the product of
the small scale length and the particle energy, $-{\beta_{12}/2}=\ell E$. Then,
since $U_{\text{\tiny S}}^{z}U_{\text{\tiny C}}\equiv e^{-ih^{\text{\tiny
grid}}\tau}$, the high-energy Hamiltonian may be written as
$h^{\text{\tiny
grid}}\stackrel{{\scriptstyle(\ref{lattice_gas_operator_in_exp_form_c})}}{{=}}\left.\frac{\ell}{\tau}\middle(-p_{z}\sigma_{z}+m_{\circ}\sigma_{x}\right).$
(23)
Thus, we have successfully demonstrated that the quantum lattice gas equation
(1c) is equivalent to the Dirac equation, since the Hamiltonian generating its
unitary dynamics is the Dirac Hamiltonian, even at the small scale.
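A short numerical sanity check (our own, with arbitrary parameter values) that the collision operator (18b) is unitary for real $m_{\circ}\tau\leq 1$, as required by the construction:

```python
import numpy as np

# U_C = sqrt(1 - (m0 tau)^2) I - i sigma_x e^{i sigma_z ell p_z} m0 tau
sx = np.array([[0, 1], [1, 0]], dtype=complex)

m0tau, ellpz = 0.3, 0.7                                       # arbitrary test values
phase = np.diag(np.exp(1j * ellpz * np.array([1.0, -1.0])))   # e^{i sigma_z ell p_z}
U_C = np.sqrt(1 - m0tau**2) * np.eye(2) - 1j * m0tau * sx @ phase

print(np.allclose(U_C @ U_C.conj().T, np.eye(2)))             # True: U_C is unitary
```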
## V Consequences of the modified de Broglie relation
Imposing covariance on the high-energy representation (1) leads to two
departures (15), that we derived in the previous section, from the correct
behavior given by relativistic quantum field theory. First, the momentum of
the quantum particle must obey a small length $\ell$ dependent momentum
relation
$p=\frac{\hbar}{\ell}\sin\frac{2\pi\ell}{\lambda}$ (24a) in place of the de
Broglie relation $p=h/\lambda$. Second, the mass of the Dirac particle is no
longer taken as an invariant quantity—it must depend on the small length as
well as the particle’s Compton wavelength
$m=m_{\circ}\cos\frac{2\pi\ell}{\lambda},$ (24b)
where $m_{\circ}$ is a fixed constant, otherwise interpreted in the low-energy
limit as the invariant particle mass. Plots of (24) are given in Fig. 1 for
$m_{\circ}$ set to the electron mass. Notice that (24a) vanishes for
$\lambda=\ell$ and $\lambda=2\ell$ and oscillates about zero as
$\lambda\rightarrow 0$. Also notice that (24b) vanishes at $\lambda=4\ell$ and
is in fact negative for $\lambda=2\ell$ and $\lambda=3\ell$, again oscillating
as $\lambda\rightarrow 0$. These departures from standard quantum mechanics
and special relativity are rather consequential at very small scales,
appearing on scales $\lesssim 20\ell$ that can be viewed as the region where the
errors of the lattice gas model dominate. Yet, at the relevant larger
scales (far above the small scale $\ell$), the physics of the quantum lattice
gas model is indistinguishable from that predicted by the relativistic quantum
field theory representation of Dirac fields.
Figure 1: Log-log plot of (24) for the mass (red dots) and momentum (blue dots),
in GeV, of a single electron versus its wavelength, with the small scale set to
the Planck length, $\ell\equiv\ell_{\text{\tiny P}}=1.616\times 10^{-35}$ m.
The straight lines are the de Broglie relation of quantum mechanics,
$p=h/\lambda$ (blue dashed line), and the invariant mass of special
relativity, $m_{\circ}=0.511$ MeV (red dashed line). Respectively, the slopes
are $-1$ and $0$ for the standard theories. The intersection of the mass and
momentum lines occurs at the Compton wavelength of the Dirac particle. The two
insets are linear plots in the extreme ultraviolet region,
$\ell/4\leq\lambda\leq 20\ell$, where the lattice theory departs from quantum
mechanics and special relativity.
Yet, in the context of the toy model, we should use (24) to calculate the
vacuum energy associated with a spin-$\frac{1}{2}$ fermion. When we integrate
over all space to determine the total density contained in a Dirac field we
have
$\displaystyle\rho_{\text{\tiny tot}}\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\int\frac{d^{3}k}{c^{2}(2\pi)^{3}}\sqrt{(pc)^{2}+(mc^{2})^{2}}$
(25a)
$\displaystyle\stackrel{{\scriptstyle(\ref{discrete_relativity_premises})}}{{=}}$
$\displaystyle\frac{\hbar}{2\pi^{2}c\ell^{4}}\int_{0}^{k_{c}\ell}d(k\ell)({k\ell})^{2}\sqrt{\sin^{2}k\ell+\epsilon^{2}\cos^{2}k\ell},\qquad$
(25b)
where the quantity $\epsilon\equiv{m_{\circ}c\ell}/{\hbar}$ is small when
$m_{\circ}$ is much less than the mass scale set by the grid,
$m_{\circ}\ll\hbar\,(\ell^{2}/\tau)^{-1}$. If we take $\epsilon=0$ to model a
massless relativistic particle, then performing the integration of (25b)
yields
$\displaystyle\rho_{\text{\tiny vac}}^{\text{\tiny theory}}\\!\\!\\!$
$\displaystyle=$
$\displaystyle\\!\\!\\!\left.\frac{\hbar}{2\pi^{2}c\ell^{4}}\left(2k\ell\sin
k\ell-(k^{2}\ell^{2}-2)\cos k\ell\right)\right|_{0}^{k_{c}\ell},\qquad$ (26)
which can be either positive, zero, or negative depending sensitively on the
value of the wave number cutoff $k_{c}$ as well as on the value of the small
scale $\ell$.
If one takes the quantum lattice gas as a simplistic representation of
Planckian scale physics (choosing $\ell=\ell_{\text{\tiny P}}$) where (24) is
interpreted as physical behavior instead of numerical grid error, then its
prediction of an allowable small value of the vacuum energy of a massless
spinor field may be viewed as a new physical mechanism. That is, if we use
(24) to calculate the vacuum energy associated with a spin-$\frac{1}{2}$
fermion, then the toy model can avoid the cosmological constant problem—it is
not necessarily $10^{121}$ times too large. Taking $k_{c}$ as a parameter in
(26), at $k_{c}=(4.08557...)/\ell$, we find that $\rho_{\text{\tiny
vac}}^{\text{\tiny theory}}=0$, as shown in Fig. 2. The experimentally
observed value, $\rho_{\text{\tiny vac}}^{\text{\tiny obs.}}=9.9\times
10^{-27}\text{kg}/\text{m}^{3}$ (such as obtained by the Wilkinson Microwave
Anisotropy Probe) is obtained at a slightly smaller wave number cutoff,
corresponding to a grid scale about four times smaller than $\ell$, a sub-
Planckian length scale.555Our simplistic estimate is further simplified by not
considering the inflationary epoch of space under an extremely high vacuum
energy density, just the dynamics of a fermionic field when the spacetime is
flat with a small positive cosmological constant when its discreteness below
the Planck scale becomes relevant.
Figure 2: Comparison of theoretical predictions of the vacuum energy density
as a function of wave number cutoff times the Planck length, $k_{c}\ell$. The
dashed curve is the standard quantum field theory prediction and the solid
curve is the quantum lattice gas prediction (26), where $\rho^{\text{\tiny
theory}}_{\text{\tiny vac}}=0$ at $k_{c}\,\ell=4.08557$. The theoretical
prediction at $k_{c}\ell\sim 1$ is $10^{121}$ times too large for both curves,
whereas for the solid curve at $k_{c}\ell\gtrapprox 4$ the prediction is very
close to the experimentally observed value.
Alternatively, one can choose the Planck scale to be smaller than the grid
scale, $\ell_{\text{\tiny P}}\ll\ell\ll\lambda$. In this case, we still have
$\rho_{\text{\tiny vac}}^{\text{\tiny theory}}=0$ at $k_{c}\ell=4.08557$. There
exists a real-valued number $C\lesssim 4.08557$ for which $\rho_{\text{\tiny
vac}}^{\text{\tiny theory}}(k_{c}\ell=C)=\rho_{\text{\tiny vac}}^{\text{\tiny
obs.}}$, although we have not predicted this number and thus do not address
the fine-tuning problem. Considerations regarding an additional fundamental
length scale, in addition to the Planck scale, have recently appeared in Ref.
Klinkhamer (2007), including references therein.
## VI Conclusion
We revisited the quantum lattice gas model with a unitary evolution operator
$U_{\text{\tiny S}}U_{\text{\tiny C}}$ applied at a small scale $\ell$ that
advances a Dirac spinor field, represented on a grid, forward by a small time
scale increment $\tau$. We derived the conditions for which the generator of
evolution is the Dirac Hamiltonian, $U_{\text{\tiny S}}U_{\text{\tiny C}}\cong
e^{\left.-i\,\ell\middle(-\sigma_{z}p_{z}+\sigma_{x}\,{m_{\circ}}\right)}$. We
quantified the error of the quantum lattice gas model as a departure from
standard quantum mechanical behavior for the particle momentum going as
$p=(\hbar/\ell)\sin(2\pi\ell/\lambda)$ and as its departure from a
relativistically invariant particle mass going as
$m=m_{\circ}\cos(2\pi\ell/\lambda)$. In this regard, the quantum lattice gas
model (2) is numerically accurate only on scales $\gtrsim 20\ell$, even though
it retains covariant behavior down to scales $\gtrsim\ell$. Yet, the
numerical error of the model can be taken as a good feature, providing a
mechanism for a small positive cosmological constant.
There have been a number of theoretical attempts employing, for example,
supersymmetry Nilles (1984), string theory Dine and Seiberg (1985), and the
anthropic principle Vilenkin (1998) to bridge the known chasm between the
large quantum field theory prediction of $\rho_{\text{\tiny vac}}^{\text{\tiny
qft}}\sim 10^{110}\text{eV}^{4}$ for the late-time, zero-temperature vacuum
energy density of “empty” space and the observed value of $\rho_{\text{\tiny
vac}}^{\text{\tiny obs.}}\sim 10^{-11}\text{eV}^{4}$ associated with a small
positive cosmological constant. The toy model presented herein gives a value
of $k_{c}$ for which the vacuum energy vanishes. This is the root of the
equation $2k_{c}\ell\sin k_{c}\ell-(k_{c}^{2}\ell^{2}-2)\cos k_{c}\ell-2=0$.
However, the theoretical considerations presented above do not tell us why the
wave number cutoff should be fine-tuned so that $\rho_{\text{\tiny
vac}}^{\text{\tiny theory}}=\rho_{\text{\tiny vac}}^{\text{\tiny
obs.}}$.666There are several routes whereby a high-energy quantum lattice gas
model may predict a small positive vacuum energy density. At $k_{c}=2\pi/\ell$
we have $\rho_{\text{\tiny vac}}^{\text{\tiny
theory}}\approx-10^{97}\text{kg}/\text{m}^{3}$. Thus, it is possible that such
a large negative $\rho_{\text{\tiny vac}}^{\text{\tiny m=0}}\lll 0$
(associated with massless chiral matter or with gauge matter comprised of
paired massless fermions) can just nearly cancel a large positive
$\rho_{\text{\tiny vac}}^{\text{\tiny m>0}}\ggg 0$ (associated with massive
baryonic matter) leaving a small residual $\rho_{\text{\tiny
vac}}^{\text{\tiny obs}}\gtrsim 0$ that is experimentally observable. At ever
larger cutoffs, the value of the vacuum energy oscillates wildly about zero.
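For readers who want to reproduce the quoted root numerically, the following short Python sketch (not part of the original derivation) finds the smallest positive root of the equation above:

```python
import math
from scipy.optimize import brentq

def f(x):
    # x stands for k_c * ell in the root equation quoted above
    return 2 * x * math.sin(x) - (x ** 2 - 2) * math.cos(x) - 2

root = brentq(f, 3.0, 5.0)   # f changes sign between 3 and 5
print(root)                  # ~4.08557, the value quoted in the text
```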
Nevertheless, the quantum lattice gas model appears to be one potential route
to reconcile a discretized quantum field theory, at least a version modified
at a small scale by (24), with the well-accepted experimental observation of a
positive cosmological constant by employing a plausible wave number cutoff
parameter that corresponds to a fundamental grid scale. In a subsequent paper,
we will numerically evaluate the novel unitary collision operator (18b)
employed in a quantum algorithmic simulation of the dynamical behavior of a
system of Dirac particles.
## VII Acknowledgements
Thanks to G. Vahala, J. Erlich, and N.H. Margolus for helpful comments.
## References
* Feynman (1946) R. P. Feynman, California Institute of Technology CIT archives (1946).
* Feynman and Hibbs (1965) R. P. Feynman and A. Hibbs, _Quantum Mechanics and Path Integrals_ (McGraw-Hill, 1965), problem 2-6 on page 34.
* Riazanov (June 1958) G. Riazanov, Soviet Physics JETP 6 (33), 7 pages (June 1958).
* Jacobson (1984) T. Jacobson, Journal of Physics A: Math. Gen. 17, 2433 (1984).
* Jacobson and Schulman (1984) T. Jacobson and L. Schulman, Journal of Physics A: Math. Gen. 17, 375 (1984).
* ’t Hooft (1988) G. ’t Hooft, J. Stat. Phys. 53, 323 (1988).
* ’t Hooft (1997) G. ’t Hooft, Found. Phys. Lett. 10, 105 (1997).
* Succi and Benzi (1993) S. Succi and R. Benzi, Physica D 69, 327 (1993).
* Bialynicki-Birula (1994) I. Bialynicki-Birula, Physical Review D 49, 6920 (1994).
* Meyer (1996) D. A. Meyer, Journal of Statistical Physics 85, 551 (1996).
* Yepez (2005) J. Yepez, Quantum Information Processing 4, 471 (2005).
* Nilles (1984) H. P. Nilles, Physics Reports 110, 1 (1984).
* Dine and Seiberg (1985) M. Dine and N. Seiberg, Physics Letters B 162, 299 (1985).
* Vilenkin (1998) A. Vilenkin, Phys. Rev. Lett. 81, 5501 (1998).
* Klinkhamer (2007) F. Klinkhamer, JETP Letters 86, 73 (2007).
|
arxiv-papers
| 2011-06-03T20:16:03 |
2024-09-04T02:49:19.355047
|
{
"license": "Public Domain",
"authors": "Jeffrey Yepez",
"submitter": "Jeffrey Yepez",
"url": "https://arxiv.org/abs/1106.0739"
}
|
1106.0771
|
# Effect of Fermi Surface Nesting on Resonant Spin Excitations in
Ba1-xKxFe2As2
J.-P. Castellan S. Rosenkranz E. A. Goremychkin D. Y. Chung I. S. Todorov
Materials Science Division, Argonne National Laboratory, Argonne, IL
60439-4845, USA M. G. Kanatzidis Materials Science Division, Argonne
National Laboratory, Argonne, IL 60439-4845, USA Department of Chemistry,
Northwestern University, Evanston, IL 60208-3113, USA I. Eremin Institute
for Theoretical Physics III, Ruhr University Bochum, 44801 Bochum, Germany J.
Knolle Max-Planck-Institut für Physik komplexer Systeme, D-01187 Dresden,
Germany A. V. Chubukov S. Maiti Department of Physics, University of
Wisconsin-Madison, Madison,Wisconsin 53706, USA M. R. Norman F.
Weber111Current Address: Karlsruhe Institute of Technology, Institute of Solid
State Physics, 76021 Karlsruhe, Germany H. Claus Materials Science Division,
Argonne National Laboratory, Argonne, IL 60439-4845, USA T. Guidi R. I.
Bewley ISIS Pulsed Neutron and Muon Facility, Rutherford Appleton Laboratory,
Chilton, Didcot OX11 0QX, United Kingdom R. Osborn Materials Science
Division, Argonne National Laboratory, Argonne, IL 60439-4845, USA
ROsborn@anl.gov
###### Abstract
We report inelastic neutron scattering measurements of the resonant spin
excitations in Ba1-xKxFe2As2 over a broad range of electron band filling. The
fall in the superconducting transition temperature with hole doping coincides
with the magnetic excitations splitting into two incommensurate peaks because
of the growing mismatch in the hole and electron Fermi surface volumes, as
confirmed by a tight-binding model with $s_{\pm}$-symmetry pairing. The
reduction in Fermi surface nesting is accompanied by a collapse of the
resonance binding energy and its spectral weight caused by the weakening of
electron-electron correlations.
The connection between magnetism and unconventional superconductivity is one
of the most challenging issues in condensed matter physics. In unconventional
superconductors, such as the copper oxidesBonn:2006p34943 , heavy
fermionsPfleiderer:2009p31671 , organic charge-transfer
saltsMcKenzie:1997p34926 , and now the iron pnictides and
chalcogenidesDeLaCruz:2008p8095 ; Lynn:2009p28738 ; Taillefer:2010p34241 , the
superconducting state occurs in the presence of strong magnetic correlations
and sometimes coexists with magnetic order, fostering models of
superconducting pairing mediated by magnetic fluctuationsMazin:2008p11687 ;
Dahm:2009p15572 ; Yu:2009p31175 . Such models lead to unusual superconducting
gap symmetries, such as the $d$-wave symmetry observed in the high-temperature
copper oxide superconductors. Although there have been reports of energy gap
nodes in a few of the iron superconductors, most appear to have weakly
anisotropic gapsPaglione:2010p34248 . This is consistent both with
conventional $s$-wave pairing, in which the gap has the same sign over the
entire Fermi surface, and with unconventional $s_{\pm}$-wave pairing, in which
the gaps on the disconnected hole and electron Fermi surfaces have opposite
signMazin:2008p11687 .
The first spectroscopic evidence of unconventional $s_{\pm}$-wave symmetry was
provided by inelastic neutron scattering on optimally-doped Ba0.6K0.4Fe2As2
with the observation of a resonant spin excitation at the wavevector,
$\mathbf{Q}$, connecting the nearly cylindrical hole and electron Fermi
surfaces, centered at the zone center ($\Gamma$-point) and zone boundary
(M-point), respectively, i.e., at $\mathbf{Q}_{0}=(\pi,\pi)$ in the
crystallographic Brillouin zoneChristianson:2008p14965 . Similar excitations
have now been observed in a wide range of iron-based superconductors
Chi:2009p20185 ; Lumsden:2009p20184 ; Inosov:2009p31864 ; Lumsden:2010p31952 .
Within an itinerant model, the resonance arises from an enhancement in the
superconducting phase of the band electron susceptibility,
$\chi(\mathbf{Q},\omega)$, caused by coherence factors introduced by pair
formationKorshunov:2008p13468 ; Maier:2009p21834 . If
$\Delta_{\mathbf{k}+\mathbf{Q}}=-\Delta_{\mathbf{k}}$, where
$\Delta_{\mathbf{k}}$ are the values of the energy gap at points $\mathbf{k}$
on the Fermi surface, i.e., if Q connects points whose gaps have opposite
sign, the magnetic susceptibility of superconducting but otherwise non-
interacting fermions Re $\chi_{0}(\mathbf{Q},\omega)$ diverges logarithmically
upon approaching $2\Delta$ from below. In the Random Phase Approximation
(RPA), the full susceptibility is given by
$\chi(\mathbf{Q},\omega)=\chi_{0}(\mathbf{Q},\omega)\left[1-J(\mathbf{Q})\chi_{0}(\mathbf{Q},\omega)\right]^{-1}$
(1)
where $J(\mathbf{Q})$ represents electron-electron interactions. The
interactions produce a bound exciton with an energy $\Omega$ below $2\Delta$
given by $J(\mathbf{Q})\mathrm{Re}\,\chi_{0}(\mathbf{Q},\Omega)=1$, i.e. with
a binding energy of $2\Delta-\Omega$Eschrig:2006p23369 .
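To make the bound-state condition concrete, the sketch below illustrates $J(\mathbf{Q})\,\mathrm{Re}\,\chi_{0}(\mathbf{Q},\Omega)=1$ with a toy logarithmically divergent $\mathrm{Re}\,\chi_{0}$; the functional form and all parameter values are illustrative assumptions, not the band-structure calculation used later in the paper:

```python
import math
from scipy.optimize import brentq

two_delta = 24.0   # 2*Delta in meV (hypothetical value)
A = 0.5            # overall scale of Re chi0 (hypothetical value)

def re_chi0(omega):
    # Toy non-interacting susceptibility: diverges logarithmically as omega -> 2*Delta
    return A * math.log(two_delta / (two_delta - omega))

def resonance_energy(J):
    # Bound-state condition J * Re chi0(Omega) = 1, solved for 0 < Omega < 2*Delta
    return brentq(lambda w: J * re_chi0(w) - 1.0, 1e-6, two_delta - 1e-6)

for J in (4.0, 2.0, 1.0, 0.5):
    omega = resonance_energy(J)
    print(J, round(omega, 1), round(omega / two_delta, 2), round(two_delta - omega, 1))
# As J weakens, Omega/2Delta drifts toward 1 and the binding energy 2Delta - Omega
# collapses, which is the trend discussed below for increasing hole doping.
```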
Figure 1: (a-d) Inelastic neutron scattering from Ba1-xKxFe2As2 measured in
the superconducting phase at a temperature of 5 K using incident neutron
energies (Ei) of 30 meV and 60 meV, at (a) $x=0.3$ (Ei=30 meV), (b) $x=0.5$
(Ei=60 meV), (c) $x=0.7$ (Ei=30 meV), scaled by a factor 2.0, and (d) $x=0.9$
(Ei=30 meV), scaled by a factor 2.667. (e-h) The magnetic scattering vs Q
integrated over the energy transfer range of the inelastic peak, compared to a
fitted model that includes one (e) or two Gaussian peaks (f-h) (dashed black
lines) and a non-magnetic background (dotted line), given by the sum of a
quadratic term, consistent with single-phonon scattering, and a constant term,
consistent with multiple phonon scattering. The energy integration range is
(e) 9-14 meV, (f) 12-18 meV, (g) 10-14 meV and (h) 10-15 meV.
This letter addresses the evolution of the resonant spin excitations with band
filling. Resonant spin excitations in the iron-based superconductors have
mostly been observed in compounds close to optimal doping where the hole and
electron pockets have similar size. We have now studied the magnetic
excitations in Ba1-xKxFe2As2 over a broad range of hole dopings in which the
mismatch between the hole and electron Fermi surface volumes becomes
increasingly significant. Our results therefore provide insight into the
influence of Fermi surface nesting on the unconventional superconductivity. At moderate doping, the wavevector of the magnetic response broadens longitudinally and then, at higher doping, splits into two incommensurate peaks, which is correlated with a fall in Tc. The ratio of the resonant peak energy to twice the maximum superconducting gap energy, $\Omega/2\Delta$, is not universal as has been claimedYu:2009p31175 , but renormalizes continuously
to 1 with increasing hole concentration. This represents a reduction in the
exciton binding energy due to weakening electron-electron interactions and is
accompanied by a collapse of the resonant spectral weightEschrig:2006p23369 .
The reason for choosing Ba1-xKxFe2As2 in our investigation is that the
superconducting phase extends over a much broader range of dopant
concentration with hole-doping $(0.125\leq x\leq 1.0)$ than with electron-
doping $(0.08\leq x\leq 0.32)$Canfield:2010p34239 . According to ARPES
dataSato:2009p24879 , the shift of the chemical potential from optimal doping
at $x=0.4$ to the extreme overdoping at $x=1$ approximately doubles the radius
of the hole pockets and shrinks the electron pockets, which vanish close to
$x=1$.
We have prepared polycrystalline samples with $x=0.3$, 0.5, 0.7, and 0.9, to
supplement our earlier measurements of $x=0.4$. Details of the sample
synthesis procedures are reported elsewhereAvci:2011p36647 . The inelastic
neutron measurements were performed on the Merlin spectrometer at the ISIS
Pulsed Neutron Facility, UK, using incident energies of 30 and 60 meV and
temperatures of 5 K and 50 K, i.e., below and above Tc. The data were placed
on an absolute intensity scale by normalization to a vanadium standard.
Figure 2: Calculated Im$\chi(Q,\omega)$ with increasing hole concentration
based on a four-band tight-binding model with two circular hole pockets and
two elliptical electron pockets, with ellipticity $\epsilon=0.5$ and chemical
potentials (a) $\mu=0.0$, (b) $\mu=0.3$, and (c) $\mu=0.5$. The intensity map
corresponds to states/eV.
Fig. 1(a-d) summarizes the data at low temperature showing that inelastic
peaks are visible in all compositions centered in energy between $\sim 11$ meV
and 15 meV. The most striking observation is the pronounced $Q$-broadening of
the inelastic scattering at $x=0.5$ and its splitting into two incommensurate
peaks at $x=0.7$ and 0.9. This is seen most clearly in Fig. 1(e-h), which
shows the wavevector dependence of the energy-integrated intensity. The
magnetic contribution is fit to one or two peaks symmetrically centered around
$Q_{0}\sim 1.2$ Å-1, with phonons and multiple scattering contributing a
quadratic $Q$-dependent and $Q$-independent intensity, respectively. After
correction for the Fe2+ form factor, the split peaks have approximately equal
intensity. Since we are measuring polycrystalline samples, the $l$-dependence
is spherically averaged, but this cannot explain the size of the splitting.
The absolute values of $Q$ at $(\pi,\pi,0)$ and $(\pi,\pi,\pm\pi)$ are 1.15 Å-1 and 1.25 Å-1, respectively, whereas the peaks at $x=0.7$ are at 0.94 Å-1 and 1.52 Å-1.
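As a rough consistency check of these numbers, one can convert $(\pi,\pi,l)$ positions to $|Q|$ for a tetragonal cell; the lattice constants below are assumed typical values for Ba1-xKxFe2As2 and are not quoted in the text:

```python
import math

# Assumed tetragonal lattice constants (Angstrom); indicative values only
a, c = 3.9, 13.3

def Q_mag(h, k, l):
    # (h, k, l) in reciprocal lattice units of the I4/mmm cell
    return 2 * math.pi * math.sqrt((h / a) ** 2 + (k / a) ** 2 + (l / c) ** 2)

# (pi, pi, 0) -> (0.5, 0.5, 0); (pi, pi, pi) -> (0.5, 0.5, 1) since the Fe-plane
# spacing is c/2 in this cell
print(round(Q_mag(0.5, 0.5, 0.0), 2))  # ~1.14 A^-1, close to the quoted 1.15 A^-1
print(round(Q_mag(0.5, 0.5, 1.0), 2))  # ~1.23 A^-1, close to the quoted 1.25 A^-1
```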
In order to understand the incommensurability, we have performed calculations
of the doping dependence of the dynamic magnetic susceptibility using a simple
tight-binding model of two hole pockets centered around the $\Gamma$-point and
two electron pockets centered around the M-points. To make quantitative as
well as qualitative comparisons to experiments, we use the Fermi velocities
and the size of the Fermi pockets based on Refs. Singh:2008p8173 ;
Mazin:2008p11687 . A key parameter is the ellipticity, $\epsilon$, of the
electron pockets. Perfect nesting requires $\epsilon$ to be zero, i.e.,
circular electron pockets. With increasing ellipticity, the magnetic peak
broadens and extends to larger $\Delta Q$ around $Q_{0}$, because the
increasing mismatch of the hole and electron Fermi surfaces weakens the
singularity in the magnetic response at $Q_{0}$. The intensity is also lower
because of the weaker nesting and smaller size of the superconducting gap on
this Fermi surface.
In the following, we set $\epsilon=0.5$, which gives the best description of
the spin waves in the parent compoundKnolle:2010p33082 , to investigate the
doping evolution of Im$\chi(Q,\omega)$ for various hole dopings, i.e. for
positive values of the hole doping parameter, $\mu$. In agreement with the
experiments, the magnetic response is initially commensurate and a well-
resolved splitting is only found at $\mu=0.5$. The critical doping at which
the magnetic peak splits, $\mu_{c}$, also depends on the ellipticity. For
larger (smaller) $\epsilon$, the splitting becomes visible at larger (smaller)
values of $\mu$.
A comparison of Fig. 1 and Fig. 2 demonstrates that the calculations reproduce
the observed behavior as a function of potassium doping. The values of the
calculated incommensurability vs $\mu$ are plotted with the experimental data
in Fig. 3a, assuming $\mu\sim x$. The good agreement, with $\mu_{c}\sim 0.4$,
shows that the observed incommensurability is consistent with the change in
Fermi surface geometry with hole doping. Furthermore, the value of the
incommensurability at $x=0.9$ is in agreement with neutron results from pure
KFe2As2Lee:2011hm , which were also interpreted as interband scattering. Fig.
3b shows that the onset of incommensurability with $x$ occurs just when Tc
starts to fall, showing a direct correlation between the degree of Fermi
surface nesting and the pairing strength.
Figure 3: (a) The wavevector, $Q$, of the magnetic excitations in
Ba1-xKxFe2As2, determined from the peak centers in Fig. 1(e-h) vs $x$ (solid
circles). The open circles are from the theoretical calculations in Fig. 2
assuming $\mu=x$. (b) Tc (solid circles) and $\Omega$ (open circles)
determined from the resonantly enhanced component of the inelastic peaks shown
in Fig. 1, i.e., after subtracting the 50 K data from the 5 K data. (c) The
ratio of the resonant excitation energy to twice the maximum superconducting
energy gap, $\Omega/2\Delta$ (solid circles), using
$2\Delta/\mathrm{k_{B}T_{c}}=7.5$ Nakayama:2011eh, and the resonant spectral
weight (open circles). The inset shows the linear dependence of the spectral
weight vs $2\Delta-\Omega$. All other lines are guides to the eye.
The doping dependence of the inelastic peak energies, $\Omega$, is summarized
in Fig. 3b, where they are plotted vs $x$, along with Tc, which falls from 38
K at $x=0.4$ to 7 K at $x=0.9$. ARPES measurements suggest that the gap scales
as $2\Delta/\mathrm{k_{B}T_{c}}\sim 7.5$Nakayama:2011eh . With this
assumption, $\Omega/2\Delta$ is observed to increase continuously from 0.52 at
$x=0.3$ to 0.98 at $x=0.7$ (Fig. 3c). This is clearly inconsistent with the
postulated universal scaling of $\Omega/2\Delta\sim 0.64$Yu:2009p31175 ,
proposed to characterize all unconventional superconductors, including the
copper oxides and heavy fermions.
Figure 4: Inelastic neutron scattering from Ba1-xKxFe2As2 vs energy transfer
in meV measured at a temperature of 5 K (blue circles) and 50 K (red circles)
using incident neutron energies (Ei) of 30 meV and 60 meV. (a) $x=0.3$ (Ei=30
meV), (b) $x=0.4$ (Ei=60 meV) from Ref. Christianson:2008p14965 , (c) $x=0.5$
(Ei=60 meV), and (d) $x=0.7$ (Ei=30 meV). The $Q$-integration ranges are
(a,b,c) 1.0 to 1.4 Å-1 and (d) 1.2 to 2.0 Å-1, i.e. only the peak at
higher-$Q$ is included for $x=0.7$ so the data are plotted on an expanded
scale to correct for this and the reduction in Fe2+ form factor. The resonant
enhancement at $x=0.7$ (d), is also observed in the lower-$Q$ peak.
At $x=0.9$, where the magnetic scattering peaks at $\sim 13$ meV,
$\Omega/2\Delta$ would be greater than 1, which is inconsistent with the
requirement that the resonance is a bound state with a maximum energy of
$2\Delta$. In order to explain this anomaly, it is necessary to look at how
much of the observed magnetic spectral weight is enhanced in the
superconducting phase. Fig. 4 shows the energy spectra for $0.3\leq x\leq
0.7$, including $x=0.4$ published earlierChristianson:2008p14965 , at 5 K and
50 K, i.e., both below and above Tc. The data have been converted to
Im$\chi(Q,\omega)$ by correcting for the Bose temperature factor. At $x=0.3$,
0.4, and 0.5, the resonant enhancement of the intensity below Tc is clearly
evident, but at $x=0.7$, it is only just statistically significant. Fig. 4(d)
shows the resonant enhancement in the high-Q peak, but it is also evident in
the low-Q peak. Fig. 3(c) shows that the resonant spectral weight determined
by subtracting the 50 K data from the 5 K data decreases sharply with
increasing $x$, falling to zero at $x\sim 0.72$. We had insufficient time to
measure the $x=0.9$ spectra above Tc, but the trend at lower $x$ suggests that
there would be no resonant enhancement below Tc.
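For reference, the detailed-balance (Bose factor) correction mentioned above amounts to multiplying the measured intensity on the energy-loss side by $1-e^{-\hbar\omega/k_{B}T}$; a minimal sketch follows (prefactor conventions vary):

```python
import math

kB = 0.08617  # Boltzmann constant in meV/K

def bose_factor_correction(omega_meV, T_K):
    # Removes the thermal population factor: Im chi ~ (1 - exp(-hbar*omega/kB*T)) * S
    return 1.0 - math.exp(-omega_meV / (kB * T_K))

for T in (5.0, 50.0):
    print(T, round(bose_factor_correction(14.0, T), 3))
# 5.0  1.0    -> essentially no correction at base temperature
# 50.0 0.961  -> a ~4% correction just above Tc for a 14 meV transfer
```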
This collapse in the resonant spectral weight is clearly linked to the
increase of $\Omega/2\Delta$ to 1 (Fig. 3c), a correlation that is predicted
by RPA models developed to explain neutron scattering results in the copper
oxide superconductorsEschrig:2006p23369 ; Pailhes:2006p32258 . In Equation 1,
the precise value of the resonance energy is dictated by the interaction term
$J$, so $\Omega/2\Delta$ is predicted to shift towards 1 as the electron
correlations weaken. This was the interpretation of point-contact tunneling on
overdoped Bi2Sr2CaCu2O8+δZasadzinski:2001p32155 and it is quite plausible
that electron correlations are diminished in Ba1-xKxFe2As2 as Fermi surface
nesting and the consequent antiferromagnetic correlations are weakened by hole
doping. The itinerant models predict that the reduction in the resonant
spectral weight is directly proportional to the reduction in the exciton
binding energy, $2\Delta-\Omega$Abanov:2001p34962 , in excellent agreement
with the linear relation shown in the inset to Fig. 3c.
The resonant spin excitations in Ba1-xKxFe2As2 prove to be sensitive probes of
both the symmetry of the superconducting gap and the Fermi surface geometry.
The incommensurability is seen to be a signature of imperfect nesting, and its
onset with increasing $x$ is correlated with the decline in Tc and the
collapse of the resonant spectral weight. The close correspondence between the
strength of Fermi surface nesting and superconductivity lends considerable
weight to models in which magnetic fluctuations provide the ‘pairing glue’ in
the iron pnictide and chalcogenide superconductors.
This work was supported by the Materials Sciences and Engineering Division of
the Office of Basic Energy Sciences, Office of Science, U.S. Department of
Energy, under contract No. DE-AC02-06CH11357.
## References
* (1) D. A. Bonn, Nat Phys 2, 159 (2006).
* (2) C. Pfleiderer, Rev Mod Phys 81, 1551 (2009).
* (3) R. McKenzie, Science 278, 820 (1997).
* (4) C. de la Cruz et al., Nature 453, 899 (2008).
* (5) J. Lynn and P. Dai, Physica C 469, 469 (2009).
* (6) L. Taillefer, Ann Rev Cond Matt Phys 1, 51 (2010).
* (7) I. Mazin, D. Singh, M. Johannes, and M. H. Du, Phys Rev Lett 101, 057003 (2008).
* (8) T. Dahm et al., Nat Phys 5, 217 (2009).
* (9) G. Yu, Y. Li, E. M. Motoyama, and M. Greven, Nat Phys 5, 873 (2009).
* (10) J. Paglione and R. L. Greene, Nat Phys 6, 645 (2010).
* (11) A. Christianson et al., Nature 456, 930 (2008).
* (12) S. Chi et al., Phys Rev Lett 102, 107006 (2009).
* (13) M. Lumsden et al., Phys Rev Lett 102, 107005 (2009).
* (14) D. S. Inosov et al., Nat Phys 6, 178 (2009).
* (15) M. D. Lumsden et al., Nat Phys 6, 182 (2010).
* (16) M. M. Korshunov and I. Eremin, Phys Rev B 78, 140509 (2008).
* (17) T. A. Maier, S. Graser, D. Scalapino, and P. Hirschfeld, Phys Rev B 79, 134520 (2009).
* (18) M. Eschrig, Adv Phys 55, 47 (2006).
* (19) P. C. Canfield and S. L. Bud’ko, Ann Rev Cond Matt Phys 1, 27 (2010).
* (20) T. Sato et al., Phys Rev Lett 103, 047002 (2009).
* (21) S. Avci et al., Phys Rev B 83, 172503 (2011).
* (22) D. Singh and M.-H. Du, Phys Rev Lett 100, 237003 (2008).
* (23) J. Knolle, I. Eremin, A. V. Chubukov, and R. Moessner, Phys Rev B 81, 140506 (2010).
* (24) C. Lee et al., Phys Rev Lett 106, 067003 (2011).
* (25) K. Nakayama et al., Phys Rev B 83, 020501 (2011).
* (26) S. Pailhès et al., Phys Rev Lett 96, 257001 (2006).
* (27) J. Zasadzinski et al., Phys Rev Lett 87, 067005 (2001).
* (28) A. Abanov, A. Chubukov, and J. Schmalian, J Electron Spectrosc 117-118, 129 (2001).
|
arxiv-papers
| 2011-06-03T21:42:08 |
2024-09-04T02:49:19.363232
|
{
"license": "Public Domain",
"authors": "J.-P. Castellan, S. Rosenkranz, E. A. Goremychkin, D. Y. Chung, I. S.\n Todorov, M. G. Kanatzidis, I. Eremin, J. Knolle, A. V. Chubukov, S. Maiti, M.\n R. Norman, F. Weber, H. Claus, T. Guidi, R. I. Bewley, and R. Osborn",
"submitter": "Ray Osborn",
"url": "https://arxiv.org/abs/1106.0771"
}
|
1106.0776
|
# Semantics for Possibilistic Disjunctive Programs ††thanks: This is a revised
and improved version of the papers _Semantics for Possibilistic Disjunctive
Programs_ appeared in C. Baral, G. Brewka and J. Schipf (Eds), Ninth
International Conference on Logic Programming and Nonmonotonic Reasoning
(LPNMR-07), LNAI 4483. _Semantics for Possibilistic Disjunctive Logic
programs_ which appears in S. Constantini and W. Watson (Eds), Answer Set
Programming: Advantage in Theory and Implementation.
JUAN CARLOS NIEVES
Universitat Politècnica de Catalunya
Software Department (LSI)
c/Jordi Girona 1-3 E-08034 Barcelona Spain
jcnieves@lsi.upc.edu MAURICIO OSORIO
Universidad de las Américas - Puebla
CENTIA
Sta. Catarina Mártir Cholula Puebla 72820 México
osoriomauri@gmail.com ULISES CORTÉS
Universitat Politècnica de Catalunya
Software Department (LSI)
c/Jordi Girona 1-3 E-08034 Barcelona Spain
ia@lsi.upc.edu
(March 22, 2011; May 23, 2011)
###### Abstract
In this paper, a possibilistic disjunctive logic programming approach for
modeling uncertain, incomplete and inconsistent information is defined. This
approach introduces the use of possibilistic disjunctive clauses which are
able to capture incomplete information and incomplete states of a knowledge
base at the same time.
By considering a possibilistic logic program as a possibilistic logic theory,
a construction of a possibilistic logic programming semantics based on answer sets and the proof theory of possibilistic logic is defined. It is shown that
this possibilistic semantics for disjunctive logic programs can be
characterized by a fixed-point operator. It is also shown that the suggested
possibilistic semantics can be computed by a resolution algorithm and the
consideration of optimal refutations from a possibilistic logic theory.
In order to manage inconsistent possibilistic logic programs, a preference
criterion between inconsistent possibilistic models is defined; in addition,
the approach of cuts for restoring consistency of an inconsistent
possibilistic knowledge base is adopted. The approach is illustrated in a
medical scenario.
###### keywords:
Answer Set Programming, Uncertain Information, Possibilistic Reasoning.
## 1 Introduction
Answer Set Programming (ASP) is one of the most successful logic programming
approaches in Non-monotonic Reasoning and Artificial Intelligence applications
[Baral (2003), Gelfond (2008)]. In [Nicolas et al. (2006)], a possibilistic
framework for reasoning under uncertainty was proposed. This framework is a
combination between ASP and possibilistic logic [Dubois et al. (1994)].
Possibilistic Logic is based on possibilistic theory in which, at the
mathematical level, degrees of possibility and necessity are closely related
to fuzzy sets [Dubois et al. (1994)]. Due to the natural properties of
possibilistic logic and ASP, Nicolas et al.’s approach allows us to deal with
reasoning that is at the same time _non-monotonic_ and _uncertain_. Nicolas et
al.’s approach is based on the concept of _possibilistic stable model_ which
defines a semantics for _possibilistic normal logic programs_.
An important property of possibilistic logic is that it is _axiomatizable_ in
the necessity-valued case [Dubois et al. (1994)]. This means that there is a
formal system (a set of axioms and inference rules) such that from any set of possibilistic formulæ ${\mathcal{}F}$ and for any possibilistic formula $\Phi$,
$\Phi$ is a logical consequence of ${\mathcal{}F}$ if and only if $\Phi$ is
derivable from ${\mathcal{}F}$ in this formal system. A result of this
property is that the inference in possibilistic logic can be managed by both a
syntactic approach (axioms and inference rules) and a possibilistic model
theory approach (interpretations and possibilistic distributions).
Equally important to consider is that the answer set semantics inference can
also be characterized as _a logic inference_ in terms of the proof theory of
intuitionistic logic and intermediate logics [Pearce (1999), Osorio et al.
(2004)]. This property suggests that one can explore extensions of the answer
set semantics by considering the inference of different logics.
Since in [Dubois et al. (1994)] an axiomatization of possibilistic logic has
been defined, in this paper we explore the characterization of a possibilistic
semantics for capturing possibilistic logic programs in terms of the proof
theory of possibilistic logic and the standard answer set semantics. A nice
feature of this characterization is that it is applicable to _disjunctive as
well as normal possibilistic logic programs_ , and, with minor modification,
to possibilistic logic programs containing a _strong negation operator_.
The use of possibilistic disjunctive logic programs allows us to capture
_incomplete information_ and _incomplete states of a knowledge base_ at the
same time. In order to illustrate the use of possibilistic disjunctive logic
programs, let us consider a scenario in which uncertain and incomplete
information is always present. This scenario can be observed in the process of
_human organ transplanting_. There are several factors that make this process
sophisticated and complex. For instance:
* •
the transplant acceptance criteria vary considerably among transplant teams from
the same geographical area and substantially between more distant transplant
teams [López-Navidad et al. (1997)]. This means that the acceptance criteria
applied in one hospital could be invalid or at least questionable in another
hospital.
* •
there are lots of factors that make the diagnosis of an organ donor’s disease
in the organ recipient unpredictable. For instance, if an organ donor $D$ has
hepatitis, then an organ recipient $R$ could be infected by an organ of $D$.
According to [López-Navidad and Caballero (2003)], there are cases in which
the infection can occur; however, the recipient can _spontaneously_ clear the
infection, for example hepatitis. This means that an organ donor’s infection
can be _present_ or _non-present_ in the organ recipient. Of course there are
infections which can be prevented by treating the organ recipient post-
transplant.
* •
the clinical state of an organ recipient can be affected by several factors,
for example malfunctions of the graft. This means that the clinical state of
an organ recipient can be _stable_ or _unstable_ after the graft because the
graft can have _good graft functions_ , _delayed graft functions_ and
_terminal insufficient functions_ 111Usually, when a doctor says that an organ
has _terminally insufficient functions_ , it means that there are no clinical
treatments for improving the organ’s functions..
It is important to point out that the transplant acceptance criteria rely on
the kind of organ (kidney, heart, liver, _etc._) considered for transplant and
the clinical situation of the potential organ recipients.
Let us consider the particular case of a kind of kidney transplant with organ
donors who have a kind of infection, for example: endocarditis, hepatitis. As
already stated, the clinical situation of the potential organ recipients is
relevant in the organ transplant process. Hence the clinical situation of an
organ recipient is denoted by the predicate $cs(t,T)$, such that $t$ can be
_stable_ , _unstable_ , 0-_urgency_ and $T$ denotes a moment in time. Another
important factor, that is considered, is the state of the organ’s functions.
This factor is denoted by the predicate $o(t,T)$ such that $t$ can be
_terminal-insufficient functions_ , _good-graft functions_ , _delayed-graft
functions_ , _normal-graft functions_ and $T$ denotes a moment in time. Also,
the state of an infection in both the organ recipient and the organ donor are
considered, these states are denoted by the predicates $r\\_inf(present,T)$
and $d\\_inf(present,T)$ respectively so that $T$ denotes a moment in time.
The last predicate that is presented is $action(t,T)$ such that $t$ can be
_transplant_ , _wait_ , _post-transplant treatment_ and $T$ denotes a moment
in time. This predicate denotes the possible actions of a doctor. In Figure
1222This finite state automaton was developed under the supervision of Francisco Caballero M. D. Ph. D. from the Hospital de la Santa Creu I Sant Pau, Barcelona, Spain., a finite state automaton is presented. In this automaton, each node represents a possible situation where an organ recipient
can be found and the arrows represent the doctor’s possible actions. Observe
that we are assuming that in the initial state the organ recipient is
clinically stable and he does not have an infection; however, he has a kidney
whose functions are terminally insufficient. From the initial state, the
doctor’s actions would be either to perform a kidney transplant or just
wait333In the automata of Figure 1, we are not considering the possibility
that there is a waiting list for organs. This waiting list has different
policies for assigning an organ to an organ recipient..
Figure 1: An automaton of states and actions for considering infections in
kidney organ transplant.
According to Figure 1, an organ recipient could be found in different
situations after a graft. The organ recipient may require another graft and
the state of the infection could be unpredictable. This situation makes the
automaton of Figure 1 nondeterministic. Let us consider a couple of extended
disjunctive clauses which describe some situations presented in Figure 1.
$r\\_inf(present,T2)\vee\lnot r\\_inf(present,T2)\leftarrow action(transplant,T), d\\_inf(present,T), T2=T+1.$
$o(good\\_graft\\_funct,T2)\vee o(delayed\\_graft\\_funct,T2)\vee$
$o(terminal\\_insufficient\\_funct,T2)\leftarrow action(transplant,T),T2=T+1.$
As syntactic clarification, we want to point out that $\lnot$ is regarded as a
_strong negation_ which is not exactly the negation in classical logic. In
fact, any atom negated by strong negation will be replaced by a new atom as it
is done in ASP. This means that $a\vee\lnot a$ cannot be regarded as a logic
tautology.
Continuing with our medical scenario, we can see that the intended meaning of
the first clause is that if the organ donor has an infection, then the
infection can be _present_ or _non-present_ in the organ recipient after the
graft, and the intended meaning of the second one is that the graft’s
functions can be: _good_ , _delayed_ and _terminal_ after the graft. Observe
that these clauses are not capturing the uncertainty that is involved in each
statement. For instance, w.r.t. the first clause, one may wish to attach a degree of uncertainty in order to capture the uncertainty that is involved in
this statement — keeping in mind that the organ recipient can be infected by
the infection of the donor’s organ; however, the infection can be
_spontaneously_ cleared by the organ recipient as it is the case of hepatitis
[López-Navidad and Caballero (2003)].
In logic programming literature, one can find different approaches for
representing uncertain information [Kifer and Subrahmanian (1992), Ng and
Subrahmanian (1992), Lukasiewicz (1998), Kern-Isberner and Lukasiewicz (2004),
van Emden (1986), Rodríguez-Artalejo and Romero-Díaz (2008), Van-Nieuwenborgh
et al. (2007), Fitting (1991), Lakshmanan (1994), Baldwin (1987), Dubois et
al. (1991), Alsinet and Godo (2002), Alsinet and Godo (2000), Alsinet et al.
(2008), Nicolas et al. (2006)]. Basically, these approaches differ in the
underlying notion of uncertainty and how uncertainty values, associated with
clauses and facts, are managed. Usually the selection of an approach for
representing uncertain information relies on the kind of information which has
to be represented. In psychology literature, one can find significant
observations related to the presentation of uncertain information. For
instance, Tversky and Kahneman have observed in [Tversky and Kahneman (1982)]
that people commonly use statements such as “ _I think that_ $\dots$”, “
_chances are_ $\dots$”, “ _it is probable that_ $\dots$”, “ _it is plausible
that_ $\dots$”, _etc._ , for supporting their decisions. In fact, many times,
experts in a domain, such as medicine, appeal to their intuition by using
these kinds of statements [Fox and Das (2000), Fox and Modgil (2006)]. One can
observe that these statements have adjectives which quantify the information
as a common denominator. These adjectives are for example: _probable_ ,
_plausible_ , _etc_. This suggests that the consideration of labels for the
_syntactic representation of uncertain values_ could help represent uncertain
information pervaded by ambiguity.
Since possibilistic logic defines a proof theory in which the strength of a
conclusion is the strength of the weakest argument in its proof, the
consideration of an ordered set of labels for capturing incomplete states of a
knowledge base is feasible. The only formal requirement is that this set of
adjectives/labels must be a finite set. For instance, for the given medical
scenario, a transplant coordinator444A transplant coordinator is an expert in
all of the processes of transplants [López-Navidad et al. (1997)]. can suggest
a set of labels in order to quantify a medical knowledge base and, of course,
to define an order between those labels. By considering those labels, we can
have possibilistic clauses as:
probable: $r\\_inf(present,T2)\vee\lnot r\\_inf(present,T2)\leftarrow action(transplant,T), d\\_inf(present,T), T2=T+1.$
Informally speaking, the reading of this clause is: _it is _probable_ that if
the organ donor has an infection, then the organ recipient can be infected or
not after a graft._
As we can see, possibilistic programs with _negation as failure_ represent a
rich class of logic programs which are especially adapted to automated
reasoning when the available information is pervaded by ambiguity.
In this paper, we extend the work of two earlier papers [Nieves et al.
(2007a), Nieves et al. (2007b)] in order to obtain a simple logic
characterization of a possibilistic logic programming semantics for capturing
possibilistic programs; this semantics is applicable to disjunctive as well as
normal logic programs. As we have already mentioned, the construction of the
possibilistic semantics is based on the proof theory of possibilistic logic.
Following this approach:
* •
We define the inference $\Vvdash_{PL}$. This inference takes as references the
standard definition of the answer set semantics and the inference
$\vdash_{PL}$ which corresponds to the inference of possibilistic logic.
* •
The possibilistic semantics is defined in terms of a syntactic reduction,
$\Vvdash_{PL}$ and the concept of _i-greatest set_.
* •
Since the inference of possibilistic logic is computable by a generalization
of the classical resolution rule, it is shown that the defined possibilistic
semantics is computable by inferring optimal refutations.
* •
By considering _the principle of partial evaluation_ , it is shown that the
given possibilistic semantics can be characterized by a possibilistic partial
evaluation operator.
* •
Finally, since the possibilistic logic uses $\alpha$_-cuts_ to manage
inconsistent possibilistic knowledge bases, an approach of cuts for restoring
consistency of an inconsistent possibilistic knowledge base is adopted.
The rest of the paper is divided as follows: In §2 we give all the background
and necessary notation. In §3, the syntax of our possibilistic framework is
presented. In §4, the semantics for capturing the possibilistic logic programs
is defined. Also it is shown that this semantics is computable by considering
a possibilistic resolution rule and partial evaluation. In §5, some criteria
for managing inconsistent possibilistic logic programs are defined. In §6, we
present a small discussion w.r.t. related approaches to our work. Finally, in
the last section, we present our conclusions and future work.
## 2 Background
In this section we introduce the necessary terminology and relevant
definitions in order to have a self-contained document. We assume that the
reader is familiar with basic concepts of _classic logic_ , _logic
programming_ and _lattices_.
### 2.1 Lattices and order
We start by defining some fundamental definitions of lattice theory (see
[Davey and Priestly (2002)] for more details).
###### Definition 1
Let ${\mathcal{}Q}$ be a set. An order (or partial order) on ${\mathcal{}Q}$
is a binary relation $\leq$ on ${\mathcal{}Q}$ such that, for all
$x,y,z\in{\mathcal{}Q}$,
(i)
$x\leq x$
(ii)
$x\leq y$ and $y\leq x$ imply $x=y$
(iii)
$x\leq y$ and $y\leq z$ imply $x\leq z$
These conditions are referred to, respectively, as reflexivity, antisymmetry
and transitivity.
A set ${\mathcal{}Q}$ equipped with an order relation $\leq$ is said to be an
ordered set (or partial ordered set). It will be denoted by
(${\mathcal{}Q}$,$\leq$).
###### Definition 2
Let (${\mathcal{}Q}$,$\leq$) be an ordered set and let
$S\subseteq{\mathcal{}Q}$. An element $x\in{\mathcal{}Q}$ is an upper bound of
$S$ if $s\leq x$ for all $s\in S$. A lower bound is defined dually. The set of
all upper bounds of $S$ is denoted by $S^{u}$ (read as ‘$S$ upper’) and the
set of all lower bounds by $S^{l}$ (read as ‘$S$ lower’).
If $S^{u}$ has a minimum element $x$, then $x$ is called the least upper bound
(${\mathcal{}LUB}$) of $S$. Equivalently, $x$ is the least upper bound of $S$
if
(i)
$x$ is an upper bound of $S$, and
(ii)
$x\leq y$ for all upper bound $y$ of $S$.
The least upper bound of $S$ exists if and only if there exists
$x\in{\mathcal{}Q}$ such that
$(\forall y\in{\mathcal{}Q})[((\forall s\in S)s\leq y)\Longleftrightarrow
x\leq y],$
and this characterizes the ${\mathcal{}LUB}$ of $S$. Dually, if $S^{l}$ has a
greatest element, $x$, then $x$ is called the greatest lower bound
(${\mathcal{}GLB}$) of $S$. Since the least element and the greatest element
are unique, ${\mathcal{}LUB}$ and ${\mathcal{}GLB}$ are unique when they
exist.
The least upper bound of $S$ is called the supremum of $S$ and it is denoted
by $sup~{}S$; the greatest lower bound of S is called the infimum of S and it
is denoted by $inf~{}S.$
###### Definition 3
Let (${\mathcal{}Q}$,$\leq$) be a non-empty ordered set.
(i)
If $sup\\{x,y\\}$ and $inf\\{x,y\\}$ exist for all $x,y\in{\mathcal{}Q}$, then
${\mathcal{}Q}$ is called a lattice.
(ii)
If $sup~{}S$ and $inf~{}S$ exist for all $S\subseteq{\mathcal{}Q}$, then
${\mathcal{}Q}$ is called a complete lattice.
###### Example 1
Let us consider the set of labels ${\mathcal{}Q}:=\\{Certain$, $Confirmed,$
$Probable,$ $Plausible,$ $Supported,$ $Open\\}$555This set of labels was taken
from [Fox and Modgil (2006)]. In that paper, the authors argue that we can
construct a set of labels (they call those: _modalities_) in a way that this
set provides a simple scale for ordering the claims of our beliefs. We will
use this kind of labels for quantifying the degree of uncertainty of a
statement. and let $\preceq$ be a partial order such that the following set of
relations holds: $\\{Open\preceq Supported$, $Supported\preceq Plausible$,
$Supported\preceq Probable$, $Probable\preceq Confirmed$, $Plausible\preceq
Confirmed$, $Confirmed\preceq Certain\\}$. A graphic representation of ${\mathcal{}Q}$ according to $\preceq$ is shown in Figure 2. It is not difficult to see that $({\mathcal{}Q},\preceq)$ is a lattice and, moreover, a complete lattice.
Figure 2: A graphic representation of a lattice where the following relations hold: $\\{Open\preceq Supported$, $Supported\preceq Plausible$, $Supported\preceq Probable$, $Probable\preceq Confirmed$, $Plausible\preceq Confirmed$, $Confirmed\preceq Certain\\}$.
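The lattice property claimed in Example 1 can be checked mechanically. The following Python sketch (an illustration, not part of the formal development) builds the order from the listed relations and verifies that every pair of labels has a least upper bound and a greatest lower bound:

```python
from itertools import product

# Labels and the covering relations listed in Example 1
labels = ["Open", "Supported", "Plausible", "Probable", "Confirmed", "Certain"]
covers = [("Open", "Supported"), ("Supported", "Plausible"),
          ("Supported", "Probable"), ("Probable", "Confirmed"),
          ("Plausible", "Confirmed"), ("Confirmed", "Certain")]

# Reflexive-transitive closure gives the full order relation
leq = {(x, x) for x in labels} | set(covers)
changed = True
while changed:
    changed = False
    for (a, b), (c, d) in product(list(leq), repeat=2):
        if b == c and (a, d) not in leq:
            leq.add((a, d))
            changed = True

def lub(x, y):
    ubs = [z for z in labels if (x, z) in leq and (y, z) in leq]
    least = [z for z in ubs if all((z, w) in leq for w in ubs)]
    return least[0] if least else None

def glb(x, y):
    lbs = [z for z in labels if (z, x) in leq and (z, y) in leq]
    greatest = [z for z in lbs if all((w, z) in leq for w in lbs)]
    return greatest[0] if greatest else None

# (Q, <=) is a lattice iff every pair of labels has a LUB and a GLB
assert all(lub(x, y) and glb(x, y) for x, y in product(labels, repeat=2))
print(lub("Plausible", "Probable"), glb("Plausible", "Probable"))  # Confirmed Supported
```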
### 2.2 Logic programs: Syntax
The language of a propositional logic has an alphabet consisting of
(i)
proposition symbols: $\bot,p_{0},p_{1},...$
(ii)
connectives : $\vee,\wedge,\leftarrow,\lnot,\;not$
(iii)
auxiliary symbols : ( , )
in which $\vee,\wedge,\leftarrow$ are binary-place connectives, $\lnot$ and $not$ are unary-place connectives, and $\bot$ is a zero-ary connective. The proposition
symbols and $\bot$ stand for the indecomposable propositions, which we call
atoms, or atomic propositions. Atoms negated by $\lnot$ will be called
_extended atoms_.
###### Remark 1
We will use the concept of atom without paying attention to whether it is an
extended atom or not.
The negation sign $\lnot$ is regarded as the so called _strong negation_ by
the ASP’s literature and the negation $not$ as the _negation as failure_. A
literal is an atom, $a$, or the negation of an atom $not\;a$. Given a set of
atoms $\\{a_{1},...,a_{n}\\}$, we write $not\;\\{a_{1},...,a_{n}\\}$ to denote
the set of literals $\\{not\;a_{1},...,not\;a_{n}\\}.$ An extended disjunctive
clause, C, is denoted:
$a_{1}\vee\ldots\vee a_{m}\leftarrow
a_{m+1},\dots,a_{j},not\;a_{j+1},\dots,not\;a_{n}$
in which $m\geq 0$, $n\geq 0$, $m+n>0$, each $a_{i}$ is an atom666Notice that
these atoms can be _extended atoms_.. When $n=0$ and $m>0$ the clause is an
abbreviation of $a_{1}\vee\ldots\vee a_{m}\leftarrow$; clauses of these forms
are some times written just as $a_{1}\vee\ldots\vee a_{m}$. When $m=0$ the
clause is an abbreviation of:
$\leftarrow a_{1},\dots,a_{j},not\;a_{j+1},\dots,not\;a_{n}$
Clauses of this form are called constraints (the rest, non-constraint
clauses). An extended disjunctive program $P$ is a finite set of extended
disjunctive clauses. By ${\mathcal{}L}_{P}$, we denote the set of atoms in the
language of $P$.
Sometimes we denote an extended disjunctive clause C by
${\mathcal{}A}\leftarrow{\mathcal{}B}^{+},\ not\;{\mathcal{}B}^{-}$, where ${\mathcal{}A}$ contains all the head literals, ${\mathcal{}B}^{+}$ contains
all the positive body literals and ${\mathcal{}B}^{-}$ contains all the
negative body literals. When ${\mathcal{}B}^{-}=\emptyset$, the clause is
called positive disjunctive clause. A set of positive disjunctive clauses is
called a positive disjunctive logic program. When ${\mathcal{}A}$ is a
singleton set, the clause can be regarded as a normal clause. A normal logic
program is a finite set of normal clauses. Finally, when ${\mathcal{}A}$ is a
singleton set and ${\mathcal{}B}^{-}=\emptyset$, the clause can also be
regarded as a definite clause. A finite set of definite clauses is called a
definite logic program.
We will manage the strong negation ($\neg$), in our logic programs, as it is
done in ASP [Baral (2003)]. Basically, each extended atom $\neg a$ is replaced
by a new atom symbol $a^{\prime}$ which does not appear in the language of the
program. For instance, let $P$ be the normal program:
$a\leftarrow q$. $\qquad$ $q$.
$\lnot q\leftarrow r$. $\qquad$ $r$.
Then replacing each extended atom by a new atom symbol, we will have:
$a\leftarrow q$. $\qquad$ $q$.
$q^{\prime}\leftarrow r$. $\qquad$ $r$.
In order not to allow models with complementary atoms, that is $q$ and $\lnot
q$, a constraint of the form $\leftarrow q,q^{\prime}$ is usually added to the
logic program. In our approach, this constraint can be omitted in order to
allow models with complementary atoms. In fact, the user could add/omit this
constraint without losing generality.
Formulæ are constructed as usual in classic logic by the connectives:
$\vee,\wedge,\leftarrow,\sim,\bot$. A theory $T$ is a finite set of formulæ.
By ${\mathcal{}L}_{T}$, we denote the set of atoms that occur in _T_. When we
treat a logic program as a theory,
* •
each negative literal $not\;a$ is replaced by $\sim a$ such that $\sim$ is
regarded as the negation in classic logic.
* •
each constraint $\leftarrow a_{1},\dots,a_{j},not\;a_{j+1},\dots,not\;a_{n}$
is rewritten according to the formula $a_{1}\wedge\dots\wedge a_{j}\wedge\sim
a_{j+1}\wedge\dots\wedge\sim a_{n}\rightarrow\bot$.
Given a set of proposition symbols $S$ and a theory $\Gamma$ in a logic $X$, we write $\Gamma\vdash_{X}S$ if and only if $\Gamma\vdash_{X}s$ for all $s\in S$.
### 2.3 Interpretations and models
In this section, we define some relevant concepts w.r.t. semantics. The first
basic concept that we introduce is _interpretation_.
###### Definition 4
Let $T$ be a theory, an interpretation $I$ is a mapping from
${\mathcal{}L}_{T}$ to $\\{0,1\\}$ meeting the conditions:
1. 1.
$I(a\wedge b)=min\\{I(a),I(b)\\}$,
2. 2.
$I(a\vee b)=max\\{I(a),I(b)\\}$,
3. 3.
$I(a\leftarrow b)=0$ if and only if $I(b)=1$ and $I(a)=0$,
4. 4.
$I(\sim a)=1-I(a)$,
5. 5.
$I(\bot)=0$.
It is standard to provide interpretations only in terms of a mapping from
${\mathcal{}L}_{T}$ to $\\{0,1\\}$. Moreover, it is easy to prove that this
mapping is unique by virtue of the definition by recursion [van Dalen (1994)].
Also, it is standard to use sets of atoms to represent interpretations. The
set corresponds exactly to those atoms that evaluate to 1.
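A minimal sketch of Definition 4 in code, with interpretations given as the set of atoms mapped to 1 and formulæ encoded as nested tuples (a hypothetical encoding chosen only for illustration):

```python
def value(formula, I):
    if formula == "bot":                     # the constant false
        return 0
    if isinstance(formula, str):             # an atom
        return 1 if formula in I else 0
    op, *args = formula
    if op == "and":
        return min(value(a, I) for a in args)
    if op == "or":
        return max(value(a, I) for a in args)
    if op == "neg":                          # classical negation ~
        return 1 - value(args[0], I)
    if op == "from":                         # ("from", head, body) stands for head <- body
        head, body = args
        return 0 if value(body, I) == 1 and value(head, I) == 0 else 1
    raise ValueError(op)

# a <- b is false exactly when the body holds and the head does not
print(value(("from", "a", "b"), {"b"}))        # -> 0
print(value(("from", "a", "b"), {"a", "b"}))   # -> 1
```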
An interpretation $I$ is called a (2-valued) model of the logic program $P$ if
and only if for each clause $c\in P$, $I(c)=1$. A theory is consistent if it
admits a model, otherwise it is called inconsistent. Given a theory $T$ and a
formula $\varphi$, we say that $\varphi$ is a logical consequence of $T$,
denoted by $T\models\varphi$, if every model $I$ of $T$ holds that
$I(\varphi)=1$. It is a well known result that $T\models\varphi$ if and only
if $T\cup\\{\sim\varphi\\}$ is inconsistent [van Dalen (1994)].
We say that a model $I$ of a theory $T$ is a minimal model if there is no model $I^{\prime}$ of $T$, different from $I$, such that $I^{\prime}\subset I$. Maximal models are defined analogously.
### 2.4 Logic programming semantics
In this section, the _answer set semantics_ is presented. This semantics
represents a two-valued semantics approach.
#### 2.4.1 Answer set semantics
By using ASP, it is possible to describe a computational problem as a logic
program whose answer sets correspond to the solutions of the given problem. It
represents one of the most successful approaches of non-monotonic reasoning of
the last two decades [Baral (2003)]. The number of applications of this
approach has increased due to the availability of efficient answer set solvers.
The answer set semantics was first defined in terms of the so called _Gelfond-
Lifschitz reduction_ [Gelfond and Lifschitz (1988)] and it is usually studied
in the context of syntax dependent transformations on programs. The following
definition of an answer set for extended disjunctive logic programs
generalizes the definition presented in [Gelfond and Lifschitz (1988)] and it
was presented in [Gelfond and Lifschitz (1991)]: Let _P_ be any extended
disjunctive logic program. For any set $S\subseteq{\mathcal{}L}_{P}$, let
$P^{S}$ be the positive program obtained from _P_ by deleting
(i)
each rule that has a formula $not\;a$ in its body with $a\in S$, and then
(ii)
all formulæ of the form $not\;a$ in the bodies of the remaining rules.
Clearly $P^{S}$ does not contain $not$ (this means that $P^{S}$ is either a
positive disjunctive logic program or a definite logic program), hence _S_ is
called an answer set of _P_ if and only if _S_ is a minimal model of $P^{S}$.
In order to illustrate this definition, let us consider the following example:
###### Example 2
Let us consider the set of atoms $S:=\\{b\\}$ and the following normal logic
program $P$:
$b\leftarrow not\;a$. $\qquad$ $b$.
$c\leftarrow not\;b$. $\qquad$ $c\leftarrow a$.
We can see that $P^{S}$ is:
$b$. $\qquad$ $c\leftarrow a$.
Notice that this program has three models: $\\{b\\}$, $\\{b,c\\}$ and
$\\{a,b,c\\}$. Since the minimal model among these models is $\\{b\\}$, we can
say that $S$ is an answer set of $P$.
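The reduct construction and the answer-set test of Example 2 are small enough to check with a direct Python sketch (the rule encoding below is hypothetical and only covers propositional normal programs):

```python
from itertools import combinations

# Rules encoded as (head_atom, positive_body, negative_body).
# Program P of Example 2:  b <- not a.   b.   c <- not b.   c <- a.
P = [
    ("b", [], ["a"]),
    ("b", [], []),
    ("c", [], ["b"]),
    ("c", ["a"], []),
]
atoms = {"a", "b", "c"}

def reduct(program, S):
    """Gelfond-Lifschitz reduct P^S: delete rules whose negative body meets S,
    then delete the remaining 'not' literals."""
    return [(h, pos) for (h, pos, neg) in program if not set(neg) & S]

def is_model(positive_program, I):
    return all(h in I for (h, pos) in positive_program if set(pos) <= I)

def is_answer_set(program, S):
    red = reduct(program, S)
    # S must be a model of P^S and no proper subset of S may be one
    return is_model(red, S) and not any(
        is_model(red, set(sub)) for r in range(len(S)) for sub in combinations(sorted(S), r)
    )

print(is_answer_set(P, {"b"}))       # True, as argued in Example 2
print(is_answer_set(P, {"b", "c"}))  # False: {b} is a smaller model of the reduct

# Enumerate all answer sets of P by brute force
subsets = [set(c) for r in range(len(atoms) + 1) for c in combinations(sorted(atoms), r)]
print([sorted(S) for S in subsets if is_answer_set(P, S)])   # [['b']]
```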
In the answer set definition, we will normally omit the restriction that if
$S$ has a pair of complementary literals then $S:={\mathcal{}L}_{P}$. This
means that we allow for the possibility that an answer set could have a pair
of complementary atoms. For instance, let us consider the program $P$:
$a$. $\qquad$ $\lnot a$. $\qquad$ $b$.
then, the only answer set of this program is : $\\{a,\lnot a,b\\}$. In Section
5, the inconsistency in possibilistic programs is discussed.
It is worth mentioning that in the literature there are several ways of handling an inconsistent program [Baral (2003)]. For instance, by applying the
original definition [Gelfond and Lifschitz (1991)] the only answer set of $P$
is: $\\{a,\lnot a,b,\lnot b\\}$. On the other hand, the DLV system [DLV
(1996)] returns no models if the program is inconsistent.
### 2.5 Possibilistic Logic
Since our approach is based on the proof theory of possibilistic logic, in this section, we present an axiomatization of possibilistic logic for the case of
necessity-valued formulæ.
Possibilistic logic is a weighted logic introduced and developed in the
mid-1980s, in the setting of artificial intelligence, with the goal of
developing a simple yet rigorous approach to automated reasoning from
uncertain or prioritized incomplete information. Possibilistic logic is
especially adapted to automated reasoning when the available information is
pervaded by ambiguities. In fact, possibilistic logic is a natural extension
of classical logic in which the notion of total order/partial order is
embedded in the logic.
Possibilistic Logic is based on _possibility theory_. Possibility theory, as its name implies, deals with the possible rather than probable values of a variable, with possibility being a matter of degree. One merit of possibility theory is that it can, at one and the same time, represent imprecision (in the form of fuzzy sets) and quantify uncertainty (through the pair of numbers that measure _possibility_ and _necessity_).
Our study in possibilistic logic is devoted to a fragment of possibilistic
logic, in which knowledge bases are only _necessity-quantified_ statements. A
necessity-valued formula is a pair $(\varphi\;\alpha)$ in which $\varphi$ is a
classical logic formula and $\alpha\in(0,1]$ is a positive number. The pair
$(\varphi\;\alpha)$ expresses that the formula $\varphi$ is certain at least
to the level $\alpha$, that is $N(\varphi)\geq\alpha$, in which $N$ is a
necessity measure modeling our possibly incomplete state of knowledge [Dubois et
al. (1994)]. $\alpha$ is not a probability (like it is in probability theory),
but it induces a certainty (or confidence) scale. This value is determined by
the expert providing the knowledge base. A necessity-valued knowledge base is
then defined as a finite set (that is to say a conjunction) of necessity-
valued formulæ.
The following properties hold w.r.t. necessity-valued formulæ:
$N(\varphi\wedge\psi)=min(\\{N(\varphi),N(\psi)\\})$ (1)
$N(\varphi\vee\psi)\geq max(\\{N(\varphi),N(\psi)\\})$ (2)
$\text{ if }\varphi\vdash\psi\text{ then }N(\psi)\geq N(\varphi)$ (3)
Dubois et al., in [Dubois et al. (1994)] introduced a formal system for
necessity-valued logic which is based on the following axiom schemata
(propositional case):
(A1)
$(\varphi\rightarrow(\psi\rightarrow\varphi)\;1)$
(A2)
$((\varphi\rightarrow(\psi\rightarrow\xi))\rightarrow((\varphi\rightarrow\psi)\rightarrow(\varphi\rightarrow\xi))\;1)$
(A3)
$((\neg\varphi\rightarrow\neg\psi)\rightarrow((\neg\varphi\rightarrow\psi)\rightarrow\varphi)\;1)$
Inference rules:
(GMP)
$(\varphi\;\alpha),(\varphi\rightarrow\psi\;\beta)\vdash(\psi\;min\\{\alpha,\beta\\})$
(S)
$(\varphi\;\alpha)\vdash(\varphi\;\beta)$ if $\beta\leq\alpha$
According to Dubois et al., in [Dubois et al. (1994)], basically we need a
complete lattice to express the levels of uncertainty in Possibilistic Logic.
Dubois et al. extended the axiom schemata and the inference rules for
considering partially ordered sets. We shall denote by $\vdash_{PL}$ the
inference under Possibilistic Logic without paying attention to whether the
necessity-valued formulæ are using a totally ordered set or a partially
ordered set for expressing the levels of uncertainty.
The problem of inferring automatically the necessity-value of a classical
formula from a possibilistic base was solved by an extended version of
_resolution_ for possibilistic logic (see [Dubois et al. (1994)] for details).
One of the main principles of possibilistic logic is that:
###### Remark 2
The strength of a conclusion is the strength of the weakest argument used in
its proof.
According to Dubois and Prade [Dubois and Prade (2004)], the contribution of
possibilistic logic setting is to relate this principle (measuring the
validity of an inference chain by its weakest link) to fuzzy set-based
necessity measures in the framework of Zadeh’s possibilistic theory, since the
following pattern then holds:
$N(\sim p\vee q)\geq\alpha\textnormal{ and }N(p)\geq\beta\textnormal{ imply
}N(q)\geq min(\alpha,\beta)$
This interpretive setting provides a semantic justification to the claim that
the weight attached to a conclusion should be the weakest among the weights
attached to the formulæ involved in the derivation.
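A minimal sketch of this weakest-link principle as forward chaining over necessity-valued definite clauses; the facts, rule, and numeric degrees below are hypothetical stand-ins for the labels used elsewhere in the paper:

```python
# From (phi, alpha) and (phi -> psi, beta), GMP infers (psi, min(alpha, beta)).
# Rules are encoded as (body_atoms, head_atom, alpha); facts have an empty body.
rules = [
    ((), "action_transplant", 1.0),                                # fact
    ((), "d_inf_present", 0.9),                                    # fact
    (("action_transplant", "d_inf_present"), "r_inf_present", 0.7),
]

necessity = {}            # best necessity degree derived so far for each atom
changed = True
while changed:
    changed = False
    for body, head, alpha in rules:
        if all(b in necessity for b in body):
            # weakest link: the conclusion is only as strong as the weakest step
            degree = min([alpha] + [necessity[b] for b in body])
            if degree > necessity.get(head, 0.0):
                necessity[head] = degree
                changed = True

print(necessity)
# {'action_transplant': 1.0, 'd_inf_present': 0.9, 'r_inf_present': 0.7}
```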
## 3 Syntax
In this section, the general syntax for possibilistic disjunctive logic
programs will be presented. This syntax is based on the standard syntax of
extended disjunctive logic programs (see Section 2.2).
We start by defining some concepts for managing the possibilistic values of a
possibilistic knowledge base777Some concepts presented in this section extend
some terms presented in [Nicolas et al. (2006)].. We want to point out that in
the whole document only finite lattices are considered. This assumption was
made based on the recognition that in real applications we will rarely have an
infinite set of labels for expressing the incomplete state of a knowledge
base.
A _possibilistic atom_ is a pair $p=(a,q)\in{\mathcal{}A}\times{\mathcal{}Q}$,
in which ${\mathcal{}A}$ is a finite set of atoms and $({\mathcal{}Q},\leq)$
is a lattice. The projection $*$ to a possibilistic atom $p$ is defined as
follows: $p^{*}=a$. Also given a set of possibilistic atoms $S$, $*$ over $S$
is defined as follows: $S^{*}=\\{p^{*}|p\in S\\}$.
Let $({\mathcal{}Q},\leq)$ be a lattice. A possibilistic disjunctive clause
$R$ is of the form:
$\alpha:{\mathcal{}A}\leftarrow{\mathcal{}B}^{+},\ not\;{\mathcal{}B}^{-}$
in which $\alpha\in{\mathcal{}Q}$ and
${\mathcal{}A}\leftarrow{\mathcal{}B}^{+},\ not\;{\mathcal{}B}^{-}$ is an
extended disjunctive clause as defined in Section 2.2. The projection $*$ for
a possibilistic clause is $R^{*}={\mathcal{}A}\leftarrow{\mathcal{}B}^{+},\
not\;{\mathcal{}B}^{-}$. On the other hand, the projection $n$ for a possibilistic clause is $n(R)=\alpha$. This projection denotes the necessity degree, that is, the certainty level of the information described by $R$.
A possibilistic constraint $C$ is of the form:
$\top_{\mathcal{}Q}:\;\;\;\leftarrow{\mathcal{}B}^{+},\
not\;{\mathcal{}B}^{-}$
in which $\top_{\mathcal{}Q}$ is the top of the lattice $({\mathcal{}Q},\leq)$
and $\leftarrow{\mathcal{}B}^{+},\ not\;{\mathcal{}B}^{-}$ is a constraint as
defined in Section 2.2. The projection $*$ for a possibilistic constraint $C$
is: $C^{*}=\;\;\leftarrow{\mathcal{}B}^{+},\ not\;{\mathcal{}B}^{-}$. Observe
that the possibilistic constraints have the top of the lattice $({\mathcal{}Q},\leq)$ as their certainty value; this is because, as for a constraint in standard ASP, the purpose of a possibilistic constraint is to eliminate possibilistic models. Hence, it can be assumed that
there is no doubt about the veracity of the information captured by a
possibilistic constraint. However, as in standard ASP, one can define
possibilistic constraints of the form: $\alpha:x\leftarrow{\mathcal{}B}^{+},\
not\;{\mathcal{}B}^{-},\;not\;x$ such that $x$ is an atom which is not used in
any other possibilistic clause and $\alpha\in{\mathcal{}Q}$. This means that
the user can define possibilistic constraints with different levels of
certainty.
A possibilistic disjunctive logic program $P$ is a tuple of the form
$\langle({\mathcal{}Q},\leq),N\rangle$, in which $N$ is a finite set of
possibilistic disjunctive clauses and possibilistic constraints. The
generalization of $*$ over $P$ is as follows: $P^{*}=\\{r^{*}|r\in N\\}$.
Notice that $P^{*}$ is an extended disjunctive program. When $P^{*}$ is a
normal program, $P$ is called a possibilistic normal program. Also, when
$P^{*}$ is a positive disjunctive program, $P$ is called a possibilistic
positive logic program and so on. A given set of possibilistic disjunctive
clauses $\\{\gamma_{1},\dots,\gamma_{n}\\}$ is also represented as $\\{\gamma_{1};\dots;\gamma_{n}\\}$ to avoid ambiguities with the use of the comma in
the body of the clauses.
Given a possibilistic disjunctive logic program
$P=\langle({\mathcal{}Q},\leq),N\rangle$, we define the _$\alpha$ -cut_ and
the _strict $\alpha$-cut_ of $P$, denoted respectively by $P_{\alpha}$ and
$P_{\overline{\alpha}}$, by
$P_{\alpha}=\langle({\mathcal{}Q},\leq),N_{\alpha}\rangle$ such that
$N_{\alpha}=\\{c|c\in N\text{ and }n(c)\geq\alpha\\}$
$P_{\overline{\alpha}}=\langle({\mathcal{}Q},\leq),N_{\overline{\alpha}}\rangle$
such that $N_{\overline{\alpha}}=\\{c|c\in N\text{ and }n(c)>\alpha\\}$
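To fix notation in a concrete form, the following Python sketch (an illustration only; the class and function names are hypothetical) represents possibilistic clauses over a totally ordered numeric lattice and computes the projection $*$ together with the $\alpha$-cut and the strict $\alpha$-cut just defined.

```python
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class PossClause:
    """alpha : A <- B+, not B-   (alpha drawn from a totally ordered lattice)."""
    alpha: float                 # necessity degree n(r)
    head: FrozenSet[str]         # disjunctive head A
    pos: FrozenSet[str]          # positive body B+
    neg: FrozenSet[str]          # default-negated body B-

def projection(program: List[PossClause]):
    """P*: drop the necessity degrees and keep the classical clauses."""
    return [(r.head, r.pos, r.neg) for r in program]

def alpha_cut(program: List[PossClause], alpha: float) -> List[PossClause]:
    """P_alpha: clauses whose degree is at least alpha."""
    return [r for r in program if r.alpha >= alpha]

def strict_alpha_cut(program: List[PossClause], alpha: float) -> List[PossClause]:
    """Strict alpha-cut: clauses whose degree is strictly greater than alpha."""
    return [r for r in program if r.alpha > alpha]

# Small usage example; the degrees 0.0 .. 1.0 play the role of the lattice Q.
P = [PossClause(0.7, frozenset({"a", "b"}), frozenset(), frozenset({"c"})),
     PossClause(0.6, frozenset({"c"}), frozenset(), frozenset({"a", "b"}))]
assert len(alpha_cut(P, 0.7)) == 1 and len(strict_alpha_cut(P, 0.7)) == 0
```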
###### Example 3
In order to illustrate a possibilistic program, let us go back to our scenario
described in Section 1. Let $({\mathcal{}Q},\preceq)$ be the lattice of Figure
2 such that the relation $A\preceq B$ means that $A$ is less possible than
$B$. The possibilistic program $P:=\langle({\mathcal{}Q},\preceq),N\rangle$
will be the following set of possibilistic clauses:
It is probable that if the organ donor has an infection, then the organ
recipient can be infected or not after a graft:
probable: $r\\_inf(present,T2)\vee\lnot r\\_inf(present,T2)$ $\leftarrow$
$action(transplant,T),$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}d\\_inf(present,T),T2=T+1.$
It is confirmed that the organ’s functions can be: good, delayed and terminal
after a graft.
confirmed: $o(good\\_graft\\_funct,T2)\vee o(delayed\\_graft\\_funct,T2)\vee$
$o(terminal\\_insufficient\\_funct,T2)\leftarrow action(transplant,T),T2=T+1.$
It is confirmed that if the organ’s functions are terminally insufficient then
a transplanting is necessary.
confirmed: $action(transplant,T)\leftarrow$
$o(terminal\\_insufficient\\_funct,T)$.
It is plausible that the clinical situation of the organ recipient can be
stable if the functions of the graft are good.
plausible: $cs(stable,T)\leftarrow o(good\\_graft\\_funct,T)$.
It is plausible that the clinical situation of the organ recipient can be
unstable if the functions of the graft are delayed.
plausible: $cs(unstable,T)\leftarrow o(delayed\\_graft\\_funct,T)$.
It is plausible that the clinical situation of the organ recipient can be of
0-urgency if the functions of the graft are terminally insufficient after the
graft.
plausible: $cs($0-urgency$,T2)\leftarrow
o(terminal\\_insufficient\\_funct,T2),$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}action(transplant,T),T2=T+1$.
It is certain that the doctor cannot do two actions at the same time.
certain: $\;\;\leftarrow action(transplant,T),action(wait,T)$.
It is certain that a transplant cannot be done if the organ recipient is dead.
certain: $\;\;\leftarrow action(transplant,T),cs(dead,T)$.
The initial state of the automata of Figure 1 is captured by the following
possibilistic clauses:
certain: $d\\_inf(present,0)$.
certain: $\lnot r\\_inf(present,0)$.
certain: $o(terminal\\_insufficient\\_funct,0)$.
certain: $cs(stable,0)$.
## 4 Semantics
In §3, the syntax for possibilistic disjunctive programs was introduced. Now, in this section, a semantics for these programs is studied.
This semantics will be defined in terms of the standard definition of the
answer set semantics (§2.4.1) and the proof theory of possibilistic logic
(§2.5).
Since sets of atoms are considered as interpretations, two basic operations between sets of possibilistic atoms, as well as an order relation between them, are defined. Given a finite set of atoms ${\mathcal{}A}$ and a lattice (${\mathcal{}Q}$,$\leq$), let
${\mathcal{}PS^{\prime}}=2^{{\mathcal{}A}\times{\mathcal{}Q}}$ and
${\mathcal{}PS}={\mathcal{}PS^{\prime}}\setminus\\{A|A\in{\mathcal{}PS^{\prime}}\text{ such that there exists }x\in{\mathcal{}A}\text{ with }Cardinality(\\{(x,\alpha)|(x,\alpha)\in A\\})\geq 2\\}$
Observe that ${\mathcal{}PS^{\prime}}$ is the finite set of all the sets of possibilistic atoms induced by ${\mathcal{}A}$ and ${\mathcal{}Q}$. Informally speaking, ${\mathcal{}PS}$ is the subset of ${\mathcal{}PS^{\prime}}$ whose members contain no atom with two different uncertainty values.
###### Definition 5
Let ${\mathcal{}A}$ be a finite set of atoms and (${\mathcal{}Q}$,$\leq$) be a
lattice. $\forall A,B\in{\mathcal{}PS}$, we define.
$A\sqcap B=\\{(x,{\mathcal{}GLB}(\\{\alpha,\beta\\}))|(x,\alpha)\in A\wedge(x,\beta)\in B\\}$
$A\sqcup B=\\{(x,\alpha)|(x,\alpha)\in A\;and\;x\notin B^{*}\\}\;\cup\;\\{(x,\alpha)|x\notin A^{*}\;and\;(x,\alpha)\in B\\}\;\cup\;\\{(x,{\mathcal{}LUB}(\\{\alpha,\beta\\}))|(x,\alpha)\in A\text{ and }(x,\beta)\in B\\}$
$A\sqsubseteq B\Longleftrightarrow A^{*}\subseteq B^{*}$, and $\forall x,\alpha,\beta$, if $(x,\alpha)\in A\wedge(x,\beta)\in B$ then $\alpha\leq\beta$.
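A minimal Python sketch of the operations of Definition 5, assuming a totally ordered numeric lattice so that min and max play the roles of ${\mathcal{}GLB}$ and ${\mathcal{}LUB}$; for a genuinely partial order these two calls would have to be replaced by lattice-specific operations (function names are illustrative).

```python
from typing import Dict

# A set of possibilistic atoms is modelled as a dict atom -> degree, which by
# construction contains no atom with two different uncertainty values.
PossSet = Dict[str, float]

def meet(A: PossSet, B: PossSet) -> PossSet:
    """A ⊓ B: atoms common to both sets, each with the GLB (here: min) of its degrees."""
    return {x: min(A[x], B[x]) for x in A.keys() & B.keys()}

def join(A: PossSet, B: PossSet) -> PossSet:
    """A ⊔ B: union of the atoms; shared atoms get the LUB (here: max) of their degrees."""
    out = dict(A)
    for x, beta in B.items():
        out[x] = max(out[x], beta) if x in out else beta
    return out

def leq(A: PossSet, B: PossSet) -> bool:
    """A ⊑ B: A* ⊆ B* and every shared atom is at least as certain in B as in A."""
    return all(x in B and alpha <= B[x] for x, alpha in A.items())

A, B = {"a": 0.6}, {"a": 0.7, "b": 0.4}
assert meet(A, B) == {"a": 0.6} and join(A, B) == {"a": 0.7, "b": 0.4}
assert leq(A, B) and not leq(B, A)
```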
This definition is almost the same as Definition 7 presented in [Nicolas et
al. (2006)]. The main difference is that in Definition 7 from [Nicolas et al.
(2006)] the operations $\sqcap$ and $\sqcup$ are defined in terms of the
operators _min_ and _max_ instead of the operators ${\mathcal{}GLB}$ and
${\mathcal{}LUB}$. Hence, the following proposition is a direct result of
Proposition 6 of [Nicolas et al. (2006)].
###### Proposition 1
$({\mathcal{}PS},\sqsubseteq)$ is a complete lattice.
Before moving on, let us define the concept of _i-greatest set_ w.r.t.
${\mathcal{}PS}$ as follows: Given $M\in{\mathcal{}PS}$, $M$ is an _i-greatest
set_ in ${\mathcal{}PS}$ iff $\nexists M^{\prime}\in{\mathcal{}PS}$ such that
$M\sqsubseteq M^{\prime}$. For instance, let
${\mathcal{}PS}=\\{\\{(a,1)\\},\\{(a,2)\\},\\{(a,2),(b,1)\\},\\{(a,2),(b,2)\\}\\}$.
One can see that ${\mathcal{}PS}$ has two i-greatest sets: $\\{(a,2)\\}$ and
$\\{(a,2),(b,2)\\}$. The concept of i-greatest set will play a key role in the
definition of possibilistic answer sets in order to infer possibilistic answer
sets with optimal certainty values.
### 4.1 Possibilistic answer set semantics
Similar to the definition of answer set semantics, the possibilistic answer
set semantics is defined in terms of a syntactic reduction. This reduction is
inspired by the Gelfond-Lifschitz reduction.
###### Definition 6 (Reduction $P_{M}$)
Let $P=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic disjunctive
logic program, M be a set of atoms. $P$ reduced by $M$ is the positive
possibilistic disjunctive logic program:
$P_{M}:=\\{(n(r):{\mathcal{}A}\cap M\leftarrow{\mathcal{}B}^{+})\;|\;r\in N,\;{\mathcal{}A}\cap M\neq\emptyset,\;{\mathcal{}B}^{-}\cap M=\emptyset,\;{\mathcal{}B}^{+}\subseteq M\\}$
in which $r^{*}$ is of the form ${\mathcal{}A}\leftarrow{\mathcal{}B}^{+},\
not\;{\mathcal{}B}^{-}$.
Notice that the reduction $P_{M}$ is not exactly equal to the Gelfond-Lifschitz reduction $(P^{*})^{M}$. For instance, let us consider the following programs:
$P:$ | | $P_{\\{c,b\\}}:$ | | $(P^{*})^{\\{c,b\\}}:$ |
---|---|---|---|---|---
| $\alpha_{1}:a\vee b$. | | $\alpha_{1}:b$. | | $a\vee b$.
| $\alpha_{2}:c\leftarrow\;not\;a$. | | $\alpha_{2}:c$. | | $c$.
| $\alpha_{3}:c\leftarrow\;not\;b$. | | | |
The program $P_{\\{c,b\\}}$ is obtained from $P$ and $\\{c,b\\}$ by applying
Definition 6 and the program $(P^{*})^{\\{c,b\\}}$ is obtained from $P^{*}$
and $\\{c,b\\}$ by applying the Gelfond-Lifschitz reduction. Observe that the
reduction of Definition 6 removes from the head of the possibilistic
disjunctive clauses any atom which does not belong to $M$. As we will see in
Section 4.2, this property will be helpful for characterizing the
possibilistic answer set in terms of a fixed-point operator. It is worth
mentioning that the reduction $P_{M}$ also has a different effect from
the Gelfond-Lifschitz reduction in the class of normal programs. This
difference is illustrated in the following programs:
$P:$ | | $P_{\\{a\\}}:$ | | $(P^{*})^{\\{a\\}}:$ |
---|---|---|---|---|---
| $\alpha_{1}:a\leftarrow\;not\;b$. | | $\alpha_{1}:a$. | | $a$.
| $\alpha_{2}:a\leftarrow b$. | | | | $a\leftarrow b$.
| $\alpha_{3}:b\leftarrow c$. | | | | $b\leftarrow c$.
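A minimal sketch of the reduction of Definition 6, using a simple tuple representation of possibilistic clauses; the numeric degrees below merely stand in for the abstract values $\alpha_{1},\alpha_{2},\alpha_{3}$ of the example above.

```python
from typing import FrozenSet, List, Tuple

# A possibilistic clause: (degree, head, positive body, default-negated body).
Clause = Tuple[float, FrozenSet[str], FrozenSet[str], FrozenSet[str]]

def reduce_by(program: List[Clause], M: FrozenSet[str]) -> List[Clause]:
    """Reduction P_M of Definition 6: keep a clause r when head ∩ M is non-empty,
    B- ∩ M is empty and B+ ⊆ M; replace the head by head ∩ M and keep n(r)."""
    return [(alpha, head & M, pos, frozenset())
            for (alpha, head, pos, neg) in program
            if head & M and not (neg & M) and pos <= M]

# The program P discussed above, with 0.1, 0.2, 0.3 standing in for α1, α2, α3.
P = [(0.1, frozenset({"a", "b"}), frozenset(), frozenset()),        # α1: a ∨ b.
     (0.2, frozenset({"c"}), frozenset(), frozenset({"a"})),        # α2: c ← not a.
     (0.3, frozenset({"c"}), frozenset(), frozenset({"b"}))]        # α3: c ← not b.
print(reduce_by(P, frozenset({"c", "b"})))   # keeps α1 with head {b} and α2 with head {c}
```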
###### Example 4
Continuing with our medical scenario described in the introduction, let $P$ be
a ground instance of the possibilistic program presented in Example 3:
probable: $r\\_inf(present,1)\vee no\\_r\\_inf(present,1)$ $\leftarrow$
$action(transplant,0),$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}d\\_inf(present,0).$
confirmed: $o(good\\_graft\\_funct,1)\vee o(delayed\\_graft\\_funct,1)\vee$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}o(terminal\\_insufficient\\_funct,1)\leftarrow
action(transplant,0).$
confirmed: $action(transplant,0)\leftarrow$
$o(terminal\\_insufficient\\_funct,0)$.
plausible: $cs(stable,1)\leftarrow o(good\\_graft\\_funct,1)$.
plausible: $cs(unstable,1)\leftarrow o(delayed\\_graft\\_funct,1)$.
plausible: $cs($0-urgency$,1)\leftarrow o(terminal\\_insufficient\\_funct,1),$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}action(transplant,0)$.
certain: $\;\;\leftarrow action(transplant,0),action(wait,0)$.
certain: $\;\;\leftarrow action(transplant,0),cs(dead,0)$.
certain: $d\\_inf(present,0)$.
certain: $no\\_r\\_inf(present,0)$.
certain: $o(terminal\\_insufficient\\_funct,0)$.
certain: $cs(stable,0)$.
Observe that the time variables $T$ and $T2$ were instantiated with the values $0$ and $1$ respectively; moreover, observe that the atoms $\lnot r\\_inf(present,0)$ and $\lnot r\\_inf(present,1)$ were replaced by $no\\_r\\_inf(present,0)$ and $no\\_r\\_inf(present,1)$ respectively. This change was made in order to handle the strong negation $\lnot$.
Now, let $S$ be the following possibilistic set:
$S=$ | $\\{(d\\_inf(present,0),certain),$ $(no\\_r\\_inf(present,0),certain),$
---|---
| $(o(terminal\\_insufficient\\_funct,0),certain),$ $(cs(stable,0),certain),$
| $(action(transplant,0),confirmed),$
$(o(good\\_graft\\_funct,1),confirmed),$
| $(cs(stable,1),plausible),$ $(no\\_r\\_inf(present,1),probable)\\}$.
One can see that $P_{S^{*}}$ is:
probable: $no\\_r\\_inf(present,1)$ $\leftarrow$
$action(transplant,0),d\\_inf(present,0).$
confirmed: $o(good\\_graft\\_funct,1)\leftarrow action(transplant,0).$
confirmed: $action(transplant,0)\leftarrow$
$o(terminal\\_insufficient\\_funct,0)$.
plausible: $cs(stable,1)\leftarrow o(good\\_graft\\_funct,1)$.
plausible: $cs(unstable,1)\leftarrow o(delayed\\_graft\\_funct,1)$.
plausible: $cs($0-urgency$,1)\leftarrow o(terminal\\_insufficient\\_funct,1),$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}action(transplant,0)$.
certain: $\;\;\leftarrow action(transplant,0),action(wait,0)$.
certain: $\;\;\leftarrow action(transplant,0),cs(dead,0)$.
certain: $d\\_inf(present,0)$.
certain: $no\\_r\\_inf(present,0)$.
certain: $o(terminal\\_insufficient\\_funct,0)$.
certain: $cs(stable,0)$.
Once a possibilistic logic program $P$ has been reduced by a set of
possibilistic atoms $M$, it is possible to test whether $M$ is a possibilistic
answer set of the program $P$. To this end, we consider a syntactic approach, meaning that it is based on the proof theory of possibilistic logic. Let us remember that possibilistic logic is axiomatizable [Dubois et al. (1994)]; hence, inference in possibilistic logic can be managed by both a syntactic
approach (axioms and inference rules) and a possibilistic model theory
approach (interpretations and possibilistic distributions).
Since the certainty value of a possibilistic disjunctive clause can belong to
a partially ordered set, the inference rules of possibilistic logic introduced
in Section 2.5 have to be generalized in terms of bounds. The generalization
of GMP and S is defined as follows:
(GMP*)
$(\varphi\;\alpha),(\varphi\rightarrow\psi\;\beta)\vdash(\psi\;GLB\\{\alpha,\beta\\})$
(S*)
$(\varphi\;\alpha),(\varphi\;\beta)\vdash(\varphi\;\gamma)$, where $\gamma\leq
GLB\\{\alpha,\beta\\}$
Observe that these inference rules are essentially the same as the inference
rules introduced in Section 2.5; however, they are defined in terms of $GLB$ in order to deal with certainty values which are not comparable (these inference rules are illustrated in Example 6).
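Since GMP* and S* only require greatest lower bounds, the sketch below shows one brute-force way to compute $GLB$ over a small, explicitly given finite lattice. The labels and the ordering are illustrative only; in particular, this is not the lattice of Figure 2.

```python
# A small finite lattice given as a set of elements plus a covering relation; the
# transitive-reflexive closure is computed on the fly. The labels and the ordering
# below are illustrative only (they are not the lattice of Figure 2).
ELEMENTS = {"bottom", "plausible", "probable", "confirmed", "certain"}
COVERS = {("bottom", "plausible"), ("bottom", "probable"),
          ("plausible", "confirmed"), ("probable", "confirmed"),
          ("confirmed", "certain")}

def leq(x, y):
    """x <= y in the transitive-reflexive closure of COVERS (brute-force reachability)."""
    if x == y:
        return True
    frontier, seen = {x}, set()
    while frontier:
        a = frontier.pop()
        seen.add(a)
        for (u, v) in COVERS:
            if u == a:
                if v == y:
                    return True
                if v not in seen:
                    frontier.add(v)
    return False

def glb(values):
    """Greatest lower bound of a set of lattice elements (assumed to exist)."""
    lower = [z for z in ELEMENTS if all(leq(z, v) for v in values)]
    return max(lower, key=lambda z: sum(leq(w, z) for w in lower))

# GMP*: from (phi, plausible) and (phi -> psi, probable), infer (psi, GLB{plausible, probable}).
print(glb({"plausible", "probable"}))   # -> 'bottom' (the two labels are incomparable here)
```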
Once we have defined $GMP^{*}$ and $S^{*}$, the inference $\Vvdash_{PL}$ is
defined as follows:
###### Definition 7
Let $P=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic disjunctive
logic program and $M\in{\mathcal{}PS}$.
* •
We write $P\Vvdash_{PL}M$ when $M^{*}$ is an answer set of $P^{*}$ and
$P_{M^{*}}\vdash_{PL}M$.
One can see that $\Vvdash_{PL}$ defines a joint inference between _the
answer set semantics_ and _the proof theory of possibilistic logic_. Let us
consider the following example.
###### Example 5
Let $P=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic disjunctive
logic program such that ${\mathcal{}Q}=\\{0.1,\dots,$ $0.9\\}$, $\leq$ denotes
the standard relation in real numbers and $N$ is the following set of
possibilistic clauses:
$0.6:a\vee b.$ | | $0.4:a\leftarrow not\;b.$ | | $0.8:b\leftarrow not\;a.$
---|---|---|---|---
It is easy to see that $P^{*}$ has two answer sets: $\\{a\\}$ and $\\{b\\}$.
On the other hand, one can see that $P_{\\{a\\}}\vdash_{PL}\\{(a,0.6)\\}$,
$P_{\\{a\\}}\vdash_{PL}\\{(a,0.4)\\}$, $P_{\\{b\\}}\vdash_{PL}\\{(b,0.6)\\}$
and $P_{\\{b\\}}\vdash_{PL}\\{(b,0.8)\\}$. This means that
$P\Vvdash_{PL}\\{(a,0.6)\\}$, $P\Vvdash_{PL}\\{(a,0.4)\\}$,
$P\Vvdash_{PL}\\{(b,0.6)\\}$ and $P\Vvdash_{PL}\\{(b,0.8)\\}$.
The basic idea of $\Vvdash_{PL}$ is to identify candidate sets of
possibilistic atoms in order to consider them as possibilistic answer sets.
The following proposition formalizes an important property of $\Vvdash_{PL}$.
###### Proposition 2
Let $P=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic disjunctive
logic program and $M_{1},M_{2}\in{\mathcal{}PS}$ such that
$M_{1}^{*}=M_{2}^{*}$. If $P\Vvdash_{PL}M_{1}$ and $P\Vvdash_{PL}M_{2}$, then
$P\Vvdash_{PL}M_{1}\sqcup M_{2}$.
In this proposition, since $M_{1}$ and $M_{2}$ are two sets of possibilistic
atoms, ${\mathcal{}LUB}$ is instantiated in terms of $\sqsubseteq$. By
considering $\Vvdash_{PL}$ and the concept of _i-greatest set_ , a
possibilistic answer set is defined as follows:
###### Definition 8 (A possibilistic answer set)
Let $P=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic disjunctive
logic program and $M$ be a set of possibilistic atoms such that $M^{*}$ is an
answer set of $P^{*}$. $M$ is a possibilistic answer set of $P$ iff $M$ is an
i-greatest set in ${\mathcal{}PS}$ such that $P\Vvdash_{PL}M$.
Essentially, a possibilistic answer set is an i-greatest set which is inferred
by $\Vvdash_{PL}$. In other words, a possibilistic answer set is an answer set
with _optimal certainty values_. For instance, in Example 5, we saw that
$P\Vvdash_{PL}\\{(a,0.6)\\}$, $P\Vvdash_{PL}\\{(a,0.4)\\}$,
$P\Vvdash_{PL}\\{(b,0.6)\\}$ and $P\Vvdash_{PL}\\{(b,0.8)\\}$; however,
$\\{(a,0.4)\\}$ and $\\{(b,0.6)\\}$ are not i-greatest sets. This means that
the possibilistic answer sets of the possibilistic program $P$ of Example 5
are: $\\{(a,0.6)\\}$ and $\\{(b,0.8)\\}$.
###### Example 6
Let $P$ be again the possibilistic program of Example 3 and $S$ be the
possibilistic set of atoms introduced in Example 4.
One can see that $S^{*}$ is an answer set of the extended disjunctive program
$P^{*}$. Hence, in order to prove that $P\Vvdash_{PL}S$, we have to verify
that $P_{S^{*}}\vdash_{PL}S$. This means that for each possibilistic atom
$p\in S$, $P_{S^{*}}\vdash_{PL}p$. It is clear that
$P_{S^{*}}\vdash_{PL}$ | $\\{(d\\_inf(present,0),certain),$ $(no\\_r\\_inf(present,0),certain),$
---|---
| $(o(terminal\\_insufficient\\_funct,0),certain),$
| $(cs(stable,0),certain)\\}$
Now let us prove $(cs(stable,1),plausible)$ from $P_{S^{*}}$.
Premises from $P_{S^{*}}$ |
---|---
1\. $o(terminal\\_insufficient\\_funct,0)$ | $certain$
2\. $o(terminal\\_insufficient\\_funct,0)\rightarrow action(transplant,0)$ | $confirmed$
3\. $action(transplant,0)\rightarrow o(good\\_graft\\_funct,1)$ | $confirmed$
4\. $o(good\\_graft\\_funct,1)\rightarrow cs(stable,1)$ | $plausible$
From 1 and 2 by GMP* |
5\. $action(transplant,0)$ | $confirmed$
From 3 and 5 by GMP* |
6\. $o(good\\_graft\\_funct,1)$ | $confirmed$
From 4 and 6 by GMP* |
7\. $cs(stable,1)$. | $plausible$
In this proof, we can also see the inference of the possibilistic atom
$(action(transplant,0),$ $confirmed)$. The proof of the possibilistic atom
$(no\\_r\\_inf(present,1),probable)$ is similar to the proof of the
possibilistic atom $(cs(stable,1),plausible)$. Therefore,
$P_{S^{*}}\vdash_{PL}S$ is true. Notice that a possibilistic set $S^{\prime}$
such that $S^{\prime}\neq S$, $P_{(S^{\prime})^{*}}\vdash_{PL}S^{\prime}$ and
$S\sqsubseteq S^{\prime}$ does not exist; hence, $S$ is an i-greatest set.
Then, $S$ is a possibilistic answer set of $P$.
By considering the possibilistic answer set $S$, what can we conclude about
our medical scenario from $S$? We can conclude that if it is _confirmed_ that
a transplant is performed on a donor with an infection, it is _probable_ that
the recipient will not be infected after the transplant; moreover it is
_plausible_ that he will be stable. It is worth mentioning that this
_optimistic_ conclusion is just one of the possible scenarios that we can
infer from the program $P$. In fact, the program $P$ has six possibilistic
answer sets, among which we can find pessimistic scenarios such as: it is _probable_ that the recipient will be infected by the organ donor's infection and, moreover, it is _confirmed_ that the recipient needs another transplant.
Now, let us identify some properties of the possibilistic answer set
semantics. First, observe that there is an important condition w.r.t. the
definition of a _possibilistic answer set_ which is introduced by
$\Vvdash_{PL}$: a possibilistic set $S$ cannot be a possibilistic answer set
of a possibilistic logic program $P$ if $S^{*}$ is not an answer set of the
extended logic program $P^{*}$. This condition guarantees that any clause of
$P^{*}$ is satisfied by $S^{*}$. For instance, let us consider the
possibilistic logic program $P$:
$0.4:a.$ | | $0.6:b.$
---|---|---
and the possibilistic set $S=\\{(a,0.4)\\}$. We can see that
$P_{S^{*}}\vdash_{PL}S$; however, $S^{*}$ is not an answer set of $P^{*}$.
Therefore, $P\Vvdash_{PL}S$ is _false_, and $S$ cannot be a possibilistic answer set of $P$. This suggests a direct relationship between the possibilistic answer set semantics and the answer set semantics.
###### Proposition 3
Let $P$ be a possibilistic disjunctive logic program. If $M$ is a
possibilistic answer set of P then $M^{*}$ is an answer set of $P^{*}$.
When all the possibilistic clauses of a possibilistic program $P$ have the
same certainty level, the answer sets of $P^{*}$ can be directly generalized
to the possibilistic answer sets of $P$.
###### Proposition 4
Let $P=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic disjunctive
logic program and $\alpha$ be a fixed element of ${\mathcal{}Q}$. If $\forall
r\in P$, $n(r)=\alpha$ and $M^{\prime}$ is an answer set of $P^{*}$, then
$M:=\\{(a,\alpha)|a\in M^{\prime}\\}$ is a possibilistic answer set of $P$.
For the class of possibilistic normal logic programs which are defined with a
totally ordered set, our definition of possibilistic answer set is closely
related to the definition of a _possibilistic stable model_ presented in
[Nicolas et al. (2006)]. In fact, both semantics coincide.
###### Proposition 5
Let $P:=\langle(Q,\leq),N\rangle$ be a possibilistic normal program such that
$(Q,\leq)$ is a totally ordered set and ${\mathcal{}L}_{P}$ has no extended
atoms. $M$ is a possibilistic answer set of P if and only if $M$ is a
possibilistic stable model of $P$.
To prove that the possibilistic answer set semantics is computable, we will
present an algorithm for computing possibilistic answer sets. With this in
mind, let us remember that a classical resolvent is defined as follows: Assume
that $C$ and $D$ are two clauses in their disjunctive form such that $C=a\vee
l_{1}\vee\dots\vee l_{n}$ and $D=\sim a\vee ll_{1}\vee\dots\vee ll_{m}$. The
clause $l_{1}\vee\dots\vee l_{n}\vee ll_{1}\vee\dots\vee ll_{m}$ is called a
resolvent of $C$ and $D$ w.r.t. $a$. Thus clauses $C$ and $D$ have a resolvent
in case a literal $a$ exists such that $a$ appears in $C$ and $\sim a$ appears
in $D$ (or conversely).
Now, let us consider a straightforward generalization of the possibilistic
resolution rule introduced in [Dubois et al. (1994)]:
(R)
$(c_{1}\;\alpha_{1})(c_{2}\;\alpha_{2})\vdash(R(c_{1},c_{2})\;{\mathcal{}GLB}(\\{\alpha_{1},\alpha_{2}\\}))$
in which $R(c_{1},c_{2})$ is any classical resolvent of $c_{1}$ and $c_{2}$
such that $c_{1}$ and $c_{2}$ are disjunctions of literals. It is worth
mentioning that it is easy to transform any possibilistic disjunctive logic
program $P$ into a set of possibilistic disjunctions ${\mathcal{}C}$. Indeed,
${\mathcal{}C}$ can be obtained as follows:
${\mathcal{}C}:=\bigcup\\{(a_{1}\vee\ldots\vee a_{m}\vee\sim
a_{m+1}\vee\dots\vee\sim a_{j}\vee a_{j+1}\vee\dots\vee a_{n}\;\alpha)|$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}(\alpha:a_{1}\vee\ldots\vee
a_{m}\leftarrow a_{m+1},\dots,a_{j},not\;a_{j+1},\dots,not\;a_{n})\in P\\}$
Let us remember that whenever a possibilistic program is considered as a
possibilistic theory, each negative literal $not\;a$ is replaced by $\sim a$
such that $\sim$ is regarded as the negation of classical logic; in Example 7,
the transformation of a possibilistic program into a set of possibilistic
disjunctions is shown.
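A sketch of this transformation into possibilistic disjunctions, where the positive body atoms reappear under $\sim$ and the default-negated atoms reappear positively; the tuple encoding of literals is an implementation convenience of this sketch, not part of the formalism.

```python
from typing import FrozenSet, List, Tuple

Clause = Tuple[float, FrozenSet[str], FrozenSet[str], FrozenSet[str]]   # (alpha, head, B+, B-)
Literal = Tuple[bool, str]               # (True, a) stands for a, (False, a) stands for ~a

def to_disjunctions(program: List[Clause]) -> List[Tuple[float, FrozenSet[Literal]]]:
    """alpha: a1 v ... v am <- b1,...,bj, not c1,...,not cn  becomes the weighted
    disjunction (a1 v ... v am v ~b1 v ... v ~bj v c1 v ... v cn, alpha)."""
    out = []
    for alpha, head, pos, neg in program:
        lits = ({(True, a) for a in head} |
                {(False, b) for b in pos} |
                {(True, c) for c in neg})
        out.append((alpha, frozenset(lits)))
    return out

# For example, (0.8 : a <- b) becomes (a v ~b, 0.8).
print(to_disjunctions([(0.8, frozenset({"a"}), frozenset({"b"}), frozenset())]))
```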
The following proposition shows that the resolution rule (R) is sound.
###### Proposition 6
Let ${\mathcal{}C}$ be a set of possibilistic disjunctions, and
$C=(c\;\alpha)$ be a possibilistic clause obtained by a finite number of
successive application of _(R)_ to ${\mathcal{}C}$; then
${\mathcal{}C}\vdash_{PL}C$.
Like the possibilistic rule introduced in [Dubois et al. (1994)], (R) is
complete for refutation. We will say that a possibilistic disjunctive program
$P$ is _consistent_ if $P$ has at least a possibilistic answer set. Otherwise
$P$ is said to be _inconsistent_. The degree of inconsistency of a
possibilistic logic program $P$ is
$Inc(P)={\mathcal{}GLB}(\\{\alpha|P_{\alpha}\text{ is consistent }\\})$.
###### Proposition 7
Let $P$ be a set of possibilistic clauses and ${\mathcal{}C}$ be the set of
possibilistic disjunctions obtained from $P$; then the valuation of the
optimal refutation by resolution from ${\mathcal{}C}$ is the inconsistent
degree of $P$.
The main implication of Proposition 6 and Proposition 7 is that (R) suggests a
method for inferring a possibilistic formula from a possibilistic knowledge
base.
###### Corollary 1
Let $P:=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic disjunctive
logic program, $\varphi$ be a literal and ${\mathcal{}C}$ be a set of
possibilistic disjunctions obtained from
$N\cup\\{(\sim\varphi\;\top_{\mathcal{}Q})\\}$; then the valuation of the
optimal refutation from ${\mathcal{}C}$ is $n(\varphi)$, that is
$P\vdash_{PL}(\varphi\;n(\varphi))$.
Based on the fact that the resolution rule (R) suggests a method for inferring
the necessity value of a possibilistic formula, we can define the following
function for computing the possibilistic answer sets of a possibilistic
program $P$. In this function, $\square$ denotes an empty clause.
Function $Poss\\_Answer\\_Sets(P)$
Let $ASP(P^{*})$ be a function that computes the answer sets of the standard logic program $P^{*}$, for example DLV [DLV (1996)].
Poss-ASP $:=\emptyset$
for all $S\in ASP(P^{*})$
  Let ${\mathcal{}C}$ be the set of possibilistic disjunctions obtained from $P_{S}$.
  $S^{\prime}:=\emptyset$
  for all $a\in S$
    $C^{\prime}:={\mathcal{}C}\cup\\{(\sim a\;\top_{\mathcal{}Q})\\}$
    Search for a deduction of $(R(\square)\;\alpha)$ by repeatedly applying the resolution rule (R) from $C^{\prime}$, with $\alpha$ maximal.
    $S^{\prime}:=S^{\prime}\cup\\{(a\;\alpha)\\}$
  endfor
  Poss-ASP $:=$ Poss-ASP $\cup\;\\{S^{\prime}\\}$
endfor
return(Poss-ASP)
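The following Python skeleton mirrors the function above under simplifying assumptions: a totally ordered numeric lattice (so $GLB$ is min and the top is 1.0), answer sets of $P^{*}$ supplied by an external solver such as DLV, and a brute-force saturation of the resolution rule (R) instead of a guided search for the optimal refutation. It is an illustration of the algorithm, not an efficient implementation; all names are hypothetical.

```python
from itertools import combinations
from typing import Dict, FrozenSet, List, Tuple

Clause = Tuple[float, FrozenSet[str], FrozenSet[str], FrozenSet[str]]  # (alpha, head, B+, B-)
Literal = Tuple[bool, str]                                              # (True, a) = a, (False, a) = ~a
TOP = 1.0                                                               # top of the numeric lattice

def reduce_by(P: List[Clause], S: FrozenSet[str]) -> List[Clause]:
    """The reduction P_S of Definition 6."""
    return [(a, h & S, bp, frozenset()) for (a, h, bp, bn) in P
            if h & S and not (bn & S) and bp <= S]

def to_disjunctions(P: List[Clause]):
    """Read every clause as a weighted disjunction of classical literals."""
    return [(a, frozenset({(True, x) for x in h} |
                          {(False, x) for x in bp} |
                          {(True, x) for x in bn})) for (a, h, bp, bn) in P]

def best_refutation(disjs) -> float:
    """Saturate the possibilistic resolution rule (R) and return the largest degree
    with which the empty clause can be derived (0.0 if it cannot be derived)."""
    best: Dict[FrozenSet[Literal], float] = {}
    for a, c in disjs:
        best[c] = max(best.get(c, 0.0), a)
    changed = True
    while changed:
        changed = False
        for (c1, a1), (c2, a2) in list(combinations(best.items(), 2)):
            for (sign, atom) in c1:
                if (not sign, atom) in c2:
                    res = (c1 - {(sign, atom)}) | (c2 - {(not sign, atom)})
                    deg = min(a1, a2)               # GLB on a total order
                    if deg > best.get(res, 0.0):
                        best[res] = deg
                        changed = True
    return best.get(frozenset(), 0.0)

def poss_answer_sets(P: List[Clause], answer_sets: List[FrozenSet[str]]):
    """Skeleton of Poss_Answer_Sets: `answer_sets` are the answer sets of P*, assumed
    to be computed by an external solver such as DLV."""
    result = []
    for S in answer_sets:
        disjs = to_disjunctions(reduce_by(P, S))
        result.append({a: best_refutation(disjs + [(TOP, frozenset({(False, a)}))])
                       for a in S})
    return result

# Example 5 revisited: the program {0.6: a v b,  0.4: a <- not b,  0.8: b <- not a}.
P = [(0.6, frozenset({"a", "b"}), frozenset(), frozenset()),
     (0.4, frozenset({"a"}), frozenset(), frozenset({"b"})),
     (0.8, frozenset({"b"}), frozenset(), frozenset({"a"}))]
print(poss_answer_sets(P, [frozenset({"a"}), frozenset({"b"})]))  # [{'a': 0.6}, {'b': 0.8}]
```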
The following proposition proves that the function $Poss\\_Answer\\_Sets$
computes all the possibilistic answer sets of a possibilistic logic program.
###### Proposition 8
Let $P:=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic logic
program. The set Poss-ASP returned by $Poss\\_Answer\\_Sets(P)$ is the set of
all the possibilistic answer sets of $P$.
In order to illustrate this algorithm, let us consider the following example:
###### Example 7
Let $P:=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic program such
that ${\mathcal{}Q}:=\\{0,$ $0.1$, $\dots$, $0.9,$ $1\\}$, $\leq$ is the
standard relation between rational numbers and $N$ the following set of
possibilistic clauses:
$0.7:$ | $a\vee b$ | $\leftarrow not\;c.$
---|---|---
$0.6:$ | $c$ | $\leftarrow not\;a,not\;b.$
$0.8:$ | $a$ | $\leftarrow b.$
$0.9:$ | $e$ | $\leftarrow b.$
$0.6:$ | $b$ | $\leftarrow a.$
$0.5:$ | $b$ | $\leftarrow a.$
First of all, we can see that $P^{*}$ has two answer sets:
$S_{1}:=\\{a,b,e\\}$ and $S_{2}:=\\{c\\}$. This means that $P$ has two
possibilistic answer sets. Let us consider $S_{1}$ for our example.
Then, one can see that $P_{S_{1}}$ is:
$0.7:$ | $a\vee b$. |
---|---|---
$0.8:$ | $a$ | $\leftarrow b.$
$0.9:$ | $e$ | $\leftarrow b.$
$0.6:$ | $b$ | $\leftarrow a.$
$0.5:$ | $b$ | $\leftarrow a.$
Then ${\mathcal{}C}:=\\{(a\vee b\;0.7),(a\vee\sim b\;0.8),(e\vee\sim
b\;0.9),(b\vee\sim a\;0.6),(b\vee\sim a\;0.5)\\}$. In order to infer the
necessity value of the atom $a$, we add $(\sim a\;1)$ to ${\mathcal{}C}$ and a
search for finding an optimal refutation is applied. As we can see in Figure
3, there are three refutations; however, the optimal refutation is
$(\square\;0.7)$. This means that the best necessity value for the atom $a$ is
$0.7$.
Figure 3: Possibilistic resolution: Search for an _optimal refutation_ for the
atom $a$.
In Figure 4, we can see the optimal refutation search for the atom $b$. As we
can see the optimal refutation is $(\square\;0.6)$; hence the best necessity
value for the atom $b$ is $0.6$.
Figure 4: Possibilistic resolution: Search for an _optimal refutation_ for the
atom $b$.
In Figure 5, we can see that the best necessity value for the atom $e$ is
$0.6$.
Figure 5: Possibilistic resolution: Search for an _optimal refutation_ for the
atom $e$.
Through this search, we can infer that a possibilistic answer set of the program $P$ is: $\\{(a,0.7),(b,0.6),(e,0.6)\\}$.
### 4.2 Possibilistic answer sets based on partial evaluation
We have defined a possibilistic answer set semantics by considering the formal
proof theory of possibilistic logic. However, in standard logic programming
there are several frameworks for analyzing, defining and computing logic
programming semantics [Dix (1995a), Dix (1995b)]. One of these approaches is
based on program transformations, in fact there are many studies on this
approach, for example [Brass and Dix (1999), Brass and Dix (1997), Brass and
Dix (1998), Dix et al. (2001)]. For the case of disjunctive logic programs, one
important transformation is _partial evaluation (also called unfolding)_
[Brass and Dix (1999)].
This section shows that it is also possible to define a possibilistic
disjunctive semantics based on an operator which is a combination between
partial evaluation for disjunctive logic programs and the inference rule $GMP^{*}$
of possibilistic logic. This semantics has the same behavior as the semantics
based on the proof theory of possibilistic logic.
This section starts by defining a version of the general principle of partial
evaluation (GPPE) for possibilistic positive disjunctive clauses.
###### Definition 9 (Grade-GPPE (G-GPPE))
Let $r_{1}$ be a possibilistic clause of the form
$\alpha:{\mathcal{}A}\leftarrow{\mathcal{}B}^{+}\cup\\{B\\}$ and $r_{2}$ a
possibilistic clause of the form $\alpha_{1}:{\mathcal{}A}_{1}$ such that
$B\in{\mathcal{}A}_{1}$ and $B\notin{\mathcal{}B}^{+}$, then
$\text{G-GPPE}(r_{1},r_{2})=({\mathcal{}GLB}(\\{\alpha,\alpha_{1}\\}):{\mathcal{}A}\cup({\mathcal{}A}_{1}\setminus\\{B\\})\leftarrow{\mathcal{}B}^{+})$
Observe that one of the possibilistic clauses which is considered by G-GPPE
has an empty body. For instance, let us consider the following two
possibilistic clauses:
$r_{1}=\;0.7:a\vee b$.
$r_{2}=\;0.9:e\leftarrow b$.
Then G-GPPE$(r_{1},r_{2})=(0.7:e\vee a)$. Now, by considering G-GPPE, we will
define the operator ${\mathcal{}T}$.
###### Definition 10
Let $P$ be a possibilistic positive logic program. The operator
${\mathcal{}T}$ is defined as follows:
${\mathcal{}T}(P):=P\cup\\{\text{G-GPPE}(r_{1},r_{2})|r_{1},r_{2}\in P\\}$
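A small Python sketch of G-GPPE and of the operator ${\mathcal{}T}$, again assuming a totally ordered numeric lattice (min plays the role of ${\mathcal{}GLB}$); the last function iterates ${\mathcal{}T}$ until nothing new is produced, anticipating the operator $\Pi$ introduced in Proposition 9 below. Names are illustrative.

```python
from typing import FrozenSet, List, Tuple

# A positive possibilistic clause: (degree, disjunctive head, positive body).
PosClause = Tuple[float, FrozenSet[str], FrozenSet[str]]

def g_gppe(r1: PosClause, r2: PosClause) -> List[PosClause]:
    """G-GPPE: unfold a body atom B of r1 with a body-free clause r2 whose head contains B."""
    a1, head1, body1 = r1
    a2, head2, body2 = r2
    if body2:                                     # r2 must have an empty body
        return []
    return [(min(a1, a2),                         # GLB on a total order
             head1 | (head2 - {B}),
             body1 - {B}) for B in body1 & head2]

def t_operator(P: List[PosClause]) -> List[PosClause]:
    """T(P) = P together with every clause obtainable by one G-GPPE step."""
    new = list(P)
    for r1 in P:
        for r2 in P:
            for r in g_gppe(r1, r2):
                if r not in new:
                    new.append(r)
    return new

def iterate_to_fixed_point(P: List[PosClause]) -> List[PosClause]:
    """Apply T repeatedly until nothing new is produced."""
    current = t_operator(P)
    nxt = t_operator(current)
    while len(nxt) != len(current):
        current, nxt = nxt, t_operator(nxt)
    return current

# The program P_{S_1} used in the illustration that follows; one application of T adds,
# among others, the clauses (0.7: a) and (0.7: e v a) shown in the tables below.
P_S1 = [(0.7, frozenset({"a", "b"}), frozenset()),
        (0.8, frozenset({"a"}), frozenset({"b"})),
        (0.9, frozenset({"e"}), frozenset({"b"})),
        (0.6, frozenset({"b"}), frozenset({"a"})),
        (0.5, frozenset({"b"}), frozenset({"a"}))]
```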
In order to illustrate the operator ${\mathcal{}T}$, let us consider the
program $P_{S_{1}}$ of Example 7.
$0.7:$ | $a\vee b$. |
---|---|---
$0.8:$ | $a$ | $\leftarrow b.$
$0.9:$ | $e$ | $\leftarrow b.$
$0.6:$ | $b$ | $\leftarrow a.$
$0.5:$ | $b$ | $\leftarrow a.$
Hence, ${\mathcal{}T}(P_{S_{1}})$ is:
$0.7:$ | $a\vee b$. | | $0.7:$ | $a$.
---|---|---|---|---
$0.8:$ | $a\leftarrow b.$ | | $0.7:$ | $e\vee a$.
$0.9:$ | $e\leftarrow b.$ | | $0.6:$ | $b$.
$0.6:$ | $b\leftarrow a.$ | | $0.5:$ | $b$.
$0.5:$ | $b\leftarrow a.$ | | |
Notice that by considering the possibilistic clauses that were added to
$P_{S_{1}}$ by ${\mathcal{}T}$, one can reapply G-GPPE. For instance, if we consider $0.6:b$ and $0.9:e\leftarrow b$ from ${\mathcal{}T}(P_{S_{1}})$,
G-GPPE infers $0.6:e$. Indeed, ${\mathcal{}T}({\mathcal{}T}(P_{S_{1}}))$ is:
$0.7:$ | $a\vee b$. | | $0.7:$ | $a$. | | $0.6:$ | $a$.
---|---|---|---|---|---|---|---
$0.8:$ | $a\leftarrow b.$ | | $0.7:$ | $e\vee a$. | | $0.5:$ | $a$.
$0.9:$ | $e\leftarrow b.$ | | $0.6:$ | $b$. | | $0.6:$ | $e$.
$0.6:$ | $b\leftarrow a.$ | | $0.5:$ | $b$. | | $0.5:$ | $e$.
$0.5:$ | $b\leftarrow a.$ | | | | | $0.6:$ | $b\vee e$.
| | | | | | $0.5:$ | $b\vee e$.
An important property of the operator ${\mathcal{}T}$ is that it always
reaches a fixed-point.
###### Proposition 9
Let $P$ be a possibilistic disjunctive logic program. If
$\Gamma_{0}:={\mathcal{}T}(P)$ and $\Gamma_{i}:={\mathcal{}T}(\Gamma_{i-1})$
such that $i\in{\mathcal{}N}$, then $\exists\;n\in{\mathcal{}N}$ such that
$\Gamma_{n}=\Gamma_{n-1}$. We denote $\Gamma_{n}$ by $\Pi(P)$.
Let us consider again the possibilistic program $P_{S_{1}}$. We can see that
$\Pi(P_{S_{1}})$ is:
$0.7:$ | $a\vee b$. | | $0.7:$ | $a$. | | $0.6:$ | $a$. | | $0.6$ | $a\vee e.$
---|---|---|---|---|---|---|---|---|---|---
$0.8:$ | $a\leftarrow b.$ | | $0.7:$ | $e\vee a$. | | $0.5:$ | $a$. | | $0.5$ | $a\vee e.$
$0.9:$ | $e\leftarrow b.$ | | $0.6:$ | $b$. | | $0.6:$ | $e$. | | |
$0.6:$ | $b\leftarrow a.$ | | $0.5:$ | $b$. | | $0.5:$ | $e$. | | |
$0.5:$ | $b\leftarrow a.$ | | | | | $0.6:$ | $b\vee e$. | | |
| | | | | | $0.5:$ | $b\vee e$. | | |
Observe that in $\Pi(P_{S_{1}})$ there are possibilistic facts (possibilistic
clauses with empty bodies and one atom in their heads) with different
necessity values. In order to infer the optimal necessity value of each
possibilistic fact, one can consider the _least upper bound_ of these values.
For instance, the optimal necessity value for the possibilistic atom $a$ is
${\mathcal{}LUB}(\\{0.7,0.6,0.5\\})=0.7$. Based on this idea, $Sem_{min}$ is
defined as follows.
###### Definition 11
Let $P$ be a possibilistic logic program and
$Facts(P,a):=\\{(\alpha:a)|(\alpha:a)\in P\\}$.
$Sem_{min}(P):=\\{(x,\alpha)|Facts(P,x)\neq\emptyset\text{ and }$
$\alpha:={\mathcal{}LUB}(\\{n(r)|{r\in Facts(P,x)}\\})\\}$ in which
$x\in{\mathcal{}L}_{P}$.
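A sketch of $Sem_{min}$ for a totally ordered numeric lattice, where max plays the role of ${\mathcal{}LUB}$ and a possibilistic program is a list of (degree, head, body) triples (names are illustrative).

```python
from typing import Dict, FrozenSet, List, Tuple

PosClause = Tuple[float, FrozenSet[str], FrozenSet[str]]   # (degree, head, body)

def sem_min(P: List[PosClause]) -> Dict[str, float]:
    """Collect the possibilistic facts (empty body, single atom in the head) and keep,
    for every atom, the LUB (here: max) of the degrees with which it occurs as a fact."""
    out: Dict[str, float] = {}
    for alpha, head, body in P:
        if not body and len(head) == 1:
            (atom,) = head
            out[atom] = max(out.get(atom, alpha), alpha)
    return out

# Applied to Π(P_{S_1}) from the tables above, this yields {'a': 0.7, 'b': 0.6, 'e': 0.6}.
```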
It is easy to see that $Sem_{min}(\Pi(P_{S_{1}}))$ is
$\\{(a,0.7),(b,0.6),(e,0.6)\\}$. Now by considering the operator
${\mathcal{}T}$ and $Sem_{min}$, we can define a semantics for possibilistic
disjunctive logic programs that will be called possibilistic-${\mathcal{}T}$
answer set semantics.
###### Definition 12
Let $P$ be a possibilistic disjunctive logic program and $M$ be a set of
possibilistic atoms such that $M^{*}$ is an answer set of $P^{*}$. $M$ is a
possibilistic-${\mathcal{}T}$ answer set of P if and only if
$M=Sem_{min}(\Pi(P_{M^{*}}))$.
In order to illustrate this definition, let us consider again the program $P$
of Example 7 and $S=\\{(a,0.7),(b,0.6),(e,0.6)\\}$. As commented in Example 7,
$S^{*}$ is an answer set of $P^{*}$. We have already seen that
$Sem_{min}(\Pi(P_{S_{1}}))$ is $\\{(a,0.7),(b,0.6),(e,0.6)\\}$, therefore we
can say that $S$ is a possibilistic-${\mathcal{}T}$ answer set of $P$. Observe
that the possibilistic-${\mathcal{}T}$ answer set semantics and the
possibilistic answer set semantics coincide. In fact, the following
proposition guarantees that both semantics are the same.
###### Proposition 10
Let $P$ be a possibilistic disjunctive logic program and $M$ a set of
possibilistic atoms. $M$ is a possibilistic answer set of $P$ if and only if
$M$ is a possibilistic-${\mathcal{}T}$ answer set of $P$.
## 5 Inconsistency in possibilistic logic programs
In the first part of this section, the relevance of considering inconsistent
possibilistic knowledge bases is introduced, and in the second part, some
criteria for managing inconsistent possibilistic logic programs are
introduced.
### 5.1 Relevance of inconsistent possibilistic logic programs
Inconsistent knowledge bases are usually regarded as an _epistemic hell_ that
have to be avoided at all costs. However, many times it is difficult or
impossible to stay away from managing inconsistent knowledge bases. There are
authors, such as Octávio Bueno [Bueno (2006)], who argue that the consideration
of inconsistent systems is a useful device for a number of reasons: (1) it is
often the only way to explore inconsistent information without arbitrarily
rejecting precious data. (2) inconsistent systems are sometimes the only way
to obtain new information (particularly information that conflicts with deeply
entrenched theories). As a result, (3) inconsistent belief systems allow us to
make better _informed decisions_ regarding which bits of information to accept
or reject in the end.
In order to give a small example, in which exploring inconsistent information
can be important for making a better informed decision, we will continue with
the medical scenario described in Section 1. In Example 4, we have already
presented the grounded program $P_{infections}$ of our medical scenario:
probable: $r\\_inf(present,1)\vee no\\_r\\_inf(present,1)$ $\leftarrow$
$action(transplant,0),$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}d\\_inf(present,0).$
confirmed: $o(good\\_graft\\_funct,1)\vee o(delayed\\_graft\\_funct,1)\vee$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}o(terminal\\_insufficient\\_funct,1)\leftarrow
action(transplant,0).$
confirmed: $action(transplant,0)\leftarrow$
$o(terminal\\_insufficient\\_funct,0)$.
plausible: $cs(stable,1)\leftarrow o(good\\_graft\\_funct,1)$.
plausible: $cs(unstable,1)\leftarrow o(delayed\\_graft\\_funct,1)$.
plausible: $cs($0-urgency$,1)\leftarrow o(terminal\\_insufficient\\_funct,1),$
$~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}action(transplant,0)$.
certain: $\;\;\leftarrow action(transplant,0),action(wait,0)$.
certain: $\;\;\leftarrow action(transplant,0),cs(dead,0)$.
certain: $d\\_inf(present,0)$.
certain: $no\\_r\\_inf(present,0)$.
certain: $o(terminal\\_insufficient\\_funct,0)$.
certain: $cs(stable,0)$.
As mentioned in Example 4, in this program the atoms $\lnot
r\\_inf(present,0)$ and
$\lnot r\\_inf(present,1)$ were replaced by $no\\_r\\_inf(present,0)$ and
$no\\_r\\_inf(present,1)$ respectively. Usually in standard answer set
programming, the constraints
$\leftarrow no\\_r\\_inf(present,0),r\\_inf(present,0).$
$\leftarrow no\\_r\\_inf(present,1),r\\_inf(present,1)$.
must be added to the program to avoid inconsistent answer sets. In order to
illustrate the role of these kinds of constraints, let $C_{1}$ be the
following possibilistic constraints:
certain: $\;\;\leftarrow no\\_r\\_inf(present,0),r\\_inf(present,0).$
certain: $\;\;\leftarrow no\\_r\\_inf(present,1),r\\_inf(present,1)$.
Also let us consider three new possibilistic clauses (denoted by $P_{v}$):
confirmed: $v(kidney,0)\leftarrow cs(stable,1),action(transplant,0)$.
probable: $no\\_v(kidney,0)\leftarrow
r\\_inf(present,1),action(transplant,0)$.
certain: $\;\;\leftarrow\;not\;cs(stable,1)$.
The intended meaning of the predicate $v(t,T)$ is that the organ $t$ is viable
for a transplant and $T$ denotes a moment in time. Observe that we replaced
the atom $\lnot v(kidney,0)$ with $no\\_v(kidney,0)$. The reading of the first
clause is that if the clinical situation of the organ recipient is stable
after the graft, then it is _confirmed_ that the kidney is viable for
transplant. The reading of the second one is that if the organ recipient is
infected after the graft, then it is _probable_ that the kidney is not viable
for transplant. The aim of the possibilistic constraint is to discard
scenarios in which the clinical situation of the organ recipient is not
stable. Let us consider the respective possibilistic constraint w.r.t. the
atoms $no\\_v(kidney,0)$ and $v(kidney,0)$ (denoted by $C_{2}$):
certain: $\;\;\leftarrow no\\_v(kidney,0),v(kidney,0).$
Two programs are defined:
$P:=P_{infections}\;\cup\;P_{v}\text{ and }P_{c}:=P_{infections}\;\cup
P_{v}\;\cup\;C_{1}\;\cup\;C_{2}$
Basically, the difference between $P$ and $P_{c}$ is that $P$ allows inconsistent possibilistic models whereas $P_{c}$ does not.
Now let us consider the possibilistic answer sets of the programs $P$ and
$P_{c}$. One can see that the program $P_{c}$ has just one possibilistic
answer set:
$\\{(d\\_inf(present,0),certain),$ $(no\\_r\\_inf(present,0),certain),$
$(o(terminal\\_insufficient\\_funct,0),certain),$ $(cs(stable,0),certain),$
$(action(transplant,0),confirmed),$ $(o(good\\_graft\\_funct,1),confirmed),$
$\textbf{(cs(stable,1), plausible)},$ $\textbf{(no\\_r\\_inf(present,1),
probable)},$
$\textbf{(v(kidney,0), plausible)}\\}$
This possibilistic answer set suggests that since it is plausible that the
recipient’s clinical situation will be stable after the graft, it is plausible
that the kidney is _viable_ for transplanting. _Observe that the possibilistic
answer sets of $P_{c}$ do not show the possibility that the organ recipient could
be infected after the graft_.
Let us now consider the possibilistic answer sets of the program $P$:
$S_{1}:=\\{(d\\_inf(present,0),certain),$ $(no\\_r\\_inf(present,0),certain),$
$(o(terminal\\_insufficient\\_funct,0),certain),$ $(cs(stable,0),certain),$
$(action(transplant,0),confirmed),$ $(o(good\\_graft\\_funct,1),confirmed),$
$\textbf{(cs(stable,1), plausible)},$ $\textbf{(no\\_r\\_inf(present,1),
probable)},$
$\textbf{(v(kidney,0), plausible)}\\}$
$S_{2}:=\\{(d\\_inf(present,0),certain),$ $(no\\_r\\_inf(present,0),certain),$
$(o(terminal\\_insufficient\\_funct,0),certain),$ $(cs(stable,0),certain),$
$(action(transplant,0),confirmed),$ $(o(good\\_graft\\_funct,1),confirmed),$
$\textbf{(cs(stable,1), plausible)},$ $\textbf{(r\\_inf(present,1),
probable)},$
$\textbf{(v(kidney,0), plausible)},\textbf{(no\\_v(kidney,0), probable)}\\}$
$P$ has two possibilistic answer sets: $S_{1}$ and $S_{2}$. $S_{1}$
corresponds to the possibilistic answer set of the program $P_{c}$ and $S_{2}$
is an inconsistent possibilistic answer set — because the atoms (v(kidney,0),
plausible) and (no_v(kidney,0), probable) appear in $S_{2}$. Observe that
although $S_{2}$ is an inconsistent possibilistic answer set, it contains
important information w.r.t. the considerations of our scenario. $S_{2}$
suggests that even though it is plausible that the clinical situation of the
organ recipient will be stable after the graft, it is also probable that the
organ recipient will be infected by the infection of the donor’s organ.
Observe that $P_{c}$ is unable to infer the possibilistic answer set $S_{2}$ because it contains the following possibilistic constraint:
certain: $\;\;\leftarrow no\\_v(kidney,0),v(kidney,0).$
By defining these kinds of constraints, we can guarantee that any
possibilistic answer set inferred from $P_{c}$ will be consistent; however,
one can omit important considerations w.r.t. a decision-making problem. In
fact, we agree with Bueno [Bueno (2006)] that considering inconsistent systems
such as inconsistent possibilistic answer sets is sometimes the only way to explore inconsistent information without arbitrarily rejecting precious data.
### 5.2 Inconsistency degrees of possibilistic sets
To manage inconsistent possibilistic answer sets, it is necessary to define a
criterion of preference between possibilistic answer sets. To this end, the concept of _inconsistency degree of a possibilistic set_ is defined. We say that a set of possibilistic
atoms $S$ is inconsistent (resp. consistent) if and only if $S^{*}$ is
inconsistent (resp. consistent), that is to say there is an atom $a$ such that
$a,\lnot a\in S^{*}$.
###### Definition 13
Let $S\in{\mathcal{}PS}$. The inconsistency degree of $S$ is defined as follows:
$InconsDegre(S):=\left\\{\begin{array}[]{lcl}\bot_{\mathcal{}Q}&&\text{ if
}S^{*}\text{ is consistent}\\\ {\mathcal{}GLB}(\\{\alpha|S_{\alpha}\text{ is
consistent}\\})&&\text{otherwise }\end{array}\right.$
in which $\bot_{\mathcal{}Q}$ is the bottom of the lattice
(${\mathcal{}Q}$,$\leq$) and $S_{\alpha}:=\\{(a,\alpha_{1})\in
S|\alpha_{1}\geq\alpha\\}$.
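A sketch of Definition 13 for a totally ordered numeric lattice $Q$, where strong negation is encoded by a leading "-" on atom names (an encoding convention of this sketch only) and $GLB$ is min.

```python
from typing import Dict, List, Set

PossSet = Dict[str, float]   # atom -> degree; strong negation is written with a leading "-"

def consistent(atoms: Set[str]) -> bool:
    """S* is consistent iff it contains no complementary pair a, -a."""
    return not any(("-" + a) in atoms for a in atoms if not a.startswith("-"))

def incons_degree(S: PossSet, Q: List[float], bottom: float = 0.0) -> float:
    """Definition 13 over a totally ordered lattice Q (a list of levels): bottom if S* is
    consistent, otherwise the GLB (min) of the levels alpha whose cut
    S_alpha = {(a, d) in S | d >= alpha} is consistent."""
    if consistent(set(S)):
        return bottom
    ok = [a for a in Q if consistent({x for x, d in S.items() if d >= a})]
    return min(ok) if ok else max(Q)   # the cut above the top is always consistent

# Toy example with complementary atoms at levels 0.4 and 0.7 and Q = {0, 0.1, ..., 1}.
S = {"v": 0.4, "-v": 0.7, "stable": 0.9}
print(incons_degree(S, [i / 10 for i in range(11)]))   # 0.5: cutting at 0.5 drops (v, 0.4)
```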
For instance, the possibilistic answer set $S_{2}$ of our example above has a
degree of inconsistency of _confirmed_. Based on the degree of inconsistency
of possibilistic sets, we can define a criterion of preference between
possibilistic answer sets.
###### Definition 14
Let $P=\langle({\mathcal{}Q},\leq),N\rangle$ be a possibilistic program and
$M_{1}$, $M_{2}$ two possibilistic answer sets of $P$. We say that $M_{1}$ is
more-consistent than $M_{2}$ if and only if
$InconsDegre(M_{1})<InconsDegre(M_{2})$.
In our example above, it is obvious that $S_{1}$ is more-consistent than
$S_{2}$. In general terms, a possibilistic answer set $M_{1}$ is preferred to
$M_{2}$ if and only if $M_{1}$ is more-consistent than $M_{2}$. This means
that any consistent possibilistic answer set will be preferred to any
inconsistent possibilistic answer set.
So far we have commented only on the case of inconsistent possibilistic answer
sets. However, there are possibilistic programs that are inconsistent because
they have no possibilistic answer sets. For instance, let us consider the
following possibilistic program $P_{inc}$ (we are assuming the lattice of
Example 7):
$0.3:$ | $a\leftarrow\;not\;b.$
---|---
$0.5:$ | $b\leftarrow\;not\;c.$
$0.6:$ | $c\leftarrow\;not\;a.$
Observe that $P_{inc}^{*}$ has no answer sets; hence, $P_{inc}$ has no
possibilistic answer sets.
### 5.3 Restoring inconsistent possibilistic knowledge bases
In order to restore consistency of an inconsistent possibilistic knowledge
base, possibilistic logic eliminates the possibilistic formulæ whose certainty levels are lower than the inconsistency degree of the knowledge base.
Considering this idea, the authors of [Nicolas et al. (2006)] defined the
concept of $\alpha$-cut for possibilistic logic programs. Based on Definition
14 of [Nicolas et al. (2006)], we define its respective generalization for our
approach.
###### Definition 15
Let $P$ be a possibilistic logic program
-
the strict $\alpha$-cut is the subprogram $P_{>\alpha}=\\{r\in
P|n(r)>\alpha\\}$
-
the consistency cut degree of $P$:
$ConsCutDeg(P):=\left\\{\begin{array}[]{lcl}\bot_{\mathcal{}Q}&&\text{ if
}P^{*}\text{ is consistent}\\\ {\mathcal{}GLB}(\\{\alpha|P_{>\alpha}\text{ is
consistent}\\})&&\text{otherwise }\end{array}\right.$
where $\bot_{\mathcal{}Q}$ is the bottom of the lattice
(${\mathcal{}Q}$,$\leq$).
Notice that the consistency cut degree of a possibilistic logic program
identifies the minimum level of certainty for which a strict $\alpha$-cut of
$P$ is consistent. As Nicolas et al. remarked in [Nicolas et al. (2006)], by
the non-monotonicity of the framework, it is not certain that a higher cut is
necessarily consistent.
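A sketch of Definition 15 for a totally ordered numeric lattice, where consistency of a (sub)program is delegated to an external answer-set solver passed in as a predicate; the function and parameter names are hypothetical.

```python
from typing import Callable, FrozenSet, List, Tuple

Clause = Tuple[float, FrozenSet[str], FrozenSet[str], FrozenSet[str]]   # (degree, head, B+, B-)

def strict_cut(P: List[Clause], alpha: float) -> List[Clause]:
    """P_{>alpha}: keep only the clauses whose degree is strictly greater than alpha."""
    return [r for r in P if r[0] > alpha]

def cons_cut_degree(P: List[Clause], Q: List[float],
                    has_answer_set: Callable[[List[Clause]], bool],
                    bottom: float = 0.0) -> float:
    """Definition 15 over a totally ordered lattice Q; consistency of a (sub)program is
    checked by the external oracle `has_answer_set` (e.g. a call to an ASP solver)."""
    if has_answer_set(P):
        return bottom
    # Some strict cut is always consistent, since the cut above the top is empty.
    return min(a for a in Q if has_answer_set(strict_cut(P, a)))
```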
In order to illustrate these ideas, let us reconsider the program $P_{inc}$.
First, one can see that $ConsCutDeg(P_{inc})=0.3$; hence, the subprogram
$P_{>ConsCutDeg(P_{inc})}$ is:
$0.5:$ | $b\leftarrow\;not\;c.$
---|---
$0.6:$ | $c\leftarrow\;not\;a.$
Observe that this program has a possibilistic answer set which is
$\\{(c,0.6)\\}$. Hence, thanks to the strict $\alpha$-cut, one is able to infer information from $P_{inc}$.
To summarize, one can identify two kinds of inconsistencies in our approach:
* •
one which arises from the presence of complementary atoms in a possibilistic
answer set and
* •
one which arises from the non-existence of a possibilistic answer set of a
possibilistic logic program.
To manage the inconsistency of possibilistic answer sets, a criterion of
preference between possibilistic answer sets was defined. On the other hand,
to manage the non-existence of a possibilistic answer set of a possibilistic
logic program $P$, the approach suggested by Nicolas et al. in [Nicolas et al.
(2006)] was adopted. This approach is based on $\alpha$_-cuts_ in order to
get consistent subprograms of a given program $P$.
## 6 Related Work
Research on logic programming with uncertainty has dealt with various
approaches of logic programming semantics, as well as different applications.
Most of the approaches in the literature employ one of the following
formalisms:
* •
annotated logic programming, _e.g._ [Kifer and Subrahmanian (1992)].
* •
probabilistic logic, _e.g._ [Ng and Subrahmanian (1992), Lukasiewicz (1998),
Kern-Isberner and Lukasiewicz (2004), Baral et al. (2009)].
* •
fuzzy set theory, _e.g._ [van Emden (1986), Rodríguez-Artalejo and Romero-Díaz
(2008), Van-Nieuwenborgh et al. (2007)].
* •
multi-valued logic, _e.g._ [Fitting (1991), Lakshmanan (1994)].
* •
evidence theoretic logic programming, _e.g._ [Baldwin (1987)].
* •
possibilistic logic, _e.g._ [Dubois et al. (1991), Alsinet and Godo (2002),
Alsinet and Godo (2000), Alsinet et al. (2008), Nicolas et al. (2006)].
Basically, these approaches differ in the underlying notion of uncertainty and
how uncertainty values, associated with clauses and facts, are managed. Among
these approaches, the formalisms based on possibilistic logic are closely
related to the approach presented in this paper. A clear distinction between
them and the formalism of this paper is that none of them capture disjunctive
clauses. On the other hand, excepting the work of Nicolas, et al., [Nicolas et
al. (2006)], none of these approaches describe a formalism for dealing with
uncertainty in a logic program with default negation by means of possibilistic
logic. Let us recall that the work of [Nicolas et al. (2006)] is totally
captured by the formalism presented in this paper (Proposition 5), but not
directly vice versa. For instance, let us consider the possibilistic logic
program $P=\langle(\\{0.1,\dots,0.9\\},\leq),N\rangle$ such that $\leq$ is the standard relation between rational numbers and $N$ is the following set of
possibilistic clauses:
$0.5:a\vee b.$ | | $0.5:a\leftarrow b.$
---|---|---
| | $0.5:b\leftarrow a.$
By considering a standard transformation from disjunctive clauses to normal
clauses [Baral (2003)], this program can be transformed to the possibilistic
normal logic program $P^{\prime}$:
$0.5:a\leftarrow\;not\;b.$ | | $0.5:a\leftarrow b.$
---|---|---
$0.5:b\leftarrow\;not\;a.$ | | $0.5:b\leftarrow a.$
One can see that $P$ has a possibilistic answer set: $\\{(a,0.5),(b,0.5)\\}$;
however, $P^{\prime}$ has no possibilistic answer sets.
Even though one can find a wide range of formalisms for dealing with
uncertainty by using _normal logic programs_ , there are few proposals for
dealing with uncertainty by using _disjunctive logic programs_ [Lukasiewicz
(2001), Gergatsoulis et al. (2001), Mateis (2000), Baral et al. (2009)]:
* •
In [Lukasiewicz (2001)], Many-Valued Disjunctive Logic Programs with
_probabilistic semantics_ are introduced. In this approach, _probabilistic
values_ are associated with each clause. Like our approach, Lukasiewicz
considers partial evaluation for characterizing different semantics by means
of probabilistic theory.
* •
In [Gergatsoulis et al. (2001)], the logic programming language _Disjunctive
Chronolog_ is introduced. This approach combines _temporal and disjunctive
logic programming_. Disjunctive Chronolog is capable of expressing dynamic
behaviour as well as uncertainty. In this approach, like our semantics, it is
shown that logic semantics of these programs can be characterized by a fixed-
point semantics.
* •
In [Mateis (2000)], the Quantitative Disjunctive Logic Programs (QDLP) are
introduced. These programs associate a reliability interval with each clause.
Different triangular norms (T-norms) are employed to define calculi for
propagating uncertainty information from the premises to the conclusion of a
quantitative rule; hence, the semantics of these programs is parameterized.
This means that each choice of a T-norm induces different QDLP languages.
* •
In [Baral et al. (2009)], intensive research is done in order to achieve a
complete integration between ASP and probability theory. This approach is
similar to the approach presented in this paper; but it is in the context of
probabilistic theory.
We want to point out that the syntactic approach of this paper is motivated by the fact that possibilistic logic is axiomatizable; therefore, a proof theory approach (axioms and inference rules) leads to the construction of a possibilistic semantics as a logical inference. This kind of possibilistic framework allows us to explore extensions of the possibilistic answer set semantics by considering the inference of different logics. In fact, by considering a syntactic approach, one can explore properties such as _strong equivalence_ and _free-syntax programs_. This means that the exploration of a syntactic approach has implications as important as those of an approach based on interpretations and possibilistic distributions.
The consideration of axiomatizations of given logics has shown to be a generic
approach for characterizing logic programming semantics. For instance, the
answer set semantics inference can be characterized as _a logic inference_ in
terms of the proof theory of intuitionistic logic and intermediate logics
[Pearce (1999), Osorio et al. (2004)].
## 7 Conclusions and future work
At the beginning of this research, two main goals were expected to be
achieved: 1.- a possibilistic extension of the answer set programming paradigm
for dealing with uncertain, inconsistent and incomplete information; and, 2.-
exploring the axiomatization of possibilistic logic in order to define a
computable possibilistic disjunctive semantics.
In order to achieve the first goal, the work presented in [Nicolas et al.
(2006)] was taken as a reference point. Unlike the approach of [Nicolas et al.
(2006)], which is restricted to _possibilistic normal programs_ , we define a
possibilistic logic programming framework based on _possibilistic disjunctive
logic programs_. Our approach introduces the use of possibilistic disjunctive
clauses which are able to capture _incomplete information_ and _incomplete
states of a knowledge base_ at the same time.
For capturing the semantics of possibilistic disjunctive logic programs, the
axiomatization of possibilistic logic and the standard definition of the
answer set semantics are taken as a base. Given that the inference of
possibilistic logic is characterized by a possibilistic resolution rule
(Proposition 6), it is shown that:
1. 1.
The optimal certainty value of an atom which belongs to a possibilistic answer
set corresponds to the optimal refutation by possibilistic resolution
(Proposition 7); hence,
2. 2.
There exists an algorithm for computing the possibilistic answer sets of a
possibilistic disjunctive logic program (Proposition 8).
As an alternative approach for inferring the possibilistic answer set
semantics, it is shown that this semantics can be characterized by a
simplified version of _the principle of partial evaluation_. This means that
the possibilistic answer set semantics is characterized by a possibilistic
fixed-point operator (Proposition 10). This result gives two points of view
for constructing the possibilistic answer set semantics in terms of two
_syntactic processes_ (_i.e._ , the possibilistic proof theory and the
principle of partial evaluation).
Based on the flexibility of possibilistic logic for defining degrees of
uncertainty, it is shown that non-numerical degrees of uncertainty can be handled by the defined possibilistic answer set semantics.
This is illustrated in a medical scenario.
To manage the inconsistency of possibilistic models, we have defined a
criterion of preference between possibilistic answer sets. Also, to manage the
non-existence of possibilistic answer sets of a possibilistic logic program
$P$, we have adopted the approach suggested by Nicolas et al. in [Nicolas et
al. (2006)] of cuts for achieving consistent subprograms of $P$.
In future work, there are several topics which will be explored. One of the
main topics to explore is to show that the possibilistic answer set semantics
can be characterized as a logic inference in terms of a possibilistic version
of the intuitionistic logic. This issue is motivated by the fact that the
answer set semantics inference can be characterized as a logic inference in terms of intuitionistic logic [Pearce (1999), Osorio et al. (2004)]. On the other hand, we have been exploring the definition of a possibilistic action language.
In [Nieves et al. (2007)], we have already defined our first ideas in the
context of the action language ${\mathcal{}A}$. Finally, we have started to
explore the definition of a possibilistic framework in order to define
preference between rules and preferences between atoms. With this objective in
mind, possibilistic ordered disjunction programs have been explored
[Confalonieri et al. (2010)].
## References
* Alsinet et al. (2008) Alsinet, T., Chesñevar, C. I., Godo, L., and Simari, G. R. 2008\. A logic programming framework for possibilistic argumentation: Formalization and logical properties. Fuzzy Sets and Systems 159, 10, 1208–1228.
* Alsinet and Godo (2000) Alsinet, T. and Godo, L. 2000\. A Complete Calculus for Possibilistic Logic Programming with Fuzzy Propositional Variables. In Proceedings of the Sixteen Conference on Uncertainty in Artificial Intelligence. ACM Press, 1-10.
* Alsinet and Godo (2002) Alsinet, T. and Godo, L. 2002\. Towards an automated deduction system for first-order possibilistic logic programming with fuzzy constants. Int. J. Intell. Syst. 17, 9, 887–924.
* Baldwin (1987) Baldwin, J. F. 1987\. Evidential support logic programming. Fuzzy Sets and Systems 24, 1 (October), 1–26.
* Baral (2003) Baral, C. 2003\. Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press, Cambridge.
* Baral et al. (2009) Baral, C., Gelfond, M., and Rushton, J. N. 2009\. Probabilistic reasoning with answer sets. TPLP 9, 1, 57–144.
* Brass and Dix (1997) Brass, S. and Dix, J. 1997\. Characterizations of the disjunctive stable semantics by partial evaluation. J. Log. Program. 32, 3, 207–228.
* Brass and Dix (1998) Brass, S. and Dix, J. 1998\. Characterizations of the disjunctive well-founded semantics: Confluent calculi and iterated gcwa. J. Autom. Reasoning 20, 1, 143–165.
* Brass and Dix (1999) Brass, S. and Dix, J. 1999\. Semantics of (Disjunctive) Logic Programs Based on Partial Evaluation. Journal of Logic Programming 38(3), 167–213.
* Bueno (2006) Bueno, O. 2006\. Knowledge and Inquiry : Essays on the Pragmatism of Isaac Levi. Cambridge Studies in Probability, Induction and Decision Theory. Cambridge University Press, Chapter Why Inconsistency Is Not Hell: Making Room for Inconsistency in Science, 70–86.
* Confalonieri et al. (2010) Confalonieri, R., Nieves, J. C., Osorio, M., and Vázquez-Salceda, J. 2010\. Possibilistic semantics for logic programs with ordered disjunction. In FoIKS. Lecture Notes in Computer Science, vol. 5956. Springer, 133–152.
* Davey and Priestly (2002) Davey, B. A. and Priestly, H. A. 2002\. Introduction to Lattices and Order, Second ed. Cambridge University Press.
* Dix (1995a) Dix, J. 1995a. A classification theory of semantics of normal logic programs: I. strong properties. Fundam. Inform. 22, 3, 227–255.
* Dix (1995b) Dix, J. 1995b. A classification theory of semantics of normal logic programs: II. weak properties. Fundam. Inform. 22, 3, 257–288.
* Dix et al. (2001) Dix, J., Osorio, M., and Zepeda, C. 2001\. A general theory of confluent rewriting systems for logic programming and its applications. Ann. Pure Appl. Logic 108, 1-3, 153–188.
* DLV (1996) DLV, S. 1996\. Vienna University of Technology. http://www.dbai.tuwien.ac.at/proj/dlv/.
* Dubois et al. (1991) Dubois, D., Lang, J., and Prade, H. 1991\. Towards possibilistic logic programming. In ICLP, K. Furukawa, Ed. The MIT Press, 581–595.
* Dubois et al. (1994) Dubois, D., Lang, J., and Prade, H. 1994\. Possibilistic logic. In Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 3: Nonmonotonic Reasoning and Uncertain Reasoning, D. Gabbay, C. J. Hogger, and J. A. Robinson, Eds. Oxford University Press, Oxford, 439–513.
* Dubois and Prade (2004) Dubois, D. and Prade, H. 2004\. Possibilistic logic: a retrospective and prospective view. Fuzzy Sets and Systems 144, 1, 3–23.
* Fitting (1991) Fitting, M. 1991\. Bilattices and the semantics of logic programming. Journal of Logic Programming 11, 1&2, 91–116.
* Fox and Das (2000) Fox, J. and Das, S. 2000\. Safe and Sound: Artificial Intelligence in Hazardous Applications. AAAI Press/ The MIT Press.
* Fox and Modgil (2006) Fox, J. and Modgil, S. 2006\. From arguments to decisions: Extending the Toulmin view. In Arguing on the Toulmin model: New essays on argument analysis and evaluation, D. Hitchcock and B. Verheij, Eds. Springer Netherlands, 273–287.
* Gelfond (2008) Gelfond, M. 2008\. Handbook of Knowledge Representation. Elsevier, Chapter Answer Sets, 285–316.
* Gelfond and Lifschitz (1988) Gelfond, M. and Lifschitz, V. 1988\. The Stable Model Semantics for Logic Programming. In 5th Conference on Logic Programming, R. Kowalski and K. Bowen, Eds. MIT Press, 1070–1080.
* Gelfond and Lifschitz (1991) Gelfond, M. and Lifschitz, V. 1991\. Classical Negation in Logic Programs and Disjunctive Databases. New Generation Computing 9, 365–385.
* Gergatsoulis et al. (2001) Gergatsoulis, M., Rondogiannis, P., and Panayiotopoulos, T. 2001\. Temporal disjunctive logic programming. New Generation Computing 19, 1, 87–100.
* Kern-Isberner and Lukasiewicz (2004) Kern-Isberner, G. and Lukasiewicz, T. 2004\. Combining probabilistic logic programming with the power of maximum entropy. Artificial Intelligence 157, 1-2, 139–202.
* Kifer and Subrahmanian (1992) Kifer, M. and Subrahmanian, V. S. 1992\. Theory of generalized annotated logic programming and its applications. J. Log. Program. 12, 3&4, 335–367.
* Lakshmanan (1994) Lakshmanan, L. V. S. 1994\. An epistemic foundation for logic programming with uncertainty. In FSTTCS. 89–100.
* López-Navidad and Caballero (2003) López-Navidad, A. and Caballero, F. 2003\. Extended criteria for organ acceptance: Strategies for achieving organ safety and for increasing organ pool. Clinical Transplantation, Blackwell Munksgaard 17, 308–324.
* López-Navidad et al. (1997) López-Navidad, A., Domingo, P., and Viedma, M. A. 1997\. Professional characteristics of the transplant coordinator. In XVI International Congress of the Transplantation Society. Transplantation Proceedings, vol. 29. Elsevier Science Inc, 1607–1613.
* Lukasiewicz (1998) Lukasiewicz, T. 1998\. Probabilistic logic programming. In ECAI. 388–392.
* Lukasiewicz (2001) Lukasiewicz, T. 2001\. Fixpoint characterizations for many-valued disjunctive logic programs with probabilistic semantics. In LPNMR. 336–350.
* Mateis (2000) Mateis, C. 2000\. Quantitative disjunctive logic programming: Semantics and computation. AI Commun. 13, 4, 225–248.
* Ng and Subrahmanian (1992) Ng, R. T. and Subrahmanian, V. S. 1992\. Probabilistic logic programming. Inf. Comput. 101, 2, 150–201.
* Nicolas et al. (2006) Nicolas, P., Garcia, L., Stéphan, I., and Lefèvre, C. 2006\. Possibilistic Uncertainty Handling for Answer Set Programming. Annals of Mathematics and Artificial Intelligence 47, 1-2 (June), 139–181.
* Nieves et al. (2007a) Nieves, J. C., Osorio, M., and Cortés, U. 2007a. Semantics for possibilistic disjunctive programs. In Ninth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR-07), C. Baral, G. Brewka, and J. Schlipf, Eds. Number 4483 in LNAI. Springer-Verlag, 315–320.
* Nieves et al. (2007b) Nieves, J. C., Osorio, M., and Cortés, U. 2007b. Semantics for possibilistic disjunctive programs. In Answer Set Programming: Advances in Theory and Implementation, S. Costantini and R. Watson, Eds. 271–284.
* Nieves et al. (2007) Nieves, J. C., Osorio, M., Cortés, U., Caballero, F., and López-Navidad, A. 2007\. Reasoning about actions under uncertainty: A possibilistic approach. In In proceedings of CCIA, C. Angulo and L. Godo, Eds.
* Osorio et al. (2004) Osorio, M., Navarro, J. A., and Arrazola, J. 2004\. Applications of Intuitionistic Logic in Answer Set Programming. Theory and Practice of Logic Programming (TPLP) 4, 3 (May), 225–354.
* Pearce (1999) Pearce, D. 1999\. Stable Inference as Intuitionistic Validity. Logic Programming 38, 79–91.
* Rodríguez-Artalejo and Romero-Díaz (2008) Rodríguez-Artalejo, M. and Romero-Díaz, C. A. 2008\. Quantitative Logic Programming revisited. In 9th International Symposium, FLOPS, J. Garrigue and M. Hermenegildo, Eds. LNCS, vol. 4989. Springer-Verlag Berlin Heidelberg, 272–288.
* Tarski (1955) Tarski, A. 1955\. A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics 5, 2, 285–309.
* Tversky and Kahneman (1982) Tversky, A. and Kahneman, D. 1982\. Judgment under uncertainty: Heuristics and biases. Cambridge University Press, Chapter Judgment under uncertainty: Heuristics and biases, 3–20.
* van Dalen (1994) van Dalen, D. 1994\. Logic and Structure, 3rd, augmented ed. Springer-Verlag, Berlin.
* van Emden (1986) van Emden, M. H. 1986\. Quantitative deduction and its fixpoint theory. Journal of Logic Programming 3, 1, 37–53.
* Van-Nieuwenborgh et al. (2007) Van-Nieuwenborgh, D., Cock, M. D., and Vermeir, D. 2007\. An introduction to fuzzy answer set programming. Ann. Math. Artif. Intell. 50, 3-4, 363–388.
## Appendix: Proofs
In this appendix, we give the proofs of some results presented in this paper.
Proposition 2 Let $P=\langle(\mathcal{Q},\leq),N\rangle$ be a possibilistic
disjunctive logic program and $M_{1},M_{2}\in\mathcal{PS}$ such that
$M_{1}^{*}=M_{2}^{*}$. If $P\Vvdash_{PL}M_{1}$ and $P\Vvdash_{PL}M_{2}$, then
$P\Vvdash_{PL}M_{1}\sqcup M_{2}$.
###### Proof 7.1.
The proof is straightforward.
Proposition 3 Let $P$ be a possibilistic disjunctive logic program. If $M$ is
a possibilistic answer set of $P$, then $M^{*}$ is an answer set of $P^{*}$.
###### Proof 7.2.
The proof is straightforward from the definition of possibilistic answer set.
Proposition 4 Let $P=\langle(\mathcal{Q},\leq),N\rangle$ be a possibilistic
disjunctive logic program and $\alpha$ be a fixed element of $\mathcal{Q}$.
If $\forall r\in P$, $n(r)=\alpha$ and $M^{\prime}$ is an answer set of
$P^{*}$, then $M:=\\{(a,\alpha)|a\in M^{\prime}\\}$ is a possibilistic answer
set of $P$.
###### Proof 7.3.
Let us introduce two observations:
1. 1.
We know that $\forall r\in P$, $n(r)=\alpha$; hence, if
$P\vdash_{PL}(a,\alpha^{\prime})$, then $\alpha^{\prime}=\alpha$. This
statement can be proved by contradiction.
2. 2.
Given a set of atoms $S$, if $\forall r\in P$, $n(r)=\alpha$, then $\forall
r\in P_{S}$, $n(r)=\alpha$. This statement follows from the fact that the
reduction $P_{S}$ (Definition 6) does not affect $n(r)$ for any clause $r$ in $P$.
As premises we know that $\forall r\in P$, $n(r)=\alpha$ and that $M^{\prime}$
is an answer set of $P^{*}$; let $M:=\\{(a,\alpha)|a\in M^{\prime}\\}$. We will
prove that $M$ is a possibilistic answer set of $P$.
If $M^{\prime}$ is an answer set of $P^{*}$, then $\forall a\in M^{\prime}$,
$\exists\alpha^{\prime}\in\mathcal{Q}$ such that
$P_{M^{\prime}}\vdash_{PL}(a,\alpha^{\prime})$. Therefore, by Observations 1
and 2, $\forall a\in M^{\prime}$, $P_{M^{\prime}}\vdash_{PL}(a,\alpha)$. Then
$P\Vvdash_{PL}M$. Observe that $M$ is a greatest set in $\mathcal{PS}$; hence,
since $P\Vvdash_{PL}M$ and $M$ is greatest, $M$ is a possibilistic answer set
of $P$.
Proposition 5 Let $P:=\langle(Q,\leq),N\rangle$ be a possibilistic normal
program such that $(Q,\leq)$ is a totally ordered set and $\mathcal{L}_{P}$
has no extended atoms. $M$ is a possibilistic answer set of $P$ if and only if
$M$ is a possibilistic stable model of $P$.
###### Proof 7.4.
(Sketch) It is not difficult to see that when $P$ is a possibilistic normal
program, then the syntactic reduction of Definition 6 and the syntactic
reduction of Definition 10 from [Nicolas et al. (2006)] coincide. Then the
proof reduces to possibilistic definite programs. But this case is
straightforward, since essentially GMP is applied to infer the possibilistic
models of the program in both approaches.
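To make the role of GMP in this reduction concrete, the following minimal Python sketch computes the least possibilistic model of a possibilistic _definite_ program over a totally ordered numerical scale, so that GLB and LUB reduce to min and max; the encoding of rules as triples is an assumption made only for this illustration, and the sketch does not cover the disjunctive case treated in this paper.

```python
def poss_definite_model(program):
    """Least possibilistic model of a possibilistic definite program.

    A rule is encoded as (necessity, head, body), with head an atom and body a
    list of atoms.  On a totally ordered numerical scale the GLB/LUB reduce to
    min/max, so GMP simply propagates min(rule necessity, body necessities) and
    keeps the best (maximal) value derived for each atom.
    """
    value = {}                      # atom -> best necessity derived so far
    changed = True
    while changed:
        changed = False
        for (alpha, head, body) in program:
            if all(b in value for b in body):
                derived = min([alpha] + [value[b] for b in body])
                if derived > value.get(head, 0.0):
                    value[head] = derived
                    changed = True
    return value


# Example: a holds with necessity 0.9, and b follows from a with necessity 0.7.
P = [(0.9, "a", []), (0.7, "b", ["a"])]
print(poss_definite_model(P))       # {'a': 0.9, 'b': 0.7}
```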
Proposition 6 Let $\mathcal{C}$ be a set of possibilistic disjunctions, and
$C=(c\;\alpha)$ be a possibilistic clause obtained by a finite number of
successive applications of _(R)_ to $\mathcal{C}$; then
$\mathcal{C}\vdash_{PL}C$.
###### Proof 7.5.
(The proof is similar to the proof of Proposition 3.8.2 of [Dubois et al.
(1994)].) Let us consider two possibilistic clauses
$C_{1}=(c_{1}\;\alpha_{1})$ and $C_{2}=(c_{2}\;\alpha_{2})$; the application
of (R) yields
$C^{\prime}=(R(c_{1},c_{2})\;\mathcal{GLB}(\\{\alpha_{1},\alpha_{2}\\}))$.
By classical logic, we know that $R(c_{1},c_{2})$ is sound; hence the key
point of the proof is to show that
$n(R(c_{1},c_{2}))\geq\mathcal{GLB}(\\{\alpha_{1},\alpha_{2}\\})$.
By the definition of a necessity-valued clause, $n(c_{1})\geq\alpha_{1}$ and
$n(c_{2})\geq\alpha_{2}$; then $n(c_{1}\wedge
c_{2})=\mathcal{GLB}(\\{n(c_{1}),n(c_{2})\\})\geq\mathcal{GLB}(\\{\alpha_{1},\alpha_{2}\\})$.
Since $c_{1}\wedge c_{2}\vdash_{C}R(c_{1},c_{2})$, then $n(R(c_{1},c_{2}))\geq
n(c_{1}\wedge c_{2})$ (because if $\varphi\vdash_{PL}\psi$ then $N(\psi)\geq
N(\varphi)$). Thus
$n(R(c_{1},c_{2}))\geq\mathcal{GLB}(\\{\alpha_{1},\alpha_{2}\\})$; therefore
(R) is sound. Then, by induction, any possibilistic formula inferred by a
finite number of successive applications of (R) to $\mathcal{C}$ is a logical
consequence of $\mathcal{C}$.
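The single resolution step used repeatedly in this proof can be illustrated by the following small Python sketch, where clauses are encoded as sets of signed literals and the certainty scale is numerical, so that $\mathcal{GLB}$ reduces to min; the data structures are assumptions made only for this example.

```python
def resolve(clause1, clause2, alpha1, alpha2):
    """One application of the possibilistic resolution rule (R).

    Clauses are frozensets of literals; a literal is a pair (atom, sign).
    Returns (resolvent, necessity) or None if no complementary pair exists.
    The necessity of the resolvent is GLB({alpha1, alpha2}); on a totally
    ordered numerical scale this is simply min(alpha1, alpha2).
    """
    for (atom, sign) in clause1:
        if (atom, not sign) in clause2:
            resolvent = (clause1 - {(atom, sign)}) | (clause2 - {(atom, not sign)})
            return frozenset(resolvent), min(alpha1, alpha2)
    return None


# Example: (a v b, 0.8) and (~b v c, 0.6) resolve to (a v c, 0.6).
c1 = frozenset({("a", True), ("b", True)})
c2 = frozenset({("b", False), ("c", True)})
print(resolve(c1, c2, 0.8, 0.6))
```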
Proposition 7 Let $P$ be a set of possibilistic clauses and $\mathcal{C}$ be
the set of possibilistic disjunctions obtained from $P$; then the valuation of
the optimal refutation by resolution from $\mathcal{C}$ is the inconsistency
degree of $P$.
###### Proof 7.6.
(The proof is similar to the proof of Proposition 3.8.3 of [Dubois et al.
(1994)].) By possibilistic logic, we know that
$\mathcal{C}\vdash_{PL}(\bot\;\alpha)$ if and only if
$(\mathcal{C}_{\alpha})^{*}$ is inconsistent in the sense of classical logic.
Since (R) is complete in classical logic, there exists a refutation
$R(\square)$ from $(\mathcal{C}_{\alpha})^{*}$. Thus, considering the
valuation of the refutation $R(\square)$, we obtain a refutation from
$\mathcal{C}_{\alpha}$ such that $n(R(\square))\geq\alpha$. Then
$n(R(\square))\geq Inc(\mathcal{C})$. Since (R) is sound,
$n(R(\square))$ cannot be strictly greater than $Inc(\mathcal{C})$. Thus
$n(R(\square))$ is equal to $Inc(\mathcal{C})$. According to Proposition
3.8.1 of [Dubois et al. (1994)], $Inc(\mathcal{C})=Inc(P)$; thus
$n(R(\square))$ is also equal to $Inc(P)$.
Proposition 8 Let $P:=\langle(\mathcal{Q},\leq),N\rangle$ be a possibilistic
logic program. The set Poss-ASP returned by $Poss\\_Answer\\_Sets(P)$ is the
set of all the possibilistic answer sets of $P$.
###### Proof 7.7.
The result follows from the following facts:
1. 1.
The function $ASP$ computes all the answer sets of $P^{*}$.
2. 2.
$M$ is a possibilistic answer set of $P$ if and only if $M^{*}$ is an answer
set of $P^{*}$ (Proposition 3).
3. 3.
By Corollary 1, we know that the possibilistic resolution rule $R$ is sound
and complete for computing optimal possibilistic degrees.
Proposition 9 Let $P$ be a possibilistic disjunctive logic program. If
$\Gamma_{0}:=\mathcal{T}(P)$ and $\Gamma_{i}:=\mathcal{T}(\Gamma_{i-1})$
such that $i\in\mathcal{N}$, then $\exists\;n\in\mathcal{N}$ such that
$\Gamma_{n}=\Gamma_{n-1}$. We denote $\Gamma_{n}$ by $\Pi(P)$.
###### Proof 7.8.
It is not difficult to see that the operator $\mathcal{T}$ is monotonic;
hence the proof follows directly from Tarski’s Lattice-Theoretical Fixpoint
Theorem [Tarski (1955)].
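The iteration $\Gamma_{0}:=\mathcal{T}(P)$, $\Gamma_{i}:=\mathcal{T}(\Gamma_{i-1})$ used in this proposition can be sketched generically as follows; the operator `T` below is a toy monotone operator on finite sets of atoms, not the operator $\mathcal{T}$ of this paper, and termination relies on monotonicity over a finite lattice exactly as in Tarski's theorem.

```python
def iterate_to_fixpoint(T, P):
    """Compute Gamma_0 = T(P), Gamma_i = T(Gamma_{i-1}) until Gamma_n = Gamma_{n-1}.

    T must be monotone on a finite lattice for termination to be guaranteed
    (Tarski's fixed-point theorem); finite sets of (possibilistic) atoms form
    such a lattice under inclusion.
    """
    gamma = T(P)
    while True:
        nxt = T(gamma)
        if nxt == gamma:            # reached Pi(P)
            return gamma
        gamma = nxt


# Toy monotone operator: closure of a set of atoms under the rule a -> b.
T = lambda s: frozenset(s) | ({"b"} if "a" in s else set())
print(iterate_to_fixpoint(T, frozenset({"a"})))   # frozenset({'a', 'b'})
```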
Proposition 10 Let $P$ be a possibilistic disjunctive logic program and $M$ a
set of possibilistic atoms. $M$ is a possibilistic answer set of $P$ if and
only if $M$ is a possibilistic-$\mathcal{T}$ answer set of $P$.
###### Proof 7.9.
Two observations:
1. 1.
By definition, it is straightforward that if $M_{1}$ is a possibilistic answer
set of $P$, then there exists a possibilistic-$\mathcal{T}$ answer set
$M_{2}$ of $P$ such that $M_{1}^{*}=M_{2}^{*}$, and vice versa.
2. 2.
Since G-GPPE can be regarded as a macro of the possibilistic rule $(R)$, we
can conclude by Proposition 6 that G-GPPE is sound.
Let $M_{1}$ be a possibilistic answer set of $P$ and $M_{2}$ be a
possibilistic-$\mathcal{T}$ answer set of $P$. By Observation 1, the central
point of the proof is to show that if $(a,\alpha_{1})\in M_{1}$ and
$(a,\alpha_{2})\in M_{2}$ with $M_{1}^{*}=M_{2}^{*}$, then
$\alpha_{1}=\alpha_{2}$.
The proof is by contradiction. Let us suppose that $(a,\alpha_{1})\in M_{1}$
and $(a,\alpha_{2})\in M_{2}$ such that $M_{1}^{*}=M_{2}^{*}$ and
$\alpha_{1}\neq\alpha_{2}$. Then there are two cases: $\alpha_{1}<\alpha_{2}$
or $\alpha_{1}>\alpha_{2}$.
$\alpha_{1}<\alpha_{2}$
: Since G-GPPE is sound (Observation 2), $\alpha_{1}$ is not the optimal
necessity value for the atom $a$, but this is false by Corollary 1.
$\alpha_{1}>\alpha_{2}$
: If $\alpha_{1}>\alpha_{2}$, then there exists a possibilistic clause
$\alpha_{1}:\mathcal{A}\leftarrow\mathcal{B}^{+}\in P^{(M_{1})^{*}}$ that
belongs to the optimal refutation of the atom $a$ and was not reduced by
G-GPPE. But this is false because G-GPPE is a macro of the resolution rule
(R).
# Super-Exponential Solution in Markovian Supermarket Models: Framework and
Challenge
Quan-Lin Li
School of Economics and Management Sciences
Yanshan University, Qinhuangdao 066004, China
(January 28, 2011)
###### Abstract
Marcel F. Neuts opened a key door in numerical computation of stochastic
models by means of phase-type (PH) distributions and Markovian arrival
processes (MAPs). To celebrate his 75th birthday, this paper reports a more
general framework of Markovian supermarket models, including a system of
differential equations for the fraction measure and a system of nonlinear
equations for the fixed point. To understand this framework heuristically,
this paper gives a detailed analysis for three important supermarket examples:
M/G/1 type, GI/M/1 type and multiple choices, explains how to derive the
system of differential equations by means of density-dependent jump Markov
processes, and shows that the fixed point may be simply super-exponential
through solving the system of nonlinear equations. Note that supermarket
models are a class of complicated queueing systems and their analysis can not
apply popular queueing theory, it is necessary in the study of supermarket
models to summarize such a more general framework which enables us to focus on
important research issues. On this line, this paper develops matrix-analytical
methods of Markovian supermarket models. We hope this will be able to open a
new avenue in performance evaluation of supermarket models by means of matrix-
analytical methods.
Keywords: Randomized load balancing, supermarket model, matrix-analytic
method, super-exponential solution, density-dependent jump Markov process,
Batch Markovian Arrival Process (BMAP), phase-type (PH) distribution, fixed
point.
## 1 Introduction
In the study of Markovian supermarket models, this paper proposes a more
general framework including a system of differential equations for the
fraction measure and a system of nonlinear equations for the fixed point, and
the both systems of equations enable us to focus on important research issues
of Markovian supermarket models. At the same time, this paper indicates that
it is difficult and challenging to analyze the system of differential
equations and to solve the system of nonlinear equations from four key
directions: Existence of solution, uniqueness of solution, stability of
solution and effective algorithms. Since there is a large gap to provide a
complete solution to the both systems of equations, this paper devotes
heuristic understanding of how to organize and solve the both systems of
equations by means of discussing three important supermarket examples: M/G/1
type, GI/M/1 type and multiple choices. Specifically, the supermarket examples
show a key result that the fixed point can be super-exponential for more
supermarket models. Note that supermarket models are a class of complicated
queueing systems and their analysis can not apply popular queueing theory,
while recent research gave some simple and beautiful results for special
supermarket models, e.g., see Mitzenmacher [19], Li and Lui [11] and Luczak
and McDiarmid [14], this motivates us in this paper to summarize a more
general framework in order to develop matrix-analytical methods of Markovian
supermarket models. We hope this is able to open a new avenue for performance
evaluation of supermarket models by means of matrix-analytical methods.
Recently, a number of companies, such as Amazon and Google, have been offering
cloud computing services and cloud manufacturing technologies. This motivates
us in this paper to study randomized load balancing for large-scale networks
with many computational and manufacturing resources. Randomized load balancing,
where a job is assigned to a server from a small subset of randomly chosen
servers, is very simple to implement. It can surprisingly deliver better
performance (for example, reducing collisions, waiting times and backlogs) in a
number of applications including data centers, distributed memory machines,
path selection in computer networks, and task assignment at web servers.
Supermarket models are extensively used to study randomized load balancing
schemes. In the past ten years, supermarket models have been studied by
queueing theory as well as Markov processes. Since the study of supermarket
models cannot rely on popular queueing theory, they have not been extensively
studied in the queueing community up to now; as a result, only a few queueing
results are available for supermarket models. Some recent works dealt with the
supermarket model with Poisson arrivals and exponential service times by means
of density-dependent jump Markov processes, discussed the limiting behavior of
the supermarket model in a weak-convergence setting when the population size
goes to infinity, and indicated that there
exists a doubly exponential solution to the fixed point through solving the
system of nonlinear equations. Readers may refer to population processes by
Kurtz [8], and doubly exponential solution with exponential improvement by
Vvedenskaya, Dobrushin and Karpelevich [27], Mitzenmacher [19], Li and Lui
[11] and Luczak and McDiarmid [14].
Certain generalizations of supermarket models have been explored in, for
example, studying simple variations by Vvedenskaya and Suhov [28],
Mitzenmacher [20], Azar, Broder, Karlin and Upfal [1], Vöcking [26],
Mitzenmacher, Richa, and Sitaraman [22] and Li, Lui and Wang [13]; considering
non-Poisson arrivals or non-exponential service times by Li, Lui and Wang
[12], Li and Lui [11], Bramson, Lu and Prabhakar [2] and Li [10]; discussing
load information by Mirchandaney, Towsley, and Stankovic [23], Dahlin [3] and
Mitzenmacher [21]; mathematical analysis by Graham [4, 5, 6], Luczak and
Norris [16] and Luczak and McDiarmid [14, 15]; using fast Jackson networks by
Martin and Suhov [18], Martin [17] and Suhov and Vvedenskaya [25].
The main contributions of the paper are twofold. The first one is to propose a
more general framework for Markovian supermarket models. This framework
contains a system of differential equations for the fraction measure and a
system of nonlinear equations for the fixed point. It is indicated that there
exist more difficulties and challenges in dealing with the system of
differential equations and in solving the system of nonlinear equations
because of two key factors: infinite dimension and the complicated structure
of the nonlinear equations. Since there is still a large gap before both
systems of equations can be dealt with systematically, the second contribution
of this paper is to analyze three important supermarket examples: M/G/1 type,
GI/M/1 type and multiple choices. These examples provide the necessary
understanding and heuristic methods in order to discuss both systems of
equations from a practical and more general point of view. For the supermarket
examples, this paper derives the systems of differential equations for the
fraction measure by means of density-dependent jump Markov processes, and
illustrates that the fixed points may be super-exponential through solving the
systems of nonlinear equations by means of matrix-analytic methods.
The remainder of this paper is organized as follows. Section 2 proposes a more
general framework for Markovian supermarket models. This framework contains a
system of differential equations for the fraction measure and a system of
nonlinear equations for the fixed point. In Sections 3 and 4, we consider a
supermarket model of M/G/1 type by means of BMAPs and a supermarket model of
GI/M/1 type in terms of batch PH service processes, respectively. For the both
supermarket models, we derive the systems of differential equations satisfied
by the fraction measure in terms of density-dependent jump Markov processes,
and obtain the system of nonlinear equations satisfied by the fixed point
which is shown to be super-exponential. In Section 5, we analyze two
supermarket models with multiple choice numbers, and give super-exponential
solution to the fixed points for the two supermarket models. Note that the
supermarket examples discussed in Sections 3 to 5 can provide a heuristic
understanding for the more general framework of Markovian supermarket model
given in Section 2.
## 2 Markovian Supermarket Models
In this section, we propose a more general framework for Markovian supermarket
models. This framework contains a system of differential equations for the
fraction measure and a system of nonlinear equations for the fixed point.
Recent research, e.g., see Mitzenmacher [20] and Li and Lui [11], shows that a
Markovian supermarket model contains two important factors:
(1) Continuous-time Markov chain $Q$, called stochastic environment of the
supermarket model; and
(2) Choice numbers, including input choice numbers $d_{1},d_{2},\ldots,d_{v}$
and output choice numbers $f_{1},f_{2},\ldots,f_{w}$. Note that the choice
numbers determine the decomposed structure of the stochastic environment $Q$.
We first analyze the stochastic environment of the Markovian supermarket model.
From the point of view of stochastic models, we take a more general stochastic
environment, which is a continuous-time Markov chain $\left\\{X_{t},t\geq
0\right\\}$ with block structure. We assume that the Markov chain
$\left\\{X_{t},t\geq 0\right\\}$ on state space
$\Omega=\left\\{\left(k,j\right):k\geq 0,1\leq j\leq m_{k}\right\\}$ is
irreducible and positive recurrent, and that its infinitesimal generator is
given by
$Q=\left(\begin{array}[c]{cccc}Q_{0,0}&Q_{0,1}&Q_{0,2}&\cdots\\\
Q_{1,0}&Q_{1,1}&Q_{1,2}&\cdots\\\ Q_{2,0}&Q_{2,1}&Q_{2,2}&\cdots\\\
\vdots&\vdots&\vdots&\ddots\end{array}\right),$ (1)
where $Q_{i,j}$ is a matrix of size $m_{i}\times m_{j}$ whose
$\left(r,r^{\prime}\right)$th entry is the transition rate of the Markov chain
from state $\left(i,r\right)$ to state $\left(j,r^{\prime}\right)$. It is
well-known that $Q_{i,j}\geq 0$ for $i\neq j$, $Q_{i,i}$ is invertible with
strictly negative diagonal entries and nonnegative off-diagonal entries. For
state $\left(i,k\right)$, $i$ is called the level variable and $k$ the phase
variable. We write level $i$ as $L_{i}=\left\\{\left(i,k\right):1\leq k\leq
m_{i}\right\\}$.
Since the Markov chain is irreducible, for each level $i$ there must exist at
least one left-block state transition: $\leftarrow$ level $i$ or level
$i\leftarrow$, and at least one right-block state transition: $\rightarrow$
level $i$ or level $i\rightarrow$. We write
$E_{\text{left}}=\left\\{\leftarrow\text{level }i\text{ or level
}i\leftarrow:\text{level }i\in\Omega\right\\}$
and
$E_{\text{right}}=\left\\{\rightarrow\text{level }i\text{ or level
}i\rightarrow:\text{level }i\in\Omega\right\\}.$
Note that $E_{\text{left}}$ and $E_{\text{right}}$ describe output and input
processes in the supermarket model. Based on the two block-transition sets
$E_{\text{left}}$ and $E_{\text{right}}$, we write
$Q=Q_{\text{left}}+Q_{\text{right}},$ (2)
and for $i\geq 0$
$Q_{i,i}=Q_{i}^{\text{left}}+Q_{i}^{\text{right}}.$
Thus we have
$Q_{\text{left}}=\left(\begin{array}[c]{cccc}Q_{0}^{\text{left}}&&&\\\
Q_{1,0}&Q_{1}^{\text{left}}&&\\\ Q_{2,0}&Q_{2,1}&Q_{2}^{\text{left}}&\\\
\vdots&\vdots&\vdots&\ddots\end{array}\right)$
and
$Q_{\text{right}}=\left(\begin{array}[c]{cccc}Q_{0}^{\text{right}}&Q_{0,1}&Q_{0,2}&\cdots\\\
&Q_{1}^{\text{right}}&Q_{1,2}&\cdots\\\ &&Q_{2}^{\text{right}}&\cdots\\\
&&&\ddots\end{array}\right).$
Note that $Qe=0,Q_{\text{left}}e=0$ and $Q_{\text{right}}e=0$, where $e$ is a
column vector of ones with a suitable dimension in the context. We assume that
the matrices $Q_{j}^{\text{left}}$ for $j\geq 1$ and $Q_{i}^{\text{right}}$
for $i\geq 0$ are all invertible, while $Q_{0}^{\text{left}}$ is possibly
singular if there is no output process at level $0$. We call
$Q=Q_{\text{left}}+Q_{\text{right}}$ an input-output rate decomposition of the
Markovian supermarket model.
Now, we provide a choice decomposition of the Markovian supermarket model
through decomposing the two matrices $Q_{\text{left}}$ and $Q_{\text{right}}$.
Note that the choice decomposition is based on the input choice numbers
$d_{1},d_{2},\ldots,d_{v}$ and the output choice numbers
$f_{1},f_{2},\ldots,f_{w}$. We write
$Q_{\text{left}}=Q_{\text{left}}\left(f_{1}\right)+Q_{\text{left}}\left(f_{2}\right)+\cdots+Q_{\text{left}}\left(f_{w}\right)$
(3)
for the output choice numbers $f_{1},f_{2},\ldots,f_{w}$, and
$Q_{\text{right}}=Q_{\text{right}}\left(d_{1}\right)+Q_{\text{right}}\left(d_{2}\right)+\cdots+Q_{\text{right}}\left(d_{v}\right)$
(4)
for the input choice numbers $d_{1},d_{2},\ldots,d_{v}$.
To study the Markovian supermarket model, we need to introduce two pieces of
vector notation. For a vector $a=\left(a_{1},a_{2},a_{3},\ldots\right)$, we write
$a^{\odot d}=\left(a_{1}^{d},a_{2}^{d},a_{3}^{d},\ldots\right)$
and
$a^{\odot\frac{1}{d}}=\left(a_{1}^{\frac{1}{d}},a_{2}^{\frac{1}{d}},a_{3}^{\frac{1}{d}},\ldots\right).$
Let
$S\left(t\right)=\left(S_{0}\left(t\right),S_{1}\left(t\right),S_{2}\left(t\right),\ldots\right)$
be the fraction measure of the Markovian supermarket model, where
$S_{i}\left(t\right)$ is a row vector of size $m_{i}$ for $i\geq 0$. Then
$S\left(t\right)\geq 0$ and $S_{0}\left(t\right)e=1$. Based on the input-
output rate decomposition and the choice decomposition for the stochastic
environment, we introduce the following system of differential equations
satisfied by the fraction measure $S\left(t\right)$:
$S_{0}\left(t\right)\geq 0\text{ and }S_{0}\left(t\right)e=1,$ (5)
and
$\frac{\text{d}}{\text{d}t}S\left(t\right)=\sum_{l=1}^{w}S^{\odot
f_{l}}\left(t\right)Q_{\text{left}}\left(f_{l}\right)+\sum_{k=1}^{v}S^{\odot
d_{k}}\left(t\right)Q_{\text{right}}\left(d_{k}\right).$ (6)
In the Markovian supermarket model, a row vector
$\pi=\left(\pi_{0},\pi_{1},\pi_{2},\ldots\right)$ is called a fixed point of
the fraction measure $S\left(t\right)$ if
$\lim_{t\rightarrow+\infty}S\left(t\right)=\pi$. In this case, it is easy to
see that
$\lim_{t\rightarrow+\infty}\left[\frac{\mathtt{d}}{\text{d}t}S\left(t\right)\right]=0.$
If there exists a fixed point of the fraction measure, then it follows from
(5) and (6) that the fixed point is a nonnegative non-zero solution to the
following system of nonlinear equations
$\pi_{0}\geq 0\text{ and }\pi_{0}e=1,$ (7)
and
$\sum_{l=1}^{w}\pi^{\odot
f_{l}}Q_{\text{left}}\left(f_{l}\right)+\sum_{k=1}^{v}\pi^{\odot
d_{k}}Q_{\text{right}}\left(d_{k}\right)=0.$ (8)
###### Remark 1
If $d_{k}=1$ for $1\leq k\leq v$ and $f_{l}=1$ for $1\leq l\leq w$, then the
system of differential equations (5) and (6) is given by
$S_{0}\left(t\right)\geq 0\text{ and }S_{0}\left(t\right)e=1,$
and
$\frac{\text{d}}{\text{d}t}S\left(t\right)=S\left(t\right)Q.$
Thus we obtain
$S\left(t\right)=cS\left(0\right)\exp\left\\{Qt\right\\}.$
Let
$W\left(t\right)=\left(W_{0}\left(t\right),W_{1}\left(t\right),W_{2}\left(t\right),\ldots\right)=S\left(0\right)\exp\left\\{Qt\right\\},$
where $W_{i}\left(t\right)$ is a row vector of size $m_{i}$ for $i\geq 0$.
Then $S\left(t\right)=cW\left(t\right),$ where $c=1/W_{0}\left(t\right)e$. At
the same time, the system of nonlinear equations (7) and (8) is given by
$\pi_{0}\geq 0\text{ and }\pi_{0}e=1,$
and
$\pi Q=0.$
Let $W=\left(W_{0},W_{1},W_{2},\ldots\right)$ be the stationary probability
vector of the Markov chain $Q$, where $W_{i}$ is a row vector of size $m_{i}$
for $i\geq 0$. Then $\pi=cW$, where $c=1/W_{0}e$. Note that the stationary
probability vector $W$ of the block-structured Markov chain $Q$ is given a
detailed analysis in Chapter 2 of Li [9] by means of the RG-factorizations.
If there exist some $d_{k}\geq 2$ and/or $f_{l}\geq 2$ in the Markovian
supermarket model, then the system of differential equations (5) and (6) and
the system of nonlinear equations (7) and (8) are two decomposed power-form
generalizations of the transient solution and of the stationary probability of
an irreducible continuous-time Markov chain with block structure (see Chapters
2 and 8 of Li [9]). Note that Li [9] can deal with the transient solution and
the stationary probability of an irreducible block-structured Markov chain,
where the RG-factorizations play a key role. However, the RG-factorizations do
not carry over to Markovian supermarket models with some $d_{k}\geq 2$ and/or
$f_{l}\geq 2$. Therefore, there exist more difficulties and challenges in
studying the system of differential equations (5) and (6) and the system of
nonlinear equations (7) and (8). Specifically, four important issues remain
open: existence of solution, uniqueness of solution, stability of solution and
effective algorithms. This is similar to the research on these four issues for
irreducible continuous-time Markov chains with block structure.
In the remainder of this paper, we will study three important Markovian
supermarket examples: M/G/1 type, GI/M/1 type and multiple choices. Our
purpose is to provide a heuristic understanding of how to set up and solve the
system of differential equations (5) and (6), and the system of nonlinear
equations (7) and (8).
## 3 A Supermarket Model of M/G/1 Type
In this section, we consider a supermarket model with a BMAP and exponential
service times. Since the stochastic environment is a Markov chain of M/G/1
type, the supermarket model is said to be of M/G/1 type. For the supermarket
model of M/G/1 type, we set up the system of differential equations for the
fraction measure by means of density-dependent jump Markov processes, and
derive the system of nonlinear equations satisfied by the fixed point, which
is shown to have a super-exponential solution.
The supermarket model of M/G/1 type is described as follows. Customers arrive
at a queueing system of $n>1$ servers as a BMAP with irreducible matrix
descriptor $\left(nC,nD_{1},nD_{2},nD_{3},\ldots\right)$ of size $m$, where
the matrix $C$ is invertible and has strictly negative diagonal entries and
nonnegative off-diagonal entries; $D_{k}\geq 0$ is the arrival rate matrix
with batch size $k$ for $k\geq 1$. We assume that $\sum_{k=1}^{\infty}kD_{k}$
is finite and that $C+\sum_{k=1}^{\infty}D_{k}$ is an irreducible
infinitesimal generator with $\left(C+\sum_{k=1}^{\infty}D_{k}\right)e=0$. Let
$\gamma$ be the stationary probability vector of the irreducible Markov chain
$C+\sum_{k=1}^{\infty}D_{k}$. Then the stationary arrival rate of the BMAP is
given by $n\lambda=n\gamma\sum_{k=1}^{\infty}kD_{k}e$. The service times of
each customer are exponentially distributed with service rate $\mu$. Each
batch of arriving customers chooses $d\geq 1$ servers independently and
uniformly at random from the $n$ servers, and joins for service the server
that currently has the fewest customers. If there is a tie, one of the servers
with the fewest customers is chosen at random. All customers in every server
are served in the first-come-first-served (FCFS) manner. We assume that all
the random variables defined above are independent of each other and that this
system is operating in the region $\rho=\lambda/\mu<1$. Clearly, $d$ is an
input choice number in this supermarket model. Figure 1 depicts an
illustration of supermarket models of M/G/1 type.
Figure 1: A supermarket model of M/G/1 type
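As a small numerical illustration of the quantities $\gamma$ and $\lambda$ defined above, the following Python/NumPy sketch computes the stationary probability vector of $C+\sum_{k}D_{k}$ and the stationary arrival rate per server for a two-phase BMAP with batch sizes 1 and 2; the matrices are invented for this example only.

```python
import numpy as np

# A toy two-phase BMAP descriptor (C, D1, D2); these matrices are invented for this example.
C  = np.array([[-3.0, 1.0],
               [ 1.0, -4.0]])
D1 = np.array([[ 1.0, 0.5],
               [ 1.0, 1.0]])
D2 = np.array([[ 0.3, 0.2],
               [ 0.5, 0.5]])
Ds = [D1, D2]

A = C + sum(Ds)                      # generator of the phase process
# Stationary vector gamma: gamma A = 0, gamma e = 1.
M = np.vstack([A.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
gamma, *_ = np.linalg.lstsq(M, b, rcond=None)

lam = gamma @ sum(k * Dk for k, Dk in enumerate(Ds, start=1)) @ np.ones(2)
print("gamma =", gamma, " lambda =", lam)   # stationary arrival rate per server
```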
The supermarket model with a BMAP and exponential service times is stable if
$\rho=\lambda/\mu<1$. The proof can be given by a simple comparison argument
with the queueing system in which each customer queues at a random server
(i.e., where $d=1$). When $d=1$, each server acts like a BMAP/M/1 queue, which
is stable if $\rho=\lambda/\mu<1$; see Chapter 5 of Neuts [24]. Similar to
analysis in Winston [30] and Weber [29], the comparison argument leads to two
useful results: (1) the shortest queue is optimal due to the assumptions on a
BMAP and exponential service times in the supermarket model; and (2) the size
of the longest queue in the supermarket model is stochastically dominated by
the size of the longest queue in a set of $n$ independent BMAP/M/1 queues.
We define $n_{k}^{\left(i\right)}\left(t\right)$ as the number of queues with
at least $k$ customers, including customers in service, and with the BMAP in
phase $i$ at time $t\geq 0$. Clearly, $0\leq
n_{k}^{\left(i\right)}\left(t\right)\leq n$ for $k\geq 0$ and $1\leq i\leq m$.
Let
$x_{n}^{\left(i\right)}\left(k,t\right)=\frac{n_{k}^{\left(i\right)}\left(t\right)}{n},$
which is the fraction of queues with at least $k$ customers and the BMAP in
phase $i$ at time $t\geq 0$ for $k\geq 0$. We write
$X_{n}\left(k,t\right)=\left(x_{n}^{\left(1\right)}\left(k,t\right),x_{n}^{\left(2\right)}\left(k,t\right),\ldots,x_{n}^{\left(m\right)}\left(k,t\right)\right)$
for $k\geq 0$, and
$X_{n}\left(t\right)=\left(X_{n}\left(0,t\right),X_{n}\left(1,t\right),X_{n}\left(2,t\right),\ldots\right).$
The state of the supermarket model may be described by the vector
$X_{n}\left(t\right)$ for $t\geq 0$. Since the arrival process to the queueing
system is a BMAP and the service time of each customer is exponential, the
stochastic process $\left\\{X_{n}\left(t\right),t\geq 0\right\\}$ is a Markov
process whose state space is given by
$\Omega_{n}=\\{\left(g_{n}^{\left(0\right)},g_{n}^{\left(1\right)},g_{n}^{\left(2\right)},\ldots\right):g_{n}^{\left(0\right)}\text{ is a probability vector, }g_{n}^{\left(k\right)}\geq g_{n}^{\left(k+1\right)}\geq 0\text{ for }k\geq 1,\text{ and }ng_{n}^{\left(l\right)}\text{ is a vector of nonnegative integers for }l\geq 0\\}.$
Let
$s_{k}^{\left(i\right)}\left(n,t\right)=E\left[x_{k}^{\left(i\right)}\left(n,t\right)\right],$
and
$S_{k}\left(n,t\right)=\left(s_{k}^{\left(1\right)}\left(n,t\right),s_{k}^{\left(2\right)}\left(n,t\right),\ldots,s_{k}^{\left(m\right)}\left(n,t\right)\right)$
for $k\geq 0,$
$S\left(n,t\right)=\left(S_{0}\left(n,t\right),S_{1}\left(n,t\right),S_{2}\left(n,t\right),\ldots\right).$
As shown in Martin and Suhov [18] and Luczak and McDiarmid [14], the Markov
process $\left\\{X_{n}\left(t\right),t\geq 0\right\\}$ is asymptotically
deterministic as $n\rightarrow\infty$. Thus
$\lim_{n\rightarrow\infty}E\left[x_{k}^{\left(i\right)}\left(n,t\right)\right]$
always exist by means of the law of large numbers for $k\geq 0$. Based on
this, we write
$S_{k}\left(t\right)=\lim_{n\rightarrow\infty}S_{k}\left(n,t\right)$
for $k\geq 0$, and
$S\left(t\right)=\left(S_{0}\left(t\right),S_{1}\left(t\right),S_{2}\left(t\right),\ldots\right).$
Let $X\left(t\right)=\lim_{n\rightarrow\infty}X_{n}\left(t\right)$. Then it is
easy to see from the BMAP and the exponential service times that
$\left\\{X\left(t\right),t\geq 0\right\\}$ is also a Markov process whose
state space is given by
$\Omega=\left\\{\left(g^{\left(0\right)},g^{\left(1\right)},g^{\left(2\right)},\ldots\right):g^{\left(0\right)}\text{
is a probability vector},g^{\left(k\right)}\geq g^{\left(k+1\right)}\geq
0\text{ for }k\geq 1\right\\}.$
If the initial distribution of the Markov process
$\left\\{X_{n}\left(t\right),t\geq 0\right\\}$ approaches the Dirac delta-
measure concentrated at a point $g\in$ $\Omega$, then
$X\left(t\right)=\lim_{n\rightarrow\infty}X_{n}\left(t\right)$ is concentrated
on the trajectory $S_{g}=\left\\{S\left(t\right):t\geq 0\right\\}$. This
indicates a law of large numbers for the time evolution of the fraction of
queues of different lengths. Furthermore, the Markov process
$\left\\{X_{n}\left(t\right),t\geq 0\right\\}$ converges weakly to the
fraction vector
$S\left(t\right)=\left(S_{0}\left(t\right),S_{1}\left(t\right),S_{2}\left(t\right),\ldots\right)$
as $n\rightarrow\infty$, or for a sufficiently small $\varepsilon>0$,
$\lim_{n\rightarrow\infty}P\left\\{||X_{n}\left(t\right)-S\left(t\right)||\geq\varepsilon\right\\}=0,$
where $||a||$ is the $L_{\infty}$-norm of vector $a$.
In what follows we set up a system of differential vector equations satisfied
by the fraction vector $S\left(t\right)$ by means of density-dependent jump
Markov processes.
We first provide an example to indicate how to derive the differential vector
equations. Consider the supermarket model with $n$ servers, and determine the
expected change in the number of queues with at least $k$ customers over a
small time interval $[0,dt)$. The probability vector that an arriving customer
joins a queue with $k-1$ customers in this time interval is given by
$\left[S_{0}^{\odot d}\left(n,t\right)D_{k}+S_{1}^{\odot
d}\left(n,t\right)D_{k-1}+\cdots+S_{k-1}^{\odot
d}\left(n,t\right)D_{1}+S_{k}^{\odot d}\left(n,t\right)C\right]\cdot
n\text{d}t,$
since each arriving customer chooses $d$ servers independently and uniformly
at random from the $n$ servers, and waits for service at the server which
currently contains the fewest number of customers. Similarly, the probability
vector that a customer leaves a server queued by $k$ customers in this time
interval is given by
$\left[-\mu S_{k}\left(n,t\right)+\mu S_{k+1}\left(n,t\right)\right]\cdot
n\text{d}t.$
Therefore, we obtain
$\displaystyle\text{d}E\left[n_{k}\left(n,t\right)\right]=$
$\displaystyle\left[\sum_{l=0}^{k-1}S_{l}^{\odot
d}\left(n,t\right)D_{k-l}+S_{k}^{\odot d}\left(n,t\right)C\right]\cdot
n\text{d}t$ $\displaystyle+\left[-\mu S_{k}\left(n,t\right)+\mu
S_{k+1}\left(n,t\right)\right]\cdot n\text{d}t.$
This leads to
$\frac{\text{d}S_{k}\left(n,t\right)}{\text{d}t}=\sum_{l=0}^{k-1}S_{l}^{\odot d}\left(n,t\right)D_{k-l}+S_{k}^{\odot d}\left(n,t\right)C-\mu S_{k}\left(n,t\right)+\mu S_{k+1}\left(n,t\right).$ (9)
Since
$\lim_{n\rightarrow\infty}E\left[x_{k}^{\left(i\right)}\left(n,t\right)\right]$
always exists for $k\geq 0$, taking $n\rightarrow\infty$ in both sides of
Equation (9) we can easily obtain
$\frac{\text{d}S_{k}\left(t\right)}{\text{d}t}=\sum_{l=0}^{k-1}S_{l}^{\odot
d}\left(t\right)D_{k-l}+S_{k}^{\odot d}\left(t\right)C-\mu
S_{k}\left(t\right)+\mu S_{k+1}\left(t\right).$ (10)
Using a similar analysis to that in Equation (10), we obtain the system of
differential vector equations for the fraction vector
$S\left(t\right)=\left(S_{0}\left(t\right),S_{1}\left(t\right),\ldots\right)$
as follows:
$S_{0}\left(t\right)\geq 0,S_{0}\left(t\right)e=1,$ (11)
$\frac{\mathtt{d}}{\text{d}t}S_{0}\left(t\right)=S_{0}^{\odot
d}\left(t\right)C+\mu S_{1}\left(t\right)$ (12)
and for $k\geq 1$
$\frac{\mathtt{d}}{\text{d}t}S_{k}\left(t\right)=\sum_{l=0}^{k-1}S_{l}^{\odot d}\left(t\right)D_{k-l}+S_{k}^{\odot d}\left(t\right)C-\mu S_{k}\left(t\right)+\mu S_{k+1}\left(t\right).$ (13)
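A minimal numerical sketch of how a truncated version of the system (11)-(13) can be integrated is given below (forward Euler in time, levels truncated at $K$, batch sizes truncated at the matrices actually supplied); the truncation, the step size and the projection of $S_{0}(t)$ back onto the probability simplex are assumptions of this sketch rather than part of the model.

```python
import numpy as np

def integrate_mg1_supermarket(C, Ds, mu, d, S0, K=30, dt=1e-3, T=50.0):
    """Forward-Euler integration of a truncated version of (11)-(13).

    S is a (K+1) x m array whose row k approximates S_k(t); levels above K and
    batch sizes beyond len(Ds) are simply truncated, which is an approximation.
    """
    m = C.shape[0]
    S = np.zeros((K + 1, m))
    S[0] = S0                                    # S_0(0) is a probability vector
    for _ in range(int(T / dt)):
        dS = np.zeros_like(S)
        dS[0] = (S[0] ** d) @ C + mu * S[1]      # equation (12)
        for k in range(1, K):
            # sum_l S_l^{odot d}(t) D_{k-l}, restricted to the supplied D_1..D_L
            arrivals = sum((S[l] ** d) @ Ds[k - l - 1]
                           for l in range(max(0, k - len(Ds)), k))
            dS[k] = arrivals + (S[k] ** d) @ C - mu * S[k] + mu * S[k + 1]   # (13)
        S += dt * dS
        S[0] = np.clip(S[0], 0.0, None)
        S[0] /= S[0].sum()                       # project back so that (11) holds
    return S


# Example: one phase (Poisson arrivals at rate 1), service rate 2, d = 2 choices.
S = integrate_mg1_supermarket(np.array([[-1.0]]), [np.array([[1.0]])],
                              mu=2.0, d=2, S0=np.array([1.0]))
print(S[:4, 0])    # approximate S_0, ..., S_3 at time T (close to 1, 0.5, 0.125, ...)
```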
Let $\pi$ be the fixed point. Then $\pi$ satisfies the following system of
nonlinear equations
$\pi_{0}\geq 0,\pi_{0}e=1,$ (14) $\pi_{0}^{\odot d}C+\mu\pi_{1}=0$ (15)
and for $k\geq 1$,
$\sum_{l=0}^{k-1}\pi_{l}^{\odot d}D_{k-l}+\pi_{k}^{\odot
d}C-\mu\pi_{k}+\mu\pi_{k+1}=0.$ (16)
Let
$Q_{\text{right}}=\left(\begin{array}[c]{cccccc}C&D_{1}&D_{2}&D_{3}&D_{4}&\cdots\\\
&C&D_{1}&D_{2}&D_{3}&\cdots\\\ &&C&D_{1}&D_{2}&\cdots\\\ &&&C&D_{1}&\cdots\\\
&&&&\ddots&\end{array}\right)$
and
$Q_{\text{left}}=\left(\begin{array}[c]{ccccc}0&&&&\\\ \mu I&-\mu I&&&\\\ &\mu
I&-\mu I&&\\\ &&\mu I&-\mu I&\\\ &&&\ddots&\ddots\end{array}\right).$
Then the system of differential vector equations is given by
$S_{0}\left(t\right)\geq 0,S_{0}\left(t\right)e=1,$
and
$\frac{\text{d}}{\text{d}t}S\left(t\right)=S^{\odot
d}\left(t\right)Q_{\text{right}}+S\left(t\right)Q_{\text{left}};$
and the system of nonlinear equations (14) to (16) is given by
$\pi_{0}\geq 0,\pi_{0}e=1,$
and
$\pi^{\odot d}Q_{\text{right}}+\pi Q_{\text{left}}=0.$
###### Remark 2
For the supermarket model with a BMAP and exponential service times, its
stochastic environment is a Markov chain of M/G/1 type whose infinitesimal
generator is given by $Q=Q_{\text{left}}+Q_{\text{right}}$. This example
clearly indicates how to set up the system of differential equations (5) and
(6) for the fraction measure and the system of nonlinear equations (7) and (8)
for the fixed point.
In the remainder of this section, we provide a super-exponential solution to
the fixed point $\pi$ by means of some useful relations among the vectors
$\pi_{k}$ for $k\geq 0$.
It follows from (16) that
$\displaystyle\left(\pi_{1}^{\odot d},\pi_{2}^{\odot d},\pi_{3}^{\odot
d},\ldots\right)\left(\begin{array}[c]{cccc}C&D_{1}&D_{2}&\cdots\\\
&C&D_{1}&\cdots\\\ &&C&\cdots\\\
&&&\ddots\end{array}\right)+\left(\pi_{1},\pi_{2},\pi_{3},\ldots\right)\left(\begin{array}[c]{cccc}-\mu
I&&&\\\ \mu I&-\mu I&&\\\ &\mu I&-\mu I&\\\ &&\ddots&\ddots\end{array}\right)$
(25) $\displaystyle=-\left(\pi_{0}^{\odot d}D_{1},\pi_{0}^{\odot
d}D_{2},\pi_{0}^{\odot d}D_{3},\ldots\right).$ (26)
Let
$A=\left(\begin{array}[c]{cccc}-\mu I&&&\\\ \mu I&-\mu I&&\\\ &\mu I&-\mu
I&\\\ &&\ddots&\ddots\end{array}\right).$
Then
$\left(-A\right)^{-1}=\left(\begin{array}[c]{cccc}\frac{1}{\mu}I&&&\\\
\frac{1}{\mu}I&\frac{1}{\mu}I&&\\\
\frac{1}{\mu}I&\frac{1}{\mu}I&\frac{1}{\mu}I&\\\
\vdots&\vdots&\vdots&\ddots\end{array}\right).$
Note that
$\left(\begin{array}[c]{cccc}D_{0}&D_{1}&D_{2}&\cdots\\\
&D_{0}&D_{1}&\cdots\\\ &&D_{0}&\cdots\\\
&&&\ddots\end{array}\right)\left(-A^{-1}\right)=\left(\begin{array}[c]{cccc}\frac{1}{\mu}\sum\limits_{k=0}^{\infty}D_{k}&\frac{1}{\mu}\sum\limits_{k=1}^{\infty}D_{k}&\frac{1}{\mu}\sum\limits_{k=2}^{\infty}D_{k}&\cdots\\\
\frac{1}{\mu}\sum\limits_{k=0}^{\infty}D_{k}&\frac{1}{\mu}\sum\limits_{k=0}^{\infty}D_{k}&\frac{1}{\mu}\sum\limits_{k=1}^{\infty}D_{k}&\cdots\\\
\frac{1}{\mu}\sum\limits_{k=0}^{\infty}D_{k}&\frac{1}{\mu}\sum\limits_{k=0}^{\infty}D_{k}&\frac{1}{\mu}\sum\limits_{k=0}^{\infty}D_{k}&\cdots\\\
\vdots&\vdots&\vdots&\end{array}\right)$
and
$\left(\pi_{0}^{\odot d}D_{1},\pi_{0}^{\odot d}D_{2},\pi_{0}^{\odot
d}D_{3},\ldots\right)\left(-A^{-1}\right)=\left(\frac{1}{\mu}\pi_{0}^{\odot
d}\sum\limits_{k=1}^{\infty}D_{k},\frac{1}{\mu}\pi_{0}^{\odot
d}\sum\limits_{k=2}^{\infty}D_{k},\frac{1}{\mu}\pi_{0}^{\odot
d}\sum\limits_{k=3}^{\infty}D_{k},\ldots\right),$
it follows from (26) that
$\pi_{1}=\pi_{0}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{i=1}^{\infty}D_{i}\right]+\sum_{j=1}^{\infty}\pi_{j}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{i=0}^{\infty}D_{i}\right]$ (27)
and for $k\geq 2$,
$\pi_{k}=\sum\limits_{i=0}^{k-1}\pi_{i}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{j=k-i}^{\infty}D_{j}\right]+\sum_{j=k}^{\infty}\pi_{j}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{i=0}^{\infty}D_{i}\right].$ (28)
To omit the terms $\sum_{j=k}^{\infty}\pi_{j}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{i=0}^{\infty}D_{i}\right]$ for $k\geq 1$, we
assume that the system of nonlinear equations (27) and (28) has a closed-form
solution
$\pi_{k}=r\left(k\right)\gamma^{\odot\frac{1}{d}},$ (29)
where $r\left(k\right)$ is an underdetermined positive constant for $k\geq 1$.
Then it follows from (27), (28) and (29) that
$\pi_{1}=\pi_{0}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{i=1}^{\infty}D_{i}\right]$ (30)
or
$r\left(1\right)\gamma^{\odot\frac{1}{d}}=\pi_{0}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{i=1}^{\infty}D_{i}\right];$ (31)
and for $k\geq 2$,
$\pi_{k}=\sum\limits_{i=0}^{k-1}\pi_{i}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{j=k-i}^{\infty}D_{j}\right]$
or
$r\left(k\right)\gamma^{\odot\frac{1}{d}}=\pi_{0}^{\odot
d}\left[\frac{1}{\mu}\sum\limits_{j=k}^{\infty}D_{j}\right]+\sum\limits_{i=0}^{k-1}\left[r\left(i\right)\right]^{d}\gamma\left[\frac{1}{\mu}\sum\limits_{j=k-i}^{\infty}D_{j}\right].$
(32)
Let $\theta=1/\gamma^{\odot\frac{1}{d}}e$. Then $0<\theta<1$. Let
$\lambda_{k}=\gamma\sum_{i=k}^{\infty}D_{i}e$ and $\rho_{k}=\lambda_{k}/\mu$.
Then it follows from (31) and (32) that
$r\left(1\right)=\frac{\theta}{\mu}\pi_{0}^{\odot
d}\sum\limits_{i=1}^{\infty}D_{i}e$ (33)
and for $k\geq 2$
$\displaystyle r\left(k\right)$
$\displaystyle=\frac{\theta}{\mu}\pi_{0}^{\odot
d}\sum\limits_{j=k}^{\infty}D_{j}e+\frac{\theta}{\mu}\sum\limits_{i=1}^{k-1}\left[r\left(i\right)\right]^{d}\gamma\sum\limits_{j=k-i}^{\infty}D_{j}e$
$\displaystyle=\frac{\theta}{\mu}\pi_{0}^{\odot
d}\sum\limits_{j=k}^{\infty}D_{j}e+\theta\sum\limits_{i=1}^{k-1}\left[r\left(i\right)\right]^{d}\rho_{k-i}.$
(34)
It is easy to see from (33) and (34) that $\pi_{0}$ and $r\left(1\right)$ are
two key underdetermined terms for the closed-form solution to the system of
nonlinear equations (31) and (32). Let us first derive the vector $\pi_{0}$.
It follows from (15) and (30) that
$\left\\{\begin{array}[c]{l}\pi_{0}^{\odot d}C+\mu\pi_{1}=0,\\\
\mu\pi_{1}=\pi_{0}^{\odot
d}\sum\limits_{i=1}^{\infty}D_{i}.\end{array}\right.$
This leads to
$\pi_{0}^{\odot d}\left(C+\sum\limits_{i=1}^{\infty}D_{i}\right)=0.$
Thus, it is easy to see that $\pi_{0}=\theta\gamma^{\odot\frac{1}{d}}$, which
is a probability vector with $\pi_{0}e=1$. Hence we have
$\pi_{1}=-\frac{\theta^{d}}{\mu}\gamma
C=\frac{\theta^{d}}{\mu}\gamma\sum\limits_{i=1}^{\infty}D_{i}.$ (35)
It follows from (33) and (34) that
$r\left(1\right)=\frac{\theta}{\mu}\cdot\theta^{d}\gamma\sum\limits_{i=1}^{\infty}D_{i}e=\theta^{d+1}\rho_{1}$
(36)
and for $k\geq 2$
$\displaystyle r\left(k\right)$
$\displaystyle=\frac{\theta}{\mu}\pi_{0}^{\odot
d}\sum\limits_{j=k}^{\infty}D_{j}e+\frac{\theta}{\mu}\sum\limits_{i=1}^{k-1}\left[r\left(i\right)\right]^{d}\gamma\sum\limits_{j=k-i}^{\infty}D_{j}e$
$\displaystyle=\theta^{d+1}\rho_{k}+\theta\sum\limits_{i=1}^{k-1}\left[r\left(i\right)\right]^{d}\rho_{k-i}.$
(37)
Therefore, we obtain the super-exponential solution to the fixed point as
follows:
$\pi_{0}=\theta\gamma^{\odot\frac{1}{d}}$
and for $k\geq 1$
$\pi_{k}=\left[\theta^{d+1}\rho_{k}+\theta\sum\limits_{i=1}^{k-1}\left[r\left(i\right)\right]^{d}\rho_{k-i}\right]\gamma^{\odot\frac{1}{d}}.$
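The recursion (36)-(37) is straightforward to evaluate numerically. The following Python/NumPy sketch computes $\theta$, $\rho_{k}$ and $r(k)$ for a BMAP with finitely many batch-size matrices and assembles the fixed point $\pi_{k}=r(k)\gamma^{\odot\frac{1}{d}}$; the example data are invented for illustration.

```python
import numpy as np

def super_exponential_fixed_point(C, Ds, mu, d, K=10):
    """Evaluate pi_0 = theta * gamma^{o(1/d)} and pi_k = r(k) * gamma^{o(1/d)},
    k = 1..K, via the recursion (36)-(37); Ds = [D_1, ..., D_L]."""
    m = C.shape[0]
    A = C + sum(Ds)
    # gamma solves gamma A = 0, gamma e = 1.
    M = np.vstack([A.T, np.ones(m)])
    gamma, *_ = np.linalg.lstsq(M, np.append(np.zeros(m), 1.0), rcond=None)

    gamma_root = gamma ** (1.0 / d)
    theta = 1.0 / gamma_root.sum()                 # theta = 1 / (gamma^{o(1/d)} e)
    # rho_k = gamma (sum_{i>=k} D_i) e / mu for k = 1..L, and 0 for k > L.
    rho = [gamma @ sum(Ds[i:]) @ np.ones(m) / mu for i in range(len(Ds))] + [0.0] * K

    r = [None, theta ** (d + 1) * rho[0]]          # r(1), equation (36)
    for k in range(2, K + 1):                      # r(k), equation (37)
        r.append(theta ** (d + 1) * rho[k - 1]
                 + theta * sum(r[i] ** d * rho[k - i - 1] for i in range(1, k)))
    return [theta * gamma_root] + [r[k] * gamma_root for k in range(1, K + 1)]


# Illustrative two-phase BMAP with batch sizes 1 and 2, mu = 4, d = 2 (invented data).
C  = np.array([[-3.0, 1.0], [1.0, -4.0]])
D1 = np.array([[1.0, 0.5], [1.0, 1.0]])
D2 = np.array([[0.3, 0.2], [0.5, 0.5]])
pi = super_exponential_fixed_point(C, [D1, D2], mu=4.0, d=2)
print(pi[0], pi[1], pi[2])
```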
## 4 A Supermarket Model of GI/M/1 Type
In this section, we analyze a supermarket model with Poisson arrivals and
batch PH service processes. Since the stochastic environment is a Markov
chain of GI/M/1 type, the supermarket model is said to be of GI/M/1 type. For
the supermarket model of GI/M/1 type, we set up the system of differential
equations for the fraction measure by means of density-dependent jump Markov
processes, and derive the system of nonlinear equations satisfied by the fixed
point, which can be computed by an iterative algorithm. Further, it is seen
that the supermarket model of GI/M/1 type is more difficult to analyze than
the case of M/G/1 type.
Let us describe the supermarket model of GI/M/1 type. Customers arrive at a
queueing system of $n>1$ servers as a Poisson process with arrival rate
$n\lambda$ for $\lambda>0$. The service times of each batch of customers are
of phase type with irreducible representation $\left(\alpha,T\right)$ of order
$m$ and with a batch size distribution $\left\\{b_{k},k=1,2,3,\ldots\right\\}$
for $\sum_{k=1}^{\infty}b_{k}=1$ and
$\overline{b}=\sum_{k=1}^{\infty}kb_{k}<+\infty$. Let $T^{0}=-Te\gvertneqq 0$.
Then the expected service time is given by $1/\mu=-\alpha T^{-1}e$,
equivalently $\mu=\eta T^{0}$, where $\eta$ is the stationary probability
vector of the Markov chain $T+T^{0}\alpha$. Each batch of arriving customers
chooses $d\geq 1$ servers independently and uniformly at random from the $n$
servers, and waits for service at the server which currently contains the
fewest customers. If there is a tie, one of the servers with the fewest
customers is chosen at random. All customers in every server are served in
FCFS order across batches and in random order within one batch. We assume that
all random variables defined above are independent of each other, and that the
system is operating in the stable region $\rho=\lambda/(\mu\overline{b})<1$.
Clearly, $d$ is an input choice number in this supermarket model. Figure 2
depicts an illustration of supermarket models of GI/M/1 type.
Figure 2: A supermarket model of GI/M/1 type
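As a small numerical check of the service-time quantities just defined, the following Python/NumPy sketch computes $1/\mu=-\alpha T^{-1}e$, the stationary vector $\eta$ of $T+T^{0}\alpha$ (which satisfies $\mu=\eta T^{0}$), the mean batch size $\overline{b}$ and the ratio $\rho=\lambda/(\mu\overline{b})$ for an invented order-2 PH representation; all numerical values are illustrative assumptions.

```python
import numpy as np

# An invented order-2 PH representation (alpha, T) and batch-size distribution.
alpha = np.array([0.6, 0.4])
T = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
b = [0.7, 0.3]                                  # b_1, b_2
lam = 1.0                                       # Poisson arrival rate per server

T0 = -T @ np.ones(2)                            # exit rate vector T^0 = -T e
mean_service = -alpha @ np.linalg.inv(T) @ np.ones(2)   # 1/mu = -alpha T^{-1} e
mu = 1.0 / mean_service

# eta solves eta (T + T^0 alpha) = 0, eta e = 1, and satisfies mu = eta T^0.
A = T + np.outer(T0, alpha)
eta, *_ = np.linalg.lstsq(np.vstack([A.T, np.ones(2)]),
                          np.array([0.0, 0.0, 1.0]), rcond=None)

b_bar = sum(k * bk for k, bk in enumerate(b, start=1))
rho = lam / (mu * b_bar)
print(mu, eta @ T0, rho)            # the first two numbers should coincide
```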
We define $n_{k}^{\left(i\right)}\left(t\right)$ as the number of queues with
at least $k$ customers and the service time in phase $i$ at time $t\geq 0$.
Clearly, $0\leq n_{k}^{\left(i\right)}\left(t\right)\leq n$ for $k\geq 1$ and
$1\leq i\leq m$. Let
$X_{n}^{\left(0\right)}\left(t\right)=\frac{n}{n}=1,$
and for $k\geq 1$
$X_{n}^{\left(k,i\right)}\left(t\right)=\frac{n_{k}^{\left(i\right)}\left(t\right)}{n},$
which is the fraction of queues with at least $k$ customers and the service
time in phase $i$ at time $t\geq 0$. We write
$X_{n}^{\left(k\right)}\left(t\right)=\left(X_{n}^{\left(k,1\right)}\left(t\right),X_{n}^{\left(k,2\right)}\left(t\right),\ldots,X_{n}^{\left(k,m\right)}\left(t\right)\right),\text{
\ }k\geq 1,$
$X_{n}\left(t\right)=\left(X_{n}^{\left(0\right)}\left(t\right),X_{n}^{\left(1\right)}\left(t\right),X_{n}^{\left(2\right)}\left(t\right),\ldots\right).$
The state of the supermarket model may be described by the vector
$X_{n}\left(t\right)$ for $t\geq 0$. Since the arrival process to the queueing
system is Poisson and the service times of each server are of phase type,
$\left\\{X_{n}\left(t\right),t\geq 0\right\\}$ is a Markov process whose state
space is given by
$\Omega_{n}=\\{\left(g_{n}^{\left(0\right)},g_{n}^{\left(1\right)},g_{n}^{\left(2\right)},\ldots\right):g_{n}^{\left(0\right)}=1,g_{n}^{\left(k-1\right)}\geq g_{n}^{\left(k\right)}\geq 0,\text{ and }ng_{n}^{\left(k\right)}\text{ is a vector of nonnegative integers for }k\geq 1\\}.$
Let
$s_{0}\left(n,t\right)=E\left[X_{n}^{\left(0\right)}\left(t\right)\right]$
and for $k\geq 1$
$s_{k}^{\left(i\right)}\left(n,t\right)=E\left[X_{n}^{\left(k,i\right)}\left(t\right)\right].$
Clearly, $s_{0}\left(n,t\right)=1$. We write
$S_{k}\left(n,t\right)=\left(s_{k}^{\left(1\right)}\left(n,t\right),s_{k}^{\left(2\right)}\left(n,t\right),\ldots,s_{k}^{\left(m\right)}\left(n,t\right)\right),\text{
\ }k\geq 1.$
As shown in Martin and Suhov [18] and Luczak and McDiarmid [14], the Markov
process $\left\\{X_{n}\left(t\right),t\geq 0\right\\}$ is asymptotically
deterministic as $n\rightarrow\infty$. Thus
$\lim_{n\rightarrow\infty}E\left[X_{n}^{\left(0\right)}\left(t\right)\right]$
and $\lim_{n\rightarrow\infty}E\left[X_{n}^{\left(k,i\right)}\left(t\right)\right]$ always
exist by means of the law of large numbers. Based on this, we write
$S_{0}\left(t\right)=\lim_{n\rightarrow\infty}s_{0}\left(n,t\right)=1,$
for $k\geq 1$
$s_{k}^{\left(i\right)}\left(t\right)=\lim_{n\rightarrow\infty}s_{k}^{\left(i\right)}\left(n,t\right),$
$S_{k}\left(t\right)=\left(s_{k}^{\left(1\right)}\left(t\right),s_{k}^{\left(2\right)}\left(t\right),\ldots,s_{k}^{\left(m\right)}\left(t\right)\right)$
and
$S\left(t\right)=\left(S_{0}\left(t\right),S_{1}\left(t\right),S_{2}\left(t\right),\ldots\right).$
Let $X\left(t\right)=\lim_{n\rightarrow\infty}X_{n}\left(t\right)$. Then it is
easy to see from Poisson arrivals and batch PH service times that
$\left\\{X\left(t\right),t\geq 0\right\\}$ is also a Markov process whose
state space is given by
$\Omega=\left\\{\left(g^{\left(0\right)},g^{\left(1\right)},g^{\left(2\right)},\ldots\right):g^{\left(0\right)}=1,g^{\left(k-1\right)}\geq
g^{\left(k\right)}\geq 0\right\\}.$
If the initial distribution of the Markov process
$\left\\{X_{n}\left(t\right),t\geq 0\right\\}$ approaches the Dirac delta-
measure concentrated at a point $g\in$ $\Omega$, then its steady-state
distribution is concentrated in the limit on the trajectory
$S_{g}=\left\\{S\left(t\right):t\geq 0\right\\}$. This indicates a law of
large numbers for the time evolution of the fraction of queues of different
lengths. Furthermore, the Markov process $\left\\{X_{n}\left(t\right),t\geq
0\right\\}$ converges weakly to the fraction vector
$S\left(t\right)=\left(S_{0}\left(t\right),S_{1}\left(t\right),S_{2}\left(t\right),\ldots\right)$,
or for a sufficiently small $\varepsilon>0$,
$\lim_{n\rightarrow\infty}P\left\\{||X_{n}\left(t\right)-S\left(t\right)||\geq\varepsilon\right\\}=0,$
where $||a||$ is the $L_{\infty}$-norm of vector $a$.
To determine the fraction vector $S\left(t\right)$, we need to set up a system
of differential vector equations satisfied by the fraction measure
$S\left(t\right)$ by means of density-dependent jump Markov processes.
Consider the supermarket model with $n$ servers, and determine the expected
change in the number of queues with at least $k$ customers over a small time
period of length d$t$. The probability vector that during this time period,
any arriving customer joins a queue of size $k-1$ is given by
$n\left[\lambda S_{k-1}^{\odot d}\left(n,t\right)-\lambda S_{k}^{\odot
d}\left(n,t\right)\right]\text{d}t.$
Similarly, the expected change due to a customer leaving a server with $k$
customers is given by the vector
$n\left[S_{k}\left(n,t\right)T+\sum_{l=1}^{\infty}b_{l}S_{k+l}\left(n,t\right)T^{0}\alpha\right]\text{d}t.$
Therefore, we obtain
$\displaystyle\text{d}E\left[n_{k}\left(n,t\right)\right]=$ $\displaystyle
n\left[\lambda S_{k-1}^{\odot d}\left(n,t\right)-\lambda S_{k}^{\odot
d}\left(n,t\right)\right]\text{d}t$
$\displaystyle+n\left[S_{k}\left(n,t\right)T+\sum_{l=1}^{\infty}b_{l}S_{k+l}\left(n,t\right)T^{0}\alpha\right]\text{d}t.$
This leads to
$\frac{\text{d}S_{k}\left(n,t\right)}{\text{d}t}=\lambda S_{k-1}^{\odot
d}\left(n,t\right)-\lambda S_{k}^{\odot
d}\left(n,t\right)+S_{k}\left(n,t\right)T+\sum_{l=1}^{\infty}b_{l}S_{k+l}\left(n,t\right)T^{0}\alpha.$
(38)
Taking $n\rightarrow\infty$ on both sides of Equation (38), we have
$\frac{\text{d}S_{k}\left(t\right)}{\text{d}t}=\lambda S_{k-1}^{\odot
d}\left(t\right)-\lambda S_{k}^{\odot
d}\left(t\right)+S_{k}\left(t\right)T+\sum_{l=1}^{\infty}b_{l}S_{k+l}\left(t\right)T^{0}\alpha.$
(39)
Using a similar analysis to Equation (39), we obtain a system of differential
vector equations for the fraction vector
$S\left(t\right)=\left(S_{0}\left(t\right),S_{1}\left(t\right),S_{2}\left(t\right),\ldots\right)$
as follows:
$S_{0}\left(t\right)=1,$ (40)
$\frac{\mathtt{d}}{\text{d}t}S_{0}\left(t\right)=-\lambda
S_{0}^{d}\left(t\right)+\sum_{l=1}^{\infty}S_{l}\left(t\right)T^{0}\sum_{k=l}^{\infty}b_{k},$
(41) $\frac{\mathtt{d}}{\text{d}t}S_{1}\left(t\right)=\lambda\alpha
S_{0}^{d}\left(t\right)-\lambda S_{1}^{\odot
d}\left(t\right)+S_{1}\left(t\right)T+\sum_{l=1}^{\infty}b_{l}S_{1+l}\left(t\right)T^{0}\alpha,$
(42)
and for $k\geq 2$,
$\frac{\mathtt{d}}{\text{d}t}S_{k}\left(t\right)=\lambda S_{k-1}^{\odot
d}\left(t\right)-\lambda S_{k}^{\odot
d}\left(t\right)+S_{k}\left(t\right)T+\sum_{l=1}^{\infty}b_{l}S_{k+l}\left(t\right)T^{0}\alpha.$
(43)
If the row vector $\pi=\left(\pi_{0},\pi_{1},\pi_{2},\ldots\right)$ is a fixed
point of the fraction vector $S\left(t\right)$, then the fixed point $\pi$
satisfies the following system of nonlinear equations
$\pi_{0}=1$ (44)
$-\lambda\pi_{0}^{d}+\sum_{l=1}^{\infty}\pi_{l}T^{0}\sum_{k=l}^{\infty}b_{k}=0,$
(45) $\lambda\alpha\pi_{0}^{d}-\lambda\pi_{1}^{\odot
d}+\pi_{1}T+\sum_{l=1}^{\infty}b_{l}\pi_{1+l}T^{0}\alpha=0,$ (46)
and for $k\geq 2$,
$\lambda\pi_{k-1}^{\odot d}-\lambda\pi_{k}^{\odot
d}+\pi_{k}T+\sum_{l=1}^{\infty}b_{l}\pi_{k+l}T^{0}\alpha=0.$ (47)
Let
$Q_{\text{right}}=\left(\begin{array}[c]{cccccc}-\lambda&\lambda\alpha&&&&\cdots\\\
&-\lambda I&\lambda I&&&\cdots\\\ &&-\lambda I&\lambda I&&\cdots\\\
&&&-\lambda I&\lambda I&\cdots\\\ &&&&\ddots&\ddots\end{array}\right)$
and
$Q_{\text{left}}=\left(\begin{array}[c]{cccccc}0&&&&&\\\ T^{0}&T&&&&\\\
T^{0}\sum\limits_{k=2}^{\infty}b_{k}&b_{1}T^{0}\alpha&T&&&\\\
T^{0}\sum\limits_{k=3}^{\infty}b_{k}&b_{2}T^{0}\alpha&b_{1}T^{0}\alpha&T&&\\\
T^{0}\sum\limits_{k=4}^{\infty}b_{k}&b_{3}T^{0}\alpha&b_{2}T^{0}\alpha&b_{1}T^{0}\alpha&T&\\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right).$
Then the system of differential vector equations for the fraction measure is
given by
$S_{0}\left(t\right)=1,$
and
$\frac{\text{d}}{\text{d}t}S\left(t\right)=S^{\odot
d}\left(t\right)Q_{\text{right}}+S\left(t\right)Q_{\text{left}};$
and the system of nonlinear equations for the fixed point is given by
$\pi_{0}=1,$
and
$\pi^{\odot d}Q_{\text{right}}+\pi Q_{\text{left}}=0.$ (48)
In the remainder of this section, we provide an iterative algorithm for
computing the fixed point for the supermarket model of GI/M/1 type.
Notably, the iterative algorithm shows that the supermarket model of GI/M/1
type is more difficult to analyze than that of M/G/1 type.
Let
$B=\left(\begin{array}[c]{cccc}-\lambda I&\lambda I&&\cdots\\\ &-\lambda
I&\lambda I&\cdots\\\ &&-\lambda I&\cdots\\\ &&&\ddots\end{array}\right)$
and
$Q_{\text{service}}=\left(\begin{array}[c]{ccccc}T&&&&\\\
b_{1}T^{0}\alpha&T&&&\\\ b_{2}T^{0}\alpha&b_{1}T^{0}\alpha&T&&\\\
b_{3}T^{0}\alpha&b_{2}T^{0}\alpha&b_{1}T^{0}\alpha&T&\\\
\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right).$
Then it follows from (48) that
$\pi_{0}^{d}\left(\lambda\alpha,0,0,0,\ldots\right)+\pi_{\mathcal{L}}^{\odot
d}B+\pi_{\mathcal{L}}Q_{\text{service}}=0,$ (49)
where $\pi_{\mathcal{L}}=\left(\pi_{1},\pi_{2},\pi_{3},\ldots\right)$. Note
that
$\left(-B\right)^{-1}=\left(\begin{array}[c]{cccc}\frac{1}{\lambda}I&\frac{1}{\lambda}I&\frac{1}{\lambda}I&\cdots\\\
&\frac{1}{\lambda}I&\frac{1}{\lambda}I&\cdots\\\
&&\frac{1}{\lambda}I&\cdots\\\ &&&\ddots\end{array}\right),$
and, using $\pi_{0}=1$, we obtain
$\pi_{0}^{d}\left(\lambda\alpha,0,0,0,\ldots\right)\left(-B\right)^{-1}=\left(\alpha,\alpha,\alpha,\ldots\right)$
and
$Q_{\text{service}}\left(-B\right)^{-1}=\frac{1}{\lambda}\left(\begin{array}[c]{ccccc}T&T&T&T&\cdots\\\
\left(T^{0}\alpha\right)b_{1}&T+\left(T^{0}\alpha\right)b_{1}&T+\left(T^{0}\alpha\right)b_{1}&T+\left(T^{0}\alpha\right)b_{1}&\cdots\\\
\left(T^{0}\alpha\right)b_{2}&\left(T^{0}\alpha\right)\sum\limits_{k=1}^{2}b_{k}&T+\left(T^{0}\alpha\right)\sum\limits_{k=1}^{2}b_{k}&T+\left(T^{0}\alpha\right)\sum\limits_{k=1}^{2}b_{k}&\cdots\\\
\left(T^{0}\alpha\right)b_{3}&\left(T^{0}\alpha\right)\sum\limits_{k=2}^{3}b_{k}&\left(T^{0}\alpha\right)\sum\limits_{k=1}^{3}b_{k}&T+\left(T^{0}\alpha\right)\sum\limits_{k=1}^{3}b_{k}&\cdots\\\
\vdots&\vdots&\vdots&\vdots&\end{array}\right).$
Thus it follows from (49) that
$\displaystyle\pi_{\mathcal{L}}^{\odot d}$
$\displaystyle=\pi_{0}^{d}\left(\lambda\alpha,0,0,0,\ldots\right)\left(-B\right)^{-1}+\pi_{\mathcal{L}}Q_{\text{service}}\left(-B\right)^{-1}$
$\displaystyle=\left(\alpha,\alpha,\alpha,\ldots\right)+\pi_{\mathcal{L}}Q_{\text{service}}\left(-B\right)^{-1}.$
This implies that, for $k\geq 1$,
$\lambda\pi_{k}^{d}=\lambda\alpha+\sum\limits_{l=1}^{k+1}\pi_{l}T+b_{1}\sum\limits_{l=2}^{k+2}\pi_{l}\left(T^{0}\alpha\right)+b_{2}\sum\limits_{l=3}^{k+3}\pi_{l}\left(T^{0}\alpha\right)+\cdots.$
(50)
To solve the system of nonlinear equations (50) for $k\geq 1$, we assume that
the fixed point has a closed-form solution
$\pi_{k}=r\left(k\right)\eta,$
where $\eta$ is the stationary probability vector of the Markov process with
generator matrix $T+T^{0}\alpha$. It follows from (50) that
$\lambda r^{d}\left(k\right)\eta^{\odot
d}=\lambda\alpha+\sum\limits_{l=1}^{k+1}r\left(l\right)\eta
T+b_{1}\sum\limits_{l=2}^{k+2}r\left(l\right)\eta\left(T^{0}\alpha\right)+b_{2}\sum\limits_{l=3}^{k+3}r\left(l\right)\eta\left(T^{0}\alpha\right)+\cdots.$
Let $\theta=\eta^{\odot d}e$; then $\theta\in\left(0,1\right)$. Noting that
$\alpha e=1$, $\eta Te=-\mu$ and $\eta T^{0}=\mu$, we obtain that, for $k\geq 1$,
$\rho\theta
r^{d}\left(k\right)=\rho-\sum\limits_{l=1}^{k+1}r\left(l\right)+b_{1}\sum\limits_{l=2}^{k+2}r\left(l\right)+b_{2}\sum\limits_{l=3}^{k+3}r\left(l\right)+\cdots.$
(51)
This gives
$\rho\theta\left[r^{d}\left(k\right)-r^{d}\left(k+1\right)\right]=r\left(k+2\right)-\sum\limits_{l=1}^{\infty}b_{l}r\left(k+2+l\right).$
(52)
Thus it follows from (52) that
$\rho\theta\left(r^{d}\left(1\right)-r^{d}\left(2\right),r^{d}\left(2\right)-r^{d}\left(3\right),r^{d}\left(3\right)-r^{d}\left(4\right),\ldots\right)=\left(r\left(3\right),r\left(4\right),r\left(5\right),\ldots\right)C,$
(53)
where
$C=\left(\begin{array}[c]{ccccc}1&&&&\\\ -b_{1}&1&&&\\\ -b_{2}&-b_{1}&1&&\\\
-b_{3}&-b_{2}&-b_{1}&1&\\\
\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right).$
Therefore, we have
$C^{-1}=\left(\begin{array}[c]{cccccc}1&&&&&\\\ b_{1}&1&&&&\\\
b_{2}+b_{1}^{2}&b_{1}&1&&&\\\
b_{3}+2b_{2}b_{1}+b_{1}^{3}&b_{2}+b_{1}^{2}&b_{1}&1&&\\\
b_{4}+2b_{3}b_{1}+b_{2}^{2}+3b_{2}b_{1}^{2}+b_{1}^{4}&b_{3}+2b_{2}b_{1}+b_{1}^{3}&b_{2}+b_{1}^{2}&b_{1}&1&\\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right),$
and the entrywise norm $||\cdot||$ of the matrix $C^{-1}$ satisfies
$||C^{-1}||=\sup_{i\geq j\geq
1}\left\\{|\zeta_{i,j}|\right\\}\leq\sum_{k=1}^{\infty}b_{k}=1,$
where $\zeta_{i,j}$ is the $\left(i,j\right)$th entry of the matrix $C^{-1}$
for $1\leq j\leq i$. The bound $||C^{-1}||\leq 1$ is useful for the iterative
algorithm below, which is built from the matrix $\rho\theta C^{-1}$ with
$||\rho\theta C^{-1}||=\rho\theta<1$.
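The closed-form entries of $C^{-1}$ displayed above can be checked numerically by inverting a truncated version of $C$. The following short sketch (Python with NumPy, using an illustrative batch-size distribution that is not taken from the text) prints the first column of the truncated inverse, which should reproduce $1$, $b_{1}$, $b_{2}+b_{1}^{2}$, $b_{3}+2b_{2}b_{1}+b_{1}^{3}$, and so on, with every entry bounded by $\sum_{k}b_{k}=1$.

```python
import numpy as np

def truncated_C_inverse(b, K):
    """K x K truncation of C (ones on the diagonal, -b_j on the j-th
    subdiagonal) and its inverse; b = (b_1, b_2, ...)."""
    C = np.eye(K)
    for j in range(1, min(K, len(b) + 1)):
        C -= np.diag(np.full(K - j, b[j - 1]), -j)
    return np.linalg.inv(C)

b = [0.5, 0.3, 0.2]                             # illustrative batch-size probabilities
Cinv = truncated_C_inverse(b, 6)
print(np.round(Cinv[:, 0], 4))                  # 1, b1, b2+b1^2, b3+2*b2*b1+b1^3, ...
print(np.max(np.abs(Cinv)) <= sum(b) + 1e-12)   # entrywise norm bounded by sum b_k = 1
```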
Let
$X=\left(r^{d}\left(1\right)-r^{d}\left(2\right),r^{d}\left(2\right)-r^{d}\left(3\right),r^{d}\left(3\right)-r^{d}\left(4\right),\ldots\right)$
and
$Y=\left(r\left(3\right),r\left(4\right),r\left(5\right),\ldots\right).$
Then
$X=\left(r^{d}\left(1\right),r^{d}\left(2\right),Y^{\odot
d}\right)-\left(r^{d}\left(2\right),Y^{\odot d}\right)$
and it follows from (53) that
$\displaystyle Y$ $\displaystyle=X\left(\rho\theta C^{-1}\right)$
$\displaystyle=\left(r^{d}\left(1\right),r^{d}\left(2\right),Y^{\odot
d}\right)\left(\rho\theta C^{-1}\right)-\left(r^{d}\left(2\right),Y^{\odot
d}\right)\left(\rho\theta C^{-1}\right).$ (54)
It follows from (51) that
$r\left(1\right)=\rho-\rho\theta
r^{d}\left(1\right)-r\left(2\right)\left(1-b_{1}\right)+Y\left(b_{1}+b_{2},b_{2}+b_{3},b_{3}+b_{4},\ldots\right)^{T}$
(55)
and
$r\left(2\right)=\rho-r\left(1\right)-\left[b_{1}r\left(2\right)+\rho\theta
r^{d}\left(2\right)\right]+Y\left(b_{1}+b_{2}-1,b_{1}+b_{2}+b_{3},b_{2}+b_{3}+b_{4},\ldots\right)^{T}.$
(56)
Now, we use Equations (54) to (56) to provide an iterative algorithm for
computing the fixed point $\pi_{k}=r\left(k\right)\eta$ for $k\geq 1$. To that
end, we write
$Y_{N}=\left(r_{N}\left(3\right),r_{N}\left(4\right),r_{N}\left(5\right),\ldots\right)$
(57)
and
$R_{N}=\left(r_{N}\left(1\right),r_{N}\left(2\right),r_{N}\left(3\right),\ldots\right)=\left(r_{N}\left(1\right),r_{N}\left(2\right),Y_{N}\right).$
(58)
Let
$\displaystyle r_{N+1}\left(1\right)=$ $\displaystyle\rho-\rho\theta
r_{N}^{d}\left(1\right)-r_{N}\left(2\right)\left(1-b_{1}\right)$
$\displaystyle+Y_{N}\left(b_{1}+b_{2},b_{2}+b_{3},b_{3}+b_{4},\ldots\right)^{T},$
(59) $\displaystyle r_{N+1}\left(2\right)=$ $\displaystyle\rho-
r_{N}\left(1\right)-\left[b_{1}r_{N}\left(2\right)+\rho\theta
r_{N}^{d}\left(2\right)\right]$
$\displaystyle+Y_{N}\left(b_{1}+b_{2}-1,b_{1}+b_{2}+b_{3},b_{2}+b_{3}+b_{4},\ldots\right)^{T}$
(60)
and
$Y_{N+1}=\left(r_{N}^{d}\left(1\right),r_{N}^{d}\left(2\right),Y_{N}^{\odot
d}\right)\left(\rho\theta
C^{-1}\right)-\left(r_{N}^{d}\left(2\right),Y_{N}^{\odot
d}\right)\left(\rho\theta C^{-1}\right).$ (61)
Based on the iterative relations given in (59) to (61), we provide an
iterative algorithm for computing the vector
$R=\left(r\left(1\right),r\left(2\right),r\left(3\right),\ldots\right)$. This
gives the fixed point
$\pi=\left(1,r\left(1\right)\eta,r\left(2\right)\eta,r\left(3\right)\eta,\ldots\right)$.
An Iterative Algorithm: Computation of the Fixed Point
Input: $\ \ \lambda,\left(\alpha,T\right),\left\\{b_{k}\right\\}$ and $d$.
Output:
$R=\left(r\left(1\right),r\left(2\right),r\left(3\right),\ldots\right)$ and
$\pi=\left(1,r\left(1\right)\eta,r\left(2\right)\eta,r\left(3\right)\eta,\ldots\right)$.
Computational Steps:
_Step one:_ Take the initial value $R_{0}=0$, that is,
$r_{0}\left(1\right)=0,r_{0}\left(2\right)=0,Y_{0}=0$.
_Step two:_ Compute
$R_{1}=\left(r_{1}\left(1\right),r_{1}\left(2\right),Y_{1}\right)$ through
$\begin{array}[c]{ll}r_{1}\left(1\right)=\rho,&\text{ \
}\leftarrow\text{(\ref{EqGI-7})}\\\ r_{1}\left(2\right)=\rho,&\text{ \
}\leftarrow\text{(\ref{EqGI-8})}\\\
Y_{1}=\left(\rho^{d},\rho^{d},0\right)\left(\rho\theta
C^{-1}\right)-\left(\rho^{d},0\right)\left(\rho\theta C^{-1}\right),&\text{ \
}\leftarrow\text{(\ref{EqGI-9})}\end{array}$
_Step three:_ If $R_{N}$ is known, compute
$R_{N+1}=\left(r_{N+1}\left(1\right),r_{N+1}\left(2\right),Y_{N+1}\right)$
through
$\begin{array}[c]{ll}\begin{array}[c]{l}r_{N+1}\left(1\right)=\rho-\rho\theta
r_{N}^{d}\left(1\right)-r_{N}\left(2\right)\left(1-b_{1}\right)\\\ \text{ \
}+Y_{N}\left(b_{1}+b_{2},b_{2}+b_{3},b_{3}+b_{4},\ldots\right)^{T},\end{array}&\text{
\ }\leftarrow\text{(\ref{EqGI-7})}\\\
\begin{array}[c]{l}r_{N+1}\left(2\right)=\rho-
r_{N}\left(1\right)-\left[b_{1}r_{N}\left(2\right)+\rho\theta
r_{N}^{d}\left(2\right)\right]\\\ \text{ \
}+Y_{N}\left(b_{1}+b_{2}-1,b_{1}+b_{2}+b_{3},b_{2}+b_{3}+b_{4},\ldots\right)^{T},\end{array}&\text{
\ }\leftarrow\text{(\ref{EqGI-8})}\\\
Y_{N+1}=\left(r_{N}^{d}\left(1\right),r_{N}^{d}\left(2\right),Y_{N}^{\odot
d}\right)\left(\rho\theta
C^{-1}\right)-\left(r_{N}^{d}\left(2\right),Y_{N}^{\odot
d}\right)\left(\rho\theta C^{-1}\right).&\text{ \
}\leftarrow\text{(\ref{EqGI-9})}\end{array}$
_Step four:_ For a sufficiently small $\varepsilon>0$, if there exists a step
$K$ such that $||R_{K+1}-R_{K}||<\varepsilon$, then the computation terminates
at this step; otherwise, return to Step three and continue the iteration.
_Step five:_ When the computation terminates at step $K$, compute
$\pi=\left(1,r_{K}\left(1\right)\eta,r_{K}\left(2\right)\eta,r_{K}\left(3\right)\eta,\ldots\right)$
as an approximate fixed point with error tolerance $\varepsilon>0$.
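To make the computational steps concrete, the following sketch implements the direct iteration of (59) to (61) in Python with NumPy. The infinite vector $Y_{N}$ and the matrix $C$ are truncated at a finite length, and the quantities $\rho$, $\theta=\eta^{\odot d}e$, the choice number $d$ and the batch-size probabilities $\left\\{b_{k}\right\\}$ are taken as given inputs; the truncation length, tolerance and function names are illustrative assumptions rather than part of the algorithm as stated.

```python
import numpy as np

def iterate_fixed_point(rho, theta, d, b, K=200, eps=1e-10, max_iter=100000):
    """Approximate R = (r(1), r(2), r(3), ...) by iterating (59)-(61).

    rho, theta : scalars with rho * theta < 1
    d          : choice number
    b          : batch-size probabilities (b_1, b_2, ...), summing to 1
    K          : truncation length for Y_N = (r(3), r(4), ...)  [assumption]
    """
    bk = np.zeros(K + 2)
    bk[:min(len(b), K + 2)] = np.asarray(b, dtype=float)[:K + 2]

    # truncated C (ones on the diagonal, -b_j on the j-th subdiagonal) and its inverse
    C = np.eye(K)
    for j in range(1, K):
        C -= np.diag(np.full(K - j, bk[j - 1]), -j)
    C_inv = np.linalg.inv(C)

    # weight vectors multiplying Y_N in (59) and (60), truncated to length K
    w1 = np.array([bk[i] + bk[i + 1] for i in range(K)])
    w2 = np.array([bk[0] + bk[1] - 1.0]
                  + [bk[i] + bk[i + 1] + bk[i + 2] for i in range(K - 1)])

    r1, r2, Y = 0.0, 0.0, np.zeros(K)                     # Step one: R_0 = 0
    for _ in range(max_iter):
        r1_new = rho - rho * theta * r1**d - r2 * (1 - bk[0]) + Y @ w1      # (59)
        r2_new = rho - r1 - (bk[0] * r2 + rho * theta * r2**d) + Y @ w2     # (60)
        # (61): shifted difference of the powered vector, times rho*theta*C^{-1}
        va = np.concatenate(([r1**d, r2**d], Y**d))[:K]
        vb = np.concatenate(([r2**d], Y**d))[:K]
        Y_new = (va - vb) @ (rho * theta * C_inv)
        delta = max(abs(r1_new - r1), abs(r2_new - r2), np.max(np.abs(Y_new - Y)))
        r1, r2, Y = r1_new, r2_new, Y_new
        if delta < eps:                                   # Step four: convergence
            break
    return np.concatenate(([r1, r2], Y))                  # R = (r(1), r(2), ...)
```

Given the output $R$, the approximate fixed point is assembled as $\pi=\left(1,r\left(1\right)\eta,r\left(2\right)\eta,\ldots\right)$, where $\eta$ is the stationary probability vector of the generator $T+T^{0}\alpha$ and $\theta=\eta^{\odot d}e$.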
In what follows we analyze two numerical examples by means of the above
iterative algorithm.
In the first example, we take
$\lambda=1,d=2,\alpha=\left(1/2,1/2\right),$
$T\left(1\right)=\left(\begin{array}[c]{cc}-4&3\\\
2&-7\end{array}\right),T\left(2\right)=\left(\begin{array}[c]{cc}-5&3\\\
2&-7\end{array}\right),T\left(3\right)=\left(\begin{array}[c]{cc}-4&4\\\
2&-7\end{array}\right).$
Table 1 illustrates how the super-exponential solution ($\pi_{1}$ to
$\pi_{5}$) depends on the matrices $T\left(1\right)$, $T\left(2\right)$ and
$T\left(3\right)$, respectively.
Table 1: The super-exponential solution depends on the matrix $T$ | $T(1)$ | $T(2)$ | $T(3)$
---|---|---|---
$\pi_{1}$ | (0.2045, 0.1591) | (0.1410, 0.1026) | (0.3125, 0.2500)
$\pi_{2}$ | (0.0137, 0.0107) | (0.0043, 0.0031) | (0.0500, 0.0400)
$\pi_{3}$ | (6.193e-05, 4.817e-05) | (3.965e-06, 2.884e-06) | (0.0013 , 0.0010)
$\pi_{4}$ | (1.259e-09, 9.793e-10) | (3.390e-12, 2.465e-12) | (8.446e-07, 6.757e-07)
$\pi_{5}$ | (5.204e-19, 4.048e-19) | (2.478e-24, 1.802e-24) | (3.656e-13, 2.925e-13)
In the second example, we take
$\lambda=1,d=5,\alpha\left(1\right)=\left(1/3,1/3,1/3\right),\alpha\left(2\right)=\left(1/12,7/12,1/3\right),$
$T=\left(\begin{array}[c]{ccc}-10&2&4\\\ 3&-7&4\\\ 0&2&-5\end{array}\right).$
Table 2 shows how the super-exponential solution ($\pi_{1}$ to $\pi_{4}$)
depends on the vectors $\alpha\left(1\right)$ and $\alpha\left(2\right)$,
respectively.
Table 2: The super-exponential solution depends on the vectors $\alpha$ | $\alpha=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$ | $\alpha=(\frac{1}{12},\frac{7}{12},\frac{1}{3})$
---|---|---
$\pi_{1}$ | (0.0741, 0.1358 , 0.2346) | (0.0602, 0.1728, 0.2531)
$\pi_{2}$ | (5.619e-05, 1.030e-05, 1.779e-04 ) | (7.182e-05, 2.063e-04, 3.020e-04)
$\pi_{3}$ | (1.411e-20, 2.587e-20, 4.469e-20) | (1.739e-19, 4.993e-19, 7.311e-19)
$\pi_{4}$ | (1.410e-98, 2.586e-98, 4.466e-98) | (1.444e-92, 4.148e-92, 6.074e-92)
## 5 Supermarket Models with Multiple Choices
In this section, we consider two supermarket models with multiple choices. The
first is a model with one mobile server and multiple waiting lines under a
join-the-shortest-queue and serve-the-longest-queue discipline; the second is
a supermarket model with multiple classes of Poisson arrivals, each class
having its own choice number. Our main purpose is to set up the system of
nonlinear equations for the fixed point under multiple choice numbers, and to
obtain super-exponential solutions for the fixed points of the two supermarket
models.
### 5.1 One mobile server with multiple waiting lines
The supermarket model is structured as one mobile server with multiple waiting
lines, where each Poisson arrival joins a waiting line with the shortest queue
and the mobile server moves to a waiting line with the longest queue to
perform its service. Such a system is depicted in Figure 3. For one mobile
server with $n$ waiting lines, customers arrive at this system as a Poisson
process with arrival rate $n\lambda$, and all customers are served by one
mobile server with service rate $n\mu$. Each arriving customer chooses
$d\geq 1$ waiting lines independently and uniformly at random from the $n$
waiting lines, and waits for service at a waiting line which currently
contains the fewest customers. If there is a tie, one of the waiting lines
with the fewest customers is chosen by the arriving customer at random. The
mobile server chooses $f\geq 1$ waiting lines independently and uniformly at
random from the $n$ waiting lines, and enters a waiting line which currently
contains the most customers. If there is a tie, one of the waiting lines with
the most customers is chosen by the server at random. All customers in every
waiting line are served in FCFS order. We assume that all random variables
defined above are independent of each other, and that the system operates in
the stable region $\rho=\lambda/\mu<1$. Clearly, $d$ and $f$ are the input and
output choice numbers of this supermarket model, respectively.
Figure 3: A supermarket model with input and output choices
It is clear that the stochastic environment of this supermarket model is a
positive recurrent birth-death process with an irreducible infinitesimal
generator $Q=Q_{\text{left}}+Q_{\text{right}}$, where
$Q_{\text{left}}=\left(\begin{array}[c]{ccccc}0&&&&\\\ \mu&-\mu&&&\\\
&\mu&-\mu&&\\\ &&\mu&-\mu&\\\ &&&\ddots&\ddots\end{array}\right)$
and
$Q_{\text{right}}=\left(\begin{array}[c]{ccccc}-\lambda&\lambda&&&\\\
&-\lambda&\lambda&&\\\ &&-\lambda&\lambda&\\\
&&&\ddots&\ddots\end{array}\right).$
By a derivation similar to those given in Sections 3 and 4, we obtain that the
fixed point satisfies the system of nonlinear equations
$\pi_{0}=1$
and
$\pi^{\odot f}Q_{\text{left}}+\pi^{\odot d}Q_{\text{right}}=0.$ (62)
Let
$Q=\left(\begin{array}[c]{cc}Q_{0,0}&U\\\
V&Q^{\left(\mathcal{L}\right)}\end{array}\right),$
where
$Q_{0,0}=-\lambda,U=\left(\lambda,0,0,\ldots\right),V=\left(\mu,0,0,\ldots\right)^{T},$
$Q_{\text{arrival}}^{\left(\mathcal{L}\right)}=\left(\begin{array}[c]{ccccc}-\lambda&\lambda&&&\\\
&-\lambda&\lambda&&\\\ &&-\lambda&\lambda&\\\
&&&\ddots&\ddots\end{array}\right)$
and
$Q_{\text{service}}^{\left(\mathcal{L}\right)}=\left(\begin{array}[c]{cccc}-\mu&&&\\\
\mu&-\mu&&\\\ &\mu&-\mu&\\\ &&\ddots&\ddots\end{array}\right).$
It follows from (62) that
$\pi_{0}=1,$ (63) $-\lambda\pi_{0}^{d}+\mu\pi_{1}^{f}=0$ (64)
and
$\pi_{0}^{d}U+\pi_{\mathcal{L}}^{\odot
d}Q_{\text{arrival}}^{\left(\mathcal{L}\right)}+\pi_{\mathcal{L}}^{\odot
f}Q_{\text{service}}^{\left(\mathcal{L}\right)}=0.$ (65)
It follows from (63) and (64) that
$\pi_{1}=\rho^{\frac{1}{f}}.$
Note that
$\left[-Q_{\text{service}}^{\left(\mathcal{L}\right)}\right]^{-1}=\left(\begin{array}[c]{cccc}\frac{1}{\mu}&&&\\\
\frac{1}{\mu}&\frac{1}{\mu}&&\\\ \frac{1}{\mu}&\frac{1}{\mu}&\frac{1}{\mu}&\\\
\vdots&\vdots&\vdots&\ddots\end{array}\right),$
so it follows from (65) that, for $k\geq 2$,
$\pi_{k}^{f}=\pi_{k-1}^{d}\rho.$
This leads to
$\pi_{k}=\rho^{\frac{\sum\limits_{i=0}^{k-1}d^{i}f^{k-1-i}}{f^{k}}}=\rho^{\frac{1}{f}\sum\limits_{i=0}^{k-1}\left(\frac{d}{f}\right)^{i}}.$
(66)
Specifically, when $d\neq f$, we have
$\pi_{k}=\rho^{\frac{\left(\frac{d}{f}\right)^{k}-1}{d-f}}.$
###### Remark 3
Equation (66) indicates the different influences of the input and output
choice numbers $d$ and $f$ on the fixed point $\pi$. If $d>f$, then the fixed
point $\pi$ decreases doubly exponentially; if $d=f$, then
$\pi_{k}=\rho^{\frac{k}{f}}$, which is geometric. The case $d<f$ is
particularly interesting: there,
$\lim_{k\rightarrow\infty}\pi_{k}=\rho^{\frac{1}{f-d}}$, so the fraction of
waiting lines with infinitely many customers has the positive lower bound
$\rho^{\frac{1}{f-d}}>0$. This shows that if $\rho<1$ and $d<f$, this
supermarket model is transient.
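The three regimes described in this remark can be checked directly from Equation (66). The snippet below (parameter values are purely illustrative) evaluates $\pi_{k}$ for a given $\rho$ and choice numbers $d$ and $f$.

```python
def pi_k(rho, d, f, k):
    """Fraction of waiting lines with at least k customers, Eq. (66)."""
    exponent = sum((d / f) ** i for i in range(k)) / f
    return rho ** exponent

rho = 0.5
for d, f in [(2, 1), (2, 2), (1, 2)]:
    print(d, f, [round(pi_k(rho, d, f, k), 6) for k in range(1, 8)])
# d > f: doubly-exponential decay; d = f: geometric decay rho**(k/f);
# d < f: pi_k approaches the positive limit rho**(1/(f - d)).
```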
### 5.2 A supermarket model with multiple input choices
Now, we analyze a supermarket model with multiple input choices. There are $m$
different types of customers who arrive at a queueing system of $n>1$ servers
to receive service. Customers of type $i$ arrive according to a Poisson
process with arrival rate $n\lambda_{i}$ for $\lambda_{i}>0$, and the service
times at each server are exponential with service rate $\mu>0$. Note that the
different types of customers have the same service-time distribution. Each
arriving customer of type $i$ chooses $d_{i}\geq 1$ servers independently and
uniformly at random from the $n$ servers, and waits for service at the server
which currently contains the fewest customers. If there is a tie, one of the
servers with the fewest customers is chosen at random. All customers at every
server are served in FCFS order. We assume that all random variables defined
above are independent of each other, and that the system operates in the
stable region $\rho=\sum_{i=1}^{m}\rho_{i}<1$, where
$\rho_{i}=\lambda_{i}/\mu$. Clearly, $d_{1},d_{2},\ldots,d_{m}$ are the
multiple input choice numbers of this supermarket model.
Let
$Q_{\text{right}}\left(i\right)=\left(\begin{array}[c]{ccccc}-\lambda_{i}&\lambda_{i}&&&\\\
&-\lambda_{i}&\lambda_{i}&&\\\ &&-\lambda_{i}&\lambda_{i}&\\\
&&&\ddots&\ddots\end{array}\right)$
and
$Q_{\text{left}}=\left(\begin{array}[c]{cccc}0&&&\\\ \mu&-\mu&&\\\
&\mu&-\mu&\\\ &&\ddots&\ddots\end{array}\right).$
Obviously, the stochastic environment of this supermarket model is a positive
recurrent birth-death process with an irreducible infinitesimal generator
$Q=Q_{\text{left}}+\sum_{i=1}^{m}Q_{\text{right}}\left(i\right)$. By a
derivation similar to those given in Sections 3 and 4, we obtain that the
fixed point satisfies the system of nonlinear equations
$\pi_{0}=1$ (67)
and
$\pi Q_{\text{left}}+\sum_{i=1}^{m}\pi^{\odot
d_{i}}Q_{\text{right}}\left(i\right)=0.$ (68)
Let
$\pi=\left(\pi_{0},\pi_{\mathcal{L}}\right),$
$U_{i}=\left(\lambda_{i},0,0,\ldots\right),$
$Q_{\text{arrival}}^{\left(\mathcal{L}\right)}\left(i\right)=\left(\begin{array}[c]{ccccc}-\lambda_{i}&\lambda_{i}&&&\\\
&-\lambda_{i}&\lambda_{i}&&\\\ &&-\lambda_{i}&\lambda_{i}&\\\
&&&\ddots&\ddots\end{array}\right)$
and
$Q_{\text{service}}^{\left(\mathcal{L}\right)}=\left(\begin{array}[c]{cccc}-\mu&&&\\\
\mu&-\mu&&\\\ &\mu&-\mu&\\\ &&\ddots&\ddots\end{array}\right).$
Therefore, the system of nonlinear equations (67) and (68) is written as
$\pi_{0}=1,$ (69) $-\sum_{i=1}^{m}\lambda_{i}\pi_{0}^{d_{i}}+\mu\pi_{1}=0,$
(70)
$\sum_{i=1}^{m}\pi_{0}^{d_{i}}\left(\lambda_{i},0,0,\ldots\right)+\sum_{i=1}^{m}\pi_{\mathcal{L}}^{\odot
d_{i}}Q_{\text{arrival}}^{\left(\mathcal{L}\right)}\left(i\right)+\pi_{\mathcal{L}}Q_{\text{service}}^{\left(\mathcal{L}\right)}=0.$
(71)
It follows from (69) and (70) that
$\pi_{1}=\sum_{i=1}^{m}\rho_{i}=\rho,$
and from (71) that
$\pi_{\mathcal{L}}=\sum_{i=1}^{m}\pi_{0}^{d_{i}}\left(\lambda_{i},0,0,\ldots\right)\left[-Q_{\text{service}}^{\left(\mathcal{L}\right)}\right]^{-1}+\sum_{i=1}^{m}\pi_{\mathcal{L}}^{\odot
d_{i}}Q_{\text{arrival}}^{\left(\mathcal{L}\right)}\left(i\right)\left[-Q_{\text{service}}^{\left(\mathcal{L}\right)}\right]^{-1}.$
This implies that, for $k\geq 2$,
$\pi_{k}=\sum_{i=1}^{m}\pi_{k-1}^{d_{i}}\rho_{i}.$
Let $\delta_{1}=\rho$ and
$\delta_{k}=\sum_{i=1}^{m}\delta_{k-1}^{d_{i}}\rho_{i}$ for $k\geq 2$. Then
the fixed point has a super-exponential solution
$\pi_{0}=1$
and for $k\geq 1$
$\pi_{k}=\delta_{k}.$
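As a quick illustration of this recursion (with made-up arrival rates and choice numbers, used only for demonstration), $\delta_{k}$ can be computed directly:

```python
def multi_choice_fixed_point(rho_i, d_i, k_max):
    """delta_1 = rho and delta_k = sum_i rho_i * delta_{k-1}**d_i for k >= 2."""
    deltas = [sum(rho_i)]                       # delta_1 = rho
    for _ in range(k_max - 1):
        prev = deltas[-1]
        deltas.append(sum(r * prev ** d for r, d in zip(rho_i, d_i)))
    return deltas

# two customer classes: rho_1 = 0.3 with d_1 = 2, rho_2 = 0.4 with d_2 = 3,
# so the total load is rho = 0.7 < 1
print(multi_choice_fixed_point([0.3, 0.4], [2, 3], 6))
```

The printed sequence decays super-exponentially, as expected: with all $d_{i}\geq 2$ in this example, one has $\delta_{k}\leq\rho\,\delta_{k-1}^{2}$.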
## Acknowledgements
Q.L. Li was supported by the National Science Foundation of China under grant
No. 10871114.
## References
* [1] Y. Azar, A.Z. Broder, A.R. Karlin and E. Upfal (1999). Balanced allocations. SIAM Journal on Computing 29, 180–200.
* [2] M. Bramson, Y. Lu and B. Prabhakar (2010). Randomized load balancing with general service time distributions. In Proceedings of the ACM SIGMETRICS international conference on Measurement and modeling of computer systems, pages 275–286.
* [3] M. Dahlin (1999). Interpreting stale load information. IEEE Transactions on Parallel and Distributed Systems 11, 1033–1047.
* [4] C. Graham (2000). Kinetic limits for large communication networks. In Modelling in Applied Sciences, N. Bellomo and M. Pulvirenti (eds.), Birkhäuser, pages 317–370.
* [5] C. Graham (2000). Chaoticity on path space for a queueing network with selection of the shortest queue among several. Journal of Applied Probability 37, 198–201.
* [6] C. Graham (2004). Functional central limit theorems for a large network in which customers join the shortest of several queues. Probability Theory Related Fields 131, 97–120.
* [7] M. Harchol-Balter and A.B. Downey (1997). Exploiting process lifetime distributions for dynamic load balancing. ACM Transactions on Computer Systems 15, 253–285.
* [8] T.G. Kurtz (1981). Approximation of Population Processes. SIAM.
* [9] Q.L. Li (2010). Constructive Computation in Stochastic Models with Applications: The RG-Factorizations. Springer and Tsinghua Press.
* [10] Q.L. Li (2010). Doubly exponential solution for randomized load balancing with general service times. Submitted for publication.
* [11] Q.L. Li and John C.S. Lui (2010). Doubly exponential solution for randomized load balancing models with Markovian arrival processes and PH service times. Submitted for publication.
* [12] Q.L. Li, John C.S. Lui and Y. Wang (2010). A matrix-analytic solution for randomized load balancing models with PH service times. In Proceedings of the PERFORM 2010 Workshop, Lecture Notes in Computer Science, pages 1–20.
* [13] Q.L. Li, John C.S. Lui and Y. Wang (2010). Super-exponential solution for a retrial supermarket model. Submitted for publication.
* [14] M. Luczak and C. McDiarmid (2006). On the maximum queue length in the supermarket model. The Annals of Probability 34, 493–527.
* [15] M. Luczak and C. McDiarmid (2007). Asymptotic distributions and chaos for the supermarket model. Electronic Journal of Probability 12, 75–99.
* [16] M.J. Luczak and J.R. Norris (2005). Strong approximation for the supermarket model. The Annals of Applied Probability 15, 2038–2061.
* [17] J.B. Martin (2001). Point processes in fast Jackson networks. The Annals of Applied Probability 11, 650–663.
* [18] J.B. Martin and Y.M. Suhov (1999). Fast Jackson networks. The Annals of Applied Probability 9, 854–870.
* [19] M.D. Mitzenmacher (1996). The power of two choices in randomized load balancing. PhD thesis, University of California at Berkeley, Department of Computer Science, Berkeley, CA.
* [20] M. Mitzenmacher (1999). On the analysis of randomized load balancing schemes. Theory of Computing Systems 32, 361–386.
* [21] M. Mitzenmacher (2000). How useful is old information? IEEE Transactions on Parallel and Distributed Systems 11, 6–20.
* [22] M. Mitzenmacher, A. Richa, and R. Sitaraman (2001). The power of two random choices: a survey of techniques and results. In Handbook of Randomized Computing: Volume 1, P. Pardalos, S. Rajasekaran and J. Rolim (eds), pages 255-312.
* [23] R. Mirchandaney, D. Towsley, and J.A. Stankovic (1989). Analysis of the effects of delays on load sharing. IEEE Transactions on Computers 38, 1513–1525.
* [24] M.F. Neuts (1989). Structured stochastic matrices of $M/G/1$ type and their applications. Marcel Dekker Inc.: New York.
* [25] Y.M. Suhov and N.D. Vvedenskaya (2002). Fast Jackson Networks with Dynamic Routing. Problems of Information Transmission 38, 136–153.
* [26] B. Vöcking (1999). How asymmetry helps load balancing. In Proceedings of the Fortieth Annual Symposium on Foundations of Computer Science, pages 131–140.
* [27] N.D. Vvedenskaya, R.L. Dobrushin and F.I. Karpelevich (1996). Queueing system with selection of the shortest of two queues: An asymptotic approach. Problems of Information Transmissions 32, 20–34.
* [28] N.D. Vvedenskaya and Y.M. Suhov (1997). Dobrushin’s mean-field approximation for a queue with dynamic routing. Markov Processes and Related Fields 3, 493–526.
* [29] R. Weber (1978). On the optimal assignment of customers to parallel servers. Journal of Applied Probability 15, 406–413.
* [30] W. Winston (1977). Optimality of the shortest line discipline. Journal of Applied Probability 14, 181–189.
|
arxiv-papers
| 2011-06-04T04:24:26 |
2024-09-04T02:49:19.384752
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Quan-Lin Li",
"submitter": "Quan-Lin Li",
"url": "https://arxiv.org/abs/1106.0787"
}
|
1106.0929
|
11institutetext: Department of Physics, Syracuse University, Syracuse, New
York 13244-1130, USA
Syracuse Biomaterials Institute, Syracuse University, Syracuse, New York
13244-1130, USA
Elasticity Theory Cell adhesion and cell mechanics
# Substrate rigidity deforms and polarizes active gels
S. Banerjee 11 M.C. Marchetti 11221122
###### Abstract
We present a continuum model of the coupling between cells and substrate that
accounts for some of the observed substrate-stiffness dependence of cell
properties. The cell is modeled as an elastic active gel, adapting recently
developed continuum theories of active viscoelastic fluids. The coupling to
the substrate enters as a boundary condition that relates the cell’s
deformation field to local stress gradients. In the presence of activity, the
coupling to the substrate yields spatially inhomogeneous contractile stresses
and deformations in the cell and can enhance polarization, breaking the cell’s
front-rear symmetry.
###### pacs:
87.10.Pq
###### pacs:
87.17.Rt
## 1 Introduction
Many cell properties, including cell shape, migration and differentiation, are
critically controlled by the strength and nature of the cell’s adhesion to a
solid substrate and by the substrate’s mechanical properties [1]. For
instance, it has been demonstrated that cell differentiation is optimized in a
narrow range of matrix rigidity [2] and that the stiffness of the substrate
can direct lineage specification of human mesenchymal stem cells [3]. In
endothelial cells, adhesion to a substrate plays a crucial role in guiding
cell migration and controlling a number of physiological processes, including
vascular development, wound healing, and tumor spreading [4]. Fibroblasts and
endothelial cells seem to generate more traction force and develop a broader
and flatter morphology on stiff substrates than they do on soft but equally
adhesive surfaces [5, 6]. They show an abrupt change in their spread area
within a narrow range of substrate stiffnesses. This spreading also coincides
with the appearance of stress fibers in the cytoskeleton, corresponding to the
onset of a substantial amount of polarization within the cell [6]. Finally,
such cells preferentially move from a soft to a hard surface and migrate
faster on stiffer substrates [7]. The mechanical interaction of cells with a
surrounding matrix is to a great extent controlled by contractile forces
generated by interactions between filamentary actin and myosin proteins in the
cytoskeleton. Such forces are then transmitted by cells to their surroundings
through the action of focal adhesions that produce elastic stresses both in
the cell and in the surrounding matrix. Cells in turn are capable of
responding to the substrate stiffness by adjusting their own adhesion and
elastic properties, with important implications for cell motility and shape
[1, 8].
In this letter we present a simple model of the coupling between cells and
substrate that accounts for some of the observed substrate-stiffness
dependence of cell properties. The cell itself is modeled as an elastic active
gel, adapting recently developed continuum theories of active viscoelastic
fluids [9, 10, 11]. In these models the transduction of chemical energy from
ATP hydrolysis into mechanical work by myosin motor proteins pulling on actin
filaments yields active contractile contributions to the local stresses. The
continuum theory of such _active liquids_ has led to several predictions,
including the onset of spontaneous deformation and flow in active films [12,
13] and the retrograde flow of actin in the lamellipodium of crawling cells
[11]. Active liquids cannot, however, support elastic stresses at long times,
as required for the understanding of the crawling dynamics of the
lamellipodium and of active contractions in living cells. Models of _active
elastic solids_ on the other hand have been shown to account for the
contractility and stiffening of in-vitro actomyosin networks [14, 15, 16] and
the spontaneous oscillations of muscle sarcomeres [17, 18]. Very recently a
continuum model of a one-dimensional polar, active elastic solid has also been
used to describe the alternating polarity patterns observed in stress fibers
[19]. In all these cases the elastic nature of the network at low frequency is
crucial to provide the restoring forces needed to support deformations and
oscillatory behavior.
We model a cell as an elastic active film anchored to a solid substrate and
study the static response of the film to variations in the strength of the
anchoring. Although in the following we refer to our system as a cell, we
stress that, on different length scales, the active elastic gel could also
serve as a model for a confluent cell monolayer on a substrate. The coupling
of the cell to the substrate enters via a boundary condition controlled by a
“stiffness” parameter that depends on both the cell/substrate adhesion as well
as the substrate rigidity. The description is macroscopic and applies on
length scales large compared to the typical mesh size of the actin network in
the cell lamellipodium (or large compared to the typical cell size in the case
of a cell monolayer). By solving the elasticity and force balance equations in
a simple one-dimensional geometry we obtain several experimentally relevant
results. First, in an isotropic active gel substrate anchoring yields stresses
and contractile deformations. The stress and deformation profiles for an
isotropic active elastic gel are shown in the top frame of Fig. 1. The stress
is largest at the center of the cell. Interestingly, a very similar profile of
_tensile_ stresses has been observed in confluent monolayers of migrating
epithelial cells [20], where the stress increases as a function of the
distance from the leading edge of the migrating layer and reaches its maximum
at the center of the cell colony. Although our model considers stationary
active elastic layers (and the resulting stresses are contractile as opposed
to tensile), in both cases these stresses originate from active processes in
the cell, driven by ATP consumption. The deformation of the active layer is
largest at the cell boundaries (see Fig.1, top frame), as seen in experiments
imaging traction forces exerted by cells on substrates [21] and its overall
magnitude increases with cell activity. The density of the active gel layer is
concentrated at the boundary, where the local contractile deformations are
largest. The net deformation of the cell over its length is shown in the
bottom frame of Fig. 1 and it increases monotonically with decreasing
substrate stiffness, in qualitative agreement with experiments on fibroblasts
showing that these cells are more extended on stiff substrates [6]. Finally,
if the cell is polarized on average, the coupling to the substrate generates a
spatially inhomogeneous polarization profile inside the cell. The mean
polarization is enhanced over its value in the absence of substrate anchoring
and it is a non-monotonic function of substrate stiffness (see Fig. 4). This
result is in qualitative agreement with recent experiments that have
demonstrated an intimate relation between the matrix rigidity and the
alignment of cell fibers within the cell, suggesting that maximum alignment
may be obtained for an optimal value of the substrate rigidity [22].
Figure 1: Top: stress $\sigma(x)/\zeta\Delta\mu$ (dashed line) and deformation
$u(x)B/\zeta\Delta\mu$ (solid line) profiles as functions of the position $x$
inside a cell of length $L$ for $\lambda/L=0.25$. Bottom: the cell’s total
deformation $\Delta\ell=u(0)-u(L)$ as a function of $\lambda/L$. In the plot
the deformation $\Delta\ell$ is normalized to its maximum value
$\zeta\Delta\mu/B$.
## 2 The active gel model
The cell is modeled as an active gel described in terms of a density,
$\rho({\bf r},t)$, and a displacement field, ${\bf u}({\bf r},t)$,
characterizing local deformations. In addition, to account for the possibility
of cell polarization as may be induced by directed myosin motion and/or
filament treadmilling, we introduce a polar orientational order parameter
field, ${\bf P}({\bf r},t)$. Although we are describing a system out of
equilibrium, it is convenient to formulate the model in terms of a local free
energy density $f=f_{el}+f_{P}+f_{w}$, with
$\displaystyle f_{el}=\frac{B}{2}u_{kk}^{2}+G\tilde{u}_{ij}^{2}\;,$ (1a)
$\displaystyle f_{P}=\frac{a}{2}|{\bf P}|^{2}+\frac{b}{4}|{\bf
P}|^{4}+\frac{K}{2}(\partial_{i}P_{j})(\partial_{j}P_{i})\;,$ (1b)
$\displaystyle
f_{w}=\frac{w}{2}(\partial_{i}P_{j}+\partial_{j}P_{i})u_{ij}+w^{\prime}(\bm{\nabla}\cdot{\bf
P})u_{kk}\;,$ (1c)
Here $f_{el}$ is the energy of elastic deformations, with $B$ and $G$ the
compressional and shear elastic moduli of the gel, respectively,
$u_{ij}=\frac{1}{2}(\partial_{i}u_{j}+\partial_{j}u_{i})$ the symmetrized
strain tensor, with $\tilde{u}_{ij}=u_{ij}-\frac{1}{d}\delta_{ij}u_{kk}$ and
$d$ the dimensionality. The first two terms in Eq. (1b), with $b>0$, allow the
onset of a homogeneous polarized state when $a<0$; the last term is the energy
cost for spatially inhomogeneous deformations of the polarization. We have
used an isotropic elastic constant approximation, with $K$ a stiffness
parameter characterizing the cost of both splay and bend deformations.
Finally, the contribution $f_{w}$ couples strain and polarization and is
unique to polar systems [13, 19]. It describes the fact that in the active
polar system considered here, like in liquid crystal elastomers, a local
strain is always associated with a local gradient in polarization. Such
gradients will align or oppose each other depending on the sign of the
phenomenological parameters $w$ and $w^{\prime}$, which are controlled by
microscopic physics. A positive sign indicates that an increase of density is
accompanied by positive splay (or enhanced polarization in one dimension). In
active actomyosin systems filament polarity can be induced by both myosin
motion and by treadmilling. If the polarization is defined as positive when
pointing towards the plus (barbed) end of the filament, i.e., the direction
towards which myosin proteins walk, the forces transmitted by myosin
procession will yield filament motion in the direction of negative
polarization, corresponding to $w<0$ [23]. In contrast, treadmilling, where
polarization occurs at the barbed end, corresponds to $w>0$. Density
variations $\delta\rho=\rho-\rho_{0}$ from the equilibrium value, $\rho_{0}$,
are slaved to the local strain according to
$\delta\rho/\rho_{0}=-\bm{\nabla}\cdot{\bf u}$. The stress tensor is written
as the sum of reversible and active contributions as
$\sigma_{ij}=\sigma_{ij}^{r}+\sigma_{ij}^{a}$, where
$\sigma_{ij}^{r}=\frac{\partial f}{\partial u_{ij}}$. The two contributions
are given by
$\displaystyle\sigma_{ij}^{r}=\delta_{ij}Bu_{kk}+2G\tilde{u}_{ij}+\frac{w}{2}\left(\partial_{i}P_{j}+\partial_{j}P_{i}\right)+w^{\prime}\bm{\nabla}\cdot{\bf
P}\delta_{ij}\;,$ (2a)
$\displaystyle\sigma_{ij}^{a}=\zeta(\rho)\Delta\mu\delta_{ij}+\zeta_{\alpha}\Delta\mu~{}P_{i}P_{j}\;.$
(2b)
Active stresses arise because the gel is driven out of equilibrium by
continuous input of energy from the hydrolysis of ATP, characterized by the
chemical potential difference $\Delta\mu$ between ATP and its products. For
simplicity, we assume here $\Delta\mu$ to be constant, although situations
where inhomogeneities in $\Delta\mu$ may arise, for instance, from
inhomogeneous myosin distribution within the actin lamellipodium are also of
interest. The experimentally observed contractile effect of myosin corresponds
to positive values of the coefficients $\zeta$ and $\zeta_{\alpha}$, that
characterize the isotropic and anisotropic stress per unit $\Delta\mu$,
respectively, due to the action of active myosin crosslinkers [24, 25, 10]. In
polar gels there are also active stresses proportional to
$\Delta\mu(\partial_{i}P_{j}+\partial_{j}P_{i})$ [26, 13]. We neglect these
terms here as terms of similar structure already arise from the coupling terms
in $f_{w}$. By letting $\rho=\rho_{0}-\rho_{0}\bm{\nabla}\cdot{\bf u}$, we can
write $\zeta(\rho)\Delta\mu\simeq\zeta_{0}\Delta\mu-\zeta_{1}\Delta\mu
u_{kk}$. The second term describes active renormalization of the compressional
modulus $B$ of the gel and can yield a contractile instability [17, 18]. These
effects have been described elsewhere [18] and will not be discussed here,
where we will assume we are in a regime where the gel is elastically stable.
Finally, we note that the parameters $a$, $w$ and $w^{\prime}$ may also in
general depend on $\Delta\mu$ as cell polarity is induced by ATP-driven
processes. For simplicity we keep these parameters fixed below.
Force balance requires
$\partial_{j}\sigma_{ij}=0\;.$ (3)
The coupling to the substrate (assumed for simplicity isotropic) is introduced
as a boundary condition [27] by requiring
$\left[\sigma_{ij}\hat{n}_{j}\right]_{{\bf r}_{s}}=Eu_{i}({\bf r}={\bf
r}_{s})$, where $\hat{n}$ is a unit normal to the substrate and both sides of
the equation are evaluated at points ${\bf r}_{s}$ on the substrate. Although
in the following we will often refer to the parameter $E$ as the substrate
stiffness, it is important to keep in mind that $E$ is controlled not only by
the substrate rigidity, but also by the properties of the cell/substrate
adhesions [28]. Anisotropic substrates are not considered here, but can be
described by a generalized boundary condition where $E$ is a tensor quantity
and will be discussed in a later publication. Finally, variations in the
polarization are described by the equation
$\partial_{t}{\bf P}+\beta\left({\bf P}\cdot\bm{\nabla}\right){\bf
P}=\Gamma{\bf h}\;,$ (4)
with $\beta$ an advective coupling arising from ATP driven processes, such as
treadmilling [26, 13], $\Gamma$ an inverse friction, and ${\bf
h}=-\frac{\delta f}{\delta{\bf P}}$ the molecular field, given by
$h_{i}=-\left(a+b|{\bf
P}|^{2}\right)P_{i}+K\nabla^{2}P_{i}+w\partial_{j}u_{ij}+w^{\prime}\partial_{i}u_{kk}\;.$
(5)
Here $\beta$ is an active velocity and is controlled by the activity
$\Delta\mu$. In the following we write
$\beta/(L\Gamma)=\zeta_{\beta}\Delta\mu$, with $L$ the typical size of the
active gel.
## 3 Isotropic cell
We begin by considering the case of an isotropic cell and neglect the coupling
to polarization. For simplicity, we consider a quasi-one-dimensional model
where the cell is a thin sheet of active gel of thickness $h$ extending from
$x=0$ to $x=L$, with $L\gg h$. The substrate is flat and located at $z=0$.
Although this is of course a gross simplification, we will see below that it
captures the substrate-induced stresses and deformations and their dependence
on substrate stiffness. More realistic planar or thin film geometries will be
discussed in a future publication. Force balance yields
$\partial_{x}\sigma_{xx}+\partial_{z}\sigma_{xz}=0$. Integrating over the
thickness of the film, using $\sigma_{xz}(x,z=h)=0$ and
$\sigma_{xz}(x,z=0)=Eu_{x}(x,0)$, and letting
$\sigma=\frac{1}{h}\int_{0}^{h}dz\sigma_{xx}(x,z)$, we obtain
$\partial_{x}\sigma=Eu_{x}(x,0)/h$. In the limit $h\ll L$, we neglect all $z$
dependence and assume that the only component of the displacement field is
$u_{x}(x,0)\equiv u(x)$. Combining then the expression for the mean stress,
$\sigma=B\partial_{x}u+\zeta_{0}\Delta\mu$ with the boundary condition we
obtain
$\sigma=\lambda^{2}\frac{d^{2}\sigma}{dx^{2}}+\zeta_{0}\Delta\mu\;,$ (6)
where $\lambda=\sqrt{Bh/E}$ is a length scale controlled by the interplay of
cell and substrate stiffness. The solution of this equation with boundary
conditions $\sigma(x=0)=\sigma(x=L)=0$ is
$\sigma(x)=\zeta\Delta\mu\left(1-\frac{\cosh{[(L-2x)/2\lambda]}}{\cosh{(L/2\lambda)}}\right)\;.$
(7)
The deformation field is then given by
$u(x)=\frac{\zeta\Delta\mu\lambda}{B}\frac{\sinh{[(L-2x)/2\lambda]}}{\cosh{(L/2\lambda)}}\;.$
(8)
A finite activity $\Delta\mu$ generates stresses and deformations in the cell,
as shown in the top frame of Fig. 1. In an isotropic gel, both the stress and
the displacement profiles are symmetric about the cell’s mid point and the
cell is uniformly contracted. The deformation is localized near the cell’s
boundaries. The length scale $\lambda$ determined by the ratio of cell to
substrate stiffness controls the penetration of the deformation to the
interior of the cell. If $\lambda\sim L$, corresponding to a substrate
rigidity $E_{L}\sim Bh/L^{2}$, the active stresses and deformation extend over
the entire cell. For a cell layer of length $10\ \mu m$, thickness $1\ \mu m$
and elastic modulus $B\sim 100\ kPa$, the substrate rigidity parameter $E_{L}$
can be estimated to be $\sim 1\ kPa/\mu m$. The total deformation
$\Delta\ell=u(0)-u(L)$ grows with activity and is shown in Fig. 1 (bottom
frame) as a function of $\lambda/L\sim 1/\sqrt{E}$. The contraction decreases
with increasing substrate stiffness and saturates to a finite value for soft
substrates.
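As a simple numerical companion to Eqs. (7) and (8), the sketch below evaluates the stress and deformation profiles and the total contraction $\Delta\ell=u(0)-u(L)=2(\zeta\Delta\mu\lambda/B)\tanh(L/2\lambda)$, which follows directly from Eq. (8). Parameter values are illustrative and the profiles are expressed in the same reduced units as Fig. 1.

```python
import numpy as np

def isotropic_profiles(x, L, lam):
    """Stress (units of zeta*Delta_mu) and deformation (units of
    zeta*Delta_mu/B) of an isotropic active layer, Eqs. (7)-(8)."""
    c = np.cosh(L / (2 * lam))
    sigma = 1.0 - np.cosh((L - 2 * x) / (2 * lam)) / c
    u = lam * np.sinh((L - 2 * x) / (2 * lam)) / c
    return sigma, u

L, lam = 1.0, 0.25                     # lambda/L = 0.25, as in Fig. 1
x = np.linspace(0.0, L, 101)
sigma, u = isotropic_profiles(x, L, lam)
delta_ell = u[0] - u[-1]               # total contraction Delta_ell
print(delta_ell, 2 * lam * np.tanh(L / (2 * lam)))   # the two values agree
```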
It is also interesting to consider a substrate of varying stiffness, as such
substrates can be realized in experiments. We consider a constant stiffness
gradient, corresponding to $E(x)=E_{0}x/L$. In this case Eq. (6) becomes
$\sigma=\frac{\lambda^{2}L}{x}\left(\frac{d^{2}\sigma}{dx^{2}}-\frac{1}{x}\frac{d\sigma}{dx}\right)+\zeta_{0}\Delta\mu$
(9)
A closed-form solution can be obtained in terms of hypergeometric functions. The
corresponding stress and displacement profiles are now asymmetric and are
shown in Fig. 2. The stress is largest in the region of stiffest substrate,
with a correspondingly smaller cell deformation. In other words, the largest
cell deformation is obtained in the boundary region where the substrate is
softest. In real cells the region where the substrate is softer and the
resulting stresses in the cell are smaller may correspond to region of reduced
focal adhesions. Hence the gradient stiffness may yield a gradient in the
strength of cell-substrate adhesion, providing a possible driving force for
durotaxis, the tendency of cells to move from softer to stiffer regions [5,
29, 30].
Figure 2: The stress $\sigma(x)/\zeta\Delta\mu$ (dashed line) and
displacement $u(x)B/\zeta\Delta\mu$ (solid line) profiles of a cell on a
substrate with a constant stiffness gradient, described by $E(x)=E_{0}x/L$,
are shown as functions of the position $x$ inside the cell for $\lambda/L=0.25$.
The profiles are asymmetric and the stress is localized near $x=L$ where the
stiffness is largest.
## 4 Polarized cell
We now consider the case of a polarized cell, described by the full free
energy $f$. The cell is modeled again as a thin film of length $L$ in the
quasi-$1d$ geometry described earlier. We are interested in steady state
configurations. In the chosen geometry these are given by the solutions of the
equations
$\displaystyle\frac{d\sigma}{dx}=\frac{E}{h}u$ (10a)
$\displaystyle\sigma=B\frac{du}{dx}+\zeta_{0}\Delta\mu+\zeta_{\alpha}\Delta\mu\
p^{2}+2w\frac{dp}{dx}$ (10b) $\displaystyle\zeta_{\beta}\Delta\mu
Lp\frac{dp}{dx}=K\frac{d^{2}p}{dx^{2}}+2w\frac{d^{2}u}{dx^{2}}-\left(a+bp^{2}\right)p$
(10c)
where ${\bf P}=p(x){\bf\hat{x}}$ and we have let $w^{\prime}=w$ and
$\beta/(L\Gamma)=\zeta_{\beta}\Delta\mu$. In the following we scale lengths
with the cell’s length $L$ and stresses with the cell’s compressional modulus
$B$. By combining Eqs. (10a)-(10c), we can eliminate $u$ and rewrite them as
coupled equations for $\tilde{\sigma}=\sigma/B$ and $p$ as
$\displaystyle\tilde{\sigma}=\frac{\lambda^{2}}{L^{2}}\tilde{\sigma}^{\prime\prime}+\nu_{0}+\nu_{\alpha}p^{2}+\tilde{w}p^{\prime}$
(11a)
$\displaystyle\left(\nu_{\beta}+2\nu_{\alpha}\tilde{w}\right)pp^{\prime}=\tilde{K}p^{\prime\prime}+\tilde{w}\tilde{\sigma}^{\prime}-\left(\tilde{a}+\tilde{b}p^{2}\right)p$
(11b)
where the prime denotes a derivative with respect to $x/L$,
$\nu_{0,\alpha,\beta}=\zeta_{0,\alpha,\beta}\Delta\mu/B$, $\tilde{w}=2w/BL$,
$\tilde{a}=a/B$, $\tilde{b}=b/B$, and $\tilde{K}=K/(BL^{2})-\tilde{w}$.
Thermodynamic stability requires $\tilde{K}>0$. As discussed in Ref. [19],
there may be active contributions to the coupling $w$, which at high activity
lead to an alternating polarity pattern in the gel. Here we restrict ourselves
to $\tilde{K}>0$.
Figure 3: Stress $\sigma(x)/B$ (dashed line), deformation field $u(x)/L$
(solid line), and polarization $\delta p(x)=p(x)-p_{0}$ (dotted line) profiles
obtained by numerical solution of Eqs. (11a) and (11b) for two sets of
boundary conditions on the polarization: $p(0)=p(L)=0$ (top frame) and
$p(0)=p(L)=1$ (bottom frame). Both plots are for $\lambda/L=0.25$,
$\tilde{w}=4$, $\nu_{0}=\nu_{\alpha}=\nu_{\beta}=1$, $\tilde{a}=\tilde{b}=1$,
$\tilde{K}=1$.
In the absence of activity ($\Delta\mu=0$) Eqs. (11a) and (11b) have two
homogeneous solutions that satisfy the boundary condition
$\sigma(0)=\sigma(L)=0$, corresponding to an isotropic state for $a>0$, with
$p(x)=u(x)=0$ and to a polarized state for $a<0$, with
$p(x)=p_{0}=\sqrt{-a/b}$ and $u(x)=0$. In both cases $\sigma(x)=0$.
For finite activity ($\Delta\mu\neq 0$), we find two qualitatively different
solutions, depending on the boundary conditions used for the polarization.
When Eqs. (11a) and (11b) are solved with boundary condition $p(0)=p(L)=0$,
consistent with an isotropic state in the limit $\Delta\mu=0$, the stress is
an even function of $x$, as shown in the top frame of Fig. 3. It exhibits a
maximum at $x=L/2$ and is symmetric about the mid point of the cell. Both the
displacement and the polarization vanish at $x=L/2$ and are odd functions of
$x$ about this point. For $a<0$ we solve the nonlinear equations with boundary
condition $p(0)=p(L)=\sqrt{-a/b}$, consistent with a polarized state in the
limit $\Delta\mu=0$. In this case the stress, deformation and polarization
profiles are all asymmetric, as shown in the bottom frame of Fig. 3. The sign
of the anisotropy is controlled by the sign of the polar coupling $w$. The
figure displays the case $w>0$, corresponding to filament convection towards
the direction of positive polarization.
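Steady-state profiles of this kind can be reproduced by treating Eqs. (11a) and (11b) as a two-point boundary-value problem on $0\leq x/L\leq 1$, with $\tilde{\sigma}(0)=\tilde{\sigma}(1)=0$ and the chosen boundary polarization. The sketch below uses SciPy's `solve_bvp` with the parameter values quoted in the caption of Fig. 3; it is only a schematic reconstruction (a stiff parameter set may need a finer mesh or a better initial guess), and the displacement is recovered from $u/L=(\lambda/L)^{2}\,\mathrm{d}\tilde{\sigma}/\mathrm{d}(x/L)$, which follows from $\mathrm{d}\sigma/\mathrm{d}x=Eu/h$.

```python
import numpy as np
from scipy.integrate import solve_bvp

# dimensionless parameters, as in the caption of Fig. 3 (illustrative)
lam_over_L = 0.25
nu0 = nu_a = nu_b = 1.0                 # nu_0, nu_alpha, nu_beta
w_t, a_t, b_t, K_t = 4.0, 1.0, 1.0, 1.0
p_bc = 1.0                              # boundary polarization (0 or sqrt(-a/b))

def rhs(x, y):
    # y = (sigma, sigma', p, p'); primes denote d/d(x/L)
    s, ds, p, dp = y
    d2s = (s - nu0 - nu_a * p**2 - w_t * dp) / lam_over_L**2          # Eq. (11a)
    d2p = ((nu_b + 2 * nu_a * w_t) * p * dp - w_t * ds
           + (a_t + b_t * p**2) * p) / K_t                            # Eq. (11b)
    return np.vstack((ds, d2s, dp, d2p))

def bc(ya, yb):
    # stress-free ends and fixed polarization at both boundaries
    return np.array([ya[0], yb[0], ya[2] - p_bc, yb[2] - p_bc])

x = np.linspace(0.0, 1.0, 201)
y0 = np.zeros((4, x.size))
y0[2] = p_bc
sol = solve_bvp(rhs, bc, x, y0, max_nodes=20000)
sigma = sol.sol(x)[0]                   # sigma(x)/B
p = sol.sol(x)[2]                       # polarization profile
u = lam_over_L**2 * sol.sol(x)[1]       # u(x)/L from the force-balance condition
```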
To quantify the different properties of these two states, we define an excess
mean polarization averaged over the cell as $\langle\delta
p\rangle=\int_{0}^{L}\frac{dx}{L}[p(x)-p_{0}]$. The excess polarization
$\langle\delta p\rangle$ is zero for the symmetric polarization profiles
obtained with the boundary condition $p(0)=p(L)=0$, whereas $\langle\delta
p\rangle$ obtained for the boundary condition $p(0)=p(L)=\sqrt{-a/b}$ is a
non-monotonic function of substrate stiffness, as shown in Fig. 4 for three
values of activity. The excess polarization is largest at a characteristic
substrate stiffness, suggesting that enhancement of stress fiber and resulting
cell polarization may be obtained for an optimal substrate rigidity, as
reported in [22]. The excess polarization $\langle\delta p\rangle$ vanishes in
the absence of activity and its maximum value increases with activity.
We have presented a minimal continuum model of the interaction between a cell
and the elastic substrate to which it adheres. The cell is described as an active elastic
gel and the coupling to the substrate enters as a boundary condition. The
model shows that the interplay of substrate coupling and activity yields
contractile stresses and deformation in the cell and can enhance polarization,
breaking the front/rear symmetry of the cell. The model provides a simple, yet
powerful continuum formulation for the description of cell-substrate
interactions and can be extended in various directions by considering more
realistic two-dimensional cell geometries and anisotropic and deformable
substrates. The possibility of cell migration will also be incorporated in
future work. Finally, the continuum model can be used to describe the
interaction of confluent layers of epithelial cells with substrates. In this
case a direct comparison with recent experiments that have imaged the stress
distribution in migrating cell layers [31] may be possible.
Figure 4: Excess mean polarization $\langle\delta p\rangle$ as a function of
$L/\lambda\sim\sqrt{E}$ obtained from averaging the numerical solutions of
Eqs. (11a) and (11b) for three different values of activity
$\nu=\nu_{0}=\nu_{\alpha}=\nu_{\beta}$ : $\nu=0.5$ (dashed line), $\nu=1.0$
(dotted line) and $\nu=1.5$ (solid line). The plots are for $\tilde{w}=4$,
$\tilde{a}=\tilde{b}=1$ and $\tilde{K}=1$.
###### Acknowledgements.
This work was supported by the National Science Foundation
through awards DMR-0806511 and NSF-DMR-1004789. We thank Yaouen Fily and Silke
Henkes for important feedback on cell-substrate elasticity and Tannie
Liverpool for illuminating discussions on active systems in general.
## References
* [1] Discher D.E., Janmey P.A. and Wang Y., Science, 310 (2005) 1139.
* [2] Engler A.J., Griffin M.A., Sen S., Bönnemann C.G., Sweeney H.L. and Discher D.E., J. Cell Biol., 166 (2004) 877.
* [3] Engler A.J., Sen S., Sweeney H.L. and Discher D.E., Cell, 126 (2006) 677.
* [4] Reinhart-King C.A., Methods in Enzymology, 443 (2008) 45.
* [5] Lo C.M., Wang H.B., Dembo M. and Wang Y., Biophysical Journal, 79 (2000) 144.
* [6] Yeung T., Georges P.C., Flanagan L.A., Marg B., Ortiz M., Funaki M., Zahir N., Ming W., Weaver V. and Janmey P.A., Cell Motility and the Cytoskeleton, 60 (2005) 24.
* [7] Guo W., Frey M.T., Burnham N.A. and Wang Y., Biophys. J., 90 (2006) 2213.
* [8] Barnhart E.L., Lee K.C., Keren K., Mogilner A. and Theriot J.A., PLoS Biology, 9 (2011) e1001059.
* [9] Kruse K., Joanny J.-F., Jülicher F., Prost J. and Sekimoto K., Phys. Rev. Lett., 92 (2004) 078101.
* [10] Kruse K., Joanny J.-F., Jülicher F., Prost J. and Sekimoto K., Eur. Phys. J. E: Soft Matter and Biological Physics, 16 (2005) 5.
* [11] Jülicher F., Kruse K., Prost J. and Joanny J.-F., Phys. Rep., 449 (2007) 3.
* [12] Voituriez R., Joanny J.-F. and Prost J., Europhys. Lett., 70 (2005) 118102.
* [13] Giomi L., Marchetti M.C. and Liverpool T.B., Phys. Rev. Lett., 101 (2008) 198101.
* [14] Mizuno D., Tardin C., Schmidt C.F. and MacKintosh F.C., Science, 315 (2007) 370.
* [15] MacKintosh F.C. and Levine A.J., Phys. Rev. Lett., 100 (2008) 018104.
* [16] Liverpool T.B., Marchetti M.C., Joanny J.-F. and Prost J., Europhys. Lett., 85 (2009) 18007.
* [17] Günther S. and Kruse K., New Journal of Physics, 9 (2007) 417.
* [18] Banerjee S. and Marchetti M.C., Soft Matter, 7 (2011) 463.
* [19] Yoshinaga N., Joanny J.-F., Prost J. and Marcq P., Phys. Rev. Lett., 105 (2010) 238103.
* [20] Trepat X., Wasserman M.R., Angelini T.E., Millet E., Weitz D.A., Butler J.P. and Fredberg J.J., Nature Physics, 5 (2009) 426.
* [21] Lima J.I., Sabouri-Ghomia M., Machacek M., Waterman C.M. and Danuser G., Exp. Cell Res., 316 (2010) 2027.
* [22] Zemel A., Rehfeldt F., Brown A.E.X., Discher D.E. and Safran S.A., Nature Physics, 6 (2010) 468.
* [23] Liverpool T.B. and Marchetti M.C., Phys. Rev. Lett., 90 (2003) 138102.
* [24] Aditi Simha R. and Ramaswamy S., Phys. Rev. Lett., 89 (2002) 058101.
* [25] Hatwalne Y., Ramaswamy S., Rao M. and Simha R.A., Phys. Rev. Lett., 92 (2004) 118101.
* [26] Ahmadi A., Marchetti M.C. and Liverpool T.B., Phys. Rev. E, 74 (2006) 061913.
* [27] Kruse K., Joanny J.-F., Jülicher F. and Prost J., Phys. Biol., 3 (2006) 130.
* [28] Murray J.D. and Oster G.F., Mathematical Medicine and Biology, 1 (1984) 51.
* [29] Wong J.Y., Velasco A., Rajagopalan P. and Pham Q., Langmuir, 19 (2003) 1908.
* [30] We thank the referee for helping us clarify this point.
* [31] Tambe D.T., Hardin C.C., Angelini T.E., Rajendran K., Park C.Y., Serra-Picamal X., Zhou E.H., Zaman M.H., Butler J.P., Weitz D.A., Fredberg J.J. and Trepat X., Nature Materials, 10 (2011) 469.
|
arxiv-papers
| 2011-06-05T20:23:56 |
2024-09-04T02:49:19.402052
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Shiladitya Banerjee and M. Cristina Marchetti",
"submitter": "Shiladitya Banerjee",
"url": "https://arxiv.org/abs/1106.0929"
}
|
1106.1199
|
# Open-loop multi-channel inversion of room impulse response
###### Abstract
This paper considers methods for audio display in a CAVE-type virtual reality
theater, a 3 m cube with displays covering all six rigid faces. Headphones are
possible since the user's headgear continuously measures ear positions, but
loudspeakers are preferable since they enhance the sense of total immersion.
The proposed solution consists of open-loop acoustic point control. The
transfer function, a matrix of room frequency responses from the loudspeakers
to the ears of the user, is inverted using multi-channel inversion methods, to
create exactly the desired sound field at the user's ears. The inverse
transfer function is constructed from impulse responses simulated by the image
source method. This technique is validated by measuring a $2\times 2$ matrix
transfer function, simulating a transfer function with the same geometry, and
filtering the measured transfer function through the inverse of the
simulation. Since accuracy of the image source method decreases with time,
inversion performance is improved by windowing the simulated response prior to
inversion. Parameters of the simulation and inversion are adjusted to minimize
residual reverberant energy; the best-case dereverberation ratio is 10 dB.
## 1 Introduction
The task of interest in this paper is free-field audio display for a virtual
reality environment [1]. The virtual reality testbed for these experiments is
a 3 m cube called ALICE (A Laboratory for Interactive Cognitive Experiments),
located at the University of Illinois. Images are projected from outside onto
all faces of the cube. Users are untethered: no wires connect equipment they
wear to the outside world. In order to accurately convey images to the user,
the positions of up to 20 user points (e.g., head, ears, hands, and feet) are
precisely tracked using a magnetic tracking system (calibrated mean accuracy
of 8 cm and 1 degree, 120 samples updated per second). The goal of virtual
reality in ALICE is total immersion: users must be able to ``suspend
disbelief'' and convince themselves that they are physically present in the
virtual environment portrayed to them.
The goal of most previous virtual reality audio experiments is to correctly
portray the position of a sound source. Position accuracy is usually achieved
by filtering the audio signal through head-related transfer functions and then
playing it over headphones. The disadvantage of headphone audio is that it
sounds like it is coming from the headphones. The impression of total
immersion is lost if the audio display is part of the user's headgear rather
than part of the environment.
A measure of headphone-free realism is possible by simply playing the desired
audio from the most appropriate speaker in a large speaker array. For the
ALICE environment, an array of eight transparent loudspeakers has been
prototyped. These loudspeakers consist of millimeter-thick sheet glass
suspended into the cube, connected to compression drivers located outside the
walls of the cube. The transparent speakers provide reasonable audio display
with good localization for distant objects (outside the cube wall), and most
important, the transparent loudspeakers do not obstruct or distort the video
display.
Moving the virtual audio source inside the room is much more difficult. The
positions of the user's ears are known precisely. If the room impulse response
were known, it could be inverted using well-studied multi-channel inversion
methods [2, 3], thus creating exactly the
desired sound field at the two ears of the user. Unfortunately the room
impulse response is not known. The user is free to put his or her head
anywhere in the room; it is impossible to measure the room impulse response
from every speaker location to every possible location of the user's ears.
Two solutions to this problem are possible. First, an estimate of the room
impulse response can be adaptively updated using microphones placed on the
user's headgear, by means of a number of adaptive signal processing methods.
This paper analyzes a second solution to the problem of headphone-free virtual
reality audio display. The proposed solution consists of open-loop acoustic
point control, using a simulation of room impulse response based only on
knowledge of the room geometry, architectural materials, and user location.
The image source method of room response simulation was originally proposed
for open-loop dereverberation experiments similar to the one proposed here
[4]. Its performance was never quantitatively reported in the literature,
since multi-channel inversion methods for non-minimum phase impulse responses
were not well understood at that time [5]. Other methods of simulating room
impulse response are almost always evaluated by purely qualitative means like
acoustic perceptual studies and visual comparisons of impulse responses [6].
This paper proposes instead to evaluate simulated room impulse responses based
on their performance in a regularized dereverberation task. Dereverberation
performance is measured in terms of the decibel ratio of the energy of the
room impulse response to that of the dereverberated response. It is
demonstrated that this method can be used to optimize parameters of the model
including absorptivity and window taper.
This paper is organized as follows. Section 2 describes previous published
research in the fields of room impulse response measurement, room impulse
response simulation, and room impulse response inversion. Section 3 describes
the methods of these tasks in a simulated virtual reality environment, a 2 m
plywood cube. Room impulse responses are measured with a starter pistol. An
evaluation metric is proposed to quantitatively measure the dereverberation
ratio. Section 4 discusses the results, demonstrating how the proposed
evaluation metric optimizes methods for simulating and inverting the room
impulse response. Section 5 reviews conclusions.
## 2 Background
### 2.1 Measurement of room impulse response
An excitation signal is required in order to measure a room impulse response.
A perfect impulse (a Dirac delta function) simplifies the measurement task
(measured response equals the impulse response), but it is not possible to
physically generate a Dirac delta function. In practice, impulse-like signals
or signals with characteristics similar to a perfect impulse such as flat
frequency response are used.
ISO Standard 3382 specifies the following requirements for an excitation
signal for measuring room impulse response [7]. First, it should be nearly
omnidirectional. Second, its sound pressure level should provide sufficient
dynamic range to avoid contamination by background noise. Third, the signal
should be repeatable.
Impulse-like signals such as the starter pistol, balloon pop, and electric
spark have been traditionally chosen as excitation signals [7, 8]. These
impulses are easily generated and have been used to determine rough
characteristics of a room, such as reverberation time. However, measurement by
these impulse methods has not been widely discussed in the scientific
literature for three reasons. First, the frequency response of these impulses
is not flat (Fig. 1). Second, some authors report that it is difficult to get
an adequate SNR because all the impulse's energy is packed into a very short
duration [8]. Third, the signal is not precisely repeatable because it depends
significantly on small variations in charge distribution, balloon shape, etc
(Fig. 2).
Fig. 1: Magnitude responses of starter pistol and ambient noise.
Fig. 2: Waveforms of individual pistol shots.
In 1979 Schroeder suggested an alternative method to measure room impulse
response using Maximum Length Sequences (MLS) [8]. The autocorrelation
function of an MLS of order $m$ with length $N=2^{m}-1$ samples is two-valued,
$1$ at time zero and $-1/N$ at times other than zero (modulo $N$). If $N$ is
sufficiently large, then $-1/N$ becomes negligible and we can assume that the
resulting MLS has the same autocorrelation as pseudo-random noise. Because MLS
signal energy grows with $N$, its SNR can be made arbitrarily high without
needing high amplitude, which is not the case with impulses. Since the MLS is
stored in a computer, it can be generated repeatedly. Therefore MLS meets the
last two requirements of ISO 3382 better than impulse-like signals such as
starter pistols and electric sparks.
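For concreteness, the two-valued autocorrelation property can be checked numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; variable names are our own) that generates an order-16 MLS and computes its normalized circular autocorrelation:

```python
import numpy as np
from scipy.signal import max_len_seq

# Sketch: verify the two-valued circular autocorrelation of an MLS.
m = 16                      # MLS order used in the text
N = 2**m - 1                # sequence length
seq, _ = max_len_seq(m)     # values in {0, 1}
x = 2.0 * seq - 1.0         # map to {-1, +1}

# Circular autocorrelation via the FFT (Wiener-Khinchin), normalized by N.
acf = np.fft.ifft(np.abs(np.fft.fft(x)) ** 2).real / N

print(acf[0])               # ~ 1 at lag zero
print(acf[1:].max())        # ~ -1/N at all other lags (negligible for large N)
```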
Computer-generated pseudo-random sequences have discrete values such as $+1$
and $-1$. Since it is impossible to make perfectly abrupt transitions between
these two values, distortion occurs and the frequency response of the system
must therefore be compensated to acquire accurate impulse responses. The
technique of MLS measurement has also been proven to be vulnerable to the
nonlinearity of measuring equipment, particularly loudspeakers [9].
Nonlinearities produce repeated distortion peaks in the time domain, which
prevent the integrated energy of the impulse response from falling below $-30$
dB [10, 11]. A modification of MLS, the inverse repeated sequence (IRS),
reduces the distortion caused by nonlinearities [12, 13, 14]. Other papers
discuss the accuracy of the MLS method [15, 16], its computational complexity
[17, 18, 19, 20, 21], and its application to a variety of system response
measurements [22, 23, 24].
Aoshima proposed the time-stretched pulses technique, based on the time
expansion and compression of an impulsive signal [25]. The purpose of the
time-stretched pulse signal is to increase the total energy of the excitation
signal while keeping the frequency response flat.
Berkhout et al. proposed a sine sweep as an excitation signal [26]. Farina and
Ugolotti introduced a logarithmic sine sweep method using a different
deconvolution method [27]. Farina's detailed method accurately derives an
impulse response from the raw measurement by separating the linear and
nonlinear components of the measured impulse response, where the strength of
nonlinear distortion is measured by observing the harmonic distortion caused
by nonlinearity of the system.
Stan et al. compare four different room impulse response measurement
techniques: pseudo-random noise (MLS and IRS), time-stretched pulses, and
logarithmic sine sweep [11]. Since the randomized phase of pseudo-random
sequences makes them immune to background noise, MLS and IRS techniques are
preferred in noisy environments. However, parameter optimization is required
for high SNR because of nonlinear distortion. Nevertheless, the achieved SNR
is only 60.5 dB with an MLS order of 16 and single measurement.
Time-stretched pulses and sine sweep methods produce a higher SNR than the
pseudo-random noise techniques, but they require a quiet environment. The SNR
of the time-stretched pulses technique is 77 dB after precise calibration. The
logarithmic sine sweep method has 80.1 dB SNR. The benefit of the sine sweep
is that unlike the previous methods, it produces a high SNR without any
calibration [11].
### 2.2 Simulation of room impulse response
Simulations of room impulse responses fall into two categories: spatial mesh
methods and ray acoustic methods.
Spatial mesh methods numerically solve the constituents of the acoustic wave
equation, namely the equations of motion and continuity [28, 29]. In this
method, sound pressure and velocity are computed at a finite number of points,
usually mesh points in a cavity. The differential equations in the continuous
domain are computed as difference equations in the discrete domain. This
method can simulate diffraction effects, which ray acoustic methods cannot.
Unfortunately, to compute an impulse response at a specific location of
interest, the values of sound pressure and velocity must be computed over the
entire mesh, because the solution at a specific point depends on those of the
adjacent points. Spatial mesh methods are thus far more computationally
expensive than ray acoustic methods.
Ray acoustic methods assume that sound rays are emitted from the sound source,
usually as a spherical wave. Ray paths are then traced using either image
source or ray-tracing methods [4, 30]. The ray-tracing method considers a
finite number of rays to be emitted from the sound source. These ray paths are
traced and their trajectories summed at points of interest. Although it
requires little computation, ray tracing is appropriate only for a rough
estimate, e.g., to compute the first few reflections of the room impulse
response.
In 1979 Allen and Berkley showed that the impulse response of a small
rectangular room can be computed using a geometric ``image source'' method
[4]. Their model creates an ``image space'' without walls, in which each echo
is modeled as the direct sound from an image source outside the actual walls
of the room. The first six image sources are reflections of the original
source in the six walls of the room. The next few image sources are created by
reflecting the first six, and so on (Fig. 3). At each reflection, the
amplitude of the source is scaled by the wall's reflection coefficient.
Fig. 3: Two-dimensional illustration of the image source method;
`x'=receiver, solid `o'=source, dashed `o'=image sources. Solid line=an echo
path, dashed line=corresponding image source path.
The image source method requires more computation than ray tracing because it
considers all possible reflected wavefronts. It can be extended from
rectangular cavities to arbitrary polyhedra [31]. In this case, some image
sources may not contribute to the total impulse response. Such image sources
are called hidden images. An algorithm is therefore needed to decide whether a
given image source is hidden or not. Lee and Lee proposed a relatively
efficient algorithm for the image source computation of impulse responses of
arbitrarily shaped rooms [32], but this method is still computationally
expensive relative to the image source method for rectangular rooms. The image
source method is efficient for a rectangular room because every image source
contributes to the total impulse response (unless there are obstacles in the
room), and also, because the locations of all image sources are analytically
pre-computed due to symmetry.
In a rectangular room, the image sources can be indexed by integer coordinates
$l$, $m$, and $n$, where $(l,\ m,\ n)=(0,\ 0,\ 0)$ corresponds to the direct
source, $(1,\ 0,\ 0)$ corresponds to the first reflection in the positive $x$
direction, and so on.
Given a room of size $(L_{x},\ L_{y},\ L_{z})$ with origin at the center and a
source location $(S_{x},\ S_{y},\ S_{z})$, the image source location with
indices $(l,\ m,\ n)$ is:
$(I_{x},\ I_{y},\ I_{z})=(lL_{x}+(-1)^{l}S_{x},\ mL_{y}+(-1)^{m}S_{y},\
nL_{z}+(-1)^{n}S_{z})$
Then the distance $d_{lmn}$ from the image source to the receiver at
$(R_{x},R_{y},R_{z})$ is:
$d_{lmn}=\sqrt{(R_{x}-I_{x})^{2}+(R_{y}-I_{y})^{2}+(R_{z}-I_{z})^{2}}$
The impulse response predicted by the image source method is
$g(t)=\sum_{l,m,n=-\infty}^{\infty}\frac{r^{|l|+|m|+|n|}}{d_{lmn}}\delta(t-\tau_{lmn})$
(1)
where $\tau_{lmn}=d_{lmn}/c$ is the wave propagation time from the image
source at $(l,m,n)$ to the receiver, $c$ is the speed of sound, and $r$ is the
reflection coefficient of the walls. Eq. (1) assumes that all surfaces have
the same reflection coefficient, but relaxing this assumption is
straightforward and computationally inexpensive.
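A minimal sketch of Eq. (1) for a rectangular room follows (assuming NumPy; the function name, argument layout, and default truncation order are our own, and the reflection order must be chosen large enough to cover the desired response length):

```python
import numpy as np

def image_source_ir(room, src, rcv, r=0.98, fs=44100, n_samples=65536,
                    c=346.58, max_order=20):
    """Sketch of Eq. (1): impulse response of a rectangular room with a
    frequency-independent reflection coefficient r (r ~ 0.98 corresponds to
    an average absorptivity of about 0.04 via a_bar = 1 - r**2). The origin
    is at the room center, as in the text; max_order truncates the sum."""
    Lx, Ly, Lz = room
    Sx, Sy, Sz = src
    Rx, Ry, Rz = rcv
    g = np.zeros(n_samples)
    orders = range(-max_order, max_order + 1)
    for l in orders:
        Ix = l * Lx + (-1) ** l * Sx
        for m in orders:
            Iy = m * Ly + (-1) ** m * Sy
            for n in orders:
                Iz = n * Lz + (-1) ** n * Sz
                d = np.sqrt((Rx - Ix) ** 2 + (Ry - Iy) ** 2 + (Rz - Iz) ** 2)
                k = int(round(fs * d / c))          # arrival time in samples
                if k < n_samples:
                    g[k] += r ** (abs(l) + abs(m) + abs(n)) / d
    return g
```

With the plywood-cube geometry described later (Sec. 3, table 1), a call such as `image_source_ir((1.84, 1.79, 1.83), (0.26, 0.30, -0.15), (-0.57, 0.58, 0.31))` would approximate one entry of the measured transfer function.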
Three-dimensional audio applications are usually considered in a rectangular
cavity, a room; this paper considers only this special but common case, to
justify use of the otherwise computationally expensive image source method for
simulating the room impulse response.
### 2.3 Inversion of room impulse response
Given the room impulse response, a desired signal can be reproduced at points
of interest if a valid inverse filter is first created from the impulse
response. The dereverberation problem thus reduces to constructing such an
inverse filter.
Since the purpose of inverting the room impulse response is to cancel
reverberation at multiple points in a room, human ears for example, the
frequency responses and inverse filters are formulated as a matrix of
sequences. Let the term transfer function denote this matrix of frequency
responses.
Let $G_{ji}(z)$ be the frequency response from the $i^{th}$ loudspeaker to the
$j^{th}$ control point, for $1\leq i\leq L$ and $1\leq j\leq M$: a total of
$M\times L$ individual room impulse responses. The inverse of this transfer
function is therefore an $L\times M$ matrix. The image source method computes
the simulated frequency response $\hat{G}_{ji}(z)$ which approximates
$G_{ji}(z)$.
Let $X_{j}(z)$ and $\hat{X}_{j}(z)$ denote the Z-transforms of the desired and
actual control point signals respectively. The inverse transfer function
$H(z)$ has as element $H_{ij}(z)$, the impulse response from the $j^{th}$
desired control point signal $X_{j}(z)$ to the $i^{th}$ loudspeaker signal
$V_{i}(z)$. $\hat{X}_{j}(z)$ is therefore expressed as
$\hat{X}_{j}(z)=\sum_{i}\hat{G}_{ji}(z)V_{i}(z)=\sum_{i,k}\hat{G}_{ji}(z)H_{ik}(z)X_{k}(z)$
We want to find $H_{ij}(z)$ so that $\hat{X}_{j}(z)$ is as similar as possible
to $X_{j}(z)$. Figure 4 shows the diagram of the room impulse response
inversion process. If $L=M$ and the matrix $\hat{G}(z)$ is minimum phase, then
an exact inverse transfer function is given by
$H(e^{j\omega})=\hat{G}(e^{j\omega})^{-1}$.
Fig. 4: System diagram.
If the impulse responses are non-minimum phase, the inverse filter has poles
outside the unit circle. In this case, we can make the inverse filter either
stable but noncausal (the region of convergence includes the unit circle) or
causal but unstable (the region of convergence does not include the unit
circle), but not both stable and causal. Therefore, the exact inverse filter
of a square transfer function matrix is only realizable for minimum phase
transfer functions.
Neely and Allen found that the impulse response of a small room is minimum
phase only for reflection coefficients below approximately 0.37 [5]. The
impulse response of a small room is rarely minimum phase, and therefore the
stable inverse filter $\hat{G}(e^{j\omega})^{-1}$ of a square matrix
$\hat{G}(e^{j\omega})$ is usually noncausal in practice.
Miyoshi and Kaneda showed that the transfer function of a room can be exactly
inverted for the case $L=2,M=1$ [2]. Nelson et al. generalized their result by
showing that, in most circumstances without any extreme symmetry, when $L>M$,
the transfer function can be exactly inverted [33]. Thus a stable, causal
inverse transfer function exists if $G_{ji}(z)$ has more columns than rows.
Unfortunately no tractable method for finding the causal, stable inverse of a
non-square transfer function in the frequency domain has yet been proposed. An
equivalent method can be computed in the time domain, but is computationally
expensive.
Recall that a non-minimum phase square transfer function has a stable but
noncausal inverse $H(e^{j\omega})$. A causal, stable semi-inverse may be
constructed by applying a time shift $D$:
$\tilde{H}(e^{j\omega})=e^{-j\omega D}H(e^{j\omega})$ (2)
and then truncating $\tilde{h}[n]$ by zeroing the tail at $n<0$:
$\hat{h}[n]=\begin{cases}\tilde{h}[n],&n\geq 0\\ 0,&n<0\end{cases}$ (3)
This creates a stable and causal approximation $\hat{h}[n]$ of the exact
inverse filter. The time shift $D$ is called modeling delay.
The inverse transfer function $H$ can be computed by sampling the spectrum of
$\hat{G}$ using an FFT, and inverting the matrix at each frequency bin.
Sampling the frequency-domain transfer function causes aliasing in the time
domain. This ``wrap-around effect'' is eliminated by time-shifting $h[n]$ by
$e^{-j\omega D}$, which is the same as the modeling delay described
previously.
Merely inverting the sampled FFT matrix $\hat{G}$ yields a poor estimate of
$H$ because of singularities related to the non-minimum phase character of
$\hat{G}$ (zeros of $\hat{G}$ tend to be very close to the unit circle). A
better estimate can be computed by using a regularized inversion formula [3],
in which a small constant $\beta$ is added to each eigenvalue of $G$ before
inversion:
$H=({\hat{G}^{T}}\hat{G}+{\beta}I)^{-1}\hat{G}^{T}$ (4)
where $\hat{G}^{T}$ is the Hermitian transpose of $\hat{G}$.
After computing $H(e^{j\omega})$ using Eq. (4), Eqs. (2) and (3) yield a
stable and causal approximate inverse $\hat{h}[n]$. The resulting control
point signal vector is
$\hat{X}=G\hat{H}X$
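The per-bin computation can be sketched as follows (assuming NumPy; the array shapes, names, and default parameter values are our own, and $\hat{G}$ is taken to be the full-length FFT of real simulated impulse responses):

```python
import numpy as np

def regularized_inverse_filter(G_hat, beta=1e-2, delay_s=0.5, fs=44100):
    """Sketch of Eqs. (2)-(4). G_hat has shape (n_fft, M, L): one M x L
    matrix of simulated frequency responses per FFT bin (M control points,
    L loudspeakers). Returns time-domain inverse filters, shape (n_fft, L, M)."""
    n_fft, M, L = G_hat.shape
    H = np.empty((n_fft, L, M), dtype=complex)
    I = np.eye(L)
    for k in range(n_fft):
        Gk = G_hat[k]                                   # M x L
        GH = Gk.conj().T                                # Hermitian transpose
        H[k] = np.linalg.solve(GH @ Gk + beta * I, GH)  # Eq. (4)
    # Eq. (2): modeling delay D applied as the linear phase e^{-j omega D}.
    w = 2.0 * np.pi * np.fft.fftfreq(n_fft, d=1.0 / fs)
    H *= np.exp(-1j * w * delay_s)[:, None, None]
    # Inverse FFT per entry; with a long enough delay the significant part of
    # the noncausal inverse lands at positive time, which is what the
    # truncation in Eq. (3) keeps.
    return np.fft.ifft(H, axis=0).real
```

For the scalar case ($L=M=1$) this reduces to the one-dimensional example illustrated in Fig. 5.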
Signals produced during regularized inversion are depicted in Fig. 5. For
illustration, a one-dimensional inverse transfer function is computed from the
simulation $\hat{G}$ and filtered through the simulated transfer function
itself ($\hat{X}=\hat{G}\hat{H}X$). Regularized inversion gives more than $50$
dB of SNR with $D=750$ ms and $\beta=0.05$ (Fig. 5).
Fig. 5: Inversion process. (a), simulated impulse response $\hat{g}(t)$; (b),
noncausal inverse filter $h(t)$ using regularized inversion; (c), shifted and
truncated inverse filter $\hat{h}(t)$; and (d), dereverberated output
$\hat{x}(t)$ ($x(t)=\delta(t)$).
## 3 Methods
This section describes the design and validation of an open-loop room response
inversion algorithm. Section 3.1 describes methods for acquiring validation
data (measured room responses) using impulse-like excitation signals. Sections
3.2 and 3.3 describe methods for simulating and inverting the room response.
### 3.1 Room response measurement
Because of the deficiencies described in Section 2.1, few papers in recent
decades describe impulse response measurement techniques using impulse-like
excitation signals. For the application considered in this paper, impulse-like
signals have important advantages. Measurement techniques were therefore
developed to minimize their disadvantages.
#### 3.1.1 Motivation for the use of starter pistol as an impulse
Using a starter pistol for room response measurement has three advantages over
non-impulse signals using loudspeakers. First, measured response need not be
deconvolved into an impulse response because it is already qualitatively
similar to the room impulse response. Second, the SNR is very high because the
starter pistol exceeds 140 dB SPL at 2 m [34]. For a typical background noise
level of 50 dB SPL, the SNR is 90 dB. In comparison, the MLS method with order
16 and no repetition has only 60.5 dB SNR after parameter optimization and
compensation for nonlinearities [11]. Therefore, inadequate signal energy is
not an issue for a starter pistol. Third, a starter pistol blast approximates
a point source more closely than any other excitation method considered. This
is good for comparing the measured impulse response with the simulation from
the image source method because the latter assumes a point source.
Figure 1 compares the frequency response of a starter pistol, estimated as the
average of ten pistol shots in an anechoic chamber, to the frequency response
of the ambient noise in a room response test chamber (a 2 m plywood cube). The
ambient noise is 70 dB SPL with linear weighting, as measured with a Type 2260
B&K Modular Precision Sound Analyzer.
ISO 3382 specifies a peak SPL at least 45 dB above the background noise in the
frequency range of interest [7]. Even for a noisy 70 dB SPL environment, the
SNR of a starter pistol shot exceeds 45 dB for the frequency range 280 Hz to
11 kHz, and 30 dB for 110 Hz to 20.5 kHz (Fig. 1).
According to the excitation signal requirements described in Sec. 2.1, the
starter pistol still lacks repeatability and omnidirectionality. To use it for
room response measurement, experimental methods must be developed to control
these two deficiencies.
#### 3.1.2 Transfer function measurement methods
Our experiment measures the room impulse response of a 2 m plywood-walled
cube. The cube contains only a microphone and starter pistol; all other
measuring equipment is located outside to avoid any disturbance caused by
obstacles inside the cube. The starter pistol is mounted on the end of a
sturdy pipe and triggered from outside the cube by pulling a cable.
There are two different starter pistol and microphone positions, resulting in
a $2\times 2$ matrix transfer function. The exact dimensions of the plywood
cube are $(L_{x},\ L_{y},\ L_{z})$ = (1.84 m, 1.79 m, 1.83 m); table 1 lists
the positions of the starter pistol and microphone. Note that the center of
the cube is (0 m, 0 m, 0 m).
Since waveforms of individual pistol shots are not identical (Fig. 2), we
average multiple measurements at the same location. This repetition has two
benefits. First, we can assume that the averaged impulse response is due to
the averaged excitation. This reduces measurement irregularity, improving
repeatability. Second, SNR improves because the background noise can be
assumed to be independent of room impulse response.
Like any excitation signal, a starter pistol blast is directional. This
variation of the signal with respect to angle we label the Gun-Related Transfer
Function (GRTF). Figure 6 shows the first 1.5 ms of each GRTF.
Fig. 6: Gun-Related Transfer Functions (GRTFs).
Figure 7 shows the directional pattern computed from the energy at each angle.
Fig. 7: Directional pattern of the starter pistol blast in normalized linear
scale.
To measure the response of the room to an omnidirectional source, the position
of the starter pistol is fixed and the barrel is rotated to positions
$30^{\circ}$ apart, where $0^{\circ}$ is directly toward the microphone,
averaging five impulse responses at each angle. The experiment uses 12
rotation angles, so a total of 60 shots determine the room impulse response
from one point to another point.
### 3.2 Room response simulation
It is impractical to directly measure all the room impulse responses from
every loudspeaker position to every mesh point: only eight loudspeakers and a
$10\times 10\times 10$ mesh demand 8000 experiments. Instead, inverse transfer
functions are derived from approximate room responses simulated with the image
source method. This then demands verification of the simulation, using error
metrics to compare corresponding pairs of measured and simulated responses.
Parameters of the image source simulation include the room dimensions, the
position of the sound source and receivers, the speed of sound (dependent on
temperature and relative humidity[35]), and the reflection coefficient $r$ of
the wall material (Eq. (1)). Although $r$ varies with frequency, modeling the
frequency dependence greatly increases the computational cost of the
simulation. All simulations reported in this paper therefore assume a
frequency-independent $r$, related to the average Sabine absorptivity
$\bar{a}$: [4]
$\bar{a}=1-r^{2}$
The value of $\bar{a}$ is optimized experimentally.
The simulation uses a sampling frequency of 44.1 kHz. The length of the
simulated impulse response is 65536 samples, about 1.5 seconds. The speed of
sound is taken to be 346.58 m/s, based on Cramer's equation evaluated at the
temperature and relative humidity measured in the plywood cube
($24.4^{\circ}$C$,37.5\%$) [35].
Room response simulations were evaluated using three metrics: local mean-
squared error (described here), global and local dereverberation ratios
(described in the next section), and remainder reverberation time (described
in the next section). Mean-squared error measures time-domain similarity,
i.e., how alike the amplitude-versus-time graphs of the two responses look.
For an $M$ sample interval starting at the $k^{th}$ sample, this error is
expressed as
$E_{ms}[k]=\frac{1}{M}\sum_{n=k}^{k+M-1}\left(\frac{\hat{g}[n]}{\hat{g}_{rms}}-\frac{g[n]}{g_{rms}}\right)^{2}$
(5)
where $\hat{g}[n]$ and $g[n]$ are the simulation and measurement of the room
impulse responses, and $\hat{g}_{rms}$ and $g_{rms}$ are their RMS values in
the interval $[k,k+M-1]$.
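A direct transcription of Eq. (5) might look as follows (assuming NumPy; the function name is our own):

```python
import numpy as np

def local_mse(g_hat, g, k, M):
    """Sketch of Eq. (5): local mean-squared error between the simulated
    response g_hat and the measured response g over the M-sample interval
    starting at sample k, each normalized by its RMS value on that interval."""
    a = g_hat[k:k + M]
    b = g[k:k + M]
    a = a / np.sqrt(np.mean(a ** 2))
    b = b / np.sqrt(np.mean(b ** 2))
    return np.mean((a - b) ** 2)
```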
For an actual room response $G_{orig}(z)$ and an excitation signal spectrum
$S(z)$, the measured room response is $G(z)=S(z)G_{orig}(z)$. When the
excitation signal is a starter pistol, $S(z)$ may be measured by recording the
pistol impulse response $s(t)$ in an anechoic chamber; when the excitation
signal is pseudo-noise or a sine sweep, $S(z)$ must be computed by multiplying
the theoretical pseudo-noise spectrum with the loudspeaker frequency response.
Pseudo-noise room response measurement techniques may then compare
$G(z)S(z)^{-1}$, the source-corrected room response, with $\hat{G}(z)$, the
simulated room response. When $S(z)$ is the spectrum of a starter pistol,
however, $S(z)^{-1}$ has undesirable properties (it is high-pass, noncausal,
and nearly singular), so the ``source-corrected'' room response $g(t)\ast
s(t)^{-1}$ is difficult to evaluate visually. Conversely, because $s(t)$ is
impulse-like, visual comparison of the measured response $g(t)$ with
$\hat{g}(t)\ast s(t)$ (the measured excitation filtered by simulated response)
is natural and meaningful.
Fig. 8: Experiment setup.
Figure 9 compares the first 20 ms of two impulse responses from starter pistol
position 2 to microphone position 1 (see Fig. 8 and table 1). The upper plot
shows $g(t)$, the average of the 60 measured impulse responses. The lower plot
is $\hat{g}(t)\ast s(t)$, where $\hat{g}(t)$ is computed using the image
source method. The very close match between these two impulse responses
validates both the image source method and the angle-averaged pistol
measurement.
Fig. 9: First 20 ms of measured and simulated impulse responses.
The simulation preserves the peak locations closely even after 100 ms (Fig.
10), but the visual similarity of the signals is not as great as during the
first 20 ms (Fig. 9).
Fig. 10: Measured and simulated impulse responses after 100 ms.
The increasing dissimilarity between $g(t)$ and $\hat{g}(t)\ast s(t)$ as $t$
increases is quantified by a gradual increase in the local mean-squared error
of the simulation, computed using Eq. (5) with intervals of $20$ and $100$ ms
(Fig. 11). This time-dependent dissimilarity may be explained by considering
the accumulated effect on $g(t)$ of frequency-dependent wall reflections and
air propagation filtering. $\hat{g}(t)$ is computed using the time-domain
image source method, which does not model frequency-dependent wall reflections
and air propagation.
Fig. 11: Mean-squared error between simulation and measurement.
Figure 12 shows the filtering effect of wall reflections on the spectrum of a
single acoustic ray. After only one reflection, frequencies below 10 Hz are
attenuated 20 dB relative to frequencies above 100 Hz [36]. After ten
reflections (70 to 100 ms), frequencies below 100 Hz are effectively zeroed.
Figure 13 shows the filtering effect of propagation through air at
$20^{\circ}$ C and 30% relative humidity [37]. After 34.3 m (100 ms), spectral
components at 10 kHz are attenuated about 8 dB. The attenuation due to wall
reflections and air propagation is enormous even after 100 ms.
Fig. 12: Frequency response after one, two, and ten wall reflections.
(Circled data points are adapted from [36].)
Fig. 13: Frequency response of propagation through air at $20\deg$ C, 30%
relative humidity after 1 m, 34.3 m, and 343 m. (Circled data points are
adapted from [37].)
### 3.3 Room response inversion
Room responses simulated using the image source method are next inverted using
the method of regularized inversion with modeling delay [3]. Experiments
indicate that effective inversion requires $\beta>0$, but that the exact value
of $\beta$ in the range $10^{-4}\leq\beta\leq 1$ has little effect on
inversion performance. The value of the modeling delay $D$ is more important.
Inversion results improve almost monotonically as $D$ increases, suggesting
that $D$ be the longest modeling delay imperceptible to users. Results in this
paper use $\beta=10^{-2}$ and $D=500$ ms. Finally, Eq. (4) may be used for
either scalar inversion (one speaker, one control point) or matrix room
response inversion ($L$ speakers, $M$ control points). This paper reports
results of both scalar and matrix inversion experiments, where matrix
inversion is performed with $L=2$, $M=2$ using the geometry shown in Fig. 8
and table 1.
#### 3.3.1 Evaluation of room impulse response inversion
Room response inversion can eliminate the perceptual ``signature'' of a room
by attenuating early echoes; it can also reduce long-term reverberant energy.
The early echoes should be well suppressed because they characterize the
perceived geometry of the room. The later portion of the room response is
related more to wall material and room size than to specific room geometry.
Assuming that the desired signal $x(t)$ at a certain location is an impulse,
the output $\hat{x}(t)$ at that location needs to be as close to an impulse as
possible: it should contain as little energy as possible at time $t\neq 0$.
The output is expressed as $\hat{X}=G\hat{H}X$ where $G$ is a measured impulse
response and $\hat{H}$ is the approximate inverse filter created from the
simulation $\hat{G}$ using Eqs. (2), (3), and (4). The time-domain expression
of the output is
$\hat{x}(t)=g(t)\ast\hat{h}(t)\ast x(t)$
or, for matrix inversion experiments,
$\hat{x}_{k}(t)=\sum_{j=1}^{L}\sum_{i=1}^{M}g_{kj}(t)\ast\hat{h}_{ji}(t)\ast
x_{i}(t)$
To claim that the inverse filter dereverberates the room impulse response, for
an input $x(t)=\delta(t)$, the filtered output $\hat{x}(t)=\hat{h}(t)\ast
g(t)$ should be similar to a delayed impulse, $\hat{x}(t)\approx\delta(t-D)$.
The success of dereverberation may be measured by computing the residual
energy in the signal $\hat{x}(t)$ at times $|t-D|>T_{min}$, for some small
value of $T_{min}$. The residual energy in $\hat{x}(t)$ is computed as
$E_{resid}(\infty)=\int_{T_{min}<|t-D|}\hat{x}^{2}(t)dt$
``Early echoes'' may be defined to be causal or noncausal echoes within a time
window $|t-D|<T$. The residual energy within $T$ seconds is computed as
$E_{resid}(T)=\int_{T_{min}<|t-D|<T}\hat{x}^{2}(t)dt$
The efficacy of the dereverberation is described by the ``dereverberation
ratio'' (DR): the ratio of the original room response energy
$\int_{T_{min}}^{T}g^{2}(t)$ to the residual energy, thus,
$DR(\infty)=10\log_{10}\frac{\int_{T_{min}}^{\infty}g^{2}(t)dt}{\int_{T_{min}<|t-D|}\hat{x}^{2}(t)dt}$
$DR(T)=10\log_{10}\frac{\int_{T_{min}}^{T}g^{2}(t)dt}{\int_{T_{min}<|t-D|<T}\hat{x}^{2}(t)dt}$
For the output $\hat{x}(t)$ to have less energy than the measured impulse
response $g(t)$, both decibel ratios should be positive. They can therefore be
used to evaluate and optimize the simulation and inversion of the room impulse
responses.
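These ratios can be sketched as follows (assuming NumPy; the function name, the choice of $T_{min}$, and the time-alignment of $\hat{x}(t)$ are our own assumptions; $\hat{x}$ would typically be obtained as `np.convolve(h_hat, g)` for an impulsive input):

```python
import numpy as np

def dereverberation_ratio(g, x_hat, fs, D, T=np.inf, T_min=0.005):
    """Sketch of DR(T) and DR(inf): decibel ratio of the original response
    energy to the residual energy of the dereverberated output x_hat.
    D, T, and T_min are in seconds; T defaults to infinity for DR(inf)."""
    t_g = np.arange(len(g)) / fs
    t_x = np.arange(len(x_hat)) / fs
    num = np.sum(g[(t_g > T_min) & (t_g < T)] ** 2)            # original energy
    res = (np.abs(t_x - D) > T_min) & (np.abs(t_x - D) < T)    # residual window
    return 10.0 * np.log10(num / np.sum(x_hat[res] ** 2))
```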
Simulation and inversion can also be evaluated using the remainder
reverberation time $T_{L}$, defined implicitly as
$L=10\log_{10}\frac{\int_{0}^{\infty}g^{2}(t)dt}{\int_{T_{L}}^{\infty}\hat{x}^{2}(t)dt}$
The remainder reverberation times $T_{10}$, $T_{20}$, and $T_{60}$ of both
measured and dereverberated outputs will be compared. The reference for the
remainder reverberation time is the integrated energy of the measured room
impulse response $g(t)$.
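A sketch of this implicit definition (assuming NumPy; the function name is our own, and $\hat{x}$ is assumed to be time-aligned so that its main peak corresponds to the direct sound):

```python
import numpy as np

def remainder_reverberation_time(g, x_hat, fs, L_db):
    """Sketch: smallest T_L such that the energy of x_hat after T_L lies
    L_db decibels below the total energy of the measured response g."""
    total = np.sum(g ** 2)
    tail = np.cumsum(x_hat[::-1] ** 2)[::-1]     # energy of x_hat after each t
    ratio_db = 10.0 * np.log10(total / tail)
    idx = np.argmax(ratio_db >= L_db)            # first index reaching L_db
    return idx / fs
```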
#### 3.3.2 Optimization of the window for impulse response inversion
An inverse filter created using a complete 1.5 second simulation of the room
response fails: the energy of the dereverberated output exceeds the energy of
the measured impulse response.
The mean-squared error with respect to time ($E_{ms}$) indicates that the
accuracy of the image source simulation decreases with time (Fig. 11). This
suggests that the dereverberation ratio may improve by applying a tapering
window, such as an exponential with time constant $\tau$:
$\tilde{g}(t)=e^{-t/\tau}\hat{g}(t)$
Figure 14 shows the dereverberation ratios $DR(\infty)$ and $DR(100$ ms) for
both scalar and matrix inversion, using $\tilde{g}(t)$ instead of $\hat{g}(t)$
in order to create the inverse filter, with values of $\tau$ = 0.01, 0.02,
0.04, 0.08, 0.16, 0.32, and 0.64 s.
Fig. 14: Dereverberation ratios with respect to the time constant. Upper
graphs, scalar inversion; lower graphs, matrix inversion.
According to the dereverberation ratios depicted above, $\tau=0.06$ s is close
to optimal.
## 4 Results
Two measures are used to discuss the inversion results. First, the total and
the early dereverberation ratios $DR(\infty)$ and $DR(T)$ are compared.
Second, the remainder reverberation times $T_{10}$, $T_{20}$, and $T_{60}$ of
measured and dereverberated responses are compared.
Results of scalar inversion for four different impulse responses and of matrix
inversion of a $2\times 2$ matrix transfer function are presented here. The
four different room response geometries used for both scalar and matrix
inversion may be numbered as follows, with reference to Fig. 8 and table 1:
Impulse response number I, pistol 1 to microphone 1; II, pistol 1 to
microphone 2; III, pistol 2 to microphone 1; and IV, pistol 2 to microphone 2.
For $2\times 2$ matrix inversion, room responses at microphone location 1 and
2 are called microphone 1 and microphone 2 respectively.
Table 1: Positions (in meters) of pistol and microphone in plywood cube.
 | Position 1 | Position 2
---|---|---
Pistol | $(0.26,\ 0.30,\ -0.15)$ | $(-0.26,\ -0.30,\ -0.15)$
Microphone | $(-0.57,\ 0.58,\ 0.31)$ | $(-0.39,\ 0.58,\ 0.31)$
Section 4.1 describes experiments designed to optimize and validate the image
source method. Section 4.2 describes scalar inversion results with and without
windowing the simulated room response. Section 4.3 describes $2\times 2$
matrix inversion results with windowing.
### 4.1 Optimization of absorption coefficient
When all interior surfaces of the room are covered with the same material and
the reverberation time is known, average Sabine absorptivity $\bar{a}$ is
given directly by Sabine's formula
$\bar{a}=\frac{0.161V}{ST_{60}}$
where $V$ and $S$ are the volume and the surface area of the room respectively
[36].
The measured $T_{60}$ of the 2 m plywood cube using Schroeder's integration
formula [38] is 1.32 s, yielding $\bar{a}=0.0407$. Since this is merely an
estimate from Sabine's formula, it was bracketed with the values 0.01, 0.02,
0.04, 0.08, and 0.16.
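A sketch of the two steps above (assuming NumPy; the function name and the $-5$ dB to $-35$ dB fitting range used to extrapolate $T_{60}$ are our own assumptions):

```python
import numpy as np

def sabine_absorptivity(g, fs, V, S):
    """Sketch: estimate T60 by Schroeder backward integration of the measured
    impulse response g, then apply Sabine's formula for the average
    absorptivity of a room with volume V and surface area S."""
    edc = np.cumsum(g[::-1] ** 2)[::-1]              # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(g)) / fs
    fit = (edc_db <= -5.0) & (edc_db >= -35.0)       # assumed fitting range
    slope, _ = np.polyfit(t[fit], edc_db[fit], 1)    # decay in dB per second
    T60 = -60.0 / slope
    return 0.161 * V / (S * T60)                     # Sabine's formula
```

With the nominal 2 m cube dimensions ($V=8$ m$^{3}$, $S=24$ m$^{2}$) and $T_{60}=1.32$ s, the last line reproduces $\bar{a}\approx 0.0407$ quoted above.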
Figure 15 shows that dereverberation ratios are not affected by the modeled
absorption coefficient $\bar{a}$. This indicates that the phase information of
the room impulse response is more important than the magnitude information,
i.e., for inversion the exact timing of the reflections is more important than
their magnitudes.
Fig. 15: Dereverberation ratios with respect to absorption coefficient
$\bar{a}$.
### 4.2 Scalar inversion with and without windowing
Scalar inversion was performed both without and with exponential windowing of
the simulated response. Figure 16 depicts the measured impulse response and
the response ``dereverberated'' without windowing from starter pistol location
1 to microphone location 2 (impulse response number II). Table 2 shows that
the supposedly dereverberated impulse responses have approximately 10 dB more
energy than the measured impulse responses; so dereverberation fails when no
tapering window is applied to the simulated impulse responses.
Fig. 16: Linear plots of measured and dereverberated impulse responses
(scalar inversion; impulse response number III). No windowing is applied to
the simulated impulse response $\hat{g}(t)$.
Figure 17 depicts the results of dereverberation, where simulated impulse
responses are windowed by an exponential window with time constant $\tau=0.06$
s. Table 2 shows that the dereverberated impulse responses have from $8.11$ to
$10.98$ dB less energy than the measured impulse responses; dereverberation
works.
Fig. 17: Measured and dereverberated impulse responses (scalar inversion; impulse response number III). An exponential window with time constant $\tau=0.06$ s is applied to the simulated impulse response $\hat{g}(t)$.
Table 2: Dereverberation ratios in dB of dereverberated impulse responses.
 | Non-windowed $DR(\infty)$ | Non-windowed $DR(100$ ms$)$ | Windowed $DR(\infty)$ | Windowed $DR(100$ ms$)$
---|---|---|---|---
I | $-8.95$ | $-9.03$ | $10.98$ | $11.45$
II | $-10.54$ | $-9.85$ | $10.56$ | $10.84$
III | $-10.11$ | $-10.04$ | $8.11$ | $8.30$
IV | $-8.88$ | $-8.60$ | $10.56$ | $10.84$
Figure 18 shows integrated energy decay curves of four different
dereverberated impulse responses with respect to the measured impulse
responses; table 3 lists the remainder reverberation times for scalar inversion
according to these decay curves.
Fig. 18: Integrated energy decay curves for scalar inversion. Solid line is measured response; dashed line is dereverberated response.
Table 3: Remainder reverberation times $T_{10}$, $T_{20}$, and $T_{60}$, in seconds, of measured and dereverberated impulse responses; scalar inversion.
 | Measured $T_{10}$ | Measured $T_{20}$ | Measured $T_{60}$ | Dereverberated $T_{10}$ | Dereverberated $T_{20}$ | Dereverberated $T_{60}$
---|---|---|---|---|---|---
I | $0.20$ | $0.39$ | $1.39$ | $0.02$ | $0.20$ | $1.21$
II | $0.18$ | $0.38$ | $1.31$ | $0.02$ | $0.18$ | $1.12$
III | $0.19$ | $0.39$ | $1.35$ | $0.04$ | $0.24$ | $1.19$
IV | $0.18$ | $0.36$ | $1.23$ | $0.02$ | $0.19$ | $1.26$
### 4.3 Matrix inversion with windowed simulation
Figures 19 and 20 depict $2\times 2$ matrix inversion results; dereverberation
ratios are in table 4. Remainder reverberation energy decay curve and
corresponding remainder reverberation times ($T_{10}$, $T_{20}$, and $T_{60}$)
are shown in Fig. 21 and table 5.
Fig. 19: Linear plots of measured and dereverberated impulse responses
($2\times 2$ matrix inversion). An exponential window with time constant
$\tau=0.06$ s is applied to the simulated impulse response $\hat{g}(t)$.
Fig. 20: Decibel plots of measured and dereverberated impulse responses ($2\times 2$ matrix inversion). An exponential window with time constant $\tau=0.06$ s is applied to the simulated impulse response $\hat{g}(t)$.
Table 4: Dereverberation ratios in dB of dereverberated impulse responses; matrix inversion.
 | Microphone 1 | Microphone 2
---|---|---
$DR(\infty)$ | $8.64$ | $12.04$
$DR(100$ ms$)$ | $8.83$ | $12.19$
Fig. 21: Integrated energy decay curves for $2\times 2$ matrix inversion. Solid line is measured response; dashed line is dereverberated response.
Table 5: Remainder reverberation times $T_{10}$, $T_{20}$, and $T_{60}$, in seconds, of measured and dereverberated impulse responses; matrix inversion.
 | Measured, microphone 1 | Measured, microphone 2 | Dereverberated, microphone 1 | Dereverberated, microphone 2
---|---|---|---|---
$T_{10}$ | $0.19$ | $0.16$ | $0.02$ | $0.02$
$T_{20}$ | $0.39$ | $0.36$ | $0.26$ | $0.20$
$T_{60}$ | $1.37$ | $1.22$ | $1.26$ | $1.27$
## 5 Conclusion
This paper describes experiments in open-loop room response inversion for the
purpose of headphone-free virtual reality audio display. Room responses were
simulated using the image source method and inverted using a regularized
Fourier transform inversion with a modeling delay of 500 ms. Scalar room
response inversion provided an average of 10.1 dB of short-term
dereverberation (early echoes within 100 ms of the direct sound), and 10.4 dB
of long-term dereverberation. Matrix room response inversion (two inputs, two
outputs) provided an average of 10.5 dB short-term and 10.3 dB long-term
dereverberation.
## 6 Acknowledgments
This work was supported in part by a grant from the University of Illinois
Research Board, and in part by funding from the University of Illinois Beckman
Institute. We thank Hank Kaczmarski for hardware assistance.
## References
* [1] C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, ``Surround-screen projection-based virtual reality: the design and implementation of the CAVE,'' Proc. ACM Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH),135–142 (1993).
* [2] M. Miyoshi and Y. Kaneda, ``Inverse filtering of room acoustics,'' IEEE Trans. Acoustics, Speech and Signal Proc. 36(2), 145–152 (1988).
* [3] O. Kirkeby, P. A. Nelson, H. Hamada, and F. Orduna-Bustamente, ``Fast deconvolution of multichannel systems using regularization,'' IEEE Trans. Speech and Audio Processing 6(2), 189–194 (1998).
* [4] J. B. Allen and D. A. Berkley, ``Image method for efficiently simulating small-room acoustics,'' J. Acoust. Soc. Am. 65(4), 912–915 (1979).
* [5] S. T. Neely and J. B. Allen, ``Invertibility of a room impulse response,'' J. Acoust. Soc. Am. 66(1), 165–169 (1979).
* [6] A. Omoto, C. Hiratsuka, H. Fujita, T. Fukushima, M. Nakahara, and K. Fujiwara, ``Similarity evaluation of room acoustic impulse responses: Visual and auditory impressions,'' J. Audio. Eng. Soc. 50(6), 451–457 (2002).
* [7] ISO 3382, Acoustics - Measurement of the reverberation time of rooms with reference to other acoustical parameters, ISO, 1997.
* [8] M. R. Schroeder, ``Integrated-impulse method measuring sound decay without using impulses,'' J. Acoust. Soc. Am. 66(2), 497–500 (1979).
* [9] D. D. Rife and J. Vanderkooy, ``Transfer-function measurement with maximum-length sequences,'' J. Audio. Eng. Soc. 37(6), 419–444 (1989).
* [10] J. Vanderkooy, ``Aspects of MLS measuring systems,'' J. Audio. Eng. Soc. 42(4), 219–231 (1994).
* [11] G.-B. Stan, J.-J. Embrechts, and D. Archambeau, ``Comparison of different impulse response measurement techniques,'' J. Audio. Eng. Soc. 50(4), 249–262 (2002).
* [12] C. Dunn and M. O. Hawksford, ``Distortion immunity of MLS-derived impulse response measurements,'' J. Audio. Eng. Soc. 41(5), 314–335 (1993).
* [13] N. Ream, ``Nonlinear identification using inverse-repeat m sequences,'' Proc. IEE (London) 117(1), 213–218 (1970).
* [14] P. A. N. Briggs and K. R. Godfrey, ``Pseudorandom signals for the dynamic analysis of multivariable systems,'' Proc. IEE 113, 1259–1267 (1966).
* [15] H. R. Simpson, ``Statistical properties of a class of pseudorandom sequence,'' Proc. IEE (London) 113, 2075–2080 (1966).
* [16] C. Bleakley and R. Scaife, ``New formulas for predicting the accuracy of acoustical measurements made in noisy environments using the averaged m-sequence correlation technique,'' J. Acoust. Soc. Am. 97(2), 1329–1332 (1995).
* [17] H. Alrutz and M. R. Schroeder, ``A fast Hadamard transform method for the evaluation of measurements using pseudorandom test signals,'' Proc. 11th Int. Congress on Acoustics (Paris) 6, 235–238 (1983).
* [18] M. Cohn and A. Lempel, ``On fast M-sequence transforms,'' IEEE Trans. Inform. Theory 23(1), 135–137 (1977).
* [19] W. D. T. Davies, ``Generation and properties of maximum-length sequences, part 1,'' Control 10(96), 302–304 (1966).
* [20] W. D. T. Davies, ``Generation and properties of maximum-length sequences, part 2,'' Control 10(97), 364–365 (1966).
* [21] W. D. T. Davies, ``Generation and properties of maximum-length sequences, part 3,'' Control 10(98), 431–433 (1966).
* [22] D. D. Rife, ``Modulation transfer function measurement with maximum-length sequences,'' J. Audio. Eng. Soc. 40(10), 779–790 (1992).
* [23] M. Vorländer and M. Kob, ``Practical aspects of MLS measurements in building acoustics,'' Appl. Acoustics 52(3-4), 239–258 (1997).
* [24] R. Burkard, Y. Shi, and K. E. Hecox, ``A comparison of maximum length and Legendre sequences for the derivation of brain-stem auditory-evoked responses at rapid rates of stimulation,'' J. Acoust. Soc. Am. 87(4), 1656–1664 (1990).
* [25] N. Aoshima, ``Computer-generated pulse signal applied for sound measurement,'' J. Acoust. Soc. Am. 69(5), 1484–1488 (1981).
* [26] A. J. Berkhout, D. de Vries, and M. M. Boone, ``A new method to acquire impulse responses in concert halls,'' J. Acoust. Soc. Am. 68(1), 179–183 (1980).
* [27] A. Farina, ``Simultaneous measurement of impulse response and distortion with a swept-sine technique,'' J. Audio. Eng. Soc. 48, 350 (2000).
* [28] R. Rabenstein and A. Zayati, ``A direct method to computational acoustics,'' IEEE Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP 99) 2, 957–960 (1999).
* [29] T. Schetelig and R. Rabenstein, ``Simulation of three-dimensional sound propagation with multidimensional wave digital filters,'' IEEE Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP 98) 6, 3537–3540 (1998).
* [30] A. Krokstad, S. Strøm, and S. Sørsdal, ``Calculating the acoustical room response by the use of a ray tracing technique,'' J. Sound Vib. 8(1), 118–125 (1968).
* [31] J. Borish, ``Extension of the image model to arbitrary polyhedra,'' J. Acoust. Soc. Am. 75(6), 1827–1836 (1984).
* [32] H. Lee and B.-H. Lee, ``An efficient algorithm for the image model technique,'' Applied Acoustics 24, 87–115 (1988).
* [33] P. A. Nelson, H. Hamada, and S. J. Elliott, ``Adaptive inverse filters for stereophonic sound reproduction,'' IEEE Trans. Signal Processing 40(7), 1621–1632 (1992).
* [34] Database for assessing the annoyance of the noise of small arms, United States Army Environmental Hygiene Agency, 1983.
* [35] O. Cramer, ``The variation of the specific heat ratio and the speed of sound in air with temperature, pressure, humidity, and CO2 concentration,'' J. Acoust. Soc. Am. 93(5), 2510–2514 (1993).
* [36] L. E. Kinsler, A. R. Frey, A. B. Coppens, and J. V. Sanders, Fundamentals of Acoustics, 4th ed. (Wiley and Sons, Inc., New York, 2000).
* [37] CRC Handbook of Chemistry and Physics, 79th ed., edited by D. R. Lide (CRC Press, Boca Raton, 1998).
* [38] M. R. Schroeder, ``New method of measuring reverberation time,'' J. Acoust. Soc. Am. 37(3), 409–412 (1965).
|
arxiv-papers
| 2011-06-06T21:03:31 |
2024-09-04T02:49:19.416704
|
{
"license": "Public Domain",
"authors": "Bowon Lee, Camille Goudeseune, Mark A. Hasegawa-Johnson",
"submitter": "Camille Goudeseune",
"url": "https://arxiv.org/abs/1106.1199"
}
|
1106.1352
|
# On the equivalence of stochastic completeness, Liouville and Khas’minskii
condition in linear and nonlinear setting
Luciano Mari Luciano Mari
Dipartimento di Matematica
Università degli studi di Milano
via Saldini 50
20133 Milano, Italy, EU luciano.mari@unimi.it and Daniele Valtorta Daniele
Valtorta
Dipartimento di Matematica
Università degli studi di Milano
via Saldini 50
20133 Milano, Italy, EU danielevaltorta@gmail.com
(Date: August 27, 2024)
###### Abstract.
Set in a Riemannian environment, the aim of this paper is to present and discuss
some equivalent characterizations of the Liouville property relative to
special operators, in some sense modeled after the $p$-Laplacian with
potential. In particular, we discuss the equivalence between the Liouville
property and the Khas’minskii condition, i.e. the existence of an exhaustion
function which is also a supersolution of the operator outside a compact
set. This generalizes a previous result obtained by one of the authors and
answers a question in [26].
###### Key words and phrases:
Khas’minskii condition, stochastic completeness, parabolicity
###### 2010 Mathematics Subject Classification:
Primary 31C12 (potential theory on Riemannian manifolds); Secondary: 35B53
(Liouville theorems), 58J65 (stochastic equations and processes on manifolds),
58J05 (elliptic equations on manifolds)
## 1\. Introduction
In what follows, let $M$ denote a connected Riemannian manifold of dimension
$m$, with no boundary. We stress that no completeness assumption is required.
The relationship between the probabilistic notions of stochastic completeness
and parabolicity (respectively the non-explosion and the recurrence of the
Brownian motion on $M$) and function-theoretic properties of $M$ has been the
subject of an active area of research in the last decades. Deep connections
with the heat equation, Liouville type theorems, capacity theory and spectral
theory have been described, for instance, in the beautiful survey [8]. In [23]
and [22], the authors showed that stochastic completeness and parabolicity are
also related to weak maximum principles at infinity. This characterization
reveals to be fruitful in investigating many kinds of geometric problems (for
a detailed account, see [24]). Among the various conditions equivalent to
stochastic completeness, the following two are of prior interest to us:
* -
[_$L^{\infty}$ -Liouville_] for some (any) $\lambda>0$, the sole bounded, non-
negative, continuous weak solution of $\Delta u-\lambda u\geq 0$ is $u=0$;
* -
[_weak maximum principle_] for every $u\in C^{2}(M)$ with
$u^{\star}=\sup_{M}u<+\infty$, and for every $\eta<u^{\star}$,
(1) $\inf_{\Omega_{\eta}}\Delta u\leq 0,\qquad\text{where }\
\Omega_{\eta}=u^{-1}\\{(\eta,+\infty)\\}.$
R.Z. Khas’minskii [11] has found the following condition for stochastic
completeness. We recall that $w\in C^{0}(M)$ is called an exhaustion if it has
compact sublevels $w^{-1}((-\infty,t])$, $t\in\mathbb{R}$.
###### Theorem 1.1 (Khas’minskii test, [11]).
Suppose that there exists a compact set $K$ and a function $w\in C^{0}(M)\cap
C^{2}(M\setminus K)$ satisfying for some $\lambda>0$:
$(i)\quad w\ \text{is an exhaustion;}\qquad(ii)\quad\Delta w-\lambda w\leq 0\
\text{ on }M\backslash K.$
Then $M$ is stochastically complete.
A very similar characterization holds for the parabolicity of $M$. Namely,
among many others, parabolicity is equivalent to:
* -
every bounded, non-negative continuous weak solutions of $\Delta u\geq 0$ on
$M$ is constant;
* -
for every non-constant $u\in C^{2}(M)$ with $u^{\star}=\sup_{M}u<+\infty$, and
for every $\eta<u^{\star}$,
(2) $\inf_{\Omega_{\eta}}\Delta u<0,\qquad\text{where }\
\Omega_{\eta}=u^{-1}\\{(\eta,+\infty)\\}.$
Note that the first condition is precisely case $\lambda=0$ of the Liouville
property above. As for Khas’minskii type conditions, it has been proved by M.
Nakai [20] and Z. Kuramochi [15] that the parabolicity of $M$ is indeed
_equivalent_ to the existence of a so-called Evans potential, that is, an
exhaustion, harmonic function $w$ defined outside a compact set $K$ and such
that $w=0$ on $\partial K$. To the best of our knowledge, an analogue of such
equivalence for stochastic completeness or for the nonlinear case has still to
be proved, and this is the starting point of the present work.
With some modifications, it is possible to define the Liouville property, the
Khas’minskii test and Evans potentials also for $p$-Laplacians or other
nonlinear operators, and the aim of this paper is to prove that in this more
general setting the Liouville property is equivalent to the Khas’minskii test,
answering in the affirmative a question raised in [26] (question 4.6).
After that, a brief discussion on the connection with appropriate definitions
of the weak maximum principle is included. The final section will be devoted
to the existence of Evans type potentials in the particular setting of
radially symmetric manifolds. To fix the ideas, we state the main theorem in
the “easy case” of the $p$-Laplacian, and then introduce the more general (and
more technical) operators to which our theorem applies. Recall that for a
function $u\in W^{1,p}_{\mathrm{loc}}(\Omega)$, the $p$-Laplacian $\Delta_{p}$
is defined weakly as:
(3) $\displaystyle\int_{\Omega}\phi\Delta_{p}u=-\int_{\Omega}\left|\nabla
u\right|^{p-2}\left\langle\nabla u\middle|\nabla\phi\right\rangle$
where $\phi\in C^{\infty}_{c}(\Omega)$ and integration is with respect to the
Riemannian measure.
###### Theorem 1.2.
Let $M$ be a Riemannian manifold and let $p>1$, $\lambda\geq 0$. Then, the
following conditions are equivalent.
* $(W)$
The weak maximum principle for $C^{0}$ holds for $\Delta_{p}$, that is, for
every non-constant $u\in C^{0}(M)\cap W^{1,p}_{loc}(M)$ with
$u^{\star}=\sup_{M}u<\infty$ and for every $\eta<u^{\star}$ we have:
(4) $\displaystyle\inf_{\Omega_{\eta}}\Delta_{p}u\leq 0\qquad(<0\text{ if
}\lambda=0)$
weakly on $\Omega_{\eta}=u^{-1}\\{(\eta,+\infty)\\}$.
* $(L)$
Every non-negative, $L^{\infty}\cap W^{1,p}_{\mathrm{loc}}$ solution $u$ of
$\Delta_{p}u-\lambda u^{p-1}\geq 0$ is constant (hence zero if $\lambda>0$).
* $(K)$
For every compact $K$ with smooth boundary, there exists an exhaustion $w\in
C^{0}(M)\cap W^{1,p}_{\mathrm{loc}}(M)$ such that
$w>0\ \text{ on }M\backslash K,\quad w=0\ \text{ on
}K,\quad\Delta_{p}w-\lambda w^{p-1}\leq 0\ \text{ on }M\setminus K.$
Up to some minor changes, the implications $(W)\Leftrightarrow(L)$ and
$(K)\Rightarrow(L)$ have been shown in [25], Theorem A, where it is also
proved that, in $(W)$ and $(L)$, $u$ can be equivalently restricted to the
class $C^{1}(M)$. In this respect, see also [26], Section 2. On the other
hand, the second author in [33] has proved that $(L)\Rightarrow(K)$ when
$\lambda=0$. The proof developed in this article covers both the case
$\lambda=0$ and $\lambda>0$, is easier and more straightforward and, above
all, does not depend on some features which are typical of the $p$-Laplacian.
## 2\. Definitions and main theorems
###### Notational conventions.
_We set $\mathbb{R}^{+}=(0,+\infty)$, $\mathbb{R}^{+}_{0}=[0,+\infty)$, and
$\mathbb{R}^{-}$, $\mathbb{R}^{-}_{0}$ accordingly; for a function $u$ defined
on some set $\Omega$, $u^{\star}=\mathrm{esssup}_{\Omega}u$ and
$u_{\star}=\mathrm{essinf}_{\Omega}u$; we will write $K\Subset\Omega$ whenever
the set $K$ has compact closure in $\Omega$; $\mathrm{Lip}_{\mathrm{loc}}(M)$
denotes the class of locally Lipschitz functions on $M$; with
$u\in\mathrm{H\ddot{o}l}_{\mathrm{loc}}(M)$ we mean that, for every
$\Omega\Subset M$, $u\in C^{0,\alpha}(\Omega)$ for some $\alpha\in(0,1]$
possibly depending on $\Omega$. Finally, we will adopt the symbol
$Q\doteq\ldots$ to define the quantity $Q$ as $\ldots$._
In order for our techniques to work, we will consider quasilinear operators of
the following form. Let $A:TM\rightarrow TM$ be a Caratheodory map, that is, if
$\pi:TM\rightarrow M$ is the bundle projection, then $\pi\circ A=\pi$ and, moreover,
every representation $\tilde{A}$ of $A$ in local charts satisfies
* •
$\tilde{A}(x,\cdot)$ is continuous for a.e. $x\in M$;
* •
$\tilde{A}(\cdot,v)$ is measurable for every $v\in\mathbb{R}^{m}$.
Note that every continuous bundle map satisfies these assumptions.
Furthermore, let $B:M\times\mathbb{R}\rightarrow\mathbb{R}$ be of Caratheodory
type, that is, $B(\cdot,t)$ is measurable for every fixed $t\in\mathbb{R}$,
and $B(x,\cdot)$ is continuous for a.e. $x\in M$. We shall assume that there
exists $p>1$ such that, for each fixed open set $\Omega\Subset M$, the
following set of assumptions $\mathscr{S}$ is met:
(A1) $\displaystyle\left\langle A(X)\middle|X\right\rangle\geq a_{1}|X|^{p}\quad\forall\ X\in TM$
(A2) $\displaystyle|A(X)|\leq a_{2}|X|^{p-1}\quad\forall\ X\in TM$
(Mo) $A$ is strictly monotone, i.e. $\left\langle A(X)-A(Y)\middle|X-Y\right\rangle\geq 0$ for every $x\in M$, $X,Y\in T_{x}M$, with equality if and only if $X=Y$
(B1) $\displaystyle|B(x,t)|\leq b_{1}+b_{2}|t|^{p-1}\quad\text{for }t\in\mathbb{R}$
(B2) for a.e. $x$, $B(x,\cdot)$ is monotone non-decreasing
(B3) for a.e. $x$, $B(x,t)t\geq 0$,
where $a_{1},a_{2},b_{1},b_{2}$ are positive constants possibly depending on
$\Omega$. As explained in Remark 4.2, we could state our main theorem relaxing
condition (B1) to:
(B1+) $\displaystyle|B(x,t)|\leq b(t)\quad\text{for }t\in\mathbb{R}$
for some positive and finite function $b$. However, for the moment we assume (B1)
to avoid some notational complications, and we explain later how to extend
our result to this more general case.
We define the operators
$\mathcal{F},\mathcal{A},\mathcal{B}:W^{1,p}(\Omega)\rightarrow
W^{1,p}(\Omega)^{\star}$ by setting
(7) $\begin{array}[]{llcl}\mathcal{A}:&u&\longmapsto&\Big{[}\phi\in
W^{1,p}(\Omega)\longmapsto\int_{\Omega}\left\langle A(\nabla
u)\middle|\nabla\phi\right\rangle\Big{]}\\\\[14.22636pt]
\mathcal{B}:&u&\longmapsto&\Big{[}\phi\in
W^{1,p}(\Omega)\longmapsto\int_{\Omega}B(x,u(x))\phi\Big{]}\\\\[11.38092pt]
\mathcal{F}\doteq\mathcal{A}+\mathcal{B}.&&&\end{array}$
With these assumptions, it is easily verified that both $\mathcal{A}$ and
$\mathcal{B}$ take values in the space of continuous linear functionals on $W^{1,p}(\Omega)$ for
each fixed $\Omega\Subset M$. We define the operators $L_{\mathcal{A}}$,
$L_{\mathcal{F}}$ according to the distributional equality:
$\int_{M}\phi L_{\mathcal{A}}u\doteq-<\mathcal{A}(u),\phi>,\quad\int_{M}\phi
L_{\mathcal{F}}u\doteq-<\mathcal{F}(u),\phi>$
for every $u\in W^{1,p}_{\mathrm{loc}}(M)$ and $\phi\in C^{\infty}_{c}(M)$,
where $<\,,\,>$ denotes the duality pairing. In other words, in the weak sense
$L_{\mathcal{F}}u=\mathrm{div}(A(\nabla u))-B(x,u)\qquad\forall\ u\in
W^{1,p}_{\mathrm{loc}}(M).$
###### Example 2.1.
_The $p$-Laplacian defined in (3), corresponding to the choices
$A(X)\doteq|X|^{p-2}X$ and $B(x,t)\doteq 0$, satisfies all the assumptions in
$\mathscr{S}$ for each $\Omega\Subset M$. Another admissible choice of $B$ is
$B(x,t)\doteq\lambda|t|^{p-2}t$, where $\lambda\geq 0$. For such a choice,_
(8) $L_{\mathcal{F}}u=\Delta_{p}u-\lambda|u|^{p-2}u$
_is the operator of Theorem 1.2. We stress that, however, in $\mathscr{S}$ we
require no homogeneity condition either on $A$ or on $B$._
###### Example 2.2.
_More generally, as in[25] and in [29], for each function $\varphi\in
C^{0}(\mathbb{R}^{+}_{0})$ such that $\varphi>0$ on $\mathbb{R}^{+}$,
$\varphi(0)=0$, and for each symmetric, positive definite $2$-covariant
continuous tensor field $h\in\Gamma(\mathrm{Sym}_{2}(TM))$, we can consider
differential operators of type_
$L_{\varphi,h}u\doteq\mathrm{div}\left(\frac{\varphi(|\nabla u|)}{|\nabla
u|}h(\nabla u,\cdot)^{\sharp}\right),$
_where $\sharp$ is the musical isomorphism. Due to the continuity and the
strict positivity of $h$, the conditions (A1) and (A2) in $\mathscr{S}$ can be
rephrased as_
(9) $a_{1}t^{p-1}\leq\varphi(t)\leq a_{2}t^{p-1}.$
_Furthermore, if $\varphi\in C^{1}(\mathbb{R}^{+})$, a sufficient condition
for (Mo) to hold is given by_
(10)
$\frac{\varphi(t)}{t}h(X,X)+\left(\varphi^{\prime}(t)-\frac{\varphi(t)}{t}\right)\left\langle
Y\middle|X\right\rangle h(Y,X)>0$
_for every $X,Y$ with $|X|=|Y|=1$. The reason why it implies the strict
monotonicity can be briefly justified as follows: for $L_{\varphi,h}$, (Mo) is
equivalent to requiring_
(11)
$\frac{\varphi(|X|)}{|X|}h(X,X-Y)-\frac{\varphi(|Y|)}{|Y|}h(Y,X-Y)>0\qquad\text{if
}\ X\neq Y.$
_In the nontrivial case when $X$ and $Y$ are not proportional, the segment
$Z(t)=Y+t(X-Y)$, $t\in[0,1]$ does not pass through zero, so that_
$F(t)=\frac{\varphi(|Z|)}{|Z|}h(Z,Z^{\prime})$
_is $C^{1}$. Condition (10) implies that $F^{\prime}(t)>0$. Hence, integrating
we get $F(1)>F(0)$, that is, (11). We observe that, if $h$ is the metric
tensor, the strict monotonicity is satisfied whenever $\varphi$ is strictly
increasing on $\mathbb{R}^{+}$ even without any differentiability assumption
on $\varphi$._
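_Explicitly, writing $Z'=X-Y$, a direct computation gives
$F'(t)=\frac{\varphi(|Z|)}{|Z|}\,h(Z',Z')+\left(\varphi'(|Z|)-\frac{\varphi(|Z|)}{|Z|}\right)\frac{\left\langle Z\middle|Z'\right\rangle}{|Z|^{2}}\,h(Z,Z'),$
which equals $|Z'|^{2}$ times the left-hand side of (10) evaluated at $t=|Z|$, $X=Z'/|Z'|$ and $Y=Z/|Z|$; this is the computation behind the claim that (10) implies $F'(t)>0$._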
###### Example 2.3.
_Even more generally, if $A$ is of class $C^{1}$, a sufficient condition for
the monotonicity of $A$ has been considered in [1], Section 5 (see the proof
of Theorem 5.3). Indeed, the authors required that, for every $x\in M$ and
every $X\in T_{x}M$, the differential of the map $A_{x}:T_{x}M\rightarrow
T_{x}M$ at the point $X\in T_{x}M$ is positive definite as a linear
endomorphism of $T_{X}(T_{x}M)$. This is the analogue, for Riemannian
manifolds, of Proposition 2.4.3 in [28]._
We recall the concept of subsolutions and supersolutions for
$L_{\mathcal{F}}$.
###### Definition 2.4.
We say that $u\in W^{1,p}_{\mathrm{loc}}(M)$ solves $L_{\mathcal{F}}u\geq 0$
(resp. $\leq 0$, $=0$) weakly on $M$ if, for every non-negative $\phi\in
C^{\infty}_{c}(M)$, $<\mathcal{F}(u),\phi>\leq 0$, (resp., $\geq 0$, $=0$).
Explicitly,
$\int_{M}\left\langle A(\nabla
u)\middle|\nabla\phi\right\rangle+\int_{M}B(x,u)\phi\leq 0\ \ \ \text{(resp.,
}\geq 0,\,=0\text{).}$
Solutions of $L_{\mathcal{F}}u\geq 0$ (resp, $\leq 0$, $=0$) are called (weak)
subsolutions (resp. supersolutions, solutions) for $L_{\mathcal{F}}$.
###### Remark 2.5.
_When defining solutions of $L_{\mathcal{F}}u=0$, we can drop the requirement
that the test function $\phi$ is non-negative. This can be easily seen by
splitting $\phi$ into its positive and negative parts and using a density
argument._
###### Remark 2.6.
_Note that, since $B$ is Caratheodory, (B3) implies that $B(x,0)=0$ a.e. on
$M$. Therefore, the constant function $u=0$ solves $L_{\mathcal{F}}u=0$. Again
by (B3), positive constants are supersolutions._
Following [25] and [26], we present the analogues of the
$L^{\infty}$-Liouville property and the Khas’minskii property for the
nonlinear operators constructed above.
###### Definition 2.7.
Let $M$ be a Riemannian manifold, and let
$\mathcal{A},\mathcal{B},\mathcal{F}$ be as above.
* -
We say that the $L^{\infty}$-Liouville property $(L)$ for $L^{\infty}$
(respectively, $\mathrm{H\ddot{o}l}_{\mathrm{loc}}$) functions holds for the
operator $L_{\mathcal{F}}$ if every $u\in L^{\infty}(M)\cap W^{1,p}_{loc}(M)$
(respectively, $\mathrm{H\ddot{o}l}_{\mathrm{loc}}(M)\cap
W^{1,p}_{\mathrm{loc}}(M)$) essentially bounded, satisfying $u\geq 0$ and
$L_{\mathcal{F}}u\geq 0$ is constant.
* -
We say that the Khas’minskii property $(K)$ holds for $L_{\mathcal{F}}$ if,
for every pair of open sets $K\Subset\Omega\Subset M$ with Lipschitz boundary,
and every $\varepsilon>0$, there exists an exhaustion function
$w\in C^{0}(M)\cap W^{1,p}_{\mathrm{loc}}(M)$
such that
$\begin{array}[]{ll}w>0\ \text{ on }M\backslash K,&\quad w=0\ \text{ on
}K,\\\\[5.69046pt] w\leq\varepsilon\ \text{ on }\Omega\backslash K,&\quad
L_{\mathcal{F}}w\leq 0\ \text{on }M\backslash K.\end{array}$
A function $w$ with such properties will be called a Khas’minskii potential
relative to the triple $(K,\Omega,\varepsilon)$.
* -
A Khas’minskii potential $w$ relative to some triple $(K,\Omega,\varepsilon)$
is called an Evans potential if $L_{\mathcal{F}}w=0$ on $M\backslash K$. The
operator $L_{\mathcal{F}}$ has the Evans property $(E)$ if there exists an
Evans potential for every triple $(K,\Omega,\varepsilon)$.
The main result in this paper is the following
###### Theorem 2.8.
Let $M$ be a Riemannian manifold, and let $A,B$ satisfy the set of assumptions
$\mathscr{S}$, with (B1+) instead of (B1). Define
$\mathcal{A},\mathcal{B},\mathcal{F}$ as in (7), and
$L_{\mathcal{A}},L_{\mathcal{F}}$ accordingly. Then, the conditions $(L)$ for
$\mathrm{H\ddot{o}l}_{\mathrm{loc}}$, $(L)$ for $L^{\infty}$ and $(K)$ are
equivalent.
###### Remark 2.9.
_It should be observed that if $L_{\mathcal{F}}$ is homogeneous, as in (8),
the Khas’minskii condition considerably simplifies as in $(K)$ of Theorem 1.2.
Indeed, the fact that $\delta w$ is still a supersolution for every
$\delta>0$, and the continuity of $w$, allow one to get rid of $\Omega$ and
$\varepsilon$._
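_For instance, for the operator (8), $A(\delta X)=\delta^{p-1}A(X)$ and $B(x,\delta t)=\delta^{p-1}B(x,t)$ for every $\delta>0$, so that, weakly,
$L_{\mathcal{F}}(\delta w)=\mathrm{div}\big(A(\delta\nabla w)\big)-B(x,\delta w)=\delta^{p-1}L_{\mathcal{F}}w\leq 0\qquad\text{on }M\setminus K;$
since $w$ is continuous and $\Omega$ is relatively compact, choosing $\delta$ small enough makes $\delta w\leq\varepsilon$ on $\Omega$, so the pair $(\Omega,\varepsilon)$ plays no role._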
Next, in Section 5 we briefly describe how $(L)$ and $(K)$ are
related to the concepts of weak maximum principle and parabolicity. Such
relationship has been deeply investigated in [24], [25], whose ideas and
proofs we will follow closely. With the aid of Theorem 2.8, we will be able to
prove the next Theorem 2.12. To state it, we shall restrict to a particular
class of potentials $B(x,t)$, those of the form $B(x,t)=b(x)f(t)$ with
(12) $\begin{array}[]{l}\displaystyle b,b^{-1}\in
L^{\infty}_{\mathrm{loc}}(M),\quad b>0\text{ a.e. on }M;\\\\[5.69046pt] f\in
C^{0}(\mathbb{R}),\quad f(0)=0,\quad f\text{ is non-decreasing on
}\mathbb{R}.\end{array}$
Clearly, $B$ satisfies (B1+), (B2) and (B3). As for $A$, we require (A1) and
(A2), as before.
###### Definition 2.10.
Let $A,B$ be as above, define $\mathcal{A},\mathcal{B},\mathcal{F}$ as in (7)
and $L_{\mathcal{A}},L_{\mathcal{F}}$ accordingly.
* $(W)$
We say that $b^{-1}L_{\mathcal{A}}$ satisfies the weak maximum principle for
$C^{0}$ functions if, for every $u\in C^{0}(M)\cap W^{1,p}_{\mathrm{loc}}(M)$
such that $u^{\star}<+\infty$, and for every $\eta<u^{\star}$,
$\inf_{\Omega_{\eta}}b^{-1}L_{\mathcal{A}}u\leq 0\qquad\text{weakly on }\
\Omega_{\eta}=u^{-1}\\{(\eta,+\infty)\\}.$
* $(W_{\mathrm{pa}})$
We say that $b^{-1}L_{\mathcal{A}}$ is parabolic if, for every non-constant
$u\in C^{0}(M)\cap W^{1,p}_{\mathrm{loc}}(M)$ such that $u^{\star}<+\infty$,
and for every $\eta<u^{\star}$,
$\inf_{\Omega_{\eta}}b^{-1}L_{\mathcal{A}}u<0\qquad\text{weakly on }\
\Omega_{\eta}=u^{-1}\\{(\eta,+\infty)\\}.$
* -
We say that $\mathcal{F}$ is of type $1$ if, in the potential $B(x,t)$, the
factor $f(t)$ satisfies $f>0$ on $\mathbb{R}^{+}$. Otherwise, when $f=0$ on
some interval $[0,T]$, $\mathcal{F}$ is called of type $2$.
###### Remark 2.11.
_$\inf_{\Omega_{\eta}}b^{-1}L_{\mathcal{A}}u\leq 0$ weakly means that, for
every $\varepsilon>0$, there exists $0\leq\phi\in
C^{\infty}_{c}(\Omega_{\eta})$, $\phi\not\equiv 0$ such that_
$-<\mathcal{A}(u),\phi>\ <\varepsilon\int b\phi.$
_Similarly, with $\inf_{\Omega_{\eta}}b^{-1}L_{\mathcal{A}}u<0$ weakly we mean
that there exist $\varepsilon>0$ and $0\leq\phi\in
C^{\infty}_{c}(\Omega_{\eta})$, $\phi\not\equiv 0$ such that
$-<\mathcal{A}(u),\phi>\ <-\varepsilon\int b\phi$. _
###### Theorem 2.12.
Under the assumptions (12) for $B(x,t)=b(x)f(t)$, and (A1), (A2) for $A$, the
following properties are equivalent:
* -
The operator $b^{-1}L_{\mathcal{A}}$ satisfies $(W)$;
* -
Property $(L)$ holds for some (hence any) operator $\mathcal{F}$ of type 1;
* -
Property $(K)$ holds for some (hence any) operator $\mathcal{F}$ of type 1;
Furthermore, under the same assumptions, the following properties are equivalent:
* -
The operator $b^{-1}L_{\mathcal{A}}$ is parabolic;
* -
Property $(L)$ holds for some (hence any) operator $\mathcal{F}$ of type 2;
* -
Property $(K)$ holds for some (hence any) operator $\mathcal{F}$ of type 2.
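For instance, choosing $A(X)=|X|^{p-2}X$, $b\equiv 1$ and $f(t)=|t|^{p-2}t$ gives an operator $\mathcal{F}$ of type 1, and the first chain of equivalences recovers Theorem 1.2 in the case $\lambda>0$; the choice $f\equiv 0$ gives an operator of type 2, and the second chain recovers the case $\lambda=0$, in which $(W)$ becomes the strict inequality of $(W_{\mathrm{pa}})$, that is, the parabolicity of $\Delta_{p}$.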
In the final Section 6, we address the question of whether $(W)$, $(K)$, $(L)$
are equivalent to the Evans property $(E)$. Indeed, it should be observed
that, in Theorem 2.8, no growth control on $B$ as a function of $t$ is
required at all. On the contrary, as we will see, the validity of the Evans
property forces some precise upper bound for its growth. To better grasp what
we shall expect, we will restrict to the case of radially symmetric manifolds.
For the statements of the main results, we refer the reader directly to
Section 6.
## 3\. Technical tools
In this section we introduce some technical tools, such as the obstacle
problem, that will be crucial to the proof of our main theorems. In doing so,
a number of basic results from the literature are recalled. We have decided to
include a full proof of those results for which we have not found a reference
covering the situation at hand. Our aim is to keep the paper basically self-
contained, and to give the non-expert reader interested in this topic a brief
overview of the standard technical tricks as well. Throughout this section, we
will always assume that the assumptions in $\mathscr{S}$ are satisfied, unless
otherwise explicitly stated. First, we state some basic results on subsolutions
and supersolutions, such as the comparison principle, which follows from the
monotonicity of $A$ and $B$.
###### Proposition 3.1.
Assume $w$ and $s$ are a super and a subsolution defined on $\Omega$. If
$\min\\{w-s,0\\}\in W^{1,p}_{0}(\Omega)$, then $w\geq s$ a.e. in $\Omega$.
###### Proof.
This result and its proof, which follows quite easily by using a suitable test
function in the definition of supersolution, are standard in potential theory.
For a detailed proof see [1], Theorem 4.1. ∎
Next, we observe that $A$, $B$ satisfy all the assumptions for the
subsolution-supersolution method in [14] to be applicable.
###### Theorem 3.2 ([14], Theorems 4.1, 4.4 and 4.7).
Let $\phi_{1},\phi_{2}\in L^{\infty}_{\mathrm{loc}}\cap
W^{1,p}_{\mathrm{loc}}$ be, respectively, a subsolution and a supersolution
for $L_{\mathcal{F}}$ on $M$, and suppose that $\phi_{1}\leq\phi_{2}$ a.e. on
$M$. Then, there is a solution $u\in L^{\infty}_{\mathrm{loc}}\cap
W^{1,p}_{\mathrm{loc}}$ of $L_{\mathcal{F}}u=0$ satisfying $\phi_{1}\leq
u\leq\phi_{2}$ a.e. on $M$.
A fundamental property is the strong maximum principle, which follows from the
next Harnack inequality.
###### Theorem 3.3 ([28], Theorems 7.1.2, 7.2.1 and 7.4.1).
Let $u\in W^{1,p}_{\mathrm{loc}}(M)$ be a non-negative solution of
$L_{\mathcal{A}}u\leq 0$. Let the assumptions in $\mathscr{S}$ be satisfied.
Fix a relatively compact open set $\Omega\Subset M$.
* (i)
Suppose that $1<p\leq m$, where $m=\mathrm{dim}M$. Then, for every ball
$B_{4R}\subset\Omega$ and for every $s\in(0,(p-1)m/(m-p))$, there exists a
constant $C$ depending on $R$, on the geometry of $B_{4R}$, on $m$ and on the
parameters $a_{1},a_{2}$ in $\mathscr{S}$ such that
$\left\|u\right\|_{L^{s}(B_{2R})}\leq
C\Big{(}\mathrm{essinf}_{B_{2R}}u\Big{)}.$
* (ii)
Suppose that $p>m$. Then, for every ball $B_{4R}\subset\Omega$, there exists a
constant $C$ depending on $R$, on the geometry of $B_{4R}$, on $m$ and on the
parameters $a_{1},a_{2}$ in $\mathscr{S}$ such that
$\mathrm{esssup}_{B_{R}}u\leq C\Big{(}\mathrm{essinf}_{B_{R}}u\Big{)}.$
In particular, for every $p>1$, each non-negative solution $u$ of
$L_{\mathcal{A}}u\leq 0$ on $M$ is such that either $u=0$ on $M$ or
$\mathrm{essinf}_{\Omega}u>0$ for every relatively compact set $\Omega$.
###### Remark 3.4.
_We spend a few words to comment on the Harnack inequalities quoted from [28]. In
our assumptions $\mathscr{S}$, the functions
$\bar{a}_{2},\bar{a},b_{1},b_{2},b$ in Chapter 7, (7.1.1) and (7.1.2) and the
function $a$ in the monotonicity inequality (6.1.2) can be chosen to be
identically zero. Thus, in Theorems 7.1.2 and 7.4.1 the quantity $k(R)$ is
zero. This gives no non-homogeneous term in the Harnack inequality, which is
essential for us. For this reason, we cannot weaken (A2) to_
$|A(X)|\leq a_{2}|X|^{p-1}+\bar{a}$
_locally on $\Omega$, since the presence of non-zero $\bar{a}$ implies that
$k(R)>0$. It should be observed that Theorem 7.1.2 is only stated for $1<p<m$
but, as observed at the beginning of Section 7.4, the proof can be adapted to
cover the case $p=m$._
###### Remark 3.5.
_In the rest of the paper, we will only use the fact that either $u\equiv 0$
or $u>0$ on $M$, that is, the strong maximum principle. It is worth observing
that, for the operators $L_{\mathcal{A}}=L_{\varphi,h}$ described in Example
2.2, very general strong maximum principles for $C^{1}$ or
$\mathrm{Lip}_{\mathrm{loc}}$ solutions of $L_{\varphi,h}u\leq 0$ on
Riemannian manifolds have been obtained in [27] (see Theorem 1.2 when $h$ is
the metric tensor, and Theorems 5.4 and 5.6 for the general case). In
particular, if $h$ is the metric tensor, the sole requirements_
(13) $\varphi\in C^{0}(\mathbb{R}^{+}_{0}),\ \ \varphi(0)=0,\ \
\varphi>0\text{ on }\mathbb{R}^{+},\ \ \varphi\text{ is strictly increasing on
}\mathbb{R}^{+}$
_are enough for the strong maximum principle to hold for $C^{1}$ solutions of
$L_{\varphi}u\leq 0$. Hence, for instance for $L_{\varphi}$, the two-sided
bound (9) on $\varphi$ can be weakened to any bound ensuring that the
comparison and strong maximum principles hold, the subsolution-supersolution
method is applicable and the obstacle problem has a solution. For instance,
besides (13), the requirement_
(14) $\varphi(0)=0,\quad a_{1}t^{p-1}\leq\varphi(t)\leq a_{2}t^{p-1}+a_{3}$
_is enough for Proposition 3.1 and Theorem 3.2, and it also suffices for the obstacle
problem to admit a unique solution, as the reader can infer from the proof of
the next Theorem 3.11._
###### Remark 3.6.
_Regarding the above observation, if $\varphi$ is merely continuous then even
solutions of $L_{\varphi}u=0$ are not expected to be $C^{1}$, nor even
$\mathrm{Lip}_{\mathrm{loc}}$. Indeed, in our assumptions the optimal
regularity for $u$ is (locally) some Hölder class, see the next Theorem 3.7.
If $\varphi$ is more regular, namely $\varphi\in C^{1}(\mathbb{R}^{+})$, then we can
avail ourselves of the regularity result in [32] to go even beyond the $C^{1}$
class. Indeed, under the assumptions_
$\gamma(k+t)^{p-2}\leq\min\left(\varphi^{\prime}(t),\frac{\varphi(t)}{t}\right)\leq\max\left(\varphi^{\prime}(t),\frac{\varphi(t)}{t}\right)\leq\Gamma(k+t)^{p-2},$
_for some $k\geq 0$ and some positive constants $\gamma\leq\Gamma$, each
solution of $L_{\varphi}u=0$ is in some class $C^{1,\alpha}$ on each
relatively compact set $\Omega$, where $\alpha\in(0,1)$ may depend on
$\Omega$. When $h$ is not the metric tensor, the condition on $\varphi$ and
$h$ is more complicated, and we refer the reader to [25] (in particular, see
(0.1) $(v)$, $(vi)$ p. 803)._
Some of the regularity properties that we need are summarized in the following
###### Theorem 3.7.
Let the assumptions in $\mathscr{S}$ be satisfied.
* (i)
[[18], Theorem 4.8] If $u$ solves $L_{\mathcal{F}}u\leq 0$ on some open set
$\Omega$, then there exists a representative in $W^{1,p}(\Omega)$ which is
lower semicontinuous.
* (ii)
[[16], Theorem 1.1 p. 251] If $u\in L^{\infty}(\Omega)\cap W^{1,p}(\Omega)$ is
a bounded solution of $L_{\mathcal{F}}u=0$ on $\Omega$, then there exists
$\alpha\in(0,1)$ depending on the geometry of $\Omega$, on the constants in
$\mathscr{S}$ and on $\|u\|_{L^{\infty}(\Omega)}$ such that $u\in
C^{0,\alpha}(\Omega)$. Furthermore, for every $\Omega_{0}\Subset\Omega$, there
exists $C=C(\gamma,\mathrm{dist}(\Omega_{0},\partial\Omega))$ such that
$\|u\|_{C^{0,\alpha}(\Omega_{0})}\leq C.$
###### Remark 3.8.
_As for $(i)$, it is worth observing that, in our assumptions, both $b_{0}$
and $a$ in the statement of [18], Theorem 4.8 are identically zero. Although we
will not need the following properties, it is worth noting that any $u$
solving $L_{\mathcal{F}}u\leq 0$ has a Lebesgue point everywhere and is also
$p$-finely continuous (where finite)._
Next, this simple elliptic estimate for locally bounded supersolutions is
useful:
###### Proposition 3.9.
Let $u$ be a bounded solution of $L_{\mathcal{F}}u\leq 0$ on $\Omega$. Then,
for every relatively compact, open set $\Omega_{0}\Subset\Omega$ there is a
constant $C>0$ depending on $p,\,\Omega,\,\Omega_{0}$ and on the parameters in
$\mathscr{S}$ such that
$\left\|\nabla u\right\|_{L^{p}(\Omega_{0})}\leq
C(1+\left\|u\right\|_{L^{\infty}(\Omega)})$
###### Proof.
Given a supersolution $u$, the monotonicity of $B$ assures that for every
positive constant $c$ also $u+c$ is a supersolution, so without loss of
generality we may assume that $u_{\star}\geq 0$. Thus,
$u^{\star}=\left\|u\right\|_{L^{\infty}(\Omega)}$.
Shortly, with $\|\cdot\|_{p}$ we denote the $L^{p}$ norm on $\Omega$, and with
$C$ we denote a positive constant depending on $p,\Omega$ and on the
parameters in $\mathscr{S}$, that may vary from place to place. Let $\eta\in
C^{\infty}_{c}(\Omega)$ be such that $0\leq\eta\leq 1$ on $\Omega$ and
$\eta=1$ on $\Omega_{0}$. Then, we use the non-negative function
$\phi=\eta^{p}(u^{\star}-u)$ in the definition of supersolution to get, after
some manipulation and from (A1), (A2) and (B3),
(15) $\displaystyle a_{1}\int_{\Omega}\eta^{p}|\nabla u|^{p}\leq pa_{2}\int_{\Omega}|\nabla u|^{p-1}\eta^{p-1}(u^{\star}-u)|\nabla\eta|+\int_{\Omega}\eta^{p}B(x,u)u^{\star}$
Using (B1), the integral involving $B$ is roughly estimated as follows:
(16)
$\int_{\Omega}\eta^{p}B(x,u)u^{\star}\leq|\Omega|(b_{1}u^{\star}+b_{2}(u^{\star})^{p})\leq
C(1+u^{\star})^{p},$
where the last inequality follows by applying Young's inequality to the first
term. As for the term involving $|\nabla\eta|$, using $(u^{\star}-u)\leq
u^{\star}$ and again Young's inequality
$|ab|\leq|a|^{p}/(p\varepsilon^{p})+\varepsilon^{q}|b|^{q}/q$ we obtain
(17) $\begin{array}[]{lcl}\displaystyle pa_{2}\int_{\Omega}\big{(}|\nabla
u|^{p-1}\eta^{p-1}(u^{\star}-u)|\nabla\eta|\big{)}&\leq&\displaystyle
pa_{2}\int_{\Omega}\big{(}|\nabla
u|^{p-1}\eta^{p-1}\big{)}\big{(}u^{\star}|\nabla\eta|\big{)}\\\\[11.38092pt]
&\leq&\frac{a_{2}}{\varepsilon^{p}}\left\|\eta\nabla
u\right\|_{p}^{p}+\frac{a_{2}p\varepsilon^{q}}{q}\left\|\nabla\eta\right\|^{p}_{p}(u^{\star})^{p}\end{array}$
Choosing $\varepsilon$ such that $a_{2}\varepsilon^{-p}=a_{1}/2$, inserting
(16) and (17) into (15) and rearranging we obtain
$\frac{a_{1}}{2}\left\|\eta\nabla u\right\|_{p}^{p}\leq
C\Big{[}1+(1+\left\|\nabla\eta\right\|_{p}^{p})(u^{\star})^{p}\Big{]}.$
Since $\eta=1$ on $\Omega_{0}$ and $\left\|\nabla\eta\right\|_{p}\leq C$,
taking the $p$-th root, the desired estimate follows. ∎
###### Remark 3.10.
_We observe that, when $B\neq 0$, we cannot apply the technique of [9], Lemma
3.27 to get a Caccioppoli-type inequality for bounded, non-negative
supersolutions. The reason is that subtracting a positive constant from a
supersolution does not yield, for general $B\neq 0$, a supersolution. It
should be stressed, however, that when $p\leq m$ a refined Caccioppoli
inequality for supersolutions has been given in [18], Theorem 4.4._
Now, we fix our attention on the obstacle problem. There are many
references on this subject (see for example [18], Chapter 5 or [9],
Chapter 3 in the case $B=0$). As often happens, notation can be quite
different from one reference to another. Here we try to adapt the conventions
used in [9], and for the reader’s convenience we also sketch some of the
proofs.
First of all, some definitions. Given a function
$\psi:\Omega\to\mathbb{R}\cup\\{\pm\infty\\}$, and given $\theta\in
W^{1,p}(\Omega)$, we define the closed convex set
$\displaystyle\mathcal{K}_{\psi,\theta}\doteq\\{f\in W^{1,p}(\Omega)\ |\ \
f\geq\psi\ \text{ a.e. and }\ f-\theta\in W^{1,p}_{0}(\Omega)\\}.$
Loosely speaking, $\theta$ determines the boundary condition for the solution
$u$, while $\psi$ is the “obstacle” function. Most of the time, the obstacle and the
boundary function coincide, and in this case we use the convention
$\mathcal{K}_{\theta}\doteq\mathcal{K}_{\theta,\theta}$. We say that
$u\in\mathcal{K}_{\psi,\theta}$ solves the obstacle problem if for every
$\varphi\in\mathcal{K}_{\psi,\theta}$:
(18) $\displaystyle<\mathcal{F}(u),\varphi-u>\ \geq 0.$
Note that for every nonnegative $\phi\in C^{\infty}_{c}(\Omega)$ the function
$\varphi=u+\phi$ belongs to $\mathcal{K}_{\psi,\theta}$, and this implies that
the solution to the obstacle problem is always a supersolution. Note also that
if we choose $\psi=-\infty$, we get the standard Dirichlet problem with
Sobolev boundary value $\theta$ for the operator $\mathcal{F}$: in fact, in
this case any test function $\phi\in C^{\infty}_{c}(\Omega)$ satisfies
$u\pm\phi\in\mathcal{K}_{\psi,\theta}$, and so the inequality in (18) becomes an
equality. Next, we address the solvability of the obstacle problem.
###### Theorem 3.11.
Under the assumptions $\mathscr{S}$, if $\Omega$ is relatively compact and
$\mathcal{K}_{\psi,\theta}$ is nonempty, then there exists a unique solution
to the corresponding obstacle problem.
###### Proof.
The proof is basically the same as in the case $B=0$, treated in [9], Appendix 1; in
particular, it is an application of Stampacchia's theorem, see for example
Corollary III.1.8 in [13]. To apply the theorem, we shall verify that
$\mathcal{K}_{\psi,\theta}$ is closed and convex, which follows
straightforwardly from its very definition, and that
$\mathcal{F}:W^{1,p}(\Omega)\to W^{1,p}(\Omega)^{\star}$ is weakly continuous,
monotone and coercive. Monotonicity is immediate by properties (Mo), (B2). To
prove that $\mathcal{F}$ is weakly continuous, we take a sequence $u_{i}\to u$
in $W^{1,p}(\Omega)$. By using (A2) and (B1), we deduce from (7) that
$|<\mathcal{F}(u_{i}),\phi>|\leq\Big{(}(a_{2}+b_{2})\left\|u_{i}\right\|^{p-1}_{W^{1,p}(\Omega)}+b_{1}|\Omega|^{\frac{p-1}{p}}\Big{)}\left\|\phi\right\|_{W^{1,p}(\Omega)}$
Hence the $W^{1,p}(\Omega)^{\star}$ norm of $\\{\mathcal{F}(u_{i})\\}$ is
bounded. Since $W^{1,p}(\Omega)^{\star}$ is reflexive, we can extract from any
subsequence a weakly convergent sub-subsequence
$\mathcal{F}(u_{k})\rightharpoonup z$ in $W^{1,p}(\Omega)^{\star}$, for some
$z$. From $u_{k}\rightarrow u$ in $W^{1,p}(\Omega)$, by Riesz theorem we get
(up to a further subsequence) $(u_{k},\nabla u_{k})\rightarrow(u,\nabla u)$
pointwise on $\Omega$, and since the maps
$X\longmapsto A(X),\qquad t\longmapsto B(x,t)$
are continuous, then necessarily $z=\mathcal{F}(u)$. Since this is true for
every weakly convergent subsequence $\\{\mathcal{F}(u_{k})\\}$, we deduce that
the whole $\mathcal{F}(u_{i})$ converges weakly to $\mathcal{F}(u)$. This
proves the weak continuity of $\mathcal{F}$.
Coercivity on $\mathcal{K}_{\psi,\theta}$ follows if we fix any
$\varphi\in\mathcal{K}_{\psi,\theta}$ and consider a diverging sequence
$\\{u_{i}\\}\subset\mathcal{K}_{\psi,\theta}$ and calculate:
$\displaystyle\frac{\left\langle\mathcal{F}(u_{i})-\mathcal{F}(\varphi)\middle|u_{i}-\varphi\right\rangle}{\left\|u_{i}-\varphi\right\|_{W^{1,p}(\Omega)}}\stackrel{{\scriptstyle\mathrm{(B2)}}}{{\geq}}\frac{\left\langle\mathcal{A}(u_{i})-\mathcal{A}(\varphi)\middle|u_{i}-\varphi\right\rangle}{\left\|u_{i}-\varphi\right\|_{W^{1,p}(\Omega)}}\stackrel{{\scriptstyle\mathrm{(A1),(A2)}}}{{\geq}}$
$\displaystyle\geq\frac{a_{1}\left(\left\|\nabla
u_{i}\right\|_{p}^{p}+\left\|\nabla\varphi\right\|_{p}^{p}\right)-a_{2}\left(\left\|\nabla
u_{i}\right\|_{p}^{p-1}\left\|\nabla\varphi\right\|_{p}+\left\|\nabla
u_{i}\right\|_{p}\left\|\nabla\varphi\right\|_{p}^{p-1}\right)}{\left\|u_{i}-\varphi\right\|_{W^{1,p}(\Omega)}}$
This last quantity tends to infinity as $i$ goes to infinity thanks to the
Poincaré inequality on $\Omega$:
$\displaystyle\left\|u_{i}-\varphi\right\|_{L^{p}(\Omega)}\leq C\left\|\nabla
u_{i}-\nabla\varphi\right\|_{L^{p}(\Omega)}$
which leads to $\left\|\nabla u_{i}\right\|_{L^{p}(\Omega)}\geq
C_{1}+C_{2}\left\|u_{i}\right\|_{W^{1,p}(\Omega)}$ for some constants
$C_{1},C_{2}$, where $C_{1}$ depends on
$\left\|\varphi\right\|_{W^{1,p}(\Omega)}$. ∎
A very important characterization of the solution of the obstacle problem is a
corollary of the following comparison result, whose proof closely follows that of
Proposition 3.1.
###### Proposition 3.12.
If $u$ is a solution to the obstacle problem $\mathcal{K}_{\psi,\theta}$, and
if $w$ is a supersolution such that
$\min\\{u,w\\}\in\mathcal{K}_{\psi,\theta}$, then $u\leq w$ a.e.
###### Proof.
Define $U=\\{x|\ u(x)>w(x)\\}$. Suppose by contradiction that $U$ has positive
measure. Since $u$ solves the obstacle problem, using (18) with the function
$\varphi=\min\\{u,w\\}\in\mathcal{K}_{\psi,\theta}$ we get
(19) $0\leq\ <\mathcal{F}(u),\varphi-u>\ =\int_{U}\left\langle A(\nabla
u)\middle|\nabla w-\nabla u\right\rangle+\int_{U}B(x,u)(w-u).$
On the other hand, applying the definition of supersolution to $w$, with the test
function $0\leq\phi=u-\min\\{u,w\\}\in W^{1,p}_{0}(\Omega)$ we get
(20) $0\leq\ <\mathcal{F}(w),\phi>\ =\int_{U}\left\langle A(\nabla
w)\middle|\nabla u-\nabla w\right\rangle+\int_{U}B(x,w)(u-w)$
Adding the two inequalities we get, by (Mo) and (B2),
$0\leq\int_{U}\left\langle A(\nabla u)-A(\nabla w)\middle|\nabla w-\nabla
u\right\rangle+\int_{U}\big{[}B(x,u)-B(x,w)\big{]}(w-u)\leq 0.$
Since $A$ is strictly monotone, $\nabla u=\nabla w$ a.e. on $U$, so that
$\nabla((u-w)_{+})=0$ a.e. on $\Omega$. Consequently, since $U$ has positive
measure, $u-w=c$ a.e. on $\Omega$, where $c$ is a positive constant. Since
$\min\\{u,w\\}\in\mathcal{K}_{\psi,\theta}$, we get $c=u-w=u-\min\\{u,w\\}\in
W^{1,p}_{0}(\Omega)$, contradiction. ∎
###### Corollary 3.13.
The solution $u$ to the obstacle problem in $\mathcal{K}_{\psi,\theta}$ is the
smallest supersolution in $\mathcal{K}_{\psi,\theta}$.
###### Proposition 3.14.
Let $w_{1},w_{2}\in W^{1,p}_{\mathrm{loc}}(M)$ be supersolutions for
$L_{\mathcal{F}}$. Then, $w\doteq\min\\{w_{1},w_{2}\\}$ is a supersolution.
Analogously, if $u_{1},u_{2}\in W^{1,p}_{\mathrm{loc}}(M)$ are subsolutions
for $L_{\mathcal{F}}$, then so is $u\doteq\max\\{u_{1},u_{2}\\}$.
###### Proof.
Consider a smooth exhaustion $\\{\Omega_{j}\\}$ of $M$, and the obstacle
problem $\mathcal{K}_{w}$ on $\Omega_{j}$. By Corollary 3.13 its solution is
necessarily $w_{|\Omega_{j}}$, and so $w$ is a supersolution, being locally the
solution of an obstacle problem. As for the second part of the statement,
define $\widetilde{A}(X)\doteq-A(-X)$ and $\widetilde{B}(x,t)\doteq-B(x,-t)$.
Then, $\widetilde{A},\widetilde{B}$ satisfy the set of assumptions
$\mathscr{S}$. Denote by $\widetilde{\mathcal{F}}$ the operator associated
with $\widetilde{A},\widetilde{B}$. Then, it is easy to see that
$L_{\mathcal{F}}u_{i}\geq 0$ if and only if
$L_{\widetilde{\mathcal{F}}}(-u_{i})\leq 0$, and to conclude it is enough to
apply the first part with operator $L_{\widetilde{\mathcal{F}}}$. ∎
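Since this trick will be used again, we record the elementary verification: $\left\langle\widetilde{A}(X)\middle|X\right\rangle=\left\langle A(-X)\middle|-X\right\rangle\geq a_{1}|X|^{p}$ and $|\widetilde{A}(X)|=|A(-X)|\leq a_{2}|X|^{p-1}$, and similar computations for (Mo) and for $\widetilde{B}$ show that all the conditions in $\mathscr{S}$ are preserved; moreover, for every $0\leq\phi\in C^{\infty}_{c}(M)$,
$<\widetilde{\mathcal{F}}(-u),\phi>\ =\int\left\langle-A(\nabla u)\middle|\nabla\phi\right\rangle+\int\big(-B(x,u)\big)\phi=-<\mathcal{F}(u),\phi>,$
so that $L_{\mathcal{F}}u\geq 0$ if and only if $L_{\widetilde{\mathcal{F}}}(-u)\leq 0$.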
The next version of the pasting lemma generalizes the previous proposition to
the case when one of the supersolutions is not defined on the whole $M$.
Before stating it, we need a preliminary definition. Given an open subset
$\Omega\subset M$, possibly with non-compact closure, we recall that the space
$W^{1,p}_{\mathrm{loc}}(\overline{\Omega})$ is the set of all functions $u$ on
$\Omega$ such that, for every relatively compact open set $V\Subset M$ that
intersects $\Omega$, $u\in W^{1,p}(\Omega\cap V)$. A function $u$ in this
space is, loosely speaking, well-behaved on relatively compact portions of
$\partial\Omega$, while no global control on the $W^{1,p}$ norm of $u$ is
assumed. Clearly, if $\Omega$ is relatively compact,
$W^{1,p}_{\mathrm{loc}}(\overline{\Omega})=W^{1,p}(\Omega)$. We identify the
following subset of $W^{1,p}_{\mathrm{loc}}(\overline{\Omega})$, which we call
$X^{p}_{0}(\Omega)$:
(21) $X^{p}_{0}(\Omega)=\left\\{\begin{array}[]{l}\displaystyle u\in
W^{1,p}_{\mathrm{loc}}(\overline{\Omega})\text{ such that, for every open set
}U\Subset M\text{ that}\\\\[8.5359pt] \text{intersects }\Omega\text{, there
exists }\\{\phi_{n}\\}_{n=1}^{+\infty}\subset C^{0}(\overline{\Omega\cap
U})\cap W^{1,p}(\Omega\cap U)\text{,}\\\\[8.5359pt] \displaystyle\text{with
}\phi_{n}\equiv 0\text{ in a neighbourhood of }\partial\Omega\text{,
satisfying}\\\\[8.5359pt] \displaystyle\phi_{n}\rightarrow u\text{ in
}W^{1,p}(\Omega\cap U)\text{ as }n\rightarrow+\infty\end{array}\right.$
If $\Omega$ is relatively compact, then
$X_{0}^{p}(\Omega)=W^{1,p}_{0}(\Omega)$.
###### Remark 3.15.
_Observe that, if $u\in C^{0}(\overline{\Omega})\cap
W^{1,p}_{\mathrm{loc}}(\overline{\Omega})$, then $u\in X^{p}_{0}(\Omega)$ if
and only if $u=0$ on $\partial\Omega$. This is the version, for non-compact
domains $\Omega$, of a standard result. However, for the convenience of the
reader we briefly sketch the proof. Up to working with the positive and negative
parts separately, we can suppose that $u\geq 0$ on $\Omega$. If $u=0$ on
$\partial\Omega$, then choosing the sequence $\phi_{n}=\max\\{u-1/n,0\\}$ it
is easy to check that $u\in X^{p}_{0}(\Omega)$. Vice versa, if $u\in
X^{p}_{0}(\Omega)$, let $x_{0}\in\partial\Omega$ be any point. Choose
$U_{1}\Subset U_{2}\Subset M$ such that $x_{0}\in U_{1}$, and a sequence
$\\{\phi_{n}\\}\in C^{0}(\overline{\Omega\cap U_{2}})\cap W^{1,p}(\Omega\cap
U_{2})$ as in the definition of $X^{p}_{0}(\Omega)$. If $\psi\in
C^{\infty}_{c}(U_{2})$ is a smooth cut-off function such that $\psi=1$ on
$U_{1}$, then $\psi\phi_{n}\rightarrow\psi u$ in $W^{1,p}(\Omega\cap U_{2})$.
Since $\psi\phi_{n}$ is compactly supported in $\Omega\cap U_{2}$, then $\psi
u\in W^{1,p}_{0}(\Omega\cap U_{2})$. It is a standard fact that, in this case,
$\psi u=0$ on $\partial(\Omega\cap U_{2})$. Since $x_{0}\in\partial\Omega\cap
U_{2}\subset\partial(\Omega\cap U_{2})$, $u(x_{0})=u\psi(x_{0})=0$. By the
arbitrariness of $x_{0}$, this shows that $u=0$ on $\partial\Omega$._
###### Lemma 3.16.
Let $w_{1}\in W^{1,p}_{\mathrm{loc}}(M)$ be a supersolution for
$L_{\mathcal{F}}$, and let $w_{2}\in
W^{1,p}_{\mathrm{loc}}(\overline{\Omega})$ be a supersolution on some open set
$\Omega$ with $\overline{\Omega}\subset M$, $\overline{\Omega}$ being possibly
non-compact. Suppose that $\min\\{w_{2}-w_{1},0\\}\in X^{p}_{0}(\Omega)$.
Then, the function
$m\doteq\left\\{\begin{array}[]{ll}\min\\{w_{1},w_{2}\\}&\quad\text{on
}\Omega\\\\[2.84544pt] w_{1}&\quad\text{on
}M\backslash\Omega\end{array}\right.$
is a supersolution for $L_{\mathcal{F}}$ on $M$. In particular, if further
$w_{1}\in C^{0}(M)$ and $w_{2}\in C^{0}(\overline{\Omega})$, then $m$ is a
supersolution on $M$ whenever $w_{1}=w_{2}$ on $\partial\Omega$. A similar
statement is valid for subsolutions, replacing $\min$ with $\max$.
###### Proof.
We first need to check that $m\in W^{1,p}_{\mathrm{loc}}(M)$. Let $U\Subset M$
be an open set. By assumption, there exists a sequence of functions
$\\{\phi_{n}\\}\in C^{0}(\overline{\Omega\cap U})\cap W^{1,p}(\Omega\cap U)$,
each $\phi_{n}$ being zero in some neighbourhood of $\partial\Omega$, which
converges in the $W^{1,p}$ norm to $\min\\{w_{2}-w_{1},0\\}$. We can thus
continuously extend $\phi_{n}$ on the whole $U$ by setting $\phi_{n}=0$ on
$U\backslash\Omega$, and the resulting extension is in $W^{1,p}(U)$. Define
$u=\min\\{w_{2}-w_{1},0\\}\chi_{\Omega}$, where $\chi_{\Omega}$ is the
indicator function of $\Omega$. Then, $\phi_{n}\rightarrow u$ in
$W^{1,p}(U)$, so that $u\in W^{1,p}(U)$. It follows that $w_{1}+\phi_{n}\in
W^{1,p}(U)$ converges to $m=w_{1}+u$, which shows that $m\in W^{1,p}(U)$. To
prove that $L_{\mathcal{F}}m\leq 0$ we use a technique similar to Proposition
3.12. Let $U\Subset M$ be a fixed relatively compact open set, and let $s$ be
the solution to the obstacle problem $\mathcal{K}_{m}$ on $U$. Then we have by
Corollary 3.13 $s\leq w_{1}$ a.e. on $U$ and so $s=w_{1}=m$ on
$U\backslash\Omega$. Since $s$ solves the obstacle problem, using $\varphi=m$
in equation (18) we have:
(22) $0\leq\ <\mathcal{F}(s),m-s>\ =\int_{\Omega\cap U}\left\langle A(\nabla
s)\middle|\nabla m-\nabla s\right\rangle+\int_{\Omega\cap U}B(x,s)(m-s).$
On the other hand $m$ is a supersolution in $\Omega\cap U$, being the minimum
of two supersolutions, by Proposition 3.14. To apply the weak definition of
$L_{\mathcal{F}}m\leq 0$ on $\Omega\cap U$ to the test function $s-m$, we
first claim that $s-m\in W^{1,p}_{0}(\Omega\cap U)$. Since we know that $s\leq
w_{1}$ on $U$, then on $\Omega\cap U$
$0\leq s-m\leq w_{1}-\min\\{w_{2},w_{1}\\}=-\min\\{w_{2}-w_{1},0\\}\in
X^{p}_{0}(\Omega).$
The claim now follows by a standard result (see for example [9], Lemma 1.25),
but for the sake of completeness we sketch the proof. Since $0\leq s-m\in
W^{1,p}_{0}(U)$ by the definition of the obstacle problem, there exists a
sequence of nonnegative functions $\psi_{n}\in C^{\infty}_{c}(U)$ converging
to $s-m$. We further consider the sequence $\\{\phi_{n}\\}$ of continuous
functions, converging to $\min\\{w_{2}-w_{1},0\\}$, defined at the beginning
of this proof. Then, on $\Omega\cap U$, $0\leq
s-m\leq\lim_{n}\min\\{-\phi_{n},\psi_{n}\\}$, where the limit is taken in
$W^{1,p}(\Omega\cap U)$. Now, $\min\\{-\phi_{n},\psi_{n}\\}$ has compact
support in $\Omega\cap U$, and this proves the claim. Applying the definition
of $L_{\mathcal{F}}m\leq 0$ to the test function $s-m$ we get:
(23) $0\leq\ <\mathcal{F}(m),s-m>\ =\int_{\Omega\cap U}\left\langle A(\nabla
m)\middle|\nabla s-\nabla m\right\rangle+\int_{\Omega\cap U}B(x,m)(s-m).$
Summing inequalities (22) and (23), we conclude as in Proposition 3.12 that
$\nabla(s-m)=0$ in $\Omega\cap U$ with $s-m\in W^{1,p}_{0}(\Omega\cap U)$, and
so the two functions are equal there. Since $s=w_{1}=m$ on $U\backslash\Omega$,
then $m=s$ is a supersolution on $U$. The thesis follows by the arbitrariness
of $U$. If further $w_{1}\in C^{0}(M)$ and $w_{2}\in
C^{0}(\overline{\Omega})$, then the conclusion follows by Remark 3.15. The
proof of the statement for subsolutions is obtained via the same trick as in
Proposition 3.14. ∎
As for the regularity of solutions of the obstacle problem, we have
###### Theorem 3.17 ([18], Theorem 5.4 and Corollary 5.6).
If the obstacle $\psi$ is continuous in $\Omega$, then the solution $u$ to
$\mathcal{K}_{\psi,\theta}$ has a continuous representative in the Sobolev
sense. Furthermore, if $\psi\in C^{0,\alpha}(\Omega)$ for some
$\alpha\in(0,1)$, then there exist $C,\beta>0$ depending only on
$p,\alpha,\Omega,\left\|u\right\|_{L^{\infty}(\Omega)}$ and on the parameters
in $\mathscr{S}$ such that
$\left\|u\right\|_{C^{0,\beta}(\Omega)}\leq
C(1+\left\|\psi\right\|_{C^{0,\alpha}(\Omega)})$
###### Remark 3.18.
_The interested reader should be advised that, in the notation of [18], $b_{0}$
and $a$ are both zero with our assumptions. Stronger results, for instance
$C^{1,\alpha}$ regularity, can be obtained from stronger requirements on
$\psi$, $A$ and $B$ which are stated for instance in [18], Theorem 5.14._
In the proof of our main theorem, and to get some boundary regularity results,
it will be important to see what happens on the set where the solution of the
obstacle problem is strictly above the obstacle.
###### Proposition 3.19.
Let $u$ be the solution of the obstacle problem $\mathcal{K}_{\psi,\theta}$
with continuous obstacle $\psi$. If $u>\psi$ on an open set $D$, then $u$ is a
solution of $L_{\mathcal{F}}u=0$ on $D$.
###### Proof.
Consider any test function $\phi\in C^{\infty}_{c}(D)$. Since $u>\psi$ on $D$,
and since $\phi$ is bounded, by continuity there exists $\delta>0$ such that
$u\pm\delta\phi\in\mathcal{K}_{\psi,\theta}$. From the definition of solution
to the obstacle problem we have that:
$\displaystyle\pm<\mathcal{F}(u),\phi>\
=\frac{1}{\delta}<\mathcal{F}(u),\pm\delta\phi>\
=\frac{1}{\delta}<\mathcal{F}(u),(u\pm\delta\phi)-u>\ \geq 0,$
hence $<\mathcal{F}(u),\phi>\ =0$ for every $\phi\in C^{\infty}_{c}(D)$, as
required. ∎
As for boundary regularity, to the best of our knowledge there is no result
for solutions of the kind of obstacle problems we are studying. However, if we
restrict ourselves to Dirichlet problems (i.e. obstacle problems with
$\psi=-\infty$), some results are available. We briefly recall that a point
$x_{0}\in\partial\Omega$ is called “regular” if for every function $\theta\in
W^{1,p}(\Omega)$ continuous in a neighborhood of $x_{0}$, the unique solution
to the corresponding Dirichlet problem is continuous at $x_{0}$, and that a
necessary and sufficient condition for $x_{0}$ to be regular is the famous
Wiener criterion (which has a local nature). For our purposes, it is enough to
use some simpler sufficient conditions for regularity, so we just cite the
following corollary of the Wiener criterion:
###### Theorem 3.20 ([6], Theorem 2.5).
Let $\Omega$ be a domain, and suppose that $x_{0}\in\partial\Omega$ has a
neighborhood where $\partial\Omega$ is Lipschitz. Then $x_{0}$ is regular for
the Dirichlet problem.
For a more specific discussion of the subject, we refer the reader to [6]. We
mention that Dirichlet and obstacle problems have been studied also in metric
space setting, and boundary regularity theorems with the Wiener criterion have
been obtained for example in [2], Theorem 7.2.
###### Remark 3.21.
_Note that [6] deals only with the case $1<p\leq m$, but the other cases
follow from standard Sobolev embeddings._
Using the comparison principle and Proposition 3.19, it is possible to obtain
a corollary to this theorem which deals with boundary regularity of some
particular obstacle problems.
###### Corollary 3.22.
Consider the obstacle problem $\mathcal{K}_{\psi,\theta}$ on $\Omega$, and
suppose that $\Omega$ has Lipschitz boundary and both $\theta$ and $\psi$ are
continuous up to the boundary. Then the solution $w$ to
$\mathcal{K}_{\psi,\theta}$ is continuous up to the boundary (for convenience
we denote by $w$ the continuous representative of the solution).
###### Proof.
If we want $\mathcal{K}_{\psi,\theta}$ to be nonempty, it is necessary to
assume $\psi(x_{0})\leq\theta(x_{0})$ for all $x_{0}\in\partial\Omega$.
Let $\tilde{\theta}$ be the unique solution to the Dirichlet problem relative
to $\theta$ on $\Omega$. Then Theorem 3.20 guarantees that $\tilde{\theta}\in
C^{0}(\overline{\Omega})$, and the comparison principle allows us to conclude
that $w(x)\geq\tilde{\theta}(x)$ everywhere in $\Omega$.
Suppose first that $\psi(x_{0})<\theta(x_{0})$. Then, in a neighborhood $U$ of
$x_{0}$ ($U\subset\Omega$) we have $w(x)\geq\tilde{\theta}(x)>\psi(x)$. By Proposition
3.19, $L_{\mathcal{F}}w=0$ on $U$, and so by Theorem 3.20 $w$ is continuous at
$x_{0}$.
If $\psi(x_{0})=\theta(x_{0})$, let $w_{\epsilon}$ be the solutions to the
obstacle problems $\mathcal{K}_{\psi,\tilde{\theta}+\epsilon}$. By the same
argument as above, the $w_{\epsilon}$ are all continuous at $x_{0}$,
and by the comparison principle $w(x)\leq w_{\epsilon}(x)$ for every
$x\in\Omega$ (recall that both functions are continuous in $\Omega$). So we
have on one hand:
$\displaystyle\liminf_{x\to x_{0}}w(x)\geq\liminf_{x\to
x_{0}}\psi(x)=\psi(x_{0})=\theta(x_{0})$
and on the other:
$\displaystyle\limsup_{x\to x_{0}}w(x)\leq\limsup_{x\to
x_{0}}w_{\epsilon}(x)=\theta(x_{0})+\epsilon$
Since $\epsilon>0$ is arbitrary, this proves that $w$ is continuous at $x_{0}$ with value $\theta(x_{0})$. ∎
Finally, we present some results on convergence of supersolutions and their
approximation with regular ones.
###### Proposition 3.23.
Let $w_{j}$ be a sequence of supersolutions on some open set $\Omega$. Suppose
that either $w_{j}\uparrow w$ or $w_{j}\downarrow w$ pointwise monotonically,
for some locally bounded $w$. Then, $w$ is a supersolution and there exists a
subsequence of $\\{w_{j}\\}$ that converges locally strongly in $W^{1,p}$ to
$w$ on each compact subset of $\Omega$. Furthermore, if $\\{u_{j}\\}$ is a
sequence of solutions of $L_{\mathcal{F}}u_{j}=0$ which are locally uniformly
bounded in $L^{\infty}$ and pointwise convergent to some $u$, then $u$ solves
$L_{\mathcal{F}}u=0$ and, up to choosing a subsequence, $\\{u_{j}\\}$
converges to $u$ locally strongly on each compact subset of $\Omega$.
###### Proof.
Suppose that $w_{j}\uparrow w$. Up to changing the representative in the
Sobolev class, by Theorem 3.7 we can assume that $w_{j}$ is lower
semicontinuous. Hence, it has minimum on compact subsets of $\Omega$. Since
$w$ is locally bounded and the convergence is monotone up to a set of zero
measure, the sequence $\\{w_{j}\\}$ turns out to be locally bounded in the
$L^{\infty}$-norm. The elliptic estimate in Proposition 3.9 ensures that
$\\{w_{j}\\}$ is locally bounded in $W^{1,p}(\Omega)$. Fix a smooth exhaustion
$\\{\Omega_{n}\\}$ of $\Omega$. For each $n$, up to passing to a subsequence,
$w_{j}\rightharpoonup z_{n}$ weakly in $W^{1,p}(\Omega_{n})$ and strongly in
$L^{p}(\Omega_{n})$. By Riesz theorem, $z_{n}=w$ for every $n$, hence $w\in
W^{1,p}_{\mathrm{loc}}(\Omega)$. With a Cantor argument, we can select a
sequence, still called $w_{j}$, such that $w_{j}$ converges to $w$ both weakly
in $W^{1,p}(\Omega_{n})$ and strongly in $L^{p}(\Omega_{n})$ for every fixed
$n$. To prove that $w$ is a supersolution, fix $0\leq\eta\in
C^{\infty}_{c}(\Omega)$, and choose a smooth relatively compact open set
$\Omega_{0}\Subset\Omega$ that contains the support of $\eta$. Define
$M\doteq\max_{j}\left\|w_{j}\right\|_{W^{1,p}(\Omega_{0})}<+\infty$. Since
$w_{j}$ is a supersolution and $w\geq w_{j}$ for every $j$,
$<\mathcal{F}(w_{j}),\eta(w-w_{j})>\ \geq 0.$
Expanding $\nabla\big(\eta(w-w_{j})\big)$, we can rewrite the above inequality as follows:
(24) $\int\left\langle A(\nabla w_{j})\middle|\eta(\nabla w-\nabla w_{j})\right\rangle\geq-\int\Big{[}\eta B(x,w_{j})+\left\langle A(\nabla w_{j})\middle|\nabla\eta\right\rangle\Big{]}(w-w_{j}).$
Using (B1), (A2) and suitable Hölder inequalities, the RHS can be bounded from
below with the following quantity
(25) $\begin{array}[]{l}\displaystyle-
b_{1}\left\|\eta\right\|_{L^{\infty}(\Omega)}\int_{\Omega_{0}}(w-w_{j})-b_{2}\left\|\eta\right\|_{L^{\infty}(\Omega)}\int_{\Omega_{0}}|w_{j}|^{p-1}|w-w_{j}|\\\\[11.38092pt]
\displaystyle-
a_{2}\left\|\nabla\eta\right\|_{L^{\infty}(\Omega)}\int_{\Omega_{0}}|\nabla
w_{j}|^{p-1}|w-w_{j}|\\\\[11.38092pt]
\geq-\left\|\eta\right\|_{C^{1}(\Omega)}\Big{[}b_{1}|\Omega_{0}|^{\frac{p-1}{p}}+b_{2}M^{p-1}+a_{2}M^{p-1}\Big{]}\left\|w-w_{j}\right\|_{L^{p}(\Omega_{0})}\rightarrow
0\end{array}$
as $j\rightarrow+\infty$. Combining with (24) and the fact that
$w_{j}\rightharpoonup w$ weakly on $W^{1,p}(\Omega_{0})$, by assumption (Mo)
the following inequality holds true:
(26) $0\leq\int\eta\left\langle A(\nabla w)-A(\nabla w_{j})\middle|\nabla
w-\nabla w_{j}\right\rangle\leq o(1)\qquad\text{as }\ j\rightarrow+\infty.$
By a lemma due to F. Browder (see [3], p.13 Lemma 3), the combination of
assumptions $w_{j}\rightharpoonup w$ both locally weakly in $W^{1,p}$ and
locally strongly in $L^{p}$, and (26) for every $0\leq\eta\in
C^{\infty}_{c}(\Omega)$, implies that $w_{j}\rightarrow w$ locally strongly in
$W^{1,p}$. Since the operator $\mathcal{F}$ is weakly continuous, as shown in
the proof of Theorem 3.11, this implies that
$0\ \leq\ <\mathcal{F}(w_{j}),\eta>\ \longrightarrow\ <\mathcal{F}(w),\eta>,$
hence $L_{\mathcal{F}}w\leq 0$, as required.
The case $w_{j}\downarrow w$ is simpler. By the elliptic estimate, $w\in
W^{1,p}_{\mathrm{loc}}(\Omega)$, being locally bounded by assumption. Let
$\\{\Omega_{n}\\}$ be a smooth exhaustion of $\Omega$, and let $u_{n}$ be a
solution of the obstacle problem relative to $\Omega_{n}$ with obstacle and
boundary value $w$. Then, by Corollary 3.13, $w\leq u_{n}\leq w_{j}|_{\Omega_{n}}$, and
letting $j\rightarrow+\infty$ we deduce that $w=u_{n}$ is a supersolution on
$\Omega_{n}$, being a solution of an obstacle problem.
The proof of the last part of the Proposition follows exactly the same lines
as in the case $w_{j}\uparrow w$ treated before. Indeed, by the local uniform
boundedness, the elliptic estimate gives $\\{u_{j}\\}\subset
W^{1,p}_{\mathrm{loc}}(\Omega)$. Furthermore, in the definition
$<\mathcal{F}(u_{j}),\phi>\ =0$ we can still use the test function
$\phi=\eta(u-u_{j})$, since no sign condition on $\phi$ is required. ∎
A couple of corollaries follow from this proposition. It is in fact easy to see
that we can relax the assumption of local boundedness on $w$ if we assume a
priori $w\in W^{1,p}_{\mathrm{loc}}(\Omega)$, and moreover with a simple trick
we can prove that also local uniform convergence preserves the supersolution
property, as in [9], Theorem 3.78.
###### Corollary 3.24.
Let $w_{j}$ be a sequence of supersolutions locally uniformly converging to
$w$. Then $w$ is a supersolution.
###### Proof.
The trick is to transform local uniform convergence into monotone convergence.
Fix any relatively compact $\Omega_{0}\Subset\Omega$ and a subsequence of
$w_{j}$ (denoted for convenience by the same symbol) with
$\left\|w_{j}-w\right\|_{L^{\infty}(\Omega_{0})}\leq 2^{-j}$. The modified
sequence of supersolutions $\tilde{w}_{j}\doteq
w_{j}+\frac{3}{2}\sum_{k=j}^{\infty}2^{-k}=w_{j}+3\times 2^{-j}$ is easily
seen to be a monotonically decreasing sequence on $\Omega_{0}$, and thus its
limit, still $w$ by construction, is a supersolution on any $\Omega_{0}$ by
the previous proposition. The conclusion follows from the arbitrariness of
$\Omega_{0}$. ∎
Now we prove that every supersolution can be approximated by continuous
supersolutions.
###### Proposition 3.25.
For every supersolution $w\in W^{1,p}_{\mathrm{loc}}(\Omega)$, there exists a
sequence $w_{n}$ of continuous supersolutions which converge monotonically
from below and in $W^{1,p}_{\mathrm{loc}}(\Omega)$ to $w$. The same statement
is true for subsolutions with monotone convergence from above.
###### Proof.
Since every $w$ has a lower-semicontinuous representative, it can be assumed
to be locally bounded from below, and since $w^{(m)}=\min\\{w,m\\}$ is a
supersolution (for $m\geq 0$) and converges monotonically to $w$ as $m$ goes
to infinity, we can assume without loss of generality that $w$ is also bounded
above.
Let $\Omega_{n}$ be a locally finite relatively compact open covering of
$\Omega$. Since $w$ is lower semicontinuous it is possible to find a sequence
$\phi_{m}$ of smooth functions converging monotonically from below to $w$ (see
[9], Section 3.71 p. 75). Let $w_{m}^{(n)}$ be the solution to the obstacle
problem $\mathcal{K}_{\phi_{m},w}$ on $\Omega_{n}$, and define
$\bar{w}_{m}\doteq\min_{n}\\{w_{m}^{(n)}\\}$. Thanks to the local finiteness
of the covering $\Omega_{n}$, $\bar{w}_{m}$ is a continuous supersolution,
being locally the minimum of a finite family of continuous functions.
Monotonicity of the convergence is an easy consequence of the comparison
principle for obstacle problems, i.e. Proposition 3.12. To prove convergence
in the local $W^{1,p}$ sense, the steps are pretty much the same as for
Proposition 3.23, and the statement for subsolutions follows from the usual
trick. ∎
###### Remark 3.26.
With similar arguments and up to some minor technical difficulties, one could
strengthen the previous proposition and prove that every supersolution can be
approximated by locally Hölder continuous supersolutions.
## 4\. Proof of Theorem 2.8
###### Theorem 4.1.
Let $M$ be a Riemannian manifold, and let $A,B$ satisfy the set of assumptions
$\mathscr{S}$. Define $\mathcal{A},\mathcal{B},\mathcal{F}$ as in (7), and
$L_{\mathcal{A}},L_{\mathcal{F}}$ accordingly. Then, the following properties
are equivalent:
* $(1)$
$(L)$ for $\mathrm{H\ddot{o}l}_{\mathrm{loc}}$ functions,
* $(2)$
$(L)$ for $L^{\infty}$ functions,
* $(3)$
$(K)$.
###### Proof.
$(2)\Rightarrow(1)$ is obvious. To prove that $(1)\Rightarrow(2)$, we follow
the arguments in [25], Lemma 1.5. Assume by contradiction that there exists a
non-constant $0\leq u\in L^{\infty}(M)\cap W^{1,p}_{\mathrm{loc}}(M)$ such
that $L_{\mathcal{F}}u\geq 0$. We distinguish two cases.
* -
Suppose first that $B(x,u)u$ is not identically zero in the Sobolev sense. Let
$u_{2}>u^{\star}$ be a constant. By (B3), $L_{\mathcal{F}}u_{2}\leq 0$. By the
subsolution-supersolution method and the regularity Theorem 3.7, there exists
$w\in\mathrm{H\ddot{o}l}_{\mathrm{loc}}(M)$ such that $u\leq w\leq u_{2}$ and
$L_{\mathcal{F}}w=0$. Since, by (B2), (B3) and $u\leq w$, $B(x,w)w$ is not
identically zero, then $w$ is non-constant, contradicting property $(1)$.
* -
Suppose that $B(x,u)u=0$ a.e. on $M$. Since $u$ is non-constant, we can choose
a positive constant $c$ such that both $\\{u-c>0\\}$ and $\\{u-c<0\\}$ have
positive measure. By (B2), $L_{\mathcal{F}}(u-c)\geq 0$, hence by Proposition
3.14 the function $v=(u-c)_{+}=\max\\{u-c,0\\}$ is a non-zero subsolution.
Denoting by $\chi_{\\{u<c\\}}$ the indicator function of $\\{u<c\\}$, we can say
that $L_{\mathcal{F}}v\geq 0=\chi_{\\{u<c\\}}v^{p-1}$. Choose any constant
$u_{2}>v^{\star}$. Then, clearly
$L_{\mathcal{F}}u_{2}\leq\chi_{\\{u<c\\}}u_{2}^{p-1}$. Since the potential
$\widetilde{B}(x,t)\doteq B(x,t)+\chi_{\\{u<c\\}}(x)|t|^{p-2}t$
is still a Caratheodory function satisfying the assumptions in $\mathscr{S}$,
by Theorem 3.2 there exists a function $w$ such that $v\leq w\leq u_{2}$ and
$L_{\mathcal{F}}w=\chi_{\\{u<c\\}}w^{p-1}$. By Theorem 3.7 $(ii)$, $w$ is
locally Hölder continuous and, since $\\{u<c\\}$ has positive measure, $w$ is
non-constant, contradicting $(1)$.
To prove the implication $(3)\Rightarrow(1)$, we follow a standard argument in
potential theory, see for example [25], Proposition 1.6. Let
$u\in\mathrm{H\ddot{o}l}_{\mathrm{loc}}(M)\cap W^{1,p}_{\mathrm{loc}}(M)$ be a
non-constant, non-negative, bounded solution of $L_{\mathcal{F}}u\geq 0$. We
claim that, by the strong maximum principle, $u<u^{\star}$ on $M$. Indeed, let
$\widetilde{\mathcal{A}}$ be the operator associated with the choice
$\widetilde{A}(X)\doteq-A(-X)$. Then, since $\widetilde{A}$ satisfies all the
assumptions in $\mathscr{S}$, it is easy to show that
$L_{\widetilde{\mathcal{A}}}(u^{\star}-u)\leq 0$ on $M$. Hence, by the Harnack
inequality $u^{\star}-u>0$ on $M$, as desired.
Let $K\Subset M$ be a compact set. Consider $\eta$ such that
$0<\eta<u^{\star}$ and define the open set $\Omega_{\eta}\doteq
u^{-1}\\{(\eta,+\infty)\\}$. From $u<u^{\star}$ on $M$, we can choose $\eta$
close enough to $u^{\star}$ so that $K\cap\Omega_{\eta}=\emptyset$. Let
$x_{0}$ be a point such that $u(x_{0})>\frac{u^{\star}+\eta}{2}$, let $\Omega$
be such that $x_{0}\in\Omega$, and choose a Khas’minskii potential relative to
the triple $(K,\Omega,(u^{\star}-\eta)/2)$. Now, consider the open set $V$
defined as the connected component containing $x_{0}$ of the open set
$\displaystyle\tilde{V}\doteq\\{x\in\Omega_{\eta}\ |\ u(x)>\eta+w(x)\\}$
Since $u$ is bounded and $w$ is an exhaustion, $V$ is relatively compact in
$M$ and $u(x)=\eta+w(x)$ on $\partial V$. Since, by (B2),
$L_{\mathcal{F}}(\eta+w)\leq 0$, and $L_{\mathcal{F}}u\geq 0$, this
contradicts the comparison Proposition 3.1.
We are left to the implication $(2)\Rightarrow(3)$. Fix a triple
$(K,\Omega,\varepsilon)$, and a smooth exhaustion $\\{\Omega_{j}\\}$ of $M$
with $\Omega\Subset\Omega_{1}$. By the existence Theorem 3.11 with obstacle
$\psi=-\infty$, there exists a unique solution $h_{j}$ of
$\left\\{\begin{array}[]{l}L_{\mathcal{F}}h_{j}=0\qquad\text{on
}\Omega_{j}\backslash K\\\\[5.69046pt] h_{j}=0\quad\text{on }\partial K,\quad
h_{j}=1\quad\text{on }\partial\Omega_{j},\end{array}\right.$
and $0\leq h_{j}\leq 1$ by the comparison Proposition 3.1, with $h_{j}$ continuous
up to $\partial\left(\Omega_{j}\setminus K\right)$ thanks to Theorem 3.20.
Extend $h_{j}$ by setting $h_{j}=0$ on $K$ and $h_{j}=1$ on
$M\backslash\Omega_{j}$. Again by comparison, $\\{h_{j}\\}$ is a decreasing
sequence which, by Proposition 3.23, converges pointwise on $M$ to a solution
$h\in W^{1,p}_{\mathrm{loc}}(M)\quad\text{of}\quad L_{\mathcal{F}}h=0\
\text{on }M\backslash K.$
Since $0\leq h\leq h_{j}$ for every $j$, and since $h_{j}=0$ on $\partial K$,
using Corollary 3.22 with $\psi=-\infty$ we deduce that $h\in C^{0}(M)$ and
$h=0$ on $K$. We claim that $h=0$. Indeed, by Lemma 3.16 $u=\max\\{h,0\\}$ is
a non-negative, bounded solution of $L_{\mathcal{F}}u\geq 0$ on $M$. By $(1)$,
$u$ has to be constant, hence the only possibility is $h=0$.
Now we are going to build by induction an increasing sequence of continuous
functions $\\{w_{n}\\}$, $w_{0}=0$, such that:
1. (a)
$w_{n}|_{K}=0$, $w_{n}$ are continuous on $M$ and $L_{\mathcal{F}}w_{n}\leq 0$
on $M\backslash K$,
2. (b)
for every $n$, $w_{n}\leq n$ on all of $M$ and $w_{n}=n$ in a large enough
neighborhood of infinity denoted by $M\backslash C_{n}$,
3. (c)
$\left\|w_{n}\right\|_{L^{\infty}(\Omega_{n})}\leq\left\|w_{n-1}\right\|_{L^{\infty}(\Omega_{n})}+\frac{\varepsilon}{2^{n}}$.
Once this is done, by $(c)$ the increasing sequence $\\{w_{n}\\}$ is locally
uniformly convergent to a continuous exhaustion $w$ which, by Proposition 3.23,
solves $L_{\mathcal{F}}w\leq 0$ on $M\setminus K$. Furthermore,
$\left\|w\right\|_{L^{\infty}(\Omega)}\leq\sum_{n=1}^{+\infty}\frac{\varepsilon}{2^{n}}\leq\varepsilon.$
Note that $w\in W^{1,p}_{\mathrm{loc}}(M\setminus K)\cap C^{0}(M)$ with $w=0$
on $K$, so we can conclude immediately that $w\in W^{1,p}_{\mathrm{loc}}(M)$
and hence $w$ is the desired Khas’minskii potential relative to
$(K,\Omega,\varepsilon)$.
We start the induction by setting $w_{1}\doteq h_{j}$, for $j$ large enough in
order for property (c) to hold. Define $C_{1}$ in order to fix property $(b)$.
Suppose now that we have constructed $w_{n}$. For notational convenience,
write $\bar{w}=w_{n}$. Consider the sequence of obstacle problems
$\mathcal{K}_{\bar{w}+h_{j}}$ defined on $\Omega_{j+1}\backslash K$ and let
$s_{j}$ be their solution. By Theorem 3.17 and Corollary 3.22 we know that
$s_{j}$ is continuous up to the boundary of its domain. Take for convenience
$j$ large enough such that $C_{n}\subset\Omega_{j}$. Note that
$s_{j}|_{\partial K}=0$ and since the constant function $n+1$ is a
supersolution, by comparison $s_{j}\leq n+1$ and
$s_{j}|_{\Omega_{j+1}\setminus\Omega_{j}}=n+1$. So we can extend $s_{j}$ to a
function defined on all of $M$ by setting it equal to $0$ on $K$ and equal to
$n+1$ on $M\backslash\Omega_{j+1}$, and in this fashion, by Lemma 3.16
$L_{\mathcal{F}}s_{j}\leq 0$ on $M\setminus K$. By Corollary 3.13,
$\\{s_{j}\\}$ is decreasing, and so it has a pointwise limit $\bar{s}$ which
is still a supersolution on $M\setminus K$ by Proposition 3.23. By Theorem
3.7, $i)$ the function $\bar{s}$ admits a lower semicontinuous representative.
We are going to prove that $\bar{s}=\bar{w}$. First, we show that $\bar{s}\leq
n$ everywhere. Suppose by contradiction that this is false. Then, since
$h_{j}$ converges locally uniformly to zero, on the open set
$A\doteq\bar{s}^{-1}\\{(n,\infty)\\}$ the inequality $s_{j}>\bar{w}+h_{j}$ is
locally eventually true, so that $s_{j}$ is locally eventually a solution of
$L_{\mathcal{F}}s_{j}=0$ by Proposition 3.19, and so
$L_{\mathcal{F}}\bar{s}=0$ on $A$ by Proposition 3.23. We need to apply the
Pasting Lemma 3.16 to the subsolution $\bar{s}-n$ (defined on $A$) and the
zero function. In order to do so, we shall verify that
$\max\\{\bar{s}-n,0\\}\in X^{p}_{0}(A)$, where $X^{p}_{0}(A)$ is defined as in
(21). This requires some care, since $\bar{s}$ is not a priori continuous up
to $\partial A$. By Proposition 3.25, we can choose a sequence of continuous
supersolutions $\\{\sigma_{i}\\}\subset W^{1,p}_{\mathrm{loc}}(M\backslash
K)\cap C^{0}(M\backslash\overline{K})$ that converges to $\bar{s}$ both
pointwise monotonically and in $W^{1,p}$ on compacta of
$M\backslash\overline{K}$. Since $0\leq\bar{s}\leq s_{j}$ for every $j$, and
the sequence $\\{s_{j}\\}$ is decreasing, it follows that $\bar{s}$ is
continuous on $\partial K$ with zero boundary value. Therefore, $A$ has
positive distance from $\partial K$, and thus $\sigma_{i}$ converges to
$\bar{s}$ in $W^{1,p}_{\mathrm{loc}}(\overline{A})$. Since $\bar{s}$ is lower
semicontinuous, $\bar{s}\leq n$ on $\partial A$, so that $\sigma_{i}\leq n$ on
$\partial A$ for every $i$. Consequently, the continuous functions
$\psi_{i}=\max\\{\sigma_{i}-n-1/i,0\\}$ converge on compacta of $\overline{A}$
to $\max\\{\bar{s}-n,0\\}$, and each $\psi_{i}$ is zero in a neighbourhood of
$\partial A$. This proves the claim that $\max\\{\bar{s}-n,0\\}\in
X^{p}_{0}(A)$. By Lemma 3.16 and assumptions $\mathscr{S}$, the function
$\displaystyle f\doteq\max\\{\bar{s}-n,0\\}$
is a non-negative, non-zero bounded solution of $L_{\mathcal{F}}f\geq 0$. By
$(2)$, $f$ is constant, hence zero; therefore $\bar{s}\leq n$. This proves
that $\bar{s}=\bar{w}=n$ on $M\backslash C_{n}$. As for the remaining set, a
similar argument to the one just used shows that $\bar{s}$ is a solution of
$L_{\mathcal{F}}\bar{s}=0$ on the open, relatively compact set
$V\doteq\\{\bar{s}>\bar{w}\\}$, and that $\bar{s}-\bar{w}\in W^{1,p}_{0}(V)$. The comparison principle guarantees that $\bar{s}\leq\bar{w}$ everywhere, which is what we needed to prove. Now, since $s_{j}\downarrow\bar{w}$, by Dini’s theorem the
convergence is locally uniform and so we can choose $\bar{j}$ large enough in
such a way that $s_{\bar{j}}-\bar{w}<\frac{\varepsilon}{2^{n}}$ on
$\Omega_{n+1}$. Define $w_{n+1}\doteq s_{\bar{j}}$, and $C_{n+1}$ in order for
$(b)$ to hold, and the construction is completed. ∎
###### Remark 4.2.
As anticipated in Section 2, the results of our main theorem are the same if
we substitute condition (B1) with condition (B1+):
$\displaystyle|B(x,t)|\leq b(t)\quad\text{instead of}\quad|B(x,t)|\leq
b_{1}+b_{2}\left|t\right|^{p-1}\quad\text{for }t\in\mathbb{R}$
Although it is not even possible to define the operator $\mathcal{B}$ if we
take $W^{1,p}(\Omega)$ as its domain, this difficulty is easily overcome if we
restrict the domain to (essentially) bounded functions, i.e. if we define
$\mathcal{B}:W^{1,p}(\Omega)\cap L^{\infty}(\Omega)\to
W^{1,p}(\Omega)^{\star}.$
Now consider that each function used in the proof of the main theorem is
either bounded or essentially bounded, so it is quite immediate to see that all the
existence and comparison theorems proved in Section 3, along with all the
reasoning and tools used in the proof, still work. Consider for example an
obstacle problem $\mathcal{K}_{\theta,\psi}$ such that
$\left|\theta\right|\leq C$ and $\left|\psi\right|\leq C$, and define the operator
$\tilde{\mathcal{B}}$ relative to the function:
$\displaystyle\tilde{B}(x,t)=\begin{cases}B(x,t)&\text{ for
}\left|t\right|\leq C+1\\\ B(x,C+1)&\text{ for }t\geq C+1\\\
B(x,-(C+1))&\text{ for }t\leq-(C+1)\end{cases}$
$\tilde{\mathcal{B}}$ evidently satisfies condition (B1), so the obstacle problem relative to $\tilde{\mathcal{B}}$ admits a solution, which by the comparison Theorem 3.12 is bounded in modulus by $C$; it is then evident that this function also solves the obstacle problem relative to the original, badly-behaved $\mathcal{B}$.
## 5\. On the links with the weak maximum principle and parabolicity: proof
of Theorem 2.12
As already explained in the introduction, throughout this section we will
restrict ourselves to potentials $B(x,t)$ of the form $B(x,t)=b(x)f(t)$, where
(27) $\begin{array}[]{l}\displaystyle b,b^{-1}\in
L^{\infty}_{\mathrm{loc}}(M),\quad b>0\text{ a.e. on }M;\\\\[5.69046pt] f\in
C^{0}(\mathbb{R}),\quad f(0)=0,\quad f\text{ is non-decreasing on
}\mathbb{R},\end{array}$
while we require (A1), (A2) on $A$.
###### Remark 5.1.
_As in Remark 3.5, in the case of the operator $L_{\varphi}$ in Example 2.2
with $h$ being the metric tensor, (A1) and (A2) can be weakened to (13) and
(14)._
We begin with the following lemma characterizing $(W)$, whose proof follows
the lines of [24].
###### Lemma 5.2.
Property $(W)$ for $b^{-1}L_{\mathcal{A}}$ is equivalent to the following
property, which we call $(P)$:
> For every $g\in C^{0}(\mathbb{R})$, and for every $u\in C^{0}(M)\cap
> W^{1,p}_{\mathrm{loc}}(M)$ bounded above and satisfying
> $L_{\mathcal{A}}u\geq b(x)g(u)$ on M, it holds $g(u^{\star})\leq 0$.
###### Proof.
$(W)\Rightarrow(P)$. From $(W)$ and $L_{\mathcal{A}}u\geq b(x)g(u)$, for every
$\eta<u^{\star}$ and $\varepsilon>0$ we can find $0\leq\phi\in
C^{\infty}_{c}(\Omega_{\eta})$ such that
$\varepsilon\int b\phi>-\langle\mathcal{A}(u),\phi\rangle\geq\int g(u)b\phi\geq\inf_{\Omega_{\eta}}g(u)\int b\phi$
Since $b>0$ a.e. on $M$, we can simplify the integral term to obtain
$\inf_{\Omega_{\eta}}g(u)\leq\varepsilon$. Letting $\varepsilon\rightarrow 0$
and then $\eta\rightarrow u^{\star}$, and using the continuity of $u,g$ we get
$g(u^{\star})\leq 0$, as required. To prove that $(P)\Rightarrow(W)$, suppose
by contradiction that there exists a bounded above function $u\in C^{0}\cap
W^{1,p}_{\mathrm{loc}}$, a value $\eta<u^{\star}$ and $\varepsilon>0$ such
that $\inf_{\Omega_{\eta}}b^{-1}L_{\mathcal{A}}u\geq\varepsilon$. Let
$g_{\varepsilon}(t)$ be a continuous function on $\mathbb{R}$ such that
$g_{\varepsilon}(t)=\varepsilon$ if $t\geq u^{\star}-\eta$, and
$g_{\varepsilon}(t)=0$ for $t\leq 0$. Then, by the pasting Lemma 3.16,
$w=\max\\{u-\eta,0\\}$ satisfies $L_{\mathcal{A}}w\geq
b(x)g_{\varepsilon}(w)$. Furthermore,
$g_{\varepsilon}(w^{\star})=g_{\varepsilon}(u^{\star}-\eta)=\varepsilon$,
contradicting $(P)$. ∎
Theorem 2.12 is an immediate corollary of the main Theorem 2.8 and of the
following two propositions.
###### Proposition 5.3.
If $b^{-1}L_{\mathcal{A}}$ satisfies $(W)$, then $(L)$ holds for every
operator $L_{\mathcal{F}}$ of type 1. Conversely, if $(L)$ holds for some
operator $\mathcal{F}$ of type 1, then $b^{-1}L_{\mathcal{A}}$ satisfies
$(W)$.
###### Proof.
Suppose that $(W)$ is met, and let $u\in\mathrm{H\ddot{o}l}_{\mathrm{loc}}\cap
W^{1,p}_{\mathrm{loc}}$ be a bounded, non-negative solution of
$L_{\mathcal{F}}u\geq 0$. By Lemma 5.2, $f(u^{\star})\leq 0$. Since
$\mathcal{F}$ is of type 1, $u^{\star}\leq 0$, that is, $u=0$, as desired.
Conversely, let $\mathcal{F}$ be an operator of type 1 for which the Liouville
property holds. Suppose by contradiction that $(W)$ is not satisfied, so that
there exists $u\in C^{0}\cap W^{1,p}_{\mathrm{loc}}$ such that
$b^{-1}L_{\mathcal{A}}u\geq\varepsilon$ on some $\Omega_{\eta_{0}}$. Clearly,
$u$ is non-constant. Since $f(0)=0$, we can choose
$\eta\in(\eta_{0},u^{\star})$ in such a way that
$f(u^{\star}-\eta)<\varepsilon$. Hence, by the monotonicity of $f$, the
function $u-\eta$ solves
$L_{\mathcal{A}}(u-\eta)\geq b(x)\varepsilon\geq b(x)f(u-\eta)\qquad\text{on
}\ \Omega_{\eta}.$
Thanks to the pasting Lemma 3.16, $w=\max\\{u-\eta,0\\}$ is a non-constant,
non-negative bounded solution of $L_{\mathcal{A}}w\geq b(x)f(w)$, that is,
$L_{\mathcal{F}}w\geq 0$, contradicting the Liouville property. ∎
###### Proposition 5.4.
If $b^{-1}L_{\mathcal{A}}$ is parabolic, then $(L)$ holds for every operator
$L_{\mathcal{F}}$ of type 2. Conversely, if $(L)$ holds for some operator
$\mathcal{F}$ of type 2, then $b^{-1}L_{\mathcal{A}}$ satisfies
$(W_{\mathrm{pa}})$.
###### Proof.
Suppose that $(W_{\mathrm{pa}})$ is met. Since each bounded, non-negative
$u\in\mathrm{H\ddot{o}l}_{\mathrm{loc}}\cap W^{1,p}_{\mathrm{loc}}$ solving
$L_{\mathcal{F}}u\geq 0$ automatically solves $L_{\mathcal{A}}u\geq 0$, then
$u$ is constant by $(W_{\mathrm{pa}})$, which proves $(L)$. Conversely, let
$\mathcal{F}$ be an operator of type 2 for which the Liouville property holds,
and let $[0,T]$ be the maximal interval in $\mathbb{R}^{+}_{0}$ where $f=0$.
Suppose by contradiction that $(W_{\mathrm{pa}})$ is not satisfied, so that
there exists a nonconstant $u\in C^{0}\cap W^{1,p}_{\mathrm{loc}}$ with
$b^{-1}L_{\mathcal{A}}u\geq 0$ on $M$. For $\eta$ close enough to $u^{\star}$,
$u-\eta\leq T$ on $M$, hence $w=\max\\{u-\eta,0\\}$ is a non-negative, bounded
non-constant solution of $L_{\mathcal{A}}w\geq 0=b(x)f(w)$ on $M$,
contradicting the Liouville property for $\mathcal{F}$. ∎
## 6\. The Evans property
We conclude this paper with some comments on the existence of Evans potentials
on model manifolds. It turns out that the function-theoretic properties of
these potentials can be used to study the underlying manifold. By way of example, we quote the papers [34] and [31]. In the first one, the authors extend the Kelvin-Nevanlinna-Royden condition and find a Stokes’ type theorem for vector fields with an integrability condition related to the Evans potential,
while in the second article Evans potentials are exploited in order to
understand the spaces of harmonic functions with polynomial growth. As a
matter of fact, these spaces give a lot of information on the structure at
infinity of the manifold. We recall that, only for the standard Laplace-
Beltrami operator, it is known that any parabolic Riemannian manifold admits
an Evans potential, as proved in [20] or in [30], but the technique involved
in this proof heavily relies on the linearity of the operator and cannot be
easily generalized, even for the $p$-Laplacian. In this respect, see [12].
From the technical point of view, we remark that, for the main Theorems 2.8
and 2.12 to hold, no growth control on $B(x,t)$ in the variable $t$ is
required. As we will see, for the Evans property to hold for $L_{\mathcal{F}}$
we shall necessarily assume a precise maximal growth of $B$, otherwise there
is no hope to find any Evans potential. This growth is described by the so-
called Keller-Osserman condition.
To begin with, we recall that a model manifold $M_{g}$ is $\mathbb{R}^{m}$
endowed with a metric $\mathrm{d}s^{2}$ which, in polar coordinates centered
at some origin $o$, has the expression
$\mathrm{d}s^{2}=\mathrm{d}r^{2}+g(r)^{2}\mathrm{d}\theta^{2}$, where
$\mathrm{d}\theta^{2}$ is the standard metric on the unit sphere
$\mathbb{S}^{m-1}$ and $g(r)$ satisfies the following assumptions:
$g\in C^{\infty}(\mathbb{R}^{+}_{0}),\quad g>0\text{ on }\mathbb{R}^{+},\quad
g^{\prime}(0)=1,\quad g^{(2k)}(0)=0$
for every $k=0,1,2,\ldots$, where $g^{(2k)}$ denotes the $(2k)$-th derivative of
$g$. The last condition ensures that the metric is smooth at the origin $o$.
Note that
$\Delta r(x)=(m-1)\frac{g^{\prime}(r(x))}{g(r(x))},\quad\mathrm{vol}(\partial
B_{r})=g(r)^{m-1},\quad\mathrm{vol}(B_{r})=\int_{0}^{r}g(t)^{m-1}\mathrm{d}t.$
Consider the operator $L_{\varphi}$ of Example 2.2 with $h$ being the metric
tensor. If $u(x)=z(r(x))$ is a radial function, a straightforward computation
gives
(28)
$L_{\varphi}u=g^{1-m}\big{[}g^{m-1}\varphi(|z^{\prime}|)\mathrm{sgn}(z^{\prime})\big{]}^{\prime}.$
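As an illustration (not part of the original computation), for the $p$-Laplacian choice $\varphi(t)=t^{p-1}$ one has $\varphi(|z^{\prime}|)\mathrm{sgn}(z^{\prime})=|z^{\prime}|^{p-2}z^{\prime}$, so (28) reduces to the familiar radial expression
$\Delta_{p}u=g^{1-m}\big{[}g^{m-1}|z^{\prime}|^{p-2}z^{\prime}\big{]}^{\prime},$
which on $\mathbb{R}^{m}$, i.e. for $g(r)=r$, is the classical ODE form of the radial $p$-Laplacian.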
Note that (9) implies $\varphi(t)\rightarrow+\infty$ as $t\rightarrow+\infty$.
Let $B(x,t)=B(t)$ be such that
$B\in C^{0}(\mathbb{R}_{0}^{+}),\ B\geq 0\text{ on }\mathbb{R}^{+},\ B(0)=0,\
B\text{ is non-decreasing on }\mathbb{R},$
and set $B=0$ on $\mathbb{R}^{-}$. For $c>0$, define the functions
(29)
$\begin{array}[]{ll}V_{\mathrm{pa}}(r)=\varphi^{-1}\Big{(}cg(r)^{1-m}\Big{)},&\qquad
V_{\mathrm{st}}(r)=\varphi^{-1}\Big{(}cg(r)^{1-m}\int_{R}^{r}g(t)^{m-1}\mathrm{d}t\Big{)}\\\\[8.5359pt]
z_{\mathrm{pa}}(r)=\int_{R}^{r}V_{\mathrm{pa}}(t)\mathrm{d}t,&\qquad
z_{\mathrm{st}}(r)=\int_{R}^{r}V_{\mathrm{st}}(t)\mathrm{d}t.\end{array}$
Note that both $z_{\mathrm{pa}}$ and $z_{\mathrm{st}}$ are increasing on
$[R,+\infty)$. By (28), the functions $u_{\mathrm{pa}}=z_{\mathrm{pa}}\circ
r$, $u_{\mathrm{st}}=z_{\mathrm{st}}\circ r$ are solutions of
$L_{\varphi}u_{\mathrm{pa}}=0,\qquad L_{\varphi}u_{\mathrm{st}}=c.$
Therefore, the following property can be easily verified:
###### Proposition 6.1.
For the operator $L_{\mathcal{F}}$ defined by
$L_{\mathcal{F}}u=L_{\varphi}u-B(u)$, properties $(K)$ and $(L)$ are
equivalent to either
(30) $V_{\mathrm{st}}\not\in L^{1}(+\infty)\quad\text{for every }c>0\text{
small enough, if }B>0\text{ on }\mathbb{R}^{+},$
or
(31) $V_{\mathrm{pa}}\not\in L^{1}(+\infty)\quad\text{for every }c>0\text{
small enough, otherwise.}$
###### Proof.
We sketch the proof when $B>0$ on $\mathbb{R}^{+}$, the other case being
analogous. If $V_{\mathrm{st}}\in L^{1}(+\infty)$, then $u_{\mathrm{st}}$ is a
bounded, non-negative solution of $L_{\varphi}u\geq c$ on $M\backslash B_{R}$.
Choose $\eta\in(0,u^{\star})$ in such a way that $B(u^{\star}-\eta)\leq c$,
and proceed as in the second part of the proof of Proposition 5.3 to
contradict the Liouville property of $L_{\mathcal{F}}$. Conversely, if
$V_{\mathrm{st}}\not\in L^{1}(+\infty)$, then $u_{\mathrm{st}}$ is an
exhaustion. For every $\delta>0$, choose $c>0$ small enough that $c\leq
B(\delta)$. Since $\varphi(0)=0$, for every $\rho>R$ and $\varepsilon>0$ we
can reduce $c$ in such a way that
$w_{\varepsilon,\rho}=\delta+u_{\mathrm{st}}$ satisfies
$w_{\varepsilon,\rho}=\delta\ \text{ on }\partial B_{R},\quad
w_{\varepsilon,\rho}\leq\delta+\varepsilon\ \text{ on }B_{\rho}\backslash
B_{R},\quad L_{\varphi}w_{\varepsilon,\rho}=c\leq B(\delta)\leq
B(w_{\varepsilon,\rho}).$
As the reader can check by slightly modifying the argument in the proof of
$(3)\Rightarrow(1)$ of Theorem 2.8, the existence of these modified
Khas’minskii potentials for every choice of $\delta,\varepsilon,\rho$ is
enough to conclude the validity of $(L)$, hence of $(K)$. ∎
###### Remark 6.2.
_In the case $\varphi(t)=t^{p-1}$ of the $p$-Laplacian, making the conditions
on $V_{\mathrm{st}}$ and $V_{\mathrm{pa}}$ more explicit and using Theorem
2.12 we deduce that, on model manifolds, $\Delta_{p}$ satisfies $(W)$ if and
only if_
$\left(\frac{\mathrm{vol}(B_{r})}{\mathrm{vol}(\partial
B_{r})}\right)^{\frac{1}{p-1}}\not\in L^{1}(+\infty),$
_and $\Delta_{p}$ is parabolic if and only if_
$\left(\frac{1}{\mathrm{vol}(\partial B_{r})}\right)^{\frac{1}{p-1}}\not\in
L^{1}(+\infty).$
_This has been observed, for instance, in [25]; see also the end of [26] and
the references therein for a thorough discussion on $\Delta_{p}$ on model
manifolds._
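To fix ideas, here is an elementary check of the remark (ours, not contained in the original text): on $\mathbb{R}^{m}$ with its flat metric we have $g(r)=r$, hence $\mathrm{vol}(\partial B_{r})\asymp r^{m-1}$ and $\mathrm{vol}(B_{r})\asymp r^{m}$. The first quantity above then behaves like $r^{1/(p-1)}$, which is never integrable at infinity, so $\Delta_{p}$ satisfies $(W)$ on $\mathbb{R}^{m}$ for every $p>1$; the second behaves like $r^{-(m-1)/(p-1)}$, which fails to be in $L^{1}(+\infty)$ precisely when $p\geq m$, recovering the classical fact that $\mathbb{R}^{m}$ is $p$-parabolic if and only if $p\geq m$.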
We now study the existence of an Evans potential on $M_{g}$. First, we need to
produce radial solutions of $L_{\varphi}u=B(u)$ which are zero on some fixed
sphere $\partial B_{R}$. To do so, the first step is to solve locally the
related Cauchy problem. The next result is a modification of Proposition A.1
of [4].
###### Lemma 6.3.
In our assumptions, for every fixed $R>0$ and $c\in(0,1]$ the problem
(32)
$\left\\{\begin{array}[]{l}\big{[}g^{m-1}\varphi(c|z^{\prime}|)\mathrm{sgn}(z^{\prime})\big{]}^{\prime}=g^{m-1}B(cz)\qquad\text{on
}[R,+\infty)\\\\[5.69046pt] z(R)=\vartheta\geq 0,\quad
z^{\prime}(R)=\mu>0\end{array}\right.$
has a positive, increasing $C^{1}$ solution $z_{c}$ defined on a maximal
interval $[R,\rho)$, where $\rho$ may depend on $c$. Moreover, if
$\rho<+\infty$, then $z_{c}(\rho^{-})=+\infty$.
###### Proof.
We sketch the main steps. First, we prove local existence. For every chosen
$r\in(R,R+1)$, denote with $A_{\varepsilon}$ the $\varepsilon$-ball centered
at the constant function $\vartheta$ in $C^{0}([R,r],\|\cdot\|_{L^{\infty}})$.
We look for a fixed point of the Volterra operator $T_{c}$ defined by
(33)
$T_{c}(u)(t)=\vartheta+\frac{1}{c}\int_{R}^{t}\varphi^{-1}\left(\frac{g^{m-1}(R)\varphi(c\mu)}{g^{m-1}(s)}+\int_{R}^{s}\frac{g^{m-1}(\tau)}{g^{m-1}(s)}B(cu(\tau))\mathrm{d}\tau\right)\mathrm{d}s$
It is a simple matter to check the following properties:
* $(i)$
If $|r-R|$ is sufficiently small, $T_{c}(A_{\varepsilon})\subset
A_{\varepsilon}$;
* $(ii)$
There exists a constant $C>0$, independent of $r\in(R,R+1)$, such that
$|T_{c}u(t)-T_{c}u(s)|\leq C|t-s|$ for every $u\in A_{\varepsilon}$. By
the Ascoli-Arzelà theorem, $T_{c}$ is a compact operator.
* $(iii)$
$T_{c}$ is continuous. To prove this, let $\\{u_{j}\\}\subset A_{\varepsilon}$
be such that $\|u_{j}-u\|_{L^{\infty}}\rightarrow 0$, and use the Lebesgue dominated
convergence theorem in the definition of $T_{c}$ to show that
$T_{c}u_{j}\rightarrow T_{c}u$ pointwise. The convergence is indeed uniform by
$(ii)$.
By the Schauder fixed point theorem ([7], Theorem 11.1), $T_{c}$ has a fixed point $z_{c}$.
Differentiating $z_{c}=T_{c}z_{c}$ we deduce that $z_{c}^{\prime}>0$ on
$[R,r]$, hence $z_{c}$ is positive and increasing. Therefore, $z_{c}$ is also
a solution of (32). This solution can be extended up to a maximal interval
$[R,\rho)$. If by contradiction the (increasing) solution $z_{c}$ satisfies
$z_{c}(\rho^{-})=z_{c}^{\star}<+\infty$, differentiating $z_{c}=T_{c}z_{c}$ we
would argue that $z_{c}^{\prime}(\rho^{-})$ exists and is finite. Hence, by
local existence $z_{c}$ could be extended past $\rho$, a contradiction. ∎
We are going to prove that, if $B(t)$ does not grow too fast and under a
reasonable structure condition on $M_{g}$, the solution $z_{c}$ of (32) is
defined on $[R,+\infty)$. To do this, we first need some definitions. We
consider the initial condition $\vartheta=0$. For convenience, we further
require the following assumptions:
(34) $\varphi\in C^{1}(\mathbb{R}^{+}),\qquad a_{2}^{-1}t^{p-1}\leq
t\varphi^{\prime}(t)\leq a_{1}+a_{2}t^{p-1}\quad\text{on }\mathbb{R}^{+},$
for some positive constants $a_{1},a_{2}$. Define
$K_{\mu}(t)=\int_{\mu}^{t}s\varphi^{\prime}(s)\mathrm{d}s,\qquad\beta(t)=\int_{0}^{t}B(s)\mathrm{d}s.$
Note that $\beta(t)$ is non-decreasing on $\mathbb{R}^{+}$ and that, for every
$\mu\geq 0$, $K_{\mu}$ is strictly increasing. By (34),
$K_{\mu}(+\infty)=+\infty$. We focus our attention on the condition
($\urcorner KO$) $\frac{1}{K_{\mu}^{-1}(\beta(s))}\not\in L^{1}(+\infty).$
This (or, better, its opposite) is called the Keller-Osserman condition. Originating, in the quasilinear setting, from works of J.B. Keller [10] and R. Osserman [21], it has been the subject of increasing interest in recent years. The interested reader can consult, for instance, [5], [17], [19]. Note
that the validity of ($\urcorner KO$) is independent of the choice of
$\mu\in[0,1)$, and we can thus refer ($\urcorner KO$) to $K_{0}=K$. This
follows since, by (34), $K_{\mu}(t)\asymp t^{p}$ as $t\rightarrow+\infty$,
where the constant is independent of $\mu$, and thus $K_{\mu}^{-1}(s)\asymp
s^{1/p}$ as $s\rightarrow+\infty$, for some constants which are uniform when
$\mu\in[0,1)$. Therefore, ($\urcorner KO$) is also equivalent to
(35) $\frac{1}{\beta(s)^{1/p}}\not\in L^{1}(+\infty)$
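For instance (an illustrative computation that does not appear in the original text), if $B(t)=t^{q}$ for some $q>0$, then $\beta(s)=s^{q+1}/(q+1)$ and $\beta(s)^{-1/p}\asymp s^{-(q+1)/p}$, so (35), hence ($\urcorner KO$), holds precisely when $q+1\leq p$, that is, when $B$ grows at most like $t^{p-1}$; this is consistent with assumption (36) below.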
###### Lemma 6.4.
Under the assumptions of the previous lemma and the subsequent discussion,
suppose that $g^{\prime}\geq 0$ on $\mathbb{R}^{+}$. If
($\urcorner KO$) $\frac{1}{K^{-1}(\beta(s))}\not\in L^{1}(+\infty),$
then, for every choice of $c\in(0,1]$, the solution $z_{c}$ of (32) is defined
on $[R,+\infty)$.
###### Proof.
From $[g^{m-1}\varphi(cz^{\prime})]^{\prime}=g^{m-1}B(cz)$ and $g^{\prime}\geq
0$ we deduce that
$\varphi^{\prime}(cz^{\prime})cz^{\prime\prime}\leq B(cz),\quad\text{so
that}\quad cz^{\prime}\varphi^{\prime}(cz^{\prime})cz^{\prime\prime}\leq
B(cz)cz^{\prime}=(\beta(cz))^{\prime}.$
Hence integrating and changing variables we obtain
$K_{\mu}(cz^{\prime})=\int_{\mu}^{cz^{\prime}}s\varphi^{\prime}(s)\mathrm{d}s\leq\int_{0}^{cz}B(s)\mathrm{d}s=\beta(cz).$
Applying $K_{\mu}^{-1}$, $cz^{\prime}\leq K_{\mu}^{-1}(\beta(cz))$. Since $z^{\prime}>0$, we can divide the last inequality by $K_{\mu}^{-1}(\beta(cz))$
and integrate on $[R,t)$ to get, after changing variables,
$\int_{0}^{cz(t)}\frac{\mathrm{d}s}{K_{\mu}^{-1}(\beta(s))}\leq t-R.$
By ($\urcorner KO$), we deduce that $\rho$ cannot be finite for any fixed
choice of $c$. ∎
For every $R>0$, we have produced a radial function $u_{c}=(cz_{c})\circ r$
which solves $L_{\varphi}u_{c}=B(u_{c})$ on $M\backslash B_{R}$ and $u_{c}=0$
on $B_{R}$. The next step is to guarantee that, up to choosing $\mu,c$
appropriately, $u_{c}$ can be arbitrarily small on some bigger ball
$B_{R_{1}}$. The basic step is a uniform control of the norm of $z_{c}$ on
$[R,R_{1}]$ with respect to the variable $c$, up to choosing $\mu=\mu(c)$
appropriately small. This requires a further control on $B(t)$, this time on
the whole $\mathbb{R}^{+}$ and not only in a neighbourhood of $+\infty$.
###### Lemma 6.5.
Under the assumptions of the previous lemma, suppose further that
(36) $B(t)\leq b_{1}t^{p-1}\qquad\text{on }\mathbb{R}^{+}.$
Then, for every $R_{1}>R$ and every $c\in(0,1]$, there exists $\mu>0$
depending on $c$ such that the solution $z_{c}$ of (32) with $\vartheta=0$
satisfies
(37) $\left\|z_{c}\right\|_{L^{\infty}([R,R_{1}])}\leq K,$
for some $K>0$ depending on $R,R_{1}$, on $a_{2}$ in (34) and on $b_{1}$ in
(36) but not on $c$.
###### Proof.
Note that, by (36), ($\urcorner KO$) (equivalently, (35)) is satisfied. Hence,
$z_{c}$ is defined on $[R,+\infty)$ for every choice of $\mu,c$. Fix
$R_{1}>R$. Setting $\vartheta=0$ in the expression (33) of the operator
$T_{c}$, and using the monotonicity of $g$ and $z_{c}$, we deduce that
$\begin{array}[]{lcl}z_{c}(t)&\leq&\displaystyle\frac{1}{c}\int_{R}^{t}\varphi^{-1}\left(\varphi(c\mu)+\int_{R}^{s}B(cz_{c}(\tau))\mathrm{d}\tau\right)\mathrm{d}s\\\\[11.38092pt]
&\leq&\displaystyle\frac{1}{c}\int_{R}^{t}\varphi^{-1}\Big{(}\varphi(c\mu)+(R_{1}-R)B(cz_{c}(s))\Big{)}\mathrm{d}s.\end{array}$
Differentiating, this gives
$\varphi(cz^{\prime}_{c}(t))\leq\varphi(c\mu)+(R_{1}-R)B(cz_{c}(t)).$
Now, from (34) and (36) we get
(38) $c^{p-1}(z_{c}^{\prime})^{p-1}\leq a_{2}\varphi(c\mu)+a_{2}(R_{1}-R)b_{1}c^{p-1}z_{c}^{p-1}.$
Choose $\mu$ in such a way that
$\varphi(c\mu)\leq c^{p-1},\qquad\text{that
is,}\qquad\mu\leq\frac{1}{c}\varphi^{-1}(c^{p-1})$
Then, dividing (38) by $c^{p-1}$ and applying the elementary inequality
$(x+y)^{a}\leq 2^{a}(x^{a}+y^{a})$ we obtain the existence of a constant
$K=K(R_{1},R,a_{2},b_{1})$ such that
$z_{c}^{\prime}(t)\leq K(1+z_{c}(t)).$
Estimate (37) follows by applying the Gronwall inequality. ∎
###### Corollary 6.6.
Let the assumptions of the last lemma be satisfied. Then, for each
triple $(B_{R},B_{R_{1}},\varepsilon)$, there exists a positive, radially
increasing solution of $L_{\varphi}u=B(u)$ on $M_{g}\backslash B_{R}$ such
that $u=0$ on $\partial B_{R}$ and $u<\varepsilon$ on $B_{R_{1}}\backslash
B_{R}$.
###### Proof.
By the previous lemma, for every $c\in(0,1]$ we can choose $\mu=\mu(c)>0$ such
that the resulting solution $z_{c}$ of (32) is uniformly bounded on
$[R,R_{1}]$ by some $K$ independent of $c$. Since, by (28),
$u_{c}=(cz_{c})\circ r$ solves $L_{\varphi}u_{c}=B(u_{c})$, it is enough to
choose $c<\varepsilon/K$ to get a desired $u=u_{c}$ for the triple
$(B_{R},B_{R_{1}},\varepsilon)$. ∎
To conclude, we shall show that Evans potentials exist for any triple
$(K,\Omega,\varepsilon)$, not necessarily given by concentric balls centered
at the origin. In order to do so, we use a comparison argument with suitable
radial Evans potentials. Consequently, we need to ensure that, for careful
choices of $c,\mu$, the radial Evans potentials do not overlap.
###### Lemma 6.7.
Under the assumptions of Lemma 6.4, let $R>0$ be chosen, and let $w$ be a
positive, increasing $C^{1}$ solution of
(39)
$\left\\{\begin{array}[]{l}\big{[}g^{m-1}\varphi(w^{\prime})\big{]}^{\prime}=g^{m-1}B(w)\qquad\text{on
}[R,+\infty)\\\\[5.69046pt] w(R)=0,\quad
w^{\prime}(R)=w^{\prime}_{R}>0\end{array}\right.$
Fix $\hat{R}>R$. Then, for every $c>0$, there exists $\mu=\mu(c,R,\hat{R})$
small enough that the solution $z_{c}$ of (32), with $R$ replaced by
$\hat{R}$, satisfies $cz_{c}<w$ on $[\hat{R},+\infty)$.
###### Proof.
Let $\mu$ satisfy
$g^{m-1}(R)\varphi(w^{\prime}_{R})>g^{m-1}(\hat{R})\varphi(c\mu)$. Suppose by
contradiction that $\\{cz_{c}\geq w\\}$ is a closed, non-empty set. Let
$r>\hat{R}$ be the first point where $cz_{c}=w$. Then, $cz_{c}\leq w$ on
$[\hat{R},r]$, thus $cz_{c}^{\prime}(r)\geq w^{\prime}(r)$. However, from the
chain of inequalities
$\begin{array}[]{lcl}\varphi(w^{\prime}(r))&=&\displaystyle\frac{g^{m-1}(R)\varphi(w^{\prime}_{R})}{g^{m-1}(r)}+\int_{R}^{r}B(w(\tau))\mathrm{d}\tau\\\\[8.5359pt]
&>&\displaystyle\frac{g^{m-1}(\hat{R})\varphi(c\mu)}{g^{m-1}(r)}+\int_{\hat{R}}^{r}B(cz_{c}(\tau))\mathrm{d}\tau=\varphi(cz_{c}^{\prime}(r)),\end{array}$
and from the strict monotonicity of $\varphi$ we deduce
$w^{\prime}(r)>cz^{\prime}_{c}(r)$, a contradiction. ∎
###### Corollary 6.8.
For each $u$ constructed in Corollary 6.6, and for every $R_{2}>R$, there
exists a positive, radially increasing solution $w$ of $L_{\mathcal{F}}w=0$ on
$M_{g}\backslash B_{R_{2}}$ such that $w=0$ on $\partial B_{R_{2}}$ and $w\leq
u$ on $M\backslash B_{R_{2}}$.
###### Proof.
It is a straightforward application of the last Lemma. ∎
We are now ready to state the main result of this section.
###### Theorem 6.9.
Let $M_{g}$ be a model with origin $o$ and non-decreasing defining function
$g$. Let $\varphi$ satisfy (34) with $a_{1}=0$, and suppose that $B(t)$
satisfies (36). Define $L_{\mathcal{F}}$ according to
$L_{\mathcal{F}}u=L_{\varphi}u-B(u)$. Then, properties $(K)$, $(L)$ (for
$\mathrm{H\ddot{o}l}_{\mathrm{loc}}$ or $L^{\infty}$) and $(E)$ restricted to
triples $(K,\Omega,\varepsilon)$ with $o\in K$ are equivalent, and also
equivalent to either
(40) $\left(\frac{\mathrm{vol}(B_{r})}{\mathrm{vol}(\partial
B_{r})}\right)^{\frac{1}{p-1}}\not\in L^{1}(+\infty)\quad\text{if }B>0\text{
on }\mathbb{R}^{+},$
or
(41) $\left(\frac{1}{\mathrm{vol}(\partial
B_{r})}\right)^{\frac{1}{p-1}}\not\in L^{1}(+\infty)\quad\text{otherwise.}$
###### Proof.
From (34), assumptions (40) and (41) are equivalent, respectively, to (30) and
(31). Therefore, by Proposition 6.1 and Theorem 2.8, the result will be proved
once we show that $(L)$ implies $(E)$ restricted to the triples
$(K,\Omega,\varepsilon)$ such that $o\in K$. Fix such a triple
$(K,\Omega,\varepsilon)$. Since $o\in K$ and $K$ is open, let $R<\rho$ be such
that $B_{R}\Subset K\Subset\Omega\Subset B_{\rho}$. By making use of Corollary
6.6 we can construct a radially increasing solution $w_{2}$ of
$L_{\mathcal{F}}w_{2}=0$ associated to the triple
$(B_{R},B_{\rho},\varepsilon)$. By $(L)$, $w_{2}$ must tend to $+\infty$ as $x$
diverges, for otherwise by the pasting Lemma 3.16 the function $s$ obtained
extending $w_{2}$ with zero on $B_{R}$ would be a bounded, non-negative, non-
constant solution of $L_{\mathcal{F}}s\geq 0$, a contradiction. From Corollary
6.8 and the same reasoning, we can produce another exhaustion $w_{1}$ solving
$L_{\mathcal{F}}w_{1}=0$ on $M\backslash B_{\rho}$, $w_{1}=0$ on $\partial
B_{\rho}$ and $w_{1}\leq w_{2}$ on $M\backslash B_{\rho}$. Setting $w_{1}$
equal to zero on $B_{\rho}$, by the pasting lemma $w_{1}$ is a global
subsolution on $M$ below $w_{2}$. By the subsolution-supersolution method on
$M\backslash K$, there exists a solution $w$ such that $w_{1}\leq w\leq
w_{2}$. By construction, $w$ is an exhaustion and $w\leq\varepsilon$ on
$\Omega\backslash K$. Note that, by Remark 3.6, from (34) with $a_{1}=0$ we
deduce that $w\in C^{1}(M\backslash K)$. We claim that $w>0$ on $M\backslash
K$. To prove the claim we can appeal to the strong maximum principle in the
form given in [27], Theorem 1.2. Indeed, again from (34) with $a_{1}=0$ we
have (in their notation)
$pa_{2}^{-1}s^{p}\leq K(s)\leq pa_{2}s^{p}\ \text{ on }\mathbb{R}^{+},\qquad
0\leq F(s)\leq\frac{b_{1}}{p}s^{p}\ \text{ on }\mathbb{R}^{+},$
hence
$\frac{1}{K^{-1}(F(s))}\not\in L^{1}(0^{+}).$
The last expression is a necessary and sufficient condition for the validity of the strong maximum principle for $C^{1}$ solutions $u$ of $L_{\mathcal{F}}u\leq 0$. Therefore, $w>0$ on $M\backslash K$ follows since $w$ is not identically zero by construction. In conclusion, $w$ is an Evans potential relative to
$(K,\Omega,\varepsilon)$, as desired. ∎
Acknowledgements: We would like to thank Prof. Anders Björn for a helpful e-mail discussion, and in particular for having suggested to us the reference to Theorem 3.20. Furthermore, we thank Profs. S. Pigola, M. Rigoli and A.G. Setti for a very careful reading of this paper and for their useful comments.
## References
* [1] Paolo Antonini, Dimitri Mugnai, and Patrizia Pucci, _Quasilinear elliptic inequalities on complete Riemannian manifolds_ , J. Math. Pures Appl. (9) 87 (2007), no. 6, 582–600, doi:10.1016/j.matpur.2007.04.003. MR 2335088 (2008k:58046)
* [2] Anders Björn and Jana Björn, _Boundary regularity for $p$-harmonic functions and solutions of the obstacle problem on metric spaces_, J. Math. Soc. Japan 58 (2006), no. 4, 1211–1232, doi:10.2969/jmsj/1179759546. MR 2276190 (2008c:35059)
* [3] Felix E. Browder, _Existence theorems for nonlinear partial differential equations_ , Global Analysis (Proc. Sympos. Pure Math., Vol. XVI, Berkeley, Calif., 1968), Amer. Math. Soc., Providence, R.I., 1970, pp. 1–60. MR 0269962 (42 #4855)
* [4] Roberta Filippucci, Patrizia Pucci, and Marco Rigoli, _Non-existence of entire solutions of degenerate elliptic inequalities with weights_ , Arch. Ration. Mech. Anal. 188 (2008), no. 1, 155–179, Available from: http://dx.doi.org/10.1007/s00205-007-0081-5, doi:10.1007/s00205-007-0081-5. MR 2379656 (2009a:35273)
* [5] by same author, _On weak solutions of nonlinear weighted $p$-Laplacian elliptic inequalities_, Nonlinear Anal. 70 (2009), no. 8, 3008–3019, Available from: http://dx.doi.org/10.1016/j.na.2008.12.031, doi:10.1016/j.na.2008.12.031. MR 2509387 (2010f:35094)
* [6] Ronald Gariepy and William P. Ziemer, _A regularity condition at the boundary for solutions of quasilinear elliptic equations_ , Arch. Rational Mech. Anal. 67 (1977), no. 1, 25–39, doi:10.1007/BF00280825. MR 0492836 (58 #11898)
* [7] David Gilbarg and Neil S. Trudinger, _Elliptic partial differential equations of second order_ , Classics in Mathematics, Springer-Verlag, Berlin, 2001, Reprint of the 1998 edition. MR 1814364 (2001k:35004)
* [8] Alexander Grigor′yan, _Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds_ , Bull. Amer. Math. Soc. (N.S.) 36 (1999), no. 2, 135–249, doi:10.1090/S0273-0979-99-00776-4. MR 1659871 (99k:58195)
* [9] Juha Heinonen, Tero Kilpeläinen, and Olli Martio, _Nonlinear potential theory of degenerate elliptic equations_ , Dover Publications Inc., Mineola, NY, 2006, Unabridged republication of the 1993 original. MR 2305115 (2008g:31019)
* [10] J. B. Keller, _On solutions of $\Delta u=f(u)$_, Comm. Pure Appl. Math. 10 (1957), 503–510. MR 0091407 (19,964c)
* [11] R. Z. Khas′minskiĭ, _Ergodic properties of recurrent diffusion processes and stabilization of the solution of the Cauchy problem for parabolic equations_ , Teor. Verojatnost. i Primenen. 5 (1960), 196–214. MR 0133871 (24 #A3695)
* [12] Tero Kilpeläinen, _Singular solutions to $p$-Laplacian type equations_, Ark. Mat. 37 (1999), no. 2, 275–289, Available from: http://dx.doi.org/10.1007/BF02412215, doi:10.1007/BF02412215. MR 1714768 (2000k:31010)
* [13] David Kinderlehrer and Guido Stampacchia, _An introduction to variational inequalities and their applications_ , Pure and Applied Mathematics, vol. 88, Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1980. MR 567696 (81g:49013)
* [14] Takeshi Kura, _The weak supersolution-subsolution method for second order quasilinear elliptic equations_ , Hiroshima Math. J. 19 (1989), no. 1, 1–36, Available from: http://projecteuclid.org/getRecord?id=euclid.hmj/1206129479. MR 1009660 (90g:35057)
* [15] Zenjiro Kuramochi, _Mass distributions on the ideal boundaries of abstract Riemann surfaces. I_ , Osaka Math. J. 8 (1956), 119–137. MR 0079638 (18,120f)
* [16] Olga A. Ladyzhenskaya and Nina N. Ural′tseva, _Linear and quasilinear elliptic equations_ , Translated from the Russian by Scripta Technica, Inc. Translation editor: Leon Ehrenpreis, Academic Press, New York, 1968\. MR 0244627 (39 #5941)
* [17] Marco Magliaro, Luciano Mari, Paolo Mastrolia, and Marco Rigoli, _Keller-Osserman type conditions for differential inequalities with gradient terms on the Heisenberg group_ , J. Diff. Eq. 250 (2011), no. 6, 2643–2670.
* [18] Jan Malý and William P. Ziemer, _Fine regularity of solutions of elliptic partial differential equations_ , Mathematical Surveys and Monographs, vol. 51, American Mathematical Society, Providence, RI, 1997. MR 1461542 (98h:35080)
* [19] Luciano Mari, Marco Rigoli, and Alberto G. Setti, _Keller-Osserman conditions for diffusion-type operators on Riemannian manifolds_ , J. Funct. Anal. 258 (2010), no. 2, 665–712, Available from: http://dx.doi.org/10.1016/j.jfa.2009.10.008, doi:10.1016/j.jfa.2009.10.008. MR 2557951 (2011c:58041)
* [20] Mitsuru Nakai, _On Evans potential_ , Proc. Japan Acad. 38 (1962), 624–629, doi:10.3792/pja/1195523234. MR 0150296 (27 #297)
* [21] Robert Osserman, _On the inequality $\Delta u\geq f(u)$_, Pacific J. Math. 7 (1957), 1641–1647. MR 0098239 (20 #4701)
* [22] S. Pigola, M. Rigoli, and A. G. Setti, _Maximum principles at infinity on Riemannian manifolds: an overview_ , Mat. Contemp. 31 (2006), 81–128, Workshop on Differential Geometry (Portuguese). MR 2385438 (2009i:35036)
* [23] Stefano Pigola, Marco Rigoli, and Alberto G. Setti, _A remark on the maximum principle and stochastic completeness_ , Proc. Amer. Math. Soc. 131 (2003), no. 4, 1283–1288 (electronic), doi:10.1090/S0002-9939-02-06672-8. MR 1948121 (2003k:58063)
* [24] by same author, _Maximum principles on Riemannian manifolds and applications_ , Mem. Amer. Math. Soc. 174 (2005), no. 822, x+99. MR 2116555 (2006b:53048)
* [25] by same author, _Some non-linear function theoretic properties of Riemannian manifolds_ , Rev. Mat. Iberoam. 22 (2006), no. 3, 801–831, Available from: http://projecteuclid.org/getRecord?id=euclid.rmi/1169480031. MR 2320402 (2008h:31010)
* [26] by same author, _Aspects of potential theory on manifolds, linear and non-linear_ , Milan J. Math. 76 (2008), 229–256, doi:10.1007/s00032-008-0084-1. MR 2465992 (2009j:31010)
* [27] Patrizia Pucci, Marco Rigoli, and James Serrin, _Qualitative properties for solutions of singular elliptic inequalities on complete manifolds_ , J. Differential Equations 234 (2007), no. 2, 507–543, doi:10.1016/j.jde.2006.11.013. MR 2300666 (2008b:35307)
* [28] Patrizia Pucci and James Serrin, _The maximum principle_ , Progress in Nonlinear Differential Equations and their Applications, 73, Birkhäuser Verlag, Basel, 2007. MR 2356201 (2008m:35001)
* [29] Patrizia Pucci, James Serrin, and Henghui Zou, _A strong maximum principle and a compact support principle for singular elliptic inequalities_ , J. Math. Pures Appl. (9) 78 (1999), no. 8, 769–789, doi:10.1016/S0021-7824(99)00030-6. MR 1715341 (2001j:35095)
* [30] L. Sario and M. Nakai, _Classification theory of Riemann surfaces_ , Die Grundlehren der mathematischen Wissenschaften, Band 164, Springer-Verlag, New York, 1970. MR 0264064 (41 #8660)
* [31] Chiung-Jue Sung, Luen-Fai Tam, and Jiaping Wang, _Spaces of harmonic functions_ , J. London Math. Soc. (2) 61 (2000), no. 3, 789–806, doi:10.1112/S0024610700008759. MR 1766105 (2001i:31013)
* [32] Peter Tolksdorf, _Regularity for a more general class of quasilinear elliptic equations_ , J. Differential Equations 51 (1984), no. 1, 126–150, doi:10.1016/0022-0396(84)90105-0. MR 727034 (85g:35047)
* [33] Daniele Valtorta, _Reverse khas’minskii condition_ , to appear on Math. Z., doi:10.1007/s00209-010-0790-6.
* [34] Daniele Valtorta and Giona Veronelli, _Stokes’ theorem, volume growth and parabolicity_ , to appear on Tohoku Math. J. 63 (2011), no. 3, 397–412, doi:10.2748/tmj/1318338948.
|
arxiv-papers
| 2011-06-07T14:00:38 |
2024-09-04T02:49:19.429445
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Luciano Mari and Daniele Valtorta",
"submitter": "Luciano Mari",
"url": "https://arxiv.org/abs/1106.1352"
}
|
1106.1480
|
# Offsets in Electrostatically Determined Distances: Implications for Casimir
Force Measurements
S. K. Lamoreaux steve.lamoreaux@yale.edu Yale University, Department of
Physics, P.O. Box 208120, New Haven, CT 06520-8120 A. O. Sushkov
alex.sushkov@yale.edu Yale University, Department of Physics, P.O. Box 208120,
New Haven, CT 06520-8120
###### Abstract
The imperfect termination of static electric fields at semiconducting surfaces
has been long known in solid state and transistor physics. We show that the
imperfect shielding leads to an offset in the distance between two surfaces as
determined by electrostatic force measurements. The effect exists even in the
case of good conductors (metals) albeit much reduced.
It has been long known that a static electric field applied externally to a
semiconductor will not be perfectly internally shielded. We previously
considered the case of Debye screening, which predicts a penetration depth of
0.7 $\mu$m for Ge, and 24 $\mu$m for Si (intrinsic undoped materials at 300 K)
[debye], which leads to a distance offset when the force due to a voltage applied between the plates is used for distance determination. The distance correction is given by twice the Debye length divided by the (bare) dielectric constant. Our measurements with Ge [kim] did not support such a large distance
correction, and indeed it has been long known that surface states lead to a
shielding of the internal field. The first attempts to build a field effect
transistor showed an internal field at least 100 times smaller than expected
based on bulk material considerations.shockley Bardeen provided a complete
explanation of the importance of surface states, and the basic premises are
well established fundamental considerations in transistor physics.bardeen
These effects largely explain why no distance correction was observed in our
Ge experiment. The analysis presented here, which included band bending and
surface state effects, shows that the simple Debye screening treatment of this
problem is an oversimplification.
We are reconsidering the problem because in our recent attempts at measuring
the Casimir force between high resistivity ($>10{\rm k}\Omega$cm) Si plates,
we see a distance offset of between 60 nm and 600 nm, depending on how the
plates were cleaned. The sensitivity of the effect to surface condition
suggests that the surface states are likely the most important part of the
offset correction. In our study of the effect, it has become apparent that
distance offsets are probably affecting all Casimir experiments to date, and this is an effect that needs to be considered among the panoply of systematic
effects that are only recently being acknowledged. It is interesting to note
that all of these systematic effects (roughness, vibration, surface patch
potentials, and now the offset effect considered here) lead to an apparent
increase in the Casimir force compared to its true value at a given distance
as determined electrostatically.
The basic physical principle is shown and explained in Fig. 1. It is usually
assumed that the charge density in the depletion region is constant ($n_{d}$)
over its width $d_{1}$ [dekker]. This approximation is very good in the case where the Fermi level is well below the conduction band; raising the
potential of the conduction band a small amount essentially empties it.
Shown in Fig. 2 are the related processes that are important for an
electrostatic distance calibration. We consider a case where the system is in
equilibrium when a potential $V_{0}$ is applied between the plates, and then
ask what happens when $V_{0}$ is changed by $\delta V$.
First, when $\delta V=0$, we have (in units with $\epsilon_{0}=1$, and the
electron charge $e=1$)
$\sigma_{0}d_{0}+V_{1}=V_{0}$ (1)
by energy conservation and
$\sigma_{0}+\sigma_{1}-n_{d}d_{1}=0$ (2)
by charge neutrality, where $\sigma_{0}$ is the charge on the perfectly
conducting plate, $\sigma_{1}$ is the surface state charge on the
semiconducting plate, and $d_{0}$ is the physical plate separation. Integrating
the electric field from the surface into the bulk, using Poisson’s equation,
indicates that
$V_{1}=\alpha d_{1}^{2}$ (3)
where $\alpha\approx n_{d}/\kappa$ with $\kappa$ the bare dielectric constant.
Furthermore, we have
$\sigma_{1}=(E_{F}-V_{1})n_{s}$ (4)
where $n_{s}$ is the surface density of states (e.g., electrons/cm2volt in
appropriate units). We thus find that
$\sigma_{0}=-(E_{F}-V_{1})n_{s}+n_{d}\sqrt{V_{1}/\alpha}$ (5)
or
$\sigma_{0}=-\left(E_{F}-(V_{0}-\sigma_{0}d_{0})\right)n_{s}+n_{d}\sqrt{(V_{0}-\sigma_{0}d_{0})/\alpha}$
(6)
which gives the relationship between $\sigma_{0}$ and $V_{0}$ which can be
used to find the net effective differential capacitance. It is easiest to
change $V_{0}$ by $\delta V$ and determine $\delta V_{1}$, $\delta d_{1}$,
$\delta\sigma_{0}$ and $\delta\sigma_{1}$, with fixed $n_{d}$, $E_{F}$,
$n_{s}$, and $d_{0}$. From charge neutrality, we have
$\delta\sigma_{0}-n_{s}\delta V_{1}-n_{d}\delta d_{1}=0.$ (7)
Furthermore,
$\delta d_{1}={1\over 2}{\delta V_{1}\over\sqrt{\alpha V_{1}}}$ (8)
and
$d_{0}\delta\sigma_{0}+\delta V_{1}=\delta V.$ (9)
We thus arrive at
$\delta\sigma_{0}+n_{s}d_{0}\delta\sigma_{0}+d_{0}n_{d}(4\alpha V_{1})^{-1/2}\delta\sigma_{0}=n_{s}\delta V+n_{d}(4\alpha V_{1})^{-1/2}\delta V$ (10)
which gives a differential capacitance (per unit area) of
$C={\delta\sigma_{0}\over\delta V}={n_{s}+n_{d}(4\alpha V_{1})^{-1/2}\over
1/d_{0}+n_{s}+n_{d}(4\alpha V_{1})^{-1/2}}\times{1\over d_{0}}.$ (11)
Evidently, this equation can be written in a form
$C={C_{A}C_{B}\over C_{A}+C_{B}}$ (12)
where
$C_{A}={1\over d_{0}}=C_{gap}$ (13)
and
$C_{B}=n_{s}+n_{d}(4\alpha
V_{1})^{-1/2}=C_{surface}+C_{bulk}=C_{maximum}={1\over d_{offset}}$ (14)
which is the maximum capacitance that can be observed as $d_{0}\rightarrow 0$.
It should be noted that the offset distance does not depend on $d_{0}$; the
capacitance between a spherical and flat surface has the same offset as that
for flat plates, in the limit that the proximity force approximation applies.
Taking into account surface and bulk (space charge) effects shows that the net
(differential) capacitance is lowered due to the parallel combination of
$C_{surface}$ and $C_{bulk}$ in series with $C_{gap}$. Thus, the plates are
closer than the distance given by an electrostatic calibration. When both
plates are made of the same semiconducting material, the distance offset
calculated here is simply multiplied by 2.
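As a quick consistency check (an illustrative sketch that is not part of the original paper; the variable names below are ours), one can verify symbolically that Eq. (11) coincides with the series combination of Eqs. (12)-(14), and that the corresponding inverse capacitances, i.e. distances, simply add:

```python
# Illustrative check (not from the paper): Eq. (11) equals C_gap in series with
# C_surface + C_bulk, and the corresponding distances (inverse capacitances) add.
import sympy as sp

d0, ns, nd, alpha, V1 = sp.symbols('d_0 n_s n_d alpha V_1', positive=True)

C_B = ns + nd / sp.sqrt(4 * alpha * V1)   # C_surface + C_bulk, Eq. (14)
C_A = 1 / d0                              # bare gap capacitance per unit area, Eq. (13)

C_eq11 = (C_B / (1 / d0 + C_B)) / d0      # Eq. (11)
C_series = C_A * C_B / (C_A + C_B)        # Eq. (12)

print(sp.simplify(C_eq11 - C_series))               # 0: the two forms agree
print(sp.simplify(1 / C_series - (d0 + 1 / C_B)))   # 0: effective distance is d_0 + d_offset
```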
It is almost impossible to know the parameters required to calculate the
surface and bulk differential capacitances. For our recent measurements with
Si, we have measured the distance offset directly in the experiment (in situ)
by performing an electrostatic distance determination, then measuring how far
the plates must be moved until they touch. A more accurate determination of
the offset was possible by firmly mounting the plates, with one on a
translation stage. The offset was determined by measuring the capacitance as a
function of distance until the plates touched, which could be readily
electrically determined. A maximum capacitance, at zero plate separation as
determined by the electrical contact, was directly measured, and found to be
62 $\pm$ 5 nm. After further cleaning of the plates, an in situ measurement
shows a 600 nm distance offset.
We further note that even in the case of good conductors, there is a space
charge (bulk) correction to the capacitance. In [wallmark] (Sec. 3-2.4) it is
shown that in the case of a parabolic conduction band, the effective distance
offset is about 0.1 nm per plate. This means that for distances of
approximately 100 nm, in a force gradient type experiment between flat and smooth surfaces, noting that the electrostatic distance offset means the plates
are closer than expected, the correction is
${\delta F\over F}=-4{\delta d\over d}=-4\times(-0.2)/100\approx 1\%$ (15)
which is at the level of the claimed accuracy of several experiments.
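The arithmetic behind Eq. (15) can be packaged in a couple of lines (a minimal sketch of ours, assuming only the $F\propto d^{-4}$ scaling and the 0.2 nm total offset quoted above):

```python
# Relative force error, Eq. (15): the plates are actually closer than the
# electrostatically calibrated distance by a fixed offset (0.1 nm per plate here).
def force_error(offset_nm: float, distance_nm: float, exponent: int = 4) -> float:
    return -exponent * (-offset_nm) / distance_nm

print(f"{force_error(0.2, 100.0):.1%}")  # -> 0.8%, roughly the 1% quoted in Eq. (15)
```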
Because the offset is very sensitive to the physical surface properties, it
would appear prudent to either directly measure the offset, or include it
as an adjustable parameter in comparing theory to experiment. We have found
that it is simple to measure the maximum capacitance that occurs just before
the plates come into physical (electrical) contact. This distance needs to be
corrected by the surface roughness, and the measurements must be done at
sufficiently low frequency for equilibrium to be established; nevertheless, the measurements are straightforward.
Figure 1: On the left is shown a non-equilibrium distribution of electrons;
taking $E=0$ as an “uncharged” surface, the surface state energy does not
equal the Fermi energy $E_{F}$ which lies between the valence and conduction
bands for a semiconductor. To establish equilibrium, electrons flow from the
bulk to surface states until the surface states are filled to $E_{F}$, a,
leaving a depletion, or “space charge” region of depth $d_{1}$. This
redistribution of charge causes the bands to bend, as shown. Figure 2:
Schematic model of electric field penetration through the surface of a
semiconductor.
## References
* (1) S.K. Lamoreaux, arXiv:0801.1283 .
* (2) W.-J. Kim, A.O. Sushkov, D.A.R. Dalvit, and S.K. Lamoreaux, Phys. Rev. Lett. 103, 060401 (2009).
* (3) W. Shockley and G.L. Pearson, Phys. Rev. 74, 232 (1948).
* (4) J. Bardeen, Phys. Rev. 71, 717 (1947).
* (5) A.J. Dekker, Solid State Physics (Prentice-Hall, Englewood Cliffs, N.J., 1957, ninth printing), Sec. 14-4.
* (6) Dietrich Meyerhofer, “Conduction through Insulating Layers,” in Field Effect Transistors (J. Torkel Wallmark and Harwick Johnson, editors) (Prentice-Hall, Englewood Cliffs, N.J., 1966), Sec. 3-2.4.
|
arxiv-papers
| 2011-06-08T01:43:47 |
2024-09-04T02:49:19.445018
|
{
"license": "Public Domain",
"authors": "S.K. Lamoreaux and A.O. Sushkov",
"submitter": "Steve K. Lamoreaux",
"url": "https://arxiv.org/abs/1106.1480"
}
|
1106.1686
|
# Erlangen Programme at Large: An Overview
Vladimir V. Kisil School of Mathematics, University of Leeds, Leeds, LS2 9JT,
UK kisilv@maths.leeds.ac.uk http://www.maths.leeds.ac.uk/~kisilv/ Dedicated
to Prof. Hans G. Feichtinger on the occasion of his 60th birthday School of
Mathematics, University of Leeds, Leeds LS2 9JT, England
E-mail: kisilv@maths.leeds.ac.uk
###### Abstract.
This is an overview of _Erlangen Programme at Large_. Study of objects and
properties, which are invariant under a group action, is very fruitful far
beyond the traditional geometry. In this paper we demonstrate this on the
example of the group $SL_{2}{}(\mathbb{R}{})$. Starting from the conformal
geometry we develop analytic functions and apply these to functional calculus.
Finally we link this to quantum mechanics and conclude by a list of open
problems.
###### Key words and phrases:
Special linear group, Hardy space, Clifford algebra, elliptic, parabolic,
hyperbolic, complex numbers, dual numbers, double numbers, split-complex
numbers, Cauchy-Riemann-Dirac operator, Möbius transformations, functional
calculus, spectrum, quantum mechanics, non-commutative geometry.
###### 2000 Mathematics Subject Classification:
Primary 30G35; Secondary 22E46, 30F45, 32F45, 43A85, 30G30, 42C40, 46H30,
47A13, 81R30, 81R60.
On leave from the Odessa University.
###### Contents
1. Erlangen Programme at Large: An Overview
1. 1 Introduction
1. 1.1 Make a Guess in Three Attempts
2. 1.2 Erlangen Programme at Large
2. 2 Geometry
1. 2.1 Hypercomplex Numbers
2. 2.2 Cycles as Invariant Families
3. 2.3 Invariants: Algebraic and Geometric
4. 2.4 Joint Invariants: Orthogonality
5. 2.5 Higher Order Joint Invariants: f-Orthogonality
6. 2.6 Distance, Length and Perpendicularity
3. 3 Linear Representations
1. 3.1 Hypercomplex Characters
2. 3.2 Induced Representations
3. 3.3 Similarity and Correspondence: Ladder Operators
1. 3.3.1 Elliptic Ladder Operators
2. 3.3.2 Hyperbolic Ladder Operators
3. 3.3.3 Parabolic Ladder Operators
4. 4 Covariant Transform
1. 4.1 Extending Wavelet Transform
2. 4.2 Examples of Covariant Transform
3. 4.3 Symbolic Calculi
4. 4.4 Inverse Covariant Transform
5. 5 Analytic Functions
1. 5.1 Induced Covariant Transform
2. 5.2 Induced Wavelet Transform and Cauchy Integral
3. 5.3 The Cauchy-Riemann (Dirac) and Laplace Operators
4. 5.4 The Taylor Expansion
5. 5.5 Wavelet Transform in the Unit Disk and Other Domains
6. 6 Covariant and Contravariant Calculi
1. 6.1 Intertwining Group Actions on Functions and Operators
2. 6.2 Jet Bundles and Prolongations
3. 6.3 Spectrum and Spectral Mapping Theorem
4. 6.4 Functional Model and Spectral Distance
5. 6.5 Covariant Pencils of Operators
7. 7 Quantum Mechanics
1. 7.1 The Heisenberg Group and Its Automorphisms
1. 7.1.1 The Heisenberg group and induced representations
2. 7.1.2 Symplectic Automorphisms of the Heisenberg Group
2. 7.2 p-Mechanic Formalism
1. 7.2.1 Convolutions (Observables) on $\mathbb{H}^{n}{}$ and Commutator
2. 7.2.2 States and Probability
3. 7.3 Elliptic characters and Quantum Dynamics
1. 7.3.1 Fock–Segal–Bargmann and Schrödinger Representations
2. 7.3.2 Commutator and the Heisenberg Equation
3. 7.3.3 Quantum Probabilities
4. 7.4 Ladder Operators and Harmonic Oscillator
1. 7.4.1 Ladder Operators from the Heisenberg Group
2. 7.4.2 Symplectic Ladder Operators
5. 7.5 Hyperbolic Quantum Mechanics
1. 7.5.1 Hyperbolic Representations of the Heisenberg Group
2. 7.5.2 Hyperbolic Dynamics
3. 7.5.3 Hyperbolic Probabilities
4. 7.5.4 Ladder Operators for the Hyperbolic Subgroup
5. 7.5.5 Double Ladder Operators
6. 7.6 Parabolic (Classical) Representations on the Phase Space
1. 7.6.1 Classical Non-Commutative Representations
2. 7.6.2 Hamilton Equation
3. 7.6.3 Classical probabilities
4. 7.6.4 Ladder Operator for the Nilpotent Subgroup
5. 7.6.5 Similarity and Correspondence
8. 8 Open Problems
1. 8.1 Geometry
2. 8.2 Analytic Functions
3. 8.3 Functional Calculus
4. 8.4 Quantum Mechanics
## Chapter Erlangen Programme at Large:
An Overview
Vladimir V. Kisil
_Dedicated to Prof. Hans G. Feichtinger on the occasion of his 60th birthday_
A mathematical idea should not be petrified in a formalised axiomatic
settings, but should be considered instead as flowing as a river.
Sylvester (1878)
### 1\. Introduction
The simplest objects with non-commutative (but still associative)
multiplication may be $2\times 2$ matrices with real entries. The subset of
matrices _of determinant one_ has the following properties:
* •
it forms a closed set under multiplication, since $\det(AB)=\det A\cdot\det B$;
* •
the identity matrix is in the set; and
* •
any such matrix has an inverse (since $\det A\neq 0$).
In other words those matrices form a _group_ , the $SL_{2}{}(\mathbb{R}{})$
group [Lang85]—one of the two most important Lie groups in analysis. The other
group is the Heisenberg group [Howe80a]. By contrast the $ax+b$ group, which
is often used to build wavelets, is only a subgroup of
$SL_{2}{}(\mathbb{R}{})$, see the numerator in (1.1).
The simplest non-linear transforms of the real line— _linear-fractional_ or
_Möbius maps_ —may also be associated with $2\times 2$ matrices
[Beardon05a]*Ch. 13:
(1.1) $g:x\mapsto g\cdot x=\frac{ax+b}{cx+d},\quad\text{ where
}g=\begin{pmatrix}a&b\\\ c&d\end{pmatrix},\ x\in\mathbb{R}{}.$
An enjoyable calculation shows that the composition of two transforms (1.1)
with different matrices $g_{1}$ and $g_{2}$ is again a Möbius transform with
matrix the product $g_{1}g_{2}$. In other words, (1.1) is a (left) action of
$SL_{2}{}(\mathbb{R}{})$.
According to F. Klein’s _Erlangen programme_ (which was influenced by S. Lie)
any geometry is dealing with invariant properties under a certain transitive
group action. For example, we may ask: _What kinds of geometry are related to
the $SL_{2}{}(\mathbb{R}{})$ action (1.1)_?
The Erlangen programme has probably the highest rate of
$\frac{\text{praised}}{\text{actually used}}$ among mathematical theories not
only due to the big numerator but also due to the undeservedly small denominator.
As we shall see below Klein’s approach provides some surprising conclusions
even for such over-studied objects as circles.
#### 1.1. Make a Guess in Three Attempts
It is easy to see that the $SL_{2}{}(\mathbb{R}{})$ action (1.1) makes sense
also as a map of complex numbers $z=x+\mathrm{i}y$, $\mathrm{i}^{2}=-1$
assuming the denominator is non-zero. Moreover, if $y>0$ then $g\cdot z$ has a
positive imaginary part as well, i.e. (1.1) defines a map from the upper half-
plane to itself. Those transformations are isometries of the Lobachevsky half-
plane.
However there is no need to be restricted to the traditional route of complex
numbers only. Moreover, in Subsection 2.1 we will naturally come to the need
to work with all three kinds of hypercomplex numbers. Less-known _double_ and
_dual_ numbers, see [Yaglom79]*Suppl. C, have also the form $z=x+\iota y$ but
different assumptions on the hypercomplex unit $\iota$: $\iota^{2}=0$ or
$\iota^{2}=1$ correspondingly. We will write $\varepsilon$ and $\mathrm{j}$
instead of $\iota$ within dual and double numbers respectively. Although the
arithmetic of dual and double numbers is different from the complex ones, e.g.
they have divisors of zero, we are still able to define their transforms by
(1.1) in most cases.
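A minimal computational sketch of the action (1.1) on all three kinds of hypercomplex numbers (our illustration, not taken from the text; the class and function names are invented for this example) carries the parameter $\sigma=\iota^{2}$ explicitly; division is possible exactly when the quantity $x^{2}-\sigma y^{2}$ of the denominator does not vanish, which is the restriction "in most cases" mentioned above:

```python
from dataclasses import dataclass

@dataclass
class Hyper:
    """Hypercomplex number x + iota*y with iota**2 = sigma (sigma = -1, 0 or 1)."""
    x: float
    y: float
    sigma: int

    def __mul__(self, other: "Hyper") -> "Hyper":
        assert self.sigma == other.sigma
        return Hyper(self.x * other.x + self.sigma * self.y * other.y,
                     self.x * other.y + self.y * other.x, self.sigma)

def mobius(a: float, b: float, c: float, d: float, z: Hyper) -> Hyper:
    """Apply the linear-fractional map (a*z + b)/(c*z + d) of a real matrix [[a, b], [c, d]]."""
    num = Hyper(a * z.x + b, a * z.y, z.sigma)
    den = Hyper(c * z.x + d, c * z.y, z.sigma)
    m = den.x ** 2 - z.sigma * den.y ** 2          # "modulus" of the denominator
    if m == 0:
        raise ZeroDivisionError("denominator is a zero divisor")
    inv = Hyper(den.x / m, -den.y / m, z.sigma)    # den * inv == 1
    return num * inv

# elliptic (complex), parabolic (dual) and hyperbolic (double) cases
for sigma in (-1, 0, 1):
    print(mobius(1, 2, 3, 4, Hyper(0.5, 0.5, sigma)))
```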
Three possible values $-1$, $0$ and $1$ of $\sigma:=\iota^{2}$ will be
referred to here as _elliptic_, _parabolic_ and _hyperbolic_ cases
respectively. We repeatedly meet such a division of various mathematical
objects into three classes. They are named by the historically first
example—the classification of conic sections—however the pattern persistently
reproduces itself in many different areas: equations, quadratic forms,
metrics, manifolds, operators, etc. We will abbreviate this separation as
_EPH-classification_. The common origin of this fundamental division of any
family with one-parameter can be seen from the simple picture of a coordinate
line split by zero into negative and positive half-axes:
(1.2) $\qquad-\;\longleftarrow\;0\;\longrightarrow\;+$
Connections between different objects admitting EPH-classification are not
limited to this common source. There are many deep results linking, for
example, the ellipticity of quadratic forms, metrics and operators, e.g. the
Atiyah-Singer index theorem. On the other hand there are still a lot of white
spots, empty cells, obscure gaps and missing connections between some subjects
as well.
To understand the action (1.1) in all EPH cases we use the Iwasawa
decomposition [Lang85]*§ III.1 of $SL_{2}{}(\mathbb{R}{})=ANK$ into three one-
dimensional subgroups $A$, $N$ and $K$:
(1.3) $\begin{pmatrix}a&b\\\ c&d\end{pmatrix}={\begin{pmatrix}\alpha&0\\\
0&\alpha^{-1}\end{pmatrix}}{\begin{pmatrix}1&\nu\\\
0&1\end{pmatrix}}{\begin{pmatrix}\cos\phi&-\sin\phi\\\
\sin\phi&\cos\phi\end{pmatrix}}.$
Subgroups $A$ and $N$ act in (1.1) irrespective of the value of $\sigma$: $A$
makes a dilation by $\alpha^{2}$, i.e. $z\mapsto\alpha^{2}z$, and $N$ shifts
points to the left by $\nu$, i.e. $z\mapsto z+\nu$.
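A numerical sketch of (1.3) may be helpful (assumptions: the formulas below recover $\alpha$ and $\phi$ from the bottom row of $g$, which is untouched by the upper-triangular factors; the helper `iwasawa` is ad hoc).

```python
# Numerical Iwasawa decomposition g = A * N * K as in (1.3): the bottom row
# of g equals (sin(phi), cos(phi)) / alpha, which determines alpha and phi;
# then N = A^{-1} g K^{-1}.  Illustrative sketch.
import numpy as np

def iwasawa(g):
    c, d = g[1, 0], g[1, 1]
    alpha = 1.0 / np.hypot(c, d)
    phi = np.arctan2(c, d)
    A = np.diag([alpha, 1.0/alpha])
    K = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    N = np.linalg.inv(A) @ g @ np.linalg.inv(K)
    return A, N, K

g = np.array([[2.0, 1.0],
              [3.0, 2.0]])                # det g = 1
A, N, K = iwasawa(g)
print(np.allclose(A @ N @ K, g))          # True: the factorisation holds
print(np.allclose(N[1], [0.0, 1.0]))      # True: N is unit upper-triangular
```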
The corresponding orbits are circles, parabolas and hyperbolas shown by thick
lines. Transverse thin lines are images of the vertical axis under the action
of the subgroup $K$. Grey arrows show the associated derived action.
Figure 1. Action of the subgroup $K$.
By contrast, the action of the third matrix from the subgroup $K$ sharply
depends on $\sigma$, see Fig. 1. In elliptic, parabolic and hyperbolic cases
$K$-orbits are circles, parabolas and (equilateral) hyperbolas
correspondingly. Thin traversal lines in Fig. 1 join points of orbits for the
same values of $\phi$ and grey arrows represent “local velocities”—vector
fields of derived representations. We will describe some highlights of this
geometry in Section 2.
#### 1.2. Erlangen Programme at Large
As we already mentioned, the division of mathematics into areas is only
apparent. Therefore it is unnatural to limit the Erlangen programme only to
“geometry”. We may continue to look for $SL_{2}{}(\mathbb{R}{})$ invariant
objects in other related fields. For example, transform (1.1) generates
unitary representations on certain $L_{2}{}$ spaces, cf. (1.1) and Section 3:
(1.4) $g:f(x)\mapsto\frac{1}{(cx+d)^{m}}f\left(\frac{ax+b}{cx+d}\right).$
For $m=1$, $2$, … the invariant subspaces of $L_{2}{}$ are Hardy and
(weighted) Bergman spaces of complex analytic functions. All main objects of
_complex analysis_ (Cauchy and Bergman integrals, Cauchy-Riemann and Laplace
equations, Taylor series etc.) may be obtained in terms of invariants of the
_discrete series_ representations of $SL_{2}{}(\mathbb{R}{})$ [Kisil02c]*§ 3.
Moreover two other series (_principal_ and _complementary_ [Lang85]) play
similar rôles in the hyperbolic and parabolic cases [Kisil02c, Kisil05a]. This
will be discussed in Sections 4 and 5.
Moving further we may observe that transform (1.1) is defined also for an
element $x$ in any algebra $\mathfrak{A}$ with a unit $\mathbf{1}$ as soon as
$(cx+d\mathbf{1})\in\mathfrak{A}$ has an inverse. If $\mathfrak{A}$ is
equipped with a topology, e.g. is a Banach algebra, then we may study a
_functional calculus_ for element $x$ [Kisil02a] in this way. It is defined as
an _intertwining operator_ between the representation (1.4) in a space of
analytic functions and a similar representation in a left
$\mathfrak{A}$-module. We will consider this in Section 6.
In the spirit of Erlangen programme such functional calculus is still a
geometry, since it is dealing with invariant properties under a group action.
However even for the simplest non-normal operator, e.g. a Jordan block of
length $k$, the obtained space is not a space of points but rather a
space of $k$-th _jets_ [Kisil02a]. Such non-point behaviour is often
attributed to _non-commutative geometry_, and the Erlangen programme provides an
important input on this fashionable topic [Kisil02c].
It is noteworthy that the ideas of F. Klein and S. Lie are spread more widely in physics
than in mathematics: it is a common viewpoint that laws of nature shall be
invariant under certain transformations. Yet systematic use of the Erlangen
approach can bring new results even in this domain as we demonstrate in
Section 7. There are still many directions to extend the present work thus we
will conclude by a list of some open problems in Section 8.
Of course, there is no reason to limit the Erlangen programme to the
$SL_{2}{}(\mathbb{R}{})$ group only, other groups may be more suitable in
different situations. However $SL_{2}{}(\mathbb{R}{})$ still possesses a big
unexplored potential and is a good object to start with.
### 2\. Geometry
We start from the natural domain of the Erlangen Programme—geometry.
Systematic use of this ideology allows one to obtain new results even for very
classical objects like circles.
#### 2.1. Hypercomplex Numbers
Firstly we wish to demonstrate that hypercomplex numbers appear very naturally
from a study of $SL_{2}{}(\mathbb{R}{})$ action on the homogeneous spaces
[Kisil09c]. We begin from the standard definitions.
Let $H$ be a subgroup of a group $G$. Let $X=G/H$ be the corresponding
homogeneous space and $s:X\rightarrow G$ be a smooth section [Kirillov76]*§
13.2, which is a right inverse to the natural projection $p:G\rightarrow X$.
The choice of $s$ is inessential in the sense that by a smooth map
$X\rightarrow X$ we can always reduce one to another. We define a map
$r:G\rightarrow H$ associated to $p$ and $s$ from the identities:
(2.1) $r(g)={(s(x))}^{-1}g,\qquad\text{where }x=p(g)\in X.$
Note that $X$ is a left homogeneous space with the $G$-action defined in terms
of $p$ and $s$ as follows:
(2.2) $g:x\mapsto g\cdot x=p(g*s(x)).$
###### Example 2.1 ([Kisil09c]).
For $G=SL_{2}{}(\mathbb{R}{})$, as well as for other semisimple groups, it is
common to consider only the case of $H$ being the maximal compact subgroup
$K$. However in this paper we admit $H$ to be any one-dimensional subgroup.
Then $X$ is a two-dimensional manifold and for any choice of $H$ we define
[Kisil97c]*Ex. 3.7(a):
(2.3) $s:(u,v)\mapsto\frac{1}{\sqrt{v}}\begin{pmatrix}v&u\\\
0&1\end{pmatrix},\qquad(u,v)\in\mathbb{R}^{2}{},\ v>0.$
Any continuous one-dimensional subgroup $H\subset SL_{2}{}(\mathbb{R}{})$ is
conjugated to one of the following:
(2.4) $\displaystyle K$ $\displaystyle=$
$\displaystyle\left\\{{\begin{pmatrix}\cos t&\sin t\\\ -\sin t&\cos
t\end{pmatrix}=\exp\begin{pmatrix}0&t\\\ -t&0\end{pmatrix}},\
t\in(-\pi,\pi]\right\\},$ (2.5) $\displaystyle N^{\prime}$ $\displaystyle=$
$\displaystyle\left\\{{\begin{pmatrix}1&0\\\
t&1\end{pmatrix}=\exp\begin{pmatrix}0&0\\\ t&0\end{pmatrix},}\
t\in\mathbb{R}{}\right\\},$ (2.6) $\displaystyle A\\!^{\prime}$
$\displaystyle=$ $\displaystyle\left\\{\begin{pmatrix}\cosh t&\sinh t\\\ \sinh
t&\cosh t\end{pmatrix}=\exp\begin{pmatrix}0&t\\\ t&0\end{pmatrix},\
t\in\mathbb{R}{}\right\\}.$
Then [Kisil09c] the action (2.2) of $SL_{2}{}(\mathbb{R}{})$ on
$X=SL_{2}{}(\mathbb{R}{})/H$ coincides with Möbius transformations (1.1) on
complex, dual and double numbers respectively.
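A numerical sketch for the case $H=N'$ may illustrate how dual numbers appear here (assumptions: $v>0$ and $cu+d>0$ so that the square roots are real; the helper names are ad hoc): the action (2.2) built from the section (2.3) agrees with (1.1) computed in dual-number arithmetic, whose $\varepsilon$-part transforms as $v\mapsto v(ad-bc)/(cu+d)^{2}$.

```python
# Example 2.1 with H = N' (2.5), checked numerically: the action (2.2) built
# from p and the section s of (2.3) coincides with the Mobius transform (1.1)
# of the dual number u + eps*v.  Sketch; assumes v > 0 and cu + d > 0.
import numpy as np

def s(u, v):                       # the section (2.3)
    return np.array([[v, u], [0.0, 1.0]]) / np.sqrt(v)

def p(g):                          # projection to G/N': solve s(u,v)^{-1} g in N'
    a, b, c, d = g.ravel()
    return b / d, 1.0 / d**2

def action(g, u, v):               # the action (2.2): g.(u,v) = p(g * s(u,v))
    return p(g @ s(u, v))

def mobius_dual(g, u, v):          # (1.1) in dual-number arithmetic
    a, b, c, d = g.ravel()
    den = c*u + d
    return (a*u + b) / den, v * (a*d - b*c) / den**2

g = np.array([[2.0, 1.0], [3.0, 2.0]])    # det g = 1
u, v = 0.4, 0.7
print(action(g, u, v))                    # (0.5625, 0.0683...)
print(mobius_dual(g, u, v))               # the same pair
```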
#### 2.2. Cycles as Invariant Families
We wish to consider all three hypercomplex systems at the same time; the
following definition is very helpful for this purpose.
###### Definition 2.2.
The common name _cycle_ [Yaglom79] is used to denote circles, parabolas and
hyperbolas (as well as straight lines as their limits) in the respective EPH
case.
Figure 2. $K$-orbits as conic sections: circles are sections by the plane
$EE^{\prime}$; parabolas are sections by $PP^{\prime}$; hyperbolas are
sections by $HH^{\prime}$. Points on the same generator of the cone correspond
to the same value of $\phi$.
It is well known that any cycle is a _conic section_ and an interesting
observation is that corresponding $K$-orbits are in fact sections of the same
two-sided right-angle cone, see Fig. 2. Moreover, each straight line
generating the cone, see Fig. 2(b), is crossing corresponding EPH $K$-orbits
at points with the same value of parameter $\phi$ from (1.3). In other words,
all three types of orbits are generated by the rotations of this generator
along the cone.
$K$-orbits are $K$-invariant in a trivial way. Moreover since actions of both
$A$ and $N$ for any $\sigma$ are extremely “shape-preserving” we find natural
invariant objects of the Möbius map:
###### Theorem 2.3.
The family of all cycles from Definition 2.2 is invariant under the action
(1.1).
According to Erlangen ideology we should now study invariant properties of
cycles.
Fig. 2 suggests that we may get a unified treatment of cycles in all EPH cases
by considering a higher-dimensional space. The standard mathematical
method is to declare the objects under investigation (cycles in our case,
functions in functional analysis, etc.) to be simply points of some bigger
space. This space should be equipped with an appropriate structure to hold
externally the information which was previously an inner property of our objects.
A generic cycle is the set of points $(u,v)\in\mathbb{R}^{2}{}$ defined for
all values of $\sigma$ by the equation
(2.7) $k(u^{2}-\sigma v^{2})-2lu-2nv+m=0.$
This equation (and the corresponding cycle) is defined by a point $(k,l,n,m)$
from a _projective space_ $\mathbb{P}^{3}{}$, since for a scaling factor
$\lambda\neq 0$ the point $(\lambda k,\lambda l,\lambda n,\lambda m)$ defines
an equation equivalent to (2.7). We call $\mathbb{P}^{3}{}$ the _cycle space_
and refer to the initial $\mathbb{R}^{2}{}$ as the _point space_.
In order to get a connection with Möbius action (1.1) we arrange numbers
$(k,l,n,m)$ into the matrix
(2.8) $C_{\breve{\sigma}}^{s}=\begin{pmatrix}l+\mathrm{\breve{\i}}sn&-m\\\
k&-l+\mathrm{\breve{\i}}sn\end{pmatrix},$
with a new hypercomplex unit $\mathrm{\breve{\i}}$ and an additional parameter
$s$ usually equal to $\pm 1$. The value of
$\breve{\sigma}:=\mathrm{\breve{\i}}^{2}$ is $-1$, $0$ or $1$ independently
of the value of $\sigma$. The matrix (2.8) is the cornerstone of an extended
_Fillmore–Springer–Cnops construction_ (FSCc) [Cnops02a].
The significance of FSCc in Erlangen framework is provided by the following
result:
###### Theorem 2.4.
The image $\tilde{C}_{\breve{\sigma}}^{s}$ of a cycle $C_{\breve{\sigma}}^{s}$
under transformation (1.1) with $g\in SL_{2}{}(\mathbb{R}{})$ is given by
similarity of the matrix (2.8):
(2.9) $\tilde{C}_{\breve{\sigma}}^{s}=gC_{\breve{\sigma}}^{s}g^{-1}.$
In other words FSCc (2.8) _intertwines_ Möbius action (1.1) on cycles with
linear map (2.9).
There are several ways to prove (2.9): either by a brute force calculation
(fortunately performed by a CAS) [Kisil05a] or through the related
orthogonality of cycles [Cnops02a], see the end of the next Subsection 2.3.
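Such a brute force check is indeed short; the sketch below (an illustration only, restricted to the elliptic case $\sigma=\breve\sigma=-1$, $s=1$, with the complex unit standing for $\breve{\imath}$; helper names are ad hoc) maps a point of a cycle by (1.1) and tests it against the cycle read off from $gC_{\breve\sigma}^{s}g^{-1}$.

```python
# Numerical check of Theorem 2.4, elliptic case (sigma = breve-sigma = -1,
# s = 1): a point of a cycle is moved by the Mobius map (1.1) and then tested
# against the equation (2.7) of the cycle read off from g C g^{-1}.
import numpy as np

def cycle_matrix(k, l, n, m):              # the FSCc matrix (2.8)
    return np.array([[l + 1j*n, -m], [k, -l + 1j*n]])

def cycle_params(C):                       # read (k, l, n, m) back from (2.8)
    return C[1, 0].real, C[0, 0].real, C[0, 0].imag, -C[0, 1].real

def on_cycle(k, l, n, m, w, tol=1e-12):    # the equation (2.7) with sigma = -1
    u, v = w.real, w.imag
    return abs(k*(u*u + v*v) - 2*l*u - 2*n*v + m) < tol

k, l, n, m = 1.0, 0.0, 1.0, 0.0            # the circle u^2 + v^2 - 2v = 0
w = 1.0 + 1.0j                             # a point on this circle

g = np.array([[0.0, -1.0], [1.0, 0.0]])    # the inversion w -> -1/w
w_new = (g[0, 0]*w + g[0, 1]) / (g[1, 0]*w + g[1, 1])

C_new = g @ cycle_matrix(k, l, n, m) @ np.linalg.inv(g)
print(on_cycle(*cycle_params(C_new), w_new))   # True
```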
The important observation here is that our extended version of FSCc (2.8) uses
a hypercomplex unit $\mathrm{\breve{\i}}$, which is not related to the $\iota$
defining the appearance of cycles on the plane. In other words any EPH type of
geometry in the cycle space $\mathbb{P}^{3}{}$ admits drawing of cycles in the
point space $\mathbb{R}^{2}{}$ as circles, parabolas or hyperbolas. We may
think of points of $\mathbb{P}^{3}{}$ as ideal cycles, while their depictions
on $\mathbb{R}^{2}{}$ are only their shadows on the wall of Plato’s cave.
Figure 3. (a) Different EPH implementations of the same cycles defined by
quadruples of numbers.
(b) Centres and foci of two parabolas with the same focal length.
Fig. 3(a) shows the same cycles drawn in different EPH styles. Points
$c_{e,p,h}=(\frac{l}{k},-\breve{\sigma}\frac{n}{k})$ are their respective
e/p/h-centres. They are related to each other through several identities:
(2.10) $c_{e}=\bar{c}_{h},\quad c_{p}=\frac{1}{2}(c_{e}+c_{h}).$
Fig. 3(b) presents two cycles drawn as parabolas; they have the same focal
length $\frac{n}{2k}$ and thus their e-centres are on the same level. In other
words _concentric_ parabolas are obtained by a vertical shift, not by scaling,
as an analogy with circles or hyperbolas might suggest.
Fig. 3(b) also presents points, called e/p/h-foci:
(2.11) $f_{e,p,h}=\left(\frac{l}{k},-\frac{\det
C_{\breve{\sigma}}^{s}}{2nk}\right),$
which are independent of the sign of $s$. If a cycle is depicted as a parabola
then its h-focus, p-focus and e-focus are, correspondingly, the geometrical focus of the
parabola, its vertex, and the point on the directrix nearest to the vertex.
As we will see, cf. Theorems 2.6 and 2.8, all three centres and three foci are
useful attributes of a cycle even if it is drawn as a circle.
#### 2.3. Invariants: Algebraic and Geometric
We use known algebraic invariants of matrices to build appropriate geometric
invariants of cycles. It is yet another demonstration that any division of
mathematics into subjects is only illusory.
For $2\times 2$ matrices (and thus cycles) there are only two essentially
different invariants under similarity (2.9) (and thus under Möbius action
(1.1)): the _trace_ and the _determinant_. The latter was already used in
(2.11) to define the cycle's foci. However due to the projective nature of the cycle
space $\mathbb{P}^{3}{}$ the absolute values of the trace or determinant are
irrelevant, unless they are zero.
Alternatively we may have a special arrangement for normalisation of
quadruples $(k,l,n,m)$. For example, if $k\neq 0$ we may normalise the
quadruple to $(1,\frac{l}{k},\frac{n}{k},\frac{m}{k})$, which highlights the
cycle's centre. Moreover in this case $\det{C^{s}_{\breve{\sigma}}}$ is equal
to the square of the cycle's radius, cf. Subsection 2.6. Another normalisation
$\det{C^{s}_{\breve{\sigma}}}=1$ is used in [Kirillov06] to get a nice
condition for touching circles. Moreover, the Kirillov normalisation is
preserved by the conjugation (2.9).
We still get an important characterisation even with non-normalised cycles, e.g.,
invariant classes (for different $\breve{\sigma}$) of cycles are defined by
the condition $\det C_{\breve{\sigma}}^{s}=0$. Such a class is parametrised
by only two real numbers and as such is easily attached to a certain point of
$\mathbb{R}^{2}{}$. For example, the cycle $C_{\breve{\sigma}}^{s}$ with $\det
C_{\breve{\sigma}}^{s}=0$, $\breve{\sigma}=-1$ drawn elliptically represents
just the point $(\frac{l}{k},\frac{n}{k})$, i.e. an (elliptic) zero-radius circle.
The same condition with $\breve{\sigma}=1$ in hyperbolic drawing produces a
null-cone originated at point $(\frac{l}{k},\frac{n}{k})$:
$\textstyle(u-\frac{l}{k})^{2}-(v-\frac{n}{k})^{2}=0,$
i.e. a zero-radius cycle in hyperbolic metric.
Figure 4. Different $\sigma$-implementations of the same
$\breve{\sigma}$-zero-radius cycles and corresponding foci.
In general for every notion there are (at least) nine possibilities: three EPH
cases in the cycle space times three EPH realisations in the point space. These
nine cases for “zero radius” cycles are shown in Fig. 4. For example, p-zero-
radius cycles in any implementation touch the real axis.
This “touching” property is a manifestation of the _boundary effect_ in the
upper-half plane geometry. The famous question about hearing a drum's shape has a
sister:
> _Can we see/feel the boundary from inside a domain?_
Both orthogonality relations described below are “boundary aware” as well. It
is not surprising after all since $SL_{2}{}(\mathbb{R}{})$ action on the
upper-half plane was obtained as an extension of its action (1.1) on the
boundary.
According to the categorical viewpoint internal properties of objects are of
minor importance in comparison to their relations with other objects from the
same class. As an illustration we may take the proof of Theorem 2.4 sketched at
the end of the next section. Thus from now on we will look for invariant
relations between two or more cycles.
#### 2.4. Joint Invariants: Orthogonality
The most expected relation between cycles is based on the following Möbius
invariant “inner product”, built from the trace of the product of two cycles as
matrices:
(2.12) $\left\langle
C_{\breve{\sigma}}^{s},\tilde{C}_{\breve{\sigma}}^{s}\right\rangle=\mathop{tr}(C_{\breve{\sigma}}^{s}\tilde{C}_{\breve{\sigma}}^{s})$
By the way, an inner product of this type is used, for example, in the GNS
construction to make a Hilbert space out of a $C^{*}$-algebra. The next standard
move is given by the following definition.
###### Definition 2.5.
Two cycles are called _$\breve{\sigma}$ -orthogonal_ if $\left\langle
C_{\breve{\sigma}}^{s},\tilde{C}_{\breve{\sigma}}^{s}\right\rangle=0$.
The orthogonality relation is preserved under the Möbius transformations, thus
this is an example of a _joint invariant_ of two cycles. For the case of
$\breve{\sigma}\sigma=1$, i.e. when geometries of the cycle and point spaces
are both either elliptic or hyperbolic, such an orthogonality is the standard
one, defined in terms of angles between tangent lines in the intersection
points of two cycles. However in the remaining seven ($=9-2$) cases the
innocent-looking Definition 2.5 brings unexpected relations.
Figure 5. Orthogonality of the first kind in the elliptic point space.
Each picture presents two groups (green and blue) of cycles which are
orthogonal to the red cycle $C^{s}_{\breve{\sigma}}$. Point $b$ belongs to
$C^{s}_{\breve{\sigma}}$ and the family of blue cycles passing through $b$ is
orthogonal to $C^{s}_{\breve{\sigma}}$. They all also intersect in the point
$d$ which is the inverse of $b$ in $C^{s}_{\breve{\sigma}}$. Any orthogonality
is reduced to the usual orthogonality with a new (“ghost”) cycle (shown by the
dashed line), which may or may not coincide with $C^{s}_{\breve{\sigma}}$. For
any point $a$ on the “ghost” cycle the orthogonality is reduced to the local
notion in the terms of tangent lines at the intersection point. Consequently
such a point $a$ is always the inverse of itself.
Elliptic (in the point space) realisations of Definition 2.5, i.e. $\sigma=-1$,
are shown in Fig. 5. The left picture corresponds to the elliptic cycle space,
e.g. $\breve{\sigma}=-1$. The orthogonality between the red circle and any
circle from the blue or green families is given in the usual Euclidean sense.
The central (parabolic in the cycle space) and the right (hyperbolic) pictures
show the non-local nature of the orthogonality. There are analogous pictures in
parabolic and hyperbolic point spaces as well, see [Kisil05a, Kisil12a].
This orthogonality may still be expressed in the traditional sense if we
associate to the red circle the corresponding “ghost” circle, which is shown by
the dashed line in Fig. 5. To describe the ghost cycle we need the _Heaviside
function_ $\chi(\sigma)$:
(2.13) $\chi(t)=\left\\{\begin{array}[]{ll}1,&t\geq 0;\\\
-1,&t<0.\end{array}\right.$
###### Theorem 2.6.
A cycle is $\breve{\sigma}$-orthogonal to cycle $C_{\breve{\sigma}}^{s}$ if it
is orthogonal in the usual sense to the $\sigma$-realisation of “ghost” cycle
$\hat{C}_{\breve{\sigma}}^{s}$, which is defined by the following two
conditions:
1. (i)
$\chi(\sigma)$-centre of $\hat{C}_{\breve{\sigma}}^{s}$ coincides with
$\breve{\sigma}$-centre of $C_{\breve{\sigma}}^{s}$.
2. (ii)
Cycles $\hat{C}_{\breve{\sigma}}^{s}$ and $C^{s}_{\breve{\sigma}}$ have the
same roots, moreover $\det\hat{C}_{\sigma}^{1}=\det
C^{\chi(\breve{\sigma})}_{\sigma}$.
The above connection between various centres of cycles illustrates their
meaningfulness within our approach.
One can easily check the following orthogonality properties of the zero-radius
cycles defined in the previous section:
1. (i)
Due to the identity $\left\langle
C_{\breve{\sigma}}^{s},{C}_{\breve{\sigma}}^{s}\right\rangle=\det{C}_{\breve{\sigma}}^{s}$
zero-radius cycles are self-orthogonal (isotropic) ones.
2. (ii)
A cycle ${C^{s}_{\breve{\sigma}}}$ is $\sigma$-orthogonal to a zero-radius
cycle $Z^{s}_{\breve{\sigma}}$ if and only if ${C^{s}_{\breve{\sigma}}}$
passes through the $\sigma$-centre of $Z^{s}_{\breve{\sigma}}$.
As we will see, in the parabolic case there is a more suitable notion of an
infinitesimal cycle which can be used instead of zero-radius ones.
#### 2.5. Higher Order Joint Invariants: f-Orthogonality
With the appetite already whetted one may wish to build more joint invariants. Indeed
for any polynomial $p(x_{1},x_{2},\ldots,x_{n})$ of several non-commuting
variables one may define an invariant joint disposition of $n$ cycles
${}^{j}\\!{C^{s}_{\breve{\sigma}}}$ by the condition:
$\mathop{tr}p({}^{1}\\!{C^{s}_{\breve{\sigma}}},{}^{2}\\!{C^{s}_{\breve{\sigma}}},\ldots,{}^{n}\\!{C^{s}_{\breve{\sigma}}})=0.$
However it is preferable to keep some geometrical meaning of constructed
notions.
An interesting observation is that in the matrix similarity of cycles (2.9)
one may replace element $g\in SL_{2}{}(\mathbb{R}{})$ by an arbitrary matrix
corresponding to another cycle. More precisely the product
${C^{s}_{\breve{\sigma}}}{\tilde{C}^{s}_{\breve{\sigma}}}{C^{s}_{\breve{\sigma}}}$
is again the matrix of the form (2.8) and thus may be associated to a cycle.
This cycle may be considered as the reflection of
${\tilde{C}^{s}_{\breve{\sigma}}}$ in ${C^{s}_{\breve{\sigma}}}$.
###### Definition 2.7.
A cycle ${C^{s}_{\breve{\sigma}}}$ is f-orthogonal (focal orthogonal) _to_ a
cycle ${\tilde{C}^{s}_{\breve{\sigma}}}$ if the reflection of
${\tilde{C}^{s}_{\breve{\sigma}}}$ in ${C^{s}_{\breve{\sigma}}}$ is orthogonal
(in the sense of Definition 2.5) to the real line. Analytically this is
defined by:
(2.14)
$\mathop{tr}({C^{s}_{\breve{\sigma}}}{\tilde{C}^{s}_{\breve{\sigma}}}{C^{s}_{\breve{\sigma}}}R^{s}_{\breve{\sigma}})=0.$
Due to the invariance of all components in the above definition, f-orthogonality is
a Möbius invariant condition. Clearly this is not a symmetric relation: if
${C^{s}_{\breve{\sigma}}}$ is f-orthogonal to
${\tilde{C}^{s}_{\breve{\sigma}}}$ then ${\tilde{C}^{s}_{\breve{\sigma}}}$ is
not necessarily f-orthogonal to ${C^{s}_{\breve{\sigma}}}$.
Figure 6. Focal orthogonality for circles. To highlight both similarities and
distinctions with the ordinary orthogonality we use the same notations as
in Fig. 5.
Fig. 6 illustrates f-orthogonality in the elliptic point space. By contrast
with Fig. 5 it is not a local notion at the intersection points of cycles for
all $\breve{\sigma}$. However it may be again clarified in terms of the
appropriate s-ghost cycle, cf. Theorem 2.6.
###### Theorem 2.8.
A cycle is f-orthogonal to a cycle $C^{s}_{\breve{\sigma}}$ if it is orthogonal
in the traditional sense to its _f-ghost cycle_
${\tilde{C}^{\breve{\sigma}}_{\breve{\sigma}}}={C^{\chi(\sigma)}_{\breve{\sigma}}}R^{\breve{\sigma}}_{\breve{\sigma}}{C^{\chi(\sigma)}_{\breve{\sigma}}}$,
which is the reflection of the real line in
${C^{\chi(\sigma)}_{\breve{\sigma}}}$ and $\chi$ is the _Heaviside function_
(2.13). Moreover
1. (i)
$\chi(\sigma)$-Centre of ${\tilde{C}^{\breve{\sigma}}_{\breve{\sigma}}}$
coincides with the $\breve{\sigma}$-focus of ${C^{s}_{\breve{\sigma}}}$,
consequently all lines f-orthogonal to ${C^{s}_{\breve{\sigma}}}$ pass through
the respective focus.
2. (ii)
Cycles ${C^{s}_{\breve{\sigma}}}$ and
${\tilde{C}^{\breve{\sigma}}_{\breve{\sigma}}}$ have the same roots.
Note the above intriguing interplay between the cycle's centres and foci. Although
f-orthogonality may look exotic, it will naturally appear again at the end of the
next Section.
Of course, it is possible to define other interesting higher-order joint
invariants of two or even more cycles.
#### 2.6. Distance, Length and Perpendicularity
Geo _metry_ in the plain meaning of this word deals with _distances_ and
_lengths_. Can we obtain them from cycles?
Figure 7. (a) The square of the parabolic diameter is the square of the
distance between roots if they are real ($z_{1}$ and $z_{2}$), otherwise the
negative square of the distance between the adjoint roots ($z_{3}$ and
$z_{4}$).
(b) Distance as extremum of diameters in elliptic ($z_{1}$ and $z_{2}$) and
parabolic ($z_{3}$ and $z_{4}$) cases.
We mentioned already that for circles normalised by the condition $k=1$ the
value
$\det{C^{s}_{\breve{\sigma}}}=\left\langle{C^{s}_{\breve{\sigma}}},{C^{s}_{\breve{\sigma}}}\right\rangle$
produces the square of the traditional circle radius. Thus we may keep it as
the definition of the $\breve{\sigma}$-_radius_ for any cycle. But then we
need to accept that in the parabolic case the radius is the (Euclidean)
distance between (real) roots of the parabola, see Fig. 7(a).
Having radii of circles already defined we may use them for other measurements
in several different ways. For example, the following variational definition
may be used:
###### Definition 2.9.
The _distance_ between two points is the extremum of diameters of all cycles
passing through both points, see Fig. 7(b).
If $\breve{\sigma}=\sigma$ this definition gives in all EPH cases the
following expression for a distance $d_{e,p,h}(u,v)$ between the endpoints of any
vector $w=u+\iota v$:
(2.15) $d_{e,p,h}(u,v)^{2}=(u+\iota v)(u-\iota v)=u^{2}-\sigma v^{2}.$
The parabolic distance $d_{p}^{2}=u^{2}$, see Fig. 7(b), algebraically sits
between $d_{e}$ and $d_{h}$ according to the general principle (1.2) and is
widely accepted [Yaglom79]. However one may be unsatisfied by its degeneracy.
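For instance, for the vector $w=3+\iota 4$ one finds $d_{e}^{2}=25$, $d_{p}^{2}=9$ and $d_{h}^{2}=-7$; note that the hyperbolic square of the distance may be negative.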
An alternative measurement is motivated by the fact that a circle is the set
of points equidistant from its centre. However the choice of “centre” is now
rich: it may be any point from the three centres (2.10) or the three foci (2.11).
###### Definition 2.10.
The _length_ of a directed interval $\overrightarrow{AB}$ is the radius of the
cycle with its _centre_ (denoted by $l_{c}(\overrightarrow{AB})$) or _focus_
(denoted by $l_{f}(\overrightarrow{AB})$) at the point $A$ which passes
through $B$.
This definition is less common and has some unusual properties like non-
symmetry: $l_{f}(\overrightarrow{AB})\neq l_{f}(\overrightarrow{BA})$. However
it comfortably fits the Erlangen programme due to its
$SL_{2}{}(\mathbb{R}{})$-_conformal invariance_ :
###### Theorem 2.11 ([Kisil05a]).
Let $l$ denote either the EPH distances (2.15) or any length from Definition
2.10. Then for fixed $y$, $y^{\prime}\in\mathbb{R}^{\sigma}{}$ the limit:
$\lim_{t\rightarrow 0}\frac{l(g\cdot
y,g\cdot(y+ty^{\prime}))}{l(y,y+ty^{\prime})},\qquad\text{ where }g\in
SL_{2}{}(\mathbb{R}{}),$
exists and its value depends only on $y$ and $g$ and is independent of
$y^{\prime}$.
Figure 8. Perpendicular as the shortest route to a line.
We may return from distances to angles recalling that in the Euclidean space a
perpendicular provides the shortest route from a point to a line, see Fig. 8.
###### Definition 2.12.
Let $l$ be a length or distance. We say that a vector $\overrightarrow{AB}$ is
_$l$ -perpendicular_ to a vector $\overrightarrow{CD}$ if function
$l(\overrightarrow{AB}+\varepsilon\overrightarrow{CD})$ of a variable
$\varepsilon$ has a local extremum at $\varepsilon=0$.
A pleasant surprise is that $l_{f}$-perpendicularity obtained through the
length from the focus (Definition 2.10) coincides with the f-orthogonality
already defined in Subsection 2.5, as follows from Thm. 2.8(i). It is also possible
[Kisil08a] to make $SL_{2}{}(\mathbb{R}{})$ action isometric in all three
cases.
Further details of the refreshing geometry of Möbius transformations can be
found in the paper [Kisil05a] and the book [Kisil12a].
All these studies are waiting to be generalised to higher dimensions; quaternions
and Clifford algebras provide a suitable language for this [Kisil05a,
JParker07a].
### 3\. Linear Representations
It is natural to start a consideration of symmetries in analysis from _linear
representations_. The previous geometrical actions (1.1) can be naturally
extended to such representations by induction [Kirillov76]*§ 13.2 [Kisil97c]*§
3.1 from a representation of a subgroup $H$. If $H$ is one-dimensional then
its irreducible representation is a character, which is always supposed to be
complex valued. However hypercomplex numbers naturally appeared in the
$SL_{2}{}(\mathbb{R}{})$ action (1.1), see Subsection 2.1 and [Kisil09c]; why
shall we then admit only $\mathrm{i}^{2}=-1$ to deliver a character?
#### 3.1. Hypercomplex Characters
As we already mentioned the typical discussion of induced representations of
$SL_{2}{}(\mathbb{R}{})$ is centred around the case $H=K$ and a complex valued
character of $K$. A linear transformation defined by a matrix (2.4) in $K$ is
a rotation of $\mathbb{R}^{2}{}$ by the angle $t$. After identification
$\mathbb{R}^{2}{}=\mathbb{C}{}$ this action is given by the multiplication
$e^{\mathrm{i}t}$, with $\mathrm{i}^{2}=-1$. The rotation preserves the
(elliptic) metric given by:
(3.1) $x^{2}+y^{2}=(x+\mathrm{i}y)(x-\mathrm{i}y).$
Therefore the orbits of rotations are circles, and any line passing through the origin (a
“spoke”) is rotated by the angle $t$, see Fig. 9.
Dual and double numbers produce the most straightforward adaptation of this
result.
Figure 9. Rotations of algebraic wheels, i.e. the multiplication by $e^{\iota
t}$: elliptic ($E$), trivial parabolic ($P_{0}$) and hyperbolic ($H$). All
blue orbits are defined by the identity $x^{2}-\iota^{2}y^{2}=r^{2}$. Thin
“spokes” (straight lines from the origin to a point on the orbit) are
“rotated” from the real axis. These are symplectic linear transformations of the
classical phase space as well.
###### Proposition 3.1.
The following table shows correspondences between the three types of algebraic
characters:
Elliptic | Parabolic | Hyperbolic
---|---|---
$\mathrm{i}^{2}=-1$ | $\varepsilon^{2}=0$ | $\mathrm{j}^{2}=1$
$w=x+\mathrm{i}y$ | $w=x+\varepsilon y$ | $w=x+\mathrm{j}y$
$\bar{w}=x-\mathrm{i}y$ | $\bar{w}=x-\varepsilon y$ | $\bar{w}=x-\mathrm{j}y$
$e^{\mathrm{i}t}=\cos t+\mathrm{i}\sin t$ | $e^{\varepsilon t}=1+\varepsilon t$ | $e^{\mathrm{j}t}=\cosh t+\mathrm{j}\sinh t$
$\left|w\right|_{e}^{2}=w\bar{w}=x^{2}+y^{2}$ | $\left|w\right|_{p}^{2}=w\bar{w}=x^{2}$ | $\left|w\right|_{h}^{2}=w\bar{w}=x^{2}-y^{2}$
$\arg w=\tan^{-1}\frac{y}{x}$ | $\arg w=\frac{y}{x}$ | $\arg w=\tanh^{-1}\frac{y}{x}$
unit circle $\left|w\right|_{e}^{2}=1$ | “unit” strip $x=\pm 1$ | unit hyperbola $\left|w\right|_{h}^{2}=1$
Geometrical action of multiplication by $e^{\iota t}$ is drawn in Fig. 9 for
all three cases.
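A quick numeric illustration of the table above (a sketch with ad hoc helper names, reusing the elementary formulas of Proposition 3.1): multiplication by $e^{\iota t}$ preserves the respective modulus $x^{2}-\sigma y^{2}$ in each of the three cases.

```python
# Multiplication by e^{iota t} in the elliptic, parabolic and hyperbolic cases
# and the invariance of the respective modulus |w|^2 = x^2 - sigma*y^2.
import math

def rotate(x, y, t, sigma):
    # (x + iota y) * e^{iota t} = (x*c + sigma*y*s) + iota*(x*s + y*c)
    if sigma == -1:
        c, s = math.cos(t), math.sin(t)       # elliptic
    elif sigma == 0:
        c, s = 1.0, t                         # parabolic: 1 + eps*t
    else:
        c, s = math.cosh(t), math.sinh(t)     # hyperbolic
    return x*c + sigma*y*s, x*s + y*c

def modulus_sq(x, y, sigma):
    return x*x - sigma*y*y

x, y, t = 1.2, 0.5, 0.7
for sigma in (-1, 0, 1):
    xr, yr = rotate(x, y, t, sigma)
    print(sigma, abs(modulus_sq(x, y, sigma) - modulus_sq(xr, yr, sigma)) < 1e-12)
```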
Explicitly, parabolic rotations associated with $\mathrm{e}^{\varepsilon t}$
act on dual numbers as follows:
(3.2) $\mathrm{e}^{\varepsilon x}:a+\varepsilon b\mapsto a+\varepsilon(ax+b).$
This links the parabolic case with the Galilean group [Yaglom79] of symmetries
of the classic mechanics, with the absolute time disconnected from space.
The obvious algebraic similarity and the connection to classical kinematics is
a widespread justification for the following viewpoint on the parabolic case,
cf. [HerranzOrtegaSantander99a, Yaglom79]:
* •
the parabolic trigonometric functions are trivial:
(3.3) $\mathop{\operator@font cosp}\nolimits t=\pm
1,\qquad\mathop{\operator@font sinp}\nolimits t=t;$
* •
the parabolic distance is independent of $y$ if $x\neq 0$:
(3.4) $x^{2}=(x+\varepsilon y)(x-\varepsilon y);$
* •
the polar decomposition of a dual number is defined by [Yaglom79]*App. C(30’):
(3.5) $u+\varepsilon v=u(1+\varepsilon\frac{v}{u}),\quad\text{ thus
}\quad\left|u+\varepsilon v\right|=u,\quad\arg(u+\varepsilon v)=\frac{v}{u};$
* •
the parabolic wheel looks rectangular, see Fig. 9.
Those algebraic analogies are quite explicit and widely accepted as an
ultimate source for parabolic trigonometry [LavrentShabat77,
HerranzOrtegaSantander99a, Yaglom79]. Moreover, those three rotations are all
non-isomorphic symplectic linear transformations of the phase space, which
makes them useful in the context of classical and quantum mechanics [Kisil10a,
Kisil11a], see Section 7. There exist also alternative characters [Kisil09a]
based on Möbius transformations with geometric motivation and connections to
equations of mathematical physics.
#### 3.2. Induced Representations
Let $G$ be a group, $H$ be its closed subgroup with the corresponding
homogeneous space $X=G/H$ with an invariant measure. We are using notations
and definitions of maps $p:G\rightarrow X$, $s:X\rightarrow G$ and
$r:G\rightarrow H$ from Subsection 2.1. Let $\chi$ be an irreducible
representation of $H$ in a vector space $V$, then it induces a representation
of $G$ in the sense of Mackey [Kirillov76]*§ 13.2. This representation has the
realisation ${\rho_{\chi}}$ in the space $L_{2}{}(X)$ of $V$-valued functions
by the formula [Kirillov76]*§ 13.2.(7)–(9):
(3.6) $[{\rho_{\chi}}(g)f](x)=\chi(r(g^{-1}*s(x)))f(g^{-1}\cdot x),$
where $g\in G$, $x\in X$, $h\in H$ and $r:G\rightarrow H$, $s:X\rightarrow G$
are maps defined above; $*$ denotes multiplication on $G$ and $\cdot$ denotes
the action (2.2) of $G$ on $X$.
Consider this scheme for representations of $SL_{2}{}(\mathbb{R}{})$ induced
from characters of its one-dimensional subgroups. We can notice that only the
subgroup $K$ requires a complex valued character, due to its
compactness. For subgroups $N^{\prime}$ and $A\\!^{\prime}$ we can consider
characters of all three types—elliptic, parabolic and hyperbolic. Therefore we
have seven essentially different induced representations. We will write
explicitly only three of them here.
###### Example 3.2.
Consider the subgroup $H=K$; due to its compactness we are limited to complex
valued characters of $K$ only. All of them are of the form $\chi_{k}$:
(3.7) $\chi_{k}\begin{pmatrix}\cos t&\sin t\\\ -\sin t&\cos
t\end{pmatrix}=e^{-\mathrm{i}kt},\qquad\text{ where }k\in\mathbb{Z}{}.$
Using the explicit form (2.3) of the map $s$ we find the map $r$ given in
(2.1) as follows:
$r\begin{pmatrix}a&b\\\
c&d\end{pmatrix}=\frac{1}{\sqrt{c^{2}+d^{2}}}\begin{pmatrix}d&-c\\\
c&d\end{pmatrix}\in K.$
Therefore:
$r(g^{-1}*s(u,v))=\frac{1}{\sqrt{(cu+d)^{2}+(cv)^{2}}}\begin{pmatrix}cu+d&-cv\\\
cv&cu+d\end{pmatrix},\quad\text{where }g^{-1}=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}.$
Substituting this into (3.7) and combining with the Möbius transformation of
the domain (1.1) we get the explicit realisation ${\rho_{k}}{}$ of the induced
representation (3.6):
(3.8)
${\rho_{k}}{}(g)f(w)=\frac{\left|cw+d\right|^{k}}{(cw+d)^{k}}f\left(\frac{aw+b}{cw+d}\right),\quad\text{
where }g^{-1}=\begin{pmatrix}a&b\\\ c&d\end{pmatrix},\ w=u+\mathrm{i}v.$
This representation acts on complex valued functions in the upper half-plane
$\mathbb{R}^{2}_{+}{}=SL_{2}{}(\mathbb{R}{})/K$ and belongs to the discrete
series [Lang85]*§ IX.2. It is common to get rid of the factor
$\left|cw+d\right|^{k}$ from that expression in order to keep analyticity, and
we will follow this practice for convenience as well.
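The displayed formula for $r$ can be confirmed numerically (a sketch with ad hoc helpers; it uses that for $H=K$ the point $p(g)$ is the image $g\cdot\mathrm{i}$ of the base point under (1.1)): the matrix $s(p(g))^{-1}g$ lands in $K$ and coincides with the expression above.

```python
# Numerical check for Example 3.2: with H = K one has p(g) = g.i (the Mobius
# action on the base point i), and r(g) = s(p(g))^{-1} g equals the rotation
# (d, -c; c, d) / sqrt(c^2 + d^2).  Illustrative sketch.
import numpy as np

def s(u, v):                                 # the section (2.3)
    return np.array([[v, u], [0.0, 1.0]]) / np.sqrt(v)

def p(g):                                    # the base point i moved by (1.1)
    a, b, c, d = g.ravel()
    w = (a*1j + b) / (c*1j + d)
    return w.real, w.imag

def r(g):                                    # r(g) = s(p(g))^{-1} g, cf. (2.1)
    u, v = p(g)
    return np.linalg.inv(s(u, v)) @ g

g = np.array([[2.0, 1.0], [3.0, 2.0]])       # det g = 1
a, b, c, d = g.ravel()
expected = np.array([[d, -c], [c, d]]) / np.hypot(c, d)

print(np.allclose(r(g), expected))           # True
print(np.allclose(s(*p(g)) @ r(g), g))       # True: g = s(p(g)) * r(g)
```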
###### Example 3.3.
In the case of the subgroup $N^{\prime}$ there is a wider choice of possible
characters.
1. (i)
Traditionally only complex valued characters of the subgroup $N^{\prime}$ are
considered; they are:
(3.9) $\chi_{\tau}^{\mathbb{C}{}}\begin{pmatrix}1&0\\\
t&1\end{pmatrix}=e^{\mathrm{i}\tau t},\qquad\text{ where
}\tau\in\mathbb{R}{}.$
A direct calculation shows that:
$r\begin{pmatrix}a&b\\\ c&d\end{pmatrix}=\begin{pmatrix}1&0\\\
\frac{c}{d}&1\end{pmatrix}\in N^{\prime}.$
Thus:
(3.10) $r(g^{-1}*s(u,v))=\begin{pmatrix}1&0\\\
\frac{cv}{d+cu}&1\end{pmatrix},\quad\text{ where }g^{-1}=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}.$
Substituting this value into the character (3.9) and combining it with the Möbius
transformation (1.1), we obtain the next realisation of (3.6):
${\rho^{\mathbb{C}{}}_{\tau}}(g)f(w)=\exp\left(\mathrm{i}\frac{\tau
cv}{cu+d}\right)f\left(\frac{aw+b}{cw+d}\right),\quad\text{where
}w=u+\varepsilon v,\ g^{-1}=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}.$
The representation acts on the space of _complex_ valued functions on the
upper half-plane $\mathbb{R}^{2}_{+}{}$, which is a subset of _dual_ numbers
as a homogeneous space $SL_{2}{}(\mathbb{R}{})/N^{\prime}$. The mixture of
complex and dual numbers in the same expression is confusing.
2. (ii)
The parabolic character $\chi_{\tau}$ with the algebraic flavour is provided
by multiplication (3.2) with the dual number:
$\chi_{\tau}\begin{pmatrix}1&0\\\ t&1\end{pmatrix}=e^{\varepsilon\tau
t}=1+\varepsilon\tau t,\qquad\text{ where }\tau\in\mathbb{R}{}.$
If we substitute the value (3.10) into this character, then we receive the
representation:
${\rho_{\tau}}{}(g)f(w)=\left(1+\varepsilon\frac{\tau
cv}{cu+d}\right)f\left(\frac{aw+b}{cw+d}\right),$
where $w$, $\tau$ and $g$ are as above. The representation is defined on the
space of dual number valued functions on the upper half-plane of dual
numbers. This expression contains only dual numbers with their usual algebraic
operations, thus it is linear with respect to them.
All characters in the previous Example are unitary. Then the general scheme of
induced representations [Kirillov76]*§ 13.2 implies their unitarity in proper
senses.
###### Theorem 3.4 ([Kisil09c]).
Both representations of $SL_{2}{}(\mathbb{R}{})$ from Example 3.3 are unitary
on the space of functions on the upper half-plane $\mathbb{R}^{2}_{+}{}$ of
dual numbers with the inner product:
(3.11) $\left\langle
f_{1},f_{2}\right\rangle=\int_{\mathbb{R}^{2}_{+}{}}f_{1}(w)\bar{f}_{2}(w)\,\frac{du\,dv}{v^{2}},\qquad\text{
where }w=u+\varepsilon v,$
and we use the conjugation and multiplication of functions’ values in algebras
of complex and dual numbers for representations ${\rho^{\mathbb{C}{}}_{\tau}}$
and ${\rho_{\tau}}$ respectively.
The inner product (3.11) is positive definite for the representation
${\rho^{\mathbb{C}{}}_{\tau}}$ but is not for the other. The respective spaces
are parabolic cousins of the _Krein spaces_ [ArovDym08], which are hyperbolic
in our sense.
#### 3.3. Similarity and Correspondence: Ladder Operators
From the above observation we can deduce the following empirical principle,
which has a heuristic value.
###### Principle 3.5 (Similarity and correspondence).
1. (i)
Subgroups $K$, $N^{\prime}$ and $A\\!^{\prime}$ play a similar rôle in the
structure of the group $SL_{2}{}(\mathbb{R}{})$ and its representations.
2. (ii)
The subgroups shall be swapped simultaneously with the respective replacement
of hypercomplex unit $\iota$.
The first part of the Principle (similarity) does not look sound alone. It is
enough to mention that the subgroup $K$ is compact (and thus its spectrum is
discrete) while the two other subgroups are not. However in conjunction with the
second part (correspondence) the Principle has received the following
confirmations so far, see [Kisil09c] for details:
* •
The action of $SL_{2}{}(\mathbb{R}{})$ on the homogeneous space
$SL_{2}{}(\mathbb{R}{})/H$ for $H=K$, $N^{\prime}$ or $A\\!^{\prime}$ is given
by linear-fractional transformations of complex, dual or double numbers
respectively.
* •
Subgroups $K$, $N^{\prime}$ or $A\\!^{\prime}$ are isomorphic to the groups of
unitary rotations of respective unit cycles in complex, dual or double
numbers.
* •
Representations induced from the subgroups $K$, $N^{\prime}$ or $A\\!^{\prime}$
are unitary if the inner products are taken in spaces of functions with values in complex,
dual or double numbers respectively.
###### Remark 3.6.
The principle of similarity and correspondence resembles _supersymmetry_
between bosons and fermions in particle physics, but we have similarity
between three different types of entities in our case.
Let us give another illustration of the Principle. Consider the Lie algebra
$\mathfrak{sl}_{2}$ of the group $SL_{2}{}(\mathbb{R}{})$. Pick the
following basis in $\mathfrak{sl}_{2}$ [MTaylor86]*§ 8.1:
(3.12) $A=\frac{1}{2}\begin{pmatrix}-1&0\\\ 0&1\end{pmatrix},\quad
B=\frac{1}{2}\ \begin{pmatrix}0&1\\\ 1&0\end{pmatrix},\quad
Z=\begin{pmatrix}0&1\\\ -1&0\end{pmatrix}.$
The commutation relations between the elements are:
(3.13) $[Z,A]=2B,\qquad[Z,B]=-2A,\qquad[A,B]=-\frac{1}{2}Z.$
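These relations are immediate to confirm in the defining $2\times2$ representation, e.g. by the following sympy sketch (illustrative only).

```python
# Check of the commutation relations (3.13) for the basis (3.12) of sl_2.
from sympy import Matrix, Rational

A = Rational(1, 2) * Matrix([[-1, 0], [0, 1]])
B = Rational(1, 2) * Matrix([[0, 1], [1, 0]])
Z = Matrix([[0, 1], [-1, 0]])

def comm(X, Y):
    return X*Y - Y*X

print(comm(Z, A) == 2*B)                  # True
print(comm(Z, B) == -2*A)                 # True
print(comm(A, B) == -Rational(1, 2)*Z)    # True
```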
Let ${\rho}$ be a representation of the group $SL_{2}{}(\mathbb{R}{})$ in a
space $V$. Consider the derived representation $d{\rho}$ of the Lie algebra
$\mathfrak{sl}_{2}$ [Lang85]*§ VI.1 and denote $\tilde{X}=d{\rho}(X)$ for
$X\in\mathfrak{sl}_{2}$. To see the structure of the representation ${\rho}$
we can decompose the space $V$ into eigenspaces of the operator $\tilde{X}$
for some $X\in\mathfrak{sl}_{2}$, cf. the Taylor series in Section 5.4.
##### 3.3.1. Elliptic Ladder Operators
It should not be surprising that we are going to consider three cases. Let
$X=Z$ be a generator of the subgroup $K$ (2.4). Since this is a compact
subgroup the corresponding eigenspaces $\tilde{Z}v_{k}=\mathrm{i}kv_{k}$ are
parametrised by an integer $k\in\mathbb{Z}{}$. The _raising/lowering_ or
_ladder operators_ $L^{\\!\pm}$ [Lang85]*§ VI.2 [MTaylor86]*§ 8.2 are defined
by the following commutation relations:
(3.14) $[\tilde{Z},L^{\\!\pm}]=\lambda_{\pm}L^{\\!\pm}.$
In other words $L^{\\!\pm}$ are eigenvectors for operators
$\mathop{\operator@font ad}\nolimits Z$ of adjoint representation of
$\mathfrak{sl}_{2}$ [Lang85]*§ VI.2.
###### Remark 3.7.
The existence of such ladder operators follows from the general properties of
Lie algebras if the element $X\in\mathfrak{sl}_{2}$ belongs to a _Cartan
subalgebra_. This is the case for vectors $Z$ and $B$, which are the only two
non-isomorphic types of Cartan subalgebras in $\mathfrak{sl}_{2}$. However the
third case considered in this paper, the parabolic vector $B+Z/2$, does not
belong to a Cartan subalgebra, yet a sort of ladder operators is still
possible with dual number coefficients. Moreover, for the hyperbolic vector
$B$, besides the standard ladder operators an additional pair with double
number coefficients will also be described.
From the commutators (3.14) we deduce that $L^{\\!+}v_{k}$ are eigenvectors of
$\tilde{Z}$ as well:
(3.15) $\displaystyle\tilde{Z}(L^{\\!+}v_{k})$ $\displaystyle=$
$\displaystyle(L^{\\!+}\tilde{Z}+\lambda_{+}L^{\\!+})v_{k}=L^{\\!+}(\tilde{Z}v_{k})+\lambda_{+}L^{\\!+}v_{k}=\mathrm{i}kL^{\\!+}v_{k}+\lambda_{+}L^{\\!+}v_{k}$
$\displaystyle=$ $\displaystyle(\mathrm{i}k+\lambda_{+})L^{\\!+}v_{k}.$
Thus action of ladder operators on respective eigenspaces can be visualised by
the diagram:
(3.16) $\ldots\;\underset{L^{\\!-}}{\overset{L^{\\!+}}{\rightleftarrows}}\;V_{\mathrm{i}k-\lambda}\;\underset{L^{\\!-}}{\overset{L^{\\!+}}{\rightleftarrows}}\;V_{\mathrm{i}k}\;\underset{L^{\\!-}}{\overset{L^{\\!+}}{\rightleftarrows}}\;V_{\mathrm{i}k+\lambda}\;\underset{L^{\\!-}}{\overset{L^{\\!+}}{\rightleftarrows}}\;\ldots$
Assuming $L^{\\!+}=a\tilde{A}+b\tilde{B}+c\tilde{Z}$, from the relations (3.13)
and the defining condition (3.14) we obtain linear equations with unknowns $a$, $b$
and $c$:
$c=0,\qquad 2a=\lambda_{+}b,\qquad-2b=\lambda_{+}a.$
The equations have a solution if and only if $\lambda_{+}^{2}+4=0$, and the
raising/lowering operators are $L^{\\!\pm}=\pm\mathrm{i}\tilde{A}+\tilde{B}$.
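In the defining $2\times2$ representation this conclusion, as well as the hyperbolic one of the next subsection, can be confirmed directly (a sketch; the complex unit stands for the elliptic $\mathrm{i}$):

```python
# Ladder operators in the defining representation of sl_2: the elliptic pair
# L = +-i*A + B satisfies [Z, L] = +-2i*L, and the hyperbolic pair
# L_h = +-2*A + Z satisfies [2B, L_h] = +-2*L_h.  Illustrative sketch.
from sympy import Matrix, Rational, I, zeros

A = Rational(1, 2) * Matrix([[-1, 0], [0, 1]])
B = Rational(1, 2) * Matrix([[0, 1], [1, 0]])
Z = Matrix([[0, 1], [-1, 0]])

def comm(X, Y):
    return X*Y - Y*X

for sign in (+1, -1):
    L = sign*I*A + B                          # elliptic ladder operator
    print(comm(Z, L) - sign*2*I*L == zeros(2, 2))     # True

    Lh = sign*2*A + Z                         # hyperbolic ladder operator
    print(comm(2*B, Lh) == sign*2*Lh)                 # True
```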
##### 3.3.2. Hyperbolic Ladder Operators
Consider the case $X=2B$ of a generator of the subgroup $A\\!^{\prime}$ (2.6).
The subgroup is not compact and eigenvalues of the operator $\tilde{B}$ can be
arbitrary, however raising/lowering operators are still important
[HoweTan92]*§ II.1 [Mazorchuk09a]*§ 1.1. We again seek a solution in the form
$L_{h}^{\\!+}=a\tilde{A}+b\tilde{B}+c\tilde{Z}$ for the commutator
$[2\tilde{B},L_{h}^{\\!+}]=\lambda L_{h}^{\\!+}$. We will get the system:
$4c=\lambda a,\qquad b=0,\qquad{a}=\lambda c.$
A solution exists if and only if $\lambda^{2}=4$. There are obvious values
$\lambda=\pm 2$ with the ladder operators $L_{h}^{\\!\pm}=\pm
2\tilde{A}+\tilde{Z}$, see [HoweTan92]*§ II.1 [Mazorchuk09a]*§ 1.1. Each
indecomposable $\mathfrak{sl}_{2}$-module is formed by a one-dimensional chain
of eigenvalues with a transitive action of ladder operators.
Admitting double numbers we have an extra possibility to satisfy
$\lambda^{2}=4$ with values $\lambda=\pm 2\mathrm{j}$. Then there is an
additional pair of hyperbolic ladder operators $L_{\mathrm{j}}^{\\!\pm}=\pm
2\mathrm{j}\tilde{A}+\tilde{Z}$, which shift eigenvectors in the “orthogonal”
direction to the standard operators $L_{h}^{\\!\pm}$. Therefore an
indecomposable $\mathfrak{sl}_{2}$-module can be parametrised by a two-
dimensional lattice of eigenvalues on the double number plane, see Fig. 10.
Figure 10. The action of hyperbolic ladder operators on a 2D lattice of
eigenspaces. Operators $L_{h}^{\\!\pm}$ move the eigenvalues by $2$, making
shifts in the horizontal direction. Operators $L_{\mathrm{j}}^{\\!\pm}$ change
the eigenvalues by $2\mathrm{j}$, shown as vertical shifts.
##### 3.3.3. Parabolic Ladder Operators
Finally consider the case of a generator $X=-B+Z/2$ of the subgroup
$N^{\prime}$ (2.5). According to the above procedure we get the equations:
$b+2c=\lambda a,\qquad-a=\lambda b,\qquad\frac{a}{2}=\lambda c,$
which can be resolved if and only if $\lambda^{2}=0$. If we restrict ourselves
to the only real (complex) root $\lambda=0$, then the corresponding
operators $L_{p}^{\\!\pm}=-\tilde{B}+\tilde{Z}/2$ will not affect eigenvalues
and thus are useless in the above context. However the dual number roots
$\lambda=\pm\varepsilon t$, $t\in\mathbb{R}{}$ lead to the operators
$L_{\varepsilon}^{\\!\pm}=\pm\varepsilon t\tilde{A}-\tilde{B}+\tilde{Z}/2$.
These operators are suitable for building $\mathfrak{sl}_{2}$-modules with a
one-dimensional chain of eigenvalues.
###### Remark 3.8.
The following rôles of hypercomplex numbers are noteworthy:
* •
the introduction of complex numbers is a necessity for the _existence_ of
ladder operators in the elliptic case;
* •
in the parabolic case we need dual numbers to make ladder operators _useful_ ;
* •
in the hyperbolic case double numbers are required neither for the
existence nor for the usability of ladder operators, but they do provide an
enhancement.
We summarise the above consideration with a focus on the Principle of
similarity and correspondence:
###### Proposition 3.9.
Let a vector $X\in\mathfrak{sl}_{2}$ generate the subgroup $K$, $N^{\prime}$
or $A\\!^{\prime}$, that is $X=Z$, $B-Z/2$, or $B$ respectively. Let $\iota$
be the respective hypercomplex unit.
Then the raising/lowering operators $L^{\\!\pm}$ satisfying the commutation
relations:
$[X,L^{\\!\pm}]=\pm\iota L^{\\!\pm},\qquad[L^{\\!-},L^{\\!+}]=2\iota X$
are:
$L^{\\!\pm}=\pm\iota\tilde{A}+\tilde{Y}.$
Here $Y\in\mathfrak{sl}_{2}$ is a linear combination of $B$ and $Z$ with the
properties:
* •
$Y=[A,X]$.
* •
$X=[A,Y]$.
* •
The Killing form $K(X,Y)$ [Kirillov76]*§ 6.2 vanishes.
Any of the above properties defines the vector $Y\in\mathop{\operator@font
span}\nolimits\\{B,Z\\}$ up to a real constant factor.
The usability of the Principle of similarity and correspondence will be
illustrated by more examples below.
### 4\. Covariant Transform
A general group-theoretical construction [Perelomov86, FeichGroech89a,
Kisil98a, AliAntGaz00, Fuhr05a, ChristensenOlafsson09a, KlaSkag85] of
_wavelets_ (or _coherent states_) starts from an irreducible square integrable
representation—in the proper sense or modulo a subgroup. Then a mother wavelet
is chosen to be _admissible_. This leads to a wavelet transform which is an
isometry to $L_{2}{}$ space with respect to the Haar measure on the group or
(quasi)invariant measure on a homogeneous space.
The importance of the above situation shall not be diminished; however, an
exclusive restriction to such a setup is not necessary, in fact.
classical example from complex analysis: the Hardy space
$H_{2}{}(\mathbb{T}{})$ on the unit circle and Bergman spaces
$B_{2}^{n}{}(\mathbb{D}{})$, $n\geq 2$ in the unit disk produce wavelets
associated with representations $\rho_{1}$ and $\rho_{n}$ of the group
$SL_{2}{}(\mathbb{R}{})$ respectively [Kisil97c]. While representations
$\rho_{n}$, $n\geq 2$ are from square integrable discrete series, the mock
discrete series representation $\rho_{1}$ is not square integrable [Lang85]*§
VI.5 [MTaylor86]*§ 8.4. However it would be natural to treat the Hardy space
in the same framework as Bergman ones. Some more examples will be presented
below.
#### 4.1. Extending Wavelet Transform
To make a sharp but still natural generalisation of wavelets we give the
following definition.
###### Definition 4.1.
[Kisil09d] Let ${\rho}$ be a representation of a group $G$ in a space $V$ and
$F$ be an operator from $V$ to a space $U$. We define a _covariant transform_
$\mathcal{W}$ from $V$ to the space $L{}(G,U)$ of $U$-valued functions on $G$
by the formula:
(4.1) $\mathcal{W}:v\mapsto\hat{v}(g)=F({\rho}(g^{-1})v),\qquad v\in V,\ g\in
G.$
The operator $F$ will be called a _fiducial operator_ in this context.
We borrow the name for operator $F$ from fiducial vectors of Klauder and
Skagerstam [KlaSkag85].
###### Remark 4.2.
We do not require that the fiducial operator $F$ be linear. Sometimes the
positive homogeneity, i.e. $F(tv)=tF(v)$ for $t>0$, alone can already be
sufficient, see Example 4.14.
###### Remark 4.3.
The usefulness of the covariant transform is in inverse proportion to the
dimensionality of the space $U$. The covariant transform encodes properties of
$v$ in a function $\mathcal{W}v$ on $G$. For a low dimensional $U$ this
function can be ultimately investigated by means of harmonic analysis. Thus
$\dim U=1$ (scalar-valued functions) is the ideal case; however, it is
sometimes unattainable, see Example 4.11 below. We may have to use a higher
dimensional $U$ if the given group $G$ is not rich enough.
As we will see below, the covariant transform is a close relative of the wavelet
transform. The name is chosen due to the following common property of both
transformations.
###### Theorem 4.4.
The covariant transform (4.1) intertwines ${\rho}$ and the left regular
representation $\Lambda$ on $L{}(G,U)$:
$\mathcal{W}{\rho}(g)=\Lambda(g)\mathcal{W}.$
Here $\Lambda$ is defined as usual by:
(4.2) $\Lambda(g):f(h)\mapsto f(g^{-1}h).$
###### Proof.
We have a calculation similar to wavelet transform [Kisil98a]*Prop. 2.6. Take
$u={\rho}(g)v$ and calculate its covariant transform:
$[\mathcal{W}({\rho}(g)v)](h)=F({\rho}(h^{-1}){\rho}(g)v)=F({\rho}((g^{-1}h)^{-1})v)=[\mathcal{W}v](g^{-1}h)=[\Lambda(g)\mathcal{W}v](h).$
∎
The next result follows immediately:
###### Corollary 4.5.
The image space $\mathcal{W}(V)$ is invariant under the left shifts on $G$.
###### Remark 4.6.
A further generalisation of the covariant transform can be obtained if we
relax the group structure. Consider, for example, a _cancellative semigroup_
$\mathbb{Z}_{+}{}$ of non-negative integers. It has a linear presentation on
the space of polynomials in a variable $t$ defined by the action
$m:t^{n}\mapsto t^{m+n}$ on the monomials. Application of a linear functional
$l$, e.g. defined by an integration over a measure on the real line, produces
_umbral calculus_ $l(t^{n})=c_{n}$, which has a magic efficiency in many
areas, notably in combinatorics [Rota95, Kisil97b]. In this direction we also
find it fruitful to expand the notion of an intertwining operator to a _token_
[Kisil01b].
#### 4.2. Examples of Covariant Transform
In this Subsection we will provide several examples of covariant transforms.
Some of them will be expanded in subsequent sections, however a detailed study
of all aspects will not fit into the present work. We start from the classical
example of the group-theoretical wavelet transform:
###### Example 4.7.
Let $V$ be a Hilbert space with an inner product
$\left\langle\cdot,\cdot\right\rangle$ and ${\rho}$ be a unitary
representation of a group $G$ in the space $V$. Let
$F:V\rightarrow\mathbb{C}{}$ be a functional $v\mapsto\left\langle
v,v_{0}\right\rangle$ defined by a vector $v_{0}\in V$. The vector $v_{0}$ is
often called the _mother wavelet_ in areas related to signal processing or
the _vacuum state_ in the quantum framework.
Then the transformation (4.1) is the well-known expression for a _wavelet
transform_ [AliAntGaz00]*(7.48) (or _representation coefficients_):
(4.3)
$\mathcal{W}:v\mapsto\hat{v}(g)=\left\langle{\rho}(g^{-1})v,v_{0}\right\rangle=\left\langle
v,{\rho}(g)v_{0}\right\rangle,\qquad v\in V,\ g\in G.$
The family of vectors $v_{g}={\rho}(g)v_{0}$ is called _wavelets_ or _coherent
states_. In this case we obtain scalar valued functions on $G$, thus the
fundamental rôle of this example is explained in Rem. 4.3.
This scheme is typically carried out for a square integrable representation
${\rho}$ and $v_{0}$ being an admissible vector [Perelomov86, FeichGroech89a,
AliAntGaz00, Fuhr05a, ChristensenOlafsson09a]. In this case the wavelet
(covariant) transform is a map into the square integrable functions
[DufloMoore] with respect to the left Haar measure. The map becomes an
isometry if $v_{0}$ is properly scaled.
However, square integrable representations and admissible vectors do not
cover all interesting cases.
###### Example 4.8.
Let $G=\mathrm{Aff}$ be the “$ax+b$” (or _affine_) group [AliAntGaz00]*§ 8.2:
the set of points $(a,b)$, $a\in\mathbb{R}_{+}{}$, $b\in\mathbb{R}{}$ in the
upper half-plane with the group law:
(4.4) $(a,b)*(a^{\prime},b^{\prime})=(aa^{\prime},ab^{\prime}+b)$
and left invariant measure $a^{-2}\,da\,db$. Its isometric representation on
$V=L_{p}{}(\mathbb{R}{})$ is given by the formula:
(4.5)
$[{\rho_{p}}(g)\,f](x)=a^{\frac{1}{p}}f\left(ax+b\right),\qquad\text{where
}g^{-1}=(a,b).$
We consider the operators
$F_{\pm}:L_{2}{}(\mathbb{R}{})\rightarrow\mathbb{C}{}$ defined by:
(4.6) $F_{\pm}(f)=\frac{1}{2\pi
i}\int_{\mathbb{R}{}}\frac{f(t)\,dt}{t\mp\mathrm{i}}.$
Then the covariant transform (4.1) is the Cauchy integral from
$L_{p}{}(\mathbb{R}{})$ to the space of functions $\hat{f}(a,b)$ such that
$a^{-\frac{1}{p}}\hat{f}(a,b)$ is in the Hardy space in the upper/lower half-
plane $H_{p}{}(\mathbb{R}^{2}_{\pm}{})$. Although the representation (4.5) is
square integrable for $p=2$, the function $\frac{1}{x\pm\mathrm{i}}$ used in
(4.6) is not an admissible vacuum vector. Thus complex analysis becomes
decoupled from the traditional wavelet theory. As a result, applications of
wavelet theory have to rely on extraneous mother wavelets [Hutnik09a].
Many important objects in complex analysis are generated by inadmissible
mother wavelets like (4.6). For example, if
$F:L_{2}{}(\mathbb{R}{})\rightarrow\mathbb{C}{}$ is defined by $F:f\mapsto
F_{+}f+F_{-}f$ then the covariant transform (4.1) reduces to the _Poisson
integral_. If $F:L_{2}{}(\mathbb{R}{})\rightarrow\mathbb{C}^{2}{}$ is defined
by $F:f\mapsto(F_{+}f,F_{-}f)$ then the covariant transform (4.1) represents a
function $f$ on the real line as a jump:
(4.7) $f(z)=f_{+}(z)-f_{-}(z),\qquad f_{\pm}(z)\in
H_{p}{}(\mathbb{R}^{2}_{\pm}{})$
between functions analytic in the upper and the lower half-planes. This makes
a decomposition of $L_{2}{}(\mathbb{R}{})$ into irreducible components of the
representation (4.5). Another interesting but non-admissible vector is the
_Gaussian_ $e^{-x^{2}}$.
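As a small numerical illustration of this example (our own sketch; the test function $f(x)=1/(x+2\mathrm{i})$, the quadrature range and the value $p=2$ are arbitrary choices), one can check on a grid that $a^{-1/p}\hat{f}(a,b)$ indeed reproduces the Cauchy integral of $f$ at the point $b+\mathrm{i}a$ of the upper half-plane:

```python
# The covariant transform with F_+ of (4.6), evaluated numerically (illustration only).
import numpy as np

p = 2.0
t = np.linspace(-2000, 2000, 2_000_001)        # crude quadrature over the real line
f = lambda s: 1.0 / (s + 2j)                   # its Cauchy integral is 1/(z + 2i), Im z > 0

def F_plus(values):                            # discretised fiducial functional (4.6)
    return np.trapz(values / (t - 1j), t) / (2j * np.pi)

def W(a, b):                                   # [W f](a, b) = F_+(rho_p((a, b)^{-1}) f)
    return F_plus(a**(1 / p) * f(a * t + b))

a, b = 0.7, 1.3
z = b + 1j * a
print(np.allclose(a**(-1 / p) * W(a, b), 1.0 / (z + 2j), atol=1e-3))   # True
```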
###### Example 4.9.
For the group $G=SL_{2}{}(\mathbb{R}{})$ [Lang85] let us consider the unitary
representation ${\rho}$ on the space of square integrable functions
$L_{2}{}(\mathbb{R}^{2}_{+}{})$ on the upper half-plane through the Möbius
transformations (1.1):
(4.8)
${\rho}(g):f(z)\mapsto\frac{1}{(cz+d)^{2}}\,f\left(\frac{az+b}{cz+d}\right),\qquad
g^{-1}=\ \begin{pmatrix}a&b\\\ c&d\end{pmatrix}.$
This representation belongs to the discrete series, and the irreducible
invariant subspaces of $L_{2}{}(\mathbb{R}^{2}_{+}{})$ are parametrised by integers. Let $F_{k}$
be the functional $L_{2}{}(\mathbb{R}^{2}_{+}{})\rightarrow\mathbb{C}{}$ of
pairing with the lowest/highest $k$-weight vector in the corresponding
irreducible component (Bergman space) $B_{k}{}(\mathbb{R}^{2}_{\pm}{})$,
$k\geq 2$ of the discrete series [Lang85]*Ch. VI. Then we can build an
operator $F$ from various $F_{k}$ similarly to the previous Example. In
particular, the jump representation (4.7) on the real line generalises to the
representation of a square integrable function $f$ on the upper half-plane as
a sum
$f(z)=\sum_{k}a_{k}f_{k}(z),\qquad f_{k}\in B_{k}{}(\mathbb{R}^{2}_{\pm}{})$
for prescribed coefficients $a_{k}$, where the analytic functions $f_{k}$
come from different irreducible subspaces.
Covariant transform is also meaningful for principal and complementary series
of representations of the group $SL_{2}{}(\mathbb{R}{})$, which are not square
integrable [Kisil97c].
###### Example 4.10.
Let $G=\mathrm{SU}(2)\times\mathrm{Aff}$ be the Cartesian product of the
groups $\mathrm{SU}(2)$ of unitary rotations of $\mathbb{C}^{2}{}$ and the
$ax+b$ group $\mathrm{Aff}$. This group has a unitary linear representation on
the space $L_{2}{}(\mathbb{R}{},\mathbb{C}^{2}{})$ of square-integrable
(vector) $\mathbb{C}^{2}{}$-valued functions by the formula:
${\rho}{}(g)\begin{pmatrix}f_{1}(t)\\\
f_{2}(t)\end{pmatrix}=\begin{pmatrix}\alpha f_{1}(at+b)+\beta f_{2}(at+b)\\\
\gamma f_{1}(at+b)+\delta f_{2}(at+b)\end{pmatrix},$
where $g=\begin{pmatrix}\alpha&\beta\\\
\gamma&\delta\end{pmatrix}\times(a,b)\in\mathrm{SU}(2)\times\mathrm{Aff}$.
It is obvious that the vector Hardy space, that is functions with both
components being analytic, is invariant under such action of $G$.
As a fiducial operator
$F:L_{2}{}(\mathbb{R}{},\mathbb{C}^{2}{})\rightarrow\mathbb{C}{}$ we can take,
cf. (4.6):
(4.9) $F\begin{pmatrix}f_{1}(t)\\\ f_{2}(t)\end{pmatrix}=\frac{1}{2\pi
i}\int_{\mathbb{R}{}}\frac{f_{1}(t)\,dt}{t-\mathrm{i}}.$
Thus the image of the associated covariant transform is a subspace of scalar
valued bounded functions on $G$. In this way we can transform (without a loss
of information) vector-valued problems, e.g. matrix _Wiener–Hopf
factorisation_ [BoetcherKarlovichSpitkovsky02a], to scalar questions of
harmonic analysis on the group $G$.
###### Example 4.11.
A straightforward generalisation of Ex. 4.7 is obtained if $V$ is a Banach
space and $F:V\rightarrow\mathbb{C}{}$ is an element of $V^{*}$. Then the
covariant transform coincides with the construction of wavelets in Banach
spaces [Kisil98a].
###### Example 4.12.
The next stage of generalisation is achieved if $V$ is a Banach space and
$F:V\rightarrow\mathbb{C}^{n}{}$ is a linear operator. Then the corresponding
covariant transform is a map $\mathcal{W}:V\rightarrow
L{}(G,\mathbb{C}^{n}{})$. This is closely related to M.G. Krein’s works on
_directing functionals_ [Krein48a], see also _multiresolution wavelet
analysis_ [BratJorg97a], Clifford-valued Fock–Segal–Bargmann spaces
[CnopsKisil97a] and [AliAntGaz00]*Thm. 7.3.1.
###### Example 4.13.
Let $F$ be a projector $L_{p}{}(\mathbb{R}{})\rightarrow
L_{p}{}(\mathbb{R}{})$ defined by the relation $(Ff)\hat{\
}(\lambda)=\chi(\lambda)\hat{f}(\lambda)$, where the hat denotes the Fourier
transform and $\chi(\lambda)$ is the characteristic function of the set
$[-2,-1]\cup[1,2]$. Then the covariant transform
$L_{p}{}(\mathbb{R}{})\rightarrow C{}(\mathrm{Aff},L_{p}{}(\mathbb{R}{}))$
generated by the representation (4.5) of the affine group from $F$ contains
all information provided by the _Littlewood–Paley operator_ [Grafakos08]*§
5.1.1.
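A discrete sketch of this projector (our own illustration; the sampling grid and the test frequencies are chosen so that both lie exactly on the FFT grid) keeps only the Fourier modes with $|\lambda|\in[1,2]$:

```python
# FFT-based model of the Littlewood-Paley projector F of Example 4.13 (illustration).
import numpy as np

t = np.linspace(0, 16 * np.pi, 4096, endpoint=False)
dt = t[1] - t[0]

def F(samples):
    lam = 2 * np.pi * np.fft.fftfreq(len(samples), d=dt)     # angular frequencies
    chi = (np.abs(lam) >= 1.0) & (np.abs(lam) <= 2.0)        # characteristic function
    return np.fft.ifft(chi * np.fft.fft(samples)).real

signal = np.cos(1.5 * t) + np.cos(5.0 * t)                   # one mode in band, one outside
print(np.allclose(F(signal), np.cos(1.5 * t), atol=1e-9))    # only the in-band part survives
```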
###### Example 4.14.
A step in a different direction is a consideration of non-linear operators.
Take again the “$ax+b$” group and its representation (4.5). We define $F$ to
be a homogeneous but non-linear functional $V\rightarrow\mathbb{R}_{+}{}$:
$F(f)=\frac{1}{2}\int\limits_{-1}^{1}\left|f(x)\right|\,dx.$
The covariant transform (4.1) becomes:
(4.10)
$[\mathcal{W}_{p}f](a,b)=F({\rho_{p}}(a,b)f)=\frac{1}{2}\int\limits_{-1}^{1}\left|a^{\frac{1}{p}}f\left(ax+b\right)\right|\,dx=a^{\frac{1}{p}}\frac{1}{2a}\int\limits^{b+a}_{b-a}\left|f\left(x\right)\right|\,dx.$
Obviously $M_{f}(b)=\max_{a}[\mathcal{W}_{\infty}f](a,b)$ coincides with the
Hardy _maximal function_ , which contains important information on the
original function $f$. From the Cor. 4.5 we deduce that the operator
$M:f\mapsto M_{f}$ intertwines ${\rho_{p}}$ with itself
${\rho_{p}}M=M{\rho_{p}}$.
Of course, the full covariant transform (4.10) is even more detailed than $M$.
For example, $\left\|f\right\|=\max_{b}[\mathcal{W}_{\infty}f](\frac{1}{2},b)$
is the shift invariant norm [Johansson08a].
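The transform (4.10) with $p=\infty$ is easy to discretise; the following sketch (our own, with an arbitrary test signal and a finite set of scales $a$) approximates $M_{f}$ and confirms that it is dominated by $\max|f|$:

```python
# Discretised covariant transform (4.10), p = infinity, and the maximal function (sketch).
import numpy as np

x = np.linspace(-10, 10, 4001)
f = np.exp(-x**2)

def W_inf(a, b):                                     # (1/2a) * integral_{b-a}^{b+a} |f| dx
    mask = (x >= b - a) & (x <= b + a)
    return np.trapz(np.abs(f[mask]), x[mask]) / (2 * a)

scales = np.geomspace(0.05, 10, 60)
M = np.array([max(W_inf(a, b) for a in scales) for b in x[::50]])
print(M.max() <= 1.0 + 1e-9)                         # M_f is bounded by max|f| = 1
```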
###### Example 4.15.
Let $V=L_{c}{}(\mathbb{R}^{2}{})$ be the space of compactly supported bounded
functions on the plane. We take $F$ to be the linear operator
$V\rightarrow\mathbb{C}{}$ of integration over the real line:
$F:f(x,y)\mapsto F(f)=\int_{\mathbb{R}{}}f(x,0)\,dx.$
Let $G$ be the group of Euclidean motions of the plane represented by ${\rho}$
on $V$ by a change of variables. Then the wavelet transform $F({\rho}(g)f)$ is
the _Radon transform_ [Helgason11a].
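A direct discretisation (our own illustration; the bump function $f$, the parametrisation of the Euclidean motion and the quadrature are arbitrary) shows the covariant transform computing line integrals of $f$, i.e. its Radon transform:

```python
# Radon transform as the covariant transform of Example 4.15 (numerical sketch).
import numpy as np

def f(x, y):                                            # compactly supported bump
    r2 = x**2 + y**2
    return np.where(r2 < 1, (1 - r2)**2, 0.0)

def radon(theta, u=0.0, v=0.0, n=6001, half_len=2.0):
    x = np.linspace(-half_len, half_len, n)             # the line y = 0 ...
    # ... mapped by a Euclidean motion: rotate by theta, then shift by (u, v)
    return np.trapz(f(x * np.cos(theta) + u, x * np.sin(theta) + v), x)

print(np.isclose(radon(0.0), radon(np.pi / 3)))         # equal, by rotational symmetry of f
print(np.isclose(radon(0.0), 16 / 15, atol=1e-6))       # integral of (1 - x^2)^2 over [-1, 1]
```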
#### 4.3. Symbolic Calculi
There is a very important class of the covariant transforms which maps
operators to functions. Among numerous sources we wish to single out works of
Berezin [Berezin72, Berezin86]. We start from the Berezin _covariant symbol_.
###### Example 4.16.
Let a representation ${\rho}$ of a group $G$ act on a space $X$. Then there is
an associated representation ${\rho_{B}}$ of $G$ on a space $V=B{}(X,Y)$ of
linear operators $X\rightarrow Y$ defined by the identity [Berezin86,
Kisil98a]:
(4.11) $({\rho_{B}}(g)A)x=A({\rho}(g^{-1})x),\qquad x\in X,\ g\in G,\ A\in
B{}(X,Y).$
Following the Remark 4.3 we take $F$ to be a functional
$V\rightarrow\mathbb{C}{}$, for example $F$ can be defined from a pair $x\in
X$, $l\in Y^{*}$ by the expression $F:A\mapsto\left\langle Ax,l\right\rangle$.
Then the covariant transform is:
$\mathcal{W}:A\mapsto\hat{A}(g)=F({\rho_{B}}(g)A).$
This is an example of _covariant calculus_ [Kisil98a, Berezin72].
There are several variants of the last Example which are of a separate
interest.
###### Example 4.17.
A modification of the previous construction is obtained if we have two groups
$G_{1}$ and $G_{2}$ represented by ${\rho_{1}}$ and ${\rho_{2}}$ on $X$ and
$Y^{*}$ respectively. Then we have a covariant transform $B{}(X,Y)\rightarrow
L{}(G_{1}\times G_{2},\mathbb{C}{})$ defined by the formula:
$\mathcal{W}:A\mapsto\hat{A}(g_{1},g_{2})=\left\langle
A{\rho_{1}}(g_{1})x,{\rho_{2}}(g_{2})l\right\rangle.$
This generalises the above _Berezin covariant calculi_ [Kisil98a].
###### Example 4.18.
Let us restrict the previous example to the case when $X=Y$ is a Hilbert
space, ${\rho_{1}}{}={\rho_{2}}{}={\rho}$ and $x=l$ with $\left\|x\right\|=1$.
Then the range of the covariant transform:
$\mathcal{W}:A\mapsto\hat{A}(g)=\left\langle
A{\rho}(g)x,{\rho}(g)x\right\rangle$
is a subset of the _numerical range_ of the operator $A$. As a function on a
group $\hat{A}(g)$ provides a better description of $A$ than the set of its
values—numerical range.
###### Example 4.19.
The group $\mathrm{SU}(1,1)\simeq SL_{2}{}(\mathbb{R}{})$ consists of $2\times
2$ matrices of the form $\begin{pmatrix}\alpha&\beta\\\
\bar{\beta}&\bar{\alpha}\end{pmatrix}$ with the unit determinant [Lang85]*§
IX.1. Let $T$ be an operator with the spectral radius less than $1$. Then the
associated Möbius transformation
(4.12) $g:T\mapsto g\cdot T=\frac{\alpha T+\beta
I}{\bar{\beta}T+\bar{\alpha}I},\qquad\text{where}\quad
g=\begin{pmatrix}\alpha&\beta\\\ \bar{\beta}&\bar{\alpha}\end{pmatrix}\in
SL_{2}{}(\mathbb{R}{}),\ $
produces a well-defined operator with the spectral radius less than $1$ as
well. Thus we have a representation of $SU(1,1)$.
Let us introduce the _defect operators_ $D_{T}=(I-T^{*}T)^{1/2}$ and
$D_{T^{*}}=(I-TT^{*})^{1/2}$. For the fiducial operator $F=D_{T^{*}}$ the
covariant transform is, cf. [NagyFoias70]*§ VI.1, (1.2):
${}[\mathcal{W}T](g)=F(g\cdot
T)=-e^{\mathrm{i}\phi}\,\Theta_{T}(z)\,D_{T},\qquad\text{for
}g=\begin{pmatrix}e^{\mathrm{i}\phi/2}&0\\\
0&e^{-\mathrm{i}\phi/2}\end{pmatrix}\begin{pmatrix}1&-z\\\
-\bar{z}&1\end{pmatrix},$
where the _characteristic function_ $\Theta_{T}(z)$ [NagyFoias70]*§ VI.1,
(1.1) is:
$\Theta_{T}(z)=-T+D_{T^{*}}\,(I-zT^{*})^{-1}\,z\,D_{T}.$
Thus we approached the _functional model_ of operators from the covariant
transform. In accordance with Remark 4.3 the model is most fruitful for the
case of operator $F=D_{T^{*}}$ being one-dimensional.
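Both claims made along the way, that the map (4.12) preserves the set of operators with spectral radius less than $1$ and that it composes like the group multiplication, are easy to probe numerically. The following sketch is our own illustration (a random $4\times 4$ matrix and arbitrary group elements):

```python
# Numerical probe of the Moebius action (4.12) on operators (illustration only).
import numpy as np

rng = np.random.default_rng(1)

def su11(phi, z):                    # an SU(1,1) element: rotation times "translation", |z| < 1
    rot = np.array([[np.exp(1j * phi / 2), 0], [0, np.exp(-1j * phi / 2)]])
    trn = np.array([[1, -z], [-np.conj(z), 1]]) / np.sqrt(1 - abs(z)**2)
    return rot @ trn

def moebius(g, T):                   # g . T = (alpha T + beta)(conj(beta) T + conj(alpha))^{-1}
    a, b = g[0, 0], g[0, 1]
    I = np.eye(T.shape[0])
    return (a * T + b * I) @ np.linalg.inv(np.conj(b) * T + np.conj(a) * I)

def spectral_radius(T):
    return max(abs(np.linalg.eigvals(T)))

T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
T *= 0.9 / spectral_radius(T)        # force spectral radius < 1

g1, g2 = su11(0.7, 0.3 + 0.2j), su11(-1.1, -0.4j)
print(spectral_radius(moebius(g1, T)) < 1)                             # radius stays below 1
print(np.allclose(moebius(g1, moebius(g2, T)), moebius(g1 @ g2, T)))   # composition law
```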
The intertwining property in the previous examples was obtained as a
consequence of the general Theorem 4.4 about the covariant transform. However
it is worth stating it as a separate definition:
###### Definition 4.20.
A _covariant calculus_ , also known as _symbolic calculus_ , is a map from
operators to functions, which intertwines two representations of the same
group in the respective spaces.
There is a dual class of covariant transforms acting in the opposite
direction: from functions to operators. The prominent examples are the Berezin
_contravariant symbol_ [Berezin72, Kisil98a] and symbols of
_pseudodifferential operators_ (PDO) [Howe80b, Kisil98a].
###### Example 4.21.
The classical _Riesz–Dunford functional calculus_ [DunfordSchwartzI]*§ VII.3
[Nikolskii86]*§ IV.2 maps analytic functions on the unit disk to linear
operators; it is defined through a Cauchy-type formula involving the resolvent.
The calculus is an intertwining operator [Kisil02a] between the Möbius
transformations of the unit disk, cf. (5.33), and the actions (4.12) on
operators from the Example 4.19. This topic will be developed in Subsection
6.1.
In line with the Defn. 4.20 we can directly define the corresponding calculus
through the intertwining property [Kisil95i, Kisil02a]:
###### Definition 4.22.
A _contravariant calculus_ , also known as _functional calculus_ , is a map
from functions to operators, which intertwines two representations of the same
group in the respective spaces.
The duality between co- and contravariant calculi is the particular case of
the duality between covariant transform and the inverse covariant transform
defined in the next Subsection. In many cases a proper choice of spaces makes
covariant and/or contravariant calculus a bijection between functions and
operators. Subsequently only one form of calculus, either co- or
contravariant, is defined explicitly, although both of them are there in fact.
#### 4.4. Inverse Covariant Transform
An object invariant under the left action $\Lambda$ (4.2) is called _left
invariant_. For example, let $L$ and $L^{\prime}$ be two left invariant spaces
of functions on $G$. We say that a pairing
$\left\langle\cdot,\cdot\right\rangle:L\times
L^{\prime}\rightarrow\mathbb{C}{}$ is _left invariant_ if
(4.13) $\left\langle\Lambda(g)f,\Lambda(g)f^{\prime}\right\rangle=\left\langle
f,f^{\prime}\right\rangle,\quad\textrm{ for all }\quad f\in L,\ f^{\prime}\in
L^{\prime}.$
###### Remark 4.23.
1. (i)
We do not require the pairing to be linear in general.
2. (ii)
If the pairing is invariant on space $L\times L^{\prime}$ it is not
necessarily invariant (or even defined) on the whole $C{}(G)\times C{}(G)$.
3. (iii)
In a more general setting we shall study an invariant pairing on homogeneous
spaces instead of the group. However due to length constraints we cannot
consider it here beyond the Example 4.26.
4. (iv)
An invariant pairing on $G$ can be obtained from an _invariant functional_ $l$
by the formula $\left\langle f_{1},f_{2}\right\rangle=l(f_{1}\bar{f}_{2})$.
For a representation ${\rho}$ of $G$ in $V$ and $v_{0}\in V$ we fix a function
$w(g)={\rho}(g)v_{0}$. We assume that the pairing can be extended in its
second component to such $V$-valued functions, say, in the weak sense.
###### Definition 4.24.
Let $\left\langle\cdot,\cdot\right\rangle$ be a left invariant pairing on
$L\times L^{\prime}$ as above, let ${\rho}$ be a representation of $G$ in a
space $V$, we define the function $w(g)={\rho}(g)v_{0}$ for $v_{0}\in V$. The
_inverse covariant transform_ $\mathcal{M}$ is a map $L\rightarrow V$ defined
by the pairing:
(4.14) $\mathcal{M}:f\mapsto\left\langle f,w\right\rangle,\qquad\text{ where
}f\in L.$
###### Example 4.25.
Let $G$ be a group with a unitary square integrable representation $\rho$. An
invariant pairing of two square integrable functions is obviously done by the
integration over the Haar measure:
$\left\langle f_{1},f_{2}\right\rangle=\int_{G}f_{1}(g)\bar{f}_{2}(g)\,dg.$
For an admissible vector $v_{0}$ [DufloMoore] [AliAntGaz00]*Chap. 8 the
inverse covariant transform is known in this setup as a _reconstruction
formula_.
###### Example 4.26.
Let $\rho$ be a square integrable representation of $G$ modulo a subgroup
$H\subset G$ and let $X=G/H$ be the corresponding homogeneous space with a
quasi-invariant measure $dx$. Then integration over $dx$ with an appropriate
weight produces an invariant pairing. The inverse covariant transform is a
more general version [AliAntGaz00]*(7.52) of the _reconstruction formula_
mentioned in the previous example.
Let $\rho$ not be a square integrable representation (even modulo a subgroup),
or let $v_{0}$ be an inadmissible vector of a square integrable representation
$\rho$. An invariant pairing in this case is not associated with integration
over any non-singular invariant measure on $G$. In this case we have a
_Hardy pairing_. The following example explains the name.
###### Example 4.27.
Let $G$ be the “$ax+b$” group and its representation ${\rho}$ (4.5) from Ex.
4.8. An invariant pairing on $G$, which is not generated by the Haar measure
$a^{-2}da\,db$, is:
(4.15) $\left\langle f_{1},f_{2}\right\rangle=\lim_{a\rightarrow
0}\int\limits_{-\infty}^{\infty}f_{1}(a,b)\,\bar{f}_{2}(a,b)\,db.$
For this pairing we can consider functions $\frac{1}{2\pi i(x+i)}$ or
$e^{-x^{2}}$, which are not admissible vectors in the sense of square
integrable representations. Then the inverse covariant transform provides an
_integral resolution_ of the identity.
Similar pairings can be defined for other semi-direct products of two groups.
We can also extend a Hardy pairing to a group, which has a subgroup with such
a pairing.
###### Example 4.28.
Let $G$ be the group $SL_{2}{}(\mathbb{R}{})$ from the Ex. 4.9. Then the
“$ax+b$” group is a subgroup of $SL_{2}{}(\mathbb{R}{})$, moreover we can
parametrise $SL_{2}{}(\mathbb{R}{})$ by triples $(a,b,\theta)$,
$\theta\in(-\pi,\pi]$ with the respective Haar measure [Lang85]*III.1(3). Then
the Hardy pairing
(4.16) $\left\langle f_{1},f_{2}\right\rangle=\lim_{a\rightarrow
0}\int\limits_{-\infty}^{\infty}f_{1}(a,b,\theta)\,\bar{f}_{2}(a,b,\theta)\,db\,d\theta.$
is invariant on $SL_{2}{}(\mathbb{R}{})$ as well. The corresponding inverse
covariant transform provides even a finer resolution of the identity which is
invariant under conformal mappings of the Lobachevsky half-plane.
### 5\. Analytic Functions
We saw in the first section that an inspiring geometry of cycles can be
recovered from the properties of $SL_{2}{}(\mathbb{R}{})$. In this section we
consider a realisation of the function theory within the Erlangen approach
[Kisil97c, Kisil97a, Kisil01a, Kisil02c]. The covariant transform will be our
principal tool in this construction.
#### 5.1. Induced Covariant Transform
The choice of a mother wavelet or fiducial operator $F$ from Section 4.1 can
significantly influence the behaviour of the covariant transform. Let $G$ be a
group and ${H}$ be its closed subgroup with the corresponding homogeneous
space $X=G/{H}$. Let ${\rho}$ be a representation of $G$ by operators on a
space $V$, we denote by ${\rho_{H}}$ the restriction of ${\rho}$ to the
subgroup $H$.
###### Definition 5.1.
Let $\chi$ be a representation of the subgroup ${H}$ in a space $U$ and
$F:V\rightarrow U$ be an intertwining operator between $\chi$ and the
representation ${\rho_{H}}$:
(5.1) $F({\rho}(h)v)=F(v)\chi(h),\qquad\text{ for all }h\in{H},\ v\in V.$
Then the covariant transform (4.1) generated by $F$ is called the _induced
covariant transform_.
The following is the main motivating example.
###### Example 5.2.
Consider the traditional wavelet transform as outlined in Ex. 4.7. Choose a
vacuum vector $v_{0}$ to be a joint eigenvector for all operators ${\rho}(h)$,
$h\in H$, that is ${\rho}(h)v_{0}=\chi(h)v_{0}$, where $\chi(h)$ is a complex
number depending on $h$. Then $\chi$ is obviously a character of $H$.
The image of the wavelet transform (4.3) with such a mother wavelet has the
property:
$\hat{v}(gh)=\left\langle v,{\rho}(gh)v_{0}\right\rangle=\left\langle
v,{\rho}(g)\chi(h)v_{0}\right\rangle=\chi(h)\hat{v}(g).$
Thus the wavelet transform is uniquely defined by cosets on the homogeneous
space $G/H$. In this case we previously spoke about the _reduced wavelet
transform_ [Kisil97a]. A representation ${\rho_{0}}$ is called _square
integrable_ $\mod H$ if the induced wavelet transform $[\mathcal{W}f_{0}](w)$
of the vacuum vector $f_{0}(x)$ is square integrable on $X$.
The image of the induced covariant transform has a similar property:
(5.2)
$\hat{v}(gh)=F({\rho}((gh)^{-1})v)=F({\rho}(h^{-1}){\rho}(g^{-1})v)=F({\rho}(g^{-1})v)\chi{}{}(h^{-1}).$
Thus it is enough to know the value of the covariant transform at a
single element in every coset of $G/H$ in order to reconstruct it for the entire
group $G$ by the representation $\chi$. Since coherent states (wavelets) are
now parametrised by points of the homogeneous space $G/H$, they are sometimes
referred to as coherent states which are not connected to a group [Klauder96a];
however, this is true only in a very narrow sense, as explained above.
###### Example 5.3.
To make it more specific we can consider the representation of
$SL_{2}{}(\mathbb{R}{})$ defined on $L_{2}{}(\mathbb{R}{})$ by the formula,
cf. (3.8):
${\rho}(g):f(x)\mapsto\frac{1}{(cx+d)}\,f\left(\frac{ax+b}{cx+d}\right),\qquad
g^{-1}=\ \begin{pmatrix}a&b\\\ c&d\end{pmatrix}.$
Let $K\subset SL_{2}{}(\mathbb{R}{})$ be the compact subgroup of matrices
$h_{t}=\begin{pmatrix}\cos t&\sin t\\\ -\sin t&\cos t\end{pmatrix}$. Then for
the fiducial operator $F_{\pm}$ (4.6) we have
$F_{\pm}\circ{\rho}(h_{t})=e^{\mp\mathrm{i}t}F_{\pm}$. Thus we can consider
the covariant transform only for points in the homogeneous space
$SL_{2}{}(\mathbb{R}{})/K$, moreover this set can be naturally identified with
the $ax+b$ group. Thus we gain no advantage from extending the group in
Ex. 4.8 from $ax+b$ to $SL_{2}{}(\mathbb{R}{})$ if we still use
the fiducial operator $F_{\pm}$ (4.6).
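The relation $F_{\pm}\circ{\rho}(h_{t})=e^{\mp\mathrm{i}t}F_{\pm}$ used here can be verified numerically; the sketch below (our own, with the arbitrary test function $f(x)=1/(x+2\mathrm{i})$ and a crude quadrature) checks it for $F_{+}$ and one value of $t$:

```python
# Check that F_+ is a joint eigenvector of the compact subgroup K (illustration only).
import numpy as np

x = np.linspace(-2000, 2000, 2_000_001)
f = lambda s: 1.0 / (s + 2j)

def F_plus(values):                                    # discretised functional (4.6)
    return np.trapz(values / (x - 1j), x) / (2j * np.pi)

def rho_ht(t, fun):                                    # representation of Example 5.3 at h_t
    a, b, c, d = np.cos(t), -np.sin(t), np.sin(t), np.cos(t)   # entries of h_t^{-1}
    return fun((a * x + b) / (c * x + d)) / (c * x + d)

t = 0.4
print(np.allclose(F_plus(rho_ht(t, f)), np.exp(-1j * t) * F_plus(f), atol=1e-3))   # True
```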
Functions on the group $G$, which have the property
$\hat{v}(gh)=\hat{v}(g)\chi(h)$ (5.2), provide a space for the representation
of $G$ induced by the representation $\chi$ of the subgroup $H$. This explains
the choice of the name for induced covariant transform.
###### Remark 5.4.
Induced covariant transform uses the fiducial operator $F$ which passes
through the action of the subgroup ${H}$. This reduces information which we
obtained from this transform in some cases.
There is also a simple connection between a covariant transform and right
shifts:
###### Proposition 5.5.
Let $G$ be a Lie group and ${\rho}$ be a representation of $G$ in a space $V$.
Let $[\mathcal{W}f](g)=F({\rho}(g^{-1})f)$ be a covariant transform defined by
the fiducial operator $F:V\rightarrow U$. Then the right shift
$[\mathcal{W}f](gg^{\prime})$ by $g^{\prime}$ is the covariant transform
$[\mathcal{W^{\prime}}f](g)=F^{\prime}({\rho}(g^{-1})f)$ defined by the
fiducial operator $F^{\prime}=F\circ{\rho}(g^{\prime\,-1})$.
In other words the covariant transform intertwines right shifts on the group
$G$ with the associated action ${\rho_{B}}$ (4.11) on fiducial operators.
Although the above result is obvious, its infinitesimal version has
interesting consequences.
###### Corollary 5.6 ([Kisil10c]).
Let $G$ be a Lie group with a Lie algebra $\mathfrak{g}$ and ${\rho}$ be a
smooth representation of $G$. We denote by $d{\rho_{B}}$ the derived
representation of the associated representation ${\rho_{B}}$ (4.11) on
fiducial operators.
Let a fiducial operator $F$ be a null-solution, i.e. $AF=0$, for the operator
$A=\sum_{j}a_{j}\,d{\rho^{X_{j}}_{B}}$, where $X_{j}\in\mathfrak{g}$ and $a_{j}$
are constants. Then the covariant transform
$[\mathcal{W}f](g)=F({\rho}(g^{-1})f)$ for any $f$ satisfies:
$D\,[\mathcal{W}f](g)=0,\qquad\text{where}\quad D=\sum_{j}\bar{a}_{j}\mathfrak{L}^{X_{j}}.$
Here $\mathfrak{L}^{X_{j}}$ are the left invariant fields (Lie derivatives) on
$G$ corresponding to $X_{j}$.
###### Example 5.7.
Consider the representation ${\rho}$ (4.5) of the $ax+b$ group with $p=1$.
Let $A$ and $N$ be the basis elements of the corresponding Lie algebra
generating the one-parameter subgroups $(e^{t},0)$ and $(0,t)$. Then the derived representations
are:
$[d{\rho^{A}}f](x)=f(x)+xf^{\prime}(x),\qquad[d{\rho^{N}}f](x)=f^{\prime}(x).$
The corresponding left invariant vector fields on $ax+b$ group are:
$\mathfrak{L}^{A}=a\partial_{a},\qquad\mathfrak{L}^{N}=a\partial_{b}.$
The mother wavelet $\frac{1}{x+\mathrm{i}}$ is a null solution of the operator
$d{\rho^{A}}+\mathrm{i}d{\rho^{N}}=I+(x+\mathrm{i})\frac{d}{dx}$. Therefore
the covariant transform with the fiducial operator $F_{+}$ (4.6) will consist
of null solutions of the operator
$\mathfrak{L}^{A}-\mathrm{i}\mathfrak{L}^{N}=-\mathrm{i}a(\partial_{b}+\mathrm{i}\partial_{a})$,
which is in essence the Cauchy-Riemann operator in the upper half-plane.
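The annihilation claim for this mother wavelet is a one-line symbolic computation (our own check):

```python
# 1/(x+i) is annihilated by d rho^A + i d rho^N = I + (x+i) d/dx (symbolic check).
import sympy as sp

x = sp.symbols('x')
w = 1 / (x + sp.I)
print(sp.simplify(w + (x + sp.I) * sp.diff(w, x)))   # 0
```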
There is a statement which extends the previous Corollary from differential
operators to integro-differential ones. We will formulate it in the wavelet
setting.
###### Corollary 5.8.
Let $G$ be a group and ${\rho}$ be a unitary representation of $G$, which can
be extended to a vector space $V$ of functions or distributions on $G$. Let a
mother wavelet $w\in V^{\prime}$ satisfy the equation
$\int_{G}a(g)\,{\rho}(g)w\,dg=0,$
for a fixed distribution $a(g)\in V$ and a (not necessarily invariant) measure
$dg$. Then any wavelet transform $F(g)=\mathcal{W}f(g)=\left\langle
f,{\rho}(g)w\right\rangle$ obeys the condition:
$DF=0,\qquad\text{where}\quad D=\int_{G}\bar{a}(g)\,R(g)\,dg,$
with $R$ being the right regular representation of $G$.
Clearly, the Corollary 5.6 is a particular case of the Corollary 5.8 with a
distribution $a$, which is a combination of derivatives of Dirac’s delta
functions. The last Corollary will be illustrated at the end of Section 6.1.
###### Remark 5.9.
We note that Corollaries 5.6 and 5.8 are true whenever we have an intertwining
property between ${\rho}$ and the right regular representation of $G$.
#### 5.2. Induced Wavelet Transform and Cauchy Integral
We again use the general scheme from Subsection 3.2. The $ax+b$ group is
isomorphic to the subgroup of $SL_{2}{}(\mathbb{R}{})$ consisting of the lower-
triangular matrices:
$F=\left\\{\frac{1}{\sqrt{a}}\begin{pmatrix}a&0\\\ b&1\end{pmatrix},\
a>0\right\\}.$
The corresponding homogeneous space $X=SL_{2}{}(\mathbb{R}{})/F$ is one-
dimensional and can be parametrised by a real number. The natural projection
$p:SL_{2}{}(\mathbb{R}{})\rightarrow\mathbb{R}{}$ and its left inverse
$s:\mathbb{R}{}\rightarrow SL_{2}{}(\mathbb{R}{})$ can be defined as follows:
(5.3) $p:\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\mapsto\frac{b}{d},\qquad
s:u\mapsto\begin{pmatrix}1&u\\\ 0&1\end{pmatrix}.$
Thus we calculate the corresponding map $r:SL_{2}{}(\mathbb{R}{})\rightarrow
F$, see Subsection 2.1:
(5.4) $r:\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\mapsto\begin{pmatrix}d^{-1}&0\\\ c&d\end{pmatrix}.$
Therefore the action of $SL_{2}{}(\mathbb{R}{})$ on the real line is exactly
the Möbius map (1.1):
$g:u\mapsto p(g^{-1}*s(u))=\frac{au+b}{cu+d},\quad\text{ where
}g^{-1}=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}.$
We also calculate that
$r(g^{-1}*s(u))=\begin{pmatrix}(cu+d)^{-1}&0\\\ c&cu+d\end{pmatrix}.$
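The maps (5.3)–(5.4) and the computations above can be confirmed symbolically; the following sketch (our own, with the determinant condition imposed at the end) checks both the Möbius formula and the decomposition $g^{-1}s(u)=s(g\cdot u)\,r(g^{-1}s(u))$:

```python
# Symbolic verification of the maps p, s, r from (5.3)-(5.4) (illustration only).
import sympy as sp

a, b, c, d, u = sp.symbols('a b c d u')
g_inv = sp.Matrix([[a, b], [c, d]])
s = lambda t: sp.Matrix([[1, t], [0, 1]])
p = lambda m: m[0, 1] / m[1, 1]
r = lambda m: sp.Matrix([[1 / m[1, 1], 0], [m[1, 0], m[1, 1]]])

m = g_inv * s(u)
print(p(m))                                            # (a*u + b)/(c*u + d), the Mobius map
residual = s(p(m)) * r(m) - m
print(sp.simplify(residual.subs(d, (1 + b * c) / a)))  # zero matrix once det g^{-1} = 1
```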
To build an induced representation we need a character of the affine group. A
generic character of $F$ is a power of its diagonal element:
${\rho_{\kappa}}\begin{pmatrix}a&0\\\ c&a^{-1}\end{pmatrix}=a^{\kappa}.$
Thus the corresponding realisation of induced representation (3.6) is:
(5.5)
${\rho_{\kappa}}(g):f(u)\mapsto\frac{1}{(cu+d)^{\kappa}}\,f\left(\frac{au+b}{cu+d}\right)\qquad\text{
where }g^{-1}=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}.$
The only freedom remaining in the scheme is the choice of the value of the _number_
$\kappa$ and the corresponding functional space where our representation acts.
At this point we have a wider choice of $\kappa$ than is usually assumed:
it can belong to different hypercomplex systems.
One of the important properties which would be nice to have is the unitarity
of the representation (5.5) with respect to the standard inner product:
$\left\langle
f_{1},f_{2}\right\rangle=\int_{\mathbb{R}{}}f_{1}(u)\bar{f}_{2}(u)\,du.$
A change of variables $x=\frac{au+b}{cu+d}$ in the integral suggests the
following property is necessary and sufficient for that:
(5.6) $\kappa+\bar{\kappa}=2.$
A mother wavelet for an induced wavelet transform shall be an eigenvector for
the action of a subgroup $\tilde{H}$ of $SL_{2}{}(\mathbb{R}{})$, see (5.1).
Let us consider the most common case of $\tilde{H}=K$ and take the
infinitesimal condition with the derived representation:
$d{\rho^{Z}_{\kappa}}w_{0}=\lambda w_{0}$, since $Z$ (3.12) is the generator of the
subgroup $K$. In other words, the restriction of $w_{0}$ to a $K$-orbit should
be given by $e^{\lambda t}$ in the exponential coordinate $t$ along the
$K$-orbit. However we usually need its expression in other “more natural”
coordinates. For example [Kisil11b], an eigenvector of the derived
representation $d{\rho^{Z}_{\kappa}}$ should satisfy the differential equation
in the ordinary parameter $x\in\mathbb{R}{}$:
(5.7) $-\kappa xf(x)-f^{\prime}(x)(1+x^{2})=\lambda f(x).$
The equation has no singular points on the real line, and the general solution
is globally defined (up to a constant factor) by:
(5.8)
$w_{\lambda,\kappa}(x)=\frac{1}{(1+x^{2})^{\kappa/2}}\left(\frac{x-\mathrm{i}}{x+\mathrm{i}}\right)^{\mathrm{i}\lambda/2}=\frac{(x-\mathrm{i})^{(\mathrm{i}\lambda-\kappa)/2}}{(x+\mathrm{i})^{(\mathrm{i}\lambda+\kappa)/2}}.$
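That (5.8) indeed solves (5.7) can be confirmed symbolically; dividing (5.7) by $w$, it suffices to check that $-\kappa x-(1+x^{2})(\log w)^{\prime}$ equals $\lambda$ (our own sketch):

```python
# Symbolic check that w_{lambda,kappa} of (5.8) satisfies the eigenfunction equation (5.7).
import sympy as sp

x = sp.symbols('x', real=True)
kappa, lam = sp.symbols('kappa lambda')
logw = -kappa/2 * sp.log(1 + x**2) + sp.I * lam/2 * (sp.log(x - sp.I) - sp.log(x + sp.I))
print(sp.simplify(-kappa * x - (1 + x**2) * sp.diff(logw, x) - lam))   # 0
```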
To avoid multivalued functions we need $2\pi$-periodicity along the
exponential coordinate on $K$. This implies that the parameter
$m=-\mathrm{i}\lambda$ is an integer. Therefore the solution becomes:
(5.9)
$w_{m,\kappa}(x)=\frac{(x+\mathrm{i})^{(m-\kappa)/2}}{(x-\mathrm{i})^{(m+\kappa)/2}}.$
The corresponding wavelets resemble the Cauchy kernel normalised to the
invariant metric in the Lobachevsky half-plane:
$\displaystyle w_{m,\kappa}(u,v;x)={\rho^{F}_{\kappa}}(s(u,v))\,w_{m,\kappa}(x)=v^{\kappa/2}\frac{\left(x-u+\mathrm{i}v\right)^{(m-\kappa)/2}}{\left(x-u-\mathrm{i}v\right)^{(m+\kappa)/2}}.$
Therefore the wavelet transform (4.3) from functions on the real line to
functions on the upper half-plane is:
$\displaystyle\hat{f}(u,v)=\left\langle f,{\rho^{F}_{\kappa}}(u,v)w_{m,\kappa}\right\rangle=v^{\bar{\kappa}/2}\int_{\mathbb{R}{}}f(x)\,\frac{(x-(u+\mathrm{i}v))^{(m-\kappa)/2}}{(x-(u-\mathrm{i}v))^{(m+\kappa)/2}}\,dx.$
Introduction of a complex variable $z=u+\mathrm{i}v$ allows us to write it as:
(5.10) $\hat{f}(z)=(\Im
z)^{\bar{\kappa}/2}\int_{\mathbb{R}{}}f(x)\frac{(x-{z})^{(m-\kappa)/2}}{(x-\bar{z})^{(m+\kappa)/2}}\,dx.$
According to the general theory this wavelet transform intertwines
representations ${\rho^{F}_{\kappa}}$ (5.5) on the real line (induced by the
character $a^{\kappa}$ of the subgroup $F$) and ${\rho^{K}_{m}}$ (3.8) on the
upper half-plane (induced by the character $e^{\mathrm{i}mt}$ of the subgroup
$K$).
#### 5.3. The Cauchy-Riemann (Dirac) and Laplace Operators
Ladder operators $L^{\\!\pm}=\pm\mathrm{i}A+B$ act by raising/lowering indexes
of the $K$-eigenfunctions $w_{m,\kappa}$ (5.8), see Subsection 3.3. More
explicitly [Kisil11b]:
(5.11)
$d{\rho^{L^{\\!\pm}}_{\kappa}}:w_{m,\kappa}\mapsto-\frac{\mathrm{i}}{2}(m\pm\kappa)w_{m\pm
2,\kappa}.$
There are two possibilities here: $m\pm\kappa$ is zero for some $m$ or not. In
the first case the chain (5.11) of eigenfunctions $w_{m,\kappa}$ terminates on
one side under the transitive action (3.16) of the ladder operators; otherwise
the chain is infinite in both directions. That is, the values $m=\mp\kappa$
and only those correspond to the maximal (minimal) weight function
$w_{\mp\kappa,\kappa}(x)=\frac{1}{(x\pm\mathrm{i})^{\kappa}}\in
L_{2}{}(\mathbb{R}{})$, which are annihilated by $L^{\\!\pm}$:
(5.12) $\displaystyle
d{\rho^{L^{\\!\pm}}_{\kappa}}w_{\mp\kappa,\kappa}=(\pm\mathrm{i}d{\rho^{A}_{\kappa}}+d{\rho^{B}_{\kappa}})\,w_{\mp\kappa,\kappa}=0.$
By the Cor. 5.6 for the mother wavelets $w_{\mp\kappa,\kappa}$, which are
annihilated by (5.12), the images of the respective wavelet transforms are
null solutions to the left-invariant differential operator
$D_{\pm}=\overline{\mathfrak{L}^{{L^{\\!\pm}}}}$:
(5.13)
$D_{\pm}=\mp\mathrm{i}\mathfrak{L}^{A}+\mathfrak{L}^{B}=\textstyle-\frac{\mathrm{i}\kappa}{2}+v(\partial_{u}\pm\mathrm{i}\partial_{v}).$
This is a conformal version of the Cauchy–Riemann equation. The second order
conformal Laplace-type operators
$\Delta_{+}=\overline{\mathfrak{L}^{L^{\\!-}}\mathfrak{L}^{L^{\\!+}}}$ and
$\Delta_{-}=\overline{\mathfrak{L}^{L^{\\!+}}\mathfrak{L}^{L^{\\!-}}}$ are:
(5.14)
$\Delta_{\pm}=\textstyle(v\partial_{u}-\frac{\mathrm{i}\kappa}{2})^{2}+v^{2}\partial_{v}^{2}\pm\frac{\kappa}{2}.$
For the mother wavelets $w_{m,\kappa}$ in (5.12) such that $m=\mp\kappa$ the
unitarity condition $\kappa+\bar{\kappa}=2$, see (5.6), together with
$m\in\mathbb{Z}{}$ implies $\kappa=\mp m=1$. In such a case the wavelet
transforms (5.10) are:
(5.15) $\hat{f}^{+}(z)=(\Im
z)^{\frac{1}{2}}\int_{\mathbb{R}{}}\frac{f(x)\,dx}{x-z}\quad\text{and}\quad\hat{f}^{-}(z)=(\Im
z)^{\frac{1}{2}}\int_{\mathbb{R}{}}\frac{f(x)\,dx}{x-\bar{z}},$
for $w_{-1,1}$ and $w_{1,1}$ respectively. The first one is the Cauchy
integral formula up to the factor $2\pi\mathrm{i}\sqrt{\Im z}$. Clearly, one
integral is the complex conjugate of the other. Moreover, the minimal/maximal
weight cases can be intertwined by the following automorphism of the Lie
algebra $\mathfrak{sl}_{2}$:
$A\rightarrow B,\quad B\rightarrow A,\quad Z\rightarrow-Z.$
As explained before, $\hat{f}^{\pm}(z)$ are null solutions to the operators
$D_{\pm}$ (5.13) and $\Delta_{\pm}$ (5.14). These transformations intertwine
unitarily equivalent representations on the real line and on the upper half-
plane, thus they can be made unitary for proper spaces. This is the source of
two faces of the Hardy spaces: they can be defined either as square-integrable
on the real line with an analytic extension to the half-plane, or analytic on
the half-plane with square-integrability on an infinitesimal displacement of
the real line.
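For a concrete instance one may take $\kappa=1$ and the image (5.15) of $f(x)=1/(x+2\mathrm{i})$, which in closed form is $(\Im z)^{1/2}\,2\pi\mathrm{i}/(z+2\mathrm{i})$; a short symbolic computation (our own sketch) confirms that it is annihilated by $D_{+}$ of (5.13):

```python
# Symbolic check: v^{1/2} times a holomorphic function is a null solution of D_+ (kappa = 1).
import sympy as sp

u, v = sp.symbols('u v', positive=True)
z = u + sp.I * v
fhat = sp.sqrt(v) * 2 * sp.pi * sp.I / (z + 2 * sp.I)    # image (5.15) of f(x) = 1/(x + 2i)
D_plus = lambda F: -sp.I / 2 * F + v * (sp.diff(F, u) + sp.I * sp.diff(F, v))
print(sp.simplify(D_plus(fhat)))                         # 0
```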
For the third possibility, $m\pm\kappa\neq 0$, there is no operator spanned
by the derived representation of the Lie algebra $\mathfrak{sl}_{2}$ which
kills the mother wavelet $w_{m,\kappa}$. However the remarkable _Casimir
operator_ $C=Z^{2}-2(L^{\\!-}L^{\\!+}+L^{\\!+}L^{\\!-})$, which spans the
centre of the universal enveloping algebra of $\mathfrak{sl}_{2}$
[MTaylor86]*§ 8.1 [Lang85]*§ X.1, produces a second order operator which does
the job. Indeed from the identities (5.11) we get:
(5.16) $d{\rho^{C}_{\kappa}}w_{m,\kappa}=(2\kappa-\kappa^{2})w_{m,\kappa}.$
Thus we get $d{\rho^{C}_{\kappa}}w_{m,\kappa}=0$ for $\kappa=2$ or $0$. The mother
wavelet $w_{0,2}$ turns out to be the _Poisson kernel_ [Grafakos08]*Ex. 1.2.17.
The associated wavelet transform
(5.17) $\hat{f}(z)=\Im
z\int_{\mathbb{R}{}}\frac{f(x)\,dx}{\left|x-z\right|^{2}}$
consists of null solutions of the left-invariant second-order Laplacian, image
of the Casimir operator, cf. (5.14):
$\Delta:=\mathfrak{L}^{C}=v^{2}\partial^{2}_{u}+v^{2}\partial_{v}^{2}.$
Another integral formula producing solutions to this equation is delivered by the
mother wavelet $w_{m,0}$ with the value $\kappa=0$ in (5.16):
(5.18)
$\hat{f}(z)=\int_{\mathbb{R}{}}f(x)\left(\frac{x-{z}}{x-\bar{z}}\right)^{m/2}\,dx.$
Furthermore, we can introduce higher order differential operators. The
functions $w_{\mp 2m+1,1}$ with $1\leq m\leq n$ are annihilated by the $n$-th
power of the operator $d{\rho^{L^{\\!\pm}}_{\kappa}}$. By the Cor. 5.6 the
image of the wavelet transform (5.10) from a mother wavelet
$\sum_{m=1}^{n}a_{m}w_{\mp 2m+1,1}$ will consist of null-solutions of the $n$-th
power $D^{n}_{\pm}$ of the conformal Cauchy–Riemann operator (5.13). They are
a conformal flavour of _polyanalytic_ functions [Balk97a].
We can similarly look for mother wavelets which are eigenvectors for other
types of one dimensional subgroups. Our consideration of subgroup $K$ is
simplified by several facts:
* •
The parameter $\kappa$ takes only complex values.
* •
The derived representation does not have singular points on the real line.
For both subgroups $A\\!^{\prime}$ and $N^{\prime}$ this will not be true. The
further consideration will be given in [Kisil11b].
#### 5.4. The Taylor Expansion
Consider an induced wavelet transform generated by a Lie group $G$, its
representation ${\rho}$ and a mother wavelet $w$ which is an eigenvector of a
one-dimensional subgroup $\tilde{H}\subset G$. Then by Prop. 5.5 the wavelet
transform intertwines ${\rho}$ with a representation ${\rho^{\tilde{H}}}$
induced by a character of $\tilde{H}$.
If the mother wavelet is itself in the domain of the induced wavelet transform,
then the chain (3.16) of $\tilde{H}$-eigenvectors $w_{m}$ will be mapped to
a similar chain of their images $\hat{w}_{m}$. The corresponding derived
induced representation $d{\rho^{\tilde{H}}}$ produces ladder operators which
act transitively on the chain of $\hat{w}_{m}$.
Then the vector space of “formal power series”:
(5.19) $\hat{f}(z)=\sum_{m\in\mathbb{Z}{}}a_{m}\hat{w}_{m}(z)$
is a module for the Lie algebra of the group $G$.
Let us come back to the case of the group $G=SL_{2}{}(\mathbb{R}{})$ and the
subgroup $\tilde{H}=K$. Images $\hat{w}_{m,1}$ of the eigenfunctions (5.9) under the
Cauchy integral transform (5.15) are:
$\hat{w}_{m,1}(z)=(\Im
z)^{1/2}\frac{(z+\mathrm{i})^{(m-1)/2}}{(z-\mathrm{i})^{(m+1)/2}}.$
They are eigenfunctions of the derived representation on the upper half-plane
and the action of ladder operators is given by the same expressions (5.11). In
particular, the $\mathfrak{sl}_{2}$-module generated by $\hat{w}_{1,1}$ will
be one-sided since this vector is annihilated by the lowering operator. Since
the Cauchy integral produces an unitary intertwining operator between two
representations we get the following variant of Taylor series:
$\hat{f}(z)=\sum_{m=0}^{\infty}c_{m}\hat{w}_{m,1}(z),\qquad\text{ where }\quad
c_{m}=\left\langle f,w_{m,1}\right\rangle.$
For the two other types of subgroups, representations and mother wavelets this
scheme shall be suitably adapted; a detailed study will be presented
elsewhere [Kisil11b].
#### 5.5. Wavelet Transform in the Unit Disk and Other Domains
We can similarly construct analytic function theories in unit disks,
including parabolic and hyperbolic ones [Kisil05a]. This can be done simply by
an application of the _Cayley transform_ to the function theories in the upper
half-plane. Alternatively we can apply the full procedure for properly chosen
groups and subgroups. We will briefly outline such a possibility here, see
also [Kisil97c].
Elements of $SL_{2}{}(\mathbb{R}{})$ can also be represented by $2\times
2$-matrices with complex entries such that, cf. Example 4.19:
$g={\left(\\!\\!\begin{array}[]{cc}\alpha&\bar{\beta}\\\
\beta&\bar{\alpha}\end{array}\\!\\!\right)},\qquad
g^{-1}={\left(\\!\\!\begin{array}[]{cc}\bar{\alpha}&-\bar{\beta}\\\
-\beta&\alpha\end{array}\\!\\!\right)},\qquad\left|\alpha\right|^{2}-\left|\beta\right|^{2}=1.$
This realisation of $SL_{2}{}(\mathbb{R}{})$ (or rather of $\mathrm{SU}(1,1)$)
is more suitable for function theory in the unit disk. It is obtained from the
form which we used before for the upper half-plane by means of the Cayley
transform [Kisil05a]*§ 8.1.
We may identify the _unit disk_ $\mathbb{D}{}$ with the homogeneous space
$SL_{2}{}(\mathbb{R}{})/\mathbb{T}{}$ for the _unit circle_ $\mathbb{T}{}$
through the important decomposition
$SL_{2}{}(\mathbb{R}{})\sim\mathbb{D}{}\times\mathbb{T}{}$ with
$K=\mathbb{T}{}$—the compact subgroup of $SL_{2}{}(\mathbb{R}{})$:
(5.26) $\displaystyle{\left(\\!\\!\begin{array}[]{cc}\alpha&\bar{\beta}\\\ \beta&\bar{\alpha}\end{array}\\!\\!\right)}=\left|\alpha\right|{\left(\\!\\!\begin{array}[]{cc}1&\bar{\beta}\bar{\alpha}^{-1}\\\ {\beta}{\alpha}^{-1}&1\end{array}\\!\\!\right)}{\left(\\!\\!\begin{array}[]{cc}\frac{{\alpha}}{\left|\alpha\right|}&0\\\ 0&\frac{\bar{\alpha}}{\left|\alpha\right|}\end{array}\\!\\!\right)}=\frac{1}{\sqrt{1-\left|u\right|^{2}}}{\left(\\!\\!\begin{array}[]{cc}1&u\\\ \bar{u}&1\end{array}\\!\\!\right)}{\left(\\!\\!\begin{array}[]{cc}e^{ix}&0\\\ 0&e^{-ix}\end{array}\\!\\!\right)}$
where
$x=\arg\alpha,\qquad u=\bar{\beta}\bar{\alpha}^{-1},\qquad\left|u\right|<1.$
Each element $g\in SL_{2}{}(\mathbb{R}{})$ acts by the linear-fractional
transformation (the Möbius map) on $\mathbb{D}{}$ and $\mathbb{T}{}$ as
follows:
(5.32) $g:z\mapsto\frac{\alpha
z+\beta}{{\bar{\beta}}z+\bar{\alpha}},\qquad\textrm{ where }\quad
g={\left(\\!\\!\begin{array}[]{cc}\alpha&\beta\\\
\bar{\beta}&\bar{\alpha}\end{array}\\!\\!\right)}.$
In the decomposition (5.26) the first matrix on the right hand side acts by
the transformation (5.32) as a member of a transitive family of maps of the
unit disk onto itself, while the second one acts as an orthogonal rotation of
$\mathbb{T}{}$ or $\mathbb{D}{}$.
The representation induced by a complex-valued character $\chi_{k}(z)=z^{-k}$
of $\mathbb{T}{}$ according to the Section 3.2 is:
(5.33)
$\rho_{k}(g):f(z)\mapsto\frac{1}{(\alpha-{\beta}{z})^{k}}\,f\left(\frac{\bar{\alpha}z-\bar{\beta}}{\alpha-{\beta}z}\right)\qquad\textrm{
where }\quad g={\left(\\!\\!\begin{array}[]{cc}\alpha&\beta\\\
\bar{\beta}&\bar{\alpha}\end{array}\\!\\!\right)}.$
The representation ${\rho_{1}}$ is unitary on square-integrable functions and
irreducible on the _Hardy space_ on the unit circle.
We choose [Kisil98a, Kisil01a] $K$-invariant function $v_{0}(z)\equiv 1$ to be
a _vacuum vector_. Thus the associated _coherent states_
$v(g,z)=\rho_{1}(g)v_{0}(z)=(u-z)^{-1}$
are completely determined by the point on the unit disk
$u=\bar{\beta}\bar{\alpha}^{-1}$. The family of coherent states considered as
a function of both $u$ and $z$ is obviously the _Cauchy kernel_ [Kisil97c].
The _wavelet transform_ [Kisil97c, Kisil98a]
$\mathcal{W}:L_{2}{}(\mathbb{T}{})\rightarrow
H_{2}{}(\mathbb{D}{}):f(z)\mapsto\mathcal{W}f(g)=\left\langle
f,v_{g}\right\rangle$ is the _Cauchy integral_ :
(5.34) $\mathcal{W}f(u)=\frac{1}{2\pi
i}\int_{\mathbb{T}{}}f(z)\frac{1}{u-z}\,dz.$
This approach can be extended to an arbitrary connected simply connected domain.
Indeed, it is known that the Möbius maps form the whole group of biholomorphic
automorphisms of the unit disk or the upper half-plane. Thus we can state the
following corollary of the _Riemann mapping theorem_ :
###### Corollary 5.10.
The group of biholomorphic automorphisms of a connected simply connected
domain with at least two points on its boundary is isomorphic to
$SL_{2}{}(\mathbb{R}{})$.
If a domain is not simply connected, then the group of its biholomorphic
mappings can be trivial [MityushevRogosin00a, Beardon07a]. However we may look
for a rich group acting on function spaces rather than on geometric sets. Let
a connected non-simply connected domain $D$ be bounded by a finite collection
of non-intersecting contours $\Gamma_{i}$, $i=1,\ldots,n$. For each
$\Gamma_{i}$ consider the isomorphic image $G_{i}$ of the
$SL_{2}{}(\mathbb{R}{})$ group which is defined by the Corollary 5.10. Then
define the group $G=G_{1}\times G_{2}\times\ldots\times G_{n}$ and its action
on $L_{2}{}(\partial D)=L_{2}{}(\Gamma_{1})\oplus
L_{2}{}(\Gamma_{2})\oplus\ldots\oplus L_{2}{}(\Gamma_{n})$
through the Möbius action of $G_{i}$ on $L_{2}{}(\Gamma_{i})$.
###### Example 5.11.
Consider an _annulus_ defined by $r<\left|z\right|<R$. It is bounded by two
circles: $\Gamma_{1}=\\{z:\left|z\right|=r\\}$ and
$\Gamma_{2}=\\{z:\left|z\right|=R\\}$. For $\Gamma_{1}$ the
Möbius action of $SL_{2}{}(\mathbb{R}{})$ is
$\begin{pmatrix}\alpha&\bar{\beta}\\\
\beta&\bar{\alpha}\end{pmatrix}:z\mapsto\frac{\alpha z+\bar{\beta}/r}{\beta
z/r+\bar{\alpha}},\qquad\text{where}\quad\left|\alpha\right|^{2}-\left|\beta\right|^{2}=1,$
with the respective action on $\Gamma_{2}$. These actions can be
linearised in the spaces $L_{2}{}(\Gamma_{1})$ and
$L_{2}{}(\Gamma_{2})$. If we consider a subrepresentation reduced to
analytic functions on the annulus, then one copy of $SL_{2}{}(\mathbb{R}{})$
will act on the part of functions analytic outside of $\Gamma_{1}$ and
another copy on the part of functions analytic inside of $\Gamma_{2}$.
Thus all classical objects of complex analysis (the Cauchy-Riemann equation,
the Taylor series, the Bergman space, etc.) for a rather generic domain $D$
can also be obtained from suitable representations, similarly to the case of
the upper half-plane [Kisil97c, Kisil01a].
### 6\. Covariant and Contravariant Calculi
United in a trinity, the functional calculus, the spectrum and the spectral
mapping theorem play an exceptional rôle in functional analysis and cannot be
substituted by anything else. Many traditional definitions of functional
calculus are covered by the following rigid template based on the _algebra
homomorphism_ property:
###### Definition 6.1.
A _functional calculus_ for an element $a\in\mathfrak{A}$ is a continuous
linear mapping $\Phi:\mathcal{A}\rightarrow\mathfrak{A}$ such that
1. (i)
$\Phi$ is a unital _algebra homomorphism_
$\Phi(f\cdot g)=\Phi(f)\cdot\Phi(g).$
2. (ii)
There is an initialisation condition: $\Phi[v_{0}]=a$ for a fixed function
$v_{0}$, e.g. $v_{0}(z)=z$.
The most typical definition of the spectrum is seemingly independent and uses
the important notion of resolvent:
###### Definition 6.2.
A _resolvent_ of element $a\in\mathfrak{A}$ is the function
$R(\lambda)=(a-\lambda e)^{-1}$, which is the image under $\Phi$ of the Cauchy
kernel $(z-\lambda)^{-1}$.
A _spectrum_ of $a\in\mathfrak{A}$ is the set $\mathbf{sp}\,a$ of singular
points of its resolvent $R(\lambda)$.
Then the following important theorem links spectrum and functional calculus
together.
###### Theorem 6.3 (Spectral Mapping).
For a function $f$ suitable for the functional calculus:
(6.1) $f(\mathbf{sp}\,a)=\mathbf{sp}\,f(a).$
However the power of the classic spectral theory rapidly decreases if we move
beyond the study of one normal operator (e.g. for quasinilpotent ones) and is
virtually nil if we consider several non-commuting ones. Sometimes these
severe limitations are seen to be insurmountable, and alternative constructions,
e.g. model theory, cf. Example 4.19 and [Nikolskii86], were developed.
Yet the spectral theory can be revived from a fresh start. While three
components—functional calculus, spectrum, and spectral mapping theorem—are
highly interdependent in various ways we will nevertheless arrange them as
follows:
1. (i)
Functional calculus is an _original_ notion defined in some independent terms;
2. (ii)
Spectrum (or more specifically _contravariant spectrum_) (or spectral
decomposition) is derived from previously defined functional calculus as its
_support_ (in some appropriate sense);
3. (iii)
The spectral mapping theorem should then drop out naturally in the form (6.1)
or some variation of it.
Thus the entire scheme depends on the notion of the functional calculus and
our ability to escape the limitations of Definition 6.1. The first definition
of a functional calculus not linked to the algebra homomorphism property known
to the present author is the Weyl functional calculus, defined by an integral
formula [Anderson69]. Its intertwining property with affine transformations of
Euclidean space was then proved as a theorem. However it seems to have been
the only “non-homomorphism” calculus for decades.
A different approach to a whole range of calculi was given in [Kisil95i] and
developed in [Kisil98a, Kisil02a, Kisil04d, Kisil10c] in terms of
_intertwining operators_ for group representations. It was initially targeted
at several non-commuting operators, because no non-trivial homomorphism from a
commutative algebra of functions is possible in this case. However it later
emerged that the new definition is a useful replacement for the classical one
across the whole range of problems.
In the following Subsections we will support the last claim by considering a
simple well-known problem: the characterisation of an $n\times n$ matrix up to
similarity. Even that “freshman” question can be sorted out by the classical
spectral theory only for the small set of diagonalisable matrices. Our
solution in terms of the new spectrum will be complete and thus unavoidably
coincides with the one given by the Jordan normal form of matrices. Other more
difficult questions are the subject of ongoing research.
#### 6.1. Intertwining Group Actions on Functions and Operators
Any functional calculus uses properties of _functions_ to model properties of
_operators_. Thus changing our viewpoint on functions, as was done in Section
5, we could get another approach to operators. The two main possibilities are
encoded in Definitions 4.20 and 4.22: we can assign a certain function to a
given operator or vice versa. Here we consider the second possibility and
treat the first in Subsection 6.4.
The representation ${\rho_{1}}$ (5.33) is unitary and irreducible when it acts on the
Hardy space $H_{2}{}$. Consequently we have one more reason to abandon the
template Definition 6.1: $H_{2}{}$ is _not_ an algebra. Instead we replace the
_homomorphism property_ by a _symmetric covariance_ :
###### Definition 6.4 ([Kisil95i]).
A _contravariant analytic calculus_ for an element $a\in\mathfrak{A}$ and an
$\mathfrak{A}$-module $M$ is a _continuous linear_ mapping
$\Phi:A{}(\mathbb{D}{})\rightarrow A{}(\mathbb{D}{},M)$ such that
1. (i)
$\Phi$ is an _intertwining operator_
$\Phi\rho_{1}=\rho_{a}\Phi$
between two representations of the group $SL_{2}{}(\mathbb{R}{})$: $\rho_{1}$
(5.33) and $\rho_{a}$ defined below in (6.4).
2. (ii)
There is an initialisation condition: $\Phi[v_{0}]=m$ for $v_{0}(z)\equiv 1$
and $m\in M$, where $M$ is a left $\mathfrak{A}$-module.
Note that our functional calculus, released from the homomorphism condition,
can take values in any left $\mathfrak{A}$-module $M$, which could be
$\mathfrak{A}$ itself if suitable. This adds much flexibility to our
construction.
The earliest functional calculus which is _not_ an algebraic homomorphism
was the Weyl functional calculus, defined just by an integral formula
as an operator-valued distribution [Anderson69]. In that paper the (joint)
spectrum was defined as the support of the Weyl calculus, i.e. as the set of
points where this operator-valued distribution does not vanish. We also define
the spectrum as the support of a functional calculus, but due to our
Definition 6.4 it will mean the set of non-vanishing intertwining operators
with primary subrepresentations.
###### Definition 6.5.
A corresponding _spectrum_ of $a\in\mathfrak{A}$ is the _support_ of the
functional calculus $\Phi$, i.e. the collection of intertwining operators of
$\rho_{a}$ with _primary representations_ [Kirillov76]*§ 8.3.
More variations of contravariant functional calculi are obtained from other
groups and their representations [Kisil95i, Kisil98a, Kisil02a, Kisil04d,
Kisil10c].
A simple but important observation is that the Möbius transformations (1.1)
can easily be extended to any Banach algebra. Let $\mathfrak{A}$ be a Banach
algebra with unit $e$ and let an element $a\in\mathfrak{A}$ with
$\left\|a\right\|<1$ be fixed; then
(6.2) $g:a\mapsto g\cdot a=(\bar{\alpha}a-\bar{\beta}e)(\alpha e-\beta
a)^{-1},\qquad g\in SL_{2}{}(\mathbb{R}{})$
is a well defined $SL_{2}{}(\mathbb{R}{})$ action on a subset
$\mathbb{A}{}=\\{g\cdot a\,\mid\,g\in
SL_{2}{}(\mathbb{R}{})\\}\subset\mathfrak{A}$, i.e. $\mathbb{A}{}$ is a
$SL_{2}{}(\mathbb{R}{})$-homogeneous space. Let us define the _resolvent_
function $R(g,a):\mathbb{A}{}\rightarrow\mathfrak{A}$:
$R(g,a)=(\alpha e-\beta a)^{-1};$
then
(6.3)
$R(g_{1},a)\,R(g_{2},g_{1}^{-1}\cdot a)=R(g_{1}g_{2},a).$
The last identity is well known in representation theory [Kirillov76]*§
13.2(10) and is a key ingredient of _induced representations_. Thus we can
again linearise (6.2), cf. (5.33), in the space of continuous functions
$C{}(\mathbb{A}{},M)$ with values in a left $\mathfrak{A}$-module $M$, e.g.
$M=\mathfrak{A}$:
(6.4) $\rho_{a}(g_{1}):f(g^{-1}\cdot a)\mapsto R(g_{1}^{-1}g^{-1},a)\,f(g_{1}^{-1}g^{-1}\cdot a)=(\alpha^{\prime}e-\beta^{\prime}a)^{-1}\,f\left(\frac{\bar{\alpha}^{\prime}\cdot a-\bar{\beta}^{\prime}e}{\alpha^{\prime}e-\beta^{\prime}a}\right).$
For any $m\in M$ we can define a $K$-invariant _vacuum vector_ as
$v_{m}(g^{-1}\cdot a)=m\otimes v_{0}(g^{-1}\cdot a)\in C{}(\mathbb{A}{},M)$.
It generates the family of _coherent states_ associated with $v_{m}$:
$v_{m}(u,a)=(ue-a)^{-1}m$, where $u\in\mathbb{D}{}$.
The _wavelet transform_ is defined by the same common formula based on coherent
states, cf. (5.34):
(6.5) $\mathcal{W}_{m}f(g)=\left\langle f,\rho_{a}(g)v_{m}\right\rangle,$
is a version of the Cauchy integral, which maps $L_{2}{}(\mathbb{A}{})$ to
$C{}(SL_{2}{}(\mathbb{R}{}),M)$. It is closely related (but not identical!) to
the Riesz–Dunford functional calculus: the traditional functional calculus is
given by the case:
$\Phi:f\mapsto\mathcal{W}_{m}f(0)\qquad\textrm{ for }M=\mathfrak{A}\textrm{
and }m=e.$
Both conditions required by Definition 6.4—the intertwining property and the
initial value—easily follow from our construction. Finally, we wish to
provide an example of an application of Corollary 5.8.
###### Example 6.6.
Let $a$ be an operator and $\phi$ be a function which annihilates it, i.e.
$\phi(a)=0$. For example, if $a$ is a matrix, $\phi$ can be its minimal
polynomial. From the integral representation of the contravariant calculus on
$G=SL_{2}{}(\mathbb{R}{})$ we can rewrite the annihilation property like this:
$\int_{G}\phi(g)R(g,a)\,dg=0.$
Then the vector-valued function $[\mathcal{W}_{m}f](g)$ defined by (6.5)
satisfies the following condition:
$\int_{G}\phi(g^{\prime})\,[\mathcal{W}_{m}f](gg^{\prime})\,dg^{\prime}=0$
due to the Corollary 5.8.
#### 6.2. Jet Bundles and Prolongations of $\rho_{1}$
The spectrum was defined in Definition 6.5 as the _support_ of our functional
calculus. To elaborate on its meaning we need the notion of a _prolongation_ of
representations introduced by S. Lie, see [Olver93, Olver95] for a detailed
exposition.
###### Definition 6.7.
[Olver95]*Chap. 4 Two holomorphic functions have $n$th _order contact_ at a
point if their values and their first $n$ derivatives agree at that point; in
other words, their Taylor expansions coincide in the first $n+1$ terms.
A point $(z,u^{(n)})=(z,u,u_{1},\ldots,u_{n})$ of the _jet space_
$\mathbb{J}^{n}{}\sim\mathbb{D}{}\times\mathbb{C}^{n+1}{}$ is the equivalence
class of holomorphic functions having $n$th order contact at the point $z$ with
the polynomial:
(6.6) $p_{n}(w)=u_{n}\frac{(w-z)^{n}}{n!}+\cdots+u_{1}\frac{(w-z)}{1!}+u.$
For a fixed $n$ each holomorphic function
$f:\mathbb{D}{}\rightarrow\mathbb{C}{}$ has $n$th _prolongation_ (or _$n$
-jet_) $\mathrm{j}_{n}f:\mathbb{D}{}\rightarrow\mathbb{C}^{n+1}{}$:
(6.7) $\mathrm{j}_{n}f(z)=(f(z),f^{\prime}(z),\ldots,f^{(n)}(z)).$
The graph $\Gamma^{(n)}_{f}$ of $\mathrm{j}_{n}f$ is a submanifold of
$\mathbb{J}^{n}{}$ which is a section of the _jet bundle_ over $\mathbb{D}{}$
with fibre $\mathbb{C}^{n+1}{}$. We also introduce the notation $J_{n}$ for
the map $J_{n}:f\mapsto\Gamma^{(n)}_{f}$ of a holomorphic $f$ to the
graph $\Gamma^{(n)}_{f}$ of its $n$-jet $\mathrm{j}_{n}f(z)$ (6.7).
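The equivalence between the $n$-jet (6.7) and the contact polynomial (6.6) is just the Taylor expansion; a short symbolic check (our own sketch, for $f=\exp$ and $n=3$):

```python
# The n-jet (6.7) packages the same data as the contact polynomial (6.6) (symbolic check).
import sympy as sp

w, z0 = sp.symbols('w z0')
f, n = sp.exp, 3
jet = [sp.diff(f(z0), z0, k) for k in range(n + 1)]                       # j_n f(z0)
p = sum(jet[k] * (w - z0)**k / sp.factorial(k) for k in range(n + 1))     # polynomial (6.6)
print(sp.simplify(p - f(w).series(w, z0, n + 1).removeO()))               # 0
```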
One can prolong any map of functions $\psi:f(z)\mapsto[\psi f](z)$ to a map
$\psi^{(n)}$ of $n$-jets by the formula
(6.8) $\psi^{(n)}(J_{n}f)=J_{n}(\psi f).$
For example such a prolongation $\rho_{1}^{(n)}$ of the representation
$\rho_{1}$ of the group $SL_{2}{}(\mathbb{R}{})$ in $H_{2}{}(\mathbb{D}{})$
(as any other representation of a Lie group [Olver95]) will be again a
representation of $SL_{2}{}(\mathbb{R}{})$. Equivalently we can say that
$J_{n}$ _intertwines_ $\rho_{1}$ and $\rho^{(n)}_{1}$:
$J_{n}\rho_{1}(g)=\rho_{1}^{(n)}(g)J_{n}\quad\textrm{ for all }g\in
SL_{2}{}(\mathbb{R}{}).$
Of course, the representation $\rho^{(n)}_{1}$ is not irreducible: any jet
subspace $\mathbb{J}^{k}{}$, $0\leq k\leq n$ is $\rho^{(n)}_{1}$-invariant
subspace of $\mathbb{J}^{n}{}$. However the representations $\rho^{(n)}_{1}$
are _primary_ [Kirillov76]*§ 8.3 in the sense that they are not sums of two
subrepresentations.
The following statement explains why jet spaces appeared in our study of
functional calculus.
###### Proposition 6.8.
Let the matrix $a$ be a Jordan block of length $k$ with the eigenvalue
$\lambda=0$, and let $m$ be its root vector of order $k$, i.e. $a^{k-1}m\neq
a^{k}m=0$. Then the restriction of $\rho_{a}$ to the subspace generated by
$v_{m}$ is equivalent to the representation $\rho_{1}^{(k)}$.
#### 6.3. Spectrum and Spectral Mapping Theorem
Now we are prepared to describe the spectrum of a matrix. Since the functional
calculus is an intertwining operator, its support is a decomposition into
intertwining operators with primary representations (in general we cannot
expect these primary subrepresentations to be irreducible).
Recall the group of inner automorphisms of $SL_{2}{}(\mathbb{R}{})$, which acts
transitively on $\mathbb{D}{}$, can send any $\lambda\in\mathbb{D}{}$ to $0$
and is actually parametrised by such a $\lambda$. This group extends
Proposition 6.8 to the complete characterisation of $\rho_{a}$ for matrices.
###### Proposition 6.9.
Representation $\rho_{a}$ is equivalent to a direct sum of the prolongations
$\rho_{1}^{(k)}$ of $\rho_{1}$ in the $k$th jet space $\mathbb{J}^{k}{}$
intertwined with inner automorphisms. Consequently the _spectrum_ of $a$
(defined via the functional calculus $\Phi=\mathcal{W}_{m}$) is labelled
exactly by $n$ pairs of numbers $(\lambda_{i},k_{i})$,
$\lambda_{i}\in\mathbb{D}{}$, $k_{i}\in\mathbb{Z}_{+}{}$, $1\leq i\leq n$,
some of which may coincide.
Obviously this spectral theory is a fancy restatement of the _Jordan normal
form_ of matrices.
Figure 11. Classical spectrum of the matrix from the Ex. 6.10 is shown at (a). Contravariant spectrum of the same matrix in the jet space is drawn at (b). The image of the contravariant spectrum under the map from Ex. 6.12 is presented at (c).
###### Example 6.10.
Let $J_{k}(\lambda)$ denote the Jordan block of length $k$ for the
eigenvalue $\lambda$. In Fig. 11 there are two pictures of the spectrum for
the matrix
the matrix
$a=J_{3}\left(\lambda_{1}\right)\oplus J_{4}\left(\lambda_{2}\right)\oplus
J_{1}\left(\lambda_{3}\right)\oplus J_{2}\left(\lambda_{4}\right),$
where
$\lambda_{1}=\frac{3}{4}e^{i\pi/4},\quad\lambda_{2}=\frac{2}{3}e^{i5\pi/6},\quad\lambda_{3}=\frac{2}{5}e^{-i3\pi/4},\quad\lambda_{4}=\frac{3}{5}e^{-i\pi/3}.$
Part (a) represents the conventional two-dimensional image of the spectrum,
i.e. the eigenvalues of $a$, and (b) describes the spectrum $\mathbf{sp}\,{}a$
arising from the wavelet construction. The first image does not allow one to
distinguish $a$ from many other essentially different matrices, e.g. the
diagonal matrix
$\mathop{\operator@font
diag}\nolimits\left(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\right),$
which even has a different dimensionality. At the same time Fig. 11(b)
completely characterises $a$ up to similarity. Note that each point of
$\mathbf{sp}\,a$ in Fig. 11(b) corresponds to a particular root vector, which
spans a primary subrepresentation.
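A short numerical sketch (Python with numpy/scipy; illustration only, not part of the original argument) assembles the matrix $a$ of this example from its Jordan blocks and contrasts the bare eigenvalue list with the pairs $(\lambda_{i},k_{i})$ that constitute $\mathbf{sp}\,a$:

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, k):
    """k x k Jordan block J_k(lam)."""
    return lam * np.eye(k, dtype=complex) + np.eye(k, k, 1)

# the pairs (lambda_i, k_i) of Example 6.10
pairs = [(0.75 * np.exp(1j * np.pi / 4), 3),
         (2 / 3 * np.exp(1j * 5 * np.pi / 6), 4),
         (0.4 * np.exp(-1j * 3 * np.pi / 4), 1),
         (0.6 * np.exp(-1j * np.pi / 3), 2)]

a = block_diag(*[jordan_block(lam, k) for lam, k in pairs])

# classical spectrum: only the four eigenvalues, the Jordan structure is invisible
print(np.round(np.linalg.eigvals(a), 4))
# contravariant spectrum: the pairs (lambda_i, k_i) determine a up to similarity
print(pairs)
```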
As was mentioned at the beginning of this section, a reasonable spectrum should
be linked to the corresponding functional calculus by an appropriate spectral
mapping theorem. The new version of the spectrum is based on the prolongation of
$\rho_{1}$ into jet spaces (see Section 6.2). Naturally, a correct version of
the spectral mapping theorem should also operate in jet spaces.
Let $\phi:\mathbb{D}{}\rightarrow\mathbb{D}{}$ be a holomorphic map and let us
define its action on functions by $[\phi_{*}f](z)=f(\phi(z))$. According to the
general formula (6.8) we can define the prolongation $\phi_{*}^{(n)}$ onto the
jet space $\mathbb{J}^{n}{}$. Its associated action
$\rho_{1}^{k}\phi_{*}^{(n)}=\phi_{*}^{(n)}\rho_{1}^{n}$ on the pairs
$(\lambda,k)$ is given by the formula:
(6.9)
$\phi_{*}^{(n)}(\lambda,k)=\left(\phi(\lambda),\left[\frac{k}{\deg_{\lambda}\phi}\right]\right),$
where $\deg_{\lambda}\phi$ denotes the degree of zero of the function
$\phi(z)-\phi(\lambda)$ at the point $z=\lambda$ and $[x]$ denotes the integer
part of $x$.
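A minimal sketch of the action (6.9) (Python with sympy; the helper names and the sample map are assumptions for illustration) computes $\deg_{\lambda}\phi$ and the image of a pair $(\lambda,k)$:

```python
import sympy as sp

def deg_zero(phi, lam, z):
    """Order of the zero of phi(z) - phi(lam) at z = lam."""
    g = sp.expand(phi - phi.subs(z, lam))
    k = 0
    while sp.simplify(sp.diff(g, z, k).subs(z, lam)) == 0:
        k += 1
    return k

def prolonged_map(phi, lam, k, z):
    """Action (6.9) of phi on a spectral pair (lambda, k)."""
    d = deg_zero(phi, lam, z)
    return (phi.subs(z, lam), k // d)

z = sp.symbols('z')
phi = z**2                            # sample map with a zero of order 2 at 0
print(prolonged_map(phi, 0, 3, z))    # (0, 1): the 3-jet at 0 is sent to a 1-jet
```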
###### Theorem 6.11 (Spectral mapping).
Let $\phi$ be a holomorphic mapping $\phi:\mathbb{D}{}\rightarrow\mathbb{D}{}$
and its prolonged action $\phi_{*}^{(n)}$ defined by (6.9), then
$\mathbf{sp}\,\phi(a)=\phi_{*}^{(n)}\mathbf{sp}\,a.$
The explicit expression of (6.9) for $\phi_{*}^{(n)}$, which involves
derivatives of $\phi$ up to the $n$th order, is known, see for example
[HornJohnson94]*Thm. 6.2.25, but was not previously recognised as a form of
spectral mapping.
###### Example 6.12.
Let us continue with Example 6.10. Let $\phi$ map all four eigenvalues
$\lambda_{1}$, …, $\lambda_{4}$ of the matrix $a$ into themselves. Then Fig.
11(a) will represent the classical spectrum of $\phi(a)$ as well as of $a$.
However, Fig. 11(c) shows the mapping of the new spectrum in the case when
$\phi$ has zeros of the following orders at these points: order $1$ at
$\lambda_{1}$, exactly order $3$ at $\lambda_{2}$, order at least $2$ at
$\lambda_{3}$, and finally any order at $\lambda_{4}$.
#### 6.4. Functional Model and Spectral Distance
Let $a$ be a matrix and $\mu_{a}(z)$ be its _minimal polynomial_ :
$\mu_{a}(z)=(z-\lambda_{1})^{m_{1}}\cdot\ldots\cdot(z-\lambda_{n})^{m_{n}}.$
If all eigenvalues $\lambda_{i}$ of $a$ (i.e. all roots of $\mu_{a}(z)$) belong
to the unit disk, we can consider the respective _Blaschke product_
$B_{a}(z)=\prod_{i=1}^{n}\left(\frac{z-\lambda_{i}}{1-\overline{\lambda_{i}}z}\right)^{m_{i}},$
such that its numerator coincides with the minimal polynomial $\mu_{a}(z)$.
Moreover, for a unimodular $z$ we have
$B_{a}(z)=\mu_{a}(z)\overline{\mu^{-1}_{a}(z)}z^{-m}$, where
$m=m_{1}+\ldots+m_{n}$. We also have the following covariance property:
###### Proposition 6.13.
The above correspondence $a\mapsto B_{a}$ intertwines the
$SL_{2}{}(\mathbb{R}{})$ action (6.2) on the matrices with the action (5.33)
with $k=0$ on functions.
The result follows from the observation that every elementary product
$\frac{z-\lambda_{i}}{1-\overline{\lambda_{i}}z}$ is the Moebius
transformation of $z$ with the matrix $\begin{pmatrix}1&-\lambda_{i}\\\
-\overline{\lambda_{i}}&1\end{pmatrix}$. Thus the correspondence $a\mapsto
B_{a}(z)$ is a covariant (symbolic) calculus in the sense of the Defn. 4.20.
See also the Example 4.19.
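The following small sketch (Python with numpy; the sample eigenvalues and multiplicities are assumptions for illustration) builds a Blaschke product from prescribed data and confirms numerically that it is unimodular on the unit circle, the property used below for the spectral distance:

```python
import numpy as np

def blaschke(eigs, mults):
    """Blaschke product B_a(z) built from eigenvalues and their multiplicities."""
    def B(z):
        out = np.ones_like(z, dtype=complex)
        for lam, m in zip(eigs, mults):
            out *= ((z - lam) / (1 - np.conj(lam) * z)) ** m
        return out
    return B

# sample data (illustration only): eigenvalues inside the unit disk
B = blaschke([0.3 + 0.2j, -0.5j], [2, 1])
z = np.exp(1j * np.linspace(0, 2 * np.pi, 7, endpoint=False))
print(np.abs(B(z)))   # all values are 1: B_a is unimodular on the unit circle
```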
The Jordan normal form of a matrix provides a description which is equivalent
to its contravariant spectrum. From various viewpoints, e.g. numerical
approximations, it is worth considering its stability under perturbations. It
is easy to see that an arbitrarily small disturbance breaks the Jordan
structure of a matrix. However, the result of a random small perturbation will
not be random; its nature is described by the following remarkable theorem:
###### Theorem 6.14 (Lidskii [Lidskii66a], see also [MoroBurkeOverton97a]).
Let $J_{n}$ be a Jordan block of length $n>1$ with zero eigenvalue and $K$
be an arbitrary matrix. Then the eigenvalues of the perturbed matrix
$J_{n}+\varepsilon^{n}K$ admit the expansion
$\lambda_{j}=\varepsilon\xi^{1/n}+o(\varepsilon),\qquad j=1,\ldots,n,$
where $\xi^{1/n}$ represents all $n$th complex roots of a certain
$\xi\in\mathbb{C}{}$.
Figure 12. Perturbation of the Jordan block’s spectrum: (a) The spectrum of the perturbation $J_{100}+\varepsilon^{100}K$ of the Jordan block $J_{100}$ by a random matrix $K$. (b) The spectrum of the random matrix $K$.
The left picture in Fig. 12 presents a perturbation of a Jordan block
$J_{100}$ by a random matrix. The perturbed eigenvalues are close to the
vertices of a regular polygon with $100$ vertices. These regular arrangements
occur despite the fact that the eigenvalues of the matrix $K$ are dispersed
through the unit disk (the right picture in Fig. 12). In a sense, it is rather
the Jordan block that regularises the eigenvalues of $K$ than $K$ that perturbs
the eigenvalues of the Jordan block.
Although the Jordan structure itself is extremely fragile, it can still be
easily guessed from the perturbed eigenvalues. Thus there exists a certain
characterisation of matrices which is stable under small perturbations. We
will describe a sense in which the covariant spectrum of the matrix
$J_{n}+\varepsilon^{n}K$ is stable for small $\varepsilon$. For this we
introduce the covariant version of spectral distances motivated by the
functional model. Our definition is different from other types known in the
literature [Tyrtyshnikov97a]*Ch. 5.
###### Definition 6.15.
Let $a$ and $b$ be two matrices with all their eigenvalues sitting inside of
the unit disk and $B_{a}(z)$ and $B_{b}(z)$ be respective Blaschke products as
defined above. The _(covariant) spectral distance_ $d(a,b)$ between $a$ and
$b$ is equal to the distance $\left\|B_{a}-B_{b}\right\|_{2}$ between
$B_{a}(z)$ and $B_{b}(z)$ in the Hardy space on the unit circle.
Since the spectral distance is defined through the distance in $H_{2}{}$, all
standard axioms of a distance are automatically satisfied. For Blaschke
products we have $\left|B_{a}(z)\right|=1$ if $\left|z\right|=1$, thus
$\left\|B_{a}\right\|_{p}=1$ in any $L_{p}{}$ on the unit circle. Therefore an
alternative expression for the spectral distance is:
$d(a,b)=2(1-\left\langle B_{a},B_{b}\right\rangle).$
In particular, we always have $0\leq d(a,b)\leq 2$. We get an obvious
consequence of Prop. 6.13, which justifies the name of the covariant spectral
distance:
###### Corollary 6.16.
For any $g\in SL_{2}{}(\mathbb{R}{})$ we have $d(a,b)=d(g\cdot a,g\cdot b)$,
where $\cdot$ denotes the Möbius action (6.2).
An important property of the covariant spectral distance is its stability
under small perturbations.
###### Theorem 6.17.
For $n=2$ let $\lambda_{1}(\varepsilon)$ and $\lambda_{2}(\varepsilon)$ be
eigenvalues of the matrix $J_{2}+\varepsilon^{2}\cdot K$ for some matrix $K$.
Then
(6.10)
$\left|\lambda_{1}(\varepsilon)\right|+\left|\lambda_{2}(\varepsilon)\right|=O(\varepsilon),\quad\text{
however
}\quad\left|\lambda_{1}(\varepsilon)+\lambda_{2}(\varepsilon)\right|=O(\varepsilon^{2}).$
The spectral distance from the $1$-jet at $0$ to two $0$-jets at points
$\lambda_{1}$ and $\lambda_{2}$ bounded only by the first condition in (6.10)
is $O(\varepsilon^{2})$. However the spectral distance between $J_{2}$ and
$J_{2}+\varepsilon^{2}\cdot K$ is $O(\varepsilon^{4})$.
In other words, a matrix with eigenvalues satisfying the Lidskii condition
from Thm. 6.14 is much closer to the Jordan block $J_{2}$ than a generic
one with eigenvalues of the same order. Thus the covariant spectral distance
is more stable under perturbations than the magnitude of eigenvalues. For $n=2$
a proof can be obtained through a direct calculation. We also conjecture that a
similar statement is true for any $n\geq 2$.
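The stability can also be observed numerically. A minimal sketch (Python with numpy; the characteristic polynomial is used in place of the minimal polynomial, which agrees for generic matrices, and the sample sizes are assumptions) estimates the covariant spectral distance on the unit circle for the perturbations $J_{2}+\varepsilon^{2}K$:

```python
import numpy as np

def blaschke_from_matrix(a):
    """Blaschke product whose numerator is the characteristic polynomial of a
    (for generic matrices this coincides with the minimal polynomial)."""
    eigs = np.linalg.eigvals(a)
    return lambda z: np.prod([(z - lam) / (1 - np.conj(lam) * z) for lam in eigs], axis=0)

def spectral_distance(a, b, n=4096):
    """Covariant spectral distance ||B_a - B_b||_2 on the unit circle (Defn. 6.15)."""
    z = np.exp(2j * np.pi * np.arange(n) / n)
    diff = blaschke_from_matrix(a)(z) - blaschke_from_matrix(b)(z)
    return np.sqrt(np.mean(np.abs(diff) ** 2))

rng = np.random.default_rng(0)
J2 = np.array([[0.0, 1.0], [0.0, 0.0]])
K = rng.standard_normal((2, 2))
for eps in (0.1, 0.05, 0.025):
    # the perturbed eigenvalues are of size eps, yet the spectral distance
    # to J2 decays markedly faster, illustrating the stability discussed above
    print(eps, spectral_distance(J2, J2 + eps**2 * K))
```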
#### 6.5. Covariant Pencils of Operators
Let $H$ be a real Hilbert space, possibly of finite dimensionality. For
bounded linear operators $A$ and $B$ consider the _generalised eigenvalue
problem_ , that is finding a scalar $\lambda$ and a vector $x\in H$ such that:
(6.11) $Ax=\lambda Bx\qquad\text{or equivalently}\qquad(A-\lambda B)x=0.$
The standard eigenvalue problem corresponds to the case $B=I$, moreover for an
invertible $B$ the generalised problem can be reduced to the standard one for
the operator $B^{-1}A$. Thus it is sensible to introduce the equivalence
relation on the pairs of operators:
(6.12) $(A,B)\sim(DA,DB)\quad\text{for any invertible operator }D.$
We may treat the pair $(A,B)$ as a column vector $\begin{pmatrix}A\\\
B\end{pmatrix}$. Then there is an action of the $SL_{2}{}(\mathbb{R}{})$ group
on the pairs:
(6.13) $g\cdot\begin{pmatrix}A\\\ B\end{pmatrix}=\begin{pmatrix}aA+bB\\\
cA+dB\end{pmatrix},\qquad\text{where }g=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\in SL_{2}{}(\mathbb{R}{}).$
If we consider this $SL_{2}{}(\mathbb{R}{})$-action subject to the equivalence
relation (6.12) then we arrive at a version of the linear-fractional
transformation of the operator defined in (6.2). The
$SL_{2}{}(\mathbb{R}{})$-action (6.13) is connected to the problem (6.11)
through the following intertwining relation:
###### Proposition 6.18.
Let $\lambda$ and $x\in H$ solve the generalised eigenvalue problem (6.11) for
the pair $(A,B)$. Then the pair $(C,D)=g\cdot(A,B)$, $g\in
SL_{2}{}(\mathbb{R}{})$ has a solution $\mu$ and $x$, where
$\mu=g\cdot\lambda=\frac{a\lambda+b}{c\lambda+d},\qquad\text{for
}g=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in SL_{2}{}(\mathbb{R}{}),$
is defined by the Möbius transformation (1.1).
In other words the correspondence
$(A,B)\mapsto\text{all generalised eigenvalues}$
is another realisation of a covariant calculus in the sense of Defn. 4.20. The
collection of all pairs $g\cdot(A,B)$, $g\in SL_{2}{}(\mathbb{R}{})$, is an
example of a _covariant pencil_ of operators. This set is an
$SL_{2}{}(\mathbb{R}{})$-homogeneous space, thus it falls within the
classification of such homogeneous spaces provided in Subsection 2.1.
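Proposition 6.18 can be checked numerically. A minimal sketch (Python with scipy; the random sample matrices and the specific element $g$ are assumptions for illustration) compares the generalised eigenvalues of $g\cdot(A,B)$ with the Möbius images of those of $(A,B)$:

```python
import numpy as np
from scipy.linalg import eigvals

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
a, b, c, d = 2.0, 1.0, 1.0, 1.0          # an SL(2,R) element: ad - bc = 1
C, D = a * A + b * B, c * A + d * B      # the action (6.13) on the pair (A, B)

lam = eigvals(A, B)                      # generalised eigenvalues of (A, B)
mu = eigvals(C, D)                       # generalised eigenvalues of (C, D)

# Prop. 6.18: mu coincides with the Moebius image (a*lam + b)/(c*lam + d)
print(np.sort_complex((a * lam + b) / (c * lam + d)))
print(np.sort_complex(mu))
```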
###### Example 6.19.
It is easy to demonstrate that all existing homogeneous spaces can be realised
by matrix pairs.
1. (i)
Take the pair $(O,I)$ where $O$ and $I$ are the zero and identity $n\times n$
matrices respectively. Then any transformation of this pair by a lower-
triangular matrix from $SL_{2}{}(\mathbb{R}{})$ is equivalent to $(O,I)$. The
respective homogeneous space is isomorphic to the real line with the Möbius
transformations (1.1).
2. (ii)
Consider $H=\mathbb{R}^{2}{}$. Using the notations $\iota$ from Subsection 1.1
we define three realisations (elliptic, parabolic and hyperbolic) of an
operator $A_{\iota}$:
(6.14) $A_{\mathrm{i}}=\begin{pmatrix}0&1\\\ -1&0\end{pmatrix},\qquad
A_{\varepsilon}=\begin{pmatrix}0&1\\\ 0&0\end{pmatrix},\qquad
A_{\mathrm{j}}=\begin{pmatrix}0&1\\\ 1&0\end{pmatrix}.$
Then for an arbitrary element $h$ of the subgroup $K$, $N$ or $A$ the
respective (in the sense of the Principle 3.5) pair $h\cdot(A_{\iota},I)$ is
equivalent to $(A_{\iota},I)$ itself. Thus those three homogeneous spaces are
isomorphic to the elliptic, parabolic and hyperbolic half-planes under
respective actions of $SL_{2}{}(\mathbb{R}{})$. Note, that
$A_{\iota}^{2}=\iota^{2}I$, that is $A_{\iota}$ is a model for hypercomplex
units.
3. (iii)
Let $A$ be a direct sum of any two different matrices out of the three
$A_{\iota}$ from (6.14); then the fixing group of the equivalence class of the
pair $(A,I)$ is the identity of $SL_{2}{}(\mathbb{R}{})$. Thus the
corresponding homogeneous space coincides with the group itself.
Having homogeneous spaces generated by pairs of operators we can define
respective functions on those spaces. Special attention is due to the
following paraphrase of the resolvent:
$R_{(A,B)}(g)=(cA+dB)^{-1}\qquad\text{where}\quad g^{-1}=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\in SL_{2}(\mathbb{R}).$
Obviously $R_{(A,B)}(g)$ contains the essential information about the pair
$(A,B)$. Probably the function $R_{(A,B)}(g)$ contains too much information at
once, so we may restrict it to get a more detailed view. For vectors $u$,
$v\in H$ we also consider vector- and scalar-valued functions related to the
generalised resolvent:
$R^{u}_{(A,B)}(g)=(cA+dB)^{-1}u,\qquad\text{ and }\qquad
R^{(u,v)}_{(A,B)}(g)=\left\langle(cA+dB)^{-1}u,v\right\rangle,$
where $(cA+dB)^{-1}u$ is understood as a solution $w$ of the equation
$u=(cA+dB)w$ if it exists and is unique; this does not require the full
invertibility of $cA+dB$.
It is easy to see that the map $(A,B)\mapsto R^{(u,v)}_{(A,B)}(g)$ is a
covariant calculus as well. It is worth noticing that the function $R_{(A,B)}$
can again fall into three EPH cases.
###### Example 6.20.
For the three matrices $A_{\iota}$ considered in the previous Example we
denote by $R_{\iota}(g)$ the resolvent-type function of the pair
$(A_{\iota},I)$. Then:
$R_{\mathrm{i}}(g)=\frac{1}{c^{2}+d^{2}}\begin{pmatrix}d&-c\\\
c&d\end{pmatrix},\quad
R_{\varepsilon}(g)=\frac{1}{d^{2}}\begin{pmatrix}d&-c\\\
0&d\end{pmatrix},\quad
R_{\mathrm{j}}(g)=\frac{1}{d^{2}-c^{2}}\begin{pmatrix}d&-c\\\
-c&d\end{pmatrix}.$
Put $u=(1,0)\in H$; then $R_{\iota}(g)u$ is a two-dimensional real
vector-valued function with components equal to the real and imaginary parts of
the hypercomplex Cauchy kernel considered in [Kisil11b].
Consider the space $L(G)$ of functions spanned by all left translations of
$R_{(A,B)}(g)$. As usual, a closure in a suitable metric, say $L_{p}{}$, can
be taken. The left action $g:f(h)\mapsto f(g^{-1}h)$ of $SL_{2}(\mathbb{R})$
on this space is a linear representation of this group. Afterwards the
representation can be decomposed into a sum of primary subrepresentations.
###### Example 6.21.
For the matrices $A_{\iota}$ the irreducible components are isomorphic to
analytic spaces of hypercomplex functions under the fractional-linear
transformations built in Subsection 3.2.
An important observation is that a decomposition into irreducible or primary
components can reveal an EPH structure even in cases where it is hidden at the
level of the homogeneous space.
###### Example 6.22.
Take the operator $A=A_{\mathrm{i}}\oplus A_{\mathrm{j}}$ from the Example
6.19(iii). The corresponding homogeneous space coincides with the entire
$SL_{2}{}(\mathbb{R}{})$. However if we take two vectors
$u_{\mathrm{i}}=(1,0)\oplus(0,0)$ and $u_{\mathrm{j}}=(0,0)\oplus(1,0)$ then
the respective linear spaces generated by functions $R_{A}(g)u_{\mathrm{i}}$
and $R_{A}(g)u_{\mathrm{j}}$ will be of elliptic and hyperbolic types
respectively.
Let us briefly consider a _quadratic eigenvalue_ problem: for given operators
(matrices) $A_{0}$, $A_{1}$ and $A_{2}$ from $B(H)$ find a scalar $\lambda$
and a vector $x\in H$ such that
(6.15) $Q(\lambda)x=0,\qquad\text{where}\quad
Q(\lambda)=\lambda^{2}A_{2}+\lambda A_{1}+A_{0}.$
There is a connection with our study of conic sections from Subsection 2.2
which we will only hint at for now. Comparing (6.15) with the equation of the
cycle (2.7) we can associate the respective Fillmore–Springer–Cnops–type
matrix to $Q(\lambda)$, cf. (2.8):
(6.16) $Q(\lambda)=\lambda^{2}A_{2}+\lambda
A_{1}+A_{0}\quad\longleftrightarrow\quad C_{Q}=\begin{pmatrix}A_{1}&A_{0}\\\
A_{2}&-A_{1}\end{pmatrix}.$
Then we can state the following analogue of Thm. 2.4 for the quadratic
eigenvalues:
###### Proposition 6.23.
Let two quadratic matrix polynomials $Q$ and $\tilde{Q}$ be such that their
FSC matrices (6.16) are conjugated, $C_{\tilde{Q}}=gC_{{Q}}g^{-1}$, by an
element $g\in SL_{2}{}(\mathbb{R}{})$. Then $\lambda$ is a solution of the
quadratic eigenvalue problem for $Q$ and $x\in H$ if and only if
$\mu=g\cdot\lambda$ is a solution of the quadratic eigenvalue problem for
$\tilde{Q}$ and $x$. Here $\mu=g\cdot\lambda$ is the Möbius transformation
(1.1) associated to $g\in SL_{2}{}(\mathbb{R}{})$.
So quadratic matrix polynomials are non-commuting analogues of the cycles and
it would be exciting to extend the geometry from Section 2 to this non-
commutative setting as much as possible.
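For completeness, here is a minimal sketch (Python with numpy/scipy; the companion-type linearisation is a standard device and the random sample matrices are assumptions for illustration) that computes solutions of the quadratic eigenvalue problem (6.15):

```python
import numpy as np
from scipy.linalg import eigvals

def quadratic_eigenvalues(A2, A1, A0):
    """Solve Q(lambda) x = 0 with Q(lambda) = lambda^2 A2 + lambda A1 + A0
    via a standard companion-type linearisation."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # with y = (x, lambda x):  L1 y = lambda L0 y
    L1 = np.block([[Z, I], [-A0, -A1]])
    L0 = np.block([[I, Z], [Z, A2]])
    return eigvals(L1, L0)

rng = np.random.default_rng(2)
A2, A1, A0 = (rng.standard_normal((3, 3)) for _ in range(3))
lams = quadratic_eigenvalues(A2, A1, A0)
# sanity check: Q(lambda) is (numerically) singular at every computed eigenvalue
print([abs(np.linalg.det(lam**2 * A2 + lam * A1 + A0)) for lam in lams])
```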
###### Remark 6.24.
It is beneficial to extend the notion of a scalar in a (generalised) eigenvalue
problem to an abstract field or ring. For example, we can consider pencils of
operators/matrices with polynomial coefficients. In many circumstances we may
factorise the polynomial ring by an ideal generated by a collection of
algebraic equations. Our work with hypercomplex units is the most elementary
realisation of this setup. Indeed, the algebra of hypercomplex numbers with
the hypercomplex unit $\iota$ is a realisation of the polynomial ring in a
variable $t$ factored by the single quadratic relation $t^{2}+\sigma=0$, where
$\sigma=\iota^{2}$.
### 7\. Quantum Mechanics
Complex valued representations of the Heisenberg group (also known as Weyl or
Heisenberg-Weyl group) provide a natural framework for quantum mechanics
[Howe80b, Folland89]. This is the most fundamental example of the Kirillov
orbit method, induced representations and geometrical quantisation technique
[Kirillov99, Kirillov94a]. Following the presentation in Section 3 we will
consider representations of the Heisenberg group which are induced by
hypercomplex characters of its centre: complex (which correspond to the
elliptic case), dual (parabolic) and double (hyperbolic).
To describe dynamics of a physical system we use a universal equation based on
inner derivations (commutator) of the convolution algebra [Kisil00a]
[Kisil02e]. The complex valued representations produce the standard framework
for quantum mechanics with the Heisenberg dynamical equation [Vourdas06a].
The double number valued representations, with the hyperbolic unit
$\mathrm{j}^{2}=1$, are a natural source of hyperbolic quantum mechanics,
which has been developed for a while [Hudson04a, Hudson66a, Khrennikov03a,
Khrennikov05a, Khrennikov08a]. The universal dynamical equation employs the
hyperbolic commutator in this case. This can be seen as a _Moyal bracket_ based
on the hyperbolic sine function. The hyperbolic observables act as operators on
a Krein space with an indefinite inner product. Such spaces are employed in the
study of $\mathcal{PT}$-symmetric Hamiltonians, and the hyperbolic unit
$\mathrm{j}^{2}=1$ naturally appears in this setup [GuentherKuzhel10a].
The representations with values in dual numbers provide a convenient
description of classical mechanics. For this we do not take any sort of
semiclassical limit; rather, the nilpotency of the parabolic unit
($\varepsilon^{2}=0$) does the task. This removes the vicious necessity to
consider the Planck _constant_ tending to zero. The dynamical equation takes
the Hamiltonian form. We also describe classical non-commutative
representations of the Heisenberg group which act in the first jet space.
###### Remark 7.1.
It is worth noting that our technique is different from the contraction
technique in the theory of Lie groups [LevyLeblond65a, GromovKuratov05b].
Indeed, a contraction of the Heisenberg group $\mathbb{H}^{n}{}$ is the
commutative Euclidean group $\mathbb{R}^{2n}{}$, which recreates neither
quantum nor classical mechanics.
The approach provides not only three different types of dynamics, it also
generates the respective rules for the addition of probabilities. For
example, quantum interference is the consequence of the same complex-valued
structure which directs the Heisenberg equation. The absence of interference
(a particle behaviour) in classical mechanics is again a consequence of the
nilpotency of the parabolic unit. Double numbers create the hyperbolic law of
addition of probabilities, which was extensively investigated [Khrennikov03a,
Khrennikov05a]. There are still unresolved issues with the positivity of the
probabilistic interpretation in the hyperbolic case [Hudson04a, Hudson66a].
###### Remark 7.2.
It has been commonly accepted since Dirac’s paper [Dirac26a] that the striking
(or even _the only_) difference between quantum and classical mechanics is the
non-commutativity of observables in the first case. In particular the
Heisenberg commutation relations (7.5) imply the uncertainty principle, the
Heisenberg equation of motion and other quantum features. However, the entire
book of Feynman on QED [Feynman1990qed] does not contain any reference to
non-commutativity. Moreover, our work shows that there is a non-commutative
formulation of classical mechanics. Non-commutative representations of the
Heisenberg group in dual numbers imply the Poisson dynamical equation and
local addition of probabilities in Section 7.6, which are completely classical.
This entirely dispels any illusory correlation between classical/quantum and
commutative/non-commutative. Instead we show that quantum mechanics is fully
determined by the properties of complex numbers. In Feynman’s exposition
[Feynman1990qed] complex numbers are presented by a clock; rotations of its
arm encode multiplications by unimodular complex numbers. Moreover, there is
no presentation of quantum mechanics which does not employ complex phases
(numbers) in one form or another. Analogous parabolic and hyperbolic phases
(or characters produced by associated hypercomplex numbers, see Section 3.1)
lead to classical and hypercomplex mechanics respectively.
This section clarifies the foundations of quantum and classical mechanics. We
recovered the existence of three non-isomorphic models of mechanics from the
representation theory. They were already derived in [Hudson04a, Hudson66a]
from a translation-invariant formulation, that is, from group theory as well.
It was also hinted there that the hyperbolic counterpart is (at least
theoretically) as natural as classical and quantum mechanics are. The approach
provides a framework for the description of aggregate systems which have, say,
both quantum and classical components. This can be used to model quantum
computers with classical terminals [Kisil09b].
Remarkably, simultaneously with the work [Hudson66a], group-invariant
axiomatics of geometry led R.I. Pimenov [Pimenov65a] to a description of
$3^{n}$ Cayley–Klein constructions. The connection between group-invariant
geometry and the respective mechanics was explored in many works of N.A.
Gromov, see for example [Gromov90a, Gromov90b, GromovKuratov05b]. They already
highlighted the rôle of the three types of hypercomplex units for the
realisation of elliptic, parabolic and hyperbolic geometry and kinematics.
There is a further connection between representations of the Heisenberg group
and hypercomplex numbers. The symplectomorphisms of the phase space are also
automorphisms of the Heisenberg group [Folland89]*§ 1.2. We recall that the
symplectic group $\mathrm{Sp}(2)$ [Folland89]*§ 1.2 is isomorphic to the group
$SL_{2}{}(\mathbb{R}{})$ [Lang85] [HoweTan92] [Mazorchuk09a] and provides
linear symplectomorphisms of the two-dimensional phase space. It has three
types of non-isomorphic one-dimensional continuous subgroups (2.4-2.6) with
symplectic action on the phase space illustrated by Fig. 9. Hamiltonians,
which produce those symplectomorphisms, are of interest [Wulfman10a]*§ 3.8
[ATorre08a] [ATorre10a]. An analysis of those Hamiltonians from Subsection 3.3
by means of ladder operators recreates hypercomplex coefficients as well
[Kisil11a].
Harmonic oscillators, which we shall use as the main illustration here, are
treated in most textbooks on quantum mechanics. This is efficiently done
through creation/annihilation (ladder) operators, cf. § 3.3 and [Gazeau09a]
[BoyerMiller74a]. The underlying structure is the representation theory of the
Heisenberg and symplectic groups [Lang85]*§ VI.2 [MTaylor86]*§ 8.2 [Howe80b]
[Folland89]. As we will see, they are naturally connected with respective
hypercomplex numbers. As a result we obtain further illustrations to the
Similarity and Correspondence Principle 3.5.
We work with the simplest case of a particle with only one degree of freedom.
Higher dimensions and the respective group of symplectomorphisms
$\mathrm{Sp}(2n)$ may require consideration of Clifford algebras [Kisil93c]
[ConstalesFaustinoKrausshar11a] [CnopsKisil97a] [GuentherKuzhel10a]
[Porteous95].
#### 7.1. The Heisenberg Group and Its Automorphisms
##### 7.1.1. The Heisenberg group and induced representations
Let $(s,x,y)$, where $s$, $x$, $y\in\mathbb{R}{}$, be an element of the one-
dimensional _Heisenberg group_ $\mathbb{H}^{1}{}$ [Folland89, Howe80b].
Consideration of the general case of $\mathbb{H}^{n}{}$ will be similar, but
is beyond the scope of the present paper. The group law on $\mathbb{H}^{1}{}$ is
given as follows:
(7.1)
$\textstyle(s,x,y)\cdot(s^{\prime},x^{\prime},y^{\prime})=(s+s^{\prime}+\frac{1}{2}\omega(x,y;x^{\prime},y^{\prime}),x+x^{\prime},y+y^{\prime}),$
where the non-commutativity is due to $\omega$—the _symplectic form_ on
$\mathbb{R}^{2n}{}$, which is the central object of the classical mechanics
[Arnold91]*§ 37:
(7.2) $\omega(x,y;x^{\prime},y^{\prime})=xy^{\prime}-x^{\prime}y.$
The Heisenberg group is a non-commutative Lie group with the centre
$Z=\\{(s,0,0)\in\mathbb{H}^{1}{},\ s\in\mathbb{R}{}\\}.$
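A minimal sketch (Python; the tuple encoding of group elements is an assumption for illustration) implements the group law (7.1) and shows that the group commutator of two elements lands in the centre $Z$:

```python
def mul(g, h):
    """Heisenberg group law (7.1) for triples (s, x, y), with omega as in (7.2)."""
    s, x, y = g
    t, u, v = h
    return (s + t + 0.5 * (x * v - u * y), x + u, y + v)

def inv(g):
    s, x, y = g
    return (-s, -x, -y)

g, h = (0.0, 1.0, 2.0), (0.0, 3.0, -1.0)
print(mul(g, h), mul(h, g))                     # the products differ in the first slot only
print(mul(mul(g, h), mul(inv(g), inv(h))))      # the group commutator lies in the centre Z
```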
The left shifts
(7.3) $\Lambda(g):f(g^{\prime})\mapsto f(g^{-1}g^{\prime})$
act as a representation of $\mathbb{H}^{1}{}$ on a certain linear space of
functions. For example, an action on $L_{2}{}(\mathbb{H}{},dg)$ with respect
to the Haar measure $dg=ds\,dx\,dy$ is the _left regular_ representation,
which is unitary.
The Lie algebra $\mathfrak{h}^{n}$ of $\mathbb{H}^{1}{}$ is spanned by
left-(right-)invariant vector fields
(7.4) $\textstyle S^{l(r)}=\pm{\partial_{s}},\quad
X^{l(r)}=\pm\partial_{x}-\frac{1}{2}y{\partial_{s}},\quad
Y^{l(r)}=\pm\partial_{y}+\frac{1}{2}x{\partial_{s}}$
on $\mathbb{H}^{1}{}$ with the Heisenberg _commutator relation_
(7.5) $[X^{l(r)},Y^{l(r)}]=S^{l(r)}$
and all other commutators vanishing. We will sometimes omit the superscript
$l$ for left-invariant field.
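The commutator relation (7.5) can be verified directly on the left-invariant fields (7.4) (upper signs). A minimal sketch (Python with sympy; illustration only):

```python
import sympy as sp

s, x, y = sp.symbols('s x y')
f = sp.Function('f')(s, x, y)

# left-invariant fields (7.4) acting on a test function
S = lambda u: sp.diff(u, s)
X = lambda u: sp.diff(u, x) - sp.Rational(1, 2) * y * sp.diff(u, s)
Y = lambda u: sp.diff(u, y) + sp.Rational(1, 2) * x * sp.diff(u, s)

# Heisenberg commutator relation (7.5): [X, Y] = S
print(sp.simplify(X(Y(f)) - Y(X(f)) - S(f)))    # 0
```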
We can construct linear representations of $\mathbb{H}^{1}{}$ by induction
[Kirillov76]*§ 13 from a character $\chi$ of the centre $Z$. Here we prefer
the following one, cf. § 3.2 and [Kirillov76]*§ 13 [MTaylor86]*Ch. 5. Let
$F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ be the space of functions on
$\mathbb{H}^{n}{}$ having the properties:
(7.6) $f(gh)=\chi(h)f(g),\qquad\text{ for all }g\in\mathbb{H}^{n}{},\ h\in Z$
and
(7.7) $\int_{\mathbb{R}^{2n}{}}\left|f(0,x,y)\right|^{2}dx\,dy<\infty.$
Then $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ is invariant under the left shifts and
those shifts restricted to $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ make a
representation ${\rho_{\chi}}$ of $\mathbb{H}^{n}{}$ induced by $\chi$.
If the character $\chi$ is unitary, then the induced representation is unitary
as well. However the representation ${\rho_{\chi}}$ is not necessarily
irreducible. Indeed, left shifts commute with the right action of the
group. Thus any subspace of null-solutions of a linear combination
$aS+\sum_{j=1}^{n}(b_{j}X_{j}+c_{j}Y_{j})$ of left-invariant vector fields is
left-invariant and we can restrict ${\rho_{\chi}}$ to this subspace. The left-
invariant differential operators define an analyticity condition for functions,
cf. Cor. 5.6.
###### Example 7.3.
The function $f_{0}(s,x,y)=e^{\mathrm{i}hs-h(x^{2}+y^{2})/4}$, where
$h=2\pi\hslash$, belongs to $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ for the
character $\chi(s)=e^{\mathrm{i}hs}$. It is also a null solution for all the
operators $X_{j}-\mathrm{i}Y_{j}$. The closed linear span of the functions
$f_{g}=\Lambda(g)f_{0}$ is invariant under left shifts and provides a model for
the Fock–Segal–Bargmann (FSB) type representation of the Heisenberg group, which
will be considered below.
##### 7.1.2. Symplectic Automorphisms of the Heisenberg Group
The group of outer automorphisms of $\mathbb{H}^{1}{}$, which trivially acts
on the centre of $\mathbb{H}^{1}{}$, is the symplectic group $\mathrm{Sp}(2)$.
It is the group of symmetries of the symplectic form $\omega$ in (7.1)
[Folland89]*Thm. 1.22 [Howe80a]*p. 830. The symplectic group is isomorphic to
$SL_{2}{}(\mathbb{R}{})$ considered in the first half of this work. The
explicit action of $\mathrm{Sp}(2)$ on the Heisenberg group is:
(7.8) $g:h=(s,x,y)\mapsto g(h)=(s,x^{\prime},y^{\prime}),$
where
$g=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in\mathrm{Sp}(2),\quad\text{ and
}\quad\begin{pmatrix}x^{\prime}\\\
y^{\prime}\end{pmatrix}=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\begin{pmatrix}x\\\ y\end{pmatrix}.$
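Since elements of $\mathrm{Sp}(2)$ preserve the symplectic form (7.2), the action (7.8) respects the group law (7.1). A minimal check (Python with numpy; the sample matrix and group elements are assumptions for illustration):

```python
import numpy as np

def mul(g, h):
    """Heisenberg group law (7.1)."""
    s, x, y = g
    t, u, v = h
    return (s + t + 0.5 * (x * v - u * y), x + u, y + v)

def act(M, h):
    """Symplectic action (7.8) of M on (s, x, y)."""
    s, x, y = h
    xp, yp = M @ np.array([x, y])
    return (s, xp, yp)

M = np.array([[2.0, 1.0], [1.0, 1.0]])           # det M = 1, hence symplectic on R^2
h1, h2 = (0.3, 1.0, -2.0), (-0.1, 0.5, 4.0)
print(act(M, mul(h1, h2)))
print(mul(act(M, h1), act(M, h2)))               # equal: the action is an automorphism
```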
The Shale–Weil theorem [Folland89]*§ 4.2 [Howe80a]*p. 830 states that any
representation ${\rho_{\hslash}}$ of the Heisenberg group generates a unitary
_oscillator_ (or _metaplectic_) representation ${\rho^{\text{SW}}_{\hslash}}$
of $\widetilde{\mathrm{Sp}}(2)$, the two-fold cover of the symplectic
group [Folland89]*Thm. 4.58.
We can consider the semidirect product
$G=\mathbb{H}^{1}{}\rtimes\widetilde{\mathrm{Sp}}(2)$ with the standard group
law:
(7.9)
$(h,g)*(h^{\prime},g^{\prime})=(h*g(h^{\prime}),g*g^{\prime}),\qquad\text{where
}h,h^{\prime}\in\mathbb{H}^{1}{},\quad
g,g^{\prime}\in\widetilde{\mathrm{Sp}}(2),$
and the stars denote the respective group operations while the action
$g(h^{\prime})$ is defined as the composition of the projection map
$\widetilde{\mathrm{Sp}}(2)\rightarrow{\mathrm{Sp}}(2)$ and the action (7.8).
This group is sometimes called the _Schrödinger group_ and it is known as the
maximal kinematical invariance group of both the free Schrödinger equation and
the quantum harmonic oscillator [Niederer73a]. This group is of interest not
only in quantum mechanics but also in optics [ATorre10a, ATorre08a]. The
Shale–Weil theorem allows us to expand any representation ${\rho_{\hslash}}$
of the Heisenberg group to the representation
${\rho^{2}_{\hslash}}={\rho_{\hslash}}\oplus{\rho^{\text{SW}}_{\hslash}}$ of
the group $G$.
Consider the Lie algebra $\mathfrak{sp}_{2}$ of the group $\mathrm{Sp}(2)$. We
again use the basis $A$, $B$, $Z$ (3.12) with commutators (3.13). Vectors $Z$,
$B-Z/2$ and $B$ are generators of the one-parameter subgroups $K$,
$N^{\prime}$ and $A\\!^{\prime}$ (2.4–2.6) respectively. Furthermore we can
consider the basis $\\{S,X,Y,A,B,Z\\}$ of the Lie algebra $\mathfrak{g}$ of
the Lie group $G=\mathbb{H}^{1}{}\rtimes\widetilde{\mathrm{Sp}}(2)$. All non-
zero commutators besides those already listed in (7.5) and (3.13) are:
(7.10) $\displaystyle[A,X]$ $\displaystyle=\textstyle\frac{1}{2}X,$
$\displaystyle[B,X]$ $\displaystyle=\textstyle-\frac{1}{2}Y,$
$\displaystyle[Z,X]$ $\displaystyle=Y;$ (7.11) $\displaystyle[A,Y]$
$\displaystyle=\textstyle-\frac{1}{2}Y,$ $\displaystyle[B,Y]$
$\displaystyle=\textstyle-\frac{1}{2}X,$ $\displaystyle[Z,Y]$
$\displaystyle=-X.$
Of course, there is the derived form of the Shale–Weil representation for
$\mathfrak{g}$. In contrast to the Shale–Weil representation itself, it can
often be written explicitly.
###### Example 7.4.
Let ${\rho_{\hslash}}$ be the Schrödinger representation [Folland89]*§ 1.3 of
$\mathbb{H}^{1}{}$ in $L_{2}{}(\mathbb{R}{})$, that is [Kisil10a]*(3.5):
$[{\rho_{\chi}}(s,x,y)f\,](q)=e^{2\pi\mathrm{i}\hslash(s-xy/2)+2\pi\mathrm{i}xq}\,f(q-\hslash
y).$
Thus the action of the derived representation on the Lie algebra
$\mathfrak{h}_{1}$ is:
(7.12)
${\rho_{\hslash}}(X)=2\pi\mathrm{i}q,\qquad{\rho_{\hslash}}(Y)=-\hslash\frac{d}{dq},\qquad{\rho_{\hslash}}(S)=2\pi\mathrm{i}\hslash
I.$
Then the associated Shale–Weil representation of $\mathrm{Sp}(2)$ in
$L_{2}{}(\mathbb{R}{})$ has the derived action, cf. [ATorre08a]*(2.2)
[Folland89]*§ 4.3:
(7.13)
${\rho^{\text{SW}}_{\hslash}}(A)=-\frac{q}{2}\frac{d}{dq}-\frac{1}{4},\quad{\rho^{\text{SW}}_{\hslash}}(B)=-\frac{\hslash\mathrm{i}}{8\pi}\frac{d^{2}}{dq^{2}}-\frac{\pi\mathrm{i}q^{2}}{2\hslash},\quad{\rho^{\text{SW}}_{\hslash}}(Z)=\frac{\hslash\mathrm{i}}{4\pi}\frac{d^{2}}{dq^{2}}-\frac{\pi\mathrm{i}q^{2}}{\hslash}.$
We can verify commutators (7.5) and (3.13), (7.11) for operators (7.12–7.13).
It is also obvious that in this representation the following algebraic
relations hold:
(7.14) ${\rho^{\text{SW}}_{\hslash}}(A)=\frac{\mathrm{i}}{4\pi\hslash}\left({\rho_{\hslash}}(X){\rho_{\hslash}}(Y)-{\textstyle\frac{1}{2}}{\rho_{\hslash}}(S)\right)=\frac{\mathrm{i}}{8\pi\hslash}\left({\rho_{\hslash}}(X){\rho_{\hslash}}(Y)+{\rho_{\hslash}}(Y){\rho_{\hslash}}(X)\right),$
(7.15) ${\rho^{\text{SW}}_{\hslash}}(B)=\frac{\mathrm{i}}{8\pi\hslash}\left({\rho_{\hslash}}(X)^{2}-{\rho_{\hslash}}(Y)^{2}\right),$
(7.16) ${\rho^{\text{SW}}_{\hslash}}(Z)=\frac{\mathrm{i}}{4\pi\hslash}\left({\rho_{\hslash}}(X)^{2}+{\rho_{\hslash}}(Y)^{2}\right).$
Thus it is common in quantum optics to call $\mathfrak{g}$ a Lie algebra
with quadratic generators, see [Gazeau09a]*§ 2.2.4.
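For instance, the relation (7.16) can be verified symbolically from (7.12) and (7.13). A minimal sketch (Python with sympy; illustration only):

```python
import sympy as sp

q, hbar = sp.symbols('q hslash', positive=True)
f = sp.Function('f')(q)
i, pi = sp.I, sp.pi

# Schroedinger representation (7.12) of X, Y as operators acting on f(q)
X = lambda u: 2 * pi * i * q * u
Y = lambda u: -hbar * sp.diff(u, q)

# Shale-Weil operator (7.13) for Z and its quadratic expression (7.16) through X, Y
Z_sw = hbar * i / (4 * pi) * sp.diff(f, q, 2) - pi * i * q**2 / hbar * f
Z_quad = i / (4 * pi * hbar) * (X(X(f)) + Y(Y(f)))
print(sp.simplify(Z_sw - Z_quad))   # 0: relation (7.16) holds
```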
Note that ${\rho^{\text{SW}}_{\hslash}}(Z)$ is the Hamiltonian of the harmonic
oscillator (up to a factor). Then we can consider
${\rho^{\text{SW}}_{\hslash}}(B)$ as the Hamiltonian of a repulsive
(hyperbolic) oscillator. The operator
${\rho^{\text{SW}}_{\hslash}}(B-Z/2)=\frac{\hslash\mathrm{i}}{4\pi}\frac{d^{2}}{dq^{2}}$
is the parabolic analog. A graphical representation of all three
transformations defined by those Hamiltonians is given in Fig. 9 and a further
discussion of these Hamiltonians can be found in [Wulfman10a]*§ 3.8.
An important observation, which is often missed, is that the three linear
symplectic transformations are unitary rotations in the corresponding
hypercomplex algebra, cf. [Kisil09c]*§ 3. This means, that the
symplectomorphisms generated by operators $Z$, $B-Z/2$, $B$ within time $t$
coincide with the multiplication of hypercomplex number $q+\iota p$ by
$e^{\iota t}$, see Subsection 3.1 and Fig. 9, which is just another
illustration of the Similarity and Correspondence Principle 3.5.
###### Example 7.5.
There are many advantages of considering representations of the Heisenberg
group on the phase space [Howe80b]*§ 1.7 [Folland89]*§ 1.6 [deGosson08a]. A
convenient expression for Fock–Segal–Bargmann (FSB) representation on the
phase space is, cf. § 7.3.1 and [Kisil02e]*(2.9) [deGosson08a]*(1):
(7.17) $\textstyle[{\rho_{F}}(s,x,y)f](q,p)=e^{-2\pi\mathrm{i}(\hslash
s+qx+py)}f\left(q-\frac{\hslash}{2}y,p+\frac{\hslash}{2}x\right).$
Then the derived representation of $\mathfrak{h}_{1}$ is:
(7.18)
$\textstyle{\rho_{F}}(X)=-2\pi\mathrm{i}q+\frac{\hslash}{2}\partial_{p},\qquad{\rho_{F}}(Y)=-2\pi\mathrm{i}p-\frac{\hslash}{2}\partial_{q},\qquad{\rho_{F}}(S)=-2\pi\mathrm{i}\hslash
I.$
This produces the derived form of the Shale–Weil representation:
(7.19)
$\textstyle{\rho^{\text{SW}}_{F}}(A)=\frac{1}{2}\left(q\partial_{q}-p\partial_{p}\right),\quad{\rho^{\text{SW}}_{F}}(B)=-\frac{1}{2}\left(p\partial_{q}+q\partial_{p}\right),\quad{\rho^{\text{SW}}_{F}}(Z)=p\partial_{q}-q\partial_{p}.$
Note that this representation does not contain the parameter $\hslash$ unlike
the equivalent representation (7.13). Thus the FSB model explicitly shows the
equivalence of ${\rho^{\text{SW}}_{\hslash_{1}}}$ and
${\rho^{\text{SW}}_{\hslash_{2}}}$ if $\hslash_{1}\hslash_{2}>0$
[Folland89]*Thm. 4.57.
As we will also see below the FSB-type representations in hypercomplex numbers
produce almost the same Shale–Weil representations.
#### 7.2. p-Mechanic Formalism
Here we briefly outline a formalism [Kisil96a, Prezhdo-Kisil97, Kisil00a,
BrodlieKisil03a, Kisil02e] which allows one to unify quantum and classical
mechanics.
##### 7.2.1. Convolutions (Observables) on $\mathbb{H}^{n}{}$ and Commutator
Using an invariant measure $dg=ds\,dx\,dy$ on $\mathbb{H}^{n}{}$ we can define
the convolution of two functions:
(7.20) $(k_{1}*k_{2})(g)=\int_{\mathbb{H}^{n}{}}k_{1}(g_{1})\,k_{2}(g_{1}^{-1}g)\,dg_{1}.$
This is a non-commutative operation, which is meaningful for functions from
various spaces including $L_{1}{}(\mathbb{H}^{n}{},dg)$, the Schwartz space
$S{}$ and many classes of distributions, which form algebras under
convolutions. Convolutions on $\mathbb{H}^{n}{}$ are used as _observables_ in
$p$-mechanic [Kisil96a, Kisil02e].
A unitary representation ${\rho}$ of $\mathbb{H}^{n}{}$ extends to
$L_{1}{}(\mathbb{H}^{n}{},dg)$ by the formula:
(7.21) ${\rho}(k)=\int_{\mathbb{H}^{n}{}}k(g){\rho}(g)\,dg.$
This is also an algebra homomorphism of convolutions to linear operators.
For a dynamics of observables we need inner _derivations_ $D_{k}$ of the
convolution algebra $L_{1}{}(\mathbb{H}^{n}{})$, which are given by the
_commutator_ :
$D_{k}:f\mapsto[k,f]=k*f-f*k=\int_{\mathbb{H}^{n}{}}k(g_{1})\left(f(g_{1}^{-1}g)-f(gg_{1}^{-1})\right)\,dg_{1},\quad f,k\in L_{1}{}(\mathbb{H}^{n}{}).$
To describe dynamics of a time-dependent observable $f(t,g)$ we use the
universal equation, cf. [Kisil94d, Kisil96a]:
(7.23) $S\dot{f}=[H,f],$
where $S$ is the left-invariant vector field (7.4) generated by the centre of
$\mathbb{H}^{n}{}$. The presence of operator $S$ fixes the dimensionality of
both sides of the equation (7.23) if the observable $H$ (Hamiltonian) has the
dimensionality of energy [Kisil02e]*Rem 4.1. If we apply a right inverse
$\mathcal{A}$ of $S$ to both sides of the equation (7.23) we obtain the
equivalent equation
(7.24) $\dot{f}=\left\\{\\!\left[H,f\right]\\!\right\\},$
based on the universal bracket
$\left\\{\\!\left[k_{1},k_{2}\right]\\!\right\\}=k_{1}*\mathcal{A}k_{2}-k_{2}*\mathcal{A}k_{1}$
[Kisil02e].
###### Example 7.6 (Harmonic oscillator).
Let $H=\frac{1}{2}(mk^{2}q^{2}+\frac{1}{m}p^{2})$ be the Hamiltonian of a one-
dimensional harmonic oscillator, where $k$ is a constant frequency and $m$ is
a constant mass. Its _p-mechanisation_ will be the second order differential
operator on $\mathbb{H}^{n}{}$ [BrodlieKisil03a]*§ 5.1:
$\textstyle H=\frac{1}{2}(mk^{2}X^{2}+\frac{1}{m}Y^{2}),$
where we dropped sub-indexes of vector fields (7.4) in one dimensional
setting. We can express the commutator as a difference between the left and
the right action of the vector fields:
$\textstyle[H,f]=\frac{1}{2}(mk^{2}((X^{r})^{2}-(X^{l})^{2})+\frac{1}{m}((Y^{r})^{2}-(Y^{l})^{2}))f.$
Thus the equation (7.23) becomes [BrodlieKisil03a]*(5.2):
(7.25) $\frac{\partial}{\partial s}\dot{f}=\frac{\partial}{\partial
s}\left(mk^{2}y\frac{\partial}{\partial
x}-\frac{1}{m}x\frac{\partial}{\partial y}\right)f.$
Of course, the derivative $\frac{\partial}{\partial s}$ can be dropped from
both sides of the equation and the general solution is found to be:
(7.26) $\textstyle
f(t;s,x,y)=f_{0}\left(s,x\cos(kt)+mky\sin(kt),-\frac{x}{mk}\sin(kt)+y\cos(kt)\right),$
where $f_{0}(s,x,y)$ is the initial value of an observable on
$\mathbb{H}^{n}{}$.
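One can verify directly that (7.26) solves the dynamic equation. A minimal sketch (Python with sympy; illustration only) substitutes the general solution into (7.25) with the common $\partial/\partial s$ factor dropped:

```python
import sympy as sp

t, s, x, y, m, k = sp.symbols('t s x y m k', positive=True)
f0 = sp.Function('f0')

# general solution (7.26) of the harmonic-oscillator dynamics
f = f0(s, x * sp.cos(k * t) + m * k * y * sp.sin(k * t),
       -x / (m * k) * sp.sin(k * t) + y * sp.cos(k * t))

# dynamic equation (7.25) with the common d/ds factor dropped
lhs = sp.diff(f, t)
rhs = m * k**2 * y * sp.diff(f, x) - x / m * sp.diff(f, y)
print(sp.simplify(lhs - rhs))   # 0
```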
###### Example 7.7 (Unharmonic oscillator).
We consider an unharmonic oscillator with a cubic potential, see
[CalzettaVerdaguer06a] and references therein:
(7.27) $H=\frac{mk^{2}}{2}q^{2}+\frac{\lambda}{6}q^{3}+\frac{1}{2m}p^{2}.$
Due to the absence of non-commutative products p-mechanisation is
straightforward:
$H=\frac{mk^{2}}{2}X^{2}+\frac{\lambda}{6}X^{3}+\frac{1}{2m}Y^{2}.$
Similarly to the harmonic case the dynamic equation, after cancellation of
$\frac{\partial}{\partial s}$ on both sides, becomes:
(7.28) $\dot{f}=\left(mk^{2}y\frac{\partial}{\partial
x}+\frac{\lambda}{6}\left(3y\frac{\partial^{2}}{\partial
x^{2}}+\frac{1}{4}y^{3}\frac{\partial^{2}}{\partial
s^{2}}\right)-\frac{1}{m}x\frac{\partial}{\partial y}\right)f.$
Unfortunately, it cannot be solved analytically as easily as in the harmonic
case.
##### 7.2.2. States and Probability
Let an observable ${\rho}(k)$ (7.21) be defined by a kernel $k(g)$ on the
Heisenberg group and its representation ${\rho}$ in a Hilbert space
$\mathcal{H}$. A _state_ on the convolution algebra is given by a vector
$v\in\mathcal{H}$. A simple calculation:
$\left\langle{\rho}(k)v,v\right\rangle_{\mathcal{H}}=\left\langle\int_{\mathbb{H}^{n}{}}k(g){\rho}(g)v\,dg,v\right\rangle_{\mathcal{H}}=\int_{\mathbb{H}^{n}{}}k(g)\left\langle{\rho}(g)v,v\right\rangle_{\mathcal{H}}dg=\int_{\mathbb{H}^{n}{}}k(g)\overline{\left\langle v,{\rho}(g)v\right\rangle_{\mathcal{H}}}\,dg$
can be restated as:
$\left\langle{\rho}(k)v,v\right\rangle_{\mathcal{H}}=\left\langle
k,l\right\rangle,\qquad\text{where}\quad l(g)=\left\langle
v,{\rho}(g)v\right\rangle_{\mathcal{H}}.$
Here the left-hand side contains the inner product on $\mathcal{H}$, while the
right-hand side uses a skew-linear pairing between functions on
$\mathbb{H}^{n}{}$ based on the Haar measure integration. In other words we
obtain, cf. [BrodlieKisil03a]*Thm. 3.11:
###### Proposition 7.8.
A state defined by a vector $v\in\mathcal{H}$ coincides with the linear
functional given by the wavelet transform
(7.29) $l(g)=\left\langle v,{\rho}(g)v\right\rangle_{\mathcal{H}}$
of $v$ used as the mother wavelet as well.
The addition of vectors in $\mathcal{H}$ implies the following operation on
states:
(7.30) $\left\langle v_{1}+v_{2},{\rho}(g)(v_{1}+v_{2})\right\rangle_{\mathcal{H}}=\left\langle v_{1},{\rho}(g)v_{1}\right\rangle_{\mathcal{H}}+\left\langle v_{2},{\rho}(g)v_{2}\right\rangle_{\mathcal{H}}+\left\langle v_{1},{\rho}(g)v_{2}\right\rangle_{\mathcal{H}}+\overline{\left\langle v_{1},{\rho}(g^{-1})v_{2}\right\rangle_{\mathcal{H}}}.$
The last expression can be conveniently rewritten for kernels of the
functional as
(7.31) $l_{12}=l_{1}+l_{2}+2A\sqrt{l_{1}l_{2}}$
for some real number $A$. This formula is behind the contextual law of
addition of conditional probabilities [Khrennikov01a] and will be illustrated
below. Its physical interpretation is an interference, say, from two slits.
Despite a common belief, the mechanism of such interference can be both
causal and local, see [Kisil01c] [KhrenVol01].
#### 7.3. Elliptic characters and Quantum Dynamics
In this subsection we consider the representation ${\rho_{h}}$ of
$\mathbb{H}^{n}{}$ induced by the elliptic character
$\chi_{h}(s)=e^{\mathrm{i}hs}$ in complex numbers parametrised by
$h\in\mathbb{R}{}$. We also use the convenient agreement $h=2\pi\hslash$
borrowed from physical literature.
##### 7.3.1. Fock–Segal–Bargmann and Schrödinger Representations
The realisation of ${\rho_{h}}$ by the left shifts (7.3) on
$L_{2}^{h}{}(\mathbb{H}^{n}{})$ is rarely used in quantum mechanics. Instead
two unitarily equivalent forms are more common: the Schrödinger and
Fock–Segal–Bargmann (FSB) representations.
The FSB representation can be obtained from the orbit method of Kirillov
[Kirillov94a]. It allows one to spatially separate irreducible components of
the left regular representation, each of them becoming located on an orbit of
the co-adjoint representation; see [Kisil02e]*§ 2.1 [Kirillov94a] for details,
we only present a brief summary here.
We identify $\mathbb{H}^{n}{}$ and its Lie algebra $\mathfrak{h}_{n}$ through
the exponential map [Kirillov76]*§ 6.4. The dual $\mathfrak{h}_{n}^{*}$ of
$\mathfrak{h}_{n}$ is presented by the Euclidean space $\mathbb{R}^{2n+1}{}$
with coordinates $(\hslash,q,p)$. The pairing between $\mathfrak{h}_{n}^{*}$
and $\mathfrak{h}_{n}$ is given by
$\left\langle(s,x,y),(\hslash,q,p)\right\rangle=\hslash s+q\cdot x+p\cdot y.$
This pairing defines the Fourier transform $\hat{\
}:L_{2}{}(\mathbb{H}^{n}{})\rightarrow L_{2}{}(\mathfrak{h}_{n}^{*})$ given by
[Kirillov99]*§ 2.3:
(7.32) $\hat{\phi}(F)=\int_{\mathfrak{h}^{n}}\phi(\exp
X)e^{-2\pi\mathrm{i}\left\langle X,F\right\rangle}\,dX\qquad\textrm{ where
}X\in\mathfrak{h}^{n},\ F\in\mathfrak{h}_{n}^{*}.$
For a fixed $\hslash$ the left regular representation (7.3) is mapped by the
Fourier transform to the FSB type representation (7.17). The collection of
points $(\hslash,q,p)\in\mathfrak{h}_{n}^{*}$ for a fixed $\hslash$ is
naturally identified with the _phase space_ of the system.
###### Remark 7.9.
It is possible to identify the case of $\hslash=0$ with classical mechanics
[Kisil02e]. Indeed, a substitution of the zero value of $\hslash$ into (7.17)
produces the commutative representation:
(7.33) ${\rho_{0}}(s,x,y):f(q,p)\mapsto
e^{-2\pi\mathrm{i}(qx+py)}f\left(q,p\right).$
It can be decomposed into the direct integral of one-dimensional
representations parametrised by the points $(q,p)$ of the phase space. The
classical mechanics, including the Hamilton equation, can be recovered from
those representations [Kisil02e]. However the condition $\hslash=0$ (as well
as the _semiclassical limit_ $\hslash\rightarrow 0$) is not completely
physical. Commutativity (and the subsequent relative triviality) of those
representations is the main reason why they are often neglected. The
commutativity can be outweighed by special arrangements, e.g. an
antiderivative [Kisil02e]*(4.1), but the procedure is not straightforward, see
the discussion in [Kisil05c] [AgostiniCapraraCiccotti07a] [Kisil09a]. A direct
approach using dual numbers will be shown below, cf. Rem. 7.21.
To recover the Schrödinger representation we use notations and technique of
induced representations from § 3.2, see also [Kisil98a]*Ex. 4.1. The subgroup
$H=\\{(s,0,y)\,\mid\,s\in\mathbb{R}{},y\in\mathbb{R}^{n}{}\\}\subset\mathbb{H}^{n}{}$
defines the homogeneous space $X=G/H$, which coincides with $\mathbb{R}^{n}{}$
as a manifold. The natural projection $\mathbf{p}:G\rightarrow X$ is
$\mathbf{p}(s,x,y)=x$ and its left inverse $\mathbf{s}:X\rightarrow G$ can be
as simple as $\mathbf{s}(x)=(0,x,0)$. For the map $\mathbf{r}:G\rightarrow H$,
$\mathbf{r}(s,x,y)=(s-xy/2,0,y)$ we have the decomposition
$(s,x,y)=\mathbf{s}(p(s,x,y))*\mathbf{r}(s,x,y)=(0,x,0)*(s-\textstyle\frac{1}{2}xy,0,y).$
For a character $\chi_{h}(s,0,y)=e^{\mathrm{i}hs}$ of $H$ the lifting
$\mathcal{L}_{\chi}:L_{2}{}(G/H)\rightarrow L_{2}^{\chi}{}(G)$ is as follows:
$[\mathcal{L}_{\chi}f](s,x,y)=\chi_{h}(\mathbf{r}(s,x,y))\,f(\mathbf{p}(s,x,y))=e^{\mathrm{i}h(s-xy/2)}f(x).$
Thus the representation
${\rho_{\chi}}(g)=\mathcal{P}\circ\Lambda(g)\circ\mathcal{L}$ becomes:
(7.34)
$[{\rho_{\chi}}(s^{\prime},x^{\prime},y^{\prime})f](x)=e^{-2\pi\mathrm{i}\hslash(s^{\prime}+xy^{\prime}-x^{\prime}y^{\prime}/2)}\,f(x-x^{\prime}).$
After the Fourier transform $x\mapsto q$ we get the Schrödinger representation
on the _configuration space_ :
(7.35)
$[{\rho_{\chi}}(s^{\prime},x^{\prime},y^{\prime})\hat{f}\,](q)=e^{-2\pi\mathrm{i}\hslash(s^{\prime}+x^{\prime}y^{\prime}/2)-2\pi\mathrm{i}x^{\prime}q}\,\hat{f}(q+\hslash
y^{\prime}).$
Note that this again turns into a commutative representation (multiplication
by an unimodular function) if $\hslash=0$. To get the full set of commutative
representations in this way we need to use the character
$\chi_{(h,p)}(s,0,y)=e^{2\pi\mathrm{i}(\hslash s+py)}$ in the above
consideration.
##### 7.3.2. Commutator and the Heisenberg Equation
The property (7.6) of $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ implies that the
restrictions of two operators ${\rho_{\chi}}(k_{1})$ and
${\rho_{\chi}}(k_{2})$ to this space are equal if
$\int_{\mathbb{R}{}}k_{1}(s,x,y)\,\chi(s)\,ds=\int_{\mathbb{R}{}}k_{2}(s,x,y)\,\chi(s)\,ds.$
In other words, for a character $\chi(s)=e^{2\pi\mathrm{i}\hslash s}$ the
operator ${\rho_{\chi}}(k)$ depends only on
$\hat{k}_{s}(\hslash,x,y)=\int_{\mathbb{R}{}}k(s,x,y)\,e^{-2\pi\mathrm{i}\hslash
s}\,ds,$
which is the partial Fourier transform $s\mapsto\hslash$ of $k(s,x,y)$. The
restriction to $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ of the composition formula
for convolutions is [Kisil02e]*(3.5):
(7.36)
$(k^{\prime}*k)\hat{{}_{s}}=\int_{\mathbb{R}^{2n}{}}e^{{\mathrm{i}h}{}(xy^{\prime}-yx^{\prime})/2}\,\hat{k}^{\prime}_{s}(\hslash,x^{\prime},y^{\prime})\,\hat{k}_{s}(\hslash,x-x^{\prime},y-y^{\prime})\,dx^{\prime}dy^{\prime}.$
Under the Schrödinger representation (7.35) the convolution (7.36) defines a
rule for composition of two pseudo-differential operators (PDO) in the Weyl
calculus [Howe80b] [Folland89]*§ 2.3.
Consequently the representation (7.21) of commutator (7.2.1) depends only on
its partial Fourier transform [Kisil02e]*(3.6):
$[k^{\prime},k]\hat{{}_{s}}=2\mathrm{i}\int_{\mathbb{R}^{2n}{}}\sin({\textstyle\frac{h}{2}}(xy^{\prime}-yx^{\prime}))\,\hat{k}^{\prime}_{s}(\hslash,x^{\prime},y^{\prime})\,\hat{k}_{s}(\hslash,x-x^{\prime},y-y^{\prime})\,dx^{\prime}dy^{\prime}.$
Under the Fourier transform (7.32) this commutator is exactly the _Moyal
bracket_ [Zachos02a] of $\hat{k}^{\prime}$ and $\hat{k}$ on the phase
space.
For observables in the space $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ the action of
$S$ is reduced to multiplication, e.g. for $\chi(s)=e^{\mathrm{i}hs}$ the
action of $S$ is multiplication by $\mathrm{i}h$. Thus the equation (7.23)
reduced to the space $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$ becomes the Heisenberg
type equation [Kisil02e]*(4.4):
(7.38) $\dot{f}=\frac{1}{\mathrm{i}h}[H,f]\hat{{}_{s}},$
based on the above bracket (7.3.2). The Schrödinger representation (7.35)
transforms this equation to the original Heisenberg equation.
###### Example 7.10.
1. (i)
Under the Fourier transform $(x,y)\mapsto(q,p)$ the p-dynamic equation (7.25)
of the harmonic oscillator becomes:
(7.39) $\dot{f}=\left(mk^{2}q\frac{\partial}{\partial
p}-\frac{1}{m}p\frac{\partial}{\partial q}\right)f.$
The same transform creates its solution out of (7.26).
2. (ii)
Since $\frac{\partial}{\partial s}$ acts on $F_{2}^{\chi}{}(\mathbb{H}^{n}{})$
as multiplication by $\mathrm{i}\hslash$, the quantum representation of
unharmonic dynamics equation (7.28) is:
(7.40) $\dot{f}=\left(mk^{2}q\frac{\partial}{\partial
p}+\frac{\lambda}{6}\left(3q^{2}\frac{\partial}{\partial
p}-\frac{\hslash^{2}}{4}\frac{\partial^{3}}{\partial
p^{3}}\right)-\frac{1}{m}p\frac{\partial}{\partial q}\right)f.$
This is exactly the equation for the Wigner function obtained in
[CalzettaVerdaguer06a]*(30).
##### 7.3.3. Quantum Probabilities
For the elliptic character $\chi_{h}(s)=e^{\mathrm{i}hs}$ we can use the
Cauchy–Schwarz inequality to demonstrate that the real number $A$ in the
identity (7.31) is between $-1$ and $1$. Thus we can put $A=\cos\alpha$ for
some angle (phase) $\alpha$ to get the formula for counting quantum
probabilities, cf. [Khrennikov03a]*(2):
(7.41) $l_{12}=l_{1}+l_{2}+2\cos\alpha\,\sqrt{l_{1}l_{2}}.$
###### Remark 7.11.
It is interesting to note that both trigonometric functions are employed
in quantum mechanics: sine is at the heart of the Moyal bracket (7.3.2) and
cosine is responsible for the addition of probabilities (7.41). In essence,
the commutator and the probabilities take, respectively, the odd and even
parts of the elliptic character $e^{\mathrm{i}hs}$.
###### Example 7.12.
Take a vector $v_{(a,b)}\in L_{2}^{h}{}(\mathbb{H}^{n}{})$ defined by a
Gaussian with mean value $(a,b)$ in the phase space for a harmonic oscillator
of the mass $m$ and the frequency $k$:
(7.42) $v_{(a,b)}(q,p)=\exp\left(-\frac{2\pi
km}{\hslash}(q-a)^{2}-\frac{2\pi}{\hslash km}(p-b)^{2}\right).$
A direct calculation shows:
$\left\langle v_{(a,b)},{\rho_{\hslash}}(s,x,y)v_{(a^{\prime},b^{\prime})}\right\rangle=\frac{4}{\hslash}\exp\left(\pi\mathrm{i}\left(2s\hslash+x(a+a^{\prime})+y(b+b^{\prime})\right)-\frac{\pi}{2\hslash km}((\hslash x+b-b^{\prime})^{2}+(b-b^{\prime})^{2})-\frac{\pi km}{2\hslash}((\hslash y+a^{\prime}-a)^{2}+(a^{\prime}-a)^{2})\right)$
$=\frac{4}{\hslash}\exp\left(\pi\mathrm{i}\left(2s\hslash+x(a+a^{\prime})+y(b+b^{\prime})\right)-\frac{\pi}{\hslash km}((b-b^{\prime}+{\textstyle\frac{\hslash x}{2}})^{2}+({\textstyle\frac{\hslash x}{2}})^{2})-\frac{\pi km}{\hslash}((a-a^{\prime}-{\textstyle\frac{\hslash y}{2}})^{2}+({\textstyle\frac{\hslash y}{2}})^{2})\right)$
Thus the kernel $l_{(a,b)}=\left\langle
v_{(a,b)},{\rho_{\hslash}}(s,x,y)v_{(a,b)}\right\rangle$ (7.29) for a state
$v_{(a,b)}$ is:
(7.43) $l_{(a,b)}=\frac{4}{\hslash}\exp\left(2\pi\mathrm{i}(s\hslash+xa+yb)-\frac{\pi\hslash}{2km}x^{2}-\frac{\pi km\hslash}{2}y^{2}\right).$
An observable registering a particle at a point $q=c$ of the configuration
space is $\delta(q-c)$. On the Heisenberg group this observable is given by
the kernel:
(7.44) $X_{c}(s,x,y)=e^{2\pi\mathrm{i}(s\hslash+xc)}\delta(y).$
The measurement of $X_{c}$ on the state (7.42) (through the kernel (7.43))
predictably is:
$\left\langle
X_{c},l_{(a,b)}\right\rangle=\sqrt{\frac{2km}{\hslash}}\exp\left(-\frac{2\pi
km}{\hslash}(c-a)^{2}\right).$
###### Example 7.13.
Now take two states $v_{(0,b)}$ and $v_{(0,-b)}$, where for simplicity we
assume that the mean values of the coordinates vanish in both cases. Then the
corresponding kernel (7.30) has the interference terms:
$l_{i}=\left\langle v_{(0,b)},{\rho_{\hslash}}(s,x,y)v_{(0,-b)}\right\rangle=\frac{4}{\hslash}\exp\left(2\pi\mathrm{i}s\hslash-\frac{\pi}{2\hslash km}((\hslash x+2b)^{2}+4b^{2})-\frac{\pi\hslash km}{2}y^{2}\right).$
The measurement of $X_{c}$ (7.44) on this term contains the oscillating part:
$\left\langle
X_{c},l_{i}\right\rangle=\sqrt{\frac{2km}{\hslash}}\exp\left(-\frac{2\pi
km}{\hslash}c^{2}-\frac{2\pi}{km\hslash}b^{2}+\frac{4\pi\mathrm{i}}{\hslash}cb\right)$
Therefore on the kernel $l$ corresponding to the state $v_{(0,b)}+v_{(0,-b)}$
the measurement is
$\left\langle X_{c},l\right\rangle=2\sqrt{\frac{2km}{\hslash}}\exp\left(-\frac{2\pi km}{\hslash}c^{2}\right)\left(1+\exp\left(-\frac{2\pi}{km\hslash}b^{2}\right)\cos\left(\frac{4\pi}{\hslash}cb\right)\right).$
Figure 13. Quantum probabilities: the blue (dashed) graph shows the addition of probabilities without interaction, the red (solid) graph presents the quantum interference. The left picture shows the Gaussian state (7.42), the right one the rational state (7.45).
The presence of the cosine term in the last expression can generate an
interference picture. In practice it does not happen for the minimal
uncertainty state (7.42) which we are using here: it rapidly vanishes outside
of the neighbourhood of zero, where the oscillations of the cosine occur, see
Fig. 13(a).
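This damping is easy to quantify. A minimal numerical sketch (Python with numpy; the parameter values are assumptions chosen for illustration) compares the plain addition of the two probabilities with the interference expression above:

```python
import numpy as np

hbar, k, m, b = 1.0, 1.0, 1.0, 2.0
c = np.linspace(-3, 3, 601)

gauss = np.sqrt(2 * k * m / hbar) * np.exp(-2 * np.pi * k * m / hbar * c**2)
no_interaction = 2 * gauss                      # sum of the two separate measurements
interference = 2 * gauss * (1 + np.exp(-2 * np.pi * b**2 / (k * m * hbar))
                            * np.cos(4 * np.pi * c * b / hbar))

# for the minimal-uncertainty state the cosine factor is strongly damped,
# so the two curves are nearly indistinguishable, cf. Fig. 13(a)
print(np.max(np.abs(interference - no_interaction)))
```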
###### Example 7.14.
To see a traditional interference pattern one can use a state which is far
from the minimal uncertainty. For example, we can consider the state:
(7.45)
$u_{(a,b)}(q,p)=\frac{\hslash^{2}}{((q-a)^{2}+\hslash/km)((p-b)^{2}+\hslash
km)}.$
To evaluate the observable $X_{c}$ (7.44) on the state $l(g)=\left\langle
u_{1},{\rho_{h}}(g)u_{2}\right\rangle$ (7.29) we use the following formula:
$\left\langle
X_{c},l\right\rangle=\frac{2}{\hslash}\int_{\mathbb{R}^{n}{}}\hat{u}_{1}(q,2(q-c)/\hslash)\,\overline{\hat{u}_{2}(q,2(q-c)/\hslash)}\,dq,$
where $\hat{u}_{i}(q,x)$ denotes the partial Fourier transform $p\mapsto x$ of $u_{i}(q,p)$. The formula is obtained by swapping the order of integration. The numerical evaluation for the state obtained by the addition $u_{(0,b)}+u_{(0,-b)}$ is plotted in Fig. 13(b); the red curve shows the canonical interference pattern.
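The numerical evaluation can be sketched as follows; the concrete grids, the values $\hslash=k=m=1$, $b=3$ and the Fourier convention $\hat{u}(q,x)=\int u(q,p)e^{-2\pi\mathrm{i}px}\,dp$ are assumptions of this illustration, not prescriptions of the text.
```python
# A rough numerical sketch of the formula above for the rational states (7.45)
# with a = 0 and p-means +-b; all concrete values (hbar = k = m = 1, b = 3)
# and the 2*pi Fourier convention are assumptions for this illustration only.
import numpy as np

hbar = k = m = 1.0
b = 3.0
p = np.linspace(-40, 40, 2001)          # grid for the partial Fourier transform
q = np.linspace(-10, 10, 401)

def u(qv, pv, b0):                       # the rational state (7.45), centred at (0, b0)
    return hbar**2/((qv**2 + hbar/(k*m))*((pv - b0)**2 + hbar*k*m))

def u_hat(qv, x, b0):                    # partial Fourier transform p -> x
    return np.trapz(u(qv, p, b0)*np.exp(-2j*np.pi*p*x), p)

def pair_term(c, b1, b2):                # <X_c, l> for l = <u_{(0,b1)}, rho(g) u_{(0,b2)}>
    vals = [u_hat(qi, 2*(qi - c)/hbar, b1)*np.conj(u_hat(qi, 2*(qi - c)/hbar, b2)) for qi in q]
    return (2/hbar)*np.trapz(np.array(vals), q)

c_vals = np.linspace(-2, 2, 41)
# the state u_{(0,b)} + u_{(0,-b)} contributes four correlation terms as in (7.30)
prob = [sum(pair_term(c, b1, b2) for b1 in (b, -b) for b2 in (b, -b)).real for c in c_vals]
# plotting prob against c_vals should reproduce the oscillations of Fig. 13(b) qualitatively
```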
#### 7.4. Ladder Operators and Harmonic Oscillator
Let ${\rho}$ be a representation of the Schrödinger group
$G=\mathbb{H}^{1}{}\rtimes\widetilde{\mathrm{Sp}}(2)$ (7.9) in a space $V$.
Consider the derived representation of the Lie algebra $\mathfrak{g}$
[Lang85]*§ VI.1 and denote $\tilde{X}={\rho}(X)$ for $X\in\mathfrak{g}$. To
see the structure of the representation ${\rho}$ we can decompose the space
$V$ into eigenspaces of the operator $\tilde{X}$ for some $X\in\mathfrak{g}$.
The canonical example is the Taylor series in complex analysis.
We are going to consider three cases corresponding to three non-isomorphic
subgroups (2.4–2.6) of $\mathrm{Sp}(2)$ starting from the compact case. Let
$H=Z$ be a generator of the compact subgroup $K$. The corresponding symplectomorphisms (7.8) of the phase space are given by orthogonal rotations with matrices $\begin{pmatrix}\cos t&\sin t\\\ -\sin t&\cos t\end{pmatrix}$. The image of $Z$ under the Shale–Weil representation (7.13) coincides with the Hamiltonian of the harmonic oscillator in the Schrödinger representation.
Since $\widetilde{\mathrm{Sp}}(2)$ is a two-fold cover, the eigenspaces of the generator of the compact subgroup, $\tilde{Z}v_{k}=\mathrm{i}kv_{k}$, are parametrised by a half-integer $k\in\mathbb{Z}{}/2$. Explicitly, for a half-integer $k$ the eigenvectors are:
(7.46)
$v_{k}(q)=H_{k+\frac{1}{2}}\left(\sqrt{\frac{2\pi}{\hslash}}q\right)e^{-\frac{\pi}{\hslash}q^{2}},$
where $H_{k}$ is the _Hermite polynomial_ [Folland89]*§ 1.7
[ErdelyiMagnusII]*8.2(9).
From the point of view of quantum mechanics, as well as of representation theory, it is beneficial to introduce the ladder operators $L^{\\!\pm}$ (3.14), known also as _creation/annihilation operators_ in quantum mechanics [Folland89]*p. 49 [BoyerMiller74a]. There are two ways to search for ladder operators: in the (complexified) Lie algebras $\mathfrak{h}_{1}$ and $\mathfrak{sp}_{2}$. The latter essentially coincides with our consideration in Section 3.3.
##### 7.4.1. Ladder Operators from the Heisenberg Group
Assuming $L^{\\!+}=a\tilde{X}+b\tilde{Y}$ we obtain from the relations (7.10–7.11) and (3.14) the linear equations with unknowns $a$ and $b$:
$a=\lambda_{+}b,\qquad-b=\lambda_{+}a.$
The equations have a solution if and only if $\lambda_{+}^{2}+1=0$, and the
raising/lowering operators are $L^{\\!\pm}=\tilde{X}\mp\mathrm{i}\tilde{Y}$.
###### Remark 7.15.
Here we have an interesting asymmetry: due to the structure of the semidirect product $\mathbb{H}^{1}{}\rtimes\widetilde{\mathrm{Sp}}(2)$ it is the symplectic group which acts on $\mathbb{H}^{1}{}$, not vice versa. However, the Heisenberg group has a weak action in the opposite direction: it shifts eigenfunctions of $\mathrm{Sp}(2)$.
In the Schrödinger representation (7.12) the ladder operators are
(7.47)
${\rho_{\hslash}}(L^{\\!\pm})=2\pi\mathrm{i}q\pm\mathrm{i}\hslash\frac{d}{dq}.$
The standard treatment of the harmonic oscillator in quantum mechanics, which
can be found in many textbooks, e.g. [Folland89]*§ 1.7 [Gazeau09a]*§ 2.2.3, is
as follows. The vector $v_{-1/2}(q)=e^{-\pi q^{2}/\hslash}$ is an eigenvector
of $\tilde{Z}$ with the eigenvalue $-\frac{\mathrm{i}}{2}$. In addition
$v_{-1/2}$ is annihilated by $L^{\\!+}$. Thus the chain (3.16) terminates to
the right and the complete set of eigenvectors of the harmonic oscillator
Hamiltonian is presented by $(L^{\\!-})^{k}v_{-1/2}$ with $k=0,1,2,\ldots$.
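The termination of the chain and the shape of the excited states can be checked directly from (7.47); the following sympy sketch (not part of the original exposition) performs the first steps.
```python
# A sympy sketch of the chain (3.16): rho(L^+) of (7.47) annihilates
# v_{-1/2}(q) = exp(-pi*q**2/hbar), while repeated application of rho(L^-)
# produces polynomial-times-Gaussian functions (Hermite functions up to factors).
from sympy import symbols, I, pi, exp, diff, simplify, factor

q, hbar = symbols('q hbar', positive=True)

Lp = lambda u: 2*pi*I*q*u + I*hbar*diff(u, q)   # rho(L^+), eq. (7.47)
Lm = lambda u: 2*pi*I*q*u - I*hbar*diff(u, q)   # rho(L^-), eq. (7.47)

v = exp(-pi*q**2/hbar)
print(simplify(Lp(v)))             # 0: the chain terminates to the right
print(factor(simplify(Lm(v))))     # 4*I*pi*q*exp(-pi*q**2/hbar)
print(factor(simplify(Lm(Lm(v))))) # proportional to (4*pi*q**2 - hbar)*exp(...)
```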
We can make a wavelet transform generated by the Heisenberg group with the
mother wavelet $v_{-1/2}$, and the image will be the Fock–Segal–Bargmann (FSB)
space [Howe80b] [Folland89]*§ 1.6. Since $v_{-1/2}$ is the null solution of
$L^{\\!+}=\tilde{X}-\mathrm{i}\tilde{Y}$, then by Cor. 5.6 the image of the
wavelet transform will be null-solutions of the corresponding linear
combination of the Lie derivatives (7.4):
(7.48)
$D=\overline{X^{r}-\mathrm{i}Y^{r}}=(\partial_{x}+\mathrm{i}\partial_{y})-\pi\hslash(x-\mathrm{i}y),$
which turns out to be the Cauchy–Riemann equation on a weighted FSB-type
space.
##### 7.4.2. Symplectic Ladder Operators
We can also look for ladder operators within the Lie algebra $\mathfrak{sp}_{2}$, see § 3.3.1 and [Kisil09c]*§ 8. Assuming $L_{2}^{\\!+}=a\tilde{A}+b\tilde{B}+c\tilde{Z}$, from the relations (3.13) and the defining condition (3.14) we obtain the linear equations with unknowns $a$, $b$ and $c$:
$c=0,\qquad 2a=\lambda_{+}b,\qquad-2b=\lambda_{+}a.$
The equations have a solution if and only if $\lambda_{+}^{2}+4=0$, and the
raising/lowering operators are
$L_{2}^{\\!\pm}=\pm\mathrm{i}\tilde{A}+\tilde{B}$. In the Shale–Weil
representation (7.13) they turn out to be:
(7.49)
$L_{2}^{\\!\pm}=\pm\mathrm{i}\left(\frac{q}{2}\frac{d}{dq}+\frac{1}{4}\right)-\frac{\hslash\mathrm{i}}{8\pi}\frac{d^{2}}{dq^{2}}-\frac{\pi\mathrm{i}q^{2}}{2\hslash}=-\frac{\mathrm{i}}{8\pi\hslash}\left(\mp
2\pi q+\hslash\frac{d}{dq}\right)^{2}.$
Since this time $\lambda_{+}=2\mathrm{i}$, the ladder operators $L_{2}^{\\!\pm}$ produce a shift on the diagram (3.16) twice as big as that of the operators $L^{\\!\pm}$ from the Heisenberg group. After all, this is not surprising since from the explicit representations (7.47) and (7.49) we get:
$L_{2}^{\\!\pm}=-\frac{\mathrm{i}}{8\pi\hslash}(L^{\\!\pm})^{2}.$
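The agreement of the two closed forms in (7.49) is a routine expansion of the square; a short sympy check applied to a generic smooth function is sketched below.
```python
# A sympy check that the two expressions in (7.49) for L2^{+-} agree, i.e. that
# +-i(q/2 d/dq + 1/4) - i*hbar/(8*pi) d^2/dq^2 - i*pi*q**2/(2*hbar) equals
# -i/(8*pi*hbar) (\mp 2*pi*q + hbar d/dq)^2 as operators on a generic f(q).
from sympy import symbols, Function, I, pi, diff, simplify

q, hbar = symbols('q hbar', positive=True)
f = Function('f')(q)

for sign in (+1, -1):
    lhs = (sign*I*(q/2*diff(f, q) + f/4)
           - hbar*I/(8*pi)*diff(f, q, 2) - pi*I*q**2/(2*hbar)*f)
    g = -sign*2*pi*q*f + hbar*diff(f, q)                  # (\mp 2*pi*q + hbar*d/dq) f
    rhs = -I/(8*pi*hbar)*(-sign*2*pi*q*g + hbar*diff(g, q))
    print(simplify(lhs - rhs))                            # -> 0 for both signs
```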
#### 7.5. Hyperbolic Quantum Mechanics
Now we turn to double numbers, also known as hyperbolic, split-complex, etc. numbers [Yaglom79]*App. C [Ulrych05a] [KhrennikovSegre07a]. They form a two-dimensional algebra $\mathbb{O}{}$ spanned by $1$ and $\mathrm{j}$ with the property $\mathrm{j}^{2}=1$. There are zero divisors:
$\mathrm{j}_{\pm}=\textstyle\frac{1}{2}(1\pm\mathrm{j}),\qquad\text{ such that }\quad\mathrm{j}_{+}\mathrm{j}_{-}=0\quad\text{ and }\quad\mathrm{j}_{\pm}^{2}=\mathrm{j}_{\pm}.$
Thus double numbers are algebraically isomorphic to two copies of $\mathbb{R}{}$ spanned by $\mathrm{j}_{\pm}$. Although algebraically dull, double numbers are nevertheless interesting as a homogeneous space [Kisil05a, Kisil09c] and they are relevant in physics [Khrennikov05a, Ulrych05a, Ulrych08a]. The combination of the p-mechanical approach with hyperbolic quantum mechanics was already discussed in [BrodlieKisil03a]*§ 6.
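As a side illustration (not from the original text), double-number arithmetic can be modelled by pairs of real components; the sketch below verifies the stated properties of the zero divisors $\mathrm{j}_{\pm}$.
```python
# A minimal sketch of double-number arithmetic: w = a + b*j with j**2 = 1 is
# modelled by the pair (a, b); the products verify that j_+ and j_- are
# idempotent zero divisors, so O splits into two copies of R.
from dataclasses import dataclass

@dataclass
class Double:
    a: float      # real part
    b: float      # coefficient of j
    def __mul__(self, other):
        # (a + b*j)(c + d*j) = (a*c + b*d) + (a*d + b*c)*j, since j**2 = 1
        return Double(self.a*other.a + self.b*other.b,
                      self.a*other.b + self.b*other.a)

jp = Double(0.5,  0.5)    # j_+ = (1 + j)/2
jm = Double(0.5, -0.5)    # j_- = (1 - j)/2
print(jp*jm)              # all components 0:  j_+ j_- = 0
print(jp*jp, jm*jm)       # j_+ and j_- again: j_{+-}**2 = j_{+-}
```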
For the hyperbolic character $\chi_{\mathrm{j}h}(s)=e^{\mathrm{j}hs}=\cosh
hs+\mathrm{j}\sinh hs$ of $\mathbb{R}{}$ one can define the hyperbolic
Fourier-type transform:
$\hat{k}(q)=\int_{\mathbb{R}{}}k(x)\,e^{-\mathrm{j}qx}dx.$
It can be understood in the sense of distributions on the space dual to the set of analytic functions [Khrennikov08a]*§ 3. The hyperbolic Fourier transform intertwines the derivative $\frac{d}{dx}$ and multiplication by $\mathrm{j}q$ [Khrennikov08a]*Prop. 1.
###### Example 7.16.
For the Gaussian the hyperbolic Fourier transform is the ordinary function
(note the sign difference!):
$\int_{\mathbb{R}{}}e^{-x^{2}/2}e^{-\mathrm{j}qx}dx=\sqrt{2\pi}\,e^{q^{2}/2}.$
However the opposite identity:
$\int_{\mathbb{R}{}}e^{x^{2}/2}e^{-\mathrm{j}qx}dx=\sqrt{2\pi}\,e^{-q^{2}/2}$
is true only in a suitable distributional sense. To this end we may note that
$e^{x^{2}/2}$ and $e^{-q^{2}/2}$ are null solutions to the differential
operators $\frac{d}{dx}-x$ and $\frac{d}{dq}+q$ respectively, which are
intertwined (up to the factor $\mathrm{j}$) by the hyperbolic Fourier
transform. The above differential operators $\frac{d}{dx}-x$ and $\frac{d}{dq}+q$ are images of the ladder operators (7.47) in the Lie algebra of the Heisenberg group. They are intertwined by the Fourier transform, since it is an automorphism of the Heisenberg group [Howe80a].
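The first identity of this example can be checked componentwise: since $\mathrm{j}^{2}=1$ we have $e^{-\mathrm{j}qx}=\cosh qx-\mathrm{j}\sinh qx$, and the two real integrals can be evaluated, e.g., with sympy as sketched below.
```python
# A sympy sketch of the first identity of Example 7.16: split exp(-j*q*x) into
# cosh(q*x) - j*sinh(q*x) (possible since j**2 = 1) and integrate each real
# component of exp(-x**2/2)*exp(-j*q*x) over the real line.
from sympy import symbols, integrate, exp, oo, simplify, sqrt, pi

x, q = symbols('x q', real=True)

coeff_1 = integrate(exp(-x**2/2)*(exp(q*x) + exp(-q*x))/2, (x, -oo, oo))  # cosh part
coeff_j = integrate(exp(-x**2/2)*(exp(q*x) - exp(-q*x))/2, (x, -oo, oo))  # sinh part

print(simplify(coeff_1 - sqrt(2*pi)*exp(q**2/2)))   # -> 0
print(simplify(coeff_j))                             # -> 0 (odd integrand)
```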
An elegant theory of hyperbolic Fourier transform may be achieved by a
suitable adaptation of [Howe80a], which uses representation theory of the
Heisenberg group.
##### 7.5.1. Hyperbolic Representations of the Heisenberg Group
Consider the space $F_{h}^{\mathrm{j}}{}(\mathbb{H}^{n}{})$ of
$\mathbb{O}{}$-valued functions on $\mathbb{H}^{n}{}$ with the property:
(7.50) $f(s+s^{\prime},x,y)=e^{\mathrm{j}hs^{\prime}}f(s,x,y),\qquad\text{ for
all }(s,x,y)\in\mathbb{H}^{n}{},\ s^{\prime}\in\mathbb{R}{},$
and the square integrability condition (7.7). Then the hyperbolic
representation is obtained by the restriction of the left shifts to
$F_{h}^{\mathrm{j}}{}(\mathbb{H}^{n}{})$. To obtain an equivalent
representation on the phase space we take $\mathbb{O}{}$-valued functional of
the Lie algebra $\mathfrak{h}_{n}$:
(7.51)
$\chi^{j}_{(h,q,p)}(s,x,y)=e^{\mathrm{j}(hs+qx+py)}=\cosh(hs+qx+py)+\mathrm{j}\sinh(hs+qx+py).$
The hyperbolic Fock–Segal–Bargmann type representation is intertwined with the
left group action by means of the Fourier transform (7.32) with the hyperbolic
functional (7.51). Explicitly this representation is:
(7.52) ${\rho_{\hslash}}(s,x,y):f(q,p)\mapsto\textstyle
e^{-\mathrm{j}(hs+qx+py)}f\left(q-\frac{h}{2}y,p+\frac{h}{2}x\right).$
For a hyperbolic Schrödinger type representation we again use the scheme
described in § 3.2. Similarly to the elliptic case one obtains the formula,
resembling (7.34):
(7.53)
$[{\rho^{\mathrm{j}}_{\chi}}(s^{\prime},x^{\prime},y^{\prime})f](x)=e^{-\mathrm{j}h(s^{\prime}+xy^{\prime}-x^{\prime}y^{\prime}/2)}f(x-x^{\prime}).$
Application of the hyperbolic Fourier transform produces a Schrödinger type
representation on the configuration space, cf. (7.35):
$[{\rho^{\mathrm{j}}_{\chi}}(s^{\prime},x^{\prime},y^{\prime})\hat{f}\,](q)=e^{-\mathrm{j}h(s^{\prime}+x^{\prime}y^{\prime}/2)-\mathrm{j}x^{\prime}q}\,\hat{f}(q+hy^{\prime}).$
The extension of this representation to kernels according to (7.21) generates
hyperbolic pseudodifferential operators introduced in [Khrennikov08a]*(3.4).
##### 7.5.2. Hyperbolic Dynamics
Similarly to the elliptic (quantum) case we consider a convolution of two
kernels on $\mathbb{H}^{n}{}$ restricted to
$F_{h}^{\mathrm{j}}{}(\mathbb{H}^{n}{})$. The composition law becomes, cf.
(7.36):
(7.54)
$(k^{\prime}*k)\hat{{}_{s}}=\int_{\mathbb{R}^{2n}{}}e^{{\mathrm{j}h}{}(xy^{\prime}-yx^{\prime})}\,\hat{k}^{\prime}_{s}(h,x^{\prime},y^{\prime})\,\hat{k}_{s}(h,x-x^{\prime},y-y^{\prime})\,dx^{\prime}dy^{\prime}.$
This is close to the calculus of hyperbolic PDO obtained in
[Khrennikov08a]*Thm. 2. Correspondingly, for the commutator of two convolutions we get, cf. (7.3.2):
(7.55)
$[k^{\prime},k]\hat{{}_{s}}=\int_{\mathbb{R}^{2n}{}}\\!\\!\sinh(h(xy^{\prime}-yx^{\prime}))\,\hat{k}^{\prime}_{s}(h,x^{\prime},y^{\prime})\,\hat{k}_{s}(h,x-x^{\prime},y-y^{\prime})\,dx^{\prime}dy^{\prime}.$
This is the hyperbolic version of the Moyal bracket, cf. [Khrennikov08a]*p. 849, which generates the corresponding image of the dynamic equation (7.23).
###### Example 7.17.
1. (i)
For a quadratic Hamiltonian, e.g. the harmonic oscillator from Example 7.6, the hyperbolic equation and the respective dynamics are identical to the quantum ones considered before.
2. (ii)
Since $\frac{\partial}{\partial s}$ acts on
$F_{2}^{\mathrm{j}}{}(\mathbb{H}^{n}{})$ as multiplication by $\mathrm{j}h$
and $\mathrm{j}^{2}=1$, the hyperbolic image of the unharmonic equation (7.28)
becomes:
$\dot{f}=\left(mk^{2}q\frac{\partial}{\partial
p}+\frac{\lambda}{6}\left(3q^{2}\frac{\partial}{\partial
p}+\frac{\hslash^{2}}{4}\frac{\partial^{3}}{\partial
p^{3}}\right)-\frac{1}{m}p\frac{\partial}{\partial q}\right)f.$
The difference from the quantum mechanical equation (7.40) is the sign of the cubic derivative term.
##### 7.5.3. Hyperbolic Probabilities
Figure 14. Hyperbolic probabilities: the blue (dashed) graph shows the addition of probabilities without interaction, the red (solid) graph presents the quantum interference. Panel (a) shows the Gaussian state (7.42), with the same distribution as in quantum mechanics, cf. Fig. 13(a). Panel (b) shows the rational state (7.45); note the absence of interference oscillations in comparison with the quantum state in Fig. 13(b).
To calculate the probability distribution generated by a hyperbolic state we use the general procedure from Section 7.2.2. The main differences from the quantum case are as follows:
1. (i)
The real number $A$ in the expression (7.31) for the addition of probabilities is bigger than $1$ in absolute value. Thus it can be associated with the hyperbolic cosine $\cosh\alpha$, cf. Rem. 7.11, for a certain phase $\alpha\in\mathbb{R}{}$ [Khrennikov08a].
2. (ii)
The nature of hyperbolic interference on two slits is affected by the fact that $e^{\mathrm{j}hs}$ is not periodic and the hyperbolic exponent $e^{\mathrm{j}t}$ and cosine $\cosh t$ do not oscillate. It is worth noticing that for Gaussian states the hyperbolic interference is exactly the same as the quantum one, cf. Figs. 13(a) and 14(a). This is similar to the coincidence of the quantum and hyperbolic dynamics of the harmonic oscillator.
The contrast between the two types of interference is prominent for the rational state (7.45), which is far from the minimal uncertainty, see the different patterns in Figs. 13(b) and 14(b).
##### 7.5.4. Ladder Operators for the Hyperbolic Subgroup
Consider the case of the Hamiltonian $H=2B$, which is a repulsive (hyperbolic)
harmonic oscillator [Wulfman10a]*§ 3.8. The corresponding one-dimensional
subgroup of symplectomorphisms produces hyperbolic rotations of the phase
space, see Fig. 9. The eigenvectors $v_{\nu}$ of the operator
${\rho^{\text{SW}}_{\hslash}}(2B)v_{\nu}=-\mathrm{i}\left(\frac{\hslash}{4\pi}\frac{d^{2}}{dq^{2}}+\frac{\pi
q^{2}}{\hslash}\right)v_{\nu}=\mathrm{i}\nu v_{\nu},$
are _Weber–Hermite_ (or _parabolic cylinder_) functions
$v_{\nu}=D_{\nu-\frac{1}{2}}\left(\pm
2e^{\mathrm{i}\frac{\pi}{4}}\sqrt{\frac{\pi}{\hslash}}q\right)$, see
[ErdelyiMagnusII]*§ 8.2 [SrivastavaTuanYakubovich00a] for fundamentals of
Weber–Hermite functions and [ATorre08a] for further illustrations and
applications in optics.
The corresponding one-parameter group is not compact and the eigenvalues of
the operator $2\tilde{B}$ are not restricted by any integrality condition, but
the raising/lowering operators are still important [HoweTan92]*§ II.1
[Mazorchuk09a]*§ 1.1. We again seek solutions in two subalgebras
$\mathfrak{h}_{1}$ and $\mathfrak{sp}_{2}$ separately. However the additional
options will be provided by a choice of the number system: either complex or
double.
###### Example 7.18 (Complex Ladder Operators).
Assuming $L_{h}^{\\!+}=a\tilde{X}+b\tilde{Y}$ from the commutators (7.10–7.11)
we obtain the linear equations:
(7.56) $-a=\lambda_{+}b,\qquad-b=\lambda_{+}a.$
The equations have a solution if and only if $\lambda_{+}^{2}-1=0$. Taking the
real roots $\lambda=\pm 1$ we obtain that the raising/lowering operators are
$L_{h}^{\\!\pm}=\tilde{X}\mp\tilde{Y}$. In the Schrödinger representation
(7.12) the ladder operators are
(7.57) $L_{h}^{\\!\pm}=2\pi\mathrm{i}q\pm\hslash\frac{d}{dq}.$
The null solutions
$v_{\pm\frac{1}{2}}(q)=e^{\pm\frac{\pi\mathrm{i}}{\hslash}q^{2}}$ to operators
${\rho_{\hslash}}(L^{\\!\pm})$ are also eigenvectors of the Hamiltonian
${\rho^{\text{SW}}_{\hslash}}(2B)$ with the eigenvalue $\pm\frac{1}{2}$.
However, the important distinction from the elliptic case is that they are no longer square-integrable on the real line.
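A quick symbolic check of the null solutions applies both operators of (7.57) to both exponentials and confirms that each operator annihilates exactly one of them:
```python
# A sympy sketch: apply the hyperbolic ladder operators (7.57) to the
# exponentials v_{+-1/2}(q) = exp(+-i*pi*q**2/hbar); each operator kills
# exactly one of them (and, unlike the elliptic ground state, neither
# exponential is square-integrable on the real line).
from sympy import symbols, I, pi, exp, diff, simplify

q, hbar = symbols('q hbar', positive=True)
Lh = {+1: lambda u: 2*pi*I*q*u + hbar*diff(u, q),   # L_h^+, eq. (7.57)
      -1: lambda u: 2*pi*I*q*u - hbar*diff(u, q)}   # L_h^-, eq. (7.57)

for s_op in (+1, -1):
    for s_v in (+1, -1):
        v = exp(s_v*pi*I*q**2/hbar)
        print(s_op, s_v, simplify(Lh[s_op](v)))     # one zero for each operator
```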
We can also look for ladder operators within $\mathfrak{sp}_{2}$, that is, in the form $L_{2h}^{\\!+}=a\tilde{A}+b\tilde{B}+c\tilde{Z}$ for the commutator $[2\tilde{B},L_{2h}^{\\!+}]=\lambda L_{2h}^{\\!+}$, see § 3.3.2.
Within complex numbers we get only the values $\lambda=\pm 2$ with the ladder
operators $L_{2h}^{\\!\pm}=\pm 2\tilde{A}+\tilde{Z}/2$, see [HoweTan92]*§ II.1
[Mazorchuk09a]*§ 1.1. Each indecomposable $\mathfrak{h}_{1}$\- or
$\mathfrak{sp}_{2}$-module is formed by a one-dimensional chain of eigenvalues
with a transitive action of ladder operators $L_{h}^{\\!\pm}$ or
$L_{2h}^{\\!\pm}$ respectively. And we again have a quadratic relation between
the ladder operators:
$L_{2h}^{\\!\pm}=\frac{\mathrm{i}}{4\pi\hslash}(L_{h}^{\\!\pm})^{2}.$
##### 7.5.5. Double Ladder Operators
There are extra possibilities in the context of hyperbolic quantum mechanics [Khrennikov03a] [Khrennikov05a] [Khrennikov08a]. Here we use the
representation of $\mathbb{H}^{1}{}$ induced by a hyperbolic character
$e^{\mathrm{j}ht}=\cosh(ht)+\mathrm{j}\sinh(ht)$, see [Kisil10a]*(4.5), and
obtain the hyperbolic representation of $\mathbb{H}^{1}{}$, cf. (7.35):
(7.58)
$[{\rho^{\mathrm{j}}_{h}}(s^{\prime},x^{\prime},y^{\prime})\hat{f}\,](q)=e^{\mathrm{j}h(s^{\prime}-x^{\prime}y^{\prime}/2)+\mathrm{j}x^{\prime}q}\,\hat{f}(q-hy^{\prime}).$
The corresponding derived representation is
(7.59)
${\rho^{\mathrm{j}}_{h}}(X)=\mathrm{j}q,\qquad{\rho^{\mathrm{j}}_{h}}(Y)=-h\frac{d}{dq},\qquad{\rho^{\mathrm{j}}_{h}}(S)=\mathrm{j}hI.$
Then the associated Shale–Weil derived representation of $\mathfrak{sp}_{2}$
in the Schwartz space $S{}(\mathbb{R}{})$ is, cf. (7.13):
(7.60)
${\rho^{\text{SW}}_{h}}(A)=-\frac{q}{2}\frac{d}{dq}-\frac{1}{4},\quad{\rho^{\text{SW}}_{h}}(B)=\frac{\mathrm{j}h}{4}\frac{d^{2}}{dq^{2}}-\frac{\mathrm{j}q^{2}}{4h},\quad{\rho^{\text{SW}}_{h}}(Z)=-\frac{\mathrm{j}h}{2}\frac{d^{2}}{dq^{2}}-\frac{\mathrm{j}q^{2}}{2h}.$
Note that ${\rho^{\text{SW}}_{h}}(B)$ now generates a usual harmonic
oscillator, not the repulsive one like ${\rho^{\text{SW}}_{\hslash}}(B)$ in
(7.13). However the expressions in the quadratic algebra are still the same
(up to a factor), cf. (7.4–7.16):
$\displaystyle\qquad{\rho^{\text{SW}}_{h}}(A)$ $\displaystyle=$
$\displaystyle-\frac{\mathrm{j}}{2h}({\rho^{\mathrm{j}}_{h}}(X){\rho^{\mathrm{j}}_{h}}(Y)-{\textstyle\frac{1}{2}}{\rho^{\mathrm{j}}_{h}}(S))$
$\displaystyle=$
$\displaystyle-\frac{\mathrm{j}}{4h}({\rho^{\mathrm{j}}_{h}}(X){\rho^{\mathrm{j}}_{h}}(Y)+{\rho^{\mathrm{j}}_{h}}(Y){\rho^{\mathrm{j}}_{h}}(X)),$
(7.62) $\displaystyle{\rho^{\text{SW}}_{h}}(B)$ $\displaystyle=$
$\displaystyle\frac{\mathrm{j}}{4h}({\rho^{\mathrm{j}}_{h}}(X)^{2}-{\rho^{\mathrm{j}}_{h}}(Y)^{2}),$
(7.63) $\displaystyle{\rho^{\text{SW}}_{h}}(Z)$ $\displaystyle=$
$\displaystyle-\frac{\mathrm{j}}{2h}({\rho^{\mathrm{j}}_{h}}(X)^{2}+{\rho^{\mathrm{j}}_{h}}(Y)^{2}).$
This is due to the Principle 3.5 of similarity and correspondence: we can swap
operators $Z$ and $B$ with simultaneous replacement of hypercomplex units
$\mathrm{i}$ and $\mathrm{j}$.
The eigenspace of the operator $2{\rho^{\text{SW}}_{h}}(B)$ with an eigenvalue $\mathrm{j}\nu$ is spanned by the Weber–Hermite functions $D_{-\nu-\frac{1}{2}}\left(\pm\sqrt{\frac{2}{h}}x\right)$, see [ErdelyiMagnusII]*§ 8.2. The functions $D_{\nu}$ are generalisations of the Hermite functions (7.46).
The compatibility condition for a ladder operator within the Lie algebra
$\mathfrak{h}_{1}$ will be (7.56) as before, since it depends only on the
commutators (7.10–7.11). Thus we still have the set of ladder operators
corresponding to values $\lambda=\pm 1$:
$L_{h}^{\\!\pm}=\tilde{X}\mp\tilde{Y}=\mathrm{j}q\pm h\frac{d}{dq}.$
Admitting double numbers we have an extra way to satisfy $\lambda^{2}=1$ in
(7.56) with values $\lambda=\pm\mathrm{j}$. Then there is an additional pair
of hyperbolic ladder operators, which are identical (up to factors) to (7.47):
$L_{\mathrm{j}}^{\\!\pm}=\tilde{X}\mp\mathrm{j}\tilde{Y}=\mathrm{j}q\pm\mathrm{j}h\frac{d}{dq}.$
Pairs $L_{h}^{\\!\pm}$ and $L_{\mathrm{j}}^{\\!\pm}$ shift eigenvectors in the
“orthogonal” directions changing their eigenvalues by $\pm 1$ and
$\pm\mathrm{j}$. Therefore an indecomposable $\mathfrak{sp}_{2}$-module can be
parametrised by a two-dimensional lattice of eigenvalues in double numbers,
see Fig. 10.
The following functions
$\displaystyle v_{\frac{1}{2}}^{\pm h}(q)$ $\displaystyle=$ $\displaystyle
e^{\mp\mathrm{j}q^{2}/(2h)}=\cosh\frac{q^{2}}{2h}\mp\mathrm{j}\sinh\frac{q^{2}}{2h},$
$\displaystyle v_{\frac{1}{2}}^{\pm\mathrm{j}}(q)$ $\displaystyle=$
$\displaystyle e^{\mp q^{2}/(2h)}$
are null solutions to the operators $L_{h}^{\\!\pm}$ and
$L_{\mathrm{j}}^{\\!\pm}$ respectively. They are also eigenvectors of
$2{\rho^{\text{SW}}_{h}}(B)$ with eigenvalues $\mp\frac{\mathrm{j}}{2}$ and
$\mp\frac{1}{2}$ respectively. If these functions are used as mother wavelets
for the wavelet transforms generated by the Heisenberg group, then the image
space will consist of the null-solutions of the following differential
operators, see Cor. 5.6:
$\textstyle
D_{h}=\overline{X^{r}-Y^{r}}=(\partial_{x}-\partial_{y})+\frac{h}{2}(x+y),\qquad
D_{\mathrm{j}}=\overline{X^{r}-\mathrm{j}Y^{r}}=(\partial_{x}+\mathrm{j}\partial_{y})-\frac{h}{2}(x-\mathrm{j}y),$
for $v_{\frac{1}{2}}^{\pm h}$ and $v_{\frac{1}{2}}^{\pm\mathrm{j}}$
respectively. This is again in line with the classical result (7.48). However
annihilation of the eigenvector by a ladder operator does not mean that the
part of the 2D-lattice becomes void since it can be reached via alternative
routes on this lattice. Instead of multiplication by a zero, as it happens in
the elliptic case, a half-plane of eigenvalues will be multiplied by the
divisors of zero $1\pm\mathrm{j}$.
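Both families of null solutions can be confirmed by direct differentiation; in the sympy sketch below the hyperbolic unit is treated as a formal symbol $J$.
```python
# A sympy sketch (the hyperbolic unit j is represented by a formal symbol J):
# the functions exp(-J*q**2/(2h)) and exp(-q**2/(2h)) are annihilated by
# L_h^+ = j*q + h*d/dq and L_j^+ = j*q + j*h*d/dq respectively, as stated above.
from sympy import symbols, exp, diff, simplify

q, h, J = symbols('q h J', positive=True)

Lh_plus = lambda u: J*q*u + h*diff(u, q)
Lj_plus = lambda u: J*q*u + J*h*diff(u, q)

print(simplify(Lh_plus(exp(-J*q**2/(2*h)))))   # -> 0
print(simplify(Lj_plus(exp(-q**2/(2*h)))))     # -> 0
```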
We can also search for ladder operators within the algebra $\mathfrak{sp}_{2}$; admitting double numbers we again find two sets of them, cf. § 3.3.2:
$\displaystyle L_{2h}^{\\!\pm}$ $\displaystyle=$
$\displaystyle\pm\tilde{A}+\tilde{Z}/2=\mp\frac{q}{2}\frac{d}{dq}\mp\frac{1}{4}-\frac{\mathrm{j}h}{4}\frac{d^{2}}{dq^{2}}-\frac{\mathrm{j}q^{2}}{4h}=-\frac{\mathrm{j}}{4h}(L_{h}^{\\!\pm})^{2},$
$\displaystyle L_{2\mathrm{j}}^{\\!\pm}$ $\displaystyle=$
$\displaystyle\pm\mathrm{j}\tilde{A}+\tilde{Z}/2=\mp\frac{\mathrm{j}q}{2}\frac{d}{dq}\mp\frac{\mathrm{j}}{4}-\frac{\mathrm{j}h}{4}\frac{d^{2}}{dq^{2}}-\frac{\mathrm{j}q^{2}}{4h}=-\frac{\mathrm{j}}{4h}(L_{\mathrm{j}}^{\\!\pm})^{2}.$
Again the operators $L_{2h}^{\\!\pm}$ and $L_{2\mathrm{j}}^{\\!\pm}$ produce double shifts in the orthogonal directions on the same two-dimensional lattice in Fig. 10.
#### 7.6. Parabolic (Classical) Representations on the Phase Space
After the previous two cases it is natural to link classical mechanics with dual numbers generated by the parabolic unit $\varepsilon^{2}=0$. The connection of the parabolic unit $\varepsilon$ with the Galilean group of symmetries of classical mechanics has been known for a while [Yaglom79]*App. C.
However, the nilpotency of the parabolic unit $\varepsilon$ makes it difficult to work with dual-number-valued functions only. To overcome this issue we consider a commutative real algebra $\mathfrak{C}$ spanned by $1$, $\mathrm{i}$, $\varepsilon$ and $\mathrm{i}\varepsilon$ with identities $\mathrm{i}^{2}=-1$ and $\varepsilon^{2}=0$. A seminorm on $\mathfrak{C}$ is defined as follows:
$\left|a+b\mathrm{i}+c\varepsilon+d\mathrm{i}\varepsilon\right|^{2}=a^{2}+b^{2}.$
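As a small computational aside (not part of the original text), the algebra $\mathfrak{C}$ and its seminorm can be modelled by four real components:
```python
# A minimal sketch of the commutative algebra C spanned by 1, i, eps, i*eps
# (with i**2 = -1, eps**2 = 0), stored as four real components, together with
# the seminorm above, which ignores the nilpotent part.
from dataclasses import dataclass

@dataclass
class CNum:
    a: float = 0.0   # 1-component
    b: float = 0.0   # i-component
    c: float = 0.0   # eps-component
    d: float = 0.0   # i*eps-component
    def __mul__(self, o):
        # every product term containing eps**2 vanishes
        return CNum(self.a*o.a - self.b*o.b,
                    self.a*o.b + self.b*o.a,
                    self.a*o.c + self.c*o.a - self.b*o.d - self.d*o.b,
                    self.a*o.d + self.d*o.a + self.b*o.c + self.c*o.b)
    def seminorm_sq(self):
        return self.a**2 + self.b**2

eps = CNum(c=1.0)
print(eps*eps)                              # all four components are 0: eps is nilpotent
print(CNum(a=2.0, c=5.0).seminorm_sq())     # 4.0: the eps-part does not contribute
```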
##### 7.6.1. Classical Non-Commutative Representations
We wish to build a representation of the Heisenberg group which will be a
classical analog of the Fock–Segal–Bargmann representation (7.17). To this end
we introduce the space $F_{h}^{\varepsilon}{}(\mathbb{H}^{n}{})$ of
$\mathfrak{C}$-valued functions on $\mathbb{H}^{n}{}$ with the property:
(7.64) $f(s+s^{\prime},x,y)=e^{\varepsilon hs^{\prime}}f(s,x,y),\qquad\text{
for all }(s,x,y)\in\mathbb{H}^{n}{},\ s^{\prime}\in\mathbb{R}{},$
and the square integrability condition (7.7). It is invariant under the left
shifts and we restrict the left group action to
$F_{h}^{\varepsilon}{}(\mathbb{H}^{n}{})$.
There is a unimodular $\mathfrak{C}$-valued function on the Heisenberg group parametrised by a point $(h,q,p)\in\mathbb{R}^{2n+1}{}$:
$E_{(h,q,p)}(s,x,y)=e^{2\pi(\varepsilon
s\hslash+\mathrm{i}xq+\mathrm{i}yp)}=e^{2\pi\mathrm{i}(xq+yp)}(1+\varepsilon
sh).$
This function, if used instead of the ordinary exponent, produces a
modification $\mathcal{F}_{c}$ of the Fourier transform (7.32). The transform
intertwines the left regular representation with the following action on
$\mathfrak{C}$-valued functions on the phase space:
(7.65) ${\rho^{\varepsilon}_{h}}(s,x,y):f(q,p)\mapsto
e^{-2\pi\mathrm{i}(xq+yp)}(f(q,p)+\varepsilon
h(sf(q,p)+\frac{y}{2\pi\mathrm{i}}f^{\prime}_{q}(q,p)-\frac{x}{2\pi\mathrm{i}}f^{\prime}_{p}(q,p))).$
###### Remark 7.19.
Comparing the traditional infinite-dimensional (7.17) and one-dimensional
(7.33) representations of $\mathbb{H}^{n}{}$ we can note that the properties
of the representation (7.65) are a non-trivial mixture of the former:
1. (i)
The action (7.65) is non-commutative, similarly to the quantum representation
(7.17) and unlike the classical one (7.33). This non-commutativity will
produce the Hamilton equations below in a way very similar to Heisenberg
equation, see Rem. 7.21.
2. (ii)
The representation (7.65) does not change the support of a function $f$ on the
phase space, similarly to the classical representation (7.33) and unlike the
quantum one (7.17). Such a localised action will be responsible later for an
absence of an interference in classical probabilities.
3. (iii)
The parabolic representation (7.65) cannot be derived from either the elliptic (7.17) or the hyperbolic (7.52) one by the plain substitution $h=0$.
We may also write a classical Schrödinger type representation. According to §
3.2 we get a representation formally very similar to the elliptic (7.34) and
hyperbolic versions (7.53):
$\displaystyle[{\rho^{\varepsilon}_{\chi}}(s^{\prime},x^{\prime},y^{\prime})f](x)$
$\displaystyle=$ $\displaystyle e^{-\varepsilon
h(s^{\prime}+xy^{\prime}-x^{\prime}y^{\prime}/2)}f(x-x^{\prime})$
$\displaystyle=$ $\displaystyle(1-\varepsilon
h(s^{\prime}+xy^{\prime}-\textstyle\frac{1}{2}x^{\prime}y^{\prime}))f(x-x^{\prime}).$
However, due to the nilpotency of $\varepsilon$, the (complex) Fourier transform $x\mapsto q$ produces a different formula for the parabolic Schrödinger type representation on the configuration space, cf. (7.35) and (7.58):
$[{\rho^{\varepsilon}_{\chi}}(s^{\prime},x^{\prime},y^{\prime})\hat{f}](q)=e^{2\pi\mathrm{i}x^{\prime}q}\left(\left(1-\varepsilon
h(s^{\prime}-{\textstyle\frac{1}{2}}x^{\prime}y^{\prime})\right)\hat{f}(q)+\frac{\varepsilon
hy^{\prime}}{2\pi\mathrm{i}}\hat{f}^{\prime}(q)\right).$
This representation shares all properties mentioned in Rem. 7.19 as well.
##### 7.6.2. Hamilton Equation
The identity $e^{\varepsilon t}-e^{-\varepsilon t}=2\varepsilon t$ can be
interpreted as a parabolic version of the sine function, while the parabolic
cosine is identically equal to one, cf. § 3.1 and [HerranzOrtegaSantander99a,
Kisil07a]. From this we obtain the parabolic version of the commutator
(7.3.2):
$\displaystyle[k^{\prime},k]\hat{{}_{s}}(\varepsilon h,x,y)$ $\displaystyle=$
$\displaystyle\varepsilon h\int_{\mathbb{R}^{2n}{}}(xy^{\prime}-yx^{\prime})$
$\displaystyle{}\times\,\hat{k}^{\prime}_{s}(\varepsilon
h,x^{\prime},y^{\prime})\,\hat{k}_{s}(\varepsilon
h,x-x^{\prime},y-y^{\prime})\,dx^{\prime}dy^{\prime},$
for the partial parabolic Fourier-type transform $\hat{k}_{s}$ of the kernels.
Thus the parabolic representation of the dynamical equation (7.23) becomes:
(7.67) $\varepsilon h\frac{d\hat{f}_{s}}{dt}(\varepsilon h,x,y;t)=\varepsilon
h\int_{\mathbb{R}^{2n}{}}(xy^{\prime}-yx^{\prime})\,\hat{H}_{s}(\varepsilon
h,x^{\prime},y^{\prime})\,\hat{f}_{s}(\varepsilon
h,x-x^{\prime},y-y^{\prime};t)\,dx^{\prime}dy^{\prime},$
Although there is no possibility to divide by $\varepsilon$ (since it is a zero divisor), we can obviously eliminate $\varepsilon h$ from both sides if the rest of the expressions are real. Moreover, this can be done “in advance” through a kind of antiderivative operator considered in [Kisil02e]*(4.1). This will prevent the “imaginary parts” of the remaining expressions (which contain the factor $\varepsilon$) from vanishing.
###### Remark 7.20.
It is noteworthy that the Planck constant completely disappeared from the
dynamical equation. Thus the only prediction about it following from our
construction is $h\neq 0$, which was confirmed by experiments, of course.
Using the duality between the Lie algebra of $\mathbb{H}^{n}{}$ and the phase
space we can find an adjoint equation for observables on the phase space. To
this end we apply the usual Fourier transform $(x,y)\mapsto(q,p)$. It turns out to be the Hamilton equation [Kisil02e]*(4.7). However, the transition to the phase space is more a custom than a necessity, and in many cases we can efficiently work on the Heisenberg group itself.
###### Remark 7.21.
It is noteworthy that the non-commutative representation (7.65) allows one to obtain the Hamilton equation directly from the commutator $[{\rho^{\varepsilon}_{h}}(k_{1}),{\rho^{\varepsilon}_{h}}(k_{2})]$. Indeed, its straightforward evaluation produces exactly the above expression. By contrast, such a commutator for the commutative representation (7.33) is zero, and to obtain the Hamilton equation we have to work with additional tools, e.g. an anti-derivative [Kisil02e]*(4.1).
###### Example 7.22.
1. (i)
For the harmonic oscillator in Example 7.6 the equation (7.67) again reduces
to the form (7.25) with the solution given by (7.26). The adjoint equation of
the harmonic oscillator on the phase space does not differ from the quantum one written in Example 7.10(i). This is true for any Hamiltonian of at most quadratic order.
2. (ii)
For non-quadratic Hamiltonians classical and quantum dynamics are different,
of course. For example, the cubic term of $\partial_{s}$ in the equation
(7.28) will generate the factor $\varepsilon^{3}=0$ and thus vanish. Thus the
equation (7.67) of the unharmonic oscillator on $\mathbb{H}^{n}{}$ becomes:
$\dot{f}=\left(mk^{2}y\frac{\partial}{\partial x}+\frac{\lambda
y}{2}\frac{\partial^{2}}{\partial x^{2}}-\frac{1}{m}x\frac{\partial}{\partial
y}\right)f.$
The adjoint equation on the phase space is:
$\dot{f}=\left(\left(mk^{2}q+\frac{\lambda}{2}q^{2}\right)\frac{\partial}{\partial
p}-\frac{1}{m}p\frac{\partial}{\partial q}\right)f.$
The last equation is the classical Hamilton equation generated by the cubic
potential (7.27). Qualitative analysis of its dynamics can be found in many
textbooks [Arnold91]*§ 4.C, Pic. 12 [PercivalRichards82]*§ 4.4.
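The last equation can be cross-checked against the usual Poisson bracket. In the sketch below the Hamiltonian of the cubic potential is assumed to be $H=p^{2}/(2m)+mk^{2}q^{2}/2+\lambda q^{3}/6$, which is consistent with the unharmonic equation (7.28); the bracket is taken in the order matching the signs used here.
```python
# A sympy sketch: the phase-space equation above equals the Poisson bracket of
# f with H = p**2/(2m) + m*k**2*q**2/2 + lambda*q**3/6 (an assumption consistent
# with (7.28)), taken as {H, f} = H_q*f_p - H_p*f_q to match the signs above.
from sympy import symbols, Function, diff, simplify

q, p, m, k, lam = symbols('q p m k lambda', positive=True)
f = Function('f')(q, p)

H = p**2/(2*m) + m*k**2*q**2/2 + lam*q**3/6
bracket = diff(H, q)*diff(f, p) - diff(H, p)*diff(f, q)
rhs = (m*k**2*q + lam*q**2/2)*diff(f, p) - p/m*diff(f, q)

print(simplify(bracket - rhs))   # -> 0
```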
###### Remark 7.23.
We have obtained the _Poisson bracket_ from the commutator of convolutions on
$\mathbb{H}^{n}{}$ without any quasiclassical limit $h\rightarrow 0$. This has
a common source with the deduction of main calculus theorems in
[CatoniCannataNichelatti04] based on dual numbers. As explained in
[Kisil05a]*Rem. 6.9 this is due to the similarity between the parabolic unit
$\varepsilon$ and the infinitesimal number used in non-standard analysis
[Devis77]. In other words, we never need to worry about terms of order $O(h^{2})$ because they are wiped out by $\varepsilon^{2}=0$.
An alternative derivation of classical dynamics from the Heisenberg group is
given in the recent paper [Low09a].
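A one-line illustration of this point (not from the original text): with a nilpotent $\varepsilon$ the Taylor expansion of a polynomial terminates exactly after the linear term, so the derivative appears without any limit.
```python
# A tiny sympy illustration of Rem. 7.23: imposing eps**2 = 0 on p(a + eps)
# leaves exactly p(a) + p'(a)*eps, i.e. the derivative arises with no limit
# h -> 0 and no O(h**2) remainder (the polynomial p below is arbitrary).
from sympy import symbols, expand, simplify

a, eps = symbols('a epsilon')
p = a**3 - 2*a + 5
shifted = expand(p.subs(a, a + eps)).subs(eps**3, 0).subs(eps**2, 0)
print(simplify(shifted - p - p.diff(a)*eps))    # -> 0
```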
##### 7.6.3. Classical probabilities
It is worth noticing that dual numbers are not only helpful in reproducing classical Hamiltonian dynamics, they also provide the classic rule for the addition of probabilities. We use the same formula (7.29) to calculate kernels of the states. The important difference now is that the representation (7.65) does not change the support of functions. Thus, if we calculate the correlation term $\left\langle v_{1},{\rho}(g)v_{2}\right\rangle$ in (7.30), it will be zero for every two vectors $v_{1}$ and $v_{2}$ which have disjoint supports in the phase space. Thus no interference similar to the quantum or hyperbolic cases (Subsection 7.3.3) is possible.
##### 7.6.4. Ladder Operator for the Nilpotent Subgroup
Finally we look for ladder operators for the Hamiltonian
$\tilde{B}+\tilde{Z}/2$ or, equivalently, $-\tilde{B}+\tilde{Z}/2$. It can be
identified with a free particle [Wulfman10a]*§ 3.8.
We can look for ladder operators in the representation (7.12–7.13) within the
Lie algebra $\mathfrak{h}_{1}$ in the form
$L_{\varepsilon}^{\\!\pm}=a\tilde{X}+b\tilde{Y}$. This is possible if and only
if
(7.68) $-b=\lambda a,\quad 0=\lambda b.$
The compatibility condition $\lambda^{2}=0$ implies $\lambda=0$ within complex
numbers. However such a “ladder” operator produces only the zero shift on the
eigenvectors, cf. (3.15).
Another possibility appears if we consider the representation of the
Heisenberg group induced by dual-valued characters. On the configuration space
such a representation is [Kisil10a]*(4.11):
(7.69)
$[{\rho^{\varepsilon}_{\chi}}(s,x,y)f](q)=e^{2\pi\mathrm{i}xq}\left(\left(1-\varepsilon
h(s-{\textstyle\frac{1}{2}}xy)\right)f(q)+\frac{\varepsilon
hy}{2\pi\mathrm{i}}f^{\prime}(q)\right).$
The corresponding derived representation of $\mathfrak{h}_{1}$ is
(7.70)
${\rho^{p}_{h}}(X)=2\pi\mathrm{i}q,\qquad{\rho^{p}_{h}}(Y)=\frac{\varepsilon
h}{2\pi\mathrm{i}}\frac{d}{dq},\qquad{\rho^{p}_{h}}(S)=-\varepsilon hI.$
However the Shale–Weil extension generated by this representation is
inconvenient. It is better to consider the FSB–type parabolic representation
(7.65) on the phase space induced by the same dual-valued character. Then the
derived representation of $\mathfrak{h}_{1}$ is:
(7.71) ${\rho^{p}_{h}}(X)=-2\pi\mathrm{i}q-\frac{\varepsilon
h}{4\pi\mathrm{i}}\partial_{p},\qquad{\rho^{p}_{h}}(Y)=-2\pi\mathrm{i}p+\frac{\varepsilon
h}{4\pi\mathrm{i}}\partial_{q},\qquad{\rho^{p}_{h}}(S)=\varepsilon hI.$
An advantage of the FSB representation is that the derived form of the
parabolic Shale–Weil representation coincides with the elliptic one (7.19).
Eigenfunctions with the eigenvalue $\mu$ of the parabolic Hamiltonian
$\tilde{B}+\tilde{Z}/2=q\partial_{p}$ have the form
(7.72) $v_{\mu}(q,p)=e^{\mu p/q}f(q),\text{ with an arbitrary function }f(q).$
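That these functions are indeed eigenfunctions is a one-line computation:
```python
# A one-line sympy check that v_mu(q, p) = exp(mu*p/q)*f(q) is an eigenfunction
# of the parabolic Hamiltonian q*d/dp with eigenvalue mu, as stated in (7.72).
from sympy import symbols, Function, exp, diff, simplify

q, p, mu = symbols('q p mu')
f = Function('f')(q)
v = exp(mu*p/q)*f
print(simplify(q*diff(v, p) - mu*v))   # -> 0
```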
The linear equations defining the corresponding ladder operator
$L_{\varepsilon}^{\\!\pm}=a\tilde{X}+b\tilde{Y}$ in the algebra
$\mathfrak{h}_{1}$ are (7.68). The compatibility condition $\lambda^{2}=0$
implies $\lambda=0$ within complex numbers again. Admitting dual numbers we have additional values $\lambda=\pm\varepsilon\lambda_{1}$ with $\lambda_{1}\in\mathbb{C}{}$ and the corresponding ladder operators
$L_{\varepsilon}^{\\!\pm}=\tilde{X}\mp\varepsilon\lambda_{1}\tilde{Y}=-2\pi\mathrm{i}q-\frac{\varepsilon
h}{4\pi\mathrm{i}}\partial_{p}\pm
2\pi\varepsilon\lambda_{1}\mathrm{i}p=-2\pi\mathrm{i}q+\varepsilon\mathrm{i}(\pm
2\pi\lambda_{1}p+\frac{h}{4\pi}\partial_{p}).$
For the eigenvalue $\mu=\mu_{0}+\varepsilon\mu_{1}$ with $\mu_{0}$,
$\mu_{1}\in\mathbb{C}{}$ the eigenfunction (7.72) can be rewritten as:
(7.73) $v_{\mu}(q,p)=e^{\mu
p/q}f(q)=e^{\mu_{0}p/q}\left(1+\varepsilon\mu_{1}\frac{p}{q}\right)f(q)$
due to the nilpotency of $\varepsilon$. Then the ladder action of
$L_{\varepsilon}^{\\!\pm}$ is
$\mu_{0}+\varepsilon\mu_{1}\mapsto\mu_{0}+\varepsilon(\mu_{1}\pm\lambda_{1})$.
Therefore these operators are suitable for building
$\mathfrak{sp}_{2}$-modules with a one-dimensional chain of eigenvalues.
Finally, consider the ladder operator for the same element $B+Z/2$ within the
Lie algebra $\mathfrak{sp}_{2}$, cf. § 3.3.3. With complex coefficients the only such operator is $L_{p}^{\\!\pm}=\tilde{B}+\tilde{Z}/2$, which does not affect the eigenvalues. However, dual numbers lead to the operators
$L_{\varepsilon}^{\\!\pm}=\pm\varepsilon\lambda_{2}\tilde{A}+\tilde{B}+\tilde{Z}/2=\pm\frac{\varepsilon\lambda_{2}}{2}\left(q\partial_{q}-p\partial_{p}\right)+q\partial_{p},\qquad\lambda_{2}\in\mathbb{C}{}.$
These operators act on the eigenvalues in a non-trivial way.
##### 7.6.5. Similarity and Correspondence
We wish to summarise our findings. Firstly, the appearance of hypercomplex
numbers in ladder operators for $\mathfrak{h}_{1}$ follows exactly the same
pattern as was already noted for $\mathfrak{sp}_{2}$, see Rem. 3.8:
* •
the introduction of complex numbers is a necessity for the _existence_ of
ladder operators in the elliptic case;
* •
in the parabolic case we need dual numbers to make ladder operators _useful_ ;
* •
in the hyperbolic case double numbers are required neither for the existence nor for the usability of ladder operators, but they do provide an enhancement.
In the spirit of the Similarity and Correspondence Principle 3.5 we have the
following extension of Prop. 3.9:
###### Proposition 7.24.
Let a vector $H\in\mathfrak{sp}_{2}$ generate the subgroup $K$, $N^{\prime}$ or $A\\!^{\prime}$, that is $H=Z$, $B+Z/2$, or $2B$ respectively. Let $\iota$ be the respective hypercomplex unit. Then the ladder operators $L^{\\!\pm}$ satisfying the commutation relation
$[H,L^{\\!\pm}]=\pm\iota L^{\\!\pm}$
are given by:
1. (i)
Within the Lie algebra $\mathfrak{h}_{1}$:
$L^{\\!\pm}=\tilde{X}\mp\iota\tilde{Y}.$
2. (ii)
Within the Lie algebra $\mathfrak{sp}_{2}$:
$L_{2}^{\\!\pm}=\pm\iota\tilde{A}+\tilde{E}$. Here $E\in\mathfrak{sp}_{2}$ is
a linear combination of $B$ and $Z$ with the properties:
* •
$E=[A,H]$.
* •
$H=[A,E]$.
* •
The Killing form $K(H,E)$ [Kirillov76]*§ 6.2 vanishes.
Any of the above properties defines the vector $E\in\mathop{\operator@font
span}\nolimits\\{B,Z\\}$ up to a real constant factor.
It is worth continuing this investigation and describing in detail the hyperbolic and parabolic versions of FSB spaces.
### 8\. Open Problems
A reader may have already noted numerous objects and results which deserve further consideration. It may also be worth stating some open problems explicitly. In this section we indicate several directions for further work, which run through the four main areas described in the paper.
#### 8.1. Geometry
Geometry is the most elaborated area so far, yet many directions are waiting for further exploration.
1. (i)
Möbius transformations (1.1) with three types of hypercomplex units appear
from the action of the group $SL_{2}{}(\mathbb{R}{})$ on the homogeneous space
$SL_{2}{}(\mathbb{R}{})/H$ [Kisil09c], where $H$ is any subgroup $A$, $N$, $K$
from the Iwasawa decomposition (1.3). Which other actions and hypercomplex
numbers can be obtained from other Lie groups and their subgroups?
2. (ii)
Lobachevsky geometry of the upper half-plane is an extremely beautiful and well-developed subject [Beardon05a] [CoxeterGreitzer]. However, the traditional study is limited to one subtype out of nine possible: with complex numbers for Möbius transformations and the complex imaginary unit used in FSCc (2.8). The remaining eight cases shall be explored in various directions, notably in the context of discrete subgroups [Beardon95].
3. (iii)
The Fillmore-Springer-Cnops construction, see subsection 2.2, is closely
related to the _orbit method_ [Kirillov99] applied to
$SL_{2}{}(\mathbb{R}{})$. An extension of the orbit method from the Lie
algebra dual to matrices representing cycles may be fruitful for semisimple
Lie groups.
4. (iv)
A development of a discrete version of the geometrical notions can be derived
from suitable discrete groups. A natural first example is the group
$\mathrm{SL}_{2}(\mathbb{F}{})$, where $\mathbb{F}{}$ is a finite field, e.g.
$\mathbb{Z}_{p}{}$ the field of integers modulo a prime $p$.
#### 8.2. Analytic Functions
It is known that in several dimensions there are different notions of
analyticity, e.g. several complex variables and Clifford analysis. However, analytic functions of a complex variable are usually thought to be the only option in a plane domain. The following directions seem promising:
1. (i)
Development of the basic components of analytic function theory (the Cauchy
integral, the Taylor expansion, the Cauchy-Riemann and Laplace equations,
etc.) from the same construction and principles in the elliptic, parabolic and
hyperbolic cases and respective subcases.
2. (ii)
Identification of Hilbert spaces of analytic functions of Hardy and Bergman
types, investigation of their properties. Consideration of the corresponding
Toeplitz operators and algebras generated by them.
3. (iii)
Application of analytic methods to elliptic, parabolic and hyperbolic
equations and corresponding boundary and initial values problems.
4. (iv)
Generalisation of the results obtained to higher dimensional spaces. Detailed
investigation of physically significant cases of three and four dimensions.
5. (v)
There is a current interest in construction of analytic function theory on
discrete sets. Our approach is ready for application to an analytic functions
in discrete geometric set-up outlined in item 8.1.iv above.
#### 8.3. Functional Calculus
The functional calculus of a finite dimensional operator considered in Section
6 is elementary but provides a coherent and comprehensive treatment. It shall
be extended to further cases where other approaches seem to be rather limited.
1. (i)
Nilpotent and quasinilpotent operators have the most trivial spectrum possible
(the single point $\\{0\\}$) while their structure can be highly non-trivial.
Thus the standard spectrum is insufficient for this class of operators. In contrast, the covariant calculus and spectrum give a complete description of nilpotent operators—the basic prototypes of quasinilpotent ones. For
quasinilpotent operators the construction will be more complicated and shall
use analytic functions mentioned in 8.2.i.
2. (ii)
The version of the covariant calculus described above is based on the _discrete series_ representations of the group $SL_{2}{}(\mathbb{R}{})$ and is particularly
suitable for the description of the _discrete spectrum_ (note the remarkable
coincidence in the names).
It is interesting to develop similar covariant calculi based on the two other
representation series of $SL_{2}{}(\mathbb{R}{})$: _principal_ and
_complementary_ [Lang85]. The corresponding versions of analytic function
theories for principal [Kisil97c] and complementary series [Kisil05a] were
initiated within a unifying framework. The classification of analytic function theories into elliptic, parabolic, hyperbolic [Kisil05a, Kisil06a] hints at the following associative chains:
Representations | Function Theory | Type of Spectrum
---|---|---
discrete series | elliptic | discrete spectrum
principal series | hyperbolic | continuous spectrum
complementary series | parabolic | residual spectrum
3. (iii)
Let $a$ be an operator with $\mathbf{sp}\,a\in\bar{\mathbb{D}{}}$ and
$\left\|a^{k}\right\|<Ck^{p}$. It is typical to consider instead of $a$ the
_power bounded_ operator $ra$, where $0<r<1$, and consequently develop its
$H_{\infty}{}$ calculus. However such a regularisation is very rough and hides
the nature of extreme points of $\mathbf{sp}\,{a}$. To restore full
information a subsequent limit transition $r\rightarrow 1$ of the
regularisation parameter $r$ is required. This makes the entire technique rather cumbersome, and many results have an indirect nature.
The regularisation $a^{k}\rightarrow a^{k}/k^{p}$ is more natural and accurate
for polynomially bounded operators. However it cannot be achieved within the
homomorphic calculus Defn. 6.1 because it is not compatible with any algebra
homomorphism. It may, however, be achieved within the covariant calculus Defn. 6.4 and the Bergman type space from 8.2.ii.
4. (iv)
Several non-commuting operators are especially difficult to treat with
functional calculus Defn. 6.1 or a joint spectrum. For example, deep insights into the joint spectrum of commuting tuples [JTaylor72] have so far refused to be generalised to the non-commuting case. The covariant calculus was initiated [Kisil95i] as
a new approach to this hard problem and was later found useful elsewhere as
well. Multidimensional covariant calculus [Kisil04d] shall use analytic
functions described in 8.2.iv.
5. (v)
As we noted above, there is a duality between the co- and contravariant calculi from Defins. 4.20 and 4.22. We have also seen in Section 6 that the functional calculus is an example of a contravariant calculus and the functional model is a case of
a covariant one. It is interesting to explore the duality between them
further.
#### 8.4. Quantum Mechanics
Due to space restrictions we have only touched on quantum mechanics; further details can be found in [Kisil96a] [Kisil02e] [Kisil05c] [Kisil04a] [Kisil09a] [Kisil10a]. In general, the Erlangen approach is much more popular among physicists than among mathematicians. Nevertheless its potential is not exhausted even there.
1. (i)
There is a possibility to build representations of the Heisenberg group using characters of its centre with values in dual and double numbers rather than in complex ones. This will naturally unify classical mechanics, traditional QM and hyperbolic QM [Khrennikov08a]. In particular, a full construction of the corresponding Fock–Segal–Bargmann spaces would be of interest.
2. (ii)
Representations of nilpotent Lie groups with multidimensional centres in
Clifford algebras as a framework for consistent quantum field theories based
on De Donder–Weyl formalism [Kisil04a].
###### Remark 8.1.
This work is performed within the “Erlangen programme at large” framework
[Kisil06a, Kisil05a], thus it would be suitable to explain the numbering of
various papers. Since the logical order may be different from the chronological one, the following numbering scheme is used:
Prefix | Branch description
---|---
“0” or no prefix | Mainly geometrical works, within the classical field of Erlangen programme by F. Klein, see [Kisil05a] [Kisil09c]
“1” | Papers on analytical functions theories and wavelets, e.g. [Kisil97c]
“2” | Papers on operator theory, functional calculi and spectra, e.g. [Kisil02a]
“3” | Papers on mathematical physics, e.g. [Kisil10a]
For example, [Kisil10a] is the first paper in the mathematical physics area.
The present paper [Kisil11c] outlines the whole framework and thus does not
carry a subdivision number. The on-line version of this paper may be updated
in due course to reflect the achieved progress.
### Acknowledgement
The material of these notes was lectured at various summer/winter schools and advanced courses. Those presentations helped me to clarify ideas and improve my understanding of the subject. In particular, I am grateful to Prof. S.V. Rogosin and Dr. A.A. Koroleva for a kind invitation to the Minsk Winter School in 2010, which was an exciting event. I would also like to acknowledge the support of the MAGIC group during my work on those notes.
## References
## Index
* $\mathrm{\breve{\i}}$ (hypercomplex unit in cycle space) §2.2
* $\iota$ (hypercomplex unit) §1.1
* $\breve{\sigma}$ ($\breve{\sigma}:=\mathrm{\breve{\i}}^{2}$) §2.2
* $A$ subgroup §1.1
* $A$-orbit §1.1
* action
* derived §1.1
* transitive §1
* admissible wavelet §4, Example 4.25, Example 4.7
* affine group, see $ax+b$ group
* algebra
* Clifford §2.6, Example 4.12, §7, item 8.4.ii, §8.2
* homomorphism §6
* analysis
* multiresolution Example 4.12
* analytic
* contravariant calculus Definition 6.4
* function on discrete sets item 8.2.v
* annihilation operator, see ladder operator
* annulus Example 5.11
* Atiyah-Singer index theorem §1.1
* $ax+b$ group §1, Example 4.14, Example 4.27, Example 4.28, Example 4.8, §5.2, Example 5.3, Example 5.7
* invariant measure Example 4.8
* representation Example 4.8
* Bergman
* space §1.2, §4, Example 4.9, §5.5, item 8.2.ii
* Blaschke
* product §6.4
* boundary effect on the upper half-plane §2.3
* bracket
* Moyal §7.3.2, Remark 7.11
* hyperbolic §7, §7.5.2
* Poisson Remark 7.23
* calculus
* contravariant Definition 4.22, item 8.3.v
* analytic Definition 6.4
* covariant Definition 4.20, §6.4, §6.5, §6.5, item 8.3.v
* Berezin Example 4.17
* functional §1.2, Definition 4.22, §6, item 8.3.v, §8.3
* Riesz–Dunford Example 4.21
* support, see spectrum
* symbolic, see covariant calculus
* umbral Remark 4.6
* cancellative semigroup Remark 4.6
* Cartan
* subalgebra Remark 3.7
* case
* elliptic §1.1
* hyperbolic §1.1
* parabolic §1.1
* Casimir operator §5.3
* categorical viewpoint §2.3
* category theory §2.3
* Cauchy
* integral §1.2, Example 4.8, §5.3, §5.4, §5.4, §5.5, §6.1, item 8.2.i
* Cauchy-Riemann operator §1.2, §5.3, §5.3, Example 5.7, §7.4.1, item 8.2.i
* causal §7.2.2
* Cayley transform §5.5
* centre
* length from Definition 2.10
* of a cycle §2.2
* characteristic
* function Example 4.19
* classic
* Fock–Segal–Bargmann representation §7.6.1
* probability §7.6.3
* Schrödinger representation §7.6.1
* classical mechanics §7.1.1, Remark 7.9
* Clifford
* algebra §2.6, Example 4.12, §7, item 8.4.ii, §8.2
* Clifford algebras §2.6
* coherent state, see wavelet
* commutator §7, §7.2.1
* commutator relation §7.1.1
* concentric §2.2
* cone
* generator §2.2
* configuration
* space §7.3.1, §7.5.1, §7.6.1, §7.6.4, Example 7.12
* conformality §2.6
* conic
* section §1.1, §2.2
* conjugation
* cycles, of §2.5
* constant
* Planck §7, Remark 7.20
* construction
* Fillmore–Springer–Cnops §2.2, item 8.1.ii
* operator §6.5
* Gelfand–Naimark–Segal §2.4
* contravariant
* calculus Definition 4.22, item 8.3.v
* analytic Definition 6.4
* spectrum item ii, §6.4, Definition 6.5, Proposition 6.9
* stability §6.4
* symbol §4.3
* convolution §7.2.1
* correspondence, see principle of similarity and correspondence
* covariant
* calculus Definition 4.20, §6.4, §6.5, §6.5, item 8.3.v
* Berezin Example 4.17
* pencil §6.5
* spectral distance Definition 6.15
* symbol §4.3
* symbolic calculus Example 4.16
* transform Definition 4.1
* induced Definition 5.1
* covariant transform
* inverse Definition 4.24
* creation operator, see ladder operator
* cycle Definition 2.2, §6.5
* centre §2.2
* conjugation §2.5
* equation §2.2
* focus §2.2
* isotropic item i
* matrix §2.2
* normalised §2.3
* $k$- §2.6
* Kirillov §2.3
* radius §2.6
* reflection §2.5
* space §2.2
* zero-radius item i, §2.3
* cycles
* f-orthogonal §2.6, Definition 2.7
* focal orthogonal, see f-orthogonal
* orthogonal §2.2, Definition 2.5
* De Donder–Weyl formalism item 8.4.ii
* decomposition
* Iwasawa §1.1, item 8.1.i
* defect operator Example 4.19
* derivation
* inner §7, §7.2.1
* derived action §1.1
* determinant §2.3
* directing functional Example 4.12
* directrix §2.2
* discrete
* analytic function item 8.2.v
* geometry item 8.1.iv
* spectrum item 8.3.ii
* distance Definition 2.9
* covariant spectral Definition 6.15
* divisor
* zero §1.1, §7.5.5
* domain
* non-simply connected §5.5
* simply connected Corollary 5.10
* double §1.1
* number §1.1, 3rd item, §3.1, §3.3.2, §7, §7.5, §7.5.5
* drum
* hearing shape §2.3
* dual §1.1
* number §1.1, item i—§3.1, §3.1, §3.2, §7, §7.6—§7.6.4
* eigenvalue §3.3.1, §3.3.2, §6.4, Proposition 6.8
* generalised §6.5
* quadratic §6.5
* elliptic
* case §1.1
* EPH-classification §1.1
* equation
* Cauchy-Riemann, see Cauchy-Riemann operator
* dynamics in p-mechanics §7.2.1
* Hamilton §7, §7.6.2
* Heisenberg §7, §7.3.2
* Erlangen programme §1, §1
* f-orthogonality §2.6, Definition 2.7
* factorisation
* Wiener–Hopf Example 4.10
* fiducial operator Definition 4.1
* field
* quantum item 8.4.ii
* Fillmore–Springer–Cnops construction §2.2, item 8.1.ii
* operator §6.5
* focal
* orthogonality, see f-orthogonality
* Fock–Segal–Bargmann
* representation §7.6.4, §7.6.4, Example 7.3, Example 7.5, Example 7.5
* classic (parabolic) §7.6.1
* hyperbolic §7.5.1
* representations §7.3.1
* space Example 4.12, §7.4.1, §7.6.5, Example 7.5, item 8.4.i
* focus
* length from Definition 2.10
* of a cycle §2.2
* of a parabola §2.2
* form
* symplectic §7.1.1
* formula
* reconstruction Example 4.25
* FSB
* representation, see Fock–Segal–Bargmann representation
* space, see Fock–Segal–Bargmann space
* function
* characteristic Example 4.19
* Heaviside §2.4
* Hermite, see Hermite polynomial
* maximal Example 4.14
* parabolic cylinder, see Weber–Hermite function
* polyanalytic §5.3
* Weber–Hermite §7.5.4, §7.5.5
* functional
* calculus §1.2, Definition 4.22, §6, item 8.3.v, §8.3
* Riesz–Dunford Example 4.21
* support, see spectrum
* directing Example 4.12
* invariant item iv
* model Example 4.19, §6, §6.4, item 8.3.v
* Gaussian Example 4.8, item ii, Example 7.12, Example 7.16
* Gelfand–Naimark–Segal construction §2.4
* generalised
* eigenvalue §6.5
* generator
* of a cone §2.2
* of a subgroup §3.3.1
* quadratic Example 7.4
* geometrical quantisation §7
* geometry §8.1
* discrete item 8.1.iv
* Lobachevsky §1.1, Example 4.28, §5.2, item 8.1.ii
* non-commutative §1.2, §6.5
* GNS construction §2.4, see Gelfand–Naimark–Segal construction
* group §1
* affine, see $ax+b$ group
* $ax+b$ §1, Example 4.14, Example 4.27, Example 4.28, Example 4.8, §5.2, Example 5.3, Example 5.7
* invariant measure Example 4.8
* Heisenberg §1, §7, §7—§7.6.5, item 8.4.i
* Fock–Segal–Bargmann representation Example 7.3
* classic (parabolic) §7.6.1
* hyperbolic §7.5.1
* induced representation §7.1.1
* invariant measure §7.1.1
* representation
* linear §1.2
* Schrödinger §7.1.2, §7.4
* $SL_{2}{}(\mathbb{R}{})$ §1
* $\mathrm{Sp}(2)$ §7, see also $SL_{2}{}(\mathbb{R}{})$
* $\mathrm{SU}(1,1)$ Example 4.19, see also $SL_{2}{}(\mathbb{R}{})$
* symplectic §7, Remark 7.15
* $\mathbb{H}^{1}{}$, Heisenberg group §7.1.1
* Haar measure, see invariant measure
* half-plane §1.1, item ii
* boundary effect §2.3
* Hamilton
* equation §7, §7.6.2
* Hardy
* maximal function Example 4.14
* pairing §4.4, §4.4
* space §1.2, §4, Example 4.8, §5.3, §5.5, §6.1, Definition 6.15, item 8.2.ii
* harmonic
* oscillator item i, item i, item i, §7, §7.1.2, §7.4, Example 7.6
* repulsive (hyperbolic) §7.5.4
* hearing drum’s shape §2.3
* Heaviside
* function §2.4
* Heaviside function §2.4
* Heisenberg
* commutator relation §7.1.1
* equation §7, §7.3.2
* group §1, §7, §7—§7.6.5, item 8.4.i
* Fock–Segal–Bargmann representation Example 7.3
* classic (parabolic) §7.6.1
* hyperbolic §7.5.1
* induced representation §7.1.1
* invariant measure §7.1.1
* Heisenberg group §1
* Hermite
* polynomial §7.4, §7.5.5
* homomorphism
* algebraic §6
* hyperbolic
* case §1.1
* Fock–Segal–Bargmann representation §7.5.1
* harmonic oscillator §7.5.4
* Moyal bracket §7, §7.5.2
* probability §7.5.3
* Schrödinger representation §7.5.1
* unharmonic oscillator item ii
* unit ($\mathrm{j}$) §1.1
* hypercomplex
* number §1.1, item ii, Remark 3.8, §7.6.5
* $\mathrm{i}$ (imaginary unit) §1.1
* imaginary
* unit ($\mathrm{i}$) §1.1
* induced
* covariant transform Definition 5.1
* representation §3, §6.1, §7
* Heisenberg group,of §7.1.1
* induction §3
* inner
* derivation §7, §7.2.1
* integral
* Cauchy §1.2, Example 4.8, §5.3, §5.4, §5.4, §5.5, §6.1, item 8.2.i
* Poisson, see Poisson kernel
* interference §7.2.2, §7.6.3, Example 7.13
* intertwining operator §1.2, Example 4.14, Theorem 4.4, Remark 4.6, Definition 5.1, Proposition 5.5, item i, §6, §6.5
* invariant §4.4
* functional item iv
* joint §2.4, §2.5
* measure §4, Example 4.25, Example 4.27, Example 4.7, Example 4.8, §7.1.1
* pairing §4.4
* inverse
* covariant transform Definition 4.24
* isotropic cycles item i
* Iwasawa decomposition §1.1, item 8.1.i
* $\mathrm{j}$ (hyperbolic unit) §1.1
* jet §1.2, §6.3, Theorem 6.17, Definition 6.7, §7
* bundle §6.2
* $\mathbb{J}^{n}{}$ (jet space) Definition 6.7
* joint
* invariant §2.4, §2.5
* spectrum §6.1
* Jordan
* normal form §6.3
* $K$ subgroup §1.1
* $k$-normalised cycle §2.6
* $K$-orbit §1.1
* kernel
* Poisson Example 4.8, §5.3
* Krein
* directing functional Example 4.12
* space §3.2, §7
* ladder operator §3.3.1—§3.3.3, §5.3, §5.3, §5.4, §7.4—§7.4.2, §7.5.4—§7.5.5, §7.6.4—§7.6.5, Example 7.16
* Laplace operator, see Laplacian
* Laplacian §1.2, §5.3, §5.3, item 8.2.i
* left regular representation §7.1.1
* length Definition 2.10
* from centre Definition 2.10
* from focus Definition 2.10
* Lidskii theorem Theorem 6.14
* limit
* semiclassical §7, Remark 7.9
* Littlewood–Paley
* operator Example 4.13
* Lobachevsky
* geometry §1.1, Example 4.28, §5.2, item 8.1.ii
* lowering operator, see ladder operator
* map
* Möbius §1, Example 4.19, Example 4.9, §5.2, §5.5, §6.4, Proposition 6.18, Proposition 6.23
* on cycles Theorem 2.4
* matrix
* cycle, of a §2.2
* Jordan normal form §6.3
* maximal
* function Example 4.14
* measure
* Haar, see invariant measure
* invariant §4, Example 4.25, Example 4.27, Example 4.7, Example 4.8, §7.1.1
* mechanics
* classical §7.1.1, Remark 7.9
* metaplectic representation, see oscillator representation
* method
* orbits, of §7, item 8.1.iii
* minimal
* polynomial §6.4
* Möbius map §1, Example 4.19, Example 4.9, §5.2, §5.5, §6.4, Proposition 6.18, Proposition 6.23
* on cycles Theorem 2.4
* model
* functional Example 4.19, §6, §6.4, item 8.3.v
* mother wavelet Example 4.7, §6.1
* Moyal bracket §7.3.2, Remark 7.11
* hyperbolic §7, §7.5.2
* multiresolution analysis Example 4.12
* $N$ subgroup §1.1
* $N$-orbit §1.1
* nilpotent unit, see parabolic unit
* non-commutative geometry §1.2, §6.5
* non-simply connected domain §5.5
* norm
* shift invariant Example 4.14
* normalised
* cycle §2.3
* $k$- §2.6
* Kirillov §2.3
* number
* double §1.1, 3rd item, §3.1, §3.3.2, §7, §7.5, §7.5.5
* dual §1.1, item i—§3.1, §3.1, §3.2, §7, §7.6—§7.6.4
* hypercomplex §1.1, item ii, Remark 3.8, §7.6.5
* numerical
* range Example 4.18
* observable §7.2.1
* on hearing drum’s shape §2.3
* operator
* annihilation, see ladder operator
* Casimir §5.3
* Cauchy-Riemann §1.2, §5.3, §5.3, Example 5.7, §7.4.1, item 8.2.i
* creation, see ladder operator
* defect Example 4.19
* fiducial Definition 4.1
* Fillmore–Springer–Cnops construction §6.5
* intertwining §1.2, Example 4.14, Theorem 4.4, Remark 4.6, Definition 5.1, Proposition 5.5, item i, §6, §6.5
* ladder §3.3.1—§3.3.3, §5.3, §5.3, §5.4, §7.4—§7.4.2, §7.5.4—§7.5.5, §7.6.4—§7.6.5, Example 7.16
* Laplace, see Laplacian
* Littlewood–Paley Example 4.13
* lowering, see ladder operator
* power bounded item 8.3.iii
* pseudodifferential §4.3
* quasinilpotent item 8.3.i
* raising, see ladder operator
* Toeplitz item 8.2.ii
* optics §7.1.2
* orbit
* method §7, item 8.1.iii
* subgroup $A$, of §1.1
* subgroup $K$, of §1.1
* subgroup $N$, of §1.1
* orders
* of zero Example 6.12
* orthogonality
* cycles, of §2.2, Definition 2.5
* focal, see f-orthogonality
* oscillator
* harmonic item i, item i, item i, §7, §7.1.2, §7.4, Example 7.6
* repulsive (hyperbolic) §7.5.4
* representation §7.1.2
* unharmonic item ii, item ii, Example 7.7
* hyperbolic item ii
* p-mechanics §7.2
* dynamic equation §7.2.1
* observable §7.2.1
* state §7.2.2
* p-mechanisation Example 7.6
* pairing
* Hardy §4.4, §4.4
* invariant §4.4
* parabola
* directrix §2.2
* focus §2.2
* vertex §2.2
* parabolic
* case §1.1
* cylinder function, see Weber–Hermite function
* Fock–Segal–Bargmann representation §7.6.1
* probability, see classic probability
* Schrödinger representation §7.6.1
* unit ($\varepsilon$) §1.1
* PDO §4.3
* pencil
* covariant §6.5
* perpendicular Definition 2.12
* phase
* space §3.1, §7, §7.3.1, §7.4, §7.5.1, Example 7.12, Example 7.5
* Planck
* constant §7, Remark 7.20
* Plato’s cave §2.2
* point space §2.2
* Poisson
* bracket Remark 7.23
* kernel Example 4.8, §5.3
* polyanalytic function §5.3
* polynomial
* Hermite §7.4, §7.5.5
* minimal §6.4
* power bounded operator item 8.3.iii
* primary
* representation §6.2, §6.3, §6.5, Definition 6.5
* principle
* similarity and correspondence §3.3.3, Principle 3.5, §7.5.5, §7.6.5
* probability
* classic (parabolic) §7.6.3
* hyperbolic §7.5.3
* quantum §7, §7.2.2, §7.3.3
* product
* Blaschke §6.4
* projective space §2.2
* prolongation §6.2
* pseudodifferential operator §4.3
* $\mathcal{PT}$-symmetry §7
* quadratic
* eigenvalue §6.5
* generator Example 7.4
* quantisation
* geometrical §7
* quantum
* field item 8.4.ii
* probability §7, §7.2.2, §7.3.3
* quantum mechanics §7
* quasinilpotent
* operator item 8.3.i
* quaternion §2.6
* quaternions §2.6
* radius
* cycle, of §2.6
* Radon transform Example 4.15
* raising operator, see ladder operator
* range
* numerical Example 4.18
* reconstruction formula Example 4.25
* reduced wavelet transform Example 5.2
* reflection
* in a cycle §2.5
* representation
* $ax+b$ group Example 4.8
* coefficients, see wavelet transform
* complementary series §1.2, Example 4.9, item 8.3.ii, item 8.3.ii
* discrete series §1.2, Example 3.2, §4, Example 4.9, item 8.3.ii, item 8.3.ii
* Fock–Segal–Bargmann §7.6.4, §7.6.4, Example 7.3, Example 7.5, Example 7.5
* classic (parabolic) §7.6.1
* hyperbolic §7.5.1
* FSB, see Fock–Segal–Bargmann representation
* Heisenberg group
* classic (parabolic) §7.6.1
* hyperbolic §7.5.1
* Schrödinger Example 7.4
* induced §3, §6.1, §7
* Heisenberg group,of §7.1.1
* left regular §7.1.1
* linear §1.2
* metaplectic, see oscillator representation
* oscillator §7.1.2
* primary §6.2, §6.3, §6.5, Definition 6.5
* principal series §1.2, Example 4.9, item 8.3.ii, item 8.3.ii
* Schrödinger §7.3.1, §7.3.1, §7.4.1, Example 7.18, Example 7.4
* classic (parabolic) §7.6.1
* hyperbolic §7.5.1
* Shale–Weil, see oscillator representation
* $SL_{2}{}(\mathbb{R}{})$ group §3.2, Example 3.2—Example 3.2
* in Banach space §6.1
* square integrable §4, Example 4.25, Example 4.26, Example 4.7, Example 5.2
* representations
* Fock–Segal–Bargmann §7.3.1
* linear §3
* repulsive
* harmonic oscillator §7.5.4
* resolvent Example 4.21, §6.1, §6.5, Definition 6.2
* Riemann
* mapping theorem §5.5
* Schrödinger
* group §7.1.2, §7.4
* representation §7.3.1, §7.3.1, §7.4.1, Example 7.18, Example 7.4
* classic (parabolic) §7.6.1
* hyperbolic §7.5.1
* sections
* conic §1.1, §2.2
* semiclassical
* limit §7, Remark 7.9
* semigroup
* cancellative Remark 4.6
* Shale–Weil representation, see oscillator representation
* similarity, see principle of similarity and correspondence
* simply connected domain Corollary 5.10
* $SL_{2}{}(\mathbb{R}{})$ group §1
* representation §3.2, Example 3.2—Example 3.2
* in Banach space §6.1
* $\mathrm{Sp}(2)$ §7.1.2
* space
* Bergman §1.2, §4, Example 4.9, §5.5, item 8.2.ii
* configuration §7.3.1, §7.5.1, §7.6.1, §7.6.4, Example 7.12
* cycles, of §2.2
* Fock–Segal–Bargmann Example 4.12, §7.4.1, §7.6.5, Example 7.5, item 8.4.i
* FSB, see Fock–Segal–Bargmann space
* Hardy §1.2, §4, Example 4.8, §5.3, §5.5, §6.1, Definition 6.15, item 8.2.ii
* Krein §3.2, §7
* phase §3.1, §7, §7.3.1, §7.4, §7.5.1, Example 7.12, Example 7.5
* point §2.2
* projective §2.2
* spectral
* covariant distance Definition 6.15
* spectrum §6.2, Definition 6.2
* contravariant item ii, §6.4, Definition 6.5, Proposition 6.9
* stability §6.4
* discrete item 8.3.ii
* joint §6.1
* mapping Theorem 6.3
* square integrable
* representation §4, Example 4.25, Example 4.26, Example 4.7, Example 5.2
* stability
* contravariant spectrum §6.4
* state §7.2.2
* coherent, see wavelet
* vacuum, see mother wavelet
* subalgebra
* Cartan Remark 3.7
* subgroup
* $A$ §1.1, Example 2.1
* orbit §1.1
* generator §3.3.1
* $K$ §1.1, Example 2.1
* orbit §1.1
* $N$ §1.1, Example 2.1
* orbit §1.1
* supersymmetry Remark 3.6
* support
* functional calculus, see spectrum
* symbol
* contravariant §4.3
* covariant §4.3
* symbolic calculus, see covariant calculus
* covariant Example 4.16
* symplectic
* form §7.1.1
* group §7, Remark 7.15
* transformation §3.1, §3.1, §7, §7.5.4
* theorem
* Atiyah-Singer (index) §1.1
* Lidskii Theorem 6.14
* Riemann mapping §5.5
* spectral mapping Theorem 6.3
* Toeplitz
* operator item 8.2.ii
* token Remark 4.6
* trace §2.3
* transform
* Cayley §5.5
* covariant Definition 4.1
* induced Definition 5.1
* inverse Definition 4.24
* Radon Example 4.15
* wavelet, see wavelet transform
* transformation
* symplectic §3.1, §3.1, §7, §7.5.4
* transitive §1
* umbral calculus Remark 4.6
* unharmonic
* oscillator item ii, item ii, Example 7.7
* hyperbolic item ii
* unit
* circle §5.5
* disk §5.5
* hyperbolic ($\mathrm{j}$) §1.1
* imaginary ($\mathrm{i}$) §1.1
* nilpotent, see parabolic unit
* parabolic ($\varepsilon$) §1.1
* vacuum state, see mother wavelet
* vertex
* of a parabola §2.2
* wavelet §4, §4.1, Example 4.7, §5.1, §5.2, §6.1
* admissible §4, Example 4.25, Example 4.7
* mother Example 4.7, §6.1
* transform Example 4.7, §6.1, §7.4.1
* induced, see induced covariant transform
* reduced Example 5.2
* Weber–Hermite function §7.5.4, §7.5.5
* Wiener–Hopf factorisation Example 4.10
* zero
* divisor §1.1, §7.5.5
* order, of Example 6.12
* zero-radius cycle item i, §2.3
* $\varepsilon$ (parabolic unit) §1.1
* $\sigma$ ($\sigma:=\iota^{2}$) §1.1
|
arxiv-papers
| 2011-06-08T23:17:01 |
2024-09-04T02:49:19.460736
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Vladimir V. Kisil",
"submitter": "Vladimir V Kisil",
"url": "https://arxiv.org/abs/1106.1686"
}
|
1106.1694
|
Department of Physics, Shanxi Normal University, Linfen
041004, Shanxi, China; yuanjz@sxnu.edu.cn
# A Multicolour Photometric Study of the neglected eclipsing binary FT Ursae
Majoris
Jin-Zhao Yuan
###### Abstract
The multicolour photometric observations of the neglected eclipsing binary FT
Ursae Majoris (FT UMa) were obtained in 2010. The 2003 version of Wilson-
Devinney code was used to analyze the light curves in $B$, $V$, and $R$ bands
simultaneously. Based on the spectroscopic mass ratio $q=0.984$ published by
Pribulla et al., it is found that FT UMa is an evolved contact binary with a
contact degree of $15.3\%$. The low amplitude of light variations, $\sim 0.15$
mag, arises mainly from a moderately low inclination angle of
$i=62.^{\circ}80$ and almost identical components in size rather than the
light dilution of a third component, which contributes light of only $\sim
10\%$.
###### keywords:
stars: binaries: close — stars: binaries: eclipsing — stars: individual: FT
Ursae Majoris
## 1 Introduction
FT UMa is a relatively bright target with a maximum magnitude of
$V_{max}=9.25$. However, because of its relatively low photometric amplitude,
$\sim 0.15$ mag in the $V$ band, its variability was not discovered until the
systematic Hipparcos survey. The first photometric study was carried out by
Ozavci et al. (2007), who obtained a photometric mass ratio of $q=0.25\pm 0.01$.
Pribulla et al. (2009), however, derived a mass ratio of $q=0.984\pm 0.019$ from
the radial velocities of FT UMa. The photometric study of the eclipsing binary
FT UMa should therefore be revisited on the basis of the spectroscopic mass ratio.
In this paper, the absolute physical parameters were determined based on the
CCD multicolour photometric observations. Two interesting properties are
discussed in the last section.
## 2 Observations
New multicolour CCD photometric observations of FT UMa were carried out on
2010 January 17, 20, and 21, and December 4 and 5 using the 85-cm telescope at
the Xinglong Station of National Astronomical Observatory of China (NAOC),
equipped with a primary-focus CCD photometer. The telescope provides a field
of view of about $16.^{{}^{\prime}}5\times 16.^{{}^{\prime}}5$ at a scale of
$0.^{{}^{\prime\prime}}96$ per pixel and a limit magnitude of about 17 mag in
the $V$ band (Zhou et al. 2009). The standard Johnson-Cousin-Bessel $BVR$
filters were used simultaneously. HD 233579 was selected as a comparison star
and J08532982+5123196 as a check star. The coordinates are listed in Table 1.
The data reduction (bias subtraction, flat-field division) was performed using
the aperture photometry package of IRAF (IRAF is developed by the National
Optical Astronomy Observatories, which are operated by the Association of
Universities for Research in Astronomy, Inc., under contract to the National
Science Foundation). Extinction corrections were ignored as the
comparison star is very close to the variable. In total, 1597, 1579, and 1527
CCD images in the $B$, $V$, and $R$ bands were obtained, respectively. Several
new times of primary minimum were derived from the new observations using a
parabolic fitting method. Two times of light minima (i.e., HJD
$2455217.3290\pm 0.0002$ and HJD $2455217.9832\pm 0.0005$) were obtained by
averaging the determinations in the three bands.
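For readers who want to reproduce this step, a minimal sketch of the parabolic fit (in Python, using synthetic placeholder data rather than the actual FT UMa photometry) is:

```python
# Minimal sketch of the parabolic fitting step: fit m(t) = a t'^2 + b t' + c to
# points around an eclipse minimum and take the vertex as the time of minimum.
# The data below are synthetic placeholders, not the FT UMa observations.
import numpy as np

rng = np.random.default_rng(0)
t0_true = 2455217.3290                                   # assumed minimum (HJD)
t = t0_true + np.linspace(-0.02, 0.02, 41)               # HJD near the minimum
mag = 9.40 + 400.0 * (t - t0_true) ** 2 + rng.normal(0, 0.002, t.size)

t_ref = t.mean()                                          # shift for numerical stability
a, b, c = np.polyfit(t - t_ref, mag, 2)                   # parabola coefficients
t_min = t_ref - b / (2.0 * a)                             # vertex of the parabola
print(f"fitted time of minimum: HJD {t_min:.4f}")
```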
The light curves are displayed in Figure 1. The orbital period adopted to
calculate the phase was taken from Pribulla et al. (2009), i.e., 0.6547038
days. As shown in the bottom panel of Figure 1, the differential magnitudes
between the comparison star and the check star varied within $\sim 0.04$ mag,
which is attributed to an unsuitable check star.
Table 1: Coordinates of FT UMa and its Comparison and Check Stars.
Stars | $\alpha_{2000}$ | $\delta_{2000}$
---|---|---
FT UMa | $08^{h}54^{m}30.4^{s}$ | $51^{\circ}14^{\prime}40.3^{\prime\prime}$
Comparison | $08^{h}53^{m}31.5^{s}$ | $51^{\circ}26^{\prime}47.2^{\prime\prime}$
Check | $08^{h}53^{m}29.8^{s}$ | $51^{\circ}23^{\prime}19.7^{\prime\prime}$
Figure 1: Top panel: the light curves of FT UMa in the $B$, $V$, and $R$ bands
obtained on 2010 January 17, 20 and 21, and December 4 and 5. Bottom panel:
the differential light curves of the comparison star relative to the check
star in the $B$ band.
## 3 Photometric solutions with the W-D method
As shown in Figure 1, the data show a nonuniform phase coverage. The data
averaged in phase with 0.01 phase bins were used hereafter. The light curves
were analyzed using the 2003 version of the Wilson-Devinney code (Wilson &
Devinney 1971; Wilson 1979, 1990). In the process of solution, the
spectroscopic mass ratio of $q=0.984$ published by Pribulla et al. (2009) was
fixed. Moreover, Pribulla et al. (2009) gave a spectral type of F0. So an
effective temperature of $T_{1}=7178$K was assumed for the primary component
according to the table of Gray (2005). This is close to the critical
temperature, $\sim$7200 K, that separates stars with convective envelopes from
those with radiative envelopes. To cover both possibilities, both convective
and radiative envelopes were taken into account. The
gravity-darkening coefficients, $g_{1}=g_{2}=0.320$, were used for convective
case and $g_{1}=g_{2}=1.000$ for radiative case. The bolometric albedos,
$A_{1}=A_{2}=0.5$, were used for convective case and $A_{1}=A_{2}=1.0$ for
radiative case. The logarithmic limb-darkening coefficients came from van
Hamme (1993). The photometric parameters are listed in Table 2.
As suggested by Pribulla et al. (2009), the photometric solution started with
mode 2 (i.e., detached mode). After some differential corrections, the
solution converged to mode 3 (i.e., overcontact mode) in the convective case,
and to mode 5 (i.e., semi-detached mode) in the radiative case. All of the
parameters derived from the model are listed in Table 2. The sum of the squared
residuals for the convective case is smaller than that for the radiative case. So, the
convective case is more plausible. The theoretical light curves are plotted in
Figure 2 as solid lines, which fit the observations very well except for
deviations around the second maximum, which are due to the relatively low
quality of the points in that phase range.
Using the value of $(M_{1}+M_{2})~{}\mathrm{sin}^{3}{}i=2.077~{}M_{\odot}$
(Pribulla et al. 2009), the following physical parameters can be derived:
$M_{1}=1.49(5)~{}M_{\odot}$, $R_{1}=1.79(2)~{}R_{\odot}$,
$L_{1}=7.68(19)~{}L_{\odot}$, $M_{2}=1.46(5)~{}M_{\odot}$,
$R_{2}=1.78(2)~{}R_{\odot}$, $L_{2}=6.86(22)~{}L_{\odot}$, and
$a=4.55(5)~{}R_{\odot}$. Given the mass-radius relation of
$R/R_{\odot}=(M/M_{\odot})^{0.73}$ and the mass-luminosity relation of
$L/L_{\odot}=1.2(M/M_{\odot})^{4.0}$ for ZAMS stars, a main sequence star with
a mass of $\sim 1.49~{}M_{\odot}$ has a radius of $\sim 1.34~{}R_{\odot}$ and
a luminosity of $\sim 5.91~{}L_{\odot}$, both of which are lower than the values
found for the two components. We can conclude that FT UMa has evolved off the main sequence.
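As a quick arithmetic check of this comparison, using only the two ZAMS relations and the primary mass quoted above:

```python
# ZAMS expectations for M = 1.49 M_sun from the relations quoted in the text:
# R/R_sun = (M/M_sun)^0.73 and L/L_sun = 1.2 (M/M_sun)^4.0
M = 1.49
R_zams = M ** 0.73       # ~1.34 R_sun, below the measured R1 = 1.79 R_sun
L_zams = 1.2 * M ** 4.0  # ~5.91 L_sun, below the measured L1 = 7.68 L_sun
print(f"R_ZAMS = {R_zams:.2f} R_sun, L_ZAMS = {L_zams:.2f} L_sun")
```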
Table 2: Photometric Solutions for FT UMa.
Parameters | Convective case | Radiative case
---|---|---
configuration | overcontact | semi-detached
$g_{1}=g_{2}$ | 0.32 | 1.00
$A_{1}=A_{2}$ | 0.5 | 1.0
$x_{1bol}$ | 0.642 | 0.642
$x_{2bol}$ | 0.641 | 0.641
$y_{1bol}$ | 0.257 | 0.257
$y_{2bol}$ | 0.253 | 0.243
$x_{1B}$ | 0.781 | 0.781
$x_{1V}$ | 0.683 | 0.683
$x_{1R}$ | 0.584 | 0.584
$x_{2B}$ | 0.785 | 0.799
$x_{2V}$ | 0.688 | 0.705
$x_{2R}$ | 0.592 | 0.611
$y_{1B}$ | 0.294 | 0.294
$y_{1V}$ | 0.294 | 0.294
$y_{1R}$ | 0.294 | 0.294
$y_{2B}$ | 0.283 | 0.252
$y_{2V}$ | 0.290 | 0.282
$y_{2R}$ | 0.291 | 0.285
$T_{1}$ (K) | 7178 | 7178
q ($M_{2}/M_{1}$ ) | 0.984 | 0.984
$\Omega_{in}$ | 3.7239 | 3.7239
$\Omega_{out}$ | 3.1880 | 3.1880
$i$ (deg) | 62.80 $\pm 1.01$ | 57.73$\pm 0.48$
$T_{2}$ (K) | 7003 $\pm 36$ | 6631 $\pm 49$
$\Omega_{1}$ | 3.6419$\pm 0.0142$ | 4.6120$\pm 0.0472$
$\Omega_{2}$ | 3.6419$\pm 0.0142$ | —
$L_{3}/(L_{1}+L_{2}+L_{3})$ ($B$) | $0.102\pm 0.004$ | $0.026\pm 0.002$
$L_{3}/(L_{1}+L_{2}+L_{3}$) ($V$) | $0.088\pm 0.004$ | $0.013\pm 0.002$
$L_{3}/(L_{1}+L_{2}+L_{3}$) ($R$) | $0.074\pm 0.004$ | $0.002\pm 0.002$
$r_{1}$ (pole) | 0.3679$\pm 0.0019$ | 0.2730$\pm 0.0035$
$r_{1}$ (side) | 0.3883$\pm 0.0023$ | 0.2788$\pm 0.0038$
$r_{1}$ (back) | 0.4247$\pm 0.0034$ | 0.2871$\pm 0.0042$
$r_{2}$ (pole) | 0.3652$\pm 0.0019$ | 0.3548
$r_{2}$ (side) | 0.3854$\pm 0.0023$ | 0.3726
$r_{2}$ (back) | 0.4219$\pm 0.0034$ | 0.4035
degree of overcontact (f) | 17.4%$\pm 2.5\%$ | —
Residual | 0.0091 | 0.0111
Figure 2: Same as the top panel of Figure 1. But the solid curves represent
the theoretical light curves computed with the parameters in Table 2.
## 4 Discussion and Conclusions
The investigation of the new multicolour light curves indicates that FT UMa is
an evolved contact binary. The system shows two atypical properties: a mass
ratio close to unity and a small photometric amplitude.
The typical mass ratios of contact binaries are between 0.2 and 0.5 (Gettel et
al. 2006). Contact binaries with large mass ratios, especially those with mass
ratios near unity, can help us understand the evolution of contact binaries and
the link between A- and W-subtype W UMa binaries (Li et al. 2008). In the case
of FT UMa, the two evolved components have almost the same masses and radii, and
therefore exchange little mass. If both components evolve onto the subgiant
stage, the system will coalesce directly into a single star.
In addition to FT UMa, mass ratios close to unity are seen in five other
contact binaries: V701 Sco (spectroscopic mass ratio $q_{sp}=0.99$: Bell &
Malcolm 1987), V753 Mon ($q_{sp}=0.970$: Rucinski et al. 2000), CT Tau
(photometric mass ratio $q_{ph}=1.00$: Plewa & Włodarczyk 1993), V803 Aql
($q_{ph}=1.00$: Samec et al. 1993), and WZ And ($q_{ph}=1.00$: Zhang &
Zhang 2006).
Moreover, the large total mass of $2.95~{}M_{\odot}$ and the large mass ratio
of $q=0.984$ are consistent with the fact that the mass ratio of the W UMa-
type systems increases with their total mass (Li et al. 2008).
Generally, a close third component, the orbital inclination and the relative
geometrical sizes of the two components can all affect the amplitude of
photometric variations. FT UMa has an orbital inclination of $62.8^{\circ}$ and
nearly identical components in size, and therefore shows a relatively low
photometric amplitude, $\sim 0.15$ mag. Pribulla et al. (2009) concluded that a
third component contributes about half of the light of the system and reduces
the amplitude of photometric variations, but our solution suggests that the
light contribution is only about $10\%$. As noted by Pribulla et al. (2009),
the center-of-mass velocity of the close pair remained constant during their
observational run, in contrast to the variable velocity of the third component.
This indicates that the mass of the third component is much smaller than the
total mass of the binary pair, suggesting that the light dilution by a third
component is negligible. To confirm the light and mass of the third component,
investigations of the long-term orbital period and radial velocities are needed.
###### Acknowledgements.
We thank an anonymous referee for some useful suggestions. This work is
supported by Natural Science Foundation of Shanxi Normal University (No.
ZR09002).
## References
* Bell & Malcolm (1987) Bell, S. A., & Malcolm, G. J. 1987, MNRAS, 226, 899
* Gettel et al. (2006) Gettel, S. J., Geske, M. T., & McKay, T. A. 2006, AJ, 131, 621
* Gray (2005) Gray, D. F. 2005, The Observation and Analysis of Stellar Photospheres (3rd ed.: Cambridge: Cambridge Univ. Press), page 506
* Li et al. (2008) Li, L. F., Zhang, F. H., Han, Z. W., Jiang, D. K., & Jiang, T. Y. 2008, MNRAS, 387, 97
* Ozavci et al. (2007) Ozavci, I., Selam, S. O., & Albayrak, B. 2007, Proc. of XV Turkish National Astronomy Meeting (Istanbul: Kultur Univ. Press), No. 60, 997
* Plewa & Wlodarczyk (1993) Plewa, T., & Włodarczyk, K. J. 1993, Acta Astron., 43, 249
* Pribulla et al. (2009) Pribulla, T., Rucinski, S. M., DeBond, H., De Ridder, A., Karmo, T., Thomson, J. R., Croll, B., Ogoza, W., Pilecki, B., & Siwak, M. 2009, AJ, 137, 3646
* Rucinski et al. (2000) Rucinski, S. M., Lu, W. X., & Mochnacki, S. W. 2000, AJ, 120, 1133
* Samec et al. (1993) Samec, R. G., Su, W., & Dewitt, J. R. 1993, PASP, 105, 1441
* Hamme (1993) van Hamme, W. 1993, AJ, 106, 2096
* Wilson & Devinney (1971) Wilson, R. E., & Devinney E. J. 1971, ApJ, 166, 605
* Wilson (1979) Wilson, R. E. 1979, ApJ, 234, 1054
* Wilson (1990) Wilson, R. E. 1990, ApJ, 356, 613
* (14) Zhang, X. B., & Zhang, R. X. 2006, New Astron., 11, 339
* Zhou et al. (2009) Zhou, A. Y., Jiang, X. J., Zhang, Y. P., & Wei, J. Y. 2009, RAA(Research in Astronomy and Astrophysics), 9, 349
|
arxiv-papers
| 2011-06-09T01:22:46 |
2024-09-04T02:49:19.488224
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jin-Zhao Yuan",
"submitter": "Yuan Jinzhao Mr",
"url": "https://arxiv.org/abs/1106.1694"
}
|
1106.1709
|
Commonwealth Scientific and Industrial Research Organization
(CSIRO), Australia
# The application of compressive sampling to radio astronomy
II: Faraday rotation measure synthesis
F. Li, S. Brown, T. J. Cornwell, and F. de Hoog
Faraday rotation measure (RM) synthesis is an important tool to study and
analyze galactic and extra-galactic magnetic fields. Since there is a Fourier
relation between the Faraday dispersion function and the polarized radio
emission, full reconstruction of the dispersion function requires knowledge of
the polarized radio emission at both positive and negative squared wavelengths
$\lambda^{2}$. However, one can only make observations for $\lambda^{2}>0$.
Furthermore observations are possible only for a limited range of wavelengths.
Thus reconstructing the Faraday dispersion function from these limited
measurements is ill-conditioned. In this paper, we propose three new
reconstruction algorithms for RM synthesis based upon compressive
sensing/sampling (CS). These algorithms are designed to be appropriate for
Faraday thin sources only, thick sources only, and mixed sources respectively.
Both visual and numerical results show that the new RM synthesis methods
provide superior reconstructions of both magnitude and phase information
compared with RM-CLEAN.
Offprint requests: F. Li
## 1 Introduction
The intrinsic polarization of a synchrotron emitting source together with
knowledge of propagation effects through intervening media provide critical
diagnostics for magnetic field orientation and fluctuations in a wide range of
astrophysical contexts. Faraday rotation is a physical phenomenon where the
position angle of linearly polarized radiation propagating through a magneto-
ionic medium is rotated as a function of frequency. As introduced in Brentjens
& de Bruyn (2005) and Heald (2009), Faraday rotation measure synthesis is an
important tool for analysing radio polarization data where multiple emitting
regions are present along a single line of sight. Observations of
extragalactic sources, which by necessity must be viewed through the Faraday
rotating and emitting Galactic interstellar-medium (de Bruyn et al. 2006;
Brown & Rudnick 2009; Schnitzeler et al. 2007, 2009), are an obvious example
of this regime. Burn (1966) introduced the Faraday dispersion function
$F(\phi)$, which describes the intrinsic polarized flux per unit Faraday depth
$\phi$ (in rad m-2), and its relationship with the complex polarized emission
$P(\lambda^{2})$ as
$P(\lambda^{2})=\int_{-\infty}^{\infty}F(\phi)\mathrm{e}^{2\mathrm{i}\phi\lambda^{2}}\;\mathrm{d}\phi,$
(1)
where $\lambda$ is the wavelength. Note that $P$ can also be written as
$P=Q+\mathrm{i}U$, where $Q$ and $U$ represent the emission of Stokes $Q$ and
Stokes $U$, respectively.
To study multiple emitting and Faraday rotating regions along each line of
sight, we need to reconstruct the Faraday dispersion function, which is, in
general, a complex-valued function of the Faraday depth $\phi$. From Eq. (1) ,
we can invert the expression to yield:
$F(\phi)=\frac{1}{\pi}\int_{-\infty}^{\infty}P(\lambda^{2})\mathrm{e}^{-2\mathrm{i}\phi\lambda^{2}}\;\mathrm{d}\lambda^{2}.$
(2)
However, the problem is that we cannot observe the polarized emission at
wavelengths where $\lambda^{2}<0$. Even for the wavelength range
$\lambda^{2}>0$, it is impossible to observe all wavelengths or frequencies.
Brentjens & de Bruyn (2005) propose a synthesis method by first introducing an
observing window function $M(\lambda^{2})$. The observed complex polarized
emission can then be described as
$\widetilde{P}(\lambda^{2})=M(\lambda^{2}){P}(\lambda^{2}).$ (3)
In this paper, the tilde denotes the observed quantities.
If the observing window function is $M(\lambda^{2})$ with $m$ channels, the RM
spread function (RMSF) is defined by
$R(\phi)=K\sum_{i=1}^{m}M(\lambda_{i}^{2})\mathrm{e}^{-2\mathrm{i}\phi(\lambda^{2}_{i}-\lambda^{2}_{0})},$
(4)
where the parameter $\lambda^{2}_{0}$ is the mean of the sampled values
between $\lambda^{2}_{1}$ and $\lambda^{2}_{m}$ within the observation window
$M(\lambda^{2})$; $i$ is the $i^{\rm{th}}$ channel in the observation window,
and $K$ is a normalising constant of the window function $M(\lambda^{2})$. In
this paper, we assume for simplicity that the $m$ channels in the observing
window function have uniform weights.
In Brentjens & de Bruyn (2005), the reconstructed Faraday rotation measure
synthesis can be written in discrete form as
$\widetilde{F}(\phi)\approx
K\sum_{i=1}^{m}\widetilde{P}(\lambda^{2}_{i})\mathrm{e}^{-2\mathrm{i}\phi(\lambda^{2}_{i}-\lambda^{2}_{0})},$
(5)
where $\widetilde{F}(\phi)$ is the reconstructed Faraday dispersion function.
From Eq. (5), we can see that the Faraday dispersion function can be
reconstructed provided that the spectral coverage is sufficient.
However, the reconstructed results generally include some side lobes. Using
the terminology of radio interferometry, the result of Brentjens & de Bruyn’s
method is a dirty version of the Faraday dispersion function and is referred to
as “the dirty curve”. It is the convolution of $F(\phi)$ and the RMSF, and a
deconvolution step may be used to clean it up. By borrowing the cleaning
procedure of the Högbom CLEAN image deconvolution method (Högbom 1974), Heald
(2009) proposed the RM-CLEAN method, which deconvolves $\widetilde{F}(\phi)$
with the RMSF to remove the sidelobe response.
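As an illustration (not the authors' implementation), Eqs. (4) and (5) can be evaluated numerically as in the sketch below; the channel set-up mimics the Window 1 configuration used later in Section 4, uniform channel weights are assumed (so $K=1/m$), and the single thin test source is hypothetical:

```python
# Sketch of the RMSF of Eq. (4) and the dirty reconstruction of Eq. (5).
import numpy as np

def rmsf_and_dirty(lam2, p_obs, phi):
    m = lam2.size
    K = 1.0 / m                                   # uniform weights
    lam2_0 = lam2.mean()                          # reference lambda^2
    # m x n_phi matrix of exp(-2i * phi * (lambda^2 - lambda_0^2))
    phase = np.exp(-2j * np.outer(lam2 - lam2_0, phi))
    rmsf = K * phase.sum(axis=0)                              # Eq. (4)
    dirty = K * (p_obs[:, None] * phase).sum(axis=0)          # Eq. (5)
    return rmsf, dirty

# 126 channels evenly spaced in lambda^2 between (0.036 m)^2 and (0.5 m)^2
lam2 = np.linspace(0.036**2, 0.5**2, 126)
phi = np.arange(-200.0, 200.0, 1.0)
p_obs = np.exp(2j * 10.0 * lam2)      # emission of a single thin source at phi = 10
rmsf, dirty = rmsf_and_dirty(lam2, p_obs, phi)
print("dirty-curve peak at phi =", phi[np.abs(dirty).argmax()])   # near 10 rad m^-2
```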
Recently, Frick et al. (2010) proposed a wavelet-based Faraday RM synthesis
method. In that approach, the authors assume specific magnetic field
symmetries in order to project the observed polarization emissions onto
$\lambda^{2}<0$.
Compressive sensing/sampling (CS) (Candès & Wakin 2008; Candès 2006; Wakin
2008) has been one of the most active areas in signal and image processing
over the last few years. Since CS was proposed, it has attracted very
substantial interest, and has been applied in many research areas (Wakin et
al. 2006; Lustig et al. 2007; Puy et al. 2010; Mishali et al. 2009; Bobin &
Starck 2009). In radio astronomy, CS has attracted attention as a tool for
image deconvolution. Wiaux et al. (2009a) compare the CS-based deconvolution
methods with the Högbom CLEAN method (Högbom 1974) on simulated uniform random
sensing matrices with different coverage rates. They apply compressive
sampling for deconvolution by assuming the target signal is sparse. Wiaux et
al. (2009b) proposed a new spread spectrum technique for radio interferometry
by using the non-negligible and constant component of the antenna separation
in the pointing direction. Recently, a new CS-based image deconvolution method
was introduced in Li et al. (2011) in which an isotropic undecimated wavelet
transform is adopted as a dictionary for sparse representation for sky images.
In this paper, we propose three new CS-based RM synthesis methods. In Section
2, the three CS-based RM synthesis methods are proposed. The implementation
details of the general experiment layout is given in Section 3. Simulation
results from the traditional methods are compared with those from CS-based
methods in Section 4. The final conclusions are given in Section 5.
## 2 CS-based RM synthesis
CS is primarily a sampling theory for sparse signals. A sensing matrix (Candès
et al. 2006b) is used to sample a signal with sparsity (few non-zero terms) or
a sparse representation with respect to a given basis function dictionary.
Given a limited number of measurements, generally less than the number of
unknowns in the target signal, the target signal can be reconstructed by
optimisation of an L1 norm. More information on the key concepts (such as
sparsity, incoherence, the restricted isometry property, and the L1 norm
reconstruction) and results can be found in Candès & Romberg (2007); Candès et
al. (2006a); Candès & Wakin (2008); Candès (2006).
CS includes two steps: sensing/sampling and reconstruction. This is in
contrast to Nyquist-Shannon theory which measures the target signal directly
without the reconstruction step. In this paper, we will focus on the
reconstruction step (calculating the Faraday dispersion function given an
observing window) rather than the sensing step (the selection of the observing
window), because the observing frequency range and the bandwidth for each
channel are usually fixed for a given telescope array.
To proceed with the CS approach, we rewrite the Fourier relationship as a
matrix equation. The projection of the Faraday dispersion function to the
polarized emission can be described as a matrix $Y$ of size $m\times N$
$\mathrm{Y}(j,N/2+k)=\mathrm{e}^{2\mathrm{i}\phi_{k}\lambda^{2}_{j}},j={1},\cdots,{m};k={1-N/2},\cdots,{N/2}.$
(6)
The inverse of the projection is the conjugate transpose of $Y$
$\mathrm{Y}^{\ast}(N/2+k,j)=\mathrm{e}^{-2\mathrm{i}\phi_{k}\lambda^{2}_{j}},j={1},\cdots,{m};k={1-N/2},\cdots,{N/2},$
(7)
where $\ast$ denotes the conjugate transpose. Suppose $f$ denotes the original
Faraday dispersion function $F(\phi)$ in a vector format of length $N$, then
the relationship between the Faraday dispersion function and the observed
radio emission is:
$\mathrm{Y}\mathbf{f}=\widetilde{\mathbf{p}},$ (8)
where $\widetilde{p}$ denotes the observed polarized emission in a vector
format of length $m$.
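For concreteness, the matrices of Eqs. (6) and (7) can be built as in the short sketch below (illustrative Python with an arbitrary Faraday-depth grid, not the authors' code):

```python
# Build Y (Eq. 6) and its conjugate transpose Y* (Eq. 7); Eq. (8) then reads p = Y @ f.
import numpy as np

def build_Y(lam2, phi):
    # Y[j, k] = exp(2i * phi_k * lambda_j^2), a complex m x N matrix
    return np.exp(2j * np.outer(lam2, phi))

lam2 = np.linspace(0.036**2, 0.5**2, 126)    # m = 126 channels
phi = np.arange(-200.0, 200.0, 2.0)          # an illustrative Faraday-depth grid (N = 200)
Y = build_Y(lam2, phi)
Y_star = Y.conj().T
```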
Because we can only obtain a limited number of measurements with the limited
number of channels, i.e. $m\ll N$, there are many different potential Faraday
dispersion functions consistent with the measurements. To resolve these
ambiguities, the usual approach is to use some prior information to select a
solution. The prior information can be, to name just a few possibilities: the
Faraday dispersion function is real; the Faraday dispersion function contains
only point-like signals, which are sparse in the Faraday depth domain; or the
Faraday dispersion function has a sparse representation with respect to a
dictionary of basis functions. Our three synthesis methods are based upon the
last two structural assumptions.
Before introducing our new RM synthesis methods, we need to review two
technical terms: Faraday thin and Faraday thick. A source can be either
Faraday thin if $\lambda^{2}\bigtriangleup\phi\ll 1$, or Faraday thick if
$\lambda^{2}\bigtriangleup\phi\gg 1$, where $\bigtriangleup\phi$ is the extent
of the source along the axis of Faraday depth $\phi$. Faraday thin sources can
be well described by Dirac $\delta$ function of $\phi$, while Faraday thick
sources have extensive support on the Faraday depth axis (Brentjens & de Bruyn
2005). Note that the definition of Faraday thin or thick is wavelength
dependent.
### 2.1 RM synthesis for Faraday thin sources
The relationship between the Faraday dispersion function and the observed
polarized radio emission is a Fourier pair if $\lambda^{2}=\pi u$, where $u$
is a wavelength related parameter. Since the space and Fourier domain are
perfectly incoherent (Candès & Romberg 2007), we can apply CS for RM synthesis
in a straightforward manner provided there are Faraday thin sources along the
line of sight since the screen is necessarily sparse.
In this context, CS recommends solving for the Faraday dispersion function by
minimising the L1 norm (summed absolute value) of the dispersion function, as
minimising the L1 norm promotes sparsity in the reconstruction. There
remains one further obstacle - the dispersion function is complex. We handle
this by summing the L1 norm of the real and imaginary parts:
${\rm{min}}\;\\{{\|\mathrm{Re}(\mathbf{f})\|_{l_{1}}+\|\mathrm{Im}(\mathbf{f})\|_{l_{1}}}\\}\;\;\;s.t.\;\mathrm{Y}\mathbf{f}=\widetilde{\mathbf{p}},$
(9)
where Re$(\bullet)$ and Im$(\bullet)$ denote the real and the imaginary parts,
respectively. By forming a real-valued vector of double length (comprising the
real and imaginary parts) from the complex-valued vector, almost all L1 norm
optimization solvers can be used for Eq. 9. This CS-based rotation measure
synthesis for Faraday thin sources is abbreviated as CS-RM-Thin. It is similar
in concept to RM-CLEAN, because the assumption behind RM-CLEAN is that the
Faraday dispersion function comprises spike-like signals. However, results in
Section 4 show that CS-RM-Thin can provide superior results to RM-CLEAN.
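One way to realise this in practice (an implementation detail we sketch here, not something prescribed by the paper) is to rewrite the complex constraint as an equivalent real linear system, so that $\|\mathrm{Re}(\mathbf{f})\|_{l_{1}}+\|\mathrm{Im}(\mathbf{f})\|_{l_{1}}$ becomes the ordinary L1 norm of the stacked vector:

```python
# Stack real and imaginary parts so that a generic real-valued basis-pursuit
# solver can handle the complex problem (9).
import numpy as np

def real_stack(Y, p):
    Yr, Yi = Y.real, Y.imag
    A = np.block([[Yr, -Yi],
                  [Yi,  Yr]])                 # (2m) x (2N) real matrix
    b = np.concatenate([p.real, p.imag])      # length-2m real vector
    return A, b                               # solve min ||x||_1  s.t.  A x = b

# Consistency check on a tiny random example: the real system reproduces Y f = p.
rng = np.random.default_rng(1)
m, N = 4, 8
Y = rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N))
f = np.zeros(N, dtype=complex); f[2] = 1 - 2j
A, b = real_stack(Y, Y @ f)
x = np.concatenate([f.real, f.imag])
print(np.allclose(A @ x, b))                  # True
```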
### 2.2 RM synthesis for Faraday thick sources
CS-RM-Thin can work effectively when the Faraday dispersion function includes
Faraday thin sources along the line of sight. This limits its application for
the case when there are some Faraday thick sources along the line of sight.
However, CS can still reconstruct the Faraday dispersion function efficiently
provided that we can find a suitable dictionary of basis functions that can
decompose the extended sources into a sparse representation as described in
Candès (2006). In this paper, we adopt the Daubechies D8 wavelet transforms
(Daubechies 1992) as the dictionary. Other wavelet transforms could also be
adopted; the selection depends on the properties of the Faraday dispersion
function. We choose the D8 wavelet transform because we assume that the
Faraday dispersion function with thick sources is a sinc-like signal.
We can rewrite Eq. (8) as
$\mathrm{Y}\mathrm{W}^{-1}\alpha=\widetilde{\mathbf{p}},$ (10)
where $\mathrm{W}^{-1}$ is the inverse wavelet transform matrix of size
$N\times N$; $\alpha$ is the wavelet coefficient of the Faraday dispersion
function $\mathbf{f}$. The wavelet transform matrix is denoted as
$\mathrm{W}$, therefore, $\alpha=\mathrm{W}\mathbf{f}$. Other symbols follow
the definitions in Eq. (8). Under the condition that
$\mathrm{Y}\mathbf{f}=\widetilde{\mathbf{p}}$, we adopt the following
assumption: both the real part and the imaginary part of the Faraday thick
sources will have a sparse representation in the wavelet domain independently.
Then the wavelet based CS RM synthesis method for Faraday thick sources can be
written as
${\rm{min}}\;\\{{\|\mathrm{W}\cdot{\rm{Re}}(\mathbf{f})\|_{l_{1}}+\|\mathrm{W}\cdot{\rm{Im}}(\mathbf{f})\|_{l_{1}}}\\}\;\;\;s.t.\;\mathrm{Y}\mathbf{f}=\widetilde{\mathbf{p}}.$
(11)
This CS-based rotation measure synthesis for Faraday thick sources is
abbreviated as CS-RM-Thick.
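The sparsity assumption behind CS-RM-Thick can be checked numerically. The sketch below (PyWavelets with a synthetic box-like profile; both are our own illustrative choices) keeps only a small fraction of the Daubechies D8 coefficients and reconstructs the profile from them:

```python
# A box-like (Faraday thick) profile is not sparse in phi, but it is compressible
# in the 'db8' wavelet basis: a few coefficients carry most of its energy.
import numpy as np
import pywt

N = 512
phi = np.arange(N) - N // 2
f_thick = np.where((phi >= -120) & (phi <= -40), 2.0, 0.0)   # synthetic thick source

coeffs = pywt.wavedec(f_thick, "db8", mode="periodization")
arr, slices = pywt.coeffs_to_array(coeffs)

keep = 40                                                    # keep 40 of 512 coefficients
thresh = np.sort(np.abs(arr))[-keep]
arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)
rec = pywt.waverec(pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec"),
                   "db8", mode="periodization")
err = np.linalg.norm(rec - f_thick) / np.linalg.norm(f_thick)
print(f"relative L2 error keeping {keep}/{N} coefficients: {err:.3f}")
```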
### 2.3 RM synthesis for Faraday mixed sources
So far, we have proposed two RM synthesis methods: CS-RM-Thin and CS-RM-Thick
for solving Faraday thin sources and thick sources, respectively. However,
this raises the question: which method should be selected if there are both
Faraday thin sources and thick sources along the line of sight? Moreover, how
can we make a selection if we have no prior information about the Faraday
dispersion function, i.e. we are not sure what it looks like? Clearly, neither
method is suitable on its own; we therefore need another solution for the above
problems. Let us assume that there are both Faraday thin sources and Faraday
thick sources in $F(\phi)$. Suppose $\mathbf{f}_{\rm{thin}}$ denotes the
Faraday thin sources in $F(\phi)$ in a vector format of length $N$;
$\mathbf{f}_{\rm{thick}}$ denotes the Faraday thick sources in $F(\phi)$ in a
vector format of length $N$, then
$\mathbf{f}_{\rm{thin}}+\mathbf{f}_{\rm{thick}}=\mathbf{f}$. Eq.(8) can be
rewritten as
$\mathrm{Y}\mathbf{f}_{\rm{thin}}+\mathrm{Y}\mathbf{f}_{\rm{thick}}=\widetilde{\mathbf{p}}.$
(12)
Since we know that the L1 norm can preserve sparsity; Faraday thin sources
show sparsity in the Faraday depth domain; Faraday thick sources show sparsity
in the wavelet domain, we propose the following solution for the mixed
circumstance
${\rm{min}}\;\{\|\mathrm{Re}(\mathbf{f}_{\rm{thin}})\|_{l_{1}}+\|\mathrm{Im}(\mathbf{f}_{\rm{thin}})\|_{l_{1}}+\|\mathrm{W}\cdot{\rm{Re}}(\mathbf{f}_{\rm{thick}})\|_{l_{1}}+\|\mathrm{W}\cdot{\rm{Im}}(\mathbf{f}_{\rm{thick}})\|_{l_{1}}\}\;\;\;s.t.\;\mathrm{Y}\mathbf{f}=\widetilde{\mathbf{p}},$
(13)
where the definition of $\mathrm{W}$ is the same as the above subsection. The
above solution for Faraday mixed sources is still based on the spirit of CS by
preserving the sparsity in the Faraday depth domain for Faraday thin sources
and in the wavelet domain for Faraday thick sources, simultaneously. This CS-
based rotation measure synthesis for Faraday mixed sources is abbreviated as
CS-RM-Mix.
## 3 Implementation details
In this section, the implementation details of the above three proposed CS-
based RM synthesis methods will be given.
### 3.1 Preparation
To create a general experiment layout, we borrow some definitions and
conclusions from Brentjens & de Bruyn (2005). A diagram of both the
wavelength-squared ($\lambda^{2}$) domain and the $\phi$ domain is displayed in figure 1. The
maximum observable Faraday depth $\|\phi_{\mathrm{max}}\|$ is given in
Brentjens & de Bruyn (2005)
$\|\phi_{\mathrm{max}}\|\approx\frac{\sqrt{3}}{\delta\lambda^{2}},$ (14)
where $\delta\lambda^{2}$ is the width of an observing channel. The full width
at half maximum (FWHM) of the main peak of the RMSF can be estimated by
$\delta\phi\approx\frac{2\sqrt{3}}{\bigtriangleup\lambda^{2}},$ (15)
where $\bigtriangleup\lambda^{2}$ is the width of the total $\lambda^{2}$
distribution.
Before using CS-based RM synthesis, the following two steps are needed:
1. 1.
Select the resolution of the Faraday depth $\phi_{R}$. Since we know the
maximum observable Faraday depth $\phi_{\rm{max}}$ from Eq. (14) and the FWHM
of the main peak of the rotation measure spread function from Eq. (15), we can
select a grid resolution parameter $\phi_{R}$ in $\phi$ space, which should be
four or five times less than $\delta\phi$, to achieve Nyquist sampling.
However, for some observational window functions $M(\lambda^{2})$, the maximum
scale (Faraday thickness) that one is sensitive to, estimated by
$\frac{\pi}{\lambda^{2}_{\mathrm{min}}}$, is actually smaller than
$\delta\phi$. In these cases, it might be practical to Nyquist sample this
smaller scale in order to calculate $\phi_{R}$. Based on $\phi_{\rm{max}}$ and
$\phi_{R}$, we can calculate the number of grid points $N$ as
$N=\rm{floor}(\frac{2\phi_{\rm{max}}}{\phi_{R}}).$ (16)
2. 2.
Construct the two matrices $\mathrm{Y}$ and $\mathrm{Y}^{\ast}$.
Figure 1: This diagram shows the relationship between parameters in
$\lambda^{2}$ domain and $\phi$ domain, respectively.
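As a back-of-the-envelope illustration of these two preparation steps, the quantities of Eqs. (14)-(16) can be evaluated for the Window 1 set-up used in Section 4 (126 channels between 0.036 m and 0.5 m, evenly spaced in $\lambda^{2}$); the resulting numbers are close to, but not exactly, the values quoted there ($\phi_{R}=3.6$ rad m-2 and $N=480$), which depend on the chosen grid:

```python
# Rough grid parameters from Eqs. (14)-(16); exact values depend on the channel gridding.
import numpy as np

m = 126
lam2 = np.linspace(0.036**2, 0.5**2, m)       # channels evenly spaced in lambda^2
delta_lam2 = lam2[1] - lam2[0]                # single-channel width in lambda^2
Delta_lam2 = lam2[-1] - lam2[0]               # total lambda^2 coverage

phi_max = np.sqrt(3.0) / delta_lam2           # Eq. (14), ~870 rad m^-2
delta_phi = 2.0 * np.sqrt(3.0) / Delta_lam2   # Eq. (15), ~14 rad m^-2 (FWHM of the RMSF)
phi_R = delta_phi / 4.0                       # grid a few times finer than the FWHM
N = int(np.floor(2.0 * phi_max / phi_R))      # Eq. (16)
print(f"delta_phi = {delta_phi:.1f}, phi_max = {phi_max:.0f}, N = {N}")
```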
The selection of these CS-based RM synthesis methods depends on the prior
knowledge about the Faraday dispersion function. If we assume it includes
Faraday thin sources only along the line of sight, we should select CS-RM-
Thin. On the other hand, CS-RM-Thick should be used if we know that there are
Faraday thick sources only. When we know that there are both Faraday thin
sources and thick sources along the line of sight, CS-RM-Mix should be used.
In most circumstances we have no prior information about the Faraday dispersion
function; in that case CS-RM-Mix can always be used to reconstruct a reliable
result as a compromise.
### 3.2 L1 norm solvers for CS-Based RM synthesis methods
For CS-RM-Thin and CS-RM-Thick, many optimization methods (Beck & Teboulle
2009; Becker et al. 2011; Boyd & Vandenberghe 2004) can be used to solve the
L1 norm minimization problem in a straightforward manner. There are many
solvers or toolboxes, for example the L1-Magic Matlab toolbox, which can be
downloaded from http://www.acm.caltech.edu/l1magic/. In this paper, L1-Magic is
adopted for solving equations (9) and (11). The Fast Iterative Shrinkage-
Thresholding Algorithm (FISTA) (Beck & Teboulle 2009) can also be used for
solving this problem if we rewrite Eq. (9) or (11) in a Lagrangian form.
As far as CS-RM-Mix is concerned, the solvers or toolboxes introduced above
can be used for solving Eq. (13) indirectly. Suppose
$\alpha_{\mathrm{thick}}$ denotes the wavelet coefficients of the thick
sources $\mathbf{f_{\mathrm{thick}}}$ in the Faraday dispersion function in a
vector format, and $\mathrm{W}^{-1}$ is the inverse wavelet transform matrix,
then we have
$\mathbf{f_{\mathrm{{thick}}}}=\mathrm{W}^{-1}\alpha_{\mathrm{thick}}.$ (17)
Substituting the above equation into Eq. (12), we have
$\mathrm{Y}\mathbf{f}_{\rm{thin}}+\mathrm{Y}\mathrm{W}^{-1}\alpha_{\mathrm{thick}}=\widetilde{\mathbf{p}}.$
(18)
Furthermore, Eq. (18) can be rewritten as
$\left[\mathrm{Y}\;\mathrm{Y}\right]\left[\begin{array}{cc}\mathrm{I}&\mathrm{O}\\ \mathrm{O}&\mathrm{W}^{-1}\end{array}\right]\left[\begin{array}{c}\mathbf{f}_{\mathrm{thin}}\\ \alpha_{\mathrm{thick}}\end{array}\right]=\widetilde{\mathbf{p}}_{m\times 1},$
(19)
where $\mathrm{I}$ denotes the identity matrix of size $N\times N$, and
$\mathrm{O}$ is the matrix of all zeros with the size of $N\times N$. If we
denote $\mathrm{Y_{mix}}=[\mathrm{Y}\;\mathrm{Y}]_{m\times 2N}$,
$\mathrm{T}=\left[\begin{array}{cc}\mathrm{I}&\mathrm{O}\\ \mathrm{O}&\mathrm{W}^{-1}\end{array}\right]_{2N\times 2N}$ and
$\mathbf{c}=\left[\begin{array}{c}\mathbf{f}_{\mathrm{thin}}\\ \alpha_{\mathrm{thick}}\end{array}\right]_{2N\times 1}$, almost all L1 norm
minimization solvers can be used to solve Eq. (13) via
$\mathrm{Y_{mix}}\,\mathrm{T}\,\mathbf{c}=\widetilde{\mathbf{p}}.$
(20)
To help readers who are unfamiliar with L1 norm minimization to use or
implement our proposed CS-based RM synthesis methods, we have developed a
simple algorithm based on the iterative soft-thresholding algorithm (ISTA)
(Beck & Teboulle 2009) for CS-RM-Mix as an example. The algorithm is as
follows:
1. 1.
Initialization:
1. (a)
Choose parameters: the soft-threshold $\tau$ (this can be set to 1 in most
circumstances) and the stopping-threshold $\delta$ (this can be set to the noise
level)
2. (b)
Set the number of iteration $l=\rm{floor}(\tau/\delta)$
3. (c)
$\mathbf{f}_{\rm{thin}}=\mathrm{Y}^{\ast}\widetilde{\mathbf{p}}$;
$\mathbf{f}_{\rm{thick}}=0$
2. 2.
Within $l$ iterations:
1. (a)
Reconstructing the Faraday thin sources
1. i.
Calculate the residual
$\mathbf{r}=\widetilde{\mathbf{p}}-\mathrm{Y}\mathbf{f}_{\rm{thin}}-\mathrm{Y}\mathbf{f}_{\rm{thick}}$
2. ii.
Calculate the gradient $\mathbf{d}=\mathrm{Y}^{\ast}\mathbf{r}$
3. iii.
Update $\mathbf{f}_{\rm{thin}}=\mathbf{f}_{\rm{thin}}+\mathbf{d}$
4. iv.
Soft threshold ${\rm{Re}}(\mathbf{f}_{\rm{thin}})$ and
${\rm{Im}}(\mathbf{f}_{\rm{thin}})$, respectively. Set any values below $\tau$
to zero and update $\mathbf{f}_{\rm{thin}}$
2. (b)
Reconstructing the Faraday thick sources
1. i.
Calculate the residual
$\mathbf{r}=\widetilde{\mathbf{p}}-\mathrm{Y}\mathbf{f}_{\rm{thin}}-\mathrm{Y}\mathbf{f}_{\rm{thick}}$
2. ii.
Calculate the gradient $\mathbf{d}=\mathrm{Y}^{\ast}\mathbf{r}$
3. iii.
Update $\mathbf{f}_{\rm{thick}}=\mathbf{f}_{\rm{thick}}+\mathbf{d}$
4. iv.
Calculate the wavelet coefficients for both the real part and imaginary part of
$\mathbf{f}_{\rm{thick}}$, i.e.
$\mathrm{W}\cdot{\rm{Re}}(\mathbf{f}_{\rm{thick}})$ and
$\mathrm{W}\cdot{\rm{Im}}(\mathbf{f}_{\rm{thick}})$
5. v.
Soft threshold the wavelet coefficients of
$\mathrm{W}\cdot{\rm{Re}}(\mathbf{f}_{\rm{thick}})$ and
$\mathrm{W}\cdot{\rm{Im}}(\mathbf{f}_{\rm{thick}})$. Set any values below
$\tau$ to zero and update $\mathrm{W}\cdot{\rm{Re}}(\mathbf{f}_{\rm{thick}})$
and $\mathrm{W}\cdot{\rm{Im}}(\mathbf{f}_{\rm{thick}})$
6. vi.
Calculate the inverse wavelet transform for both the real part and imaginary
part, respectively, then update $\mathbf{f}_{\rm{thick}}$
3. (c)
$\tau=\tau-\delta$
3. 3.
Reconstructed Faraday dispersion function
$\widetilde{\mathbf{f}}=\mathbf{f}_{\rm{thin}}+\mathbf{f}_{\rm{thick}}$
Note the Faraday thin sources and thick sources are reconstructed separately.
This can be helpful when astronomers focus on either Faraday thin sources or
thick sources only.
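For illustration, the listed steps translate into the NumPy/PyWavelets sketch below. This is not the authors' MATLAB implementation: soft-thresholding is used throughout, 'db8' stands in for the D8 dictionary, and a gradient step size of order $1/m$ is introduced (our own assumption) to keep the un-normalised iteration stable:

```python
# Sketch of the CS-RM-Mix iteration: alternate updates of the thin (phi-sparse)
# and thick (wavelet-sparse) components with a threshold that shrinks each pass.
import numpy as np
import pywt

def soft(x, tau):
    """Elementwise soft-thresholding; entries with magnitude below tau become zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def wavelet_shrink(x, tau, wavelet="db8"):
    """Soft-threshold the real and imaginary parts of x in the wavelet domain."""
    parts = []
    for comp in (x.real, x.imag):
        coeffs = pywt.wavedec(comp, wavelet, mode="periodization")
        coeffs = [soft(c, tau) for c in coeffs]
        parts.append(pywt.waverec(coeffs, wavelet, mode="periodization")[: x.size])
    return parts[0] + 1j * parts[1]

def cs_rm_mix(p_obs, Y, tau=1.0, delta=1e-3, step=None):
    """Return (f_thin, f_thick); `step` scales Y* r (the listing above uses 1)."""
    m = Y.shape[0]
    step = 1.0 / m if step is None else step
    Yh = Y.conj().T                                   # Y* of Eq. (7)
    f_thin = step * (Yh @ p_obs)                      # initialisation (roughly the dirty curve)
    f_thick = np.zeros_like(f_thin)
    for _ in range(int(np.floor(tau / delta))):
        # (a) thin component: gradient step, then shrink in the phi domain
        r = p_obs - Y @ (f_thin + f_thick)
        g = f_thin + step * (Yh @ r)
        f_thin = soft(g.real, tau) + 1j * soft(g.imag, tau)
        # (b) thick component: gradient step, then shrink in the wavelet domain
        r = p_obs - Y @ (f_thin + f_thick)
        f_thick = wavelet_shrink(f_thick + step * (Yh @ r), tau)
        tau -= delta                                  # (c) lower the threshold
    return f_thin, f_thick
```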
If we ignore the step 2b and set $\mathbf{f}_{\rm{thick}}=0$, the above
algorithm will degenerate to CS-RM-Thin. On the contrary, if we ignore the
step 2a and set $\mathbf{f}_{\rm{thin}}=0$, the algorithm will degenerate to
CS-RM-Thick. In this paper, these CS-based rotation measure synthesis methods
are implemented in MATLAB. Our code may be found at
http://code.google.com/p/csra/downloads (download the file “CS_RM.zip”, which
includes both the CS-RM-Thin and CS-RM-Thick algorithms).
## 4 Experimental results
We have adopted the standard test platform of Brentjens & de Bruyn (2005). In
this platform there are 126 observing channels within Window 1 (0.036 m to 0.5 m),
evenly distributed in $\lambda^{2}$. Three different Faraday dispersion
functions are simulated to test these CS-based RM synthesis methods for the
Faraday thin source, Faraday thick source and mixed source cases.
### 4.1 Simulation results for Faraday thin sources
We simulate a Faraday dispersion function containing four Faraday thin
sources. See figure 2 for the function. From left to right, these sources are:
$F(-10)=10-4\mathrm{i}$ Jy m2 rad-1, $F(-17)=-7+5\mathrm{i}$ Jy m2 rad-1,
$F(40)=9-7\mathrm{i}$ Jy m2 rad-1 and $F(88)=-4+3\mathrm{i}$ Jy m2 rad-1. The
true Faraday dispersion function is shown in the top left corner of figure 2.
The thin solid line shows the real value, the dashed line the imaginary part,
and the thick solid line the amplitude. The Faraday dispersion function in
this test is complex valued, i.e. the intrinsic polarization angles are non-
zero. We have selected this simulated dispersion function rather than the
standard test (real valued Faraday dispersion function) in Brentjens & de
Bruyn (2005) to investigate the behaviour when the intrinsic polarization
angles are non-zero. From Eq. 15, we can calculate that the FWHM of the RMSF
is around $14$ rad m-2. The dirty curve, which is calculated by assuming all
unmeasured emission to be zero, is shown in figure 2 (b). The result of RM-
CLEAN is shown in figure 2 (c). Note that RM-CLEAN cannot correctly
reconstruct the magnitude or phase of the Faraday dispersion function in this
case. It has been demonstrated previously that RM-CLEAN has difficulty with
the separation of sources below the FWHM (Farnsworth et al. 2011). In figure 2
(a), from left to right, the distance between the first and the second sources
is smaller than the FWHM. In figure 2 (c), the first source and second source
are merged to form an unrealistic source. Results of CS-RM-Thin, CS-RM-Thick
and CS-RM-Mix are shown in the figure 2 (d), (e) and (f), respectively. As one
might expect, the result of CS-RM-Thick is poor, because the Faraday
dispersion function has no Faraday thick sources. In figure 2, though CS-RM-
Mix does a better job than CS-RM-Thick and RM-CLEAN in terms of the magnitude
and phase of the reconstructed dispersion functions, the result is far from
good enough. The result of CS-RM-Thin is shown in figure 2 (d). We can see that
CS-RM-Thin gives the best result, reconstructing the Faraday dispersion
function without any error. This is consistent with CS theory which says that
we can reconstruct the sparse signal exactly with “overwhelming probability”
(Candès & Wakin 2008; Candès 2006; Wakin 2008). In this test, we select
$\phi_{R}=3.6$ rad m-2, which is around one quarter of the FWHM, so $N=480$.
To carry out a numerical comparison, we use the root mean square (RMS) error
to characterise the difference between the reconstructed
$\widetilde{\mathbf{f}}$ and the Faraday dispersion function $\mathbf{f}$:
$RMS=\sqrt{\frac{\sum_{-N/2+1}^{N/2}(\mathbf{f}-\widetilde{\mathbf{f}})^{2}}{N}}.$
(21)
The RMS error is calculated for all the candidate methods, and the results are
listed in table 1. Results for this test can be found in the first row of the
table. CS-RM-Thin gives the best result.
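For completeness, Eq. (21) can be evaluated as in the short sketch below; we assume the squared modulus of the complex residual is intended:

```python
import numpy as np

def rms_error(f_true, f_rec):
    # RMS error of Eq. (21), using |f - f~|^2 for the complex residual
    return np.sqrt(np.mean(np.abs(f_true - f_rec) ** 2))
```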
Figure 2: We have tested our methods on a Faraday dispersion function with four Faraday thin sources. From left to right in the first row are: (a) Original $F(\phi)$, (b) Dirty curve, (c) RM-CLEAN. From left to right in the second row are: (d) CS-RM-Thin, (e) CS-RM-Thick, (f) CS-RM-Mix. The thin solid line shows the real value, the dashed line the imaginary part, and the thick solid line the amplitude. All horizontal axis units are rad m-2, i.e. $\phi$, and all vertical axis units are Jy m2 rad-1.
Figure 3: Reconstructed results of a Faraday dispersion function with two Faraday thick sources. From left to right in the first row are: (a) Original $F(\phi)$, (b) Dirty curve, (c) RM-CLEAN. From left to right in the second row are: (d) CS-RM-Thin, (e) CS-RM-Thick, (f) CS-RM-Mix.
Table 1: Numerical comparison results (RMS error)
 | Dirty Faraday dispersion function | RM-CLEAN | CS-RM-Thin | CS-RM-Thick | CS-RM-Mix
---|---|---|---|---|---
Test with Faraday thin sources | 1.40 | 0.78 | 0.00 | 0.84 | 0.76
Test with Faraday thick sources | 2.18 | 0.91 | 1.07 | 0.72 | 0.77
Test with Faraday mix sources | 2.45 | 1.03 | 0.95 | 0.81 | 0.80
### 4.2 Simulation results for Faraday thick sources
We now test our CS-based methods for a Faraday dispersion function with
Faraday thick sources. Here we assume that the Faraday dispersion function
includes two sources $F(\phi)=2-2\mathrm{i}$ Jy m2 rad-1 where
$-120\leq\phi\leq 40$ and $F(\phi)=-6-3\mathrm{i}$ Jy m2 rad-1 where
$30\leq\phi\leq 70$. The simulated Faraday dispersion function is shown in
figure 3 (a). We adopt the previous observing window for this test and the
dirty Faraday dispersion function is shown in figure 3 (b). Note that this
only provides us with the approximate shape of the Faraday dispersion
function. As mentioned in Frick et al. (2010), the magnitude of $F(\phi)$
indicates the polarized emission of the region with Faraday depth $\phi$ and
its phase defines the intrinsic position angle. For the study of polarized
emission of galaxies, the magnitude of $F(\phi)$ is very important, and for
the study of orientation of the magnetic field perpendicular to the line of
sight, the phase information of $F(\phi)$ is crucial. Unfortunately, Brentjens
& de Bruyn’s method cannot reconstruct reliable phase information for
$F(\phi)$. The cleaned version is shown in figure 3 (c). RM-CLEAN also failed
to reconstruct the phase information. CS-RM-Thin does not work well for this
case, because $F(\phi)$ is not sparse. The result of CS-RM-Thin is shown in
figure 3 (d). CS-RM-Mix is also used for this test by assuming that there are
both Faraday thin sources and thick sources. The result of CS-RM-Mix is shown
in figure 3 (f). The reconstructed result from CS-RM-Thick is shown in figure
3 (e). Even though CS-RM-Mix gives a much better result than CS-RM-Thin and
RM-CLEAN, the result is not as good as that of CS-RM-Thick. CS-RM-Thick
provides the best approximation to the original Faraday dispersion function in
terms of both magnitude and phase. This is also supported by the numerical
comparison in table 1. From the second row of the table, we can see that CS-
RM-Thick gives the smallest RMS error, $0.72$, slightly better than the $0.77$
of CS-RM-Mix.
### 4.3 Simulation results for Faraday mixed sources
So far we have tested our methods for Faraday thin sources and for Faraday
thick sources; we now test the CS-based methods for the mixed circumstance,
with both Faraday thin and thick sources present. The simulated
Faraday dispersion function for this test is shown in the top left corner of
figure 4. There are three sources along the line of sight. From left to right,
these sources are: $F(-58)=-4+3\mathrm{i}$ Jy m2 rad-1,
$F(-30)=10-4\mathrm{i}$ Jy m2 rad-1, $F(\phi)=2-6\mathrm{i}$ Jy m2 rad-1 where
$41\leq\phi\leq 100$. The original Faraday dispersion function is shown in the
top left corner of figure 4. We use the previous observing window for this
test. Based on the observing window (with 126 observing channels distributed
between 0.036 m to 0.5 m), the RMSF is calculated and shown in figure 4 (b).
The dirty curve is shown in figure 4 (c). The cleaned version by RM-CLEAN is
shown in figure 4 (d). RM-CLEAN performs badly in this test, because there is
a Faraday thick source. In general, RM-CLEAN can only work well when there are
Faraday thin sources along the line of sight. The results of CS-RM-Thin and
CS-RM-Thick are shown in figure 4 (e) and (f), respectively. We can see that
CS-RM-Thin reconstructs the two Faraday thin sources nicely, but fails to
reconstruct the Faraday thick source. In contrast, CS-RM-Thick can properly
reconstruct the Faraday thick source, but mistakenly expands the two Faraday
thin sources. As introduced above, CS-RM-Mix can separate the
Faraday thin components $\mathbf{f}_{\rm{thin}}$ and the Faraday thick
components $\mathbf{f}_{\rm{thick}}$ during the reconstruction. In this test,
the soft-threshold $\tau=1$ and $\delta=0.001$ in the proposed algorithm for
CS-RM-Mix. The results of CS-RM-Mix are shown in the last row of figure 4. The
separated Faraday components $\mathbf{f}_{\rm{thin}}$ and
$\mathbf{f}_{\rm{thick}}$ are shown in figure 4 (g) and (h), respectively. The
sum of the separated components is the reconstructed Faraday dispersion
function which is shown in figure 4 (i). We can see that the result of CS-RM-
Mix takes advantage of the results of both CS-RM-Thin and CS-RM-Thick, and it
gives the closest approximation to the original $F(\phi)$. From an objective
evaluation point of view, CS-RM-Mix gives the minimum RMS error, $0.80$, as can
be seen in the third row of table 1.
Figure 4: Reconstructed results of a Faraday dispersion function with two
Faraday thin sources and a thick source. From left to right in the first row
are: (a) Original $F(\phi)$, (b) RMSF of the observing window with 126
observing channels distributed between 0.036 m to 0.5 m, (c) Dirty curve. From
left to right in the second row are: (d) RM-CLEAN, (e) CS-RM-Thin, (f) CS-RM-
Thick. From left to right in the third row are: (g) Thin components
$\mathbf{f}_{\rm{thin}}$ by using CS-RM-Mix, (h) Thick components
$\mathbf{f}_{\rm{thick}}$ by using CS-RM-Mix, (i) CS-RM-Mix i.e.
$\mathbf{f}_{\rm{thin}}+\mathbf{f}_{\rm{thick}}$. All horizontal axis units
are rad m-2, and all vertical axis units are Jy m2 rad-1.
### 4.4 Discussion
In the figures above, we show the reconstructed dispersion functions without
smoothing and without adding back the residuals, a step commonly known as
restoring. For real applications, restoration is an option if the robustness of
the reconstruction is insufficient.
From the above three tests, we can see that there is no single CS-based RM
synthesis method with outstanding performance under all circumstances. The
best reconstruction can only be achieved when we have some prior knowledge
about the Faraday dispersion function and select the relevant CS-based RM
synthesis method. If such information is not available, CS-RM-Mix can always
be used as a compromise. Another option is that we can either select CS-RM-
Thin with a large $\phi_{R}$ or CS-RM-Thick with a small $\phi_{R}$. If a
large $\phi_{R}$ is selected, $\mathbf{f}$ is likely to be a sparse vector, so
CS-RM-Thin should be selected for the reconstruction. On the other hand, a
small $\phi_{R}$ can expand compact sources into extended sources, so CS-RM-
Thick will become suitable for the reconstruction. It does not mean that CS-
RM-Thick can be used for any cases with a small $\phi_{R}$, because a smaller
$\phi_{R}$ (which means a larger $N$ from Eq. 16) brings more unknowns in
$\mathbf{f}$ and more uncertainty. We have to balance these factors.
For RM synthesis, the observing window acts rather like a frequency filter. For
example, the previously introduced observing Window 1 is like a low-pass filter
in the wavelength-squared domain. Therefore, we should bear in mind that, to
observe radio sources containing Faraday thick sources under the same
restrictions on $m$ and $\delta\lambda^{2}$, the higher the observing frequency
band the better.
These CS-based RM synthesis methods are not limited to the optimisation
methods L1-Magic solver, FISTA and ISTA. Other L1 norm optimization solvers
can also be adopted for solving Eqs. (9), (11) and (13). Though the wavelet
transform is used as the sparse-representation dictionary in this paper, other
basis functions could also be used to achieve sparsity for the Faraday thick
sources.
In summary, the performance of the CS-based RM synthesis methods depends on the
observing window, the resolution of $\phi$, the number of measurements, and
the sparsity of the Faraday dispersion function. Reconstruction with these
CS-based RM synthesis methods generally takes less time than RM-CLEAN; for
example, CS-RM-Thin takes 3 seconds for the above tests compared with 5 seconds
for RM-CLEAN. The calculation time depends mainly on the construction of the
matrix $\mathrm{Y}$: the larger $N$ and $m$ ($N=480$, $m=126$ for the above
tests), the more time it takes. The computer is a 2.53-GHz Core 2 Duo
MacBook Pro with 4GB RAM.
## 5 Conclusions
Faraday rotation measure synthesis is a very useful tool to study
astrophysical magnetic fields. The problem in RM synthesis is to reconstruct
the Faraday dispersion function given incomplete observations. From CS, we
know that a signal with sparsity can be well reconstructed based on few
measurements. We propose three CS-based RM synthesis methods by finding sparse
representations of the Faraday dispersion functions $F(\phi)$ for different
circumstances. CS-RM-Thin, CS-RM-Thick and CS-RM-Mix can be used for Faraday
thin sources only, thick sources only and mixed sources, respectively. In
general, Faraday thin sources show sparsity in the Faraday depth domain
$\phi$, therefore, we apply the CS reconstruction methods (L1 norm
optimization solvers) in a straightforward manner i.e. CS-RM-Thin. Although
Faraday thick sources are not sparse in the Faraday depth domain, they are
sparse in the wavelet domain for a suitably chosen basis wavelet. Therefore,
we apply the L1 norm optimization solvers in the wavelet domain i.e. CS-RM-
Thick. When there are Faraday mixed sources along the line of sight, we
preserve the sparsity by using L1 norm in the Faraday depth domain and the
wavelet domain simultaneously i.e. CS-RM-Mix.
As shown in the experimental results, the performance of these CS-based
methods is markedly superior to the traditional RM synthesis methods
(Brentjens & de Bruyn 2005; Heald 2009) in terms of magnitude and angle of the
reconstructed Faraday dispersion function. As exemplified by figure 2, neither
Brentjens & de Bruyn's method nor RM-CLEAN works well in disentangling two
closely spaced sources, whereas CS-RM-Thin can separate them.
## 6 Acknowledgements
We thank Jean-Luc Starck for early discussions on Compressive Sampling. In
addition, comments, suggestions and derivations made by Dr Brentjens during
the review process are very much appreciated.
## References
* Beck & Teboulle (2009) Beck, A. & Teboulle, M. 2009, SIAM Journal on Imaging Sciences, 2, 183
* Becker et al. (2011) Becker, S., Bobin, J., & Candès, E. 2011, SIAM Journal on Imaging Sciences, 4, 1
* Bobin & Starck (2009) Bobin, J. & Starck, J.-L. 2009, Wavelets XIII, 7446, 74460I
* Boyd & Vandenberghe (2004) Boyd, S. & Vandenberghe, L. 2004, Convex optimization (Cambridge University Press), 716
* Brentjens & de Bruyn (2005) Brentjens, M. & de Bruyn, A. 2005, Astronomy and Astrophysics, 441, 1217
* Brown & Rudnick (2009) Brown, S. & Rudnick, L. 2009, The Astronomical Journal, 137, 3158
* Burn (1966) Burn, B. 1966, Monthly Notices of the Royal Astronomical Society, 133, 67
* Candès (2006) Candès, E. 2006, Int. Congress of Mathematics, Madrid, Spain, 2006, 3, 1433
* Candès & Romberg (2007) Candès, E. & Romberg, J. 2007, Inverse Problems, 23, 969
* Candès et al. (2006a) Candès, E., Romberg, J., & Tao, T. 2006a, IEEE Transactions on information Theory, 52, 489
* Candès et al. (2006b) Candès, E., Romberg, J., & Tao, T. 2006b, Communications on Pure and Applied Mathematics, 59, 1207
* Candès & Wakin (2008) Candès, E. & Wakin, M. 2008, IEEE Signal Processing Magazine, 25, 21-30
* Daubechies (1992) Daubechies, I. 1992, Ten lectures on wavelets (Society for Industrial and Applied Mathematics Philadelphia, PA, USA)
* de Bruyn et al. (2006) de Bruyn, A., Katgert, P., Haverkorn, M., & Schnitzeler, D. 2006, Astronomische Nachrichten, 327, 487
* Farnsworth et al. (2011) Farnsworth, D., Rudnick, L., & Brown, S. 2011, Arxiv preprint: 1103.4149
* Frick et al. (2010) Frick, P., Sokoloff, D., Stepanov, R., & Beck, R. 2010, Monthly Notices of the Royal Astronomical Society: Letters, 401, L24
* Heald (2009) Heald, G. 2009, Cosmic Magnetic Fields: From Planets, 259, 591
* Högbom (1974) Högbom, J. 1974, Astronomy and Astrophysics Supplement, 15, 417
* Li et al. (2011) Li, F., Cornwell, T., & de Hoog, F. 2011, A&A, 528, A31
* Lustig et al. (2007) Lustig, M., Donoho, D., & Pauly, J. 2007, Magnetic Resonance in Medicine, 58, 1182
* Mishali et al. (2009) Mishali, M., Eldar, Y. C., Dounaevsky, O., & Shoshan, E. 2009, CCIT Repor No. 751
* Puy et al. (2010) Puy, G., Wiaux, Y., Gruetter, R., Thiran, J., & De, D. V. 2010, IEEE International Symp. on Biomedical Imaging: From Nano to macro
* Schnitzeler et al. (2007) Schnitzeler, D., Katgert, P., & de Bruyn, A. 2007, Astronomy and Astrophysics, 471, L21
* Schnitzeler et al. (2009) Schnitzeler, D., Katgert, P., & de Bruyn, A. 2009, Astronomy and Astrophysics, 494, 611
* Wakin (2008) Wakin, M. 2008, IEEE Signal Processing Magazine, 1623
* Wakin et al. (2006) Wakin, M., Laska, J., Duarte, M., & Baron, D. 2006, International Conference on Image Processing, ICIP 2006, Atlanta George, Oct 2006, 1437
* Wiaux et al. (2009a) Wiaux, Y., Jacques, L., Puy, G., Scaife, A., & Vandergheynst, P. 2009a, Monthly Notices of the Royal Astronomical Society, 395, 1733
* Wiaux et al. (2009b) Wiaux, Y., Jacques, L., Puy, G., Scaife, A. M. M., & Vandergheynst, P. 2009b, Monthly Notices of the Royal Astronomical Society, 395, 1733
|
arxiv-papers
| 2011-06-09T06:06:44 |
2024-09-04T02:49:19.493738
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Feng Li, Shea Brown, Tim J. Cornwell, Frank de Hoog",
"submitter": "Feng Li",
"url": "https://arxiv.org/abs/1106.1709"
}
|
1106.1736
|
# dustybox and dustywave: Two test problems for numerical simulations of two
fluid astrophysical dust-gas mixtures
Guillaume Laibe, Daniel J. Price
Monash Centre for Astrophysics (MoCA) and School of Mathematical Sciences,
Monash University, Clayton, Vic 3800, Australia
###### Abstract
In this paper we present the analytic solutions for two test problems
involving two-fluid mixtures of dust and gas in an astrophysical context. The
solutions provide a means of benchmarking numerical codes designed to simulate
the non-linear dynamics of dusty gas. The first problem, dustybox, consists of
two interpenetrating homogeneous fluids moving with a relative velocity
difference. We provide exact solutions to the full non-linear problem for a
range of drag formulations appropriate to astrophysical fluids (i.e., various
prescriptions for Epstein and Stokes drag in different regimes). The second
problem, dustywave, consists of the propagation of linear acoustic waves in a
two-fluid gas-dust mixture. We provide the analytic solution for the case when
the two fluids are interacting via a linear drag term. Both test problems are
simple to set up in any numerical code and can be run with periodic boundary
conditions. The solutions we derive are completely general with respect to
both the dust-to-gas ratio and the amplitude of the drag coefficient. A
stability analysis of waves in a gas-dust system is also presented, showing
that sound waves in an astrophysical dust-gas mixture are linearly stable.
###### keywords:
hydrodynamics — methods: analytical — methods: numerical — waves — ISM: dust,
extinction
††pagerange: dustybox and dustywave: Two test problems for numerical
simulations of two fluid astrophysical dust-gas mixtures–B††pubyear: 2011
## 1 Introduction
Dust – from sub-micron-sized grains to centimetre-sized pebbles – is involved
in many astrophysical problems. In particular, it provides the
materials from which the solid cores required for the planet formation process
are built (see e.g. Chiang & Youdin, 2010). Dust grains are also the main
sources of the opacities in star-forming molecular clouds, thus controlling
the thermodynamics. Furthermore, observations are mostly sensitive to the dust
– rather than the gas – emission. With the advent of the _Spitzer_ and
_Herschel_ space telescopes our observational knowledge of dust at different
wavelengths in young stellar and planetary objects has improved substantially.
Millimetre and sub-millimetre observations will similarly be vastly improved
with the arrival of _ALMA_ that will achieve a spatial resolution
$<0.1\arcsec$ at millimetre wavelengths (Turner & Wootten, 2007).
Consequently, numerical simulations of astrophysical dust-gas mixtures are
essential to improve our understanding of the systems we will be able to
observe. A dust-gas mixture is usually treated using a continuous two-fluid
description, and a large class of numerical solvers have been developed. In an
astrophysical context, two types of methods are generally adopted: grid-based
codes (e.g. Fromang & Papaloizou, 2006; Paardekooper & Mellema, 2006; Johansen
et al., 2007; Miniati, 2010) or particle-based Smoothed Particle Hydrodynamics
(SPH) codes (e.g. Monaghan, 1997; Maddison et al., 2003; Barrière-Fouchet et
al., 2005).
However, even with a continuous description of the mixture, the equations
remain too complicated to be solved analytically for most problems, which
presents a major difficulty for benchmarking numerical codes. Currently, the
only known analytic solution in use is the solution for two interpenetrating
homogeneous flows, given, e.g., by Monaghan & Kocharyan (1995) and Miniati
(2010) for a linear drag regime and extended to one particular non-linear
regime by Paardekooper & Mellema (2006). Knowing the analytic solution even
for this simple case allows a precise benchmark of the various drag
prescriptions that are appropriate in different astrophysical environments
(e.g., Baines et al. 1965). On the other hand, no usable analytic solution
exists for the propagation of waves in a dust-gas mixture in a regime relevant
to astrophysics, despite the rich literature on the topic in the many other
areas where dust-gas mixtures are of interest (for example in aerosols,
emulsions or even bubbly gases, c.f. Marble 1970; Ahuja 1973; Gumerov et al.
1988; Temkin 1998). Such solutions are of great interest because 1) they
constitute a demanding test of a code's accuracy, since small perturbations
are easily swamped by numerical noise, and 2) such waves have to be simulated
correctly because they often appear in physical simulations. In the absence of
such a solution,
astrophysical codes (e.g. Youdin & Johansen, 2007; Miniati, 2010; Bai & Stone,
2010) have generally been validated against the linear growth rates for the
streaming instability (Youdin & Goodman, 2005). Such a test problem is by
definition limited to checking the growth rate of a given mode rather than
validating against a full analytic solution. Another approach has been to
study numerical solutions for dusty-gas shock tubes (Miura & Glass, 1982;
Paardekooper & Mellema, 2006), where approximate solutions can be derived
(Miura & Glass, 1982) but again no complete analytic solution exists.
In this paper we present the full analytic solutions for two specific problems
concerning two-fluid gas and dust mixtures in astrophysics. The first,
dustybox (Sec. 2), is an extension of the interpenetrating flow solutions
discussed above to the main drag regimes relevant to astrophysical dusty gases
(i.e., Epstein and Stokes drag at different Reynolds and Mach numbers). The
second, dustywave (Sec. 3), is the solution for linear waves in a dust-gas
mixture, assuming a linear drag regime.
Our aim is that these solutions will be utilised as standard tests for
benchmarking numerical codes designed to simulate dusty gas in astrophysics.
While it is beyond the scope of this paper to benchmark a particular code
using the two tests, the solutions we have derived were developed precisely
for this purpose (for a new two-fluid SPH code that we are developing) and
will be used to do so in a subsequent paper.
## 2 dustybox: Two interpenetrating fluids
The first test problem, dustybox, consists of two fluids with uniform
densities $\rho_{\mathrm{g}}$ and $\rho_{\mathrm{d}}$ given a constant initial
differential velocity ($\Delta{\bf v}_{\mathrm{0}}={\bf v}_{\rm g,0}-{\bf
v}_{\rm d,0}$). We assume that the gas pressure $P$ remains constant. This
test is perhaps the simplest two-fluid problem that can be set up, for example
by setting up two uniform fluids in a periodic box with opposite initial
velocities. Thus it is a straightforward test to perform in any numerical
code. Similar tests have been considered by Monaghan & Kocharyan (1995) for a
single grain with a linear drag coefficient and by Paardekooper & Mellema
(2006) for one particular non-linear drag regime. However, the simplicity of
this test means that it can be used to test the correct implementation in a
numerical code of both linear and non-linear drag regimes relevant to
astrophysics, for which we provide the full range of solutions.
### 2.1 Equations of motion
The simplified equations of motion are given by:
$\displaystyle\rho_{\mathrm{g}}\frac{\mathrm{d}{\bf
v}_{\mathrm{g}}}{\mathrm{d}t}$ $\displaystyle=$ $\displaystyle-
Kf\left(\Delta{\bf v}\right)\left({\bf v}_{\mathrm{g}}-{\bf
v}_{\mathrm{d}}\right),$ (1)
$\displaystyle\rho_{\mathrm{d}}\frac{\mathrm{d}{\bf
v}_{\mathrm{d}}}{\mathrm{d}t}$ $\displaystyle=$ $\displaystyle
Kf\left(\Delta{\bf v}\right)\left({\bf v}_{\mathrm{g}}-{\bf
v}_{\mathrm{d}}\right),$ (2)
where momentum is exchanged between the two phases via the drag term ($K$
being an arbitrary drag coefficient) and the function $f(\Delta{\bf v})$
specifies any non-linear functional dependence of the drag term on the
differential velocity (i.e. $f=1$ in a linear drag regime). In formulating
(1)-(2) it has been assumed that the effect of collisions between the dust
particles is negligible (i.e., no dust pressure or viscosity); that the dust
phase occupies a negligibly small fraction of the volume (i.e., zero volume
fraction: the estimated volume fraction is $\sim 10^{-12}$ in planet forming
systems); that the gas is inviscid; the two phases are in thermal equilibrium
and that the only way for the two phases to exchange momentum comes from the
drag term (that is, additional terms due to carried mass, Basset and Saffman
forces have been neglected).
### 2.2 Analytic solutions
Defining the barycentric velocity according to
${\bf v}^{*}=\displaystyle\frac{\rho_{\mathrm{g}}{\bf v}_{\rm
g,0}+\rho_{\mathrm{d}}{\bf v}_{\rm
d,0}}{\rho_{\mathrm{g}}+\rho_{\mathrm{d}}},$ (3)
and adding (1) and (2) shows that the solutions to this equation set are of
the form:
$\displaystyle{\bf v}_{\mathrm{g}}\left(t\right)$ $\displaystyle=$
$\displaystyle\displaystyle{\bf
v}^{*}+\frac{\rho_{\mathrm{d}}}{\rho_{\mathrm{g}}+\rho_{\mathrm{d}}}\Delta{\bf
v}\left(t\right),$ (4) $\displaystyle{\bf v}_{\mathrm{d}}\left(t\right)$
$\displaystyle=$ $\displaystyle\displaystyle{\bf
v}^{*}-\frac{\rho_{\mathrm{g}}}{\rho_{\mathrm{g}}+\rho_{\mathrm{d}}}\Delta{\bf
v}\left(t\right).$ (5)
The evolution of the differential velocity $\Delta{\bf v}\left(t\right)$
depends on the drag regime. If the initial velocities of the two fluids have
the same direction (say $x$), Eqs. (4)–(5) reduce to two coupled scalar
equations:
$\displaystyle v_{\mathrm{g},x}\left(t\right)$ $\displaystyle=$
$\displaystyle\displaystyle
v^{*}_{x}+\frac{\rho_{\mathrm{d}}}{\rho_{\mathrm{g}}+\rho_{\mathrm{d}}}\Delta
v_{x}\left(t\right),$ (6) $\displaystyle v_{\mathrm{d},x}\left(t\right)$
$\displaystyle=$ $\displaystyle\displaystyle
v^{*}_{x}-\frac{\rho_{\mathrm{g}}}{\rho_{\mathrm{g}}+\rho_{\mathrm{d}}}\Delta
v_{x}\left(t\right),$ (7)
where $\Delta v_{x}$ is given by the differential equation
$\frac{\mathrm{d}\Delta
v_{x}}{\mathrm{d}t}=-K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)f\left(\Delta
v_{x}\right)\Delta v_{x}.$ (8)
Drag type | $f$ | $\Delta v_{x}\left(t\right)$
---|---|---
Linear | $1$ | $\Delta v_{x,\mathrm{0}}e^{-K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t}$
Quadratic | $\left|\Delta v_{x}\right|$ | $\displaystyle\frac{\Delta v_{x,\mathrm{0}}}{1+\epsilon\Delta v_{x,\mathrm{0}}K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t}$
Power-law | $\left|\Delta v_{x}\right|^{a}$ | $\displaystyle\frac{\Delta v_{x,\mathrm{0}}}{\left(1+a\left(\epsilon\Delta v_{x,\mathrm{0}}\right)^{a}K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t\right)^{\frac{1}{a}}}$
Third-order expansion | $1+a_{3}\Delta v_{x}^{2}$, $a_{3}>0$ | $\displaystyle\frac{\Delta v_{x,\mathrm{0}}e^{-K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t}}{\sqrt{1+a_{3}\Delta v_{x,\mathrm{0}}^{2}\left(1-e^{-2K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t}\right)}}$
Mixed | $\sqrt{1+a_{2}\Delta v_{x}^{2}}$, $a_{2}>0$ | $\displaystyle\frac{\epsilon}{\sqrt{a_{2}}}\sqrt{\left(\frac{\sinh\left(K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t\right)+\sqrt{1+a_{2}\Delta v_{x,\mathrm{0}}^{2}}\cosh\left(K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t\right)}{\cosh\left(K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t\right)+\sqrt{1+a_{2}\Delta v_{x,\mathrm{0}}^{2}}\sinh\left(K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t\right)}\right)^{2}-1}$
Table 1: Expressions of $\Delta{\bf v}\left(t\right)$ for several drag regimes
$f$ in a solid-gas mixture where the two phases have initially two different
velocities. The pressure and densities of the medium are constant and the
volume of the dust particles is neglected. $\epsilon=+1$ if $\Delta
v_{x,\mathrm{0}}>0$ and $\epsilon=-1$ if $\Delta v_{x,\mathrm{0}}<0$.
Figure 1: Examples of the analytic solutions for the decay of the dust velocity in
the dustybox test, assuming a dust-gas mixture with $\rho_{\mathrm{g}}=1$,
$\rho_{\mathrm{d}}=0.01$, $v_{\mathrm{d},0}=1$, $v_{\mathrm{g},0}=0$ and $K=1$
for the linear, quadratic, power-law (with $a=0.4$), third order expansion
(with $a_{3}=0.5$) and the mixed (with $a_{2}=5$) drag regimes.
The analytic expression for $\Delta v_{x}\left(t\right)$ in five drag regimes
$f(\Delta v)$ relevant to astrophysics in this particular configuration are
given in Table 1. The linear solution (top row) holds for Epstein drag at low
Mach number and Stokes drag at low Reynolds number. A quadratic relation
(second row) is relevant for Epstein drag at high Mach number and Stokes drag
at large Reynolds numbers. Power-law drag occurs for Stokes drag at
intermediate Reynolds numbers (in which case the exponent is given by
$a=0.4$). The third order expansion has been proposed for Epstein drag at
intermediate Mach numbers (Baines et al., 1965). The mixed drag regime (bottom
row) connects the linear and quadratic regimes for Epstein drag, used recently
by Paardekooper & Mellema (2006).
Finally, it should be noted that the stability of the dustybox problem, though
likely, has only been verified numerically. Proving stability with full
generality is a difficult problem due to the non-zero mean velocities for each
fluid — producing a dispersion relation that is a quadratic equation with
complex coefficients. However, it can be shown that the solution is stable for
particular choices of $K$, $v_{0}$, $c_{s}$ and $\rho_{0}$.
### 2.3 dustybox example
As an example, the standard linear Epstein drag regime (Baines et al., 1965)
would correspond to $K=\rho_{g}c_{s}/(\rho_{int}s)$ (where $c_{s}$ is the
sound speed, $\rho_{int}$ is the intrinsic density of the dust grains and $s$
is the grain size) and $f=1$. Thus, using the solution from Table 1, we would
obtain the complete expression for the velocity in each phase according to
$\displaystyle v_{\mathrm{g},x}\left(t\right)$ $\displaystyle=$
$\displaystyle\displaystyle
v^{*}_{x}+\frac{\rho_{\mathrm{d}}}{\rho_{\mathrm{g}}+\rho_{\mathrm{d}}}\Delta
v_{x,\mathrm{0}}e^{-K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t},$
(9) $\displaystyle v_{\mathrm{d},x}\left(t\right)$ $\displaystyle=$
$\displaystyle\displaystyle
v^{*}_{x}-\frac{\rho_{\mathrm{g}}}{\rho_{\mathrm{g}}+\rho_{\mathrm{d}}}\Delta
v_{x,\mathrm{0}}e^{-K\left(\frac{1}{\rho_{\mathrm{g}}}+\frac{1}{\rho_{\mathrm{d}}}\right)t}.$
(10)
Examples of the solutions for the decay of the dust velocity in the 5
different drag regimes for a typical astrophysical dust-gas mixture (i.e.,
$1\%$ dust-to-gas ratio) characterised by $\rho_{\mathrm{g}}=1$,
$\rho_{\mathrm{d}}=0.01$, $v_{\mathrm{d},0}=1$, $v_{\mathrm{g},0}=0$ and
assuming $K=1$ are shown in Fig. 1. It may be observed, for example, that for
this particular choice of parameters the quadratic and power law drag regimes
(which would correspond to using a Stokes instead of an Epstein drag
prescription in an accretion disc calculation) give less efficient relaxation
of the dust phase to the barycentric velocity.
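For concreteness, a minimal Python sketch (not part of the paper) of how the linear-drag solution, Eqs. (9)-(10), can be evaluated for the parameters of Fig. 1, together with a cross-check against a direct numerical integration of Eq. (8):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters quoted for Fig. 1: 1% dust-to-gas ratio, unit drag coefficient.
rho_g, rho_d, K = 1.0, 0.01, 1.0
v_g0, v_d0 = 0.0, 1.0
nu = K * (1.0 / rho_g + 1.0 / rho_d)                       # combined drag rate
v_star = (rho_g * v_g0 + rho_d * v_d0) / (rho_g + rho_d)   # barycentric velocity, Eq. (3)
dv0 = v_g0 - v_d0                                          # Delta v_x(0)

t = np.linspace(0.0, 2.0, 201)

# Analytic linear-drag solution, Eqs. (9)-(10) (Table 1, top row).
dv_lin = dv0 * np.exp(-nu * t)
v_g = v_star + rho_d / (rho_g + rho_d) * dv_lin
v_d = v_star - rho_g / (rho_g + rho_d) * dv_lin

# Cross-check: integrate Eq. (8) directly for the same linear drag law, f = 1.
sol = solve_ivp(lambda tt, dv: -nu * dv, (t[0], t[-1]), [dv0], t_eval=t, rtol=1e-10)
print("max |analytic - numerical| =", np.max(np.abs(dv_lin - sol.y[0])))
```

The other rows of Table 1 can be checked in the same way by replacing the drag law $f(\Delta v)$ in the integrated equation and the corresponding closed-form expression.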
## 3 dustywave: Sound waves in a dust-gas mixture
The second test problem, dustywave, consists of linear sound waves propagating
in a uniform density two-fluid (dust-gas) medium with a linear drag term.
Similar to the first problem, this test can easily be performed in 1, 2 or 3
dimensions in any numerical code using periodic boundary conditions. As
previously, we provide analytic solutions for an arbitrary linear drag
coefficient and/or dust-to-gas ratio.
The setup consists of a sound wave propagating in the $x$-direction. We
introduce the coefficient $c_{\mathrm{s}}$, so that a small perturbation in
the gas density $\delta\rho_{\mathrm{g}}$ is related to a small perturbation
in the gas pressure $\delta P$ by the relation $\delta
P=c_{\mathrm{s}}^{2}\delta\rho_{\mathrm{g}}$. $c_{\mathrm{s}}$ is thus the
sound speed of the gas phase if no dust were present.
### 3.1 Equations of motion
For this system, the equations of motion are:
$\displaystyle\displaystyle\rho_{\mathrm{g}}\left(\frac{\partial
v_{\mathrm{g}}}{\partial t}+v_{\mathrm{g}}\frac{\partial
v_{\mathrm{g}}}{\partial x}\right)$ $\displaystyle=$
$\displaystyle\displaystyle-K\left(v_{\mathrm{g}}-v_{\mathrm{d}}\right)-\frac{\partial
P}{\partial x},$ (11)
$\displaystyle\displaystyle\rho_{\mathrm{d}}\left(\frac{\partial
v_{\mathrm{d}}}{\partial t}+v_{\mathrm{d}}\frac{\partial
v_{\mathrm{d}}}{\partial x}\right)$ $\displaystyle=$
$\displaystyle\displaystyle+K\left(v_{\mathrm{g}}-v_{\mathrm{d}}\right),$ (12)
$\displaystyle\displaystyle\frac{\partial\rho_{\mathrm{g}}}{\partial
t}+\frac{\partial\rho_{\mathrm{g}}v_{\mathrm{g}}}{\partial x}$
$\displaystyle=$ $\displaystyle 0,$ (13)
$\displaystyle\displaystyle\frac{\partial\rho_{\mathrm{d}}}{\partial
t}+\frac{\partial\rho_{\mathrm{d}}v_{\mathrm{d}}}{\partial x}$
$\displaystyle=$ $\displaystyle 0.$ (14)
The assumptions made in obtaining these equations are identical to those
discussed in Sec. 2, except for the additional term due to the gas pressure
gradient.
### 3.2 Linear expansion
We assume that the equilibrium velocities and densities of the fluid mixture
are given by: $v_{\mathrm{g}}=v_{\mathrm{d}}=0$,
$\rho_{\mathrm{g}}=\rho_{\mathrm{g},0}$ and
$\rho_{\mathrm{d}}=\rho_{\mathrm{d},0}$. We then consider small perturbations
and perform an acoustic linear expansion of Eqs. (11)–(14):
$\displaystyle\displaystyle\rho_{\mathrm{g},0}\frac{\partial
v_{\mathrm{g}}}{\partial t}$ $\displaystyle=$
$\displaystyle\displaystyle-K\left(v_{\mathrm{g}}-v_{\mathrm{d}}\right)-c_{\mathrm{s}}^{2}\frac{\partial\delta\rho_{\mathrm{g}}}{\partial
x},$ (15) $\displaystyle\displaystyle\rho_{\mathrm{d},0}\frac{\partial
v_{\mathrm{d}}}{\partial t}$ $\displaystyle=$
$\displaystyle\displaystyle+K\left(v_{\mathrm{g}}-v_{\mathrm{d}}\right),$ (16)
$\displaystyle\displaystyle\frac{\partial\delta\rho_{\mathrm{g}}}{\partial
t}+\rho_{\mathrm{g},0}\frac{\partial v_{\mathrm{g}}}{\partial x}$
$\displaystyle=$ $\displaystyle 0,$ (17)
$\displaystyle\displaystyle\frac{\partial\delta\rho_{\mathrm{d}}}{\partial
t}+\rho_{\mathrm{d},0}\frac{\partial v_{\mathrm{d}}}{\partial x}$
$\displaystyle=$ $\displaystyle 0.$ (18)
As this system is linear, we search for solutions in the form of
monochromatic plane waves. The total solution is a linear combination of those
monochromatic plane waves whose coefficients are fixed by the initial
conditions. The perturbations are assumed to be of the general form
$\displaystyle v_{\mathrm{g}}$ $\displaystyle=$ $\displaystyle V_{\rm
g}e^{i(kx-\omega t)},$ (19) $\displaystyle v_{\mathrm{d}}$ $\displaystyle=$
$\displaystyle V_{\rm d}e^{i(kx-\omega t)},$ (20)
$\displaystyle\delta\rho_{\mathrm{g}}$ $\displaystyle=$ $\displaystyle D_{\rm
g}e^{i(kx-\omega t)},$ (21) $\displaystyle\delta\rho_{\mathrm{d}}$
$\displaystyle=$ $\displaystyle D_{\rm d}e^{i(kx-\omega t)},$ (22)
where in general the perturbation amplitudes $V_{\rm g}$, $V_{\rm d}$, $D_{\rm
g}$ and $D_{\rm d}$ are complex quantities.
### 3.3 Linear solutions
Using (19)–(22) in (15)–(18), we find that the resulting system admits non-
trivial solutions provided the following condition holds:
$\left|\begin{array}[]{cccc}\displaystyle-
i\omega+\frac{1}{t_{\mathrm{g}}}&\displaystyle-\frac{1}{t_{\mathrm{g}}}&\displaystyle+\frac{ikc_{\mathrm{s}}^{2}}{\rho_{\mathrm{g},0}}&0\\\\[10.00002pt]
\displaystyle-\frac{1}{t_{\mathrm{d}}}&\displaystyle-i\omega+\frac{1}{t_{\mathrm{d}}}&0&0\\\\[10.00002pt]
ik\rho_{\mathrm{g},0}&0&-i\omega&0\\\\[10.00002pt]
0&ik\rho_{\mathrm{d},0}&0&-i\omega\end{array}\right|=0,$ (23)
where we have set $t_{\mathrm{g}}=\displaystyle\frac{\rho_{\mathrm{g},0}}{K}$
and $t_{\mathrm{d}}=\displaystyle\frac{\rho_{\mathrm{d},0}}{K}$. This
condition provides the dispersion relation of the system,
$\omega^{3}+i\omega^{2}\left(\frac{1}{t_{\mathrm{g}}}+\frac{1}{t_{\mathrm{d}}}\right)-k^{2}c_{\mathrm{s}}^{2}\omega-i\frac{k^{2}c_{\mathrm{s}}^{2}}{t_{\mathrm{d}}}=0.$
(24)
This cubic equation admits three complex roots $\omega_{n=1,2,3}$ whose
imaginary parts are always negative, ensuring the linear stability of the
system (see Appendix A). This implies that the full solution of the problem
consists of a linear combination of three independent modes that will take the
form of exponentially decaying monochromatic waves. For example, for the
gas velocity the solution will be given by
$\displaystyle v_{\mathrm{g}}(x,t)$ $\displaystyle=$ $\displaystyle
e^{\omega_{1\mathrm{i}}t}\left[V_{1\mathrm{g,r}}\cos\left(kx-\omega_{1\mathrm{r}}t\right)-V_{1\mathrm{g,i}}\sin\left(kx-\omega_{1\mathrm{r}}t\right)\right],$
$\displaystyle+$ $\displaystyle
e^{\omega_{2\mathrm{i}}t}\left[V_{2\mathrm{g,r}}\cos\left(kx-\omega_{2\mathrm{r}}t\right)-V_{2\mathrm{g,i}}\sin\left(kx-\omega_{2\mathrm{r}}t\right)\right],$
$\displaystyle+$ $\displaystyle
e^{\omega_{3\mathrm{i}}t}\left[V_{3\mathrm{g,r}}\cos\left(kx-\omega_{3\mathrm{r}}t\right)-V_{3\mathrm{g,i}}\sin\left(kx-\omega_{3\mathrm{r}}t\right)\right].$
where the subscripts $r$ and $i$ refer to the real and imaginary parts of the
complex variables $\omega_{1,2,3}$ and $V_{g,1,2,3}$. The solutions for
$v_{\mathrm{d}}$, $\delta\rho_{\mathrm{g}}$ and $\delta\rho_{\mathrm{d}}$ are
of the same form, with the amplitudes replaced by the real and imaginary parts
of $V_{\rm d}$, $D_{\rm g}$ and $D_{\rm d}$, respectively.
### 3.4 Solving for the coefficients
Obtaining the full analytic solution thus requires two steps:
1. 1.
Solving the cubic equation (24) to determine the complex variables
$\omega_{1}$, $\omega_{2}$ and $\omega_{3}$ for the 3 modes; and
2. 2.
Solving for the 24 coefficients determining the amplitudes $V_{\rm g}$,
$V_{\rm d}$, $D_{\rm g}$ and $D_{\rm d}$ for each of the 3 modes.
Step i) can be achieved straightforwardly using the known analytic solutions
for a cubic equation (given for completeness in Appendix B). Since in general
such solutions require a cubic equation with real coefficients, it is
convenient to solve for the variable $\omega=-iy$, which reduces Eq. (24) to
the form
$y^{3}-y^{2}\left(\frac{1}{t_{\mathrm{g}}}+\frac{1}{t_{\mathrm{d}}}\right)+k^{2}c_{\mathrm{s}}^{2}y-\frac{k^{2}c_{\mathrm{s}}^{2}}{t_{\mathrm{d}}}=0,$
(26)
which has purely real coefficients, as required.
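As a quick numerical illustration (not from the paper), the three roots can also be obtained with a generic polynomial solver; the parameter values below are illustrative (unit densities and $K=1$, as in Fig. 2, with an assumed wavenumber $k=2\pi$):

```python
import numpy as np

rho_g0, rho_d0, K, c_s, k = 1.0, 1.0, 1.0, 1.0, 2.0 * np.pi
t_g, t_d = rho_g0 / K, rho_d0 / K

# Cubic in y, Eq. (26): y^3 - (1/t_g + 1/t_d) y^2 + k^2 c_s^2 y - k^2 c_s^2 / t_d = 0
coeffs = [1.0, -(1.0 / t_g + 1.0 / t_d), (k * c_s) ** 2, -(k * c_s) ** 2 / t_d]
y = np.roots(coeffs)
omega = -1j * y                    # back-substitute omega = -i y

print("omega_n =", omega)
print("all modes decaying:", np.all(omega.imag < 0))
```

All three roots should have negative imaginary part, in agreement with the stability result of Appendix A.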
Step ii) is less straightforward and consists of two substeps. The first
substep is to constrain the amplitude coefficients using the 8 constraints
given by the initial conditions (i.e., the phase and amplitude of the initial
mode in the numerical simulation, which constrain both the real and imaginary
parts of the initial amplitudes). Although the solution can in principle be
found for any given combination of initial perturbations to $v$ and $\rho$ for
the two phases, the solutions we provide assume initial conditions of the form
$\displaystyle v_{\mathrm{g}}(x,0)$ $\displaystyle=$ $\displaystyle v_{\rm
g,0}\sin(kx),$ (27) $\displaystyle v_{\mathrm{d}}(x,0)$ $\displaystyle=$
$\displaystyle v_{\rm d,0}\sin(kx),$ (28)
$\displaystyle\rho_{\mathrm{g}}(x,0)$ $\displaystyle=$ $\displaystyle\rho_{\rm
g,0}+\delta\rho_{\rm g,0}\sin(kx),$ (29) $\displaystyle\rho_{\mathrm{d}}(x,0)$
$\displaystyle=$ $\displaystyle\rho_{\rm d,0}+\delta\rho_{\rm d,0}\sin(kx),$
(30)
giving the 8 constraints
$\displaystyle V_{\rm 1g,r}+V_{\rm 2g,r}+V_{\rm 3g,r}$ $\displaystyle=$
$\displaystyle 0,$ (31) $\displaystyle V_{\rm 1g,i}+V_{\rm 2g,i}+V_{\rm 3g,i}$
$\displaystyle=$ $\displaystyle-v_{\rm g,0},$ (32) $\displaystyle V_{\rm
1d,r}+V_{\rm 2d,r}+V_{\rm 3d,r}$ $\displaystyle=$ $\displaystyle 0,$ (33)
$\displaystyle V_{\rm 1d,i}+V_{\rm 2d,i}+V_{\rm 3d,i}$ $\displaystyle=$
$\displaystyle-v_{\rm d,0},$ (34) $\displaystyle D_{\rm 1g,r}+D_{\rm
2g,r}+D_{\rm 3g,r}$ $\displaystyle=$ $\displaystyle 0,$ (35) $\displaystyle
D_{\rm 1g,i}+D_{\rm 2g,i}+D_{\rm 3g,i}$ $\displaystyle=$
$\displaystyle-\delta\rho_{\rm g,0},$ (36) $\displaystyle D_{\rm 1d,r}+D_{\rm
2d,r}+D_{\rm 3d,r}$ $\displaystyle=$ $\displaystyle 0,$ (37) $\displaystyle
D_{\rm 1d,i}+D_{\rm 2d,i}+D_{\rm 3d,i}$ $\displaystyle=$
$\displaystyle-\delta\rho_{\rm d,0}.$ (38)
The second substep is to determine the remaining 16 coefficients by
substituting each of the expressions for the perturbations (the expression for $v_{\mathrm{g}}$ given above
and the equivalents for $v_{\rm d}$, $\delta\rho_{\mathrm{g}}$ and
$\delta\rho_{\mathrm{d}}$) and the 8 constraints (31)–(38) into the evolution
equations (15)–(18). The remaining analysis is straightforward but laborious,
hence we perform this step using the computer algebra system maple. The
resulting expressions for the 24 coefficients may be easily obtained in this
manner, however their expressions are too lengthy to be usefully transcribed
in this paper. Instead, we provide, for practical use, both the maple
worksheet and a Fortran (90) routine (available as supplementary files
accompanying the arXiv.org version of this paper) that evaluates the analytic
expressions for the coefficients (produced via an automated translation of the
maple output). The Fortran routine has been used to compute the example
solutions shown in Figures 2 and 3. Note that, although the initial conditions
are constrained to be of the form (27)–(30), the solutions provided are
completely general with respect to both the amplitude of the drag coefficient
and the dust-to-gas ratio. The examples we show employ a dust-to-gas ratio and
drag coefficients that are typically relevant during the planet formation
process. For this test it should be kept in mind that the solutions assume
linearity of the wave amplitudes and do not therefore predict possible non-
linear evolution of the system — for example the potential for mode
splitting/merging or self-modulation, effects that are known to occur in
multi-fluid systems.
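As a rough numerical alternative to the maple/Fortran route (a sketch only, not the authors' implementation), the amplitudes can also be obtained directly: for each root $\omega_{n}$, Eqs. (16) and (17) give $V_{\rm d}=V_{\rm g}/(1-i\omega_{n}t_{\mathrm{d}})$ and $D_{\rm g}=k\rho_{\mathrm{g},0}V_{\rm g}/\omega_{n}$, so the constraints on $v_{\rm g}$, $v_{\rm d}$ and $\delta\rho_{\rm g}$ form a small complex linear system for the three $V_{n,\rm g}$. The wavenumber and initial amplitudes below are placeholders, and the dust-density constraint, Eq. (38), is not imposed in this simplified sketch.

```python
import numpy as np

rho_g0, rho_d0, K, c_s, k = 1.0, 1.0, 1.0, 1.0, 2.0 * np.pi
v_g0, v_d0, drho_g0 = 1e-4, 1e-4, 1e-4
t_g, t_d = rho_g0 / K, rho_d0 / K

# Three decaying modes from the dispersion relation (via Eq. 26 and omega = -i y).
y = np.roots([1.0, -(1.0 / t_g + 1.0 / t_d), (k * c_s) ** 2, -(k * c_s) ** 2 / t_d])
omega = -1j * y

# Per-mode amplitude ratios from Eqs. (16)-(17).
vd_over_vg = 1.0 / (1.0 - 1j * omega * t_d)
dg_over_vg = k * rho_g0 / omega

# Initial-condition constraints (31)-(36): sum_n V_g = -i v_g0, etc.
A = np.array([np.ones(3), vd_over_vg, dg_over_vg])
b = np.array([-1j * v_g0, -1j * v_d0, -1j * drho_g0])
Vg = np.linalg.solve(A, b)

def v_gas(x, t):
    """Gas velocity as the real part of the sum of three decaying monochromatic modes."""
    modes = Vg[None, :] * np.exp(1j * (k * np.atleast_1d(x)[:, None] - omega[None, :] * t))
    return modes.sum(axis=1).real

print(v_gas(np.linspace(0.0, 1.0, 5), 0.0))   # reproduces v_g0*sin(k x) at t = 0
```

The remaining fields ($v_{\rm d}$, $\delta\rho_{\rm g}$, $\delta\rho_{\rm d}$) follow in the same way from their per-mode amplitude ratios.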
Figure 2: Example analytic solution of the dustywave test showing the
propagation of a sound wave in a periodic domain in a two-fluid gas-dust
mixture. The panels show the time evolution (top to bottom, time as
indicated) of the velocity in the gas (solid/black) and dust (dashed/red)
respectively assuming $\rho_{\mathrm{g}}=\rho_{\mathrm{d}}=1$ (i.e., a gas-to-
dust ratio of unity) and a drag coefficient $K=1$ giving a characteristic
stopping time of $t_{s}=1/2$. The solution with $K=1$ shows efficient damping
of the initial perturbation in both fluids.
Figure 3: Further examples of analytic solutions to the dustywave test, as in
Fig. 2 but showing the solution at $t=5$ for a range of drag coefficients
$K=[0.01,0.1,1,10,100]$
(top to bottom, as indicated in the legend). At low $K$ the waves are
essentially decoupled in the two fluids (top panel), while an intermediate
drag coefficient produces the most efficient damping (middle panels). At large
$K$ (bottom panels), although the differential velocities are quickly damped
the overall amplitudes decrease more slowly since the waves tend to move
together.
### 3.5 dustywave examples
Figure 2 shows a typical time evolution of the gas and dust velocities
(solid/black and red/dashed lines, respectively) assuming a dust-to-gas ratio
of unity (as would occur during the late stages of the planet formation
process) and a drag coefficient $K=1$, giving a characteristic stopping time
of $t_{s}=1/[K(1/\rho_{\mathrm{g}}+1/\rho_{\mathrm{d}})]=1/2$. At $t=1$
(bottom panel) it may be observed that the differential velocity between the
two phases has been efficiently damped by the mutual drag between the two
fluids.
Figure 3 demonstrates how the characteristics of the solution change as the
drag coefficient varies, again assuming a gas-to-dust ratio of unity, with the
solution shown at $t=5$. At low drag (top panel, $K=0.01$, e.g. for dust in
the interstellar medium) both the dust and gas evolve essentially
independently over the timescale shown. Thus, the solution in the gas is
simply a travelling wave with a sound speed close to the gas sound speed,
while the dust retains its initial velocity profile. As the drag coefficient
increases to unity (top three panels), the solution tends towards the
efficient coupling that occurs at $K\sim 1$ (for this gas-to-dust ratio) shown
in Fig. 2, which represents the “critical damping” solution where both the gas
and dust velocities relax to zero. As the damping is increased further (bottom
two panels) the damping of the differential velocities occurs in a fraction of
a period, implying that, although the differential velocity between the fluids
is quickly damped, the removal of kinetic energy is less efficient since the
two waves essentially evolve together, relaxing slowly — but in tandem — to
zero.
## 4 Conclusion
In this paper we have provided the analytic solutions to two problems
involving two-fluid astrophysical dust-gas mixtures in order to supply a
practical means of benchmarking numerical simulations of dusty gas dynamics.
The test problems are simple to set up for both particle and grid-based codes
and can be performed using periodic boundary conditions. A summary of the
setup for each problem is given in Table 2. It may be noted that both
solutions are completely general with respect to both the amplitude of the
drag coefficient and the dust-to-gas ratio. The subroutines for computing the
dustywave solution are provided as supplementary files to the version of this
paper posted on arXiv.org. The dustywave solution has also been incorporated
into the splash visualisation tool for SPH simulations (Price, 2007),
available at http://users.monash.edu.au/$\sim$dprice/splash/.
Test problem | Initial conditions | Boundary conds.
---|---|---
dustybox | ${\bf v}_{\mathrm{g}}=v_{0}\hat{\bf r}$
${\bf v}_{\mathrm{d}}=-v_{0}\hat{\bf r}$
$\rho_{\mathrm{g}}=\rho_{\rm g,0}$
$\rho_{\mathrm{d}}=\rho_{\rm d,0}$
$c_{\rm s,0}={\rm const}$ | Periodic
dustywave | ${\bf v}_{\mathrm{g}}=v_{\rm g,0}\sin({\bf k}\cdot{\bf r})\hat{\bf r}$
${\bf v}_{\mathrm{d}}=v_{\rm d,0}\sin({\bf k}\cdot{\bf r})\hat{\bf r}$
$\rho_{\mathrm{g}}=\rho_{\rm g,0}+\delta\rho_{\rm g,0}\sin({\bf k}\cdot{\bf
r})$
$\rho_{\mathrm{d}}=\rho_{\rm d,0}+\delta\rho_{\rm d,0}\sin({\bf k}\cdot{\bf
r})$
$c_{\rm s,0}={\rm const}$ | Periodic
Table 2: Summary of the setup for each of the test problems.
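A minimal sketch (illustrative only; the box size, resolution and perturbation amplitudes are placeholders) of how the dustywave initial conditions of Table 2 might be laid out on a uniform periodic grid:

```python
import numpy as np

nx, Lbox = 256, 1.0
k = 2.0 * np.pi / Lbox                     # one wavelength across the periodic box
x = np.linspace(0.0, Lbox, nx, endpoint=False)

rho_g0, rho_d0 = 1.0, 1.0
v_g0, v_d0, drho_g0, drho_d0 = 1e-4, 1e-4, 1e-4, 1e-4   # small (linear) amplitudes

v_g   = v_g0 * np.sin(k * x)               # gas velocity perturbation
v_d   = v_d0 * np.sin(k * x)               # dust velocity perturbation
rho_g = rho_g0 + drho_g0 * np.sin(k * x)   # gas density
rho_d = rho_d0 + drho_d0 * np.sin(k * x)   # dust density
```

An SPH code would typically represent the same density perturbations by perturbing particle positions rather than particle masses, but the functional form of the initial state is identical.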
## Acknowledgments
We thank Matthew Bate, Ben Ayliffe and Joe Monaghan for useful discussions. We
are grateful to the Australian Research Council for funding via Discovery
project grant DP1094585.
## References
* Ahuja (1973) Ahuja A. S., 1973, J. Appl. Phys., 44, 4863
* Bai & Stone (2010) Bai X.-N., Stone J. M., 2010, ApJS, 190, 297
* Baines et al. (1965) Baines M. J., Williams I. P., Asebiomo A. S., 1965, MNRAS, 130, 63
* Barrière-Fouchet et al. (2005) Barrière-Fouchet L., Gonzalez J.-F., Murray J. R., Humble R. J., Maddison S. T., 2005, A&A, 443, 185
* Chiang & Youdin (2010) Chiang E., Youdin A. N., 2010, Ann. Rev. Earth Plan. Sci., 38, 493
* Fromang & Papaloizou (2006) Fromang S., Papaloizou J., 2006, A&A, 452, 751
* Gumerov et al. (1988) Gumerov N. A., Ivandaev A. I., Nigmatulin R. I., 1988, J. Fluid Mech., 193, 53
* Johansen et al. (2007) Johansen A., Oishi J. S., Low M., Klahr H., Henning T., Youdin A., 2007, Nature, 448, 1022
* Maddison et al. (2003) Maddison S. T., Humble R. J., Murray J. R., 2003, in D. Deming & S. Seager ed., Scientific Frontiers in Research on Extrasolar Planets Vol. 294 of Astronomical Society of the Pacific Conference Series, Building Planets with Dusty Gas. pp 307–310
* Marble (1970) Marble F. E., 1970, Ann. Rev. Fluid Mech., 2, 397
* Miniati (2010) Miniati F., 2010, J. Comp. Phys., 229, 3916
* Miura & Glass (1982) Miura H., Glass I. I., 1982, Roy. Soc. Lon. Proc. Ser. A, 382, 373
* Monaghan (1997) Monaghan J., 1997, J. Comp. Phys., 138, 801
* Monaghan & Kocharyan (1995) Monaghan J. J., Kocharyan A., 1995, Comp. Phys. Commun., 87, 225
* Paardekooper & Mellema (2006) Paardekooper S., Mellema G., 2006, A&A, 459, L17
* Price (2007) Price D. J., 2007, PASA, 24, 159
* Temkin (1998) Temkin S., 1998, J. Acoustical Society of America, 103, 838
* Turner & Wootten (2007) Turner J. L., Wootten H. A., 2007, Highlights of Astronomy, 14, 520
* Youdin & Johansen (2007) Youdin A., Johansen A., 2007, ApJ, 662, 613
* Youdin & Goodman (2005) Youdin A. N., Goodman J., 2005, ApJ, 620, 459
## Appendix A Stability of linear waves in a dust-gas mixture
The roots of Eq. (26) can be three distinct real roots, or one real root and
two complex conjugate roots (plus all the degenerate cases). Writing
$y=y_{\mathrm{r}}+iy_{\mathrm{i}}$, we see, e.g. from the expression for
$v_{\mathrm{g}}(x,t)$ in Sec. 3.3, that if $y_{\mathrm{r}}<0$ (i.e.
$\omega_{\mathrm{i}}>0$), the solution diverges with time and the system is
unstable.
Physically, the evolution of the solid-gas mixture given by Eqs. (15)–(18) is
characterised by three intrinsic time scales: the time
$t_{\mathrm{p}}=\left(kc_{\mathrm{s}}\right)^{-1}$ required for the gas
pressure to equilibrate the gas phase, the time $t_{\mathrm{s}}$ (defined
according to $t_{\mathrm{s}}^{-1}=t_{\mathrm{g}}^{-1}+t_{\mathrm{d}}^{-1}$)
required by the drag to relax the centre of mass of the fluid (see Sec. 2) and
$t_{\mathrm{d}}$, the time required for the dust to force the gas evolution.
Thus, two ratios of these independent timescales are sufficient to fully
describe the evolution of the system. Defining the following dimensionless
quantities:
$\displaystyle Y$ $\displaystyle\equiv$ $\displaystyle\displaystyle
yt_{\mathrm{p}},$ (39) $\displaystyle\epsilon$ $\displaystyle\equiv$
$\displaystyle\displaystyle\frac{t_{\mathrm{p}}}{t_{\mathrm{s}}},$ (40)
$\displaystyle\epsilon_{\mathrm{d}}$ $\displaystyle\equiv$
$\displaystyle\displaystyle\frac{t_{\mathrm{p}}}{t_{\mathrm{d}}},$ (41)
giving Eq. (26) in the form
$P_{0}\left(Y\right)=Y^{3}-Y^{2}\epsilon+Y-\epsilon_{\mathrm{d}}=0.$ (42)
Determining the sign of the real part of the roots of Eq. (42) specifies
whether the system is unstable or not. For this purpose, we introduce:
$\displaystyle P$ $\displaystyle=$
$\displaystyle\displaystyle\frac{\epsilon^{2}}{9}-\frac{1}{3},$ (43)
$\displaystyle Q$ $\displaystyle=$
$\displaystyle\displaystyle\frac{\epsilon^{3}}{27}+\frac{1}{2}\left(\epsilon_{\mathrm{d}}-\frac{\epsilon}{3}\right),$
(44) $\displaystyle D$ $\displaystyle=$ $\displaystyle
P^{3}-Q^{2}=\displaystyle-\frac{\epsilon^{3}\epsilon_{\mathrm{d}}}{27}+\frac{\epsilon^{2}}{108}+\frac{\epsilon\epsilon_{\mathrm{d}}}{6}-\frac{\epsilon_{\mathrm{d}}^{2}}{4}-\frac{1}{27}.$
(45)
The roots of Eq. (42) are denoted $r_{-1}$, $r_{0}$ and $r_{+1}$. We have the
following three cases:
* •
Case(i): $P<0$,
* •
Case (ii): $P>0$ and $D<0$,
* •
Case (iii): $P>0$ and $D>0$,
In cases (i) and (ii) one root is real (denoted $\mu$) while the two remaining
roots are the complex conjugates of each other (denoted $\alpha\pm i\beta$).
Factorising Eq. 42 using the three roots gives the relations
$\displaystyle\mu+2\alpha$ $\displaystyle=$ $\displaystyle\epsilon,$ (46)
$\displaystyle 2\mu\alpha+\left(\alpha^{2}+\beta^{2}\right)$ $\displaystyle=$
$\displaystyle 1,$ (47) $\displaystyle\mu\left(\alpha^{2}+\beta^{2}\right)$
$\displaystyle=$ $\displaystyle\epsilon_{\mathrm{d}}.$ (48)
The last equation implies $\mu>0$. Combining (46)–(48) gives an equation for
$\alpha$ of the form
$f\left(\alpha\right)=\alpha^{3}-\epsilon\alpha^{2}+\frac{\alpha}{4}\left(\epsilon^{2}+1\right)-\frac{\left(\epsilon-\epsilon_{\mathrm{d}}\right)}{8}=0,$
(49)
which admits only one or three positive roots provided
$\epsilon-\epsilon_{\mathrm{d}}=\frac{t_{\mathrm{p}}}{t_{\mathrm{g}}}>0$.
Indeed, this is the case since we have $f\left(0\right)<0$,
$f^{\prime}\left(0\right)>0$,
$\lim\limits_{\begin{subarray}{c}\alpha\to+\infty\end{subarray}}f\left(\alpha\right)=+\infty$,
and the centre of symmetry of the cubic function lies at a positive abscissa
$\alpha_{\mathrm{c}}=\displaystyle\frac{\epsilon}{3}$. Therefore, in cases (i) and (ii), the real
part of the complex roots is positive and the system is stable.
In case (iii), the three roots are real. To determine their signs, we
calculate the Sturm polynomials of $P_{0}\left(Y\right)$:
$\displaystyle P_{0}\left(Y\right)$ $\displaystyle=$
$\displaystyle\displaystyle Y^{3}-Y^{2}\epsilon+Y-\epsilon_{\mathrm{d}},$ (50)
$\displaystyle P_{1}\left(Y\right)$ $\displaystyle=$
$\displaystyle\displaystyle 3Y^{2}-2\epsilon Y+1,$ (51) $\displaystyle
P_{2}\left(Y\right)$ $\displaystyle=$
$\displaystyle\displaystyle-\left[\left(\frac{2}{3}-\frac{2\epsilon^{2}}{9}\right)Y+\left(\frac{\epsilon}{9}-\epsilon_{\mathrm{d}}\right)\right],$
(52) $\displaystyle P_{3}\left(Y\right)$ $\displaystyle=$
$\displaystyle\displaystyle\frac{243D}{\left(\epsilon^{2}-3\right)^{2}}.$ (53)
As the three roots are real, $D>0$ and $P_{3}>0$. We can then apply the Sturm
theorem to determine the number of positive roots. Using $V\left(Y\right)$ to
denote the number of consecutive sign changes in the sequence
$\left[P_{0}\left(Y\right),P_{1}\left(Y\right),P_{2}\left(Y\right),P_{3}\left(Y\right)\right]$,
the number of positive roots of $P_{0}\left(Y\right)$ is given by
$V\left(0\right)-\lim\limits_{\begin{subarray}{c}Y\to+\infty\end{subarray}}V\left(Y\right)$.
Thus, if $\epsilon_{\mathrm{d}}-\frac{\epsilon}{9}<0$, the three roots are
positive. However, if $\epsilon_{\mathrm{d}}-\frac{\epsilon}{9}>0$, only one
root is positive and the two remaining ones are negative. However,
$\epsilon_{\mathrm{d}}$ has to be smaller than the larger positive root of
$D(\epsilon_{\mathrm{d}})=0$ in order for $D$ to remain positive (and thus,
for the system to be unstable), which is never the case if
$9\epsilon_{\mathrm{d}}>\epsilon>\sqrt{3}$.
Such a solid-gas mixture would thus be unconditionally stable. However, if
$\epsilon$ is large enough (which means, physically, that the damping due to
the drag occurs faster than the equilibration produced by the pressure) and the
ratio $\displaystyle\frac{\epsilon_{\mathrm{d}}}{\epsilon}$ is sufficiently
large (i.e., a large dust-to-gas ratio and thus an efficient forcing of the
gas motion by the dust), an instability may develop when additional
physical processes are involved (an example being the streaming instability
that occurs in a differentially rotating flow, c.f. Youdin & Goodman 2005).
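A quick numerical corroboration of this conclusion (a sketch, not part of the original appendix) is to scan illustrative ranges of $(\epsilon,\epsilon_{\mathrm{d}})$, respecting $\epsilon_{\mathrm{d}}<\epsilon$, and verify that every root of Eq. (42) has a positive real part, which via $\omega=-iy$ corresponds to decaying modes:

```python
import numpy as np

def roots_P0(eps, eps_d):
    # Roots of P0(Y) = Y^3 - eps*Y^2 + Y - eps_d  (Eq. 42).
    return np.roots([1.0, -eps, 1.0, -eps_d])

all_stable = True
for eps in np.logspace(-2, 2, 60):
    # Physically eps_d < eps, since eps - eps_d = t_p / t_g > 0.
    for eps_d in eps * np.linspace(1e-3, 0.999, 60):
        if np.any(roots_P0(eps, eps_d).real <= 0.0):
            all_stable = False
print("Re(Y) > 0 for all sampled (eps, eps_d):", all_stable)
```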
## Appendix B Real and imaginary parts of the roots of a cubic with real
coefficients
The cubic equation given by Eq. 26 can be solved using the known analytic
solution to a cubic equation, though for this problem we require both the real
and imaginary components of all three solutions. We consider the following
normalised cubic equation with respect to the variable $x$:
$f\left(x\right)=x^{3}+ax^{2}+bx+c,$ (54)
where $a$, $b$ and $c$ are real coefficients. We introduce the quantities $P$,
$Q$ and $D$, given by:
$P=\frac{a^{2}-3b}{9},$ (55) $Q=\frac{ab}{6}-\frac{c}{2}-\frac{a^{3}}{27},$
(56)
and
$D=P^{3}-Q^{2}.$ (57)
Denoting the roots of Eq.(54) by $r_{-1}$, $r_{0}$ and $r_{+1}$, the solutions
can be obtained by considering the following three cases:
1. Case i)
$P<0$,
$\displaystyle r_{0}$ $\displaystyle=$
$\displaystyle\displaystyle\frac{-a}{3}+2\sqrt{-P}\,\sinh\left(t\right),$ (58)
$\displaystyle r_{\pm 1}$ $\displaystyle=$
$\displaystyle\displaystyle\frac{-a}{3}-\sqrt{-P}\,\sinh\left(t\right)\pm
i\sqrt{-3P}\,\mathrm{cosh}\left(t\right).$ (59)
where
$t=\frac{1}{3}\,\mathrm{arcsinh}\left(\frac{Q}{\sqrt{\left(-P\right)^{3}}}\right).$
(60)
2. Case ii)
$P>0$ and $D<0$,
$\displaystyle r_{0}$ $\displaystyle=$
$\displaystyle\displaystyle\frac{-a}{3}+u+v,$ (61) $\displaystyle r_{\pm 1}$
$\displaystyle=$ $\displaystyle\displaystyle\frac{-a}{3}-\frac{u+v}{2}\pm
i\sqrt{3}\frac{u-v}{2}.$ (62)
where:
$\displaystyle u$ $\displaystyle=$
$\displaystyle\displaystyle\sqrt[3]{Q-\sqrt{-D}},$ (63) $\displaystyle v$
$\displaystyle=$ $\displaystyle\displaystyle\sqrt[3]{Q+\sqrt{-D}}.$ (64)
3. Case iii)
$P>0$ and $D>0$,
$\displaystyle
r_{n}=\frac{-a}{3}+2\sqrt{P}\mathrm{cos}\left(\displaystyle\frac{2\pi
n+\mathrm{arccos}\left(\frac{Q}{\sqrt{P^{3}}}\right)}{3}\right),$ (65)
with $n=0,\pm 1$.
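For reference, the three cases translate directly into code; the following is a minimal Python transcription (not supplied with the paper) of Eqs. (54)-(65), with a consistency check against a generic polynomial solver:

```python
import numpy as np

def cubic_roots(a, b, c):
    """Roots of x^3 + a x^2 + b x + c with real a, b, c, following Eqs. (54)-(65)."""
    P = (a * a - 3.0 * b) / 9.0
    Q = a * b / 6.0 - c / 2.0 - a ** 3 / 27.0
    D = P ** 3 - Q ** 2
    if P < 0.0:                                  # case i)
        t = np.arcsinh(Q / np.sqrt((-P) ** 3)) / 3.0
        r0 = -a / 3.0 + 2.0 * np.sqrt(-P) * np.sinh(t)
        rp = -a / 3.0 - np.sqrt(-P) * np.sinh(t) + 1j * np.sqrt(-3.0 * P) * np.cosh(t)
        return np.array([r0, rp, np.conj(rp)])
    if D < 0.0:                                  # case ii): P > 0 and D < 0
        u = np.cbrt(Q - np.sqrt(-D))
        v = np.cbrt(Q + np.sqrt(-D))
        r0 = -a / 3.0 + u + v
        rp = -a / 3.0 - (u + v) / 2.0 + 1j * np.sqrt(3.0) * (u - v) / 2.0
        return np.array([r0, rp, np.conj(rp)])
    n = np.array([0.0, 1.0, -1.0])               # case iii): P > 0 and D > 0, three real roots
    return -a / 3.0 + 2.0 * np.sqrt(P) * np.cos(
        (2.0 * np.pi * n + np.arccos(Q / np.sqrt(P ** 3))) / 3.0)

print(np.sort_complex(cubic_roots(1.0, 2.0, 3.0)))
print(np.sort_complex(np.roots([1.0, 1.0, 2.0, 3.0])))
```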
|
arxiv-papers
| 2011-06-09T08:25:08 |
2024-09-04T02:49:19.502262
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Guillaume Laibe (Monash), Daniel J. Price (Monash)",
"submitter": "Guillaume Laibe",
"url": "https://arxiv.org/abs/1106.1736"
}
|
1106.1881
|
# Inductively Coupled Augmented Railgun
Thomas B. Bahder William C. McCorkle Aviation and Missile Research,
Development, and Engineering Center,
US Army RDECOM, Redstone Arsenal, AL 35898, U.S.A.
###### Abstract
We derive the non-linear dynamical equations for an augmented electromagnetic
railgun, whose augmentation circuit is inductively coupled to the gun circuit.
We solve these differential equations numerically using example parameter
values. We find a complicated interaction between the augmentation circuit,
gun circuit, and mechanical degrees of freedom, leading to a complicated
optimization problem. For certain values of parameters, we find that an
augmented electromagnetic railgun has an armature kinetic energy that is 42%
larger than the same railgun with no augmentation circuit. Optimizing the
parameters may lead to further increase in performance.
## I Introduction
Launch systems, such as the electromagnetic railgun (EMG), have complex
electrical transient phenomena IEEE (2008); McCorkle and Bahder (2008). During
launch, the transient involves the build-up and penetration of a magnetic
field into the metallic material. The dynamics of magnetic field penetration
into the metal rails is described by a well-known magnetic diffusion equation
Knoepfel (2000), which leads to a skin effect where a large current is
transported inside a narrow channel. In a railgun that has a moving conducting
armature, the effect is called a velocity skin effect (VSE) and is believed to
be one of the major problems in limiting railgun performance Knoepfel (2000);
Young and Hughes (1982), because it leads to intense Joule heating of the
conducting materials, such as rails and armatures. To what extent the VSE is
responsible for limiting the performance of solid armatures is still the
subject of research Drobyshevski et al. (1999); Stefani et al. (2005);
Schneider et al. (2007, 2009).
For high-performance EMGs, such as those planned by the navy for nuclear and
conventional warships Walls et al. (1999); Black (2006); McNab and Beach
(2007), the armature velocity must be increased while keeping the length of
the rails fixed. One approach to increasing the armature velocity is to use
some sort of augmentation to the EMG circuit. Various types of augmentation
circuits have been considered Kotas et al. (1986), including hard magnet
augmentation fields arold et al. (1994) and superconducting coils Homan and
Scholz (1984); Homan et al. (1986).
In this paper, we consider a lumped circuit model of an augmented EMG. The
augmented EMG consists of two inductively coupled circuits. The first circuit
is the EMG circuit, containing the rails connected to a voltage source,
$V_{g}(t)$, that powers the rails and armature. The augmentation consists of
an external circuit that is inductively coupled to the EMG circuit via
magnetic field, but is not connected electrically to the EMG circuit. The
augmentation circuit has its own voltage generator, $V_{a}(t)$. See Figure 1
for a schematic layout. Figure 2 shows the equivalent lumped-circuit model
that we are considering.
A simplified arrangement has been previously considered, where the EMG circuit
was augmented by a constant external magnetic field arold et al. (1994). In
our work, we assume that a real augmentation circuit produces the magnetic
field that couples to the EMG circuit, and hence, the EMG circuit is
interacting with the augmentation circuit through mutual inductance, see
Figure 1. This results in a “back action” on the augmentation circuit by the
EMG circuit. This back action modifies the current in the augmentation
circuit, resulting in a non-constant $B$-field due to the augmentation
circuit. Furthermore, the EMG circuit is coupled to the mechanical degree of
freedom, the moving armature, which leads to a variation (with armature
position) of the self inductance and resistance of the EMG circuit. The
variations of the self inductance, resistance and mutual inductance in the EMG
circuit act back on the augmentation circuit, leading to a complex interaction
between the gun circuit, the augmentation circuit, and the mechanical degree
of freedom (the armature). The resulting dynamical system is described by
three non-linear differential equations that are derived in Section III. In
order to derive these dynamical equations, we first consider Ohm’s law in the
presence of an external magnetic field, which is the field produced by the
augmentation circuit.
Figure 1: Schematic diagram of an inductively augmented EMG with its
surrounding augmentation circuit. Magnetic flux from the augmentation circuit
inductively links to the gun circuit.
## II Ohm’s Law in a Magnetic Field
Consider a series electrical circuit with a voltage generator, $V$, resistor
$R$, and inductor $L$. Assume that an external source of current creates a
time-dependent magnetic field, ${\bf B}_{e}$, and at some time a quantity of
magnetic flux from this field, $\phi_{e}$, passes through the electrical
circuit. The time-dependent external magnetic field, ${\bf B}_{e}$, creates a
non-conservative electric field, ${\bf E}_{e}$, which are related by the
Maxwell equation
${\rm curl}\,{\bf{E}}_{e}=-\frac{\partial{\bf B}_{e}}{\partial t}$ (1)
Taking an integration contour along the wire of the electrical circuit, and
integrating Eq. (1) over the area enclosed by this contour, using Stoke’s rule
to change the surface integral to a line integral, we have the emf induced in
the electrical circuit, $\varepsilon_{e}$, given in terms of the rate of
change of the external flux (due to the external magnetic field ${\bf
B}_{e}$):
$\varepsilon_{e}=\oint{{\bf{E}}_{e}\cdot
d{\bf{l}}}=-\frac{{\partial\phi_{e}}}{{\partial t}}$ (2)
There are two sources that do work on the electrical circuit, the voltage
generator, $V$, and the emf, $\varepsilon_{e}$, induced in the circuit by the
external magnetic field, ${\bf B}_{e}$. The power entering the circuit,
$\varepsilon_{e}\,I+I\,V$, goes into Joule heating in the resistor and into
increasing the stored magnetic field energy:
$\varepsilon_{e}\,I+I\,V=I^{2}R+\frac{d}{dt}\frac{1}{2}LI^{2}$ (3)
Dividing Eq. (3) by $I$, and solving for the externally induced emf,
$\varepsilon_{e}$, in terms of the circuit quantities and the rate of change
of external flux, $\phi_{e}$,
$\varepsilon_{e}=-V+IR+L\frac{dI}{dt}=-\frac{d\phi_{e}}{dt}$ (4)
Equation (4) can be rewritten in terms of the total emf as
$-V+IR=-L\frac{dI}{dt}-\frac{d\phi_{e}}{dt}\equiv-\frac{d\phi}{dt}$ (5)
where $-L\,{dI}/{dt}=-{d\phi_{i}}/{dt}$ is the emf generated in the circuit due
to the internal current $I$ in the circuit, and $-{d\phi_{e}}/{dt}$ is the emf
generated in the circuit by the external magnetic field ${\bf B}_{e}$. The sum
of the internal and external flux is the total flux: $\phi=\phi_{i}+\phi_{e}$.
Equation (5) relates the rate of change of total flux through the circuit,
$d\phi/dt$, to the voltage drops along the circuit, $-V+IR$, and is an
expression of Ohm’s law in the presence of a magnetic field. We will use this
form of Ohm’s law to write the coupled equations for the augmented railgun.
## III Dynamical Equations
Consider an augmented railgun composed of an augmentation circuit with voltage
generator $V_{a}(t)$ and a gun circuit with voltage generator, $V_{g}(t)$. We
assume that the circuits are inductively coupled, but have no electrical
connection, see Figure 1. The equivalent circuit for the augmented railgun is
shown in Figure 2. The resistance of the gun circuit, $R_{g}(x)$, changes with
armature position $x(t)$ and can be written as
$R_{g}(x)=R_{g0}+R_{g}^{\prime}\,\,x(t)$ (6)
where $R_{g0}$ is the resistance of the gun circuit when $x=0$, and
$R_{g}^{\prime}$ is the gradient of resistance of the gun circuit at $x=0$.
Figure 2: The equivalent circuit is shown for the augmented railgun in Figure
1. Magnetic flux from the augmentation circuit couples to the gun circuit
through mutual inductance $M(x)$. The self inductance of the gun circuit,
$L_{g}(x)$, the mutual inductance, $M(x)$, and the resistance, $R_{g}(x)$, are
functions of the armature position, $x(t)$.
The total flux in the gun circuit, $\phi_{g}$, and the total flux in the
augmentation circuit, $\phi_{a}$, can be written as
$\displaystyle\phi_{g}$ $\displaystyle=$ $\displaystyle
L_{g}I_{g}+M_{\text{ga}}I_{a}$ (7) $\displaystyle\phi_{a}$ $\displaystyle=$
$\displaystyle L_{a}I_{a}+M_{\text{ag}}I_{g}$ (8)
where $I_{a}$ and $I_{g}$ are the currents in the augmentation circuit and gun
circuit, respectively, $L_{a}$ and $L_{g}$, are the self inductances of the
augmentation and gun circuits, respectively, and $M_{\text{ga}}$ and
$M_{\text{ag}}$ are the mutual inductances, which must be equal,
$M_{\text{ga}}=M_{\text{ag}}=M(x)$. The self inductance of the gun circuit,
$L_{g}(x)$, changes with armature position $x(t)$. Also, the area enclosed by
the gun circuit changes with armature position, and therefore, the coupling
between the augmented circuit and gun circuit, represented by the mutual
inductance, $M(x)$, changes with armature position, $x(t)$. Furthermore, in
order for the free energy of the system to be positive, the self inductances
and the mutual inductance must satisfy Landau et al. (1984)
$M(x)=k\sqrt{L_{a}L_{g}(x)}$ (9)
for all values of $x$. Here, the coupling coefficient must satisfy $|k|<1$. We
can write the self inductance of the gun circuit as
$L_{g}(x)=L_{g0}+L_{g}^{\prime}\,\,x(t)$ (10)
where $L_{g0}$ is the inductance when $x=0$ and $L_{g}^{\prime}$ is the
inductance gradient of the gun circuit. Similarly, the mutual inductance
between augmented circuit and gun circuit can be written as
$M(x)=M_{0}+M^{\prime}\,\,x(t)$ (11)
where $M_{0}$ is the mutual inductance when $x=0$ and $M^{\prime}$ is the
mutual inductance gradient. For $x=0$ and $x=\ell$, where $\ell$ is the rail
length, Eq. (9) gives
$\displaystyle M_{0}$ $\displaystyle=$ $\displaystyle
k\sqrt{L_{a}L_{\text{g0}}}$ (12) $\displaystyle M^{\prime}$ $\displaystyle=$
$\displaystyle\frac{1}{\ell}\left[k\sqrt{L_{a}\left(L_{\text{g0}}+L_{g}^{\prime}\,\ell\right)}-M_{0}\right]$
(13)
The coupling coefficient, $k$, can be positive or negative, and as mentioned
above, must satisfy $|k|<1$. The sign of $k$ determines the phase of the
inductive coupling between augmentation and gun circuits. Choosing the
coupling coefficient $k$, and the two self inductances, $L_{g0}$ and $L_{a}$,
Eq. (12) then determines the value of the mutual inductance, $M_{0}$. Then,
choosing a value for the rail length $\ell$, and the self inductance gradient,
$L_{g}^{\prime}$, Eq. (13) determines the mutual inductance gradient,
$M^{\prime}$. See Table 1 for parameter values used.
Table 1: Electromagnetic gun and augmentation circuit parameters. Quantity | Symbol | Value
---|---|---
length of rails (gun length) | $\ell$ | 10.0 m
mass of armature | $m$ | 20 kg
coupling coefficient | $k$ | $\pm$0.80
self inductance of rails at $x=0$ | $L_{g0}$ | 6.0$\times$10$^{-5}$ H
self inductance of augmentation circuit | $L_{a}$ | 6.0$\times$10$^{-5}$ H
self inductance gradient of rails | $L_{g}^{\prime}$ | 0.60$\times$10$^{-6}$ H/m
resistance of augmentation circuit | $R_{a}$ | 0.10 $\Omega$
resistance of gun circuit at $x=0$ | $R_{g0}$ | 0.10 $\Omega$
resistance gradient of gun circuit at $x=0$ | $R_{g}^{\prime}$ | 0.002 $\Omega/$m
voltage generator amplitude in gun circuit | $V_{g0}$ | 8.0$\times$105 Volt
voltage generator amplitude in augmentation circuit | $V_{a0}$ | 8.0$\times$105 Volt
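As a worked example (an added illustration using the values of Table 1), for $k=+0.80$ Eq. (12) gives
$M_{0}=0.80\sqrt{(6.0\times 10^{-5}\,\mathrm{H})(6.0\times 10^{-5}\,\mathrm{H})}=4.8\times 10^{-5}\,\mathrm{H},$
and Eq. (13) gives
$M^{\prime}=\frac{1}{10\,\mathrm{m}}\left[0.80\sqrt{(6.0\times 10^{-5}\,\mathrm{H})(6.6\times 10^{-5}\,\mathrm{H})}-4.8\times 10^{-5}\,\mathrm{H}\right]\approx 2.3\times 10^{-7}\,\mathrm{H/m},$
where $L_{\text{g0}}+L_{g}^{\prime}\,\ell=6.0\times 10^{-5}+0.60\times 10^{-6}\times 10=6.6\times 10^{-5}$ H. For $k=-0.80$ both $M_{0}$ and $M^{\prime}$ simply change sign.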
Two dynamical equations for the augmented railgun are obtained by applying
Ohm’s law in Eq. (5) to the gun circuit and to the augmented circuit:
$\displaystyle-V_{a}+I_{a}R_{a}$ $\displaystyle=$
$\displaystyle-\frac{d}{dt}\left[L_{a}I_{a}+(M_{0}+M^{\prime}\,x(t))\,I_{g}\right]$
$\displaystyle-V_{g}+I_{g}R_{g}(x)$ $\displaystyle=$
$\displaystyle-\frac{d}{dt}\left[(L_{g0}+L_{g}^{\prime}\,x(t))\,I_{g}\right.$
(14) $\displaystyle+\left.(M_{0}+M^{\prime}\,x(t))\,I_{a}\right]$
The third dynamical equation is obtained from the coupling of the electrical
and mechanical degrees of freedom McCorkle and Bahder (2008). Therefore, the
three non-linear coupled dynamical equations for $I_{g}(t)$, $I_{a}(t)$ and
$x(t)$ are given by:
$\displaystyle-V_{a}(t)+I_{a}(t)R_{a}$ $\displaystyle=$ $\displaystyle-
L_{a}\frac{dI_{a}}{dt}-\left(M_{0}+M^{\prime}x(t)\right)\frac{dI_{g}}{dt}-M^{\prime}I_{g}(t)\frac{dx(t)}{dt}$
(15) $\displaystyle-
V_{g}(t)+I_{g}(t)\left(R_{\text{g0}}+R_{g}^{\prime}\,x(t)\right)$
$\displaystyle=$
$\displaystyle-\left(L_{\text{g0}}+L_{g}^{\prime}\,x(t)\right)\frac{dI_{g}}{dt}-L_{g}^{\prime}\,\text{
}I_{g}(t)\frac{dx(t)}{dt}-\left(M_{0}+M^{\prime}x(t)\right)\frac{dI_{a}}{dt}-M^{\prime}\,I_{a}(t)\frac{dx(t)}{dt}\hskip
18.06749pt$ (16) $\displaystyle m\frac{d^{2}x(t)}{dt^{2}}$ $\displaystyle=$
$\displaystyle\frac{1}{2}L_{g}^{\prime}\,\,I_{g}{}^{2}(t)$ (17)
where $V_{g}(t)$ and $V_{a}(t)$ are the voltage generators that drive the gun
and augmentation circuits. From Eq. (17), we see that the EMG armature has a
positive acceleration independent of whether the gun voltage generator is a.c.
or d.c. because the armature acceleration is proportional to $I_{g}{}^{2}(t)$.
The armature velocity is proportional to the time integral of $I_{g}{}^{2}(t)$, and
therefore a higher final velocity will be achieved for d.c. current, with the
associated d.c. gun voltage $V_{g}(t)=V_{g0}$, where $V_{g0}$ is a constant.
Of course, the actual current will not be constant in the gun circuit because
of the coupling to the moving armature and to the augmentation circuit. We
want to search for solutions where the armature velocity is higher for an EMG
with the augmented circuit than for an EMG without an augmentation circuit. In
order to increase the coupling between the augmentation circuit and the gun
circuit, we choose an a.c. voltage generator in the augmentation circuit:
$V_{a}(t)=V_{a0}\sin(\omega t)$ (18)
where $V_{a0}$ is a constant amplitude for the voltage generator and $\omega$
is the angular frequency of the augmentation circuit voltage generator.
As an example of the complicated coupling between the augmentation and gun
circuits, we will also obtain solutions for a d.c. voltage generator in the
augmentation circuit. As we will see, the gun circuit causes a back action on
the augmentation circuit, leading to a non-constant current in the
augmentation circuit.
We need to choose initial conditions at time $t=0$. We assume that there is no
initial current in the gun and augmentation circuits and that the initial
position and velocity of the armature are zero:
$\displaystyle I_{g}(0)$ $\displaystyle=$ $\displaystyle 0$ (19)
$\displaystyle I_{a}(0)$ $\displaystyle=$ $\displaystyle 0$ (20)
$\displaystyle x(0)$ $\displaystyle=$ $\displaystyle 0$ (21)
$\displaystyle\frac{dx(0)}{dt}$ $\displaystyle=$ $\displaystyle 0$ (22)
Figure 3: For a.c. augmentation circuit voltage, with $\omega=10^{3}$ rad/sec,
the armature position, velocity, and augmentation and gun circuit currents are
plotted for two different values of coupling constant, $k=+0.80$ and
$k=-0.80$. Figure 4: For a.c. augmentation circuit voltage, with
$\omega=10^{3}$ rad/sec, the left plot shows the armature velocity vs. time
for three values of coupling constant, $k=-0.80$, $k=+0.80$, and $k=0.0$, for
parameter values in Table 1. Note that the velocity curves in this figure for
$k=-0.80$ and $k=+0.80$, are the same as the curves in Figure 3. The right
side plot shows the current in the gun circuit and in the decoupled ($k=0$)
augmentation circuit. The corresponding kinetic energy plots are shown in
Figure 5.
For the special case when $L_{g}^{\prime}=0$, $M^{\prime}=0$, and
$R_{g}^{\prime}=0$, the mechanical degree of freedom described by Eq. (17)
decouples from Eqs. (15)–(16). In this case, Eqs. (15)–(16) describe a
transformer with primary and secondary circuits having voltage generators,
$V_{a}(t)$ and $V_{g}(t)$, respectively. The solution for the mechanical
degree of freedom is then $x(t)=0$ for all time $t$.
In what follows, we solve the dynamical Eqs. (15)–(17) numerically for the
case when the EMG and augmented circuits are coupled.
Figure 5: For a.c. augmentation circuit voltage, with $\omega=10^{3}$ rad/sec,
the armature kinetic energy is plotted vs. time for three values of coupling
coefficient, $k=-0.80$, 0.0, and 0.80, for parameters values shown in Table 1.
The kinetic energy is largest when $k=+0.80$. For the corresponding velocity
plots, see Figure 4. Figure 6: For d.c. augmentation circuit voltage (with
$\omega=0$), the armature position, velocity, and currents are shown for the
case when the voltage generators in augmentation circuit, $V_{a}(t)$, and gun
circuits, $V_{g}(t)$, are d.c. generators ($\omega=0$) having values shown in
Table 1.
## IV Numerical Solution
As described above, if we choose the parameters $k$, $L_{a}$, $L_{g0}$, then
$M_{0}$ is determined from Eq. (12). Next, if we choose $L_{g}^{\prime}$ and
$\ell$, then the mutual inductance gradient, $M^{\prime}$, is determined by Eq. (13).
See Table 1 for values of parameters used in the calculations below.
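The coupled system (15)–(17) is straightforward to integrate once it is recast as a first-order system. The following is a minimal sketch of such an integration (an illustration added here, not the authors' code), written in Python with NumPy/SciPy and using the Table 1 parameters: at each step, Eqs. (15)–(16) are solved as a $2\times 2$ linear system for $dI_{a}/dt$ and $dI_{g}/dt$, Eq. (17) supplies the acceleration, and the integration terminates when $x(t)=\ell$. The resulting shot time, muzzle velocity and kinetic energy can then be compared against Tables 2 and 3.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from Table 1 (SI units)
ell, m, k = 10.0, 20.0, +0.80
Lg0, La, Lgp = 6.0e-5, 6.0e-5, 0.60e-6
Ra, Rg0, Rgp = 0.10, 0.10, 0.002
Vg0, Va0, omega = 8.0e5, 8.0e5, 1.0e3   # omega: angular frequency in Eq. (18)

# Mutual inductance at x = 0 and its gradient, Eqs. (12)-(13)
M0 = k * np.sqrt(La * Lg0)
Mp = (k * np.sqrt(La * (Lg0 + Lgp * ell)) - M0) / ell

def rhs(t, y):
    """Right-hand side of Eqs. (15)-(17) written as a first-order system."""
    Ig, Ia, x, v = y
    Lg, M, Rg = Lg0 + Lgp * x, M0 + Mp * x, Rg0 + Rgp * x
    Va = Va0 * np.sin(omega * t)   # a.c. augmentation voltage, Eq. (18)
    Vg = Vg0                       # d.c. gun voltage
    # Eqs. (15)-(16) rearranged as a 2x2 linear system for dIa/dt and dIg/dt
    A = np.array([[La, M], [M, Lg]])
    b = np.array([Va - Ia * Ra - Mp * Ig * v,
                  Vg - Ig * Rg - Lgp * Ig * v - Mp * Ia * v])
    dIa, dIg = np.linalg.solve(A, b)
    dv = 0.5 * Lgp * Ig**2 / m     # Eq. (17)
    return [dIg, dIa, v, dv]

def muzzle(t, y):
    """Terminal event: armature reaches the end of the rails, x(t) = ell."""
    return y[2] - ell
muzzle.terminal = True

# Initial conditions (19)-(22): zero currents, position and velocity
sol = solve_ivp(rhs, (0.0, 0.02), [0.0, 0.0, 0.0, 0.0],
                events=muzzle, max_step=1e-6, rtol=1e-8)
tf = sol.t_events[0][0]
vf = sol.y_events[0][0][3]
print(f"t_f = {tf*1e3:.3f} ms,  v(t_f) = {vf:.0f} m/s,  KE = {0.5*m*vf**2/1e6:.1f} MJ")
```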
The sign of the coupling coefficient, $k$, affects the interaction of the
augmentation and gun circuits in subtle ways. We solve Eqs. (15)–(17) for a
positive and negative value of the coupling coefficient $k$. Figure 3 shows a
plot of armature position, $x(t)$, velocity $v(t)$, and currents $I_{g}(t)$
and $I_{a}(t)$, for a gun circuit powered by a d.c. voltage $V_{g0}$, and an
augmentation circuit with an a.c. voltage source given by Eq.(18), using the
parameters in Table 1 and coupling coefficient values $k=+0.80$ and $k=-0.80$.
Note that the rail length is taken to be $\ell=10$ m, so the plots are only
valid for time $0\leq t\leq t_{f}$, where $t_{f}$ is given by $x(t_{f})=\ell$.
For $k=+0.80$ we have $t_{f}=5.563$ ms, and for $k=-0.80$ we have
$t_{f}=5.277$ ms, see Table 2. From Figure 3, at short times ($t\approx 2$ ms)
the armature velocity for $k=+0.80$ is smaller than for $k=-0.80$; however, at
longer times ($t\approx 5$ ms) the armature velocity is higher for $k=+0.80$.
Also at short times ($t\approx 2$ ms), the augmentation circuit current
$I_{a}(t)$ is out of phase with the gun current $I_{g}(t)$ for $k=+0.80$,
whereas for $k=-0.80$ the augmentation and gun circuit currents are
approximately in phase. Even
though the gun circuit has a d.c. voltage generator, the gun circuit current
varies due to the coupling with the augmentation circuit and with the
mechanical degree of freedom $x(t)$. These plots illustrate the complicated
coupling between the augmentation circuit, the gun circuit, and the mechanical
degree of freedom, $x(t)$. When the armature reaches the end of the rails at
$\ell=10$ m, its velocity is $v(t_{f})=4769$ m/s for $k=+0.80$, whereas its
velocity is $v(t_{f})=3336$ m/s for $k=-0.80$. The armature kinetic energy for
$k=0.0$ is 159.6 MJ, whereas for $k=+0.80$ the armature kinetic energy is 227.4
MJ, so there is a 42% increase in kinetic energy in the augmented EMG compared
to the non-augmented EMG.
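These kinetic energies follow directly from the tabulated muzzle velocities and the armature mass of Table 1 (an added arithmetic check): $\tfrac{1}{2}mv^{2}(t_{f})=\tfrac{1}{2}(20\,\mathrm{kg})(4769\,\mathrm{m/s})^{2}\approx 227.4$ MJ for $k=+0.80$, and $\tfrac{1}{2}(20\,\mathrm{kg})(3336\,\mathrm{m/s})^{2}\approx 111.3$ MJ for $k=-0.80$.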
Table 2: A.C. augmentation. For the parameters in Table 1, taking the angular frequency of the augmentation circuit voltage in Eq. (18) as $\omega=10^{3}$ rad/sec, the shot time, $t_{f}$, armature velocity, $v(t_{f})$, and armature kinetic energy (KE) in megajoules are shown for three different values of the coupling constant $k$. $k$ | $t_{f}$ [$10^{-3}$ s] | $v(t_{f})$ [$10^{3}$ m/s] | KE [MJ]
---|---|---|---
-0.80 | 5.277 | 3.336 | 111.3
0.0 | 5.545 | 3.994 | 159.6
+0.80 | 5.563 | 4.769 | 227.4
Table 3: D.C. augmentation. For the parameters in Table 1, taking the angular frequency of the augmentation circuit voltage in Eq. (18) as $\omega=0$, the shot time, $t_{f}$, armature velocity, $v(t_{f})$, and armature kinetic energy (KE) in megajoules are shown for three different values of the coupling constant $k$. $k$ | $t_{f}$ [$10^{-3}$ s] | $v(t_{f})$ [$10^{3}$ m/s] | KE [MJ]
---|---|---|---
-0.80 | 4.860 | 4.056 | 164.5
0.0 | 5.545 | 3.994 | 159.6
+0.80 | 6.182 | 3.868 | 149.6
We can decouple the augmentation circuit from the gun circuit by setting
$k=0$, and through Eqs. (12) and (13), this leads to zero values for the mutual
inductances, $M_{0}=0$ and $M^{\prime}=0$, in Eqs. (15)–(17). In this case, Eq.
(15) is decoupled from Eq. (16) and Eq. (17). Equation (15) then describes a
simple L-R circuit with a voltage generator $V_{a}(t)$, leading to a current
in the (decoupled) augmentation circuit that is purely sinusoidal. In this
case, Equations (16) and (17) describe a simple EMG that is not coupled to an
augmentation circuit. Figure 4 shows the velocity vs. time for this EMG with
no augmentation circuit. Also shown are the currents in the gun circuit and
(decoupled) augmentation circuit. When $k=0$, the time for the armature to
reach the end of the rails is $t_{f}=5.545$ ms and its velocity is $v=3994$
m/s. For the parameters in Table 1, the augmented EMG with $k=0.80$ has a 19%
increase in velocity over the $k=0$ non-augmented EMG. As mentioned above, in
terms of kinetic energy, the augmented EMG with $k=0.80$ has a 42% larger
kinetic energy than the non-augmented gun (with $k=0.0$), see Figure 5, where
we plot the kinetic energy of the armature as a function of time.
Finally, we show the effect of the coupling between augmented and gun circuits
when the augmented circuit has a d.c. voltage generator with constant
amplitude voltage, $V_{a}(t)=V_{a0}$. We solve Eqs. (15)–(17) for coupling
coefficient values $k=+0.80$ and $k=-0.80$. For d.c. augmentation (with
$\omega=0$), for $k=+0.80$, the armature velocity at $t_{f}=6.182$ ms is
$v=3868$ m/s, while for $k=-0.80$ the armature velocity at $t_{f}=4.860$ ms is
$v=4056$ m/s, see Table 3. Therefore, for a d.c. voltage generator in the
augmentation circuit, the negative value $k=-0.80$ leads to a higher armature
velocity than the positive $k$ value, which is just opposite for the case of
an a.c. voltage generator in the augmentation circuit. The plots are shown in
Figure 6 and the kinetic energy is shown in Figure 7. Note that even though
the voltage generators in the gun and augmentation circuits are now taken to
be d.c., the resulting currents, $I_{g}(t)$ and $I_{a}(t)$, are not constant,
due to the interaction between the augmentation and gun circuits and these
circuits interacting with the mechanical degree of freedom $x(t)$ of the
armature.
Figure 7: For d.c. augmentation circuit voltage (with $\omega=0$), the
armature kinetic energy is plotted vs. time for three values of coupling
coefficient, $k=-0.80$, 0.0, and 0.80, for parameters values shown in Table 1.
The kinetic energy is largest when $k=-0.80$. For the corresponding velocity
plots, see Figure 6.
## V Summary
We have written down the non-linear coupled differential equations for an
augmented EMG that is inductively coupled to the augmentation circuit. We
solved these differential equations numerically using example values of
parameters. For the sample parameters that we used, with $k=+0.80$ the a.c.
augmented EMG had a 42% larger kinetic energy of the armature than the same
EMG with no augmentation. We have made no effort to vary the parameter values
to optimize the augmented EMG performance. The improvement in performance may
be higher for other parameter values. In this work we have neglected many
practical design concerns, such as heating and melting of the rails McCorkle
and Bahder (2008). We have considered the simplest configuration of an
inductively coupled augmented EMG, with a single external circuit coupling to
the EMG circuit. Other configurations may be explored, such as multi-stage
inductively coupled circuits to the EMG circuit. Further work is needed to
explore in detail the parameter space of this augmentation scheme as well as
alternative augmentation schemes.
## References
* IEEE (2008) IEEE, in _Electromagnetic Launch Technology_ , edited by IEEE (IEEE, Victoria, British Columbia, Canada, 2008), Proceedings of a meeting held 10-13 June 2008.
* McCorkle and Bahder (2008) W. C. McCorkle and T. B. Bahder, 27th Army Science Conference, Nov.-Dec. 2010, Orlando, Florida, USA (2008), URL http://arxiv.org/abs/0810.2985.
* Knoepfel (2000) H. E. Knoepfel, _Magnetic Fields: A Comprehensive Theoretical Treatise for Practical Use_ (Wiley, New York, 2000).
* Young and Hughes (1982) F. Young and W. Hughes, IEEE Trans. Magn. MAG-18, 33 (1982).
* Drobyshevski et al. (1999) E. M. Drobyshevski, R. O. Kurakin, S. I. Rozov, B. G. Zhukov, M. V. Beloborodyy, and V. G. Latypov, J. Phys. D, Appl. Phys. 32, 2910 (1999).
* Stefani et al. (2005) F. Stefani, R. Merrill, and T. Watt, IEEE Trans. Magn. 41, 437 (2005).
* Schneider et al. (2007) M. Schneider, R. Schneider, V. Stankevic, S. Balevicius, and N. Zurauskiene, IEEE Trans. Magn. 43, 370 (2007).
* Schneider et al. (2009) M. Schneider, O. Liebfried, V. Stankevic, S. Balevicius, and N. Zurauskiene, IEEE Trans. Magn. 45, 430 (2009).
* Walls et al. (1999) W. A. Walls, W. F. Weldon, S. B. Pratap, M. Palmer, and D. Adams, IEEE Trans. Magn. 35, 262 (1999).
* Black (2006) B. C. Black, Ph.D. thesis, Naval Postgraduate School, Monterey, California (2006).
* McNab and Beach (2007) I. R. McNab and F. C. Beach, IEEE Trans. Magn. 43, 463 (2007).
* Kotas et al. (1986) J. Kotas, C. Guderjahn, and F. Littman, IEEE Trans. Magn. 22, 1573 (1986).
* Harold et al. (1994) E. Harold, B. Bukiet, and W. Peter, IEEE Trans. Magn. 30, 1433 (1994).
* Homan and Scholz (1984) C. G. Homan and W. Scholz, IEEE Trans. Magn. 20, 366 (1984).
* Homan et al. (1986) C. G. Homan, C. E. Cummings, and C. M. Fowler, IEEE Trans. Magn. 22, 1527 (1986).
* Landau et al. (1984) L. D. Landau, E. M. Lifshitz, and L. P. Pitaevskii, _Electrodynamics of Continuous Media_ (Pergamon Press, New York, 1984), 2nd ed.
|
arxiv-papers
| 2011-06-09T19:04:29 |
2024-09-04T02:49:19.512011
|
{
"license": "Public Domain",
"authors": "Thomas B. Bahder and William C. McCorkle",
"submitter": "Thomas B. Bahder",
"url": "https://arxiv.org/abs/1106.1881"
}
|
1106.2037
|
# Einstein’s vierbein field theory of curved space
Jeffrey Yepez Air Force Research Laboratory, Hanscom Air Force Base, MA 01731
(January 20, 2008)
###### Abstract
General Relativity theory is reviewed following the vierbein field theory
approach proposed in 1928 by Einstein. It is based on the vierbein field taken
as the “square root” of the metric tensor field. Einstein’s vierbein theory is
a gauge field theory for gravity, with the vierbein field playing the role of a
gauge field, though not exactly as the vector potential field does in Yang-Mills
theory: the correction to the derivative (the covariant derivative) is not
proportional to the vierbein field, as it would be if gravity were strictly a
Yang-Mills theory. Einstein discovered the spin connection in terms of the
vierbein fields to take the place of the conventional affine connection. To
date, one of the most important applications of the vierbein representation is
for the derivation of the correction to a 4-spinor quantum field transported
in curved space, yielding the correct form of the covariant derivative. Thus,
the vierbein field theory is the most natural way to represent a relativistic
quantum field theory in curved space. Using the vierbein field theory, a
derivation of the Einstein equation and then the Dirac equation in curved space
is presented. Einstein's original 1928 manuscripts translated into
English are included.
vierbein, general relativity, gravitational gauge theory, Dirac equation in
curved space
###### Contents
1. I Introduction
1. I.1 Similarity to Yang-Mills gauge theory
2. II Mathematical framework
1. II.1 Local basis
2. II.2 Vierbein field
3. III Connections
1. III.1 Affine connection
2. III.2 Spin connection
3. III.3 Tetrad postulate
4. IV Curvature
1. IV.1 Riemann curvature from the affine connection
2. IV.2 Riemann curvature from the spin connection
5. V Mathematical constructs
1. V.1 Consequence of tetrad postulate
2. V.2 Affine connection in terms of the metric tensor
3. V.3 Invariant volume element
4. V.4 Ricci tensor
6. VI Gravitational action
1. VI.1 Free field gravitational action
2. VI.2 Variation with respect to the vierbein field
3. VI.3 Action for a gravitational source
4. VI.4 Full gravitational action
7. VII Einstein’s action
1. VII.1 Lagrangian density form 1
2. VII.2 Lagrangian density form 2
3. VII.3 First-order fluctuation in the metric tensor
4. VII.4 Field equation in the weak field limit
8. VIII Relativistic chiral matter in curved space
1. VIII.1 Invariance in flat space
2. VIII.2 Invariance in curved space
3. VIII.3 Covariant derivative of a spinor field
9. IX Conclusion
10. X Acknowledgements
11. I $n$-Bein and metric
12. II Distant parallelism and rotational invariance
13. III Invariants and covariants
14. I The underlying field equation
15. II The field equation in the first approximation
## I Introduction
The purpose of this manuscript is to provide a self-contained review of the
procedure for deriving the Einstein equations for gravity and the Dirac
equation in curved space using the vierbein field theory. This gauge field
theory approach to General Relativity (GR) was discovered by Einstein in 1928
in his pursuit of a unified field theory of gravity and electricity. He
originally published this approach in two successive letters appearing one
week apart Einstein, 1928b ; Einstein, 1928a . The first manuscript, a seminal
contribution to mathematical physics, adds the concept of distant parallelism
to Riemann’s theory of curved manifolds that is based on comparison of distant
vector magnitudes, which before Einstein did not incorporate comparison of
distant directions.
Historically there appears to have been a lack of interest in Einstein’s
research following his discovery of general relativity, principally from the
late 1920’s onward. Initial enthusiasm for Einstein’s unification approach
turned into a general rejection. In Born’s July 15th, 1925 letter (the most
important in the collection) to Einstein following the appearance of his
student Heisenberg’s revolutionary paper on the matrix representation of
quantum mechanics, Born writes Born, (1961):
> Einstein’s field theory …was intended to unify electrodynamics and
> gravitation …. I think that my enthusiasm about the success of Einstein's
> idea was quite genuine. In those days we all thought that his objective,
> which he pursued right to the end of his life, was attainable and also very
> important. Many of us became more doubtful when other types of fields
> emerged in physics, in addition to these; the first was Yukawa’s meson
> field, which is a direct generalization of the electromagnetic field and
> describes nuclear forces, and then there were the fields which belong to the
> other elementary particles. After that we were inclined to regard Einstein’s
> ceaseless efforts as a tragic error.
Weyl and Pauli’s rejection of Einstein’s thesis of distant parallelism also
helped pave the way for the view that Einstein's findings had gone awry.
Furthermore, as the belief in the fundamental correctness of quantum theory
solidified by burgeoning experimental verifications, the theoretical physics
community seemed more inclined to latch onto Einstein’s purported repudiation
of quantum mechanics: he failed to grasp the most important direction of
twentieth century physics.
Einstein announced his goal of achieving a unified field theory before he
published firm results. It is already hard not to look askance at an audacious
unification agenda, but it did not help when the published version of the
manuscript had a fundamental error in its opening equation; even though this
error was perhaps introduced by the publisher’s typist, it can cause
confusion.111The opening equation (1a) was originally typeset as
$\mathfrak{H}=h\,g^{\mu\nu}\;,\;{\Lambda_{\mu}}^{\alpha}_{\beta},\;{\Lambda_{\nu}}^{\beta}_{\alpha},\;\cdots$
offered as the Hamiltonian whose variation at the end of the day yields the
Einstein and Maxwell’s equations. I have corrected this in the translated
manuscript in the appendix. As far as I know at the time of this writing in
2008, the two 1928 manuscripts have never been translated into English.
English versions of these manuscripts are provided as part of this review–see
the Appendix–and are included to contribute to its completeness.
In the beginning of the year 1928, Dirac introduced his famous square root of
the Klein-Gordon equation, establishing the starting point for the story of
relativistic quantum field theory, in his paper on the quantum theory of the
electron Dirac, (1928). This groundbreaking paper by Dirac may have inspired
Einstein, who completed his manuscripts a half year later in the summer of
1928. With deep insight, Einstein introduced the vierbein field, which
constitutes the square root of the metric tensor field.222The culmination of
Einstein’s new field theory approach appeared in Physical Review in 1948
Einstein, (1948), entitled “A Generalized Theory of Gravitation.” Einstein and
Dirac’s square root theories mathematically fit well together; they even
become joined at the hip when one considers the dynamical behavior of chiral
matter in curved space.
Einstein’s second manuscript represents a simple and intuitive attempt to
unify gravity and electromagnetism. He originally developed the vierbein field
theory approach with the goal of unifying gravity and quantum theory, a goal
which he never fully attained with this representation. Nevertheless, the
vierbein field theory approach represents progress in the right direction.
Einstein’s unification of gravity and electromagnetism, using only fields in
four-dimensional spacetime, is conceptually much simpler than the well known
Kaluza-Klein approach to unification that posits an extra compactified spatial
dimension. But historically it was the Kaluza-Klein notion of extra dimensions
that gained popularity as it was generalized to string theory. In
contradistinction, Einstein’s approach requires no extra constructs, just the
intuitive notion of distant parallelism. In the Einstein vierbein field
formulation of the connection and curvature, the basis vectors in the tangent
space of a spacetime manifold are not derived from any coordinate system of
that manifold.
Although Einstein is considered one of the founding fathers of quantum
mechanics, he is not presently considered one of the founding fathers of
relativistic quantum field theory in flat space. This is understandable since
in his theoretical attempts to discover a unified field theory he did not
predict any of the new leptons or quarks, nor their weak or strong gauge
interactions, in the Standard Model of particle physics that emerged some two
decades following his passing. However, Einstein did employ local rotational
invariance as the gauge symmetry in the first 1928 manuscript and discovered
what today we call the spin connection, the gravitational gauge field
associated with the Lorentz group as the local symmetry group (viz. local
rotations and boosts).
This he accomplished about three decades before Yang and Mills discovered
nonabelian gauge theory Yang and Mills, (1954), the antecedent to the Glashow-
Salam-Weinberg electroweak unification theory Glashow, (1961); Salam, (1966);
Weinberg, (1967) that is the cornerstone of the Standard Model. Had Einstein’s
work toward unification been more widely circulated instead of rejected,
perhaps Einstein’s original discovery of $n$-component gauge field theory
would be broadly considered the forefather of Yang-Mills theory.333Einstein
treated the general case of an $n$-Bein field. In Section I.1, I sketch a few
of the striking similarities between the vierbein field representation of
gravity and the Yang-Mills nonabelian gauge theory.
With the hindsight of 80 years of theoretical physics development from the
time of the first appearance of Einstein’s 1928 manuscripts, today one can see
the historical rejection was a mistake. Einstein could rightly be considered
one of the founding fathers of relativistic quantum field theory in curved
space, and these 1928 manuscripts should not be forgotten. Previous attempts
have been made to revive interest in the vierbein theory of gravitation purely
on the grounds of their superior use for representing GR, regardless of
unification Kaempffer, (1968). Yet, this does not go far enough. One should
also make the case for the requisite application to quantum fields in curved
space.
Einstein is famous for (inadvertently) establishing another field of
contemporary physics with the discovery of distant quantum entanglement.
Nascent quantum information theory was borne from the seminal 1935 Einstein,
Podolsky, and Rosen (EPR) Physical Review paper444This is the most cited
physics paper ever and thus a singular exception to the general lack of
interest in Einstein’s late research., “Can Quantum-Mechanical Description of
Physical Reality Be Considered Complete?” Can Einstein equally be credited for
establishing the field of quantum gravity, posthumously?
The concepts of vierbein field theory are simple, and the mathematical
development is straightforward, but in the literature the vierbein theory can
be a notational nightmare, making this pathway to GR appear more
difficult than it really is, and hence less accessible. In this manuscript, I
hope to offer an accessible pathway to GR and the Dirac equation in curved
space. The development of the vierbein field theory presented here borrows
first from treatments given by Einstein himself that I have discussed above
Einstein, 1928b ; Einstein, 1928a , as well as excellent treatments by
Weinberg Weinberg, (1972) and Carroll Carroll, (2004).555Both Weinberg and
Carroll treat GR in a traditional manner. Their respective explanations of
Einstein’s vierbein field theory are basically incidental. Carroll relegates
his treatment to a single appendix. 666Another introduction to GR but which
does not deal with the vierbein theory extensively is the treatment by
D’Inverno D’Inverno, (1995). However, Weinberg and Carroll review vierbein
theory as a sideline to their main approach to GR, which is the standard
coordinate-based approach of differential geometry. An excellent treatment of
quantum field theory in curved space is given by Birrell and Davies Birrell
and Davies, (1982), but again with a very brief description of the vierbein
representation of GR. Therefore, it is hoped that a self-contained review of
Einstein's vierbein theory and the associated formulation of the relativistic
wave equation, gathered together in one place, will be helpful.
### I.1 Similarity to Yang-Mills gauge theory
This section is meant to be an outline comparing the structure of GR and Yang-
Mills (YM) theories Yang and Mills, (1954). There are many previous treatments
of this comparison—a recent treatment by Jackiw is recommended Jackiw, (2005).
The actual formal review of the vierbein theory does not begin until Section
II.
The dynamics of the metric tensor field in GR can be cast in the form of a YM
gauge theory used to describe the dynamics of the quantum field in the
Standard Model. In GR, dynamics is invariant under an external local
transformation, say $\Lambda$, of the Lorentz group SO(3,1) that includes
rotations and boosts. Furthermore, any quantum dynamics occurring within the
spacetime manifold is invariant under internal local Lorentz transformations,
say $U_{\Lambda}$, of the spinor representation of the SU(4) group.
Explicitly, the internal Lorentz transformation of a quantum spinor field in
unitary form is
$U_{\Lambda}=e^{-\frac{i}{2}\omega_{\mu\nu}(x)S^{\mu\nu}},$ (1)
where $S^{\mu\nu}$ is the tensor generator of the transformation.777A $4\times
4$ fundamental representation of SU(4) are the $4^{2}-1=15$ Dirac matrices,
which includes four vectors $\gamma^{\mu}$, six tensors
$\sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]$, one pseudo scalar
$i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\equiv\gamma^{5}$, and four axial
vectors $\gamma^{5}\gamma^{\mu}$. The generator associated with the internal
Lorentz transformation is $S^{\mu\nu}=\frac{1}{2}\sigma^{\mu\nu}$. Similarly,
in the Standard Model, the dynamics of the Dirac particles (leptons and
quarks) is invariant under local transformations of the internal gauge group,
SU(3) for color dynamics and SU(2) for electroweak dynamics.888A $3\times 3$
fundamental representation of SU(3) are the $3^{2}-1=8$ Gell-Mann matrices.
And, a $2\times 2$ fundamental representation of SU(2) are the $2^{2}-1=3$
Pauli matrices. The internal unitary transformation of the multiple component
YM field in unitary form is
$U=e^{i\Theta^{a}(x)T^{a}},$ (2)
where the hermitian generators $T^{a}=T^{a\dagger}$ are in the adjoint
representation of the gauge group. The common unitarity of (1) and (2)
naturally leads to many parallels between the GR and YM theories.
From (1), it seems that $\omega_{\mu\nu}(x)$ should take the place of the
gauge potential field, and in this case it is clearly a second rank field
quantity. But in Einstein’s action in his vierbein field representation of GR,
the lowest order fluctuation of a second-rank vierbein field itself
${e^{\mu}}_{a}(x)=\delta^{\mu}_{a}+{k^{\mu}}_{a}(x)+\cdots$
plays the role of the potential field, where the gravitational field strength
${F^{\mu\nu}}_{a}$ is related to a quantity of the form
${F^{\mu\nu}}_{a}=\partial^{\mu}{k^{\nu}}_{a}(x)-\partial^{\nu}{k^{\mu}}_{a}(x).$
(3)
(3) vanishes for apparent or pseudo-gravitational fields that occur in
rotating or accelerating local inertial frames but does not vanish for
gravitational fields associated with curved space. Einstein’s expression for
the Lagrangian density (that he presented as a footnote in his second
manuscript) gives rise to a field strength of the form of (3). An expanded
version of his proof of the equivalence of his vierbein-based action principle
with gravity in the weak field limit is presented in Section VII.
In any case, a correction to the usual derivative
$\partial_{\mu}\rightarrow{\cal D}_{\mu}\equiv\partial_{\mu}+\Gamma_{\mu}$
is necessary in the presence of a gravitational field. The local
transformation has the form
$\displaystyle\psi$ $\displaystyle\rightarrow$ $\displaystyle U_{\Lambda}\psi$
(4a) $\displaystyle\Gamma_{\mu}$ $\displaystyle\rightarrow$ $\displaystyle
U_{\Lambda}\Gamma_{\mu}U^{\dagger}_{\Lambda}-\left(\partial_{\mu}U_{\Lambda}\right)U^{\dagger}_{\Lambda}.$
(4b)
(4) is derived in Section VIII.2. This is similar to Yang-Mills gauge theory
where a correction to the usual derivative
$\partial_{\mu}\rightarrow{\cal D}_{\mu}\equiv\partial_{\mu}-iA_{\mu}$
is also necessary in the presence of a non-vanishing gauge field. In YM, the
gauge transformation has the form
$\displaystyle\psi$ $\displaystyle\rightarrow$ $\displaystyle U\,\psi$ (5a)
$\displaystyle A_{\mu}$ $\displaystyle\rightarrow$ $\displaystyle
UA_{\mu}U^{\dagger}-\,U\partial_{\mu}U^{\dagger},$ (5b)
which is just like (4). Hence, this “gauge field theory” approach to GR is
useful in deriving the form of the Dirac equation in curved space. In this
context, the requirement of invariance of the relativistic quantum wave
equation to local Lorentz transformations leads to a correction of the form
$\displaystyle\Gamma_{\mu}$ $\displaystyle=$
$\displaystyle\frac{1}{2}{e^{\beta}}_{k}\left(\partial_{\mu}e_{\beta
h}\right)S^{hk}$ (6a) $\displaystyle=$
$\displaystyle\partial_{\mu}\left(\frac{1}{2}k_{\beta
h}S^{h\beta}\right)+\cdots$ (6b)
This is derived in Section VIII. A problem that is commonly cited regarding
the gauge theory representation of General Relativity is that the correction
is not directly proportional to the gauge potential as would be the case if it
were strictly a YM theory (e.g. we should be able to write
$\Gamma_{\mu}(x)=u^{a}{k_{\mu a}}(x)$ where $u^{a}$ is some constant four-
vector).
## II Mathematical framework
### II.1 Local basis
The conventional coordinate-based approach to GR uses a “natural” differential
basis for the tangent space $T_{p}$ at a point $p$ given by the partial
derivatives of the coordinates at $p$
$\hat{\bf e}_{(\mu)}=\partial_{(\mu)}.$ (7)
Some 4-vector $A\in T_{p}$ has components
$A=A^{\mu}\hat{\bf e}_{(\mu)}=(A_{0},A_{1},A_{2},A_{3}).$ (8)
To help reinforce the construction of the frame, we use a triply redundant
notation: a boldface symbol denotes a basis vector $\bf e$, a caret
$\,\,\hat{\bf e}\,\,$ denotes a unit basis vector, and the component subscript
is enclosed in parentheses, $\hat{\bf e}_{(\mu)}$, to denote a component of a
basis vector.999With $\hat{\bf
e}_{(\bullet)}\in\\{e_{0},e_{1},e_{2},e_{3}\\}$ and
$\partial_{(\bullet)}\in\\{\partial_{0},\partial_{1},\partial_{2},\partial_{3}\\}$,
some authors write (7) concisely as $e_{\mu}=\partial_{\mu}.$ I will not use
this notation because I would like to reserve $e_{\mu}$ to represent the
lattice vectors $e_{\mu}\equiv\gamma_{a}{e_{\mu}}^{a}$ where $\gamma_{a}$ are
Dirac matrices and ${e_{\mu}}^{a}$ is the vierbein field defined below. It
should be nearly impossible to confuse a component of an orthonormal basis
vector in $T_{p}$ with a component of any other type of object. Also, the use
of a Greek index, such as $\mu$, denotes a component in a coordinate system
representation. Furthermore, the choice of writing the component index as a
superscript as in $A^{\mu}$ is the usual convention for indicating this is a
component of a contravariant vector. A contravariant vector is often just
called a vector, for simplicity of terminology.
In the natural differential basis, the cotangent space, here denoted by
$T^{\ast}_{p}$, is spanned by the differential elements
$\hat{\bf e}^{(\mu)}={\bf dx}^{(\mu)},$ (9)
which lie in the direction of the gradient of the coordinate functions.101010
Again, with $\hat{\bf e}^{(\bullet)}\in\\{e^{0},e^{1},e^{2},e^{3}\\}$ and
${\bf dx}^{(\bullet)}\in\\{dx^{0},dx^{1},dx^{2},dx^{3}\\}$, for brevity some
authors write (9) as $e^{\mu}=dx^{\mu}.$ But I reserve $e^{\mu}$ to represent
anti-commuting 4-vectors, $e^{\mu}\equiv\gamma^{a}{e^{\mu}}_{a}$, unless
otherwise noted. $T^{\ast}_{p}$ is also called the dual space of $T_{p}$.
Some dual 4-vector $A\in T_{p}^{\ast}$ has components
$A=A_{\mu}\hat{\bf e}^{(\mu)}=g_{\mu\nu}A^{\nu}{\bf e}^{(\mu)}.$ (10)
Writing the component index $\mu$ as a subscript in $A_{\mu}$ again follows
the usual convention for indicating one is dealing with a component of a
covariant vector. Again, in an attempt to simplify terminology, a covariant
vector is often called a 1-form, or simply a dual vector. Yet, remembering
that a vector and dual vector (1-form) refer to an element of the tangent
space $T_{p}$ and the cotangent space $T^{\ast}_{p}$, respectively, may not
seem all that much easier than remembering the prefixes contravariant and
covariant in the first place.
The dimension of (7) is inverse length, $[\hat{\bf e}_{(\mu)}]=\frac{1}{L}$,
and this is easy to remember because a first derivative of a function is
always tangent to that function. For a basis element, $\mu$ is a subscript
when $L$ is in the denominator. Then (9), which lives in the cotangent space
and is the dimensional inverse of (7), must have dimensions of length
$[\hat{\bf e}^{(\mu)}]=L$. So, for a dual basis element, $\mu$ is a
superscript when $L$ is in the numerator. That they are dimensional inverses
is expressed in the following tensor product space
$\hat{\bf e}^{(\mu)}\otimes\hat{\bf e}_{(\nu)}={\mathbf{1}^{\mu}}_{\nu},$ (11)
where $\mathbf{1}$ is the identity, which is of course dimensionless.
We are free to choose any orthonormal basis we like to span $T_{p}$, so long
as it has the appropriate signature of the manifold on which we are working.
To that end, we introduce a set of basis vectors $\hat{\bf e}_{(a)}$, which we
choose as non-coordinate unit vectors, and we denote this choice by using
small Latin letters for indices of the non-coordinate frame. With this
understanding, the inner product may be expressed as
$\left(\hat{\bf e}_{(a)},\hat{\bf e}_{(b)}\right)=\eta_{ab},$ (12)
where $\eta_{ab}=\text{diag}(1,-1,-1,-1)$ is the Minkowski metric of flat
spacetime.
### II.2 Vierbein field
This orthonormal basis that is independent of the coordinates is termed a
tetrad basis.111111To help avoid confusion, please note that the term tetrad
in the literature is often used as a synonym for the term vierbein. Here we
use the terms to mean two distinct objects: $\hat{\bf e}_{(a)}$ and
$e^{\mu}(x)$, respectively. Although we cannot find a coordinate chart that
covers the entire curved manifold, we can choose a fixed orthonormal basis
that is independent of position. Then, from a local perspective, any vector
can be expressed as a linear combination of the fixed tetrad basis vectors at
that point. Denoting an element of the tetrad basis by $\hat{\bf e}_{(a)}$, we
can express the coordinate basis (whose value depends on the local curvature
at a point $x$ in the manifold) in terms of the tetrads as the following
linear combination
$\hat{\bf e}_{(\mu)}(x)={e_{\mu}}^{a}(x)\,\hat{\bf e}_{(a)},$ (13)
where the functional components ${e_{\mu}}^{a}(x)$ form a $4\times 4$
invertible matrix. We will try not to blur the distinction between a vector
and its components. The term “vierbein field” is used to refer to the whole
transformation matrix in (13) with 16 components, denoted by
${e_{\mu}}^{a}(x)$. The vierbeins ${e_{\mu}}^{a}(x)$, for $a=1,2,3,4$,
comprise four legs–vierbein in German means four-legs.
The inverse of the vierbein has components, ${e^{\mu}}_{a}$ (switched
indices), that satisfy the orthonormality conditions
${e^{\mu}}_{a}(x){e_{\nu}}^{a}(x)=\delta^{\mu}_{\nu},\quad{e_{\mu}}^{a}(x){e^{\mu}}_{b}(x)=\delta^{a}_{b}.$
(14)
The inverse vierbein serves as a transformation matrix that allows one to
represent the tetrad basis $\hat{\bf e}_{(a)}(x)$ in terms of the coordinate
basis $\hat{\bf e}_{(\mu)}$:
$\hat{\bf e}_{(a)}={e^{\mu}}_{a}(x)\,\hat{\bf e}_{(\mu)}.$ (15)
Employing the metric tensor $g_{\mu\nu}$ to induce the product of the vierbein
field and inverse vierbein field, the inner product-signature constraint is
$g_{\mu\nu}(x){e^{\mu}}_{a}(x)\,{e^{\nu}}_{b}(x)=\eta_{ab},$ (16)
or using (14) equivalently we have
$g_{\mu\nu}(x)={e_{\mu}}^{a}(x){e_{\nu}}^{b}(x)\eta_{ab}.$ (17)
So, the vierbein field is the “square root” of the metric.
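As a simple illustration (an example added here, not part of the original discussion), take the spatially flat Robertson-Walker line element $ds^{2}=dt^{2}-a^{2}(t)\left(dx^{2}+dy^{2}+dz^{2}\right)$. A convenient vierbein is the diagonal choice
${e_{\mu}}^{a}=\mathrm{diag}\big(1,a(t),a(t),a(t)\big),\qquad{e^{\mu}}_{a}=\mathrm{diag}\big(1,a^{-1}(t),a^{-1}(t),a^{-1}(t)\big),$
for which (14) holds by construction and (17) gives ${e_{\mu}}^{a}{e_{\nu}}^{b}\eta_{ab}=\mathrm{diag}(1,-a^{2},-a^{2},-a^{2})=g_{\mu\nu}$, as required. The choice is not unique: acting on the Latin index with a local Lorentz transformation of the form (26) below produces an equally valid square root of the same metric.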
Hopefully, you can already see why one should include the vierbein field
theory as a member of our tribe of “square root” theories. These include the
Pythagorean theorem for the distance interval
$ds=\sqrt{\eta_{\mu\nu}dx^{\mu}dx^{\nu}}=\sqrt{dt^{2}-dx^{2}-dy^{2}-dz^{2}}$,
the mathematicians’ beloved complex analysis (based on $\sqrt{-1}$ as the
imaginary number), quantum mechanics (e.g. pathways are assigned amplitudes
which are the square root of probabilities), quantum field theory (e.g. Dirac
equation as the square root of the Klein Gordon equation), and quantum
computation based on the universal $\sqrt{\text{\sc swap}}$ conservative
quantum logic gate. To this august list we add the vierbein as the square root
of the metric tensor.
Now, we may form a dual orthonormal basis, which we denote by $\hat{\bf
e}^{(a)}$ with a Latin superscript, of 1-forms in the cotangent space
$T^{\ast}_{p}$ that satisfies the tensor product condition
$\hat{\bf e}^{(a)}\otimes\hat{\bf e}_{(b)}={\mathbf{1}^{a}}_{b}.$ (18)
This non-coordinate basis 1-form can be expressed as a linear combination of
coordinate basis 1-forms
$\hat{\bf e}^{(a)}={e_{\mu}}^{a}(x)\,\hat{\bf e}^{(\mu)}(x),$ (19)
where $\hat{\bf e}^{(\mu)}=dx^{\mu}$, and vice versa using the inverse
vierbein field
$\hat{\bf e}^{(\mu)}(x)={e^{\mu}}_{a}(x)\,\hat{\bf e}^{(a)}.$ (20)
Any vector at a spacetime point has components in the coordinate and non-
coordinate orthonormal basis
${\bf V}=V^{\mu}\,\hat{\bf e}_{(\mu)}=V^{a}\,\hat{\bf e}_{(a)}.$ (21)
So, its components are related by the vierbein field transformation
$V^{a}={e_{\mu}}^{a}V^{\mu}\qquad\text{and}\qquad V^{\mu}={e^{\mu}}_{a}V^{a}.$
(22)
The vierbeins allow us to switch back and forth between Latin and Greek bases.
Multi-index tensors can be cast in terms of mixed-index components, as for
example
${V^{a}}_{b}={e_{\mu}}^{a}{V^{\mu}}_{b}={e^{\nu}}_{b}{V^{a}}_{\nu}={e_{\mu}}^{a}{e^{\nu}}_{b}{V^{\mu}}_{\nu}.$
(23)
The behavior of inverse vierbeins is consistent with the conventional notion
of raising and lowering indices. Here is an example with the metric tensor
field and the Minkowski metric tensor
${e^{\mu}}_{a}=g^{\mu\nu}\eta_{ab}\,{e_{\nu}}^{b}.$ (24)
The identity map has the form
${\bf e}={e_{\nu}}^{a}{\bf dx}^{(\nu)}\otimes\hat{\bf e}_{(a)}.$ (25)
We can interpret ${e_{\nu}}^{a}$ as a set of four Lorentz 4-vectors. That is,
there exists one 4-vector for each non-coordinate index $a$.
We can make local Lorentz transformations (LLT) at any point. The signature of
the Minkowski metric is preserved by a Lorentz transformation
$\text{LLT:}\quad\hat{\bf e}_{(a)}\rightarrow\hat{\bf
e}_{(a^{\prime})}={\Lambda^{a}}_{a^{\prime}}(x)\hat{\bf e}_{(a)},$ (26)
where ${\Lambda^{a}}_{a^{\prime}}(x)$ is an inhomogeneous (i.e. position
dependent) transformation that satisfies
${\Lambda^{a}}_{a^{\prime}}{\Lambda^{b}}_{b^{\prime}}\eta_{ab}=\eta_{a^{\prime}b^{\prime}}.$
(27)
A Lorentz transformation can also operate on basis 1-forms, in
contradistinction to the ordinary Lorentz transformation
${\Lambda^{a^{\prime}}}_{a}$ that operates on basis vectors.
${\Lambda^{a^{\prime}}}_{a}$ transforms upper (contravariant) indices, while
${\Lambda^{a}}_{a^{\prime}}$ transforms lower (covariant) indices.
And, we can make general coordinate transformations (GCT)
$\text{GCT:}\quad{T^{a\mu}}_{b\nu}\rightarrow{T^{a^{\prime}\mu^{\prime}}}_{b^{\prime}\nu^{\prime}}=\underbrace{{\Lambda^{a^{\prime}}}_{a}}_{\begin{matrix}\text{\tiny
prime}\\\ \text{\tiny 1st}\\\ \text{\tiny(contra-}\\\ \text{\tiny
variant)}\end{matrix}}\frac{\partial x^{\mu^{\prime}}}{\partial
x^{\mu}}\underbrace{{\Lambda^{b}}_{b^{\prime}}}_{\begin{matrix}\text{\tiny
prime}\\\ \text{\tiny 2nd}\\\ \text{\tiny(co-}\\\ \text{\tiny
variant)}\end{matrix}}\frac{\partial x^{\nu}}{\partial
x^{\nu^{\prime}}}{T^{a\mu}}_{b\nu}.$ (28)
## III Connections
### III.1 Affine connection
Curvature of a Riemann manifold will cause a distortion in a vector field, say
a coordinate field $X^{\alpha}(x)$, and this is depicted in Figure 1.
Figure 1:
Two spacetime points $x^{\alpha}$ and $x^{\alpha}+\delta x^{\alpha}$, labeled
as $i$ and $j$, respectively. The 4-vector at point $i$ is $X^{\alpha}(x)$,
and the 4-vector at nearby point $j$ is $X^{\alpha}(x+\delta
x)=X^{\alpha}(x)+\delta X^{\alpha}(x)$. The parallel transported 4-vector at
$j$ is $X^{\alpha}(x)+{\color[rgb]{0,0,1}\bar{\delta}X^{\alpha}}(x)$ (blue).
The affine connection is $\Gamma^{\alpha}_{\beta\gamma}$. (For simplicity the
parallel transport is rendered as if the space is flat).
The change in the coordinate field from one point $x$ to an adjacent point
$x+\delta x$ is
$X^{\alpha}(x+\delta x)=X^{\alpha}(x)+\underbrace{\delta
x^{\beta}\partial_{\beta}X^{\alpha}}_{\delta X^{\alpha}(x)}.$ (29)
So, the change of the coordinate field due to the manifold is defined as
$\delta X^{\alpha}(x)\equiv\delta
x^{\beta}(x)\partial_{\beta}X^{\alpha}=X^{\alpha}(x+\delta x)-X^{\alpha}(x).$
(30)
The difference of the two coordinate vectors at point $j$ is
$[X^{\alpha}+\delta
X^{\alpha}]-[X^{\alpha}+{\color[rgb]{0,0,1}\bar{\delta}X^{\alpha}}]=\delta
X^{\alpha}(x)-{\color[rgb]{0,0,1}\bar{\delta}X^{\alpha}}(x).$ (31)
${\color[rgb]{0,0,1}\bar{\delta}X^{\alpha}}$ must vanish if either $\delta
x^{\alpha}$ vanishes or $X^{\alpha}$ vanishes. Therefore, we choose
${\color[rgb]{0,0,1}\bar{\delta}X^{\alpha}}=-\Gamma^{\alpha}_{\beta\gamma}(x)X^{\beta}(x)\delta
x^{\gamma},$ (32)
where $\Gamma^{\alpha}_{\beta\gamma}$ is a multiplicative factor, called the
affine connection. Its properties are yet to be determined. At this stage, we
understand it as a way to account for the curvature of the manifold.
The covariant derivative may be constructed as follows:
$\nabla_{\gamma}X^{\alpha}(x)\equiv\frac{1}{\delta
x^{\gamma}}\\{X^{\alpha}(x+\delta
x)-[X^{\alpha}(x)+{\color[rgb]{0,0,1}\bar{\delta}X^{\alpha}}(x)]\\}.$ (33)
I do not use a limit to define the derivative. Instead, I
would like to just consider the situation where $\delta x^{\gamma}$ is a small
finite quantity. We will see below that this quantity drops out, justifying
the form of (33). Inserting (29) and (32) into (33), yields
$\displaystyle\nabla_{\gamma}X^{\alpha}(x)$ $\displaystyle=$
$\displaystyle\frac{1}{\delta x^{\gamma}}\\{X^{\alpha}(x)+{\delta
x^{\gamma}\partial_{\gamma}X^{\alpha}}$
$\displaystyle-X^{\alpha}(x)+\Gamma^{\alpha}_{\beta\gamma}(x)X^{\beta}(x)\delta
x^{\gamma}\\}$ $\displaystyle=$
$\displaystyle\partial_{\gamma}X^{\alpha}(x)+\Gamma^{\alpha}_{\beta\gamma}(x)X^{\beta}(x).$
(34b)
So we see that $\delta x^{\gamma}$ cancels out and no limiting process to an
infinitesimal size was really needed. Dropping the explicit dependence on $x$,
as this is to be understood, we have the simple expression for the covariant
derivative
$\nabla_{\gamma}X^{\alpha}=\partial_{\gamma}X^{\alpha}+\Gamma^{\alpha}_{\beta\gamma}X^{\beta}.$
(35)
In coordinate-based differential geometry, the covariant derivative of a
tensor is given by its partial derivative plus correction terms, one for each
index, involving an affine connection contracted with the tensor.
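For example (an illustration added here of the rule just stated), a mixed tensor ${X^{\alpha}}_{\beta}$ acquires one connection term per index, with a plus sign for the upper index and a minus sign for the lower one:
$\nabla_{\gamma}{X^{\alpha}}_{\beta}=\partial_{\gamma}{X^{\alpha}}_{\beta}+\Gamma^{\alpha}_{\gamma\lambda}{X^{\lambda}}_{\beta}-\Gamma^{\lambda}_{\gamma\beta}{X^{\alpha}}_{\lambda}.$
The non-coordinate-basis version of this rule, with the spin connection in place of the affine connection, appears in (36) below.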
### III.2 Spin connection
In non-coordinate-based differential geometry, the ordinary affine connection
coefficients $\Gamma^{\lambda}_{\mu\nu}$ are replaced by spin connection
coefficients, denoted ${{\omega_{\mu}}^{a}}_{b}$, but otherwise the principle
is the same. Each Latin index gets a correction factor that is the spin
connection contracted with the tensor, for example
$\nabla_{\mu}{X^{a}}_{b}=\partial_{\mu}{X^{a}}_{b}+{{\omega_{\mu}}^{a}}_{c}{X^{c}}_{b}-{{\omega_{\mu}}^{c}}_{b}{X^{a}}_{c}.$
(36)
The correction is positive for an upper index and negative for a lower index.
The spin connection is used to take covariant derivatives of spinors, whence
its name.
The covariant derivative of a vector $X$ in the coordinate basis is
$\displaystyle\nabla X$ $\displaystyle=$
$\displaystyle\left(\nabla_{\mu}X^{\nu}\right)dx^{\mu}\otimes\partial_{\nu}$
(37a) $\displaystyle=$
$\displaystyle\left(\partial_{\mu}X^{\nu}+\Gamma^{\nu}_{\mu\lambda}X^{\lambda}\right)dx^{\mu}\otimes\partial_{\nu}.$
(37b)
The same object in a mixed basis, converted to the coordinate basis, is
$\displaystyle\\!\\!\\!\nabla X\\!\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\\!\left(\nabla_{\mu}X^{a}\right)dx^{\mu}\otimes\hat{\bf
e}_{(a)}$ (38a) $\displaystyle=$
$\displaystyle\left(\partial_{\mu}X^{a}+{{\omega_{\mu}}^{a}}_{b}X^{b}\right)dx^{\mu}\otimes\hat{\bf
e}_{(a)}$ (38b) $\displaystyle=$
$\displaystyle\left(\partial_{\mu}\left({e_{\nu}}^{a}X^{\nu}\right)+{{\omega_{\mu}}^{a}}_{b}{e_{\lambda}}^{b}X^{\lambda}\right)dx^{\mu}\otimes\left({e^{\sigma}}_{a}\partial_{\sigma}\right)$
$\displaystyle=$
$\displaystyle{e^{\sigma}}_{a}\left({e_{\nu}}^{a}\partial_{\mu}X^{\nu}+X^{\nu}\partial_{\mu}{e_{\nu}}^{a}+{{\omega_{\mu}}^{a}}_{b}{e_{\lambda}}^{b}X^{\lambda}\right)dx^{\mu}\otimes\partial_{\sigma}$
$\displaystyle=$
$\displaystyle\left(\partial_{\mu}X^{\sigma}+{e^{\sigma}}_{a}\partial_{\mu}{e_{\nu}}^{a}X^{\nu}+{e^{\sigma}}_{a}{e_{\lambda}}^{b}{{\omega_{\mu}}^{a}}_{b}X^{\lambda}\right)dx^{\mu}\otimes\partial_{\sigma}.$
Now, relabeling indices $\sigma\rightarrow\nu\rightarrow\lambda$ gives
$\displaystyle\nabla X\\!\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\\!\left(\partial_{\mu}X^{\nu}+{e^{\nu}}_{a}\partial_{\mu}{e_{\lambda}}^{a}X^{\lambda}+{e^{\nu}}_{a}{e_{\lambda}}^{b}{{\omega_{\mu}}^{a}}_{b}X^{\lambda}\right)dx^{\mu}\otimes\partial_{\nu}$
$\displaystyle=$
$\displaystyle\left[\partial_{\mu}X^{\nu}+\left({e^{\nu}}_{a}\partial_{\mu}{e_{\lambda}}^{a}+{e^{\nu}}_{a}{e_{\lambda}}^{b}{{\omega_{\mu}}^{a}}_{b}\right)X^{\lambda}\right]dx^{\mu}\otimes\partial_{\nu}.$
Therefore, comparing (37b) with the expression just obtained, the affine connection in terms of the
spin connection is
$\Gamma^{\nu}_{\mu\lambda}={e^{\nu}}_{a}\partial_{\mu}{e_{\lambda}}^{a}+{e^{\nu}}_{a}{e_{\lambda}}^{b}{{\omega_{\mu}}^{a}}_{b}.$
(40)
This can be solved for the spin connection
${{\omega_{\mu}}^{a}}_{b}={e_{\nu}}^{a}{e^{\lambda}}_{b}\Gamma^{\nu}_{\mu\lambda}-{e^{\lambda}}_{b}\partial_{\mu}{e_{\lambda}}^{a}.$
(41)
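A concrete low-dimensional example (added here for illustration) is the flat Euclidean plane in polar coordinates, $ds^{2}=dr^{2}+r^{2}d\theta^{2}$, with the diagonal zweibein ${e_{r}}^{1}=1$, ${e_{\theta}}^{2}=r$. The nonvanishing affine connection coefficients are $\Gamma^{r}_{\theta\theta}=-r$ and $\Gamma^{\theta}_{r\theta}=\Gamma^{\theta}_{\theta r}=1/r$, and (41) gives
${{\omega_{\theta}}^{1}}_{2}={e_{r}}^{1}{e^{\theta}}_{2}\Gamma^{r}_{\theta\theta}-{e^{\theta}}_{2}\partial_{\theta}{e_{\theta}}^{1}=(1)\left(\tfrac{1}{r}\right)(-r)-0=-1,$
with ${{\omega_{\theta}}^{2}}_{1}=+1$ and all components of $\omega_{r}$ vanishing. Even though this space is flat, the spin connection is nonzero because the zweibein legs rotate with $\theta$.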
### III.3 Tetrad postulate
The tetrad postulate is that the covariant derivative of the vierbein field
vanishes, $\nabla_{\mu}{e_{\nu}}^{a}=0$, and this is merely a restatement of
the relation we just found between the affine and spin connections (41). Left
multiplying by ${e_{\nu}}^{b}$ gives
$\displaystyle{{\omega_{\mu}}^{a}}_{b}{e_{\nu}}^{b}$ $\displaystyle=$
$\displaystyle{e_{\sigma}}^{a}{e^{\lambda}}_{b}{e_{\nu}}^{b}\Gamma^{\sigma}_{\mu\lambda}-{e^{\lambda}}_{b}{e_{\nu}}^{b}\partial_{\mu}{e_{\lambda}}^{a}$
(42a) $\displaystyle=$
$\displaystyle{e_{\sigma}}^{a}\Gamma^{\sigma}_{\mu\nu}-\partial_{\mu}{e_{\nu}}^{a}.$
(42b)
Rearranging terms, we have the tetrad postulate
$\nabla_{\mu}{e_{\nu}}^{a}\equiv\partial_{\mu}{e_{\nu}}^{a}-{e_{\sigma}}^{a}\Gamma^{\sigma}_{\mu\nu}+{{\omega_{\mu}}^{a}}_{b}{e_{\nu}}^{b}=0.$
(43)
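As a consistency check (an added remark), contracting (43) with ${e^{\lambda}}_{a}$ and using the orthonormality conditions (14) gives ${e^{\lambda}}_{a}\partial_{\mu}{e_{\nu}}^{a}+{e^{\lambda}}_{a}{e_{\nu}}^{b}{{\omega_{\mu}}^{a}}_{b}=\Gamma^{\lambda}_{\mu\nu}$, which is just (40) with relabeled indices; the tetrad postulate therefore encodes exactly the relation between the affine and spin connections and nothing more.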
Let us restate (as a reminder) the correction rules for applying connections.
The covariant derivatives of a coordinate vector and 1-form are
$\displaystyle\nabla_{\mu}X^{\nu}$ $\displaystyle=$
$\displaystyle\partial_{\mu}X^{\nu}+\Gamma^{\nu}_{\mu\lambda}X^{\lambda}$
(44a) $\displaystyle\nabla_{\mu}X_{\nu}$ $\displaystyle=$
$\displaystyle\partial_{\mu}X_{\nu}-\Gamma^{\lambda}_{\mu\nu}X_{\lambda},$
(44b)
and similarly the covariant derivatives of a non-coordinate vector and 1-form
are
$\displaystyle\nabla_{\mu}X^{a}$ $\displaystyle=$
$\displaystyle\partial_{\mu}X^{a}+{{\omega_{\mu}}^{a}}_{b}X^{b}$ (45a)
$\displaystyle\nabla_{\mu}X_{a}$ $\displaystyle=$
$\displaystyle\partial_{\mu}X_{a}-{{\omega_{\mu}}^{b}}_{a}X_{b}.$ (45b)
We require a covariant derivative such as (45a) to be Lorentz invariant
$\displaystyle{\Lambda^{a^{\prime}}}_{a}:\nabla_{\mu}X^{a}$
$\displaystyle\rightarrow$
$\displaystyle\nabla_{\mu}\left({\Lambda^{a^{\prime}}}_{a}X^{a}\right)$ (46a)
$\displaystyle=$
$\displaystyle\left(\nabla_{\mu}{\Lambda^{a^{\prime}}}_{a}\right)X^{a}+{\Lambda^{a^{\prime}}}_{a}\nabla_{\mu}X^{a}.$
(46b)
Therefore, the covariant derivative is Lorentz invariant,
$\displaystyle\nabla_{\mu}X^{a}$ $\displaystyle=$
$\displaystyle{\Lambda^{a^{\prime}}}_{a}\nabla_{\mu}X^{a},$ (47)
so long as the covariant derivative of the Lorentz transformation vanishes,
$\nabla_{\mu}{\Lambda^{a^{\prime}}}_{a}=0.$ (48)
This imposes a constraint that allows us to see how the spin connection
behaves under a Lorentz transformation
$\nabla_{\mu}{\Lambda^{a^{\prime}}}_{b}=\partial_{\mu}{\Lambda^{a^{\prime}}}_{b}+{{\omega_{\mu}}^{a^{\prime}}}_{c}{\Lambda^{c}}_{b}-{{\omega_{\mu}}^{c}}_{b}{\Lambda^{a^{\prime}}}_{c}=0,$
(49)
which we write as follows
${\Lambda^{b}}_{b^{\prime}}\partial_{\mu}{\Lambda^{a^{\prime}}}_{b}+{{\omega_{\mu}}^{a^{\prime}}}_{c}{\Lambda^{b}}_{b^{\prime}}{\Lambda^{c}}_{b}-{{\omega_{\mu}}^{c}}_{b}{\Lambda^{b}}_{b^{\prime}}{\Lambda^{a^{\prime}}}_{c}=0.$
(50)
Now ${\Lambda^{b}}_{b^{\prime}}{\Lambda^{c}}_{b}=\delta^{c}_{b^{\prime}}$, so
we arrive at the transformation of the spin connection induced by a Lorentz
transformation
${{\omega_{\mu}}^{a^{\prime}}}_{b^{\prime}}={{\omega_{\mu}}^{c}}_{b}{\Lambda^{b}}_{b^{\prime}}{\Lambda^{a^{\prime}}}_{c}-{\Lambda^{b}}_{b^{\prime}}\partial_{\mu}{\Lambda^{a^{\prime}}}_{b}.$
(51)
This means that the spin connection transforms inhomogeneously so that
$\nabla_{\mu}X^{a}$ can transform like a Lorentz 4-vector.
The exterior derivative is defined as follows
$\displaystyle{(dX)_{\mu\nu}}^{a}$ $\displaystyle\equiv$
$\displaystyle\nabla_{\mu}{X_{\nu}}^{a}-\nabla_{\nu}{X_{\mu}}^{a}$
$\displaystyle=$
$\displaystyle\partial_{\mu}{X_{\nu}}^{a}+{{\omega_{\mu}}^{a}}_{b}{X_{\nu}}^{b}-\Gamma^{\lambda}_{\mu\nu}{X_{\lambda}}^{a}$
$\displaystyle-$
$\displaystyle\partial_{\nu}{X_{\mu}}^{a}-{{\omega_{\nu}}^{a}}_{b}{X_{\mu}}^{b}+\Gamma^{\lambda}_{\nu\mu}{X_{\lambda}}^{a}$
$\displaystyle=$
$\displaystyle\partial_{\mu}{X_{\nu}}^{a}-\partial_{\nu}{X_{\mu}}^{a}+{{\omega_{\mu}}^{a}}_{b}{X_{\nu}}^{b}-{{\omega_{\nu}}^{a}}_{b}{X_{\mu}}^{b}.$
Now, to make a remark about Cartan’s notation as written in (10), one often
writes the non-coordinate basis 1-form (19) as
$e^{a}\equiv\hat{\bf e}^{(a)}={e_{\mu}}^{a}dx^{\mu}.$ (53)
The spin connection 1-form is
${\omega^{a}}_{b}={{\omega_{\mu}}^{a}}_{b}dx^{\mu}.$ (54)
It is conventional to define a differential form
$dA\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ (55)
and a wedge product
$A\wedge B\equiv A_{\mu}B_{\nu}-A_{\nu}B_{\mu},$ (56)
which are both anti-symmetric in the Greek indices. With this convention,
originally due to Élie Cartan, the torsion can be written concisely in terms
of the frame and spin connection 1-forms as
$T^{a}=de^{a}+{\omega^{a}}_{b}\wedge e^{b}.$ (57)
The notation is so compact that it is easy to misunderstand what it
represents. For example, writing the torsion explicitly in the coordinate
basis we have
$\displaystyle{T_{\mu\nu}}^{\lambda}$ $\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}{T_{\mu\nu}}^{a}$ $\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}\left(\partial_{\mu}{e_{\nu}}^{a}-\partial_{\nu}{e_{\mu}}^{a}+{{\omega_{\mu}}^{a}}_{b}{e_{\nu}}^{b}-{{\omega_{\nu}}^{a}}_{b}{e_{\mu}}^{b}\right),$
which fully expanded gives us
${T_{\mu\nu}}^{\lambda}={e^{\lambda}}_{a}\partial_{\mu}{e_{\nu}}^{a}+{e^{\lambda}}_{a}{e_{\nu}}^{b}{{\omega_{\mu}}^{a}}_{b}-{e^{\lambda}}_{a}\partial_{\nu}{e_{\mu}}^{a}-{e^{\lambda}}_{a}{e_{\mu}}^{b}{{\omega_{\nu}}^{a}}_{b}.$
(59)
Since the affine connection is
$\Gamma^{\lambda}_{\mu\nu}={e^{\lambda}}_{a}\partial_{\mu}{e_{\nu}}^{a}+{e^{\lambda}}_{a}{e_{\nu}}^{b}{{\omega_{\mu}}^{a}}_{b},$
(60)
the torsion then reduces to the simple expression
${T_{\mu\nu}}^{\lambda}=\Gamma^{\lambda}_{\mu\nu}-\Gamma^{\lambda}_{\nu\mu}.$
(61)
So, the torsion vanishes when the affine connection is symmetric in its lower
two indices.
## IV Curvature
We now derive the Riemann curvature tensor, and we do so in two ways. The
first way gives us an expression for the curvature in terms of the affine
connection and the second way gives us an equivalent expression in terms of
the spin connection. The structure of both expressions are the same, so
effectively the affine and spin connections can be interchanged, as long as
one properly accounts for Latin and Greek indices.
### IV.1 Riemann curvature from the affine connection
In this section, we will derive the Riemann curvature tensor in terms of the
affine connection. The development in this section follows the conventional
approach of considering parallel transport around a plaquette, as shown in
Figure 2. (The term plaquette is borrowed from condensed matter theory and
refers to a cell of a lattice.) So, this is our first pass at understanding
the origin of the Riemann curvature tensor. In the following section, we will
then re-derive the curvature tensor directly from the spin connection.
Figure 2: General plaquette of cell sizes $\delta
x^{\alpha}$ and $dx^{\alpha}$ with its initial spacetime point at 4-vector
$x_{\alpha}$ (bottom left corner). The plaquette is a piece of a curved
manifold.
For the counterclockwise path ($x^{\alpha}\rightarrow x^{\alpha}+\delta
x^{\alpha}\rightarrow x^{\alpha}+\delta x^{\alpha}+dx^{\alpha}$), we have:
$\displaystyle X^{\alpha}(x+\delta x)$ $\displaystyle=$ $\displaystyle
X^{\alpha}(x)+{\color[rgb]{0,0,1}\bar{\delta}X^{\alpha}}(x)$ (62a)
$\displaystyle\stackrel{{\scriptstyle(32)}}{{=}}$
$\displaystyle
X^{\alpha}(x)-\Gamma^{\alpha}_{\beta\gamma}(x)X^{\beta}(x)\delta x^{\gamma}.$
(62b)
At the end point $x+\delta x+dx$, we have
$\displaystyle X^{\alpha}(x+\delta x+dx)$ $\displaystyle=$ $\displaystyle
X^{\alpha}(x+\delta x)+\bar{\delta}X^{\alpha}(x+\delta x)$
$\displaystyle\stackrel{{\scriptstyle(\ref{4_vector_connection})}}{{=}}$
$\displaystyle
X^{\alpha}(x)-\Gamma^{\alpha}_{\beta\gamma}(x)X^{\beta}(x)\delta x^{\gamma}$
$\displaystyle+\,\bar{\delta}X^{\alpha}(x+\delta x).$ (63)
Now, we need to evaluate the last term (parallel transport term) on the R.H.S.
$\displaystyle\bar{\delta}X^{\alpha}(x+\delta x)$
$\displaystyle\stackrel{{\scriptstyle(\ref{parallel_dX})}}{{=}}$
$\displaystyle-\Gamma^{\alpha}_{\beta\gamma}(x+\delta x)X^{\beta}(x+\delta
x)dx^{\gamma}$ (64a)
$\displaystyle\stackrel{{\scriptstyle(\ref{4_vector_connection})}}{{=}}$
$\displaystyle-\left[\Gamma^{\alpha}_{\beta\gamma}(x)+\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}(x)\delta
x^{\delta}\right]\left[X^{\beta}(x)-\Gamma^{\beta}_{\mu\nu}(x)X^{\mu}(x)\delta
x^{\nu}\right]dx^{\gamma}$ $\displaystyle=$
$\displaystyle-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}dx^{\gamma}-\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}X^{\beta}\delta
x^{\delta}dx^{\gamma}+\Gamma^{\alpha}_{\beta\gamma}\Gamma^{\beta}_{\mu\nu}X^{\mu}\delta
x^{\nu}dx^{\gamma}+\underbrace{\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}\Gamma^{\beta}_{\mu\nu}X^{\mu}\delta
x^{\delta}\delta x^{\nu}dx^{\gamma}}_{\text{neglect }3^{\text{rd}}\text{ order
term}},$
where for brevity in the last expression we drop the explicit functional
dependence on $x$, as this is understood. Inserting (64) into (63), the
4-vector at the end point is
$X^{\alpha}(x+\delta
x+dx)=X^{\alpha}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}\delta
x^{\gamma}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}dx^{\gamma}-\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}X^{\beta}\delta
x^{\delta}dx^{\gamma}+\Gamma^{\alpha}_{\beta\gamma}\Gamma^{\beta}_{\mu\nu}X^{\mu}\delta
x^{\nu}dx^{\gamma}.$ (65)
Interchanging the indices of $\delta x$ and $dx$ in the last two terms, we
have
$X^{\alpha}(x+\delta
x+dx)=X^{\alpha}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}\delta
x^{\gamma}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}dx^{\gamma}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}X^{\beta}\delta
x^{\gamma}dx^{\delta}+\Gamma^{\alpha}_{\beta\nu}\Gamma^{\beta}_{\mu\gamma}X^{\mu}\delta
x^{\gamma}dx^{\nu}.$ (66)
Then, replacing $\nu$ with $\delta$ in this last term, we have
$\displaystyle X^{\alpha}(x+\delta
x+dx)=X^{\alpha}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}\delta
x^{\gamma}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}dx^{\gamma}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}X^{\beta}\delta
x^{\gamma}dx^{\delta}+\Gamma^{\alpha}_{\beta\delta}\Gamma^{\beta}_{\mu\gamma}X^{\mu}\delta
x^{\gamma}dx^{\delta}.$ (67)
For the clockwise path ($x^{\alpha}\rightarrow
x^{\alpha}+dx^{\alpha}\rightarrow x^{\alpha}+dx^{\alpha}+\delta x^{\alpha}$),
we get the same result as before with $dx^{\alpha}$ and $\delta x^{\alpha}$
interchanged:
$\displaystyle X^{\alpha}(x+\delta
x+dx)=X^{\alpha}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}dx^{\gamma}-\Gamma^{\alpha}_{\beta\gamma}X^{\beta}\delta
x^{\gamma}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}X^{\beta}dx^{\gamma}\delta
x^{\delta}+\Gamma^{\alpha}_{\beta\delta}\Gamma^{\beta}_{\mu\gamma}X^{\mu}dx^{\gamma}\delta
x^{\delta}.$ (68)
Interchanging the indices $\delta$ and $\gamma$ everywhere, we have
$\displaystyle X^{\alpha}(x+\delta x+dx)$ $\displaystyle=$ $\displaystyle
X^{\alpha}-\Gamma^{\alpha}_{\beta\delta}X^{\beta}dx^{\delta}-\Gamma^{\alpha}_{\beta\delta}X^{\beta}\delta
x^{\delta}-\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}X^{\beta}dx^{\delta}\delta
x^{\gamma}+\Gamma^{\alpha}_{\beta\gamma}\Gamma^{\beta}_{\mu\delta}X^{\mu}dx^{\delta}\delta
x^{\gamma},$ (69)
which now looks like (67) in the indices of the differentials. Hence, computing
(67) minus (69), the zeroth and first order terms cancel, and the remaining
second order terms in (67) and (69) combine with the common factor
$\delta x^{\gamma}dx^{\delta}$ (differential area)
$\displaystyle\triangle X^{\alpha}$ $\displaystyle=$ $\displaystyle
X^{\alpha}(x+\delta x+dx)-X^{\alpha}(x+dx+\delta x)$ (70a) $\displaystyle=$
$\displaystyle\left(\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}X^{\beta}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}X^{\beta}+\Gamma^{\alpha}_{\beta\delta}\Gamma^{\beta}_{\mu\gamma}X^{\mu}-\Gamma^{\alpha}_{\beta\gamma}\Gamma^{\beta}_{\mu\delta}X^{\mu}\right)\delta
x^{\gamma}dx^{\delta}$ (70b) $\displaystyle=$
$\displaystyle\left(\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}+\Gamma^{\alpha}_{\mu\delta}\Gamma^{\mu}_{\beta\gamma}-\Gamma^{\alpha}_{\mu\gamma}\Gamma^{\mu}_{\beta\delta}\right)X^{\beta}\delta
x^{\gamma}dx^{\delta}.$ (70c)
So, the difference between transporting the vector $X^{\alpha}$ along the two
separate routes around the plaquette is related to the curvature of the
manifold as follows
$\triangle X^{\alpha}={R^{\alpha}}_{\beta\delta\gamma}X^{\beta}\delta
x^{\gamma}dx^{\delta},$ (71)
and from here we arrive at our desired result and identify the Riemann
curvature tensor as
${R^{\alpha}}_{\beta\delta\gamma}\equiv\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}+\Gamma^{\alpha}_{\mu\delta}\Gamma^{\mu}_{\beta\gamma}-\Gamma^{\alpha}_{\mu\gamma}\Gamma^{\mu}_{\beta\delta}.$
(72)
Notice, from the identity (72), the curvature tensor is anti-symmetric in its
last two indices,
${R^{\alpha}}_{\beta\delta\gamma}=-{R^{\alpha}}_{\beta\gamma\delta}$.
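As a concrete check of (72), added here and not part of the original derivation, consider the unit 2-sphere with metric $ds^{2}=d\theta^{2}+\sin^{2}\theta\,d\phi^{2}$, whose non-vanishing affine connection components are $\Gamma^{\theta}_{\phi\phi}=-\sin\theta\cos\theta$ and $\Gamma^{\phi}_{\theta\phi}=\Gamma^{\phi}_{\phi\theta}=\cot\theta$. Applying (72) with $\alpha=\delta=\theta$ and $\beta=\gamma=\phi$,
${R^{\theta}}_{\phi\theta\phi}=\partial_{\theta}\Gamma^{\theta}_{\phi\phi}-\partial_{\phi}\Gamma^{\theta}_{\phi\theta}+\Gamma^{\theta}_{\mu\theta}\Gamma^{\mu}_{\phi\phi}-\Gamma^{\theta}_{\mu\phi}\Gamma^{\mu}_{\phi\theta}=(\sin^{2}\theta-\cos^{2}\theta)+\cos^{2}\theta=\sin^{2}\theta,$
the familiar result for the curvature of the sphere.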
### IV.2 Riemann curvature from the spin connection
In this section, we will show that the Riemann curvature tensor (72) can be
simply expressed in terms of the spin connection as follows:
${R^{a}}_{b}=d{\omega^{a}}_{b}+{\omega^{a}}_{c}\wedge{\omega^{c}}_{b}.$ (73)
Here the Greek indices are suppressed for brevity. So, the first step is to
explicitly write out the curvature tensor in all its indices and then to use
the vierbein field to convert the Latin indices to Greek indices, which gives
us
${R^{\lambda}}_{\sigma\mu\nu}\equiv{e^{\lambda}}_{a}{e_{\sigma}}^{b}\left(\partial_{\mu}{{\omega_{\nu}}^{a}}_{b}-\partial_{\nu}{{\omega_{\mu}}^{a}}_{b}+{{\omega_{\mu}}^{a}}_{c}{{\omega_{\nu}}^{c}}_{b}-{{\omega_{\nu}}^{a}}_{c}{{\omega_{\mu}}^{c}}_{b}\right).$
(74)
The quantity in parentheses is a spin curvature. Next, we will use (41), which
I restate here for convenience
${{\omega_{\mu}}^{a}}_{b}={e_{\rho}}^{a}{e^{\tau}}_{b}\Gamma^{\rho}_{\mu\tau}-{e^{\tau}}_{b}\partial_{\mu}{e_{\tau}}^{a}.$
(75)
Inserting (75) into (74) gives
$\displaystyle{R^{\lambda}}_{\sigma\mu\nu}$ $\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}{e_{\sigma}}^{b}\left[\partial_{\mu}\left({e_{\rho}}^{a}{e^{\tau}}_{b}\Gamma^{\rho}_{\nu\tau}\right)-\partial_{\nu}\left({e_{\rho}}^{a}{e^{\tau}}_{b}\Gamma^{\rho}_{\mu\tau}\right)-\partial_{\mu}{e^{\tau}}_{b}\partial_{\nu}{e_{\tau}}^{a}+\partial_{\nu}{e^{\tau}}_{b}\partial_{\mu}{e_{\tau}}^{a}\right.$
$\displaystyle+$
$\displaystyle\left({e_{\rho}}^{a}{e^{\tau}}_{c}\Gamma^{\rho}_{\mu\tau}-{e^{\tau}}_{c}\partial_{\mu}{e_{\tau}}^{a}\right)\left({e_{\rho^{\prime}}}^{c}{e^{\tau^{\prime}}}_{b}\Gamma^{\rho^{\prime}}_{\nu\tau^{\prime}}-{e^{\tau^{\prime}}}_{b}\partial_{\nu}{e_{\tau^{\prime}}}^{c}\right)$
$\displaystyle-$
$\displaystyle\left.\left({e_{\rho}}^{a}{e^{\tau}}_{c}\Gamma^{\rho}_{\nu\tau}-{e^{\tau}}_{c}\partial_{\nu}{e_{\tau}}^{a}\right)\left({e_{\rho^{\prime}}}^{c}{e^{\tau^{\prime}}}_{b}\Gamma^{\rho^{\prime}}_{\mu\tau^{\prime}}-{e^{\tau^{\prime}}}_{b}\partial_{\mu}{e_{\tau^{\prime}}}^{c}\right)\right].$ (76)
Reducing this expression takes some work. Since the first term in (75)
depends on the affine connection $\Gamma$ and the second term involves derivatives
of the vierbein field, we will reduce (76) in two passes, first considering terms
that involve derivatives of the vierbein field and then terms that do not.
So as a first pass toward reducing (76), we will consider all terms with
derivatives of vierbeins, and show that these vanish. To begin with, the first
order derivative terms that appear with $\partial_{\mu}$ acting on vierbein
fields are the following:
$\displaystyle{e^{\lambda}}_{a}{e_{\sigma}}^{b}\left[\partial_{\mu}\left({e_{\rho}}^{a}{e^{\tau}}_{b}\right)\Gamma^{\rho}_{\nu\tau}\right.$
$\displaystyle-$
$\displaystyle{e^{\tau}}_{c}\left(\partial_{\mu}{e_{\tau}}^{a}\right){e_{\rho}}^{c}{e^{\tau}}_{b}\Gamma^{\rho}_{\nu\tau}+{e_{\rho}}^{a}{e^{\tau}}_{c}\Gamma^{\rho}_{\nu\tau}{e^{\tau^{\prime}}}_{b}\partial_{\mu}\left.{e_{\tau^{\prime}}}^{c}\right]$
$\displaystyle=$
$\displaystyle\left[{e^{\lambda}}_{a}{e_{\sigma}}^{b}{e^{\tau}}_{b}\partial_{\mu}{e_{\rho}}^{a}+{e^{\lambda}}_{a}{e_{\sigma}}^{b}{e_{\rho}}^{a}\partial_{\mu}{e^{\tau}}_{b}\right]\Gamma^{\rho}_{\nu\tau}-{e^{\lambda}}_{a}{e_{\sigma}}^{b}{e^{\tau}}_{c}{e_{\rho}}^{c}{e^{\tau^{\prime}}}_{b}\left(\partial_{\mu}{e_{\tau}}^{a}\right)\Gamma^{\rho}_{\nu\tau^{\prime}}$
$\displaystyle+$
$\displaystyle{e^{\lambda}}_{a}{e_{\sigma}}^{b}{e_{\rho}}^{a}{e^{\tau}}_{c}{e^{\tau^{\prime}}}_{b}\partial_{\mu}{e_{\tau^{\prime}}}^{c}\Gamma^{\rho}_{\nu\tau}$
(77b) $\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}\delta^{\tau}_{\sigma}\partial_{\mu}{e_{\rho}}^{a}\Gamma^{\rho}_{\nu\tau}+\delta^{\lambda}_{\rho}{e_{\sigma}}^{b}\partial_{\mu}{e^{\tau}}_{b}\Gamma^{\rho}_{\nu\tau}-\delta^{\tau}_{\rho}\delta_{\sigma}^{\tau^{\prime}}{e^{\lambda}}_{a}\partial_{\mu}{e_{\tau}}^{a}\Gamma^{\rho}_{\nu\tau^{\prime}}+\delta^{\lambda}_{\rho}\delta^{\tau^{\prime}}_{\sigma}{e^{\tau}}_{c}\partial_{\mu}{e_{\tau^{\prime}}}^{c}\Gamma^{\rho}_{\nu\tau}$
(77c) $\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}\partial_{\mu}{e_{\rho}}^{a}\Gamma^{\rho}_{\nu\sigma}+{e_{\sigma}}^{b}\partial_{\mu}{e^{\tau}}_{b}\Gamma^{\lambda}_{\nu\tau}-{e^{\lambda}}_{a}\partial_{\mu}{e_{\rho}}^{a}\Gamma^{\rho}_{\nu\sigma}+{e^{\tau}}_{b}\partial_{\mu}{e_{\sigma}}^{b}\Gamma^{\lambda}_{\nu\tau}$
(77d) $\displaystyle=$
$\displaystyle\partial_{\mu}\left({e_{\sigma}}^{b}{e^{\tau}}_{b}\right)\Gamma^{\lambda}_{\nu\tau}$
(77e) $\displaystyle=$
$\displaystyle\partial_{\mu}\left(\delta^{\tau}_{\sigma}\right)\Gamma^{\lambda}_{\nu\tau}$
(77f) $\displaystyle=$ $\displaystyle 0.$ (77g)
Similarly, all the first order derivative terms that appear with
$\partial_{\nu}$ vanish as well. So, all the first order derivative terms
vanish in (76). Next, we consider the terms quadratic in first derivatives, with both
$\partial_{\mu}$ and $\partial_{\nu}$ acting on vierbein fields. All the
terms with both $\partial_{\mu}$ and $\partial_{\nu}$ are the following
$\displaystyle{e^{\lambda}}_{a}{e_{\sigma}}^{b}{e^{\tau}}_{c}{e^{\tau^{\prime}}}_{b}\left(\partial_{\mu}{e_{\tau}}^{a}\right)\partial_{\nu}{e_{\tau^{\prime}}}^{c}$
$\displaystyle-$
$\displaystyle{e^{\lambda}}_{a}{e_{\sigma}}^{b}{e^{\tau}}_{c}{e^{\tau^{\prime}}}_{b}\left(\partial_{\nu}{e_{\tau}}^{a}\right)\left(\partial_{\mu}{e_{\tau^{\prime}}}^{c}\right)-{e^{\lambda}}_{a}{e_{\sigma}}^{b}(\partial_{\mu}{e^{\tau}}_{b})\partial_{\nu}{e_{\tau}}^{a}+{e^{\lambda}}_{a}{e_{\sigma}}^{b}(\partial_{\nu}{e^{\tau}}_{b})\partial_{\mu}{e_{\tau}}^{a}$
(78a) $\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}{e^{\tau}}_{c}\left(\partial_{\mu}{e_{\tau}}^{a}\right)\partial_{\nu}{e_{\sigma}}^{c}-{e^{\lambda}}_{a}{e^{\tau}}_{c}\left(\partial_{\nu}{e_{\tau}}^{a}\right)\partial_{\mu}{e_{\sigma}}^{c}-\left(\partial_{\nu}{e^{\lambda}}_{a}\right)\left(\partial_{\mu}{e_{\sigma}}^{b}\right){e^{\tau}}_{b}{e_{\tau}}^{a}$
$\displaystyle+$
$\displaystyle\left(\partial_{\mu}{e^{\lambda}}_{a}\right)\left(\partial_{\nu}{e_{\sigma}}^{b}\right){e^{\tau}}_{b}{e_{\tau}}^{a}$
$\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}{e^{\tau}}_{c}\left[\left(\partial_{\mu}{e_{\tau}}^{a}\right)\partial_{\nu}{e_{\sigma}}^{c}-\left(\partial_{\nu}{e_{\tau}}^{a}\right)\partial_{\mu}{e_{\sigma}}^{c}\right]-\partial_{\nu}{e^{\lambda}}_{a}\partial_{\mu}{e_{\sigma}}^{a}+\partial_{\mu}{e^{\lambda}}_{a}\partial_{\nu}{e_{\sigma}}^{a}$
(78b) $\displaystyle=$
$\displaystyle-{e^{\lambda}}_{a}{e_{\tau}}^{a}\left(\partial_{\mu}{e^{\tau}}_{c}\right)\partial_{\nu}{e_{\sigma}}^{c}+{e^{\lambda}}_{a}{e_{\tau}}^{a}\left(\partial_{\nu}{e^{\tau}}_{c}\right)\partial_{\mu}{e_{\sigma}}^{c}-\partial_{\nu}{e^{\lambda}}_{a}\partial_{\mu}{e_{\sigma}}^{a}+\partial_{\mu}{e^{\lambda}}_{a}\partial_{\nu}{e_{\sigma}}^{a}\quad$
(78c) $\displaystyle=$
$\displaystyle-\delta^{\lambda}_{\tau}\left(\partial_{\mu}{e^{\tau}}_{c}\right)\partial_{\nu}{e_{\sigma}}^{c}+\delta^{\lambda}_{\tau}\left(\partial_{\nu}{e^{\tau}}_{c}\right)\partial_{\mu}{e_{\sigma}}^{c}-\partial_{\nu}{e^{\lambda}}_{a}\partial_{\mu}{e_{\sigma}}^{a}+\partial_{\mu}{e^{\lambda}}_{a}\partial_{\nu}{e_{\sigma}}^{a}$
(78d) $\displaystyle=$
$\displaystyle-\left(\partial_{\mu}{e^{\lambda}}_{c}\right)\partial_{\nu}{e_{\sigma}}^{c}+\left(\partial_{\nu}{e^{\lambda}}_{c}\right)\partial_{\mu}{e_{\sigma}}^{c}-\partial_{\nu}{e^{\lambda}}_{a}\partial_{\mu}{e_{\sigma}}^{a}+\partial_{\mu}{e^{\lambda}}_{a}\partial_{\nu}{e_{\sigma}}^{a}$
(78e) $\displaystyle=$ $\displaystyle 0.$ (78f)
Hence, all the terms quadratic in derivatives in (76) vanish, as do the first
order terms. Note that we made use of the fact
$\partial_{\mu}\left({e^{\lambda}}_{a}{e_{\tau}}^{a}\right)=0$ to move a
derivative from one vierbein factor to the other,
$\partial_{\mu}\left({e^{\lambda}}_{a}\right){e_{\tau}}^{a}=-{e^{\lambda}}_{a}\left(\partial_{\mu}{e_{\tau}}^{a}\right).$
(79)
Finally, as a second pass toward reducing (76) to its final form, we now
consider all the remaining terms (no derivatives of the vierbein fields), and
these lead to the curvature tensor expressed solely as a function of the
affine connection:
$\displaystyle{R^{\lambda}}_{\sigma\mu\nu}$ $\displaystyle=$
$\displaystyle{e^{\lambda}}_{a}{e_{\sigma}}^{b}\left[{e_{\rho}}^{a}{e^{\tau}}_{b}\left(\partial_{\mu}\Gamma^{\rho}_{\nu\tau}-\partial_{\nu}\Gamma^{\rho}_{\mu\tau}\right)+{e_{\rho}}^{a}{e^{\tau^{\prime}}}_{b}\left(\Gamma^{\rho}_{\mu\tau}\Gamma^{\tau}_{\nu\tau^{\prime}}-\Gamma^{\rho}_{\nu\tau}\Gamma^{\tau}_{\mu\tau^{\prime}}\right)\right]$
(80a) $\displaystyle=$
$\displaystyle\delta^{\lambda}_{\rho}\delta^{\tau}_{\sigma}\left(\partial_{\mu}\Gamma^{\rho}_{\nu\tau}-\partial_{\nu}\Gamma^{\rho}_{\mu\tau}\right)+\delta_{\rho}^{\lambda}\delta^{\tau^{\prime}}_{\sigma}\left(\Gamma^{\rho}_{\mu\tau}\Gamma^{\tau}_{\nu\tau^{\prime}}-\Gamma^{\rho}_{\nu\tau}\Gamma^{\tau}_{\mu\tau^{\prime}}\right).$
(80b)
Applying the Kronecker deltas, we arrive at the final result
${R^{\lambda}}_{\sigma\mu\nu}=\partial_{\mu}\Gamma^{\lambda}_{\nu\sigma}-\partial_{\nu}\Gamma^{\lambda}_{\mu\sigma}+\Gamma^{\lambda}_{\mu\tau}\Gamma^{\tau}_{\nu\sigma}-\Gamma^{\lambda}_{\nu\tau}\Gamma^{\tau}_{\mu\sigma},$
(81)
which is identical to (72). If we had not already derived the curvature
tensor, we could have written (81) down by inspection because of its
similarity to (74), essentially replacing the spin connection with the affine
connection.
## V Mathematical constructs
Here we assemble a number of preliminary identities that we will use later to
derive the Einstein equation. An identity we will need allows us to evaluate
the trace of $M^{-1}\partial_{\mu}M$, where $M$ is an invertible rank-2 tensor regarded as a matrix:
$\text{Tr}[M^{-1}\partial_{\mu}M]=\partial_{\mu}\ln|M|.$ (82)
As an example of this identity, consider the following $2\times 2$ matrix and
its inverse
$M=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\qquad
M^{-1}=\frac{1}{|M|}\begin{pmatrix}d&-b\\\ -c&a\end{pmatrix}.$ (83)
A demonstration of the trace identity (82), for the simplest case of a $2\times 2$ matrix depending on a single coordinate $x$, is
$\displaystyle\text{Tr}\left[M^{-1}\partial_{x}M\right]$ $\displaystyle=$
$\displaystyle\text{Tr}\left[\frac{1}{ad-bc}\begin{pmatrix}d&-b\\\
-c&a\end{pmatrix}\begin{pmatrix}\partial_{x}a&\partial_{x}b\\\
\partial_{x}c&\partial_{x}d\end{pmatrix}\right]$ (84a) $\displaystyle=$
$\displaystyle\text{Tr}\left[\frac{1}{ad-
bc}\begin{pmatrix}\partial_{x}a\,d-b\partial_{x}c&\partial_{x}b\,d-b\partial_{x}d\\\
-\partial_{x}a\,c+a\partial_{x}c&-\partial_{x}b\,c+a\partial_{x}d\end{pmatrix}\right]$
(84b) $\displaystyle=$ $\displaystyle\frac{1}{ad-
bc}\left[\partial_{x}(ad)-\partial_{x}(bc)\right]$ (84c) $\displaystyle=$
$\displaystyle\frac{\partial_{x}(ad-bc)}{ad-bc}$ (84d) $\displaystyle=$
$\displaystyle\partial_{x}\ln|M|.$ (84e)
This identity holds for matrices of arbitrary size. In our case, we shall need
this identity for the case of $4\times 4$ matrices.
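For a quick $4\times 4$ illustration, added here and not in the original text, take a spatially flat Robertson--Walker-like diagonal metric $g_{\mu\nu}=\text{diag}\big(1,-a^{2}(t),-a^{2}(t),-a^{2}(t)\big)$. Then $g^{-1}\partial_{t}g=\text{diag}(0,2\dot{a}/a,2\dot{a}/a,2\dot{a}/a)$, so $\text{Tr}[g^{-1}\partial_{t}g]=6\dot{a}/a$, while $\ln|\text{Det}\,g_{\mu\nu}|=\ln a^{6}$ and $\partial_{t}\ln a^{6}=6\dot{a}/a$, in agreement with (82).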
The contravariant and covariant metric tensors are mutually inverse
$g^{\lambda\mu}g_{\mu\nu}=\delta^{\lambda}_{\nu},$ (85)
so, regarded as a matrix,
$g^{\mu\nu}\rightarrow(g_{\mu\nu})^{-1}.$ (86)
We also write $g$ for the determinant of the metric tensor,
$g\equiv\text{Det}\,g_{\mu\nu},$ (87)
which is negative for a Lorentzian metric, so that $-g>0$ and $\sqrt{-g}$ is real.
### V.1 Consequence of tetrad postulate
The tetrad postulate of Section III.3 is that the vierbein field is invariant
under parallel transport
$\nabla_{\mu}{e_{\nu}}^{a}=0.$ (88)
That the metric tensor is invariant under parallel transport then immediately
follows
$\displaystyle\nabla_{\mu}g_{\nu\lambda}$ $\displaystyle=$
$\displaystyle\nabla_{\mu}\left({e_{\nu}}^{a}{e_{\lambda}}^{b}\eta_{ab}\right)$
(89a) $\displaystyle=$
$\displaystyle\left(\nabla_{\mu}{e_{\nu}}^{a}\right){e_{\lambda}}^{b}\eta_{ab}+{e_{\nu}}^{a}\left(\nabla_{\mu}{e_{\lambda}}^{b}\right)\eta_{ab}\qquad$
(89b) $\displaystyle\stackrel{{\scriptstyle(\ref{tetrad_postulate})}}{{=}}$
$\displaystyle 0.$ (89c)
This is called metric compatibility.
### V.2 Affine connection in terms of the metric tensor
Now we can make use of (89) to compute the affine connection. Permuting
indices, we can write
$\nabla_{\rho}g_{\mu\nu}=\partial_{\rho}g_{\mu\nu}-\Gamma^{\lambda}_{\rho\mu}g_{\lambda\nu}-\Gamma^{\lambda}_{\rho\nu}g_{\mu\lambda}=0$
(90a)
$\nabla_{\mu}g_{\nu\rho}=\partial_{\mu}g_{\nu\rho}-\Gamma^{\lambda}_{\mu\nu}g_{\lambda\rho}-\Gamma^{\lambda}_{\mu\rho}g_{\nu\lambda}=0$
(90b)
$\nabla_{\nu}g_{\rho\mu}=\partial_{\nu}g_{\rho\mu}-\Gamma^{\lambda}_{\nu\rho}g_{\lambda\mu}-\Gamma^{\lambda}_{\nu\mu}g_{\rho\lambda}=0.$
(90c)
Now, we take (90a) $-$ (90b) $-$ (90c) and use the symmetry of the affine connection in its lower two indices (zero torsion, cf. (61)):
$\partial_{\rho}g_{\mu\nu}-\partial_{\mu}g_{\nu\rho}-\partial_{\nu}g_{\rho\mu}+2\Gamma^{\lambda}_{\mu\nu}g_{\lambda\rho}=0.$
(91)
Multiplying through by $g^{\sigma\rho}$ allows us to solve for the affine
connection
$\Gamma^{\sigma}_{\mu\nu}=\frac{1}{2}g^{\sigma\rho}\left(\partial_{\mu}g_{\nu\rho}+\partial_{\nu}g_{\rho\mu}-\partial_{\rho}g_{\mu\nu}\right).$
(92)
Then, contracting the $\sigma$ and $\mu$ indices, we have
$\displaystyle\Gamma^{\mu}_{\mu\nu}$ $\displaystyle=$
$\displaystyle\frac{1}{2}g^{\mu\rho}\left(\partial_{\mu}g_{\nu\rho}+\partial_{\nu}g_{\rho\mu}-\partial_{\rho}g_{\mu\nu}\right)$
(93a) $\displaystyle=$
$\displaystyle\frac{1}{2}g^{\mu\rho}\left(\partial_{\nu}g_{\rho\mu}+\partial_{\mu}g_{\rho\nu}-\partial_{\rho}g_{\mu\nu}\right)$
(93b) $\displaystyle=$
$\displaystyle\frac{1}{2}g^{\mu\rho}\partial_{\nu}g_{\rho\mu}+\frac{1}{2}g^{\mu\rho}\left\\{\partial_{\mu}g_{\rho\nu}-\partial_{\rho}g_{\mu\nu}\right\\}.$
(93c)
Since the metric tensor is symmetric and the last term in brackets is anti-
symmetric in the $\mu\,\rho$ indices, the product must vanish. Thus
$\Gamma^{\mu}_{\mu\nu}=\frac{1}{2}g^{\mu\rho}\partial_{\nu}g_{\rho\mu}.$ (94)
Furthermore, recognizing in (94) the trace identity
$\text{Tr}[M^{-1}\partial_{\nu}M]=\partial_{\nu}\ln|\text{Det}\,M|$ that we
demonstrated in (84), we have
$\displaystyle\Gamma^{\mu}_{\mu\nu}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\text{Tr}\left(g^{\lambda\rho}\partial_{\nu}g_{\rho\mu}\right)$
(95a) $\displaystyle=$
$\displaystyle\frac{1}{2}\partial_{\nu}\ln\left|\text{Det}\,g_{\rho\mu}\right|$ (95b)
$\displaystyle=$ $\displaystyle\frac{1}{2}\partial_{\nu}\ln(-g),\qquad
g\equiv\text{Det}\,g_{\rho\mu}$ (95c) $\displaystyle=$
$\displaystyle\partial_{\nu}\ln\sqrt{-g}$ (95d) $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{-g}}\partial_{\nu}\sqrt{-g}.$ (95e)
Equating (94) to (95e), we have
$\partial_{\nu}\sqrt{-g}=\frac{1}{2}\sqrt{-g}\,g^{\mu\rho}\partial_{\nu}g_{\rho\mu}.$
(96)
For generality, we write this corollary to (95e) as follows:
$\delta\sqrt{-g}=\frac{1}{2}\sqrt{-g}\,g^{\mu\nu}\delta\,g_{\mu\nu}.$ (97)
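As a simple illustration of (92), added here as a check and not part of the original development, consider the unit 2-sphere with $g_{\theta\theta}=1$ and $g_{\phi\phi}=\sin^{2}\theta$. Since the metric is diagonal and depends only on $\theta$,
$\Gamma^{\theta}_{\phi\phi}=\frac{1}{2}g^{\theta\theta}\left(2\partial_{\phi}g_{\phi\theta}-\partial_{\theta}g_{\phi\phi}\right)=-\sin\theta\cos\theta,\qquad\Gamma^{\phi}_{\theta\phi}=\frac{1}{2}g^{\phi\phi}\partial_{\theta}g_{\phi\phi}=\cot\theta,$
which are the standard connection components of the sphere.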
### V.3 Invariant volume element
Now, we consider the covariant derivative of a general 4-vector
$\nabla_{\nu}V^{\mu}=\partial_{\nu}V^{\mu}+\Gamma^{\mu}_{\nu\lambda}V^{\lambda}.$
(98)
Therefore, the 4-divergence of $V^{\mu}$ is
$\displaystyle\nabla_{\mu}V^{\mu}$ $\displaystyle=$
$\displaystyle\partial_{\mu}V^{\mu}+\Gamma^{\mu}_{\mu\lambda}V^{\lambda}$
(99a)
$\displaystyle\stackrel{{\scriptstyle(\ref{once_contracted_affine_connection_final})}}{{=}}$
$\displaystyle\partial_{\mu}V^{\mu}+\frac{1}{\sqrt{-g}}\left(\partial_{\lambda}\sqrt{-g}\right)V^{\lambda}$
(99b) $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}V^{\mu}\right).$
(99c)
If $V^{\mu}$ vanishes at infinity, then integrating over all space yields
$\int d^{4}x\sqrt{-g}\,\nabla_{\mu}V^{\mu}=\int
d^{4}x\,\partial_{\mu}\left(\sqrt{-g}V^{\mu}\right)=0,$ (100)
which is a covariant form of Gauss’s theorem where the invariant volume
element is
$dV=\sqrt{-g}\,d^{4}x.$ (101)
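As a familiar illustration, added here for concreteness and not in the original, flat spacetime in spherical coordinates has $g_{\mu\nu}=\text{diag}(1,-1,-r^{2},-r^{2}\sin^{2}\theta)$, so $\sqrt{-g}=r^{2}\sin\theta$ and the invariant volume element (101) is $dV=r^{2}\sin\theta\,dt\,dr\,d\theta\,d\phi$; likewise, (99c) reproduces the familiar radial contribution to the divergence, $\frac{1}{r^{2}}\partial_{r}\!\left(r^{2}V^{r}\right)$.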
### V.4 Ricci tensor
The Ricci tensor is the second rank tensor formed from the Riemann curvature
tensor as follows:
$R_{\sigma\nu}\equiv
R^{\lambda}_{\sigma\lambda\nu}=g^{\lambda\mu}R_{\mu\sigma\lambda\nu},$ (102)
where the Riemann tensor is
${R^{\rho}}_{\sigma\mu\nu}=\partial_{\mu}\Gamma^{\rho}_{\nu\sigma}-\partial_{\nu}\Gamma^{\rho}_{\mu\sigma}+\Gamma^{\rho}_{\mu\lambda}\Gamma^{\lambda}_{\nu\sigma}-\Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\mu\sigma}.$
(103)
Therefore, the Ricci tensor can be written as
$\displaystyle R_{\sigma\nu}$ $\displaystyle=$
$\displaystyle\partial_{\rho}\Gamma^{\rho}_{\nu\sigma}-\partial_{\nu}\Gamma^{\rho}_{\rho\sigma}+\Gamma^{\rho}_{\rho\lambda}\Gamma^{\lambda}_{\nu\sigma}-\Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\rho\sigma}$
(104a) $\displaystyle=$
$\displaystyle\partial_{\rho}\Gamma^{\rho}_{\nu\sigma}-\Gamma^{\lambda}_{\nu\rho}\Gamma^{\rho}_{\lambda\sigma}-\partial_{\nu}\Gamma^{\rho}_{\sigma\rho}+\Gamma^{\lambda}_{\nu\sigma}\Gamma^{\rho}_{\lambda\rho}.\qquad$
(104b)
Now, using the covariant derivative of a covariant vector
$\nabla_{\rho}A_{\nu}=\partial_{\rho}A_{\nu}-\Gamma^{\lambda}_{\rho\nu}A_{\lambda},$
(105)
we can write (104b) as
$R_{\sigma\nu}=\nabla_{\rho}\Gamma^{\rho}_{\nu\sigma}-\nabla_{\nu}\Gamma^{\rho}_{\sigma\rho}.$
(106)
Equation (106) is known as the Palatini identity. The scalar curvature is the following
contraction of the Ricci tensor
$R\equiv g^{\mu\nu}R_{\mu\nu}.$ (107)
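As another added illustration (not in the original text): for the unit 2-sphere, with connection components $\Gamma^{\theta}_{\phi\phi}=-\sin\theta\cos\theta$ and $\Gamma^{\phi}_{\theta\phi}=\cot\theta$, the contraction (102) evaluated via (104a) gives $R_{\theta\theta}=1$ and $R_{\phi\phi}=\sin^{2}\theta$, so the scalar curvature (107) is
$R=g^{\theta\theta}R_{\theta\theta}+g^{\phi\phi}R_{\phi\phi}=1+\frac{\sin^{2}\theta}{\sin^{2}\theta}=2,$
constant over the sphere, as expected.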
## VI Gravitational action
### VI.1 Free field gravitational action
The action for the source-free gravitational field is
$I_{G}=\frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\,R(x).$ (108)
The equation of motion for the metric tensor can be determined by varying
(108) with respect to the metric tensor field. The variation is carried out in
several stages. The variation of the Lagrangian density is
$\delta\left(\sqrt{-g}R\right)=\sqrt{-g}R_{\mu\nu}\delta
g^{\mu\nu}+R\,\delta\sqrt{-g}+\sqrt{-g}\,g^{\mu\nu}\delta R_{\mu\nu}.$ (109)
From the Palatini identity (106), the change in the Ricci tensor can be
written as
$\delta
R_{\mu\nu}=-\nabla_{\nu}\delta\Gamma^{\lambda}_{\mu\lambda}+\nabla_{\lambda}\delta\Gamma^{\lambda}_{\mu\nu},$
(110)
where we commute the variational change with the covariant derivative. Now, we
can expand the third term on the R.H.S. of (109) as follows:
$\displaystyle\sqrt{-g}\,g^{\mu\nu}\delta R_{\mu\nu}$
$\displaystyle\stackrel{{\scriptstyle(\ref{variation_of_Ricci_tensor})}}{{=}}$
$\displaystyle-\sqrt{-g}\left[g^{\mu\nu}\nabla_{\nu}\delta\Gamma^{\lambda}_{\mu\lambda}-g^{\mu\nu}\nabla_{\lambda}\delta\Gamma^{\lambda}_{\mu\nu}\right]$
$\displaystyle\stackrel{{\scriptstyle(\ref{invariant_g_mu_nu_under_parallel_transport})}}{{=}}$
$\displaystyle-\sqrt{-g}\left[\nabla_{\nu}\underbrace{\left(g^{\mu\nu}\delta\Gamma^{\lambda}_{\mu\lambda}\right)}_{\text{like}\,V^{\nu}}-\nabla_{\lambda}\underbrace{\left(g^{\mu\nu}\delta\Gamma^{\lambda}_{\mu\nu}\right)}_{\text{like}\,V^{\mu}}\right],$
since $\nabla_{\mu}g_{\nu\rho}=0$. Now from (99c) we know
$\nabla_{\nu}V^{\nu}=\frac{1}{\sqrt{-g}}\partial_{\nu}(\sqrt{-g}V^{\nu})$, so
for the variation of the third term we have $\sqrt{-g}\,g^{\mu\nu}\delta
R_{\mu\nu}=-\partial_{\nu}\left(\sqrt{-g}\,g^{\mu\nu}\delta\Gamma^{\lambda}_{\mu\lambda}\right)+\partial_{\lambda}\left(\sqrt{-g}\,g^{\mu\nu}\delta\Gamma^{\lambda}_{\mu\nu}\right).$
(111c)
These surface terms drop out when integrated over all space, so the third term
on the R.H.S. of (109) vanishes. Finally, inserting the result (97)
$\delta\sqrt{-g}=\frac{1}{2}\sqrt{-g}\,g^{\mu\nu}\delta g_{\mu\nu}$
into the second term on the R.H.S. of (109) we can write the variation of the
gravitational action (108) entirely in terms of the variation of the metric
tensor field
$\delta I_{G}=\frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\left[R_{\mu\nu}\delta
g^{\mu\nu}+\frac{1}{2}g^{\mu\nu}R\,\delta g_{\mu\nu}\right].$ (112)
The variation of the identity $\delta^{\mu}_{\lambda}$ vanishes,
$\delta[\delta^{\mu}_{\lambda}]=\delta\left[g^{\mu\tau}g_{\tau\lambda}\right]=\left(\delta
g^{\mu\tau}\right)g_{\tau\lambda}+g^{\mu\tau}\delta g_{\tau\lambda}=0,$ (113)
from which we find the following useful identity
$\delta
g^{\mu\tau}g^{\nu\lambda}g_{\tau\lambda}+g^{\nu\lambda}g^{\mu\tau}\delta
g_{\tau\lambda}=0$ (114)
or
$\delta g^{\mu\nu}=-g^{\nu\lambda}g^{\mu\tau}\delta g_{\tau\lambda}.$ (115)
With this identity, we can write the variation of the free gravitational
action as
$\displaystyle\delta I_{G}$ $\displaystyle=$ $\displaystyle-\frac{1}{16\pi
G}\int d^{4}x\,\sqrt{-g}\left[R_{\mu\nu}g^{\mu\tau}g^{\nu\lambda}\delta
g_{\tau\lambda}-\frac{1}{2}g^{\mu\nu}R\delta g_{\mu\nu}\right]$ (116b)
$\displaystyle=$ $\displaystyle-\frac{1}{16\pi G}\int
d^{4}x\,\sqrt{-g}\left[R^{\mu\nu}-\frac{1}{2}g^{\mu\nu}R\right]\delta
g_{\mu\nu}.$
Since the variation of the metric does not vanish in general, for the
variation of the gravitational action to vanish the quantity in square brackets must vanish.
This quantity is called the Einstein tensor
$G^{\mu\nu}\equiv R^{\mu\nu}-\frac{1}{2}g^{\mu\nu}R.$ (117)
So, the equation of motion for the free gravitational field is simply
$G^{\mu\nu}=0.$ (118)
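A short consequence worth noting, added here though it is only implicit in the text: contracting (118) with $g_{\mu\nu}$ gives $g_{\mu\nu}G^{\mu\nu}=R-\frac{1}{2}\cdot 4\,R=-R=0$, so the source-free equation of motion is equivalent to the vanishing of the Ricci tensor, $R^{\mu\nu}=0$, the form quoted later in (162).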
### VI.2 Variation with respect to the vierbein field
In terms of the vierbein field, the metric tensor is
$g_{\mu\nu}(x)={e_{\mu}}^{a}(x){e_{\nu}}^{b}(x)\eta_{ab},$ (119)
so its variation can be directly written in terms of the variation of the
vierbein
$\displaystyle\delta g_{\mu\nu}$ $\displaystyle=$ $\displaystyle\delta{e_{\mu}}^{a}{e_{\nu}}^{b}\eta_{ab}+{e_{\mu}}^{a}\delta{e_{\nu}}^{b}\eta_{ab}$ (120a)
$\displaystyle=$ $\displaystyle\delta{e_{\mu}}^{a}e_{\nu a}+e_{\mu a}\delta{e_{\nu}}^{a}$ (120b)
$\displaystyle\stackrel{{\scriptstyle(\ref{vierbien_derivative_identity})}}{{=}}$ $\displaystyle-{e_{\mu}}^{a}\delta e_{\nu a}-\delta e_{\mu a}{e_{\nu}}^{a}.$ (120c)
Now, we can look upon $\delta e_{\mu a}$ as a field quantity whose indices we can raise or lower with the appropriate use of the metric tensor. Thus, we can write the variation of the metric as
$\displaystyle\delta g_{\mu\nu}$ $\displaystyle=$ $\displaystyle-g_{\nu\lambda}{e_{\mu}}^{a}\delta{e^{\lambda}}_{a}-g_{\mu\lambda}\delta{e^{\lambda}}_{a}{e_{\nu}}^{a}$ (120d)
$\displaystyle=$ $\displaystyle-\left(g_{\mu\lambda}{e_{\nu}}^{a}+g_{\nu\lambda}{e_{\mu}}^{a}\right)\delta{e^{\lambda}}_{a}.$ (120e)
Therefore, inserting this into (116b), the variation of the source-free
gravitational action with respect to the vierbein field is
$\displaystyle\delta I_{G}$ $\displaystyle=$ $\displaystyle\frac{1}{16\pi G}\int d^{4}x\sqrt{-g}\left(R^{\mu\nu}-\frac{1}{2}g^{\mu\nu}R\right)\left(g_{\mu\lambda}{e_{\nu}}^{a}+g_{\nu\lambda}{e_{\mu}}^{a}\right)\delta{e^{\lambda}}_{a}$
(121a) $\displaystyle=$ $\displaystyle\frac{1}{16\pi G}\int
d^{4}x\sqrt{-g}\left({R_{\lambda}}^{\nu}{e_{\nu}}^{a}-\frac{1}{2}\delta^{\nu}_{\lambda}R\,{e_{\nu}}^{a}+{R^{\mu}}_{\lambda}{e_{\mu}}^{a}-\frac{1}{2}\delta^{\mu}_{\lambda}R\,{e_{\mu}}^{a}\right)\delta{e^{\lambda}}_{a}$
(121b) $\displaystyle=$ $\displaystyle\frac{1}{8\pi G}\int
d^{4}x\sqrt{-g}\left[\left({R^{\mu}}_{\lambda}-\frac{1}{2}\delta^{\mu}_{\lambda}R\right){e_{\mu}}^{a}\right]\delta{e^{\lambda}}_{a}.$
(121c)
Since the variation of the vierbein field does not vanish in general, for the
variation of the gravitational action to vanish the quantity in square brackets must vanish.
Multiplying this by $g^{\lambda\nu}$, the equation of motion is
$G^{\mu\nu}{e_{\mu}}^{a}=0,$ (122)
which leads to (118) since ${e_{\mu}}^{a}\neq 0$. Yet (122) is a more general
equation of motion since it allows cancelation across components instead of
the simplest case where each component of $G^{\mu\nu}$ vanishes separately.
### VI.3 Action for a gravitational source
The fundamental principle in general relativity is that the presence of matter
warps the spacetime manifold in the vicinity of the source. The vierbein field
allows us to quantify this principle in a rather direct way. The variation of
the action for the matter source to lowest order is linearly proportional to
the variation of the vierbein field
$\delta I_{M}=\int d^{4}x\sqrt{-g}\,{u_{\lambda}}^{a}\delta{e^{\lambda}}_{a},$
(123)
where the components ${u_{\lambda}}^{a}$ are constants of proportionality.
However, the energy-momentum tensor $T^{\mu\nu}$ is conventionally defined through the
variation of the matter action with respect to the metric,
$\delta I_{M}\equiv\frac{1}{2}\int d^{4}x\sqrt{-g}\,T^{\mu\nu}\,\delta g_{\mu\nu}.$ (124)
So, in consideration of (123) and (124), we should write
$\displaystyle{u_{\lambda}}^{a}\delta{e^{\lambda}}_{a}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\,T^{\mu\nu}\,\delta g_{\mu\nu}$ (125a)
$\displaystyle\stackrel{{\scriptstyle(\ref{variation_of_metric_in_terms_of_vierbein_field})}}{{=}}$
$\displaystyle-\frac{1}{2}\,T^{\mu\nu}\left(g_{\mu\lambda}{e_{\nu}}^{a}+g_{\nu\lambda}{e_{\mu}}^{a}\right)\delta{e^{\lambda}}_{a},\qquad$
(125b)
which we can solve for $T^{\mu\nu}$. Dividing out $\delta{e^{\lambda}}_{a}$
and then multiplying through by $g^{\lambda\beta}$ we get
$\displaystyle{u}^{\beta a}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\,T^{\mu\nu}\left(\delta^{\beta}_{\mu}{e_{\nu}}^{a}+\delta^{\beta}_{\nu}{e_{\mu}}^{a}\right)$
(126a) $\displaystyle=$ $\displaystyle-\frac{1}{2}\
\left(T^{\beta\nu}{e_{\nu}}^{a}+T^{\mu\beta}{e_{\mu}}^{a}\right)$ (126b)
$\displaystyle=$ $\displaystyle-T^{\beta\nu}{e_{\nu}}^{a},$ (126c)
since the energy-momentum tensor is symmetric. Thus, we have
$T^{\mu\nu}=-{u}^{\mu a}{e^{\nu}}_{a}.$ (127)
Alternatively, again in consideration of (123) and (124), we could also write
$\displaystyle{u_{\lambda}}^{a}\delta{e^{\lambda}}_{a}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\,T^{\mu\nu}\,\delta g_{\mu\nu}$ (128a)
$\displaystyle\stackrel{{\scriptstyle(\ref{variation_of_metric_in_terms_of_vierbein_field_positive})}}{{=}}$
$\displaystyle\frac{1}{2}\,T^{\mu\nu}\left(\delta{e_{\mu}}^{a}e_{\nu a}+e_{\mu
a}\delta{e_{\nu}}^{a}\right)$ (128b) $\displaystyle=$
$\displaystyle\frac{1}{2}\,T_{\mu\nu}\left(\delta{e^{\mu}}_{a}e^{\nu a}+e^{\mu
a}\delta{e^{\nu}}_{a}\right)$ (128c) $\displaystyle=$
$\displaystyle\frac{1}{2}\,\left(T_{\lambda\nu}\delta{e^{\lambda}}_{a}e^{\nu
a}+T_{\mu\lambda}e^{\mu a}\delta{e^{\lambda}}_{a}\right)\qquad$ (128d)
$\displaystyle=$ $\displaystyle\frac{1}{2}\,\left(T_{\lambda\nu}e^{\nu
a}+T_{\mu\lambda}e^{\mu a}\right)\delta{e^{\lambda}}_{a}$ (128e)
$\displaystyle=$ $\displaystyle T_{\lambda\nu}e^{\nu
a}\delta{e^{\lambda}}_{a}.$ (128f)
This implies that the energy-momentum tensor is proportional to the vierbein
field
$T_{\mu\nu}=e_{\mu a}{u_{\nu}}^{a}.$ (129)
Consequently, the variation of the energy-momentum tensor is then
$\delta T_{\mu\nu}=\delta e_{\mu
a}{u_{\nu}}^{a}=g_{\mu\lambda}\delta{e^{\lambda}}_{a}{u_{\nu}}^{a}.$ (130)
This can also be written as
${T^{\lambda}}_{\nu}={e^{\lambda}}_{a}{u_{\nu}}^{a}\rightarrow\delta{T^{\lambda}}_{\nu}=\delta{e^{\lambda}}_{a}{u_{\nu}}^{a}.$
(131)
Inserting this back into the action for a gravitational source (123) we have
$\displaystyle I_{M}$ $\displaystyle=$ $\displaystyle\int
d^{4}x\sqrt{-g}\,{T^{\lambda}}_{\lambda}$ (132a) $\displaystyle=$
$\displaystyle\int d^{4}x\sqrt{-g}\,g^{\mu\nu}T_{\mu\nu}.$ (132b)
### VI.4 Full gravitational action
The variation of the full gravitational action is the sum of variations of the
source-free action and gravitational action for matter
$\delta I=\delta I_{G}+\delta I_{M}.$ (133)
Inserting (121c) and the variation of (132b) into (133) then gives
$\delta I=\int d^{4}x\sqrt{-g}\left[\frac{1}{8\pi G}\left({R^{\mu}}_{\lambda}-\frac{1}{2}\delta^{\mu}_{\lambda}R\right){e_{\mu}}^{a}+{u_{\lambda}}^{a}\right]\delta{e^{\lambda}}_{a}.$
(134)
Therefore, with the requirement that $\delta I=0$, we obtain the equation
of motion of the vierbein field
$\left({R^{\mu}}_{\lambda}-\frac{1}{2}\delta^{\mu}_{\lambda}R\right){e_{\mu}}^{a}=-8\pi
G{u_{\lambda}}^{a}.$ (135)
Multiplying through by $e_{\nu a}$
$\left({R^{\mu}}_{\lambda}-\frac{1}{2}\delta^{\mu}_{\lambda}R\right){e_{\mu}}^{a}e_{\nu
a}=-8\pi Ge_{\nu a}{u_{\lambda}}^{a}$ (136)
gives the well known Einstein equation
$R_{\nu\lambda}-\frac{1}{2}g_{\nu\lambda}R=-8\pi G\,T_{\nu\lambda}.$ (137)
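A standard rearrangement, added here for reference though it is not used in the text, follows by contracting (137) with $g^{\nu\lambda}$: $R-2R=-8\pi G\,T$, i.e. $R=8\pi G\,T$ with $T\equiv g^{\nu\lambda}T_{\nu\lambda}$, so the Einstein equation can equivalently be written in the trace-reversed form
$R_{\nu\lambda}=-8\pi G\left(T_{\nu\lambda}-\frac{1}{2}g_{\nu\lambda}T\right),$
which makes it evident that $T_{\nu\lambda}=0$ implies $R_{\nu\lambda}=0$.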
## VII Einstein’s action
In this section, we review the derivation of the equation of motion of the
metric field in the weak field approximation. We start with a form of the
Lagrangian density presented in Einstein (1928a) for the vierbein field
theory. Einstein’s intention was the unification of electromagnetism with
gravity.
With $h$ denoting the determinant of the vierbein field, $h\equiv|e_{\mu a}|$ (i.e. $h=\sqrt{-g}$),
the useful identity
$\delta\sqrt{-g}=\frac{1}{2}\sqrt{-g}\,g^{\mu\nu}\delta\,g_{\mu\nu}$
can be rewritten strictly in terms of the vierbein field as follows
$\displaystyle\delta h$ $\displaystyle=$
$\displaystyle\frac{1}{2}h\,g^{\mu\nu}\delta g_{\mu\nu}$ (138a)
$\displaystyle=$
$\displaystyle\frac{1}{2}h\,g^{\mu\nu}\delta({e_{\mu}}^{a}{e_{\nu}}^{b})\,\eta_{ab}$
(138b) $\displaystyle=$
$\displaystyle\frac{1}{2}h\,g^{\mu\nu}\delta{e_{\mu}}^{a}{e_{\nu}}^{b}\eta_{ab}+\frac{1}{2}h\,g^{\mu\nu}{e_{\mu}}^{a}\delta{e_{\nu}}^{b}\eta_{ab}\qquad$
(138c) $\displaystyle=$
$\displaystyle\frac{1}{2}h\,\delta{e_{\mu}}^{a}{e^{\mu}}_{a}+\frac{1}{2}h\,{e^{\nu}}_{b}\delta{e_{\nu}}^{b}$
(138d) $\displaystyle=$ $\displaystyle h\,\delta{e_{\mu}}^{a}{e^{\mu}}_{a}.$
(138e)
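A quick sanity check of (138e), not in the original: for a diagonal vierbein ${e_{\mu}}^{a}=\text{diag}(e_{0},e_{1},e_{2},e_{3})$ one has $h=e_{0}e_{1}e_{2}e_{3}$ and ${e^{\mu}}_{a}=\text{diag}(1/e_{0},1/e_{1},1/e_{2},1/e_{3})$, so $h\,\delta{e_{\mu}}^{a}{e^{\mu}}_{a}=h\sum_{i}\delta e_{i}/e_{i}=\delta(e_{0}e_{1}e_{2}e_{3})=\delta h$, as required.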
### VII.1 Lagrangian density form 1
With the following definition
$\Lambda^{\nu}_{\alpha\beta}\equiv\frac{1}{2}e^{\nu
a}(\partial_{\beta}e_{\alpha a}-\partial_{\alpha}e_{\beta a}),$ (139)
the first Lagrangian density that we consider is the following:
$\displaystyle{\cal L}$ $\displaystyle=$ $\displaystyle h\,g^{\mu\nu}\;{{\Lambda_{\mu}}^{\alpha}}_{\beta}\;{{\Lambda_{\nu}}^{\beta}}_{\alpha},$
$\displaystyle=$ $\displaystyle\frac{h}{4}g^{\mu\nu}e^{\alpha a}e^{\beta b}(\partial_{\beta}e_{\mu a}-\partial_{\mu}e_{\beta a})(\partial_{\alpha}e_{\nu b}-\partial_{\nu}e_{\alpha b}).\qquad$ (140)
For a weak field, we have the following first-order expansion
$e_{\mu a}=\delta_{\mu a}-k_{\mu a}+\cdots.$ (141)
The lowest-order change (second order in the perturbation $k_{\mu a}$) is
$\displaystyle\delta{\cal L}$ $\displaystyle=$
$\displaystyle\frac{h}{4}\eta^{\mu\nu}\delta^{\alpha a}\delta^{\beta
b}(\partial_{\beta}k_{\mu a}-\partial_{\mu}k_{\beta
a})(\partial_{\alpha}k_{\nu b}-\partial_{\nu}k_{\alpha b})\qquad$ (142b)
$\displaystyle=$
$\displaystyle\frac{h}{4}\eta^{\mu\nu}(\partial_{\beta}{k_{\mu}}^{\alpha}-\partial_{\mu}{k_{\beta}}^{\alpha})(\partial_{\alpha}{k_{\nu}}^{\beta}-\partial_{\nu}{k_{\alpha}}^{\beta})$
$\displaystyle=$
$\displaystyle\frac{h}{4}\eta^{\mu\nu}(\partial_{\beta}{k_{\mu}}^{\alpha}\partial_{\alpha}{k_{\nu}}^{\beta}-\partial_{\beta}{k_{\mu}}^{\alpha}\partial_{\nu}{k_{\alpha}}^{\beta}$
$\displaystyle-\,\partial_{\mu}{k_{\beta}}^{\alpha}\partial_{\alpha}{k_{\nu}}^{\beta}+\partial_{\mu}{k_{\beta}}^{\alpha}\partial_{\nu}{k_{\alpha}}^{\beta})$
$\displaystyle=$
$\displaystyle\frac{h}{4}\left(\eta^{\mu\alpha}\partial_{\beta}{k_{\mu}}^{\nu}\partial_{\nu}{k_{\alpha}}^{\beta}-\eta^{\mu\nu}\partial_{\beta}{k_{\mu}}^{\alpha}\partial_{\nu}{k_{\alpha}}^{\beta}\right.$
$\displaystyle\left.-\,\eta^{\mu\alpha}\partial_{\mu}{k_{\beta}}^{\nu}\partial_{\nu}{k_{\alpha}}^{\beta}+\eta^{\mu\nu}\partial_{\mu}{k_{\beta}}^{\alpha}\partial_{\nu}{k_{\alpha}}^{\beta}\right)$
$\displaystyle=$
$\displaystyle\frac{h}{4}\left(-\eta^{\mu\alpha}\partial_{\beta}\partial_{\nu}{k_{\mu}}^{\nu}+\eta^{\mu\nu}\partial_{\beta}\partial_{\nu}{k_{\mu}}^{\alpha}\right.$
(142f)
$\displaystyle\left.+\,\eta^{\mu\alpha}\partial_{\mu}\partial_{\nu}{k_{\beta}}^{\nu}-\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}{k_{\beta}}^{\alpha}\right){k_{\alpha}}^{\beta}.$
where the last equality holds under the action integral after integrating by parts and dropping surface terms. So $\delta\int d^{4}x\,{\cal L}=0$ implies the equation of motion
$-\partial_{\beta}\partial_{\nu}k^{\alpha\nu}+\partial_{\beta}\partial_{\nu}k^{\nu\alpha}+\partial^{\alpha}\partial_{\nu}{k_{\beta}}^{\nu}-\partial^{2}{k_{\beta}}^{\alpha}=0$
(143a) or
$\partial^{2}{k_{\beta}}^{\alpha}-\partial_{\mu}\partial_{\beta}k^{\mu\alpha}+\partial_{\beta}\partial_{\mu}k^{\alpha\mu}-\partial_{\mu}\partial^{\alpha}{k_{\beta}}^{\mu}=0.$
(143b)
The above equation of motion (143b) is identical to Eq. (5) in Einstein’s
second paper.
### VII.2 Lagrangian density form 2
Now the second Lagrangian density we consider is the following:
$\displaystyle{\cal L}\\!\\!\\!$ $\displaystyle=$ $\displaystyle
h\,g_{\mu\nu}g^{\alpha\sigma}g^{\beta\tau}\Lambda^{\mu}_{\alpha\beta}\Lambda^{\nu}_{\sigma\tau}$
$\displaystyle=$
$\displaystyle\frac{h}{4}g_{\mu\nu}g^{\alpha\sigma}g^{\beta\tau}e^{\mu
a}e^{\nu b}\left(\partial_{\beta}e_{\alpha a}-\partial_{\alpha}e_{\beta
a}\right)\left(\partial_{\tau}e_{\sigma b}-\partial_{\sigma}e_{\tau
b}\right).$
We will see this leads to the same equation of motion that we got from the
first form of the Lagrangian density. The lowest-order change is
$\displaystyle\delta{\cal L}\\!\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\\!\frac{h}{4}\eta_{\mu\nu}\eta^{\alpha\sigma}\eta^{\beta\tau}\delta^{\mu a}\delta^{\nu b}\left(\partial_{\beta}k_{\alpha a}-\partial_{\alpha}k_{\beta a}\right)\left(\partial_{\tau}k_{\sigma b}-\partial_{\sigma}k_{\tau b}\right)$
(145b) $\displaystyle=$
$\displaystyle\frac{h}{4}\eta^{\alpha\sigma}\eta^{\beta\tau}\left(\partial_{\beta}k_{\alpha\nu}-\partial_{\alpha}k_{\beta\nu}\right)\left(\partial_{\tau}{k_{\sigma}}^{\nu}-\partial_{\sigma}{k_{\tau}}^{\nu}\right)$
$\displaystyle=$
$\displaystyle\frac{h}{4}\left(\partial_{\beta}k_{\alpha\nu}-\partial_{\alpha}k_{\beta\nu}\right)\left(\partial^{\beta}k^{\alpha\nu}-\partial^{\alpha}k^{\beta\nu}\right)$
(145e) $\displaystyle=$
$\displaystyle\frac{h}{4}\left(\partial_{\beta}k_{\alpha\nu}\partial^{\beta}k^{\alpha\nu}-\partial_{\beta}k_{\alpha\nu}\partial^{\alpha}k^{\beta\nu}-\partial_{\alpha}k_{\beta\nu}\partial^{\beta}k^{\alpha\nu}\right.$
$\displaystyle\left.+\,\partial_{\alpha}k_{\beta\nu}\partial^{\alpha}k^{\beta\nu}\right)$
$\displaystyle=$
$\displaystyle\frac{h}{4}\left(\partial_{\beta}k_{\alpha\nu}\partial^{\beta}k^{\alpha\nu}-\partial_{\beta}k_{\alpha\nu}\partial^{\alpha}k^{\beta\nu}-\partial_{\beta}k_{\alpha\nu}\partial^{\alpha}k^{\beta\nu}\right.$
$\displaystyle\left.+\,\partial_{\beta}k_{\alpha\nu}\partial^{\beta}k^{\alpha\nu}\right)$
$\displaystyle=$
$\displaystyle\frac{h}{2}\left(-\partial^{2}k^{\alpha\nu}+\partial_{\beta}\partial^{\alpha}k^{\beta\nu}\right)k_{\alpha\nu}.$
(145f)
This implies the following:
$\partial^{2}k^{\alpha\nu}-\partial_{\beta}\partial^{\alpha}k^{\beta\nu}=0$
(146a) or
$\partial^{2}{k_{\beta}}^{\alpha}-\partial_{\mu}\partial_{\beta}k^{\mu\alpha}=0.$
(146b)
These are the first two terms in Einstein’s Eq. (5).
Notice that, by (145e), the Lagrangian density we started with has the usual
quadratic form in a field strength, namely
${\cal L}=\frac{h}{4}F_{\alpha\beta\nu}F^{\alpha\beta\nu},$ (147)
where the field strength is
$F^{\alpha\beta\nu}=\partial^{\alpha}k^{\beta\nu}-\partial^{\beta}k^{\alpha\nu}.$
(148)
If we had varied the action with respect to $k^{\beta\nu}$, then we would have
obtained the same equation of motion (146b).
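The analogy with electromagnetism may be worth spelling out (this comparison is an addition, not in the original): for the Maxwell field one has $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ and ${\cal L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$, whose equation of motion is
$\partial_{\mu}F^{\mu\nu}=\partial^{2}A^{\nu}-\partial^{\nu}\partial_{\mu}A^{\mu}=0;$
equation (146a) has exactly this structure for each fixed value of the second index of $k^{\beta\nu}$, with the first index of $k$ playing the role of the vector-potential index.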
### VII.3 First-order fluctuation in the metric tensor
The metric tensor expressed in terms of the vierbein field is
$g_{\alpha\beta}={e_{\alpha}}^{a}e_{\beta
a}=\left(\delta^{a}_{\alpha}+{k_{\alpha}}^{a}\right)\left(\delta_{\beta
a}+k_{\beta a}\right).$ (149)
So the first order fluctuation of the metric tensor field is the symmetric
tensor
$\overline{g_{\alpha\beta}}\equiv
g_{\alpha\beta}-\delta_{\alpha\beta}=k_{\alpha\beta}+k_{\beta\alpha}+\cdots.$
(150)
We define the electromagnetic four-vector by contracting the tensor (139):
$\varphi_{\mu}\equiv\Lambda^{\alpha}_{\mu\alpha}=\frac{1}{2}e^{\alpha a}\left(\partial_{\alpha}e_{\mu a}-\partial_{\mu}e_{\alpha a}\right).$ (151)
This implies
$\varphi_{\mu}=\frac{1}{2}\delta^{\alpha a}\left(\partial_{\alpha}k_{\mu
a}-\partial_{\mu}k_{\alpha a}\right),$ (152)
so we arrive at
$2\varphi_{\mu}=\partial_{\alpha}{k_{\mu}}^{\alpha}-\partial_{\mu}{k_{\alpha}}^{\alpha}.$
(153)
### VII.4 Field equation in the weak field limit
The equation of motion for the fluctuation of the metric tensor from the first
form of the Lagrangian density is obtained by adding (143b) to itself but with
$\alpha$ and $\beta$ exchanged:
$\begin{split}&\partial^{2}k_{\beta\alpha}-\partial^{\mu}\partial_{\beta}k_{\mu\alpha}+\partial_{\beta}\partial^{\mu}k_{\alpha\mu}-\partial^{\mu}\partial_{\alpha}k_{\beta\mu}\\\
+&\partial^{2}k_{\alpha\beta}-\partial^{\mu}\partial_{\alpha}k_{\mu\beta}+\partial_{\alpha}\partial^{\mu}k_{\beta\mu}-\partial^{\mu}\partial_{\beta}k_{\alpha\mu}=0,\quad\end{split}$
(154)
which has a cancellation of four terms leaving
$\partial^{2}\overline{g_{\alpha\beta}}-\partial^{\mu}\partial_{\alpha}k_{\mu\beta}-\partial^{\mu}\partial_{\beta}k_{\mu\alpha}=0.$
(155)
Similarly, we arrive at the same result starting with the equation of motion
for the fluctuation of the metric tensor obtained from the second form of the
Lagrangian density, again by adding (146b) to itself but with $\alpha$ and
$\beta$ exchanged:
$\begin{split}&\partial^{2}k_{\beta\alpha}-\partial^{\mu}\partial_{\beta}k_{\mu\alpha}\\\
+&\,\partial^{2}k_{\alpha\beta}-\partial^{\mu}\partial_{\alpha}k_{\mu\beta}=0.\end{split}$
(156)
In this case, the sum is exactly the same as what we just obtained in (155)
but with no cancellation of terms
$\partial^{2}\overline{g_{\alpha\beta}}-\partial^{\mu}\partial_{\alpha}k_{\mu\beta}-\partial^{\mu}\partial_{\beta}k_{\mu\alpha}=0.$
(157)
Using (153) above, just with relabeled indices,
$2\varphi_{\alpha}=\partial_{\mu}{k_{\alpha}}^{\mu}-\partial_{\alpha}{k_{\mu}}^{\mu}.$
(158)
Taking derivatives of (158), we have the ancillary relations:
$-\partial_{\mu}\partial_{\beta}{k_{\alpha}}^{\mu}+\partial_{\alpha}\partial_{\beta}{k_{\mu}}^{\mu}=-2\partial_{\beta}\varphi_{\alpha}$
(159a) and
$-\partial_{\mu}\partial_{\alpha}{k_{\beta}}^{\mu}+\partial_{\alpha}\partial_{\beta}{k_{\mu}}^{\mu}=-2\partial_{\alpha}\varphi_{\beta}.$
(159b)
Adding the ancillary relations (159) to our equation of motion (157) gives
$-\partial^{2}\overline{g_{\alpha\beta}}+\partial^{\mu}\partial_{\alpha}(k_{\mu\beta}+k_{\beta\mu})+\partial^{\mu}\partial_{\beta}(k_{\mu\alpha}+k_{\alpha\mu})-2\partial_{\alpha}\partial_{\beta}{k_{\mu}}^{\mu}=2(\partial_{\beta}\varphi_{\alpha}+\partial_{\alpha}\varphi_{\beta}).$ (160)
Then making use of (150) this can be written in terms of the symmetric first-
order fluctuation of the metric tensor field
$\frac{1}{2}\left(-\partial^{2}\overline{g_{\alpha\beta}}+\partial^{\mu}\partial_{\alpha}\overline{g_{\mu\beta}}+\partial^{\mu}\partial_{\beta}\overline{g_{\mu\alpha}}-\partial_{\alpha}\partial_{\beta}\overline{{{g}_{\mu}}^{\mu}}\right)=\partial_{\beta}\varphi_{\alpha}+\partial_{\alpha}\varphi_{\beta}.$ (161)
This result is the same as Eq. (7) in Einstein's second paper. In the case of
vanishing $\varphi_{\alpha}$, (161) agrees to first order with the
equation of General Relativity
$R_{\alpha\beta}=0.$ (162)
Thus, Einstein’s action expressed explicitly in terms of the vierbein field
reproduces the law of the pure gravitational field in weak field limit.
## VIII Relativistic chiral matter in curved space
### VIII.1 Invariance in flat space
The external Lorentz transformations $\Lambda$, which act on 4-vectors, commute
with the internal Lorentz transformations $U(\Lambda)$, which act on spinor
wave functions, i.e.
$[{\Lambda^{\mu}}_{\nu},U(\Lambda)]=0.$ (163)
Note that we keep the indices on $U(\Lambda)$ suppressed, just as we keep the
indices of the Dirac matrices and the component indices of $\psi$ suppressed
as is conventional when writing matrix multiplication. Only the exterior
spacetime indices are explicitly written out. With this convention, the
Lorentz transformation of a Dirac gamma matrix is expressed as follows:
$U(\Lambda)^{-1}\gamma^{\mu}U(\Lambda)={\Lambda^{\mu}}_{\sigma}\gamma^{\sigma}.$
(164)
The invariance of the Dirac equation in flat space under a Lorentz
transformation is well known (Peskin and Schroeder, 1995):
$\displaystyle\left[i\gamma^{\mu}\partial_{\mu}-m\right]\psi(x)$
$\displaystyle\stackrel{{\scriptstyle\text{\tiny LLT}}}{{\longrightarrow}}$
$\displaystyle\left[i\gamma^{\mu}{\left(\Lambda^{-1}\right)^{\nu}}_{\mu}\partial_{\nu}-m\right]U(\Lambda)\psi\left(\Lambda^{-1}x\right)$
(165a) $\displaystyle=$ $\displaystyle
U(\Lambda)U(\Lambda)^{-1}\left[i\gamma^{\mu}{\left(\Lambda^{-1}\right)^{\nu}}_{\mu}\partial_{\nu}-m\right]U(\Lambda)\psi\left(\Lambda^{-1}x\right)$
(165b)
$\displaystyle\stackrel{{\scriptstyle(\ref{eq:Lambda_D_commute})}}{{=}}$
$\displaystyle
U(\Lambda)\left[i\,U(\Lambda)^{-1}\gamma^{\mu}U(\Lambda){\left(\Lambda^{-1}\right)^{\nu}}_{\mu}\partial_{\nu}-m\right]\psi\left(\Lambda^{-1}x\right)$
(165c)
$\displaystyle\stackrel{{\scriptstyle(\ref{eq:Lambda_D_similarity_transform})}}{{=}}$
$\displaystyle
U(\Lambda)\left[i\,\Lambda_{\sigma}^{\mu}\gamma^{\sigma}{\left(\Lambda^{-1}\right)^{\nu}}_{\mu}\partial_{\nu}-m\right]\psi\left(\Lambda^{-1}x\right)$
(165d) $\displaystyle=$ $\displaystyle
U(\Lambda)\left[i\,\Lambda_{\sigma}^{\mu}{\left(\Lambda^{-1}\right)^{\nu}}_{\mu}\gamma^{\sigma}\partial_{\nu}-m\right]\psi\left(\Lambda^{-1}x\right)$
(165e) $\displaystyle=$ $\displaystyle
U(\Lambda)\left[i\,\delta_{\sigma}^{\nu}\,\gamma^{\sigma}\partial_{\nu}-m\right]\psi\left(\Lambda^{-1}x\right)$
(165f) $\displaystyle=$ $\displaystyle
U(\Lambda)\left[i\,\gamma^{\nu}\partial_{\nu}-m\right]\psi\left(\Lambda^{-1}x\right).$
(165g)
### VIII.2 Invariance in curved space
Switching to a compact notation for the interior Lorentz transformation,
$\Lambda_{\frac{1}{2}}\equiv U(\Lambda)$, (164) is
$\Lambda_{-\frac{1}{2}}\gamma^{\mu}\Lambda_{\frac{1}{2}}={\Lambda^{\mu}}_{\sigma}\gamma^{\sigma},$
(166)
where I put a minus on the subscript to indicate the inverse transformation,
i.e. $\Lambda_{-\frac{1}{2}}\equiv U(\Lambda)^{-1}$. Of course, an exterior
Lorentz transformation and its inverse can also be contracted against the spacetime
index of the Dirac matrices,
${\Lambda^{\mu}}_{\sigma}\gamma^{\sigma}{\left(\Lambda^{-1}\right)^{\nu}}_{\mu}=\gamma^{\nu}.$
(167)
Below we will need the following identity:
$\displaystyle{\Lambda^{\mu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}{{(\Lambda^{-1})}^{\nu}}_{\mu}\Lambda_{\frac{1}{2}}$
$\displaystyle\stackrel{{\scriptstyle(\ref{eq:Lambda_D_commute})}}{{=}}$
$\displaystyle\Lambda_{\frac{1}{2}}\Lambda_{-\frac{1}{2}}\left({\Lambda^{\mu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}\right)\Lambda_{\frac{1}{2}}{{(\Lambda^{-1})}^{\nu}}_{\mu}$
(168c)
$\displaystyle\stackrel{{\scriptstyle(\ref{eq:Lorentz_interior_similarity_transformation})}}{{=}}$
$\displaystyle\Lambda_{\frac{1}{2}}{\Lambda^{\mu}}_{\sigma}\left({\Lambda^{\sigma}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}\right){{(\Lambda^{-1})}^{\nu}}_{\mu}$
$\displaystyle\stackrel{{\scriptstyle(\ref{eq:Lorentz_exterior_similarity_transformation})}}{{=}}$
$\displaystyle\Lambda_{\frac{1}{2}}{\Lambda^{\nu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}.$
We require that the Dirac equation in curved space be invariant under Lorentz
transformations when the curvature of space induces a correction $\Gamma_{\mu}$.
That is, we require
$\displaystyle{e^{\mu}}_{a}$
$\displaystyle\gamma^{a}\left(\partial_{\mu}+\Gamma_{\mu}\right)\psi(x)$
$\displaystyle\stackrel{{\scriptstyle\text{\tiny LLT}}}{{\longrightarrow}}$
$\displaystyle{\Lambda^{\mu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}{{(\Lambda^{-1})}^{\nu}}_{\mu}\left(\partial_{\nu}+\Gamma^{\prime}_{\nu}\right)\Lambda_{\frac{1}{2}}\psi(\Lambda^{-1}x)$
$\displaystyle=$
$\displaystyle{\Lambda^{\mu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}{{(\Lambda^{-1})}^{\nu}}_{\mu}\Lambda_{\frac{1}{2}}\left(\partial_{\nu}+\Lambda_{-\frac{1}{2}}\Gamma^{\prime}_{\nu}\Lambda_{\frac{1}{2}}\right)\psi(\Lambda^{-1}x)$
$\displaystyle+\,{\Lambda^{\mu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}{\left(\Lambda^{-1}\right)^{\nu}}_{\mu}\left(\partial_{\nu}\,\Lambda_{\frac{1}{2}}\right)\psi(\Lambda^{-1}x)$
$\displaystyle\stackrel{{\scriptstyle(\ref{identity_Lorentz_exterior_veirbein_gamma_inverse_spinor})}}{{=}}$
$\displaystyle\Lambda_{\frac{1}{2}}{\Lambda^{\nu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}\left[\left(\partial_{\nu}+\Lambda_{-\frac{1}{2}}\Gamma^{\prime}_{\nu}\Lambda_{\frac{1}{2}}\right)\psi(\Lambda^{-1}x)\right.$
$\displaystyle\left.+\,\Lambda_{-\frac{1}{2}}\left(\partial_{\nu}\,\Lambda_{\frac{1}{2}}\right)\psi(\Lambda^{-1}x)\right]$
$\displaystyle=$
$\displaystyle\Lambda_{\frac{1}{2}}{\Lambda^{\nu}}_{\lambda}{e^{\lambda}}_{a}\gamma^{a}\left[\left(\partial_{\nu}+\Gamma_{\nu}\right)+\underbrace{\left(-\,\Gamma_{\nu}+\Lambda_{-\frac{1}{2}}\Gamma^{\prime}_{\nu}\Lambda_{\frac{1}{2}}+\Lambda_{-\frac{1}{2}}\partial_{\nu}\left(\Lambda_{\frac{1}{2}}\right)\right)}_{=0}\right]\psi(\Lambda^{-1}x).\qquad$
In the last line we added and subtracted $\Gamma_{\nu}$. To achieve
invariance, the last three terms in the square brackets must vanish. Thus we
find the form of the local “gauge” transformation requires the correction
field to transform as follows:
$-\Gamma_{\nu}+\Lambda_{-\frac{1}{2}}\Gamma^{\prime}_{\nu}\Lambda_{\frac{1}{2}}+\Lambda_{-\frac{1}{2}}\partial_{\nu}\left(\Lambda_{\frac{1}{2}}\right)=0$
(170a) or
$\Gamma^{\prime}_{\nu}=\Lambda_{\frac{1}{2}}\Gamma_{\nu}\Lambda_{-\frac{1}{2}}-\partial_{\nu}\left(\Lambda_{\frac{1}{2}}\right)\Lambda_{-\frac{1}{2}}.$
(170b)
Therefore, the Dirac equation in curved space
$i\gamma^{a}{e^{\mu}}_{a}(x){\cal D}_{\mu}\,\psi-m\,\psi=0$ (171)
is invariant under a Lorentz transformation provided the generalized
derivative that we use is
${\cal D}_{\mu}=\partial_{\mu}+\Gamma_{\mu},$ (172)
where $\Gamma_{\mu}$ transforms according to (170b). This is analogous to a
gauge correction; however, in this case $\Gamma_{\mu}$ is not a vector
potential field.
### VIII.3 Covariant derivative of a spinor field
The Lorentz transformation for a spinor field is
$\Lambda_{\frac{1}{2}}=1+\frac{1}{2}\lambda_{ab}\,S^{ab},$ (173)
where the generators of the transformation are anti-symmetric, $S^{ab}=-S^{ba}$.
The generators satisfy the following commutation relation
$[S^{hk},S^{ij}]=\eta^{hj}S^{ki}+\eta^{ki}S^{hj}-\eta^{hi}S^{kj}-\eta^{kj}S^{hi}.$
(174)
Thus, the local Lorentz transformations (LLT) of a Lorentz 4-vector, $x^{a}$
say, and a Dirac 4-spinor, $\psi$ say, are respectively:
$\text{LLT:}\qquad x^{a}\rightarrow x^{\prime a}={\Lambda^{a}}_{b}\,x^{b}$
(175)
and
$\text{LLT:}\qquad\psi\rightarrow\psi^{\prime}=\Lambda_{\frac{1}{2}}\psi.\quad$
(176)
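For Dirac 4-spinors, a standard realization of these generators (not written out explicitly in the discussion above) is $S^{ab}=\frac{1}{4}[\gamma^{a},\gamma^{b}]$; using $\{\gamma^{a},\gamma^{b}\}=2\eta^{ab}$ one can verify that this choice indeed satisfies the commutation relation (174).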
The covariant derivative of a 4-vector is
$\nabla_{\gamma}X^{\alpha}=\partial_{\gamma}X^{\alpha}+\Gamma^{\alpha}_{\beta\gamma}X^{\beta},$
(177)
and the 4-vector at the nearby location is changed by the curvature of the
manifold. So we write the parallel-transported value (indicated by the subscript $\parallel$) in terms of the original 4-vector with a correction
$X^{\alpha_{\parallel}}(x+\delta
x^{\alpha})=X^{\alpha_{\parallel}}(x)-\Gamma^{\alpha_{\parallel}}_{\beta\gamma}(x)X^{\beta}(x)\delta
x^{\gamma},$ (178)
as depicted in Fig. 3.
Figure 3: Depiction of the case of an otherwise constant field distorted by curved space. The field value $X^{\alpha}(x)$ is parallel transported along the curved manifold (blue curve) by the distance $\delta x^{\alpha}$ going from point $x^{\alpha}$ to $x^{\alpha}+\delta x^{\alpha}$; the transported value $X^{\alpha_{\parallel}}(x+\delta x)$ differs from $X^{\alpha_{\parallel}}(x)$ by the correction $\Gamma^{\alpha_{\parallel}}_{\beta\gamma}(x)X^{\beta}(x)\delta x^{\gamma}$.
Likewise, the correction to the vierbein field due to the curvature of space is
${e^{\mu_{\parallel}}}_{k}(x+\delta
x^{\alpha})={e^{\mu_{\parallel}}}_{k}(x)-\Gamma^{\mu_{\parallel}}_{\beta\alpha}(x){e^{\beta}}_{k}(x)\delta
x^{\alpha}.$ (179)
A Lorentz transformation leaves the rank-2 Minkowski metric tensor invariant:
${\Lambda^{a}}_{a^{\prime}}{\Lambda^{b}}_{b^{\prime}}\eta_{ab}=\eta_{a^{\prime}b^{\prime}}.$
(180)
Moreover, the Lorentz transformation is invertible
${\Lambda^{i}}_{a}{\Lambda_{j}}^{a}=\delta^{i}_{j}={\Lambda^{a}}_{j}{\Lambda_{a}}^{i},$
(181)
where the inverse is obtained by exchanging index labels, changing covariant
indices to contravariant indices and contravariant to covariant. In the case
of infinitesimal transformations we have
${\Lambda^{i}}_{j}(x)=\delta^{i}_{j}+{\lambda^{i}}_{j}(x),$ (182)
where
$0=\lambda_{ij}+\lambda_{ji}=\lambda^{ij}+\lambda^{ji}.$ (183)
Lorentz and inverse Lorentz transformations of the vierbein fields are
$\bar{e}^{\mu}_{\,\,h^{\prime}}(x)={\Lambda_{h^{\prime}}}^{a}(x){e^{\mu}}_{a}(x)$
(184)
and
${e^{\mu}}_{h}(x)={\Lambda^{a^{\prime}}}_{h}(x){{\bar{e}}^{\mu}}_{\;\;a^{\prime}}(x),$
(185)
where temporarily I am putting a bar over the transformed vierbein field as a
visual aid. Since the vierbein field is invertible, we can express the Lorentz
transformation directly in terms of the vierbeins themselves
${\bar{e}_{\mu}}^{\,\,\,a^{\prime}}(x){e^{\mu}}_{h}(x)={\Lambda^{a^{\prime}}}_{h}(x).$
(186)
Now, we transport the Lorentz transformation tensor itself. The L.H.S. of
(186) has two upper indices, the Latin index $a^{\prime}$ and the Greek index
$\mu$, and we choose to use the upper indices to connect the Lorentz
transformation tensor between neighboring points. These indices are treated
differently: a Taylor expansion can be used to connect a quantity in its Latin
non-coordinate index at one point to a neighboring point, but the affine
connection must be used for the Greek coordinate index. Thus, we have
$\displaystyle{\Lambda^{h^{\prime}}}_{k}(x+\delta x^{\alpha})$
$\displaystyle=$ $\displaystyle{\bar{e}_{\mu_{\parallel}}}^{\quad
h^{\prime}}(x+\delta x^{\alpha}){e^{\mu_{\parallel}}}_{k}(x+\delta
x^{\alpha})$ $\displaystyle=$
$\displaystyle\left({\bar{e}_{\mu}}^{\,\,\,h}(x)+\frac{\partial{\bar{e}_{\mu}}^{\,\,\,h}}{\partial
x^{\alpha}}\delta x^{\alpha}\right){e^{\mu_{\parallel}}}_{k}(x+\delta
x^{\alpha})$
$\displaystyle\stackrel{{\scriptstyle(\ref{parallel_transport_correction_to_vierbein_field})}}{{=}}$
$\displaystyle\left({\bar{e}_{\mu}}^{\,\,\,h}(x)+\frac{\partial{\bar{e}_{\mu}}^{\,\,\,h}}{\partial
x^{\alpha}}\delta x^{\alpha}\right)$
$\displaystyle\times\left({e^{\mu}}_{k}-\Gamma^{\mu}_{\beta\alpha}(x){e^{\beta}}_{k}(x)\delta
x^{\alpha}\right)$ $\displaystyle=$
$\displaystyle\delta^{h}_{k}+\frac{\partial{\bar{e}_{\mu}}^{\,\,\,h}}{\partial
x^{\alpha}}\delta
x^{\alpha}{e^{\mu}}_{k}-\Gamma^{\mu}_{\beta\alpha}{e^{\beta}}_{k}\delta
x^{\alpha}{\bar{e}_{\mu}}^{\,\,\,h}$ $\displaystyle=$
$\displaystyle\delta^{h}_{k}+\left(\frac{\partial{\bar{e}_{\mu}}^{\,\,\,h}}{\partial
x^{\alpha}}\delta^{\mu}_{\beta}-\Gamma^{\mu}_{\beta\alpha}{\bar{e}_{\mu}}^{\,\,\,h}\right){e^{\beta}}_{k}\delta
x^{\alpha}$ $\displaystyle=$
$\displaystyle\delta^{h}_{k}+\left({e^{\mu}}_{k}\partial_{\alpha}{\bar{e}_{\mu}}^{\,\,\,h}-\Gamma^{\mu}_{\beta\alpha}{\bar{e}_{\mu}}^{\,\,\,h}{e^{\beta}}_{k}\right)\delta
x^{\alpha}$ $\displaystyle=$
$\displaystyle\delta^{h}_{k}-{{\omega_{\alpha}}^{h}}_{k}\delta x^{\alpha},$
(187g)
where the spin connection
${{\omega_{\alpha}}^{h}}_{k}=-{e^{\mu}}_{k}\partial_{\alpha}{\bar{e}_{\mu}}^{\,\,\,h}+\Gamma^{\mu}_{\beta\alpha}{\bar{e}_{\mu}}^{\,\,\,h}{e^{\beta}}_{k}$
(188)
is seen to have the physical interpretation of generalizing the infinitesimal
transformation (182) to the case of infinitesimal transport in curved space.
Relabeling indices, we have
$\displaystyle{{\omega_{\mu}}^{a}}_{b}$ $\displaystyle=$
$\displaystyle-{e^{\nu}}_{b}\partial_{\mu}{e_{\nu}}^{a}+\Gamma^{\sigma}_{\mu\nu}{e_{\sigma}}^{a}{e^{\nu}}_{b}$
(189a) $\displaystyle=$
$\displaystyle-{e^{\nu}}_{b}\left(\partial_{\mu}{e_{\nu}}^{a}-\Gamma^{\sigma}_{\mu\nu}{e_{\sigma}}^{a}\right)$
(189b) $\displaystyle=$
$\displaystyle-{e^{\nu}}_{b}\nabla_{\mu}{e_{\nu}}^{a},$ (189c)
where here the covariant derivative of the vierbein 4-vector, taken without the
spin-connection term, is not zero. (Recall the tetrad postulate that we previously derived,
$\nabla_{\mu}{e_{\nu}}^{a}=\partial_{\mu}{e_{\nu}}^{a}-{e_{\sigma}}^{a}\Gamma^{\sigma}_{\mu\nu}+{{\omega_{\mu}}^{a}}_{b}{e_{\nu}}^{b}=0$,
which rearranged is exactly (189b).) Writing the Lorentz transformation in the usual
infinitesimal form
${\Lambda^{h}}_{k}=\delta^{h}_{k}+{\lambda^{h}}_{k}$ (190)
implies
${\lambda^{h}}_{k}=-{{\omega_{\alpha}}^{h}}_{k}\,\delta x^{\alpha}$ (191a)
$\phantom{{\lambda^{h}}_{k}}={e^{\nu}}_{k}\left(\nabla_{\alpha}{e_{\nu}}^{h}\right)\delta x^{\alpha}$ (191b)
or
$\lambda_{hk}={e^{\beta}}_{k}\left(\nabla_{\alpha}e_{\beta h}\right)\delta x^{\alpha}.$ (191c)
Using (173), the Lorentz transformation of the spinor field is
$\Lambda_{\frac{1}{2}}\psi=\left(1+\frac{1}{2}\lambda_{hk}\,S^{hk}\right)\psi=\psi+\delta\psi.$
(192)
This implies the change of the spinor is
$\delta\psi=\frac{1}{2}{e^{\beta}}_{k}\left(\nabla_{\alpha}e_{\beta h}\right)\delta x^{\alpha}S^{hk}\psi$ (193a)
$\phantom{\delta\psi}=\Gamma_{\alpha}\psi\,\delta x^{\alpha},$ (193b)
where the correction to the spinor field is found to be
$\Gamma_{\alpha}=\frac{1}{2}{e^{\beta}}_{k}\left(\nabla_{\alpha}e_{\beta h}\right)S^{hk}$ (194a)
$\phantom{\Gamma_{\alpha}}=\frac{1}{2}{e^{\beta}}_{k}\left(\partial_{\alpha}e_{\beta h}-\Gamma_{\alpha\beta}^{\sigma}e_{\sigma h}\right)S^{hk}$ (194b)
$\phantom{\Gamma_{\alpha}}=\frac{1}{2}{e^{\beta}}_{k}\left(\partial_{\alpha}e_{\beta h}\right)S^{hk}-\frac{1}{2}\Gamma_{\alpha\beta}^{\sigma}\left(e_{\sigma h}{e^{\beta}}_{k}\right)S^{hk}$ (194c)
$\phantom{\Gamma_{\alpha}}=\frac{1}{2}{e^{\beta}}_{k}\left(\partial_{\alpha}e_{\beta h}\right)S^{hk},$ (194d)
where the last term in (194c) vanishes because $e_{\sigma h}{e^{\beta}}_{k}$
is symmetric whereas $S^{hk}$ is anti-symmetric in the indices $h$ and $k$.
Thus, we have derived the form of the covariant derivative of the spinor wave
function
${\cal D}_{\mu}\psi=\partial_{\mu}\psi+\Gamma_{\mu}\psi$ (195a)
$\phantom{{\cal D}_{\mu}\psi}=\left(\partial_{\mu}+\frac{1}{2}{e^{\beta}}_{k}\,\nabla_{\mu}e_{\beta h}\,S^{hk}\right)\psi$ (195b)
$\phantom{{\cal D}_{\mu}\psi}=\left(\partial_{\mu}+\frac{1}{2}{e^{\beta}}_{k}\,\partial_{\mu}e_{\beta h}\,S^{hk}\right)\psi.$ (195c)
This is the generalized derivative that is needed to correctly differentiate a
Dirac 4-spinor field in curved space.
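As a sanity check of (195c), the following sketch, my own and not part of the derivation above, evaluates $\Gamma_{\mu}$ for a hypothetical vierbein describing flat space with the local frame rotated in the 1–2 plane by an angle $\theta(x^{3})$. The generator convention $S^{hk}=\tfrac{1}{4}[\gamma^{h},\gamma^{k}]$ is assumed for (173), and since the metric of this example is exactly $\eta$, the Christoffel term dropped between (195b) and (195c) vanishes identically here.

```python
# A minimal sketch (my own illustration, not from the paper) evaluating the spinor
# connection Gamma_mu of (195c) for a hypothetical vierbein: flat space with the
# local frame rotated in the 1-2 plane by theta(x3).  The convention
# S^{hk} = (1/4)[gamma^h, gamma^k] is assumed for the generators of (173).
import sympy as sp

# Dirac matrices in the Dirac representation, eta = diag(1,-1,-1,-1)
I2, Z2 = sp.eye(2), sp.zeros(2)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])
def block(A, B, C, D):
    return sp.Matrix(sp.BlockMatrix([[A, B], [C, D]]))
gamma = [block(I2, Z2, Z2, -I2)] + [block(Z2, s, -s, Z2) for s in (sx, sy, sz)]
eta = sp.diag(1, -1, -1, -1)
S = [[(gamma[h]*gamma[k] - gamma[k]*gamma[h]) / 4 for k in range(4)] for h in range(4)]

# hypothetical vierbein e[mu, a] = e_mu^a: identity except a rotation by theta(x3)
# in the 1-2 frame plane; the metric it generates is exactly eta (flat space).
coords = sp.symbols('x0 x1 x2 x3')
x3 = coords[3]
th = sp.Function('theta')(x3)
c, s = sp.cos(th), sp.sin(th)
e = sp.Matrix([[1, 0, 0, 0],
               [0, c, s, 0],
               [0, -s, c, 0],
               [0, 0, 0, 1]])
e_low = e * eta            # e_{mu h} = e_mu^a eta_{a h}
e_inv = e.inv().T          # e_inv[beta, k] = e^beta_k (inverse vierbein)

def Gamma_spinor(mu):
    """Gamma_mu = (1/2) e^beta_k (d_mu e_{beta h}) S^{hk}, eq. (195c)."""
    G = sp.zeros(4)
    for beta in range(4):
        for h in range(4):
            for k in range(4):
                G += sp.Rational(1, 2) * e_inv[beta, k] \
                     * sp.diff(e_low[beta, h], coords[mu]) * S[h][k]
    return sp.simplify(G)

# The only nonzero component is Gamma_3 = theta'(x3) * S^{12}: transporting the
# spinor along x^3 rotates it together with the local frame, as expected.
print(sp.simplify(Gamma_spinor(3) - sp.diff(th, x3) * S[1][2]))  # -> the 4x4 zero matrix
```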
## IX Conclusion
A detailed derivation of the Einstein equation from the least action principle
and a derivation of the relativistic Dirac equation in curved space from
considerations of invariance with respect to Lorentz transformations have been
presented. The field theory approach that was presented herein relied on a
factored decomposition of the metric tensor field in terms of a product of
vierbein fields that Einstein introduced in 1928. In this sense, the vierbein
field is considered the square root of the metric tensor. The motivation for
this decomposition follows naturally from the anti-commutator
$\\{{e^{\mu}}_{a}(x)\gamma^{a},{e^{\nu}}_{b}(x)\gamma^{b}\\}=2g^{\mu\nu}(x),$
(196)
where $\gamma^{a}$ are the Dirac matrices. Dirac originally discovered an aspect of this important identity when he wrote down a linear quantum wave equation that, when squared, reproduces the well-known Klein-Gordon equation. Working in the flat space of relativistic quantum mechanics, Dirac wrote this identity as
$\\{\gamma^{a},\gamma^{b}\\}=2\eta^{ab},$ (197a) where
$\eta=\text{diag}(1,-1,-1,-1)$. Einstein had the brilliant insight to write
the part of the identity that depends on the spacetime curvature as
${e^{\mu}}_{a}(x){e^{\nu}}_{b}(x)\eta^{ab}=g^{\mu\nu}(x).$ (197b)
Combining (197a) and (197b) into (196) is essential to correctly develop a
relativistic quantum field theory in curved space. However, (197b) in its own
right is a sufficient point of departure if one seeks to simply derive the
Einstein equation capturing the dynamical behavior of spacetime.
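Both ingredients are easy to check directly. The snippet below is my own illustration, not part of the paper: it verifies (197a) for the Dirac-representation gamma matrices and then verifies (196) for a sample inverse vierbein, chosen here as the spatially flat FRW form $e^{\mu}{}_{a}=\mathrm{diag}(1,1/a,1/a,1/a)$ purely as a hypothetical example; any vierbein satisfying (197b) would do.

```python
# An illustrative check (mine, not the paper's) of identities (197a) and (196).
import sympy as sp

I2, Z2 = sp.eye(2), sp.zeros(2)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])
def block(A, B, C, D):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return sp.Matrix(sp.BlockMatrix([[A, B], [C, D]]))

# Dirac-representation gamma matrices and eta = diag(1,-1,-1,-1)
gamma = [block(I2, Z2, Z2, -I2)] + [block(Z2, s, -s, Z2) for s in (sx, sy, sz)]
eta = sp.diag(1, -1, -1, -1)

# flat-space identity (197a): {gamma^a, gamma^b} = 2 eta^{ab}
for a in range(4):
    for b in range(4):
        assert gamma[a]*gamma[b] + gamma[b]*gamma[a] == 2*eta[a, b]*sp.eye(4)

# curved-space identity (196) for a sample (hypothetical) inverse vierbein e^mu_a
scale = sp.symbols('a', positive=True)
e_up = sp.diag(1, 1/scale, 1/scale, 1/scale)
g_up = sp.Matrix(4, 4, lambda mu, nu: sum(e_up[mu, a]*e_up[nu, b]*eta[a, b]
                                          for a in range(4) for b in range(4)))
gamma_curved = [sum((e_up[mu, a]*gamma[a] for a in range(4)), sp.zeros(4))
                for mu in range(4)]
for mu in range(4):
    for nu in range(4):
        anticom = gamma_curved[mu]*gamma_curved[nu] + gamma_curved[nu]*gamma_curved[mu]
        assert (anticom - 2*g_up[mu, nu]*sp.eye(4)).expand() == sp.zeros(4)
print("(197a) and (196) verified for the sample vierbein")
```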
## X Acknowledgements
I would like to thank Carl Carlson for checking the derivations presented
above. I would like to thank Hans C. von Baeyer for his help searching for
past English translations of the 1928 Einstein manuscripts (of which none were
found) and his consequent willingness to translate the German text into
English.
## References
* Birrell and Davies, (1982) Birrell, N. and Davies, P. (1982). Quantum fields in curved space. Cambridge University Press.
* Born, (1961) Born, M. (1961). The Born-Einstein letters 1916-1955. International series in pure and applied physics. Macmillan, New York. ISBN-13: 978-1-4039-4496-2, republished in English in 2005.
* Carroll, (2004) Carroll, S. M. (2004). An Introduction to General Relativity, Spacetime and Geometry. Addison Wesley.
* D’Inverno, (1995) D’Inverno, R. (1995). Introducing Einstein’s relativity. Oxford.
* Dirac, (1928) Dirac, P. A. M. (1928). The Quantum Theory of the Electron. Royal Society of London Proceedings Series A, 117:610–624.
* Einstein, (1928a) Einstein, A. (1928a). New possibility for a unified field theory of gravity and electricity. Sitzungsberichte der Preussischen Akademie der Wissenschaften. Physikalisch-Mathematische Klasse., pages 223–227.
* Einstein, (1928b) Einstein, A. (1928b). Riemann geometry with preservation of the concept of distant parallelism. Sitzungsberichte der Preussischen Akademie der Wissenschaften. Physikalisch-Mathematische Klasse., pages 217–221.
* Einstein, (1948) Einstein, A. (1948). A generalized theory of gravitation. Rev. Mod. Phys., 20(1):35–39.
* Glashow, (1961) Glashow, S. L. (1961). Partial-symmetries of weak interactions. Nuclear Physics, 22(4):579 – 588.
* Jackiw, (2005) Jackiw, R. (2005). 50 years of Yang-Mills theory, chapter 10, pages 229–251. World Scientific, Singapore. Edited by Gerardus 't Hooft.
* Kaempffer, (1968) Kaempffer, F. A. (1968). Vierbein field theory of gravitation. Phys. Rev., 165(5):1420–1423.
* Peskin and Schroeder, (1995) Peskin, M. E. and Schroeder, D. V. (1995). An Introduction to Quantum Field Theory. Westview Press of the Perseus Books Group, New York, 1st edition.
* Salam, (1966) Salam, A. (1966). Magnetic monopole and two photon theories of c-violation. Physics Letters, 22(5):683 – 684.
* Weinberg, (1967) Weinberg, S. (1967). A model of leptons. Phys. Rev. Lett., 19(21):1264–1266.
* Weinberg, (1972) Weinberg, S. (1972). Gravitation and Cosmology, Principles and applications of the general theory of relativity. Wiley.
* Yang and Mills, (1954) Yang, C. N. and Mills, R. L. (1954). Conservation of isotopic spin and isotopic gauge invariance. Phys. Rev., 96(1):191–195.
The following two manuscripts, translated here into English by H.C. von Baeyer and typeset in LaTeX by the author, originally appeared in German in Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-Mathematische Klasse in the summer of 1928.
## Einstein’s 1928 manuscript on distant parallelism
Riemann geometry with preservation of the concept of distant parallelism
A. Einstein
June 7, 1928
Riemann geometry led in general relativity to a physical description of the
gravitational field, but does not yield any concepts that can be applied to
the electromagnetic field. For this reason the aim of theoreticians is to find
natural generalizations or extensions of Riemann geometry that are richer in
content, in hopes of reaching a logical structure that combines all physical
field concepts from a single point of view. Such efforts have led me to a
theory which I wish to describe without any attempt at physical interpretation
because the naturalness of its concepts lends it a certain interest in its own
right.
Riemannian geometry is characterized by the facts that the infinitesimal
neighborhood of every point $P$ has a Euclidian metric, and that the
magnitudes of two line elements that belong to the infinitesimal neighborhoods
of two finitely distant points $P$ and $Q$ are comparable. However, the
concept of parallelism of these two line elements is missing; for finite
regions the concept of direction does not exist. The theory put forward in the
following is characterized by the introduction, in addition to the Riemann
metric, of a “direction,” or of equality of direction, or of “parallelism” for
finite distances. Correspondingly, new invariants and tensors will appear in
addition to those of Riemann geometry.
## I $n$-Bein and metric
At the arbitrary point $P$ of the $n$-dimensional continuum erect an
orthogonal $n$-Bein from $n$ unit vectors representing an orthogonal
coordinate system. Let $A_{a}$ be the components of a line element, or of any
other vector, w.r.t. this local system ($n$-Bein). For the description of a
finite region introduce furthermore the Gaussian coordinate system $x^{\nu}$.
Let $A^{\nu}$ be the $\nu$-components of the vector ($A$) w.r.t. the latter,
furthermore let ${h_{a}}^{\nu}$ be the $\nu$ components of the unit vectors
that form the $n$-Bein. Then13 We use Greek letters for the coordinate indices, Latin letters for Bein indices.
$A^{\nu}=h^{\nu}_{a}A_{a}\;\cdots.$ (1)
By inverting (1) and calling $h_{\nu a}$ the normalized sub-determinants (cofactors) of ${h_{a}}^{\nu}$ we obtain
$A_{a}=h_{\mu a}A^{\mu}\;\cdots.$ (1a)
The magnitude $A$ of the vector ($A$), on account of the Euclidian property of
the infinitesimal neighborhoods, is given by
$A^{2}=\sum A_{a}^{2}=h_{\mu a}h_{\nu a}A^{\mu}A^{\nu}\;\cdots.$ (2)
The metric tensor components $g_{\mu\nu}$ are given by the formula
$g_{\mu\nu}=h_{\mu a}h_{\nu a},\;\cdots$ (3)
where, of course, the index $a$ is summed over. With fixed $a$, the
${h_{a}}^{\nu}$ are the components of a contravariant vector. The following
relations also hold:
$h_{\mu a}h^{\nu}_{a}=\delta^{\nu}_{\mu}\;\cdots$ (4) $h_{\mu
a}h^{\mu}_{b}=\delta_{ab},\;\cdots$ (5)
where $\delta=1$ or $\delta=0$ depending on whether the two indices are equal
or different. The correctness of (4) and (5) follows from the above definition
of ${h_{\nu a}}$ as normalized subdeterminants of $h_{a}^{\nu}$. The vector
character of ${h_{\nu a}}$ follows most easily from the fact that the left
hand side, and hence also the right hand side, of (1a) is invariant under
arbitrary coordinate transformations for any choice of the vector ($A$).
The $n$-Bein field is determined by $n^{2}$ functions $h_{a}^{\nu}$, while the
Riemann metric is determined by merely $\frac{n(n+1)}{2}$ quantities
$g_{\mu\nu}$. According to (3) the metric is given by the $n$-Bein field, but
not vice versa.
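This counting is easy to check numerically. The snippet below is my own illustration, not part of Einstein's text: it verifies relations (3), (4), (5) and (9) for an arbitrary 2-Bein at a single point, and shows that a rotated 2-Bein reproduces the same metric, so the $n^{2}$ functions indeed contain more information than the $n(n+1)/2$ metric components.

```python
# An illustrative numerical check (not part of Einstein's text) of relations
# (3), (4), (5) and (9): at a single point, take an arbitrary invertible 2-Bein
# h_{mu a} and verify that it determines the metric, while a rotated 2-Bein
# gives the same metric -- the n^2 = 4 functions contain the n(n+1)/2 = 3 metric
# components plus one local rotation angle.
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 2))            # h[mu, a] = h_{mu a}, assumed invertible
h_up = np.linalg.inv(h).T              # h_up[mu, a] = h^mu_a (normalized cofactors)

g = h @ h.T                            # (3):  g_{mu nu} = h_{mu a} h_{nu a}
g_up = h_up @ h_up.T                   # (9):  g^{mu nu} = h^mu_a h^nu_a
assert np.allclose(h @ h_up.T, np.eye(2))      # (4): h_{mu a} h^nu_a = delta_mu^nu
assert np.allclose(h.T @ h_up, np.eye(2))      # (5): h_{mu a} h^mu_b = delta_ab
assert np.allclose(g @ g_up, np.eye(2))        # g^{mu lam} g_{nu lam} = delta^mu_nu

phi = 0.7                              # a common rotation of all local 2-Beins
d = np.array([[np.cos(phi), np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])
h_star = h @ d.T                       # h*_{mu a} = d_{a m} h_{mu m}
assert np.allclose(h_star @ h_star.T, g)       # same metric, different 2-Bein
print("relations (3)-(5), (9) verified; the metric does not fix the 2-Bein")
```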
## II Distant parallelism and rotational invariance
By positing the $n$-Bein field, the existence of the Riemann metric and of
distant parallelism are expressed simultaneously. If ($A$) and ($B$) are two
vectors at the points $P$ and $Q$ respectively, which w.r.t. the local
$n$-Beins have equal local coordinates (i.e. $A_{a}=B_{a}$) they are to be
regarded as equal (on account of (2)) and as “parallel.”
If we consider only the essential, i.e. the objectively meaningful, properties
to be the metric and distant parallelism, we recognize that the $n$-Bein field
is not yet completely determined by these demands. Both the metric and distant
parallelism remain intact if one replaces the $n$-Beins of all points of the
continuum by others which result from the original ones by a common rotation.
We call this replaceability of the $n$-Bein field rotational invariance and
assume: Only rotationally invariant mathematical relationships can have real
meaning.
Keeping a fixed coordinate system, and given a metric as well as a distant
parallelism relationship, the ${h_{a}}^{\mu}$ are not yet fully determined; a
substitution of the ${h_{a}}^{\nu}$ is still possible which corresponds to
rotational invariance, i.e. the equation
$A^{\ast}_{a}=d_{am}A_{m}\;\cdots$ (6)
where the $d_{am}$ are chosen to be orthogonal and independent of the
coordinates. ($A_{a}$) is an arbitrary vector w.r.t. the local coordinate
system; ($A^{\ast}_{a}$) is the same one in terms of the rotated local system.
According to (1a), equation (6) yields
$h^{\ast}_{\mu a}A^{\mu}=d_{am}h_{\mu m}A^{\mu}$
or
$h^{\ast}_{\mu a}=d_{am}h_{\mu m},\;\cdots$ (6a)
where
$d_{am}d_{bm}=d_{ma}d_{mb}=\delta_{ab},\;\cdots$ (6b)
$\frac{\partial d_{am}}{\partial x^{\nu}}=0.\;\cdots$ (6c)
The assumption of rotational invariance then requires that equations
containing $h$ are to be regarded as meaningful only if they retain their form
when they are expressed in terms of $h^{\ast}$ according to (6). Or: $n$-Bein
fields related by local uniform rotations are equivalent. The law of
infinitesimal parallel transport of a vector in going from a point ($x^{\nu}$)
to a neighboring point ($x^{\nu}+dx^{\nu}$) is evidently characterized by the
equation
$dA_{a}=0\;\cdots$ (7)
which is to say the equation
$0=d(h_{\mu a}A^{\mu})=\frac{\partial h_{\mu a}}{\partial x^{\sigma}}A^{\mu}dx^{\sigma}+h_{\mu a}dA^{\mu}.$
Multiplying by $h_{a}^{\nu}$ and using (5), this equation becomes
$\left.\begin{array}[]{l}dA^{\nu}=-\Delta^{\nu}_{\mu\sigma}A^{\mu}dx^{\sigma}\\\ {}\\\ \text{where}\quad\Delta^{\nu}_{\mu\sigma}=h^{\nu}_{a}\frac{\partial h_{\mu a}}{\partial x^{\sigma}}.\end{array}\right\\}$ (7a)
This parallel transport law is rotationally invariant and is unsymmetrical
with respect to the lower indices of $\Delta^{\nu}_{\mu\sigma}$. If the vector
($A$) is moved along a closed path according to this law, it returns to
itself; this means that the Riemann tensor $R$, defined in terms of the
transport coefficients $\Delta^{\nu}_{\mu\sigma}$,
$R^{i}_{k,lm}=-\frac{\partial\Delta^{i}_{kl}}{\partial
x^{m}}+\frac{\partial\Delta^{i}_{km}}{\partial x^{l}}+\Delta^{i}_{\alpha
l}\Delta^{\alpha}_{km}-\Delta^{i}_{\alpha m}\Delta^{\alpha}_{kl}$
will vanish identically because of (7a)—as can be verified easily.
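That verification can be scripted. The sketch below is my own check, not part of Einstein's text: for a hypothetical 2-Bein field it builds the transport coefficients of (7a), confirms that the curvature formed from them vanishes identically, and confirms that the coefficients are genuinely unsymmetrical in their lower indices.

```python
# An illustrative symbolic check (my own, using a hypothetical 2-Bein field) that
# the curvature built from the transport coefficients of (7a) vanishes identically,
# as stated above: distant parallelism makes the transport integrable.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
n = 2

# an arbitrary smooth, invertible 2-Bein field h[mu, a] = h_{mu a}
h = sp.Matrix([[1 + y**2, x*y],
               [0, 1 + x**2]])
h_up = h.inv().T                         # h_up[mu, a] = h^mu_a

def Delta(nu, mu, sigma):
    """Delta^nu_{mu sigma} = h^nu_a  d h_{mu a} / d x^sigma, eq. (7a)."""
    return sum(h_up[nu, a] * sp.diff(h[mu, a], coords[sigma]) for a in range(n))

def R(i, k, l, m):
    """R^i_{k,lm} built from Delta as in the formula above (7a)."""
    val = -sp.diff(Delta(i, k, l), coords[m]) + sp.diff(Delta(i, k, m), coords[l])
    val += sum(Delta(i, a, l) * Delta(a, k, m) - Delta(i, a, m) * Delta(a, k, l)
               for a in range(n))
    return sp.simplify(val)

assert all(R(i, k, l, m) == 0
           for i in range(n) for k in range(n) for l in range(n) for m in range(n))
# yet the transport coefficients are unsymmetrical in the lower indices:
print(sp.simplify(Delta(0, 0, 1) - Delta(0, 1, 0)))   # nonzero for this 2-Bein field
```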
Besides this parallel transport law there is another (nonintegrable)
symmetrical law of transport that belongs to the Riemann metric according to
(2) and (3). It is given by the well-known equations
$\left.\begin{array}[]{l}\overline{d}A^{\nu}=-\Gamma^{\nu}_{\mu\sigma}A^{\mu}dx^{\sigma}\\\ {}\\\ \Gamma^{\nu}_{\mu\sigma}=\frac{1}{2}g^{\nu\alpha}\left(\frac{\partial g_{\mu\alpha}}{\partial x^{\sigma}}+\frac{\partial g_{\sigma\alpha}}{\partial x^{\mu}}-\frac{\partial g_{\mu\sigma}}{\partial x^{\alpha}}\right).\end{array}\right\\}$ (8)
The $\Gamma^{\nu}_{\mu\sigma}$ symbols are given in terms of the $n$-Bein
field $h$ according to (3). It should be noted that
$g^{\mu\nu}=h^{\mu}_{a}h^{\nu}_{a}.\;\cdots$ (9)
Equations (4) and (5) imply
$g^{\mu\lambda}g_{\nu\lambda}=\delta^{\mu}_{\nu}$
which defines $g^{\mu\nu}$ in terms of $g_{\mu\nu}$. This law of transport
based on the metric is of course also rotationally invariant in the sense
defined above.
## III Invariants and covariants
In the manifold we have been studying, there exist, in addition to the tensors
and invariants of Riemann geometry, which contain the quantities $h$ only in
the combinations given by (3), further tensors and invariants, of which we
want to consider only the simplest.
Starting from a vector ($A^{\nu}$) at the point ($x^{\nu}$), the two
transports $d$ and $\bar{d}$ to the neighboring point ($x^{\nu}+dx^{\nu}$)
result in the two vectors
$A^{\nu}+dA^{\nu}$
and
$A^{\nu}+\overline{d}A^{\nu}.$
The difference
$dA^{\nu}-\overline{d}A^{\nu}=(\Gamma^{\nu}_{\alpha\beta}-\Delta^{\nu}_{\alpha\beta})A^{\alpha}dx^{\beta}$
is also a vector. Hence
$\Gamma^{\nu}_{\alpha\beta}-\Delta^{\nu}_{\alpha\beta}$
is a tensor, and so is its antisymmetric part
$\frac{1}{2}(\Delta^{\nu}_{\alpha\beta}-\Delta^{\nu}_{\beta\alpha})=\Lambda^{\nu}_{\alpha\beta}.\;\cdots$
The fundamental meaning of this tensor in the theory here developed emerges
from the following: If this tensor vanishes, the continuum is Euclidian. For
if
$0=2\Lambda^{\nu}_{\alpha\beta}=h^{\nu}_{a}\left(\frac{\partial h_{\alpha a}}{\partial x^{\beta}}-\frac{\partial h_{\beta a}}{\partial x^{\alpha}}\right),$ (10)
then multiplication by $h_{\nu b}$ yields
$0=\frac{\partial h_{\alpha b}}{\partial x^{\beta}}-\frac{\partial h_{\beta b}}{\partial x^{\alpha}}.$
We can therefore put
$h_{\alpha b}=\frac{\partial\Psi_{b}}{\partial x^{\alpha}}.$
The field is therefore derivable from $n$ scalars $\Psi_{b}$. We choose the
coordinates according to the equation
$\Psi_{b}=x^{b}.$ (11)
Then, according to (7a) all $\Delta^{\nu}_{\alpha\beta}$ vanish, and the
$h_{\mu a}$ as well as the $g_{\mu\nu}$ are constant.
Since the tensor $\Lambda^{\nu}_{\alpha\beta}$ is evidently also formally the
simplest one allowed by our theory, the simplest characterization of the
continuum will be tied to $\Lambda^{\nu}_{\alpha\beta}$, not to the more
complicated Riemann curvature tensor. The simplest forms that can come into
play here are the vector
$\Lambda^{\alpha}_{\mu\alpha}$
as well as the invariants
$g^{\mu\nu}\Lambda^{\alpha}_{\mu\beta}\Lambda^{\beta}_{\nu\alpha}\qquad\text{and}\qquad
g_{\mu\nu}g^{\alpha\sigma}g^{\beta\tau}\Lambda^{\mu}_{\alpha\beta}\Lambda^{\nu}_{\sigma\tau}.$
From one of the latter (or from linear combinations) an invariant integral $J$
can be constructed by multiplication with the invariant volume element
$h\;d\tau,$
where $h$ is the determinant of $|h_{\mu a}|$, and $d\tau$ is the product
$dx_{1}\dots dx_{n}$. The assumption
$\delta J=0$
yields 16 differential equations for the 16 values of $h_{\mu a}$.
Whether one can get physically meaningful laws in this way will be
investigated later.
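For concreteness, and again as an added illustration rather than Einstein's text, the invariant density $\mathfrak{H}=h\,g^{\mu\nu}\Lambda^{\alpha}_{\mu\beta}\Lambda^{\beta}_{\nu\alpha}$ can be evaluated for the same hypothetical 2-Bein field used in the sketch above; in two dimensions the variation $\delta J=0$ would of course give $n^{2}=4$ equations rather than 16.

```python
# An illustrative computation (my own, not Einstein's text) of the invariant
# density H = h g^{mu nu} Lambda^alpha_{mu beta} Lambda^beta_{nu alpha}
# for a hypothetical 2-Bein field.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
n = 2

h = sp.Matrix([[1 + y**2, x*y],
               [0, 1 + x**2]])          # h[mu, a] = h_{mu a}
h_up = h.inv().T                        # h^mu_a
g = h * h.T                             # g_{mu nu},  eq. (3)
g_up = h_up * h_up.T                    # g^{mu nu},  eq. (9)
h_det = h.det()                         # the invariant volume factor h

def Delta(nu, mu, sigma):               # transport coefficients, eq. (7a)
    return sum(h_up[nu, a] * sp.diff(h[mu, a], coords[sigma]) for a in range(n))

def Lam(nu, alpha, beta):               # antisymmetric part of Delta
    return (Delta(nu, alpha, beta) - Delta(nu, beta, alpha)) / 2

H_density = h_det * sum(g_up[mu, nu] * Lam(alpha, mu, beta) * Lam(beta, nu, alpha)
                        for mu in range(n) for nu in range(n)
                        for alpha in range(n) for beta in range(n))
print(sp.simplify(H_density))
```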
It is helpful to compare Weyl’s modification of Riemann’s theory with the
theory developed here:
> WEYL: Comparison neither of distant vector magnitudes nor of directions;
> RIEMANN: Comparison of distant vector magnitudes, but not of distant
> directions;
> THIS THEORY: Comparison of distant vector magnitude and directions.
## Einstein’s 1928 manuscript on unification of gravity and electromagnetism
New possibility for a unified field theory of gravity and electricity
A. Einstein
June 14, 1928
A few days ago I explained in a short paper in these Reports how it is
possible to use an n-Bein-Field to formulate a geometric theory based on the
fundamental concepts of the Riemann metric and distant parallelism. At the
time I left open the question whether this theory could serve to represent
physical relationships. Since then I have discovered that this theory—at least
in first approximation—yields the field equations of gravity and
electromagnetism very simply and naturally. It is therefore conceivable that
this theory will replace the original version of the theory of relativity.
The introduction of distant parallelism implies that according to this theory
there is something like a straight line, i.e. a line whose elements are all
parallel to each other; of course such a line is in no way identical to a
geodesic. Furthermore, in contrast to the usual general theory of relativity,
there is the concept of relative rest of two mass points (parallelism of two
line elements which belong to two different worldlines.)
In order for the general theory to be useful immediately as field theory one
must assume the following:
1. The number of dimensions is 4 ($n=4$).
2. The fourth local component $A_{a}$ ($a=4$) of a vector is pure imaginary, and hence so are the components of the four legs of the Vier-Bein, the quantities ${h^{\mu}}_{4}$ and $h_{\mu 4}$.14 Instead one could also define the square of the magnitude of the local vector $A$ to be $A_{1}^{2}+A_{2}^{2}+A_{3}^{2}-A_{4}^{2}$ and introduce Lorentz transformations instead of rotations of the local $n$-Bein. In that case all the $h$’s would be real, but the immediate connection with the general theory would be lost.
The coefficients $g_{\mu\nu}\;(=h_{\mu\alpha}h_{\nu\alpha})$ of course all
become real. Accordingly, we choose the square of the magnitude of a timelike
vector to be negative.
## I The underlying field equation
Let the variation of a Hamiltonian integral vanish for variations of the field
potentials $h_{\mu\alpha}$ (or $h^{\mu}_{\alpha}$ ) that vanish on the
boundary of a domain:
$\delta\left\\{\int\mathfrak{H}d\tau\right\\}=0.\;\cdots$ (1)
$\mathfrak{H}=h\,g^{\mu\nu}\;\Lambda^{\alpha}_{\mu\beta}\;\Lambda^{\beta}_{\nu\alpha},\;\cdots$
(1a)
where the quantities $h\;(=\det h_{\mu\alpha})$, $g^{\mu\nu}$, and
$\Lambda^{\alpha}_{\mu\nu}$ are defined in (9) and (10) of the previous paper.
Let the $h$ field describe the electrical and the gravitational field
simultaneously. A “purely gravitational field” results when equation (1) is
fulfilled and, in addition,
$\phi_{\mu}=\Lambda^{\alpha}_{\mu\alpha}\;\cdots$ (2)
vanish, which represents a covariant and rotationally invariant subsidiary
condition.15 Here there remains a certain ambiguity of interpretation,
because one could also characterize the pure gravitational field by the
vanishing of $\frac{\partial\phi_{\mu}}{\partial
x_{\nu}}-\frac{\partial\phi_{\nu}}{\partial x_{\mu}}$.
## II The field equation in the first approximation
If the manifold is the Minkowski world of special relativity, one can choose
the coordinates in such a way that
$h_{11}=h_{22}=h_{33}=1,h_{44}=j\;(=\sqrt{-1})$, and that all other $h$’s
vanish. This set of values is somewhat inconvenient for calculation. For that
reason we prefer to choose the $x_{4}$ coordinate in this § to be pure
imaginary; in that case the Minkowski world (absence of any field when the
coordinates are chosen appropriately) can be described by
$h_{\mu a}=\delta_{\mu a}\;\cdots$ (3)
The case of infinitely weak fields can be represented suitably by
$h_{\mu a}=\delta_{\mu a}+k_{\mu\alpha}\;\cdots$ (4)
where the $k_{\mu\alpha}$ are small quantities of first order. Neglecting
quantities of third or higher order we must replace (1a) by (1b), considering
(10) and (7a) of the previous paper:
$\mathfrak{H}=-\frac{1}{4}\left(\frac{\partial k_{\mu\alpha}}{\partial
x_{\beta}}-\frac{\partial k_{\beta\alpha}}{\partial
x_{\mu}}\right)\left(\frac{\partial k_{\mu\beta}}{\partial
x_{\alpha}}-\frac{\partial k_{\alpha\beta}}{\partial x_{\mu}}\right).\;\cdots$
(1b)
After variation one obtains the field equations in the first approximation
$\frac{\partial^{2}k_{\beta\alpha}}{\partial
x_{\mu}^{2}}-\frac{\partial^{2}k_{\mu\alpha}}{\partial x_{\mu}\partial
x_{\beta}}+\frac{\partial^{2}k_{\alpha\mu}}{\partial x_{\beta}\partial
x_{\mu}}-\frac{\partial^{2}k_{\beta\mu}}{\partial x_{\mu}\partial
x_{\alpha}}=0.\;\cdots$ (5)
These are 16 equations for the 16 components $k_{\alpha\beta}$.16 On account of general covariance there are of course four identities among the field equations; in the first approximation considered here, this is expressed by the fact that the divergence of the left side of (5) with respect to the index $\alpha$ vanishes identically. Our task now is to see whether this system of
equations contains the known laws of the gravitational and electromagnetic
fields. For this purpose we must introduce $g_{\alpha\beta}$ and
$\phi_{\alpha}$ in (5) in place of $k_{\alpha\beta}$. We must put
$g_{\alpha\beta}=h_{\alpha a}h_{\beta a}=(\delta_{\alpha a}+k_{\alpha
a})(\delta_{\beta a}+k_{\beta a}).$
Or, exact to first order,
$g_{\alpha\beta}-\delta_{\alpha\beta}=\overline{g_{\alpha\beta}}=k_{\alpha\beta}+k_{\beta\alpha}.\;\cdots$
(6)
From (2) one also gets the quantities to first order exactly
$2\phi_{\alpha}=\frac{\partial k_{\alpha\mu}}{\partial x_{\mu}}-\frac{\partial
k_{\mu\mu}}{\partial x_{\alpha}}$ (2a)
By exchanging $\alpha$ and $\beta$ in (5) and adding to (5) one gets
$\frac{\partial^{2}\overline{g_{\alpha\beta}}}{\partial
x_{\mu}^{2}}-\frac{\partial^{2}k_{\mu\alpha}}{\partial x_{\mu}\partial
x_{\beta}}-\frac{\partial^{2}k_{\mu\beta}}{\partial x_{\mu}\partial
x_{\alpha}}=0.$
If one adds to this equation the two following equations which follow from
(2a)
$-\frac{\partial^{2}k_{\alpha\mu}}{\partial x_{\mu}\partial x_{\beta}}+\frac{\partial^{2}k_{\mu\mu}}{\partial x_{\beta}\partial x_{\alpha}}=-2\frac{\partial\phi_{\alpha}}{\partial x_{\beta}}$
$-\frac{\partial^{2}k_{\beta\mu}}{\partial x_{\mu}\partial x_{\alpha}}+\frac{\partial^{2}k_{\mu\mu}}{\partial x_{\alpha}\partial x_{\beta}}=-2\frac{\partial\phi_{\beta}}{\partial x_{\alpha}},$
then one obtains, in view of (6),
$\frac{1}{2}\left(-\frac{\partial^{2}\overline{g_{\alpha\beta}}}{\partial x_{\mu}^{2}}+\frac{\partial^{2}\overline{g_{\mu\alpha}}}{\partial x_{\mu}\partial x_{\beta}}+\frac{\partial^{2}\overline{g_{\mu\beta}}}{\partial x_{\mu}\partial x_{\alpha}}-\frac{\partial^{2}\overline{g_{\mu\mu}}}{\partial x_{\alpha}\partial x_{\beta}}\right)=\frac{\partial\phi_{\alpha}}{\partial x_{\beta}}+\frac{\partial\phi_{\beta}}{\partial x_{\alpha}}.\;\cdots$ (7)
The case of vanishing electromagnetic fields is characterized by the vanishing
of $\phi_{\alpha}$. In that case, (7) agrees to first order with the equation
of General Relativity
$R_{\alpha\beta}=0$
(where $R_{\alpha\beta}$ is the once reduced Riemann tensor.) Thus it is
proved that our new theory correctly reproduces the law of the pure
gravitational field in the first approximation.
By differentiating (2a) with respect to $x_{\alpha}$ and taking into account
the equation obtained from (5) by reducing with respect to $\alpha$ and
$\beta$ one obtains
$\frac{\partial\phi_{\alpha}}{\partial x_{\alpha}}=0.\;\cdots$ (8)
Noting that the left side $L_{\alpha\beta}$ of (7) obeys the identity
$\frac{\partial}{\partial
x_{\beta}}\left(L_{\alpha\beta}-\frac{1}{2}\delta_{\alpha\beta}L_{\sigma\sigma}\right)=0,$
we find from (7) that
$\frac{\partial^{2}\phi_{\alpha}}{\partial
x_{\beta}^{2}}+\frac{\partial^{2}\phi_{\beta}}{\partial x_{\alpha}\partial
x_{\beta}}-\frac{\partial}{\partial
x_{\alpha}}\left(\frac{\partial\phi_{\sigma}}{\partial x_{\sigma}}\right)=0$
or
$\frac{\partial^{2}\phi_{\alpha}}{\partial x_{\beta}^{2}}=0.\;\cdots$ (9)
Equations (8) and (9) together are known to be equivalent to Maxwell’s
equations for empty space. The new theory therefore yields Maxwell’s equations
in first approximation.
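To spell out the equivalence claimed here (an added remark, not part of Einstein's text): writing $f_{\alpha\beta}=\frac{\partial\phi_{\beta}}{\partial x_{\alpha}}-\frac{\partial\phi_{\alpha}}{\partial x_{\beta}}$ for the field strength built from the potential $\phi_{\alpha}$, the homogeneous Maxwell equations hold identically, while
$\frac{\partial f_{\alpha\beta}}{\partial x_{\beta}}=\frac{\partial}{\partial x_{\alpha}}\left(\frac{\partial\phi_{\beta}}{\partial x_{\beta}}\right)-\frac{\partial^{2}\phi_{\alpha}}{\partial x_{\beta}^{2}}=0$
by (8) and (9); in the imaginary-$x_{4}$ notation used in this §, $\partial^{2}/\partial x_{\beta}^{2}$ is the wave operator, so (8) and (9) are just the vacuum Maxwell equations for the potential in Lorenz gauge.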
The separation of the gravitational field from the electromagnetic field
appears artificial according to this theory. And it is clear that equations
(5) imply more than (7), (8), and (9) together. Furthermore, it is remarkable
that in this theory the electric field does not enter the field equations
quadratically.
Note added in proof: One obtains very similar results by starting with the
Hamiltonian
$\mathfrak{H}=h\,g_{\mu\nu}g^{\alpha\sigma}g^{\beta\tau}\Lambda^{\mu}_{\alpha\beta}\Lambda^{\nu}_{\sigma\tau}.$
There is therefore at this time a certain uncertainty with respect to the
choice of $\mathfrak{H}$.